Amber Heard to be jailed – Daily Times

The speaker of the Australian House of Representatives calls on the US to hand over Amber Heard for a jail term amid ongoing perjury allegations.

The politician made this revelation during an interview with the Australian breakfast show Sunrise.

He showed no sympathy for Heard during his remarks and even went as far as to compare her with WikiLeaks' Julian Assange.

For those unversed, the US has been demanding that Assange be extradited to the country over spying charges.

Joyce was quoted as saying, "Those dogs, when they came in, there were a lot of documents that were signed that said there were no animals there. And it now looks like Ms. Heard has allegedly not told the truth."

"[The U.S.] wants Julian Assange. [She] can come over to Australia and possibly spend some time at Her Majesty's convenience."

Perjury carries a maximum jail term of over 14 years within Australia.

Joyce also speculated about the potential outcomes if Heard were found guilty by Australian courts, admitting, "It's up to the Americans. I suppose the U.S. want to show the purity of it and they're insisting on getting Julian Assange."

"So [the U.S.] should say, 'We have got to be fair dinkum and straight with both these things, don't we?' Let's see how that goes."

While extradition is an unlikely outcome at this point, it is highly likely that Heard would be arrested upon landing on Australian soil.

What is the difference between free software and open source software? – Área corporativa Banco Santander

User-generated content is a trend we are used to seeing on the Internet and that we see every day on social media. In the world of IT, it is also possible to create or improve applications, tools or programs using the same collaborative model.

Explained in a very simple way, a software program is a set of computer instructions needed for our electronic devices to perform the tasks they are designed for. These instructions, which are written in a programming language, are known as source code. Although we tend to associate the word software with computers or smartphones, most of the devices we now have at home or in the office have integrated software: televisions, video game consoles, cleaning robots, smartwatches, etc.

You have probably had to call for technical support when one of these devices has stopped working properly, but can you imagine being able to fix it yourself? In the 1980s, US programmer Richard Stallman worked in an office where the printer often had paper jams. His colleagues would only notice the problem when they found that the documents they had sent to the printer hadn't been printed. He decided to modify the printer's source code so that whenever there was a paper jam, the users would receive a notification alerting them of the error so that they could fix it.

After a while, the office replaced the printer with a new one and the problems caused by the paper jams returned. This time, Stallman was unable to do the same thing he had done with the previous printer, because access to the source code had been restricted by the manufacturer. That was when he started a "free software movement", which sought to give users the freedom to view, modify and distribute the source code to adapt it to their needs.

GitHub Copilot and Open Source: A Love Story That Won’t End Well? – thenewstack.io

Sasha Medvedovsky

Sasha is a software engineer with over 20 years' experience and a co-founder of Diversion, which offers open source source control management software. He has been around long enough to have seen quite an evolution of programming languages and developer tools. He's passionate about building the next generation of tools for developer productivity and collaboration, leveraging current technologies to create a world in which software is developed faster and with ease.

GitHub has been an important part of the software development world, and of open source software in particular. It has provided free hosting for open source projects (the Apache Software Foundation moved its entire operation to GitHub a few years ago), and played a large part in turning the open source git into the popular source control management (SCM) system it is now.

However, it seems that the cooperation is now coming to an abrupt and ugly end, with the Software Freedom Conservancy (SFC) joining the Free Software Foundation in a recommendation to cut ties with GitHub over the creation of GitHub Copilot.

GitHub's recently commercialized Copilot offering (which was free until very recently), which delivers AI-powered code composition/auto-completion, was built upon the sourcing of code from the millions of open source projects hosted on GitHub. Needless to say, not all open source projects were created equal: they carry many different licenses (learn more about OSS licenses), some of which DO NOT permit the reuse or copyleft of code, despite the code being publicly available on GitHub.

It's true that using the code for training an AI model is somewhat different from simply using the code as it is. But shouldn't the code's creators at least be consulted on whether they agree to this use of their creation?

To many open source developers, this constitutes unauthorized use of their work, and a breach of their trust. Obviously, Copilot wouldn't work without ingesting millions of code samples from GitHub, so it's safe to say that the open source code is an integral part of it. Moreover, any code created by Copilot could be considered a derivative of this open source code (in some cases whole snippets of open source code could find their way into a closed-source codebase).

If this recent divorce between GitHub and open source organizations seems surprising, it shouldn't be. It really stems from a misalignment of goals and ideals.

From the beginning, GitHub has been a commercial organization that has turned the open source software git into a business. While there's nothing wrong with doing so (plenty of companies have built thriving businesses through commercial offerings of open source technology), it's imperative we don't get confused and consider GitHub an open source company or project. It's neither. This confusion lies in its business model, where production-grade, hosted git was provided for a fee to commercial organizations, and free for open source projects.

As someone once said, if the product is free, YOU are the product. Never has this sentence been more correct than in the case of GitHub. In 2018 Microsoft acquired GitHub for $7.5 billion. The common understanding was that the high price (for 2018) was paid not for GitHub's technology (again, it didn't develop git, and there were many competitors, e.g. Bitbucket and GitLab), but rather for its developer community, which at that time was 28 million strong.

If Microsoft paid for the OSS community, Microsoft was ultimately going to use the community to make a profit. Microsoft is a commercial entity with shareholders and has an obligation to make as much profit as possible. Copilot is just the perfect example of that. Microsoft owns both GitHub and a large stake in OpenAI, the AI company that trained the Copilot AI model. The cooperation makes so much corporate sense that it can be summarized as: they host all of the most popular OSS projects in the world, alongside amazing AI capabilities, so it just makes sense to use the synergies to make a commercially successful product.

There's just one problem with this line of thought: hosting the code doesn't mean that Microsoft owns the code. And this is not the first time this company has made this mistaken assumption.

One illustrative exchange that took place recently points to the potential dangers.

A developer, who goes by the handle of Marak, intentionally broke the code of his open source Faker mock data generator, because he allegedly felt his work was thankless. He complained about the lack of funding for his popular projects, including Faker, which are used by hundreds of companies.

This opened the whole Pandora's box of who really owns open source code. What if companies are using the code in production? The developer can just break the code, and that's it?

GitHub got involved, reverted the changes, and denied Marak access to his own projects (around 100 of them).

NPM (incidentally, owned by Microsoft as well) has also reverted his repo to a previous version, effectively taking control of his code.

Imagine the situation: a programmer has created a very useful open source project. They have maintained it and provided it for free to hundreds of companies. Then they decide to make a change that the companies do not like. Then Microsoft (through GitHub and NPM) takes over their code repositories and reverts their changes.

Does this look like Microsoft understands that the developer owns the code, or does it look like Microsoft thinks it owns the code?

I don't think the open source movement should cut all ties with commercial organizations, or stop using commercial products. Cooperation is a good thing. It's not a zero-sum game, and it benefits humanity as a whole.

But the boundaries should be clearly set. If a developer doesn't want their code to be used in commercial applications, they should be given a right to refuse. If they are OK with it, then there's no problem. But companies (be it Microsoft, Google or Amazon Web Services) shouldn't just assume that if they give something for free they can take something else in return.

At the company I co-founded, Diversion, we have developed our own SCM. We plan to release it as open source (on our own platform, not on GitHub), and we hope it will become useful to millions of developers.

We will also offer free hosting for open source and indie developers, as our thanks and our way of giving back to the amazing people who've given their time and effort for the betterment of all humankind, without asking for anything in return.

In light of these recent developments, I feel that there's a need to make a promise: we pledge, right here, to honor software creators' license agreements, and not to use their code in ways they do not agree with.

To me it's something that should go without saying, but apparently it needs to be said explicitly.

Note: Sharone Zitzman contributed to this post.

Dronecode and the PX4 Open Source Drone Platform: the Benefits of Open Source, and What Comes Next – DroneLife

PX4 is open source flight control software for drones and other uncrewed vehicles, hosted by Dronecode, a Linux Foundation non-profit. The project provides a flexible set of tools for drone developers to share technologies and create tailored solutions for drone applications. Annually, the Dronecode Foundation hosts the PX4 Developer Summit, a flagship conference for the drone development community. DroneLife contributor Dawn Zoldi attended the event and provided coverage of a few takeaways about the latest technologies in the PX4 ecosystem.

PX4 Drone Code Top 4 Benefits and Needs

The Dronecode Foundation, a non-profit organization administered by the Linux Foundation, leads the development efforts for PX4, the leading open source autopilot software for uncrewed vehicles.

PX4 provides a standard to deliver scalable hardware support and a software stack. It evolves through a collaborative ecosystem of over 10,000 developers and commercial adopters that both build and maintain the software.

The drone community uses PX4 for a wide variety of use-cases ranging from recreational flight, research and development to commercial and industrial applications. PX4 has changed the game for drone operators globally. As the drone game itself continues to change, so too do end user needs. This article provides perspectives from two industry leaders on how the drone industry benefits from open source software and what it needs from the developer community, going forward.

Open Source Drone Code Benefits Today

Ryan Johnston, CEO and co-founder of Austin, Texas-based Applied Aeronautics, manufacturer of long-range fixed-wing uncrewed aerial vehicles (UAVs) such as the flagship Albatross, uses PX4 for the company's commercial enterprise, service provider and military customers. He believes this open source autopilot software has propelled the industry in terms of accessibility, flexibility, transparency and cyber security.

PX4 removes barriers to entry for entrepreneurs, small businesses and researchers by providing widespread accessibility. Open source coding provides a foundation for everyone, lowering barriers to entry and evolving with industry needs.

It specifically removes financial barriers to entry and affords everyone, including new entrants, ready access. This levels the playing field and propels the innovation, refinement and adoption that, in turn, propels the drone industry.

Applied Aeronautics first engaged with PX4 in 2014, when the company needed to find a way to slow down its large aircraft, which had a 30:1 glide ratio like a sailplane. At that time, no autopilot software could do this. Johnston flew to Zurich to spend time with PX4 developers, who got the autonomous takeoff and landing to work successfully on the first try. "The iteration from challenge identification, to communication with the community, experimentation and resolution took just a matter of days," he said. "I immediately became a believer in open source coding."

Johnston opined that open standards and software will be key to ensuring OEMs can deliver solutions that meet the ever-changing needs of the user base and regulators. "It's an adaptable ecosystem," he explained. "It's a great foundation and starting point, which can be modified and adapted to suit one's own objectives. Customers can add new software modules, external sensors or a companion computer to achieve their goals."

That adaptability applies not just during the planning stage of a project, but throughout a project's life cycle. "PX4 allows customers to ask questions and gain invaluable insights from experts in various fields, who troubleshoot in real time with tech support," Johnston said. "Without the foundation PX4 provided, many of our customer projects, both military and commercial, would literally and figuratively not have been able to get off the ground."

The PX4 community also creates a living record of these shared challenges and holistic expert community feedback. This results in vetted foundational standards that enable innovation and interoperability.

"No one company or piece of hardware will be able to solve all existing industry or customer challenges," Johnston noted. "This is why interoperability among subsystems is so important to moving the industry forward." PX4 opens up these pathways because it can work with almost anything, including a wide variety of hardware, sensors, and user interfaces with varying levels of risk tolerance.

As an added bonus, open source software also helps to mitigate cyber threats. The fact that multiple parties access and edit the code base on an ongoing basis leads to increased accountability. This level of code auditing is impossible in closed systems, according to Johnston.

What The Industry Needs From Developers Next

According to Michael Blades, Senior Director of Platforms for DroneUp, a leading contract drone services provider network and last-mile delivery system, PX4's ability to continue to evolve will be key to operating at scale in the future. Like so many companies, DroneUp also uses PX4. Blades believes high operational tempos will require upgraded software to keep pace. To do this, he said, developers must keep the following four key goals in mind.

While DroneUp's story is unique, it provides an example of what others in the drone industry will likely also require for wide-scale, repeatable operations.

DroneUp originated in 2016 using drones to assist emergency services during a natural disaster. In just four years it has grown to a network of more than 22,000 pilots and partnered with the nation's #1 retailer, Walmart, for drone delivery.

Last November, DroneUp opened its first drone airport, called The DroneUp Hub, in Farmington, Arkansas. So far in 2022, it has launched two more Hubs in the state. By this July, it will open additional Hubs in Florida, Texas, Arizona, Virginia and Utah, as part of its nationwide expansion with Walmart. The ultimate goal is to have enough operational Hubs to service more than four thousand Walmart stores across the country. This will require about 34 hubs with a total of 40,000 to 80,000 drones.

Tens of thousands of drones will ultimately require one-to-many remote pilot operations and beyond visual line of sight operations. Autopilot software will need to account for this.

This same software must support dissimilar fleets, across multiple domains. Flight distances and cargo loads will vary, as will the environments in which drones will operate. This will necessitate the use of a wide variety of drones. At some point, drones may deliver to autonomous ground fleets for last mile deliveries. Software will need to plug-and-play across all of these vehicles.

As discussed above, PX4 development accounts for cyber security by virtue of its own processes. Even so, according to Blades, additional hardening against cyber attacks remains a critical software requirement. This becomes even more crucial when operating at scale.

Finally, the need to rapidly test and field foundational source code updates becomes even more amplified when a company utilizes large fleets in wide ranging operations.

Ramón Roche, the General Manager of the Dronecode Foundation, 2021 Airwards Industry Impactor Award recipient, and active contributor, advocate and leader in open-source code for drones for more than a decade, noted, "At the Dronecode Foundation, during the past seven years, when the drone industry faced multiple challenges, our community pitched in to help solve even the most complex aspects of managing aerial vehicles. We plan to continue to support the industry by keeping open technologies aligned with current industry needs, looking beyond what lies ahead, and providing new opportunities and solutions."

To do this, the Dronecode Foundation plans to expand its efforts on open standards. Roche said, "We are doubling down on the work we have been carrying out over the last two years. We strongly believe standards are the way forward for our industry, and we want to open the doors to any organization to collaborate with us."

The Foundation just concluded a successful PX4 Developer Summit at the end of June. There, Roche alluded to upcoming announcements on additional face-to-face meetings this year to share in-depth plans and progress in the ecosystem. So, stay tuned for what's next.

In the meantime, to learn more about the Dronecode Foundation, visit: https://www.dronecode.org/

Dawn M.K. Zoldi (Colonel, USAF, Retired) is a licensed attorney with 28 years of combined active duty military and federal civil service to the U.S. Air Force. She is the CEO & Founder of P3 Tech Consulting and an internationally recognized expert on uncrewed aircraft system law and policy. Zoldi contributes to several magazines and hosts popular tech podcasts. Zoldi is also an Adjunct Professor for two universities, at the undergraduate and graduate levels. In 2022, she received the Airwards People's Choice Industry Impactor Award, was recognized as one of the Top Women to Follow on LinkedIn, and was listed in the eVTOL Insights 2022 PowerBook. For more information, follow her on social media and visit her website at https://www.p3techconsulting.com.

SD Times Open-Source Project of the Week: Open Source Hub – SDTimes.com

Open Source Hub (OSH) is a new open source project from code visibility company CodeSee. Previously it existed as OSS Port, but with the new name come several updates and changes to the platform.

Similar to OSS Port, OSH is a development community for finding, exploring, and contributing to open source projects.

According to CodeSee, many existing open source communities only aggregate projects, while OSH provides tools for onboarding developers and helping them understand the code in a project.

Developers will be able to use OSH to see the impact of their contributions, build personal profiles, search for projects that fit their needs, access engaging content and programs, and participate in events.

"We need more developers learning from and contributing to open source so that all of our codebases are more maintainable and resilient. Every codebase is affected by open source, so we need to help each other ramp up in these codebases quickly and support one another right now," said Shanea Leven, co-founder and CEO at CodeSee. "Open Source Hub is more than a product or network, it is a movement. A movement for developers at all skill levels to come together, learn, collaborate, and contribute to and support open source with the code visibility tools and manpower it desperately needs."

Learn more about the project here.

Red Hat follows open-source game plan to drive the enterprise hybrid future – SiliconANGLE News

Red Hat Inc. Chief Executive Officer Paul Cormier recently offered two key points at the start of the company's Summit in May: Companies will have to adopt a hybrid model whether desired or not, and open-source code is driving the future of information technology.

Cormier's post captured the essence of Red Hat's innovation game plan. As he noted, Red Hat has been on the hybrid bandwagon for a long time, and its product portfolio has been geared to architect, develop and operate applications in the modern hybrid enterprise environment.

The hybrid model is extending deeply into major sectors of the economy, including the automotive industry. The companys product announcements in May included an in-vehicle operating system that will support the software platform for General Motors Co.

"That's a mini data center in every car," Cormier said during an interview with SiliconANGLE at the Summit event. "You have to update in such a way that you stay within the safety protocols. That's what hybrid is all about; it's tying all of those pieces together."

Tying pieces together requires a common bond, and Linux is the foundation for Red Hat. A hybrid model where critical infrastructure must run in a wide range of environments needs an operating system that can work anywhere, and the company's latest release of Red Hat Enterprise Linux, or RHEL, during the Summit underscores the evolving role that Linux is playing in compute deployment outside of the datacenter.

Red Hat made a number of enhancements in RHEL 9 designed to address operational consistency across multiple infrastructures and provide stronger security. These included new features to manage RHEL from a single interface and detection protocols for failed updates to software containers. Additional tweaks involving RHEL 8.6 bolstered security through a jointly vetted RHEL and SAP HANA configuration that enabled SELinux, offering protection against privilege escalation attacks.

Another noteworthy step that Red Hat recently took with Linux was to join the Open Programmable Infrastructure Project in June. Launched by the Linux Foundation, OPI is an initiative to standardize software and APIs in support of data processing units, or DPUs, and infrastructure processing units, or IPUs, for enterprise datacenters. Red Hat was joined by Intel Corp., Nvidia Corp., Marvell Technology Inc., Dell Technologies Inc., F5 Inc. and Keysight Technologies Inc. as founding members.

DPU- and IPU-like devices can enable a broad range of services across network and storage domains. OPI's goal is to bring greater consistency across multiple platforms, an increasingly important role for Linux.

"You can take Linux any place, from the public cloud out to the edge," Stefanie Chiras, senior vice president of partner ecosystem success at Red Hat, said during a Summit media briefing in May. "It's like continuing to build a beautiful house on a solid piece of ground. It's the land that matters."

An increasingly valuable piece of real estate in 2022 is the telco edge. Research firm International Data Corp. forecasts that worldwide spending on edge computing will reach $176 billion this year, a 14% increase over 2021.

Red Hat has been especially active in the 5G/telco space over the past nine months with the announcement of several new deals, including partnerships with Ericsson, Mavenir and Vodafone Ziggo.

In March, Red Hat shared details of its collaboration with Verizon Communications Inc. using OpenShift to manage the telecommunications provider's 5G edge deployments. Verizon customers can use OpenShift as a single platform to control different types of infrastructure and build applications across multiple clouds.

"The real opportunity around 5G is the industrial applications, things like the connected car, automotive driving, factory floor automation, how you actually interface digitally with your bank," Darrell Jordan-Smith, senior vice president of industries and global accounts at Red Hat, explained during an interview with SiliconANGLE. "We're doing all sorts of things more intelligently at the edge of the network, using artificial intelligence and machine learning. All of those things are going to deliver a new experience for everyone that interacts with the network. And the telcos are at the heart of it."

In late June, Red Hat announced a joint development pact with Hewlett Packard Enterprise Co. to offer pay-per-use OpenShift on HPE GreenLake, an edge-to-cloud platform. The agreement brings OpenShift pay-per-use to an on-premises cloud service for the first time, according to Red Hat. Perhaps more significantly, it positions Red Hat to capitalize on expected growth in 5G and the edge.

"There is a great deal of growth in the market right now around edge, edge AI, and telco and the 5G rollout," Ryan King, senior director of the global hardware partner ecosystem for Red Hat, said in a recent interview with CRN. "Those are all areas of growth that we are looking at with this model."

Along with Linux and the edge, Red Hat has been focused over the past year on the field of high-performance computing, or HPC. The use of AI at scale and increased enterprise reliance on data-driven decision-making have created a need for processing complex calculations at high speed across multiple servers.

Linux runs on all of the world's top 500 supercomputers, and Red Hat has been focused on extending its hybrid expertise to encompass massive-scale HPC deployments. This has included adapting container technologies, such as Podman, to handle workloads in highly demanding HPC environments.

Red Hat's work signals the move of HPC into the cloud and container ecosystem, and the company is beginning to see evidence of that in its customer base. Near the end of 2021, Ghent University in Belgium adopted Red Hat's HPC technology to create a developer-friendly environment with massively scalable data storage.

"HPC is a bit slow in adopting new technologies," Kenneth Hoste, HPC system administrator at Ghent University, said in an interview with SiliconANGLE. "But we're definitely seeing some impact from cloud, especially things like containers and Kubernetes, and we're starting to hear these things in the HPC community as well."

Red Hat's collaboration with customers such as Ghent is creating ripples of interest elsewhere, including the U.S. federal government. In June, the company announced that it would collaborate with multiple U.S. Department of Energy labs, such as Lawrence Livermore National Laboratory and Sandia National Laboratories, to support cloud-native standards in HPC environments. The work will include an intriguing effort at Sandia to explore deployment scenarios of Kubernetes-based infrastructure at extreme scale.

Until recently, the South Korean technology giant Samsung Electronics Co. Ltd. had never partnered with a company in the open-source space. That changed when Red Hat and Samsung jointly announced a collaboration in May to develop software for next-generation memory hardware.

The field of next-gen memory has become more significant in recent years as enterprises seek to bridge the gap between storage hierarchies and an ability to deliver data. Memory storage is a key component in a wide range of aerospace and defense applications, in addition to mobile phones, internet of things and AI functions that rely on massive amounts of data.

Red Hat and Samsung will focus on the development of open-source software for nonvolatile memory express (NVMe) solid-state drives; Compute Express Link (CXL) technology; computational memory, such as AI-tailored HBM-PIM or Smart SSD solutions; as well as fabrics. Samsung also announced the launch of a cloud-based research initiative that Red Hat will participate in for the development and verification of software in server environments.

Like most businesses, Red Hat is navigating a post-pandemic world where the phrase "new normal" has taken on new meaning. In his May blog post, Cormier challenged the enterprise community to think differently about what the new normal will mean by emphasizing that it is anything but pre-determined and static.

"You get to define your new normal," Cormier said. "How will you drive your technology strategies closer to innovation? The only way you get closer to this innovation and the only way you can use this innovation to keep pace with changing demands is by adopting open-source developed technology. That's what is going to get you to the new normal."

Alibaba affiliate Ant Group open sources its privacy software and a ‘Secure Processing Unit’ – The Register

Alibaba's financial services affiliate, Ant Group, has open sourced its "privacy-preserving Computation Framework."

The goal of the release, according to an Ant announcement, is "to make the technologies more accessible to global developers and speed up the framework's application."

The Framework, called SecretFlow, can be found on both GitHub and China's analog, Gitee. In the repos you'll find code for the framework's components, including the "Secure Processing Unit" mentioned in the headline.

SecretFlow, which is designed to promote privacy during data analysis and machine learning efforts, was under development for six years and has already been used internally at Ant Group, as well as by a few external organizations.

Ant Group's privacy computing general manager, Lei Wang, billed the endeavor as a zero-cost shortcut to privacy for developers.

The Alibaba affiliate is no stranger to open-sourcing: it placed its low-latency service registry SOFARegistry on GitHub in 2019 and its interface design language Ant Design in 2015.

The financial services company also announced that it was investing over $600,000 in a newly launched fund. The endeavor, titled the CCF-Ant Group Privacy Computing Special Fund, is in collaboration with professional organization China Computer Federation.

The cash will go to researching privacy computing, including post-quantum multiparty computation, and will be available worldwide.

Ant's announcement of the code release reminded world+dog it topped the list of patent applications for privacy-preserving computation technologies in 2022 and therefore fancies itself an expert in a field it expects will continue growing.

How Development Teams Can Orchestrate Their Workflow with Pipelines as Code – InfoQ.com

Looking at the state of software engineering, it's clear that the industry underwent a level of transformation akin to a chameleon's. What used to be mainstream is now almost extinct, replaced by completely different tools and technologies.

If I look at what I used to do ten years ago, I remember working heavily with centralised version control systems, being bound by the choice of operating system a workload was running on, and in general feeling a strong sense of demarcation between being a developer and working in infrastructure.

Things have obviously changed; however, the single biggest disruptor in this field remains Git. Git changed everything: it democratised good engineering practices like source control, and it allowed a wealth of tools to be built upon its foundation. DevOps obviously played a major part in this, being the glue collating together a number of new tools, approaches and technologies. In short, this bottom-up proliferation and the broad adoption of DevOps practices led to the industry organically moving to an "as code" approach.

That's how Terraform (and similar tools) emerged, pushed by the tools ecosystem and by DevOps being broadly adopted and mainstream for most companies. Infrastructure as Code is now ubiquitous, and every cloud provider offers infrastructure deployment capabilities via code files and APIs, which should be the default choice for any application that is not a Hello World sample.

Infrastructure as Code was just the beginning. Configuration as Code followed shortly after, again becoming extremely commonplace and enabling organisations to scale their engineering capacity several times over. And in order to continuously increase the value development teams generate, Pipelines as Code was the natural consequence.

Pipelines as Code is the natural evolution of a key artefact engineering teams use every day. Think about it: you have Infrastructure as Code and Configuration as Code, so why not Pipelines as Code?

The concept is simple - rather than thinking about a pipeline just in terms of a CI/CD engine, you can expand it to being an orchestrator for your development platform, and all its artefacts are stored in code files.

That will provide you with versioning, team working capabilities, etc., while at the same time giving you the power to automate all of your processes. And the more you automate, the more your quality increases, your speed improves, and your resiliency goes up exponentially. It's a game changer for any development team.

Look at my blog publishing system - it's all hosted on GitHub, and whenever I post something this is what happens:

Two pipelines (or workflows in GitHub's jargon) will run every time, one for publishing and one for other activities, under certain conditions. You might wonder why two, and why the CI workflow exists alongside the pages-build-deployment workflow. The first one is custom, the second one is out of the box for publishing. Let's take a look at the custom one:
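As a rough sketch of what such a workflow file could look like (the branch name, secret and helper script below are illustrative assumptions, not the author's actual setup):

```yaml
# .github/workflows/ci.yml -- illustrative sketch, not the author's actual file
name: CI

on:
  push:
    branches: [main]          # assumed branch name

jobs:
  tweet:
    runs-on: ubuntu-latest
    # skip the job when the commit message contains "NO CI"
    if: ${{ !contains(github.event.head_commit.message, 'NO CI') }}
    steps:
      - uses: actions/checkout@v3
      # hypothetical helper script kept in the repository; a marketplace
      # action could be used instead to post the tweet
      - run: ./scripts/send-tweet.sh "New post published"
        env:
          TWITTER_API_KEY: ${{ secrets.TWITTER_API_KEY }}
```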

This workflow automatically tweets on my behalf. It will run every time, unless a commit contains "NO CI" in the message. It's a code file, and it is stored in my repository. Should I ever move it from my account to another repository, it will keep working without issues.

All CI/CD orchestrators are going in this direction: Azure Pipelines, GitHub Actions, Jenkins, etc. The UI is no longer a focus, and Pipelines as Code brings some very specific advantages for a developer.

Being just code means that your pipelines will benefit from all the tools already used in any engineering organisation. This includes version control, branching, pull requests, etc. Developers know how to deal with code, so pipelines become just another artefact stored in Git.

This also facilitates a number of situations where you must maintain traceability, auditability, etc., while still offering ease of access and familiarity. Once again, Pipelines as Code means that everything is stored with full history and access controls, while still maintaining ease of use and repeatability.

Finally, portability. Yes, there will be different dialects of Pipelines as Code depending on the target platform; however, the concept remains the same across the board. Think about GitHub Actions and Azure Pipelines, for example - both are based on YAML, with different syntax and some peculiarities. It takes a day at most for a developer to get up to speed, and a week tops to be comfortable with the differences. The productivity boost is unbelievable, given there is no longer a distinction between a build pipeline and a release pipeline. Everything is just a set of orchestrated tasks performed by the same engine.

There are some advanced features in each modern orchestrator. Templates are really common now, and a true life-saver. You'll define a pipeline once, and you can re-use it across multiple automations and projects with minimal changes. Your template will contain all the logic and the possible configurations, which you will invoke from your main pipeline. Let's take a look.

This would be a template, named template.yml in your repository:
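A minimal sketch of such a template, with an illustrative parameter name and a simple echo step, could be:

```yaml
# template.yml -- sketch of a step template that loops over an input array
parameters:
  - name: items               # the input array supplied by the calling pipeline
    type: object
    default: []

steps:
  # one command-line task is generated per item in the array
  - ${{ each item in parameters.items }}:
    - script: echo "${{ item }}"
      displayName: Print ${{ item }}
```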

This template accepts an input array, and it will relay the individual items making up the array one by one using a command-line task. It's very simple logic; however, it shows that within a pipeline you can already use complex constructs like for loops (via the each keyword), and it allows you to dynamically compose as many tasks as there are items in the input array.

Now, if you invoke it from another pipeline, all you have to do is this:
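A sketch of the calling pipeline, using four illustrative values to match the four generated tasks described below, could be:

```yaml
# azure-pipelines.yml -- sketch of a main pipeline consuming the template
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - template: template.yml
    parameters:
      items:                  # four illustrative values -> four generated tasks
        - alpha
        - beta
        - gamma
        - delta
```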

The output of this main pipeline is four command-line tasks generated on demand, printing out the values, all orchestrated on the fly.

Another feature I really like is the Matrix in Azure Pipelines, for example:
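A representative sketch of that matrix (job, variable and image names are illustrative) could be:

```yaml
# sketch of the matrix strategy: the same steps run on three operating systems
jobs:
  - job: Build
    strategy:
      matrix:
        linux:
          imageName: ubuntu-latest
        mac:
          imageName: macOS-latest
        windows:
          imageName: windows-latest
    pool:
      vmImage: $(imageName)
    steps:
      - script: echo "Running on $(Agent.OS)"
        displayName: Print agent OS
```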

This snippet will run the tasks specified in the steps section across three parallel jobs, each running on an agent with a different operating system. This is all it takes.

Needless to say, it's not all plain sailing and straightforward: there is a learning curve. Unsurprisingly, the biggest hurdle to get past is the lack of a UI. For at least a decade our build orchestrators relied on UIs to make the process simpler and easier to digest, as developers lacked full control over their process. As an industry we settled on the expectation that the UI had to be there to make things easier.

Then the "as code" movement came along and started breaking through. Infrastructure as Code was the first foray, then everything else followed. Fast forward ten years, and we are now able to deal with the fact that UIs no longer have the most features and options, instead becoming just a gateway to a build orchestrator, useful to learn the main functionalities before moving to the "as code" implementation.

The other important change factor is the fact that now everything runs in a pipeline, with potentially no distinction between build or release. It's up to the developer to define these boundaries, and migrating can require some work as there will be no 1:1 mapping for everything. It is, however, a fairly lightweight job, so not so big of an obstacle.

After working with many platforms you will realise there are patterns and reusable approaches; however, the main lesson learned is still about getting into the habit of implementing Pipelines as Code as early as possible. Creating the build definition should be the first thing an engineer does, because it will evolve with the application code and it will provide a seamless experience once used with the DevOps platform.

A typical example is this: having pipeline definitions embedded in your code repositories means that your repositories will immediately become fully granular and independent, as they will contain not just the source code for the application, but also the build definition required in order to compile and deploy that application, making it a movable artefact across a DevOps platform. Microservices development becomes way easier. Testing gets simpler. Automating mundane tasks can yield so much additional value to the team, given any engineer can focus on solving actual problems rather than repeating the same steps all the time. Pipelines as Code does wonders.

Moving to Pipelines as Code doesn't happen overnight, but it can open so many doors and paths for your development team. If you are just getting started, do one thing: pick up any of your build and release processes and start replicating it in your code files. It's as simple as that. The more you automate these processes, the more you will start implementing them as the default option, and you will save a huge amount of time which is otherwise wasted on repetitive tasks.

Doing so will naturally guide you towards automating the steps currently holding you back, all with the benefit of the development experience engineers are used to. Changes become easier to track, merging is simple, and coupled with a peer review process it will be accessible to every developer.

Now Comes The Hard Part, AMD: Software – The Next Platform

From the moment the first rumors surfaced that AMD was thinking about acquiring FPGA maker Xilinx, we thought this deal was as much about software as it was about hardware.

We like that strange quantum state between hardware and software where the programmable gates in FPGAs live, but that was not as important. Access to a whole set of new embedded customers was pretty important, too. But the Xilinx deal was really about the software, and the skills that Xilinx has built up over the decades crafting very precise dataflows and algorithms to solve problems where latency and locality matter.

After the Financial Analyst Day presentations last month, we have been mulling the one by Victor Peng, formerly chief executive officer at Xilinx and now president of the Adaptive and Embedded Computing Group at AMD.

This group mixes together embedded CPUs and GPUs from AMD with the Xilinx FPGAs and has over 6,000 customers. It brought in a combined $3.2 billion in 2021 and is on track to grow by 22 percent or so this year to reach $3.9 billion or so; importantly, Xilinx had a total addressable market of about $33 billion for 2025, but with the combination of AMD and Xilinx, the TAM has expanded to $105 billion for AECG. Of that, $13 billion is from the datacenter market that Xilinx has been trying to cater to, $33 billion is from embedded systems of various kinds (factories, weapons, and such), $27 billion is from the automotive sector (Lidar, Radar, cameras, automated parking, the list goes on and on), and $32 billion is from the communications sector (with 5G base stations being the important workload). This is roughly a third of the $304 billion TAM for 2025 of the new and improved AMD, by the way. (You can see how this TAM has exploded in the past five years here. It's remarkable, and hence we remarked upon it in great detail.)

But a TAM is not a revenue stream, just a giant glacier off in the distance that can be melted with brilliance to make one.

Central to the strategy is AMD's pursuit of what Peng called pervasive AI, and that means using a mix of CPUs, GPUs, and FPGAs to address this exploding market. What it also means is leveraging the work that AMD has done designing exascale systems in conjunction with Hewlett Packard Enterprise and some of the major HPC centers of the world to continue to flesh out an HPC stack. AMD will need both if it hopes to compete with Nvidia and to keep Intel at bay. CUDA is a formidable platform, and oneAPI could be if Intel keeps at it.

"When I was with Xilinx, I never said that adaptive computing was the end all, be all of computing," Peng explained in his keynote address. "A CPU is going to always be driving a lot of the workloads, as will GPUs. But I've always said that in a world of change, adaptability is really an incredibly valuable attribute. Change is happening everywhere. You hear about it: the architecture of a datacenter is changing. The platform of cars is totally changing. Industrial is changing. There is change everywhere. And if hardware is adaptable, then that means not only can you change it after it's been manufactured, but you can change it even when it's deployed in the field."

Well, the same can be said of software, which follows hardware, of course, even though Peng didn't say that. People were messing around with Smalltalk back in the late 1980s and early 1990s, after it had been maturing for two decades, because of the object-oriented nature of the programming, but the market chose what we would argue was an inferior Java only a few years later because of its absolute portability thanks to the Java Virtual Machine. Companies not only want to have the options of lots of different hardware, tuned specifically for situations and workloads, but they want the ability to have code be portable across those scenarios.

This is why Nvidia needs a CPU that can run CUDA (we know how weird that sounds), and why Intel is creating oneAPI and anointing Data Parallel C++ with SYCL as its Esperanto across CPUs, GPUs, FPGAs, NNPs, and whatever else it comes up with.

This is also why AMD needed Xilinx. AMD has plenty of engineers (well north of 16,000 of them now), and many of them are writing software. But as Jensen Huang, co-founder and chief executive officer of Nvidia, explained to us last November, three quarters of Nvidia's 22,500 employees are writing software. And it shows in the breadth and depth of the development tools, algorithms, frameworks, and middleware available for CUDA, and in how that variant of GPU acceleration has become the de facto standard for thousands of applications. If AMD is going to have the algorithmic and industry expertise to port applications to a combined ROCm and Vitis stack, and do it in less time than Nvidia took, it needed to buy that industry expertise.

That is why Xilinx cost AMD $49 billion. And it is also why AMD is going to have to invest much more heavily in software developers than it has in the past, and why the Heterogeneous-Compute Interface for Portability, or HIP, API, which is a CUDA-like API that allows runtimes to target a variety of CPUs as well as Nvidia and AMD GPUs, is such a key component of ROCm. It gets AMD going a lot faster on taking on CUDA applications for its GPU hardware.

But in the long run, AMD needs to have a complete stack of its own covering all of the AI use cases across its many devices.

That stack has been evolving, and Peng will be steering it from here on out with the help of some of those HPC centers that have tapped AMD CPUs and GPUs as their compute engines in pre-exascale and exascale class supercomputers.

Peng didn't talk about HPC simulation and modeling in his presentation at all and only lightly touched on the idea that AMD would craft an AI training stack atop the ROCm software that was created for HPC. Which makes sense. But he did show how the AI inference stack at AMD would evolve, and with this we can draw some parallels across HPC, AI training, and AI inference.

Today, AMD has separate AI inference software stacks for its CPUs, GPUs, and FPGAs.

With the first iteration of its unified AI inference software (which Peng called the Unified AI Stack 1.0), the software teams at AMD and the former Xilinx are going to create a unified inference front end that can span the ML graph compilers on the three different sets of compute engines as well as the popular AI frameworks, and then compile code down to those devices individually.

But in the long run, with the Unified AI Stack 2.0, the ML graph compilers are unified and a common set of libraries spans all of these devices; moreover, some of the AI Engine DSP blocks that are hard-coded into Versal FPGAs will be moved to CPUs, and the Zen Studio AOCC and Vitis AI Engine compilers will be mashed up to create runtimes for Windows and Linux operating systems for APUs that add AI Engines for inference to Epyc and Ryzen CPUs.

And that, in terms of the software, is the easy part. Having created a unified AI inferencing stack, AMD has to create a unified HPC and AI training stack atop ROCm, which again is not that big of a deal, and then the hard work starts. That is getting the close to 1,000 key pieces of open source and closed source applications that run on CPUs and GPUs ported so they can run on any combination of hardware that AMD can bring to bear, and probably the hardware of its competitors, too.

This is the only way to beat Nvidia and to keep Intel off balance.

Fortress Information Security Sponsors Open Web Application Security Project To Work on Industry-Wide Software Bill of Materials Standards -…

Orlando, FL, July 6, 2022 – Fortress Information Security, the nation's leading cybersecurity provider for critical infrastructure organizations with digitized assets, today joined the Open Web Application Security Project (OWASP) as a silver sponsor. Fortress has allocated a portion of that sponsorship to support the CycloneDX project, which is focused on promoting a lightweight Software Bill of Materials (SBOM) standard for application security and supply chain component analysis.

OWASP is a nonprofit foundation that works to improve software security by making application security risks visible. OWASP activities include community-led open source software projects, more than 250 local chapters worldwide, tens of thousands of members, and industry-leading educational and training conferences.

"OWASP and the CycloneDX project are critical to making universal SBOM principles and standards a reality," said Betsy Jones, chief operating officer of Fortress Information Security. "Bringing software developers and cybersecurity professionals together openly and collaboratively will foster the development of trusted SBOM solutions."

Joined by Tony Turner, Fortress vice president of research and development and an OWASP chapter and project leader for over 10 years, Fortress utilizes multiple OWASP projects, such as CycloneDX, SCVS, the OWASP Risk Rating methodology, and many others, to secure critical infrastructure.

OWASP is an open community dedicated to enabling organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted. All projects, tools, documents, forums, and chapters are free and open to anyone interested in improving application security.

About Fortress Information Security
Fortress Information Security secures critical industries from cybersecurity and operational threats stemming from vendors, assets, and software in their supply chains. Fortress is the only end-to-end platform that connects intelligence surrounding vendors, information technology and operational technology assets, and software through a holistic, fit-for-purpose approach. Fortress has also partnered with its customers and suppliers to form the Asset-to-Vendor (A2V) network, which facilitates the secure and seamless exchange of asset information and security intelligence, enabling collaborative workflows to better understand and remediate potential issues. Fortress serves critical industries such as energy, government, aerospace & defense, critical manufacturing, industrial automation, automotive, and healthcare.

About OWASP
As the world's largest non-profit organization concerned with software security, OWASP: supports the building of impactful projects; develops & nurtures communities through events and chapter meetings worldwide; and provides educational publications & resources to enable developers to write better software and security professionals to make the world's software more secure.
