Confidential Computing with WebAssembly – The New Stack – thenewstack.io

AUSTIN, TEX. – Back when they worked at Red Hat, Mike Bursell and Nathaniel McCallum grappled with the challenges of confidential computing: isolating an organization's most sensitive data in a secure enclave while processing it.

Confidential computing is of particular use to organizations that deal in sensitive, high-value data, such as financial institutions, but it is valuable to a wide variety of organizations.

"We felt that confidential computing was going to be a very big thing, but that it should be easy to use," said Bursell, who was then chief security architect in the office of Red Hat's chief technology officer. "And rather than having to rewrite all the applications and learn how to use confidential computing, it should be simple."

But it wasn't simple. Among the biggest puzzles: attestation, the mechanism by which a host measures a workload cryptographically and communicates that measurement to a third party.

"One of the significant challenges that we have is that all the attestation processes are different," said McCallum, who led Red Hat's confidential computing strategy as a virtualization security architect.

"And all of the technologies within confidential computing are different. And so they're all going to produce different cryptographic hashes, even if it's the same underlying code that's running on all of them."

And with more organizations deploying their workloads to multicloud and hybrid environments, these differences pose a technical problem for workload equivalence. If a single workload is deployed to three different architectures, with three different technologies running their confidential computing, McCallum asked, "how do I know that those are all the same?"
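To make the equivalence problem concrete, here is a minimal TypeScript sketch, not Enarx's actual attestation protocol, and with placeholder file names: a "measurement" is just a cryptographic hash of the deployed bytes, so three native builds can never match each other, while byte-identical Wasm deployments measure the same everywhere.

```typescript
// Minimal sketch (not Enarx's attestation flow; file names are placeholders):
// a workload "measurement" as a cryptographic hash of the deployed binary.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hash the bytes of a workload the way a host might measure it.
function measure(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Three native builds of the same source yield three different hashes,
// so there is no way to prove the workloads are equivalent...
const native = ["app-x86_64.bin", "app-aarch64.bin", "app-sev.bin"].map(measure);
console.log(new Set(native).size); // 3

// ...while one Wasm binary deployed to all three hosts measures identically.
const wasm = ["intel/app.wasm", "arm/app.wasm", "amd/app.wasm"].map(measure);
console.log(new Set(wasm).size); // 1, when the copies are byte-identical
```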

At Red Hat, McCallum and Bursell worked on a solution to this issue and initiated a project called Enarx, an open source framework for running applications in Trusted Execution Environments (TEEs). Red Hat donated Enarx to the Linux Foundation's Confidential Computing Consortium.

In 2021, Bursell, based near Cambridge, England, and McCallum, who lives near Raleigh, N.C., co-founded a company, Profian, built around Enarx. In doing so, they planted a flag in the rapidly growing WebAssembly territory.

At the Linux Foundation's Open Source Summit North America in June, Profian's two co-founders told The New Stack about their plans for the project, which CEO Bursell said include releasing a minimum viable product (MVP) this quarter.

The solution to the attestation challenge, McCallum said, was to use some sort of bytecode, like WebAssembly (Wasm). (McCallum, Profian's chief technology officer, was a founding member of the Bytecode Alliance while at Red Hat; Bursell serves as a director on its governing board.)

Wasm, a binary instruction format for a stack-based virtual machine, works as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.
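A small sketch of what that portability looks like in practice, using the standard WebAssembly JavaScript API (the module URL and its exported function are hypothetical): the same .wasm bytes can be instantiated, unchanged, by any compliant runtime.

```typescript
// Fetch and instantiate one Wasm binary; the identical bytes run in a
// browser, on a server runtime, or (as with Enarx) inside a TEE.
const bytes = await fetch("app.wasm").then((r) => r.arrayBuffer());
const { instance } = await WebAssembly.instantiate(bytes, {
  // imports the module declares, if any
});

// Call an exported function; the bytecode never changes per platform.
const main = instance.exports.main as () => number;
console.log(main());
```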

"WebAssembly allows you to say, 'I've created a single application, and I can prove that that is exactly the application that's running on all of these instances.' Cryptographic proof. And that's the big win."

Mike Bursell, co-founder and CEO, Profian

WebAssembly's vaunted advantage, "build once, run anywhere," avoids having to build systems to manage all the cryptographic hashes generated from the various attestation technologies in the various deployment environments.

Enarx provides a single run-time TEE and attestation based on WebAssembly, allowing developers to deploy applications using their preferred language, such as Rust, C/C++, C#, Go, Java, Python, Haskell and more. Even COBOL.

The framework is both hardware and cloud service provider neutral; in keeping with Wasm's promise of "build once, run anywhere," developers can deploy the same code transparently across multiple targets.

"WebAssembly allows you to say, 'I've created a single application, and I can prove that that is exactly the application that's running on all of these instances.' Cryptographic proof," Bursell said.

"And that's the big win, quite apart from the fact that WebAssembly allows us to run on Intel boxes, ARM boxes, AMD boxes, with exactly the same binary bytecode, which is just fantastic for us."

The problem that Enarx is designed to address is widespread.

"It's difficult to find people who don't have the problem," Bursell said. "If you've got sensitive data or sensitive applications, and you're highly regulated, or strongly audited, or just risk-averse, you just can't put certain workloads in the cloud. Banks can't; health care, pharmaceutical, energy, telco, government, defense, security, not to mention just standard enterprises."

As a result, he added, those organizations have to keep that data on-premises, forgoing the benefits of the cloud. "And that means that it's not just the cost of keeping all that going. It's the inability to be able to surge out into the cloud and scale up quickly, as things take off."

Mike Bursell, CEO and co-founder of Profian.

"If you've got a new application, and suddenly everyone's using it, can you afford to wait five weeks to get a new server? No, you can't; you want to be able to put it straight in the cloud."

Confidential computing offers the promise of ironclad privacy, Bursell noted: not even the cloud service provider can look in, or change your application or your data. That matters for an organization that deals not only with sensitive customer data but also with proprietary information, such as an investment algorithm at a financial-services company.

"The crown jewels of the investment bank are actually in the application, rather than the data," he said.

Also, McCallum said, new use cases are just around the corner, due to the increasingly distributed nature of networks, through the edge and the Internet of Things (IoT).

"The perimeter is gone," Profian's CTO said. "If there's anything the last 15, 20 years told us, it's that the attacks are both external and internal. And so if you're going to protect this stuff, even internally, even on-prem, you still need all of the same guarantees."

As it continues to develop Enarx and move toward an MVP, Profian has established partnerships with a number of tech companies, including Enarx project sponsors Equinix and phoenixNAP. It is also working closely with chip manufacturers IBM, Intel, AMD and ARM.

Profian's solution requires server chips at least at the level of Intel's Ice Lake Xeon Scalable or AMD's Milan Epyc, which the major cloud providers are now in the midst of deploying, McCallum said. The company is also making plans to support ARM's Version 9 CCA Realms and Intel's forthcoming TDX.

"One of the things we're about is allowing people to deploy wherever the hardware is," Bursell said. "There may be particular reasons to select a particular CSP or particular geography. But you get the same assurances whether you're deploying in Dublin or in San Francisco or in Shanghai, because you're using the same chips with the same cryptographic proofs."

Nathaniel McCallum, co-founder and CTO of Profian.

And because Enarx is built on WebAssembly, he added, it doesn't matter where the workload is deployed.

McCallum echoed this notion. "There are some people who are in desperate amounts of pain, who need this stuff yesterday," he said. "And they're deploying on existing infrastructures. So they're coding specifically to that hardware technology. But if that becomes vulnerable, right, what are your options to switch to another hardware technology?"

"One of the key advantages that WebAssembly gives us is that, if there is a hardware vulnerability on one platform, you're not sunk. You can just deploy on another platform whilst we create the mitigation with the hardware vendor."

And, he added, as new platforms become available, such as ARM's, "you don't have to modify your workload at all; your workload stays exactly the same. And all of a sudden you just get new platform support. And then as soon as the hardware is available, you continue to deploy, exactly the way you've always deployed in the past."

As a model for how to introduce a new project to the developer community, Bursell looks to Docker, the Platform as a Service project that allows devs to build, test and deploy apps quickly.

"One of the things Docker got right in the early days was to just make it really easy for people to try stuff out," he said. "And that's absolutely the approach that we think is right."

To that end, Profian launched a demo of Enarx at the end of July. "Anyone can use it, anyone can play with it," Bursell said. "Because we want to make it easy to play with."

"All of a sudden, WebAssembly is going to emerge very quickly as a mature, stable platform, with very broad language support."

Nathaniel McCallum, co-founder and CTO, Profian

The demo, McCallum said, will allow users to deploy a workload for a short period of time, without having to set anything up: "The hardware or the kernel, all the cloud resources, everything is set up for you. And it gives you a chance to actually experiment with the platform with zero friction, essentially."

The ease of debugging in confidential computing will also be showcased as part of the demo, Bursell said: the debugging environment Profian provides uses the same Wasm runtime as the deployment environment.

"You can test it on your Linux box, on your Mac, on your Windows box, or even on a Raspberry Pi. So you can test it and know what you're running once, then deploy it into a Trusted Execution Environment with Profian, and it'll still work."

As it leaves the browser, WebAssembly is just beginning to deliver on its promise, said McCallum.

"For a lot of people, it feels like it's a long time coming and never here," he said. "But there's a lot of work happening. And it's happening in precisely those ways that don't draw a lot of attention to the people who are working on them. And so all of a sudden, WebAssembly is going to emerge very quickly as a mature, stable platform, with very broad language support."

For more on what's new in Wasm, check out this recent episode of The New Stack's Makers podcast, recorded at Open Source Summit North America in June.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.



Goodbye FTL: Kioxia reconstructing flash drives with software-enabled flash – Blocks and Files

Kioxia is redesigning SSDs with no traditional Flash Translation Layer (FTL), just a minimal drive microcontroller and an API that gives hyperscaler host software pretty direct flash hardware control for latency, garbage collection and more.

This is part of the Linux Foundation's open source Software-Enabled Flash (SEF) project, and is being presented at this week's Flash Memory Summit Conference & Expo. The aim is to get rid of hard disk drive-era thinking regarding SSD controllers, and to provide hyperscaler customers with a way to make their flash media operate more efficiently and consistently. SSDs still contain flash dies as before, but the existing FTL-running controller is no more, replaced by a minimal processor running low-level SSD operations and a much-reduced-scope FTL.

Eric Ries, SVP, Memory Storage Strategy Division (MSSD) at Kioxia America, said in a statement: "Software-Enabled Flash technology fundamentally redefines the relationship between the host and solid-state storage, offering our hyperscaler customers real value while enabling new markets and increasing demand for our flash solutions."

A SEF web page identifies five SEF attributes:

An overview web page tells us that the project is based around purpose-built, media-centric NAND hardware, called a SEF unit, focused on hyperscaler requirements, together with an optimized command set at the PCIe- and NVMe-level for communicating with the host.

We are told: "The SEF hardware unit is architected to combine the most recent flash memory generation with a small onboard SoC controller that resides on a PCB module. As an option, the SEF architecture supports an on-device DRAM controller allowing the module to be populated with DRAM, based upon the needs of each hyperscale user. This combination of components comprises a SEF unit that is designed to deliver flash-based storage across a PCIe connection."

Behind the interface, individual SEF units handle all aspects of block and page programming (such as timing, ECC and endurance) for any type or generation of flash memory being used. SEF units also handle low-level read tasks that include error correction, flash memory cell health and life extension algorithms.

The small SEF onboard microcontroller that resides on the PCB module is responsible for managing flash-based media. It abstracts and controls generational differences in flash memory relating to page sizes, endurance control and the way that flash dies are programmed. Through the software API, new generations of flash memory can be deployed quickly, cost-effectively and efficiently, providing developers with full control over data placement, latency, storage management, data recovery, data refreshing and data persistence.

The SEF unit also delivers advanced scheduling functionality that provides developers with a flexible mechanism for implementing separate prioritized queues used for read, write, copy and erase operations. This capability, in combination with die time scheduling features, enables weighted fair queuing (WFQ) and command prioritization in hardware that is accessible from the API.
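As a concept illustration only (this is not the SEF C API), weighted fair queuing comes down to charging each queue for the die time it consumes and always serving the queue that has consumed least relative to its weight, so reads can be prioritized over background erases without starving either.

```typescript
// Illustrative weighted fair queuing, concept only -- not the SEF API.
type Op = { kind: "read" | "write" | "copy" | "erase"; run: () => void };

class WeightedQueue {
  credit = 0; // accumulated die time charged to this queue
  constructor(public weight: number, public ops: Op[] = []) {}
}

// Serve the non-empty queue with the lowest credit-per-weight, so a
// queue with weight 4 gets roughly 4x the die time of a weight-1 queue.
function schedule(queues: WeightedQueue[]): Op | undefined {
  const ready = queues.filter((q) => q.ops.length > 0);
  if (ready.length === 0) return undefined;
  ready.sort((a, b) => a.credit / a.weight - b.credit / b.weight);
  const chosen = ready[0];
  chosen.credit += 1; // one unit of die time consumed
  return chosen.ops.shift();
}
```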

There is an open source, low-level API and an open source, high-level software development kit (SDK).

Read a trio of downloadable white papers to find out more.

Or watch any or all of the eight videos discussing the technology ideas involved.

Judging by the white papers and videos above, a lot of marketing effort has gone into SEF already; it looks like a fairly mature project. Only Kioxia amongst the NAND and SSD manufacturers seems to be involved. If the hyperscalers react positively (and we'd guess they have all been approached already) then the other suppliers will probably get involved alongside Kioxia.

At this stage it doesn't look as if there is an enterprise (on-premises) market for this, as enterprises would be loath to put in the effort of developing the software involved. But if a third party were to develop SEF hardware vendor-agnostic software, then that picture could change. We're thinking of JBOFD (Just a Bunch of Flash Dies) software equivalent to Kioxia's array-led JBOF (Just a Bunch of Flash) KumoScale software, but vendor-agnostic at the SEF hardware level.


Why the government is backing open source software – Open Access Government

The adoption of open source software is only growing stronger, as organisations look to access the benefits of agility and scalability that non-proprietary code can offer.

Since open source software is now a prominent and indispensable aspect of the digital infrastructure, it is not surprising to see the UK government take advantage of open source technology. Research by Aiven has discovered that 71% of UK government tech workers report the Government is now using more open source software compared to five years ago.

Multiple advantages arise from the use of open source software that governments are beginning to wake up to, such as recruiting talent, retaining and sharing knowledge, as well as greatly enabling digital transformation strategies. And let's not forget that open source software also enables the government to save on costly licensing fees.

With an ongoing tech talent shortage, giving developers access to open source software has considerable benefits that governments can enjoy, chief of which is their ability to recruit and retain top talent. Indeed, three-quarters of tech workers stated that providing access to open source will help the UK government hire more software developers and engineers.

This is ever more imperative at a time when the public sector cannot match the salaries of their private sector counterparts for technology-related roles. The availability of open source software offers potential recruits a transparent view of the work they will be undertaking. When a software engineer comes to a government department for an interview, they can see precisely the codebase they'll be working on, allowing for a greater understanding of the nature and scope of the work, which is highly sought after by developers.

Open source software also allows governments to retain skills and knowledge within departments. With software development being highly specialised, there is a significant risk of departmental knowledge loss with staff turnover. Knowledge is better shared and spread across teams when working in the open using open source techniques. Additionally, troubleshooting existing problems is easier when using open source solutions, reducing frustration among software engineers and, in turn, turnover.

This same accessibility encourages the sharing of code between different departments, avoiding the need to write new solutions from scratch to solve similar problems. Different subdivisions can easily view others' work, improving agility and efficiency when working towards shared goals, such as the new plan for digital health and social care.

Governments worldwide are having to catch up with the pace of technological change and how this affects the provision of their services. In the UK, the Government Digital Service (GDS) is responsible for unifying and digitising the government's online function, and provides a perfect case of how open source can be effectively incorporated into government services.

Governments worldwide are having to catch up with the pace of technological change

GDS utilised open source technology to launch GOV.UK in 2012, which now hosts over 20,000 websites on one platform. This realised a vision of the government for shared digital systems, in which easy-to-build, user-centric services are available.

GDS required a search service that could serve multiple government websites and GDS itself. It opted to use open source search tools like OpenSearch, as much of its code was already open source, demonstrating the capacity for open source in government. Now, many branches, such as local councils and fire departments, are using managed open source technology, accessing the benefits without additional procurement or information assurance due diligence.

I strongly believe that all software produced by governmental sources should be open source, so taxpayers can examine and inspect how their tax money is spent. This is why I think we should applaud governments like the UK's that massively adopt open source software.

Open source has proven to be valuable in the public and private sectors alike. The technology has the capacity to increase visibility, meet the demands of developers and provide a smoother platform for digital transformation, which is why it has been so readily adopted by GDS. With the governmental demands for talent retention, departmental alignment and a focussed digital strategy, it will be no surprise to see open source continue to be adopted by the UK government and beyond.

This piece was provided by Josep Prat, Open Source Engineering Manager at Aiven.



The Silent Threat Of Software Supply Chain Jacking – Forbes

Organizations are facing increased risk from threat actors exploiting weaknesses in open source code and the software supply chain.

There is a complex web of interdependencies required to source, process, manufacture, and transport goods that has to occur before a vehicle is available on a dealer lot, a product is sitting on the shelf at Target, or the Amazon delivery guy shows up at your door. The same is actually true for software today. There is a supply chain of software code involved in delivering an application or service, and attackers are taking advantage of its weaknesses.

The supply chain is one of those things that was always there, but most people didn't know about it and never thought of it. We shop, and buy, and consume with little understanding of, or regard for, the many moving parts that must align to produce goods.

An apple grows on a tree. It's relatively simple. However, getting the apple from the tree to the produce section at your grocery store requires effort to plant, grow, harvest, sort, clean, and transport the apples. Many factors such as extreme weather, fuel prices, skill and availability of workers, and more all impact the supply chain.

There is a ripple effect to the supply chain, which is responsible for a number of global issues right now. Seemingly unrelated events at the beginning of the supply chain can cascade and amplify into huge production challenges at the other end. The Covid pandemic, climate change, and other factors continue to disrupt regions and industries in ways that are impacting everyone around the world.

There is also increasing supply chain risk for cybersecurity. Successfully attacking thousands of targets is a Herculean task. Threat actors recognized that they could compromise one target further back in the supply chain, and leverage that to gain access to the thousands of companies or individuals that rely on that target.

A blog post from Checkmarx explains: "Today's attackers realize that by infecting the supply chain of open source libraries, packages, components, modules, etc., in the context of open source repositories, a whole new Pandora's box can be opened. And as we all know, once you open that box, it's nearly impossible to close."

The attack on SolarWinds at the end of 2020 was a supply chain attack. Companies and government agencies around the world use SolarWinds software. Threat actors were able to compromise the SolarWinds software and embed malicious code, which was then downloaded and executed by customers.

Researchers discussed these issues at the RSA Security Conference 2022 in June. Erez Yalon, VP of Security Research at Checkmarx, and Jossef Harush Kadouri, Head of Engineering for Supply Chain Security at Checkmarx, presented a session titled "The Simple, Yet Lethal, Anatomy of a Software Supply Chain Attack," which revealed insightful research and offered an attacker's perspective on open source flows and flaws, and on how threat actors can take advantage of software supply chain weaknesses.

Nation-state attackers and cybercriminals generally seek out the path of least resistance, which is why software supply chain jacking is a growing threat. I spoke with Erez, and with Tzachi (Zack) Zornstain, Head of Software Supply Chain at Checkmarx, about the increasing risk.

Zack noted that the way developers write code and create software has evolved. The shift from Waterfall, to Agile, and now to DevOps principles has accelerated and fundamentally changed the process. "There's a huge rise in speed and velocity of change in the last five years. We are moving towards a future, or even a present already, that has way more moving parts. Suddenly application security is not only about your code; it's also about containers, and third party, and open source, and APIs that are talking to each other. Everything out there is somehow connected in all of these small building blocks, and what we see is that the attackers are moving towards it."

Part of that shift has been an increased use of, and dependence on, open source code. "80% of the lines of code come from open source," shared Erez. "So, it's not a small part of the code. Most of the code in modern applications is from open source."

Leveraging open source code makes sense. It is more expedient to incorporate open source code that performs the function needed. There is also no point in duplicating effort and reinventing the wheel if the code already exists. However, developers, and the organizations that use these applications, need to be aware of the implications of those choices.

The thing about open source software is that anyone can contribute or modify code, and nobody is designated as responsible for resolving vulnerabilities or validating that it's secure. It is a community effort. The belief is that exposing it to the public makes it more secure, because it is open for anyone to see the code and resolve issues.

But there are thousands and thousands of open source projects, and many of them are more or less derelict. They are actively used, but not necessarily actively maintained. The original developers have lives and day jobs. The open source code is being provided for free, so there is little incentive to invest continuous effort in monitoring and updating it.

Erez and Zack shared with me a couple of examples of very popular open source components being modified in ways that compromised millions of devices running applications that leverage the open source code. One was an example of attackers hijacking the account of a developer of widely used open source code and embedding malicious code in it. The code had been used and trusted for years, and the developer had an established reputation, so it didn't occur to anyone to question or distrust the code.
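One concrete mechanism that catches this kind of tampering is the integrity hash a package manager pins in its lockfile. Here is a hedged TypeScript sketch (the file name and pinned value are placeholders) of the npm-style check performed before any installed code runs:

```typescript
// Verify a downloaded tarball against the "integrity" value pinned in a
// lockfile (npm records these as "<algorithm>-<base64 digest>").
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function verifyIntegrity(tarballPath: string, pinned: string): boolean {
  const sep = pinned.indexOf("-");
  const algo = pinned.slice(0, sep);       // e.g. "sha512"
  const expected = pinned.slice(sep + 1);  // base64-encoded digest
  const actual = createHash(algo)
    .update(readFileSync(tarballPath))
    .digest("base64");
  return actual === expected;
}

// A hijacked release changes the tarball bytes, so the pinned hash no
// longer matches and the install can be rejected before any code runs.
console.log(verifyIntegrity("some-package-1.2.3.tgz", "sha512-AAAA..."));
```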

That was a malicious takeover. The other example illustrates how software supply chain jacking can be a threat when it is intentional as well. Erez and Zack told me about a developer of a popular open source element who modified his code in support of Ukraine in the wake of Russia's invasion. The code was changed to effectively brick or wipe computers in Russia. He didn't hide the update; the change was made public and he was clear about his motives. However, few organizations in Russia that rely on his code are actually aware they use it, and even fewer would have any reason to read his posts or monitor changes on GitHub.

Software supply chain jacking and issues with the software supply chain in general will continue to expose organizations to risk. Erez summed up: "Basically, the question is whose responsibility is it? We think that because it's our software, it's our responsibility."

Organizations cannot afford to assume that the open source code running in their environments is secure. They also can't assume that just because the developer has a solid reputation, the open source code has great reviews, and the code has been used safely for years, it can be inherently trusted. Erez added: "It's our job to make sure things are actually working as expected."


SD Times Open-Source Project of the Week: Gerrit – SDTimes.com

Gerrit, an open-source project from Google, is a highly extensible and configurable tool for web-based code review and repository management for all projects that utilize the Git version control system.

It works to enable teams to discuss code, serve Git as an integrated experience within the larger code review flow, and manage workflows with integrated and delegatable access controls.

According to Google, Gerrit is an essential part of the development workflow for products that are developed with Git, including Android and Chromium.

With Gerrit, teams are able to discuss code and boost code fu by talking about specifics, serve Git as an integrated experience within a larger code review flow, and manage workflows with deeply ingrained access controls.

Additionally, users can read and discuss old and new versions of files with syntax highlighting and colored differences. With this, specific sections of code can be communicated about in order to ensure that the right changes are being made.

Gerrit also offers users Git-enabled SSH and HTTPS servers compatible with every Git client. This simplifies repository management by allowing teams to host several Git repositories together.

Furthermore, Gerrit Code Review can be extended and customized by installing server-side plugins. Source code for additional plugins can be found through the project listing.

For more information and to download the latest version of Gerrit, visit the website.


Why your company needs no-code tools to outpace your competitors – Global Banking And Finance Review

By Olivier Maes, Co-Founder and Chief Revenue Officer, Baserow

According to Gartner, the no-code landscape is rapidly evolving, with 70% of applications leveraging no-code tools by 2024. The no-code sector is democratising tech innovation within organisations by providing non-programmers with the means to use and create software tailored to their business needs. Unlike traditional software development, no-code tools do not require expensive developers and have very low adoption ramp-up time as they are intended to be user-friendly for non-technical users. No-code tools also increase the productivity of DevOps teams as they focus on integrations or extensions rather than building applications from scratch.

With only 3% of the world's population having the skills to write code, and most business processes, services, or products being supported by an application, the tension between business and IT keeps growing as IT budgets are still mainly consumed to maintain existing systems.

Business units must be creative to keep up with time-to-market deadlines, productivity imperatives, and innovation pressure. And they need to address their application needs without going through cumbersome IT processes. That is where no-code comes into play.

Why does your business need no-code tools?

First, what are no-code tools? A no-code tool is typically a web application, utilising web browsers and web technology to perform different functions and allowing users to interact and take logical actions. These tools offer a visual app-building experience through a drag-and-drop interface. Applications are then built up from forms and reports, with options to automate workflows without requiring the user to write code. This means that the code is already written and optimised on the backend, and automatically configured as different modules and extensions are added to the application.

Here are some real-world examples to make this more tangible. Think about a manufacturing plant where digital boards replace paper wallboards with incident management, shift schedules, production schedules, and inventory. The data they need to manage the factory floor workers and processes can be visualised and stored in a no-code database built by the users to capture precisely the information they need.

Furthermore, inventory, product lifecycle, returns, customer support tickets and other business operations can all be handled with no-code tools involving a web frontend database and some backend automation process to eliminate repetitive manual work.

Here is another example. HR departments can capture recruitment applicants' information through the company website, store the details, and process the steps in a no-code database to manage the hiring and onboarding processes.

Finally, marketing teams can employ no-code tools to manage multi-channel campaigns involving multiple stakeholders, content types, agencies, and distribution channels. Additionally, they can easily leverage the information to pull out insights to track the progress of their campaigns and KPIs. Today, modern organisations recognise that Excel Sheets need to be replaced with collaborative no-code databases and project management tools that can do precisely what the marketing teams want.

The list of processes that benefit from applications developed and maintained by the business units is endless, as every process in every company is specific.

So what are the tangible benefits of no-code applications?

Which tool is right for you?

No-code is transforming IT and business operations at a critical time. A growing developer shortage and increased reliance on IT teams for business success mean more businesses and entrepreneurs have to revolutionise their advanced IT processes independently, without the need for sophisticated, costly, and sometimes proprietary code. In the past few years, the no-code landscape has evolved and matured considerably, offering enterprises next-level tools to manage every process step in their customer, partner, supplier, and employee journeys.

See below a recent no-code industry map that captures the tools which are Enterprise-ready:

Image source: Baserow

Here are the things to consider when deciding on the right tool for your business.

Extensibility and Customisation

Applications built with no-code platforms are similar to manually-coded software because they are flexible, extensible, and scalable. Developers can create plug-ins if their organisation's code architecture supports it. There is an excellent opportunity for users to leverage other available extensions and applications to meet their specific needs. For instance, the modular architecture of Baserow empowers developers to create their own custom fields seamlessly.

Business Continuity and Security

If a company or public sector entity uses a no-code tool to collaborate on sensitive data or builds all sorts of processes around it, they do not want to risk losing any of that work or those applications in the future. Open-source no-code software alleviates that risk, as the source code is in users' hands forever. Combined with the option to self-host, many businesses will further benefit from eliminating vendor lock-in, which future-proofs their applications.

Innovation

The speed of innovation and the quality assurance that come from an open-source community go well beyond the software vendor's own developer teams. A strong user community with contributors is excellent for users, who benefit from new ideas and innovations drawn from a broader knowledge and technical pool instead of spending the time and investment needed to develop their own applications or plug-ins from scratch.

Data governance and compliance

When choosing a no-code tool, it is crucial to consider the enterprise or public sector rules related to data governance, SaaS usage, and other aspects of data sovereignty. Still, many no-code tools are SaaS-only and provided by US-based companies. That can be an issue for companies and governments with stringent data governance policies.


Igalia: the Open Source Powerhouse You’ve Never Heard of – thenewstack.io

Earlier this year Mozilla decided to stop development on its mixed reality browser. Rather than shuttering the project completely, it passed the source code to open source consultancy Igalia, which is using it to build the Wolvic browser. If you've been following browser and JavaScript development closely, then you may know about Igalia. Chances are, though, you've never heard of them. Yet you're almost certainly using something that Igalia helped build.

That includes big-ticket items like CSS Grid and dozens of other improvements, optimizations and even foundational web features.

Igalia was involved in the arrow functions and destructuring that were standardized in ECMAScript 2015, major features now used universally. It worked on the generators and async functions in ECMAScript 2017, which offer cleaner, less verbose code than the manual promise chains developers previously had to write. It also worked on async/await (which Igalia implemented in V8 and in JavaScriptCore for WebKit) and top-level await.
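A short illustration of the ergonomic difference those features make (the endpoint is hypothetical): the same logic written as a pre-ES2017 promise chain and as an async function.

```typescript
// Manual promise chain, the pre-async-function style:
function loadUserThen(id: string): Promise<string> {
  return fetch(`/users/${id}`)
    .then((res) => res.json())
    .then((user) => user.name);
}

// The async-function equivalent: same behavior, flatter and easier to read.
async function loadUserAwait(id: string): Promise<string> {
  const res = await fetch(`/users/${id}`);
  const user = await res.json();
  return user.name;
}

// Top-level await lets a module do this outside any function body:
// const name = await loadUserAwait("42");
```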

For BigInt, Igalia was involved in the spec and testing and implemented the feature in both SpiderMonkey and JavaScriptCore. Igalia contributors are working on class fields, a long-awaited approach that will make plain JavaScript classes powerful enough to express the constructs developers currently need internal proprietary class systems for; on the universally adored Temporal replacement for the JavaScript Date object; and on more speculative features like type annotations and erasable types. It's also on track to finally produce a MathML Core specification that browsers will adopt, resolving a process that predates the W3C.
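A sketch of two of those features together, with the caveat that Temporal is still a proposal (shown here assuming the @js-temporal/polyfill package): class fields declared without constructor boilerplate, and Temporal used for exact timestamps.

```typescript
import { Temporal } from "@js-temporal/polyfill";

class Counter {
  count = 0;                            // public class field
  #createdAt = Temporal.Now.instant();  // private field, exact timestamp

  increment(): number {
    return ++this.count;
  }

  // How long this counter has existed, as a Temporal.Duration.
  age(): Temporal.Duration {
    return Temporal.Now.instant().since(this.#createdAt);
  }
}

const c = new Counter();
c.increment();
console.log(c.count, c.age().toString());
```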

"Igalia is the premier web standards consultancy and their mission is to improve the commons."

Rob Palmer, Bloomberg

In 2019, Igalia was the second largest contributor to both Chromium (after Google) and WebKit (after Apple), as well as a major contributor to Mozilla's Servo. "Igalia has contributed to many efforts in the web platform, including moving standards forward, implementing missing features, and fixing bugs that positively impact both web developers and browser users," the Microsoft Edge team told us when we asked how a browser maker views their work.

It's not just browsers. The consultancy is also involved with projects like Node.js and Wayland, and Igalia's work also shows up on the Steam Deck because of its contributions to graphics libraries like Mesa and Vulkan.

But who is Igalia and how can it make such significant contributions to the web (and related platforms)?

"Igalia is the premier web standards consultancy and their mission is to improve the commons," said Rob Palmer, head of Bloomberg's JavaScript Infrastructure and Tooling team and co-chair of the TC39 ECMAScript standardization committee.

It's not a typical consultancy and much of its success comes from how different it is: "We are a worker-owned cooperative," explains Brian Kardell, a developer advocate at Igalia known for his work on the Extensible Web Manifesto and HitchJS. "We have a flat structure. There are no bosses, there are no shareholders. It's our lives, our company and we want to work on something that is valuable." For Igalia, that means focusing on open source and free software almost exclusively, and on filling gaps: "we try very hard to improve what we think are failures in the status quo and create a system that is healthier for everyone."

Although the company is based in Spain and the pay may not match Silicon Valley, being able to work fully remote on technology they view as significant allows Igalia to hire an almost unique combination of experts.

"We have a flat structure. There are no bosses, there are no shareholders. It's our lives, our company and we want to work on something that is valuable."

Brian Kardell, Igalia

"Because the mission is so attractive, you get top-tier candidates, people who have worked directly on the engines for the browsers and other projects but choose to work for Igalia because they believe in that fundamental mission to improve the web and improve the commons for all," Palmer suggests.

Calling Igalia influential and well respected in the browser development community is almost an understatement. In recent years, a number of senior developers have moved to Igalia from the browser engineering teams at Apple, Firefox, Google and other projects, giving the company expertise in codebases like WebKit, Gecko, Servo, SpiderMonkey, V8, Chromium and Blink, along with excellent connections to those projects, often with commit rights and membership of organizations like the Blink API owners (which makes decisions about which developer-facing features become available in Chromium).

That means Igalia has the technical ability to work on significant features (which isn't necessarily rare) and can also help get the code to deliver them into multiple browsers at almost the same time (which is rare).

"Igalia brings expertise in standardization," Palmer explains. "Consensus building, having the relationships and the expertise to engage and to make forward progress, which is a very tough thing to do in this world because we're trying to get many disparate parties to all agree. But also, they're not just doing the standardization, they're also doing things like implementation and test: the full end-to-end story of what is required."

All the major web browser engines are open source and, in theory, anyone can contribute to the underlying projects. But not everyone can invest the necessary time; plus, those projects have a core group of maintainers who decide what code goes into them. For Chromium, the Chrome API owners have to agree that it's something that largely fits the architecture and principles of the web, Kardell points out. Not every contribution would be accepted. But Igalia's contributions almost always are.

"We have expertise. We belong to all the standards bodies, we have good relationships with people in all the standards bodies, we belong to a lot of working groups with members who are actively involved, and we do implementation work. We are core contributors, API owners, reviewers for all kinds of things in all those browsers," he explains.

Part of what attracts browser engineers with this level of expertise is Igalia's funding approach, which avoids common problems of burnout and unsustainable business models, Kardell says.

"Open source is great in many ways. You can take software and try it out, inspect it, you can mold it and fork it and help evolution happen. You can create a startup very quickly. There are all kinds of things I love about open source, but what I don't love is that it can become a source of burnout and non-compensation."

"There are all kinds of things I love about open source, but what I don't love is that it can become a source of burnout and non-compensation."

Brian Kardell, Igalia

Igalia does work directly for paying clients, encouraging them to use open source and contribute the technology it builds to the commons. It also works with sponsors like Bloomberg, Salesforce and the AMP Project (which is part of the OpenJS Foundation). And most recently it experimented with fundraising from smaller organizations and individual web developers, to have the web community rather than a single paying client drive the implementation of a missing feature.

Even organizations that don't sponsor any work through Igalia welcome its contributions. "We believe that the evolution of the web is best served through open debate from a wide variety of perspectives, and we appreciate the perspective that Igalia and others bring to standards discussions and the Chromium community," Microsoft told us.

A single organization might sponsor a feature, but the result is something useful to a lot of web developers, even or especially when the different priorities of the browser makers mean there hadn't been significant progress before.

"We helped unblock container queries, which was the number one ask in CSS forever," Kardell told us. "We unblocked :has(), which is now in two browsers." The :has() selector had been discussed in CSS specifications for years and was also a top request from developers, but it was a complex proposal, and browser makers were concerned it would affect performance. Kardell tried to make progress on it in the CSS working group: "every year or two I would say let's do something about this, it's a big deal, and we just could not make it happen."

When eyeo, the company behind Adblock Plus, sponsored Igalia to work on it so they could use CSS for their rules, they were able to get past what he terms a "nuclear standoff." With a little investment and research showing that it could work and could be performant, "once we did that, Apple said 'we can do that' and they did it, and in fact they landed it already."
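For a sense of what was unblocked, here is the selector used from script in a browser that ships :has(), next to the manual DOM walk it replaces:

```typescript
// With :has(): every <article> that contains an <img>, in one selector.
const withImages = document.querySelectorAll("article:has(img)");

// Without :has(): select candidates, then filter by hand.
const manual = [...document.querySelectorAll("article")].filter(
  (article) => article.querySelector("img") !== null,
);
```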

Some browser engineers say that if it wasn't for Igalia, CSS Grid might not have become widely available.

It's a similar story with CSS Grid, which lets developers achieve much more advanced and custom layouts than Flexbox: Palmer calls it a huge feature that's loved by developers. But some browser engineers say if it wasn't for Igalia, it might not have become widely available. Microsoft started work on what became the original CSS Grid Layout specification, shipping the prefixed version in IE10 in 2012: Google started to add support for CSS Grid to WebKit in 2011 but then forked WebKit to create Blink in 2012, while Mozilla didn't adopt it because it was focused on its own XUL grid layout.

Bloomberg uses web technologies both for server-side operations and rendering on the Bloomberg terminal, which Palmer describes as a data-intensive real-time rendering system that really pushes the limits of Chromium; in 2013, it sponsored Igalia for a multi-year project to work on a new approach to CSS Grid, which it implemented in both Blink and WebKit.

"It's in our interests, to truly become successful, for us to build amazing fast and rich applications for our users," Palmer told us. "But when we can do more [with web technologies], the world can do more as a result. We run into bottlenecks that we find are worth optimizing that maybe not everyone runs into, and when we fund those optimizations, everyone benefits, because everyone's browser goes a little bit faster."

"If there is any uncertainty about whether there is demand, about whether everyone will step forwards together, we can help provide that push. We can say these two browsers are moving ahead [with a feature] because it's the top of their priority list and this one is not, so we should fund the one that is behind, we should fill that gap. And by achieving that completeness, everyone moves forward."

He refers to the work Bloomberg and Igalia do as "pipe cleaning" a process, because it isn't just getting a new feature into browsers or the JavaScript runtime: Igalia also works on the toolchain required to use it and develops test suites to help drive interoperability between different browser engines. Sometimes it can also lead to more significant features in future.

BigInt in ECMAScript was a sponsored improvement that Bloomberg wanted for working with numbers bigger than can be expressed with IEEE double-precision variables; BigInt means they can ergonomically pass those around. But the precedent of adding a new numeric type to JavaScript may make it easier to add the decimal numbers everyone uses in daily life. Bloomberg wants that because financial market data is often supplied as 64-bit decimal numbers, but it would also help any developer who finds simple arithmetic like adding up 0.1 and 0.2 (which doesn't equal 0.3 in any language that uses IEEE numbers) counterintuitive in JavaScript. This would solve one of the most frequently reported problems with the language, Palmer suggested.
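Both pain points are easy to reproduce in any modern JavaScript engine:

```typescript
// IEEE double-precision floats make decimal arithmetic counterintuitive:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Past 2^53 - 1, Number silently loses integer precision...
console.log(9007199254740992 === 9007199254740993); // true (!)

// ...while BigInt stays exact at any magnitude:
console.log(9007199254740992n === 9007199254740993n); // false
console.log(2n ** 128n); // 340282366920938463463374607431768211456n
```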

It's clear how important Igalia's contributions are to the web platform, but there's sometimes confusion over why they come from Igalia, although the occasional misunderstanding or controversy is often for political rather than technical reasons. It may seem odd that, for example, both Google and the web community effectively pay Igalia to work on features in WebKit that Apple hasn't prioritized. While Apple has been hiring well-respected developers to expand the Safari team and is now adding key features after a period of underinvestment, it's also salutary to note how many more web platform features (both experimental and stable) are unavailable only in Safari.

Historically, browser makers like Apple, Firefox, Google and Microsoft have acted as what Kardell terms stewards of the web, with pressure from the broader web community pushing them to implement W3C standards. But while the commons of the web has become fundamental to systems far beyond the browser, in everything from TVs to cars, adopting those standards is still completely voluntary.

Different browsers have their own different priorities and even the largest budget has limits.

"It's not great that we've set up a system in which everything is dependent on the completely voluntary budget and participation of what is effectively three organizations. It's great that we've gotten it this far: it's open and we have multiple contributors. But different browsers have their own different priorities and even the largest budget has limits."

With the web platform being at least as large and complex as an operating system, building a browser takes a wide range of expertise. Inevitably, even though browser makers want to be competitive by pushing the web platform forward (or at least not being the last browser to implement a feature), their priorities and commitments dictate what gets implemented and what doesnt.

The strength of the W3C is the breadth of who is involved beyond the browser makers (there are over 500 members, although many are involved with a single working group rather than contributing broadly) but that also leads to what Kardell characterizes as potentially long, difficult, incredibly complex discussions "that can take an extraordinary amount of time from your limited resources."

"A lot of things just don't move forward because implementers are in the critical path, it's completely voluntary, and it's independently prioritized by them. Getting all those stars to align is really, really, really hard."

That's the problem Igalia is so good at unblocking.

Most web developers care less about the priorities of individual browsers and more about not relying on features that aren't supported across all browsers. Normally, Palmer notes, new features turn up in all the browsers, and that's what makes things wildly adoptable; it's easy to think that this is a natural flow, a fountain of features where the platform just gets better all by itself.

Actually, it takes a lot of hard work and funding and time: not just writing the code, but getting it reviewed, tested for compliance, put through QA and accepted into multiple codebases.

"It's almost a superpower that Igalia has," says Palmer: to work across browsers and help everyone move forward in consensus-based lockstep.

That's something individual browser makers, with their individual priorities and expertise in their own specific codebase, find difficult to do.

"If you come to us and you have a reasonable case, if we think there is some 'there' there that we can help you with, then you can pay us and we can help you," Kardell explains. "We can be the implementor that you need to have to move the conversation."

"It's almost a superpower that Igalia has, to work across browsers and help everyone move forward in consensus-based lockstep."

Rob Palmer, Bloomberg

Even if a feature is a high priority for all the browser makers, it can also be more difficult to implement a feature in one browser than it is in another: what it will cost to do it for Chrome isn't what it will cost to do it for Safari, and isn't what it will cost to do it for Firefox, he notes. Standards require multiple implementations, which means a significant commitment from multiple browser makers, which is where some proposals get stuck.

The shortage of people with the deep expertise to build browsers results in the kind of "nuclear standoff" that held up :has(), he explains. "Where there's something that's going to be hard and potentially expensive and we don't know how valuable yet because we haven't had the discussion, we just know we can't afford to do it, because doing it means not doing something else. So it gets to where nobody's willing to be the first one to pull the trigger and you have these things that linger for lots and lots and lots of years. They can't get past go. But once someone gets past go, suddenly people are like, okay, I guess we're going to have to figure this out, and Igalia plays that role sometimes."

In some cases, a feature is important for one particular use case, like embedded systems, and mainstream browser makers don't see it as a priority even though they would benefit from it.

While Apple controls the way WebKit powers Safari, WebKit-based browsers on PlayStation, Epiphany and embedded devices like smart TVs and refrigerators, digital signage and in-vehicle displays use WPE WebKit, which Igalia maintains. Appliance makers like Thermomix (which uses the embedded browser for the screen of its smart food processor) and set-top box manufacturers come to Igalia for help with it, and their investment has driven major improvements in Canvas and SVG hardware acceleration.

Despite having developed for the web since the mid-90s, even Kardell didn't expect JavaScript's Off-Screen Canvas to be relevant to him. "The number of times that I have ever professionally programmed against Canvas is zero, but I use Canvas every single day without realizing it, and I have used libraries that use Canvas to do things." Maps, blob databases and Google Docs all use Canvas, and the way Canvas blocked the main thread (so everything else in the browser was interrupted while you pan or zoom) might be bearable on a high-end device, but was a significant problem for performance on resource-constrained embedded devices. Fixing that improves the experience for everyone.
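The pattern Off-Screen Canvas enables looks roughly like this (the worker file name is a placeholder): the main thread hands the canvas to a worker, which then renders on its own timeline without blocking pan and zoom.

```typescript
// Main thread: transfer control of a canvas to a worker.
const canvas = document.querySelector("canvas")!;
const offscreen = canvas.transferControlToOffscreen();

const worker = new Worker("render-worker.js");
// OffscreenCanvas is a transferable object: moved, not copied.
worker.postMessage({ canvas: offscreen }, [offscreen]);

// Inside render-worker.js, drawing happens off the main thread:
//   onmessage = (e) => {
//     const ctx = e.data.canvas.getContext("2d");
//     ctx.fillRect(0, 0, 100, 100);
//   };
```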

That's a clear example of why prioritizing features in browser development is so hard, he suggests. "When you ship Off-Screen Canvas, a whole bunch of the world will say: why don't you do this instead? This is clearly more important. But the problem is it's all the most important."

Rather than letting anyone buy a standard, sponsorship is a way to get responsible development of features that browser developers are asking for that involves collaboration and co-design with different browser makers and thorough testing with developers, without expecting developers to work for free.

Kardell understands the concern because he felt it himself before learning more about Igalia, but he's clear that it doesn't work like that. "If we agree to work with you, it's because we think there's a chance of us helping you do something valuable. What you can buy is us championing [your feature] and the priority of someone who has implementer experience and implementer credibility, who has the right skills and ability to help move that forward."

They don't just do anything that is asked of them: they consider the impact, whether it is good for the community, whether it's the right thing for the platform, Palmer agrees.

"Because all the work is open anyway, you can't just subvert it by saying 'I want my pet feature in the web platform.' It always involves going through that consensus-building committee process."

In fact, this is an advantage of having an open ecosystem rather than centralized decision-making, he suggests. "You can spin this either way. On one hand, you can say, why is the trillion-dollar company not moving things forward themselves? But the other way of looking at it is, wow, these browsers are open source and we're able to contribute the features that we want."

"This is the opportunity given by open source, let's celebrate that. Let's encourage more people to get involved and contribute, let's encourage more people to fund that style of development, because it means that then the priorities can be more and more set by the community and a large, wide base of developer interests."

"Companies like Igalia can help bring attention to new customer problems that aren't already being discussed by browser vendors."

Microsoft representative

Having Igalia work on a particular web feature doesn't guarantee that it will happen, but it's a signal to browser makers that the feature is worth taking seriously. "Companies like Igalia can help bring attention to new customer problems that aren't already being discussed by browser vendors," Microsoft told us.

In a way, Igalia can act as a filter for all the requests that browser makers get, Kardell suggests. "The trouble with being at the core of everything in the whole world is that everybody can see very clearly the problem that they have, and they send it into the bucket, but the bucket is the size of an ocean."

He also hopes the Open Prioritization experiment can help with highlighting what organizations like Igalia should work on. The idea came from the question: "why do we need single, very, very rich companies to fund something? It would be great if we had diversity of funding that would help the web last, that would help it reach its potential."

"That could be smaller companies or working groups or even individuals. It could be all of us or a few of us that sponsor the work and unblock it and make the thing happen, and then we control the priority."

"Why couldn't a million developers democratically decide this is worth a dollar? If you collected a million dollars in funding, then you could do a million dollars' worth of work, and that's amazing."



Academy Software Foundation Adds OpenFX Image Processing Standard as Newest Hosted Project – PR Newswire

LOS ANGELES, Aug. 3, 2022 /PRNewswire/ -- The Academy Software Foundation, the motion picture industry's premier organization for advancing open source software development across image creation, visual effects, animation, and sound technologies, today announces OpenFX as its newest hosted project. First developed in 2004, OpenFX is a popular open source plugin standard that allows interoperability between image processing tools in the VFX industry.

Originally designed by Bruno Nicoletti, OpenFX serves as an open, extensible C API that defines an industry-wide common interface between image-based visual effects plugins and host applications. This makes it easier both for creative applications to support a variety of plug-ins and for plug-in developers to support many host applications, reducing proprietary development and industry fragmentation. By creating an interoperable ecosystem of plugins, OpenFX has become the reference standard for visual effects and video processing software creators. Leading software solutions including Autodesk Flame, Foundry Nuke, Blackmagic Design DaVinci Resolve and Fusion, Sony Catalyst and MAGIX Vegas Pro, Assimilate Scratch, Filmlight Baselight, Boris FX Sapphire and Silhouette, RE:Vision Effects, and others support OpenFX commercial plug-ins. By allowing the same plugins to run on multiple editing, video processing, and VFX applications with little or no modification, OpenFX makes it easier for artists to access a wider set of tools.
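To make the shape of that C API concrete, here is a minimal sketch of the plugin side of the contract. It follows the publicly documented ofxCore.h/ofxImageEffect.h headers, but the plugin identifier is hypothetical and the action handler is an empty stub, so treat it as illustrative rather than a working effect.

/* Minimal sketch of an OpenFX plugin's entry points (illustrative only). */
#include "ofxCore.h"
#include "ofxImageEffect.h"
#include <string.h>

static OfxHost *gHost = NULL;

/* The host calls this first, handing the plugin its suite-lookup table. */
static void setHost(OfxHost *host) { gHost = host; }

/* Every host-to-plugin call funnels through one action dispatcher. */
static OfxStatus mainEntry(const char *action, const void *handle,
                           OfxPropertySetHandle inArgs,
                           OfxPropertySetHandle outArgs) {
  if (strcmp(action, kOfxActionLoad) == 0) {
    /* a real plugin would fetch suites from gHost here */
    return kOfxStatOK;
  }
  /* describe, instance, and render actions would be handled here */
  return kOfxStatReplyDefault;
}

static OfxPlugin examplePlugin = {
  .pluginApi          = kOfxImageEffectPluginApi,
  .apiVersion         = 1,
  .pluginIdentifier   = "org.example.demoplugin",  /* hypothetical id */
  .pluginVersionMajor = 1,
  .pluginVersionMinor = 0,
  .setHost            = setHost,
  .mainEntry          = mainEntry,
};

/* The two symbols every OpenFX binary must export; hosts scan for these. */
OfxExport int OfxGetNumberOfPlugins(void) { return 1; }
OfxExport OfxPlugin *OfxGetPlugin(int nth) { return nth == 0 ? &examplePlugin : NULL; }

Everything application-specific hangs off the actions and property suites the host supplies, which is what lets one binary run in Nuke, Resolve, or Vegas with little or no modification.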

"OpenFX is the work of smart engineers who focused on developing a standard for interoperability in the image-based software ecosystem. We are very happy to welcome them to the Foundation," shared David Morin, Executive Director of the Academy Software Foundation. "In a world where interoperability is more important than ever, OpenFX will contribute to our growing community, and benefit from the resources of the Academy Software Foundation."

OpenFX was previously managed by the non-profit Open Effects Association, which will dissolve. Its existing directors, Gary Oberbrunner, Pierre Jasmin, Peter Huisma, Dennis Adams, and John-Paul Smith, will join the project's Technical Steering Committee at the Academy Software Foundation.

"We're very much looking forward to being part of the Academy Software Foundation and the added visibility and infusion of new ideas and contributors that go along with that," said Oberbrunner. "With the backing of the Foundation, we expect to be able to add new features more quickly, thereby enhancing the overall ecosystem for image-based VFX throughout the industry."

OpenFX is currently on version 1.4, and new features are already in the works for version 1.5, anticipated for release later this year. Most notably, the team recently added an overlay draw suite so that the host application and the plugin can automatically negotiate and agree on the desired graphics API (e.g. OpenGL, DirectX, Vulkan, Metal or others).
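The 1.5 draw suite's final symbols were not public at the time of writing, so the following is only a hypothetical sketch of what such a capability negotiation looks like: the plugin advertises the APIs it supports in preference order, and the host picks the first one it can also provide. None of these names come from the OpenFX headers.

/* Hypothetical sketch of graphics-API negotiation; the real OpenFX 1.5
 * draw suite defines its own names and mechanism. */
#include <stddef.h>

typedef enum { GFX_OPENGL, GFX_DIRECTX, GFX_VULKAN, GFX_METAL } GfxApi;

/* APIs this (imaginary) plugin can draw overlays with, best first. */
static const GfxApi pluginSupported[] = { GFX_VULKAN, GFX_OPENGL };

/* Host side: walk the plugin's preference list and return the first API
 * the host also implements, or -1 if there is no common ground. */
static int negotiate(const GfxApi *pluginApis, size_t nPlugin,
                     const GfxApi *hostApis, size_t nHost) {
  for (size_t i = 0; i < nPlugin; i++)
    for (size_t j = 0; j < nHost; j++)
      if (pluginApis[i] == hostApis[j])
        return (int)pluginApis[i];
  return -1; /* no match: fall back to drawing no hardware overlay */
}

In the real suite this handshake presumably flows through OpenFX property sets rather than bare arrays, but the outcome is the same: both sides settle on one graphics API before any overlay drawing occurs.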

Developers interested in learning more or contributing to OpenFX can visit https://tac.aswf.io/engagement/#OpenFX.

Companies interested in supporting the mission of the Academy Software Foundation can learn more and join at aswf.io/join.

About Academy Software Foundation

Developed in partnership by the Academy of Motion Picture Arts and Sciences and the Linux Foundation, the Academy Software Foundation was created to provide a world-class home for open source software developers in the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation and sound. The Academy Software Foundation is home to DPEL, MaterialX, OpenVDB, OpenColorIO, OpenEXR, OpenCue, OpenTimelineIO, Open Shading Language, rawtoaces and Rez. For more information about the Academy Software Foundation, visit https://www.aswf.io/.

Contact: Emily Olin, Academy Software Foundation, (281) 380-9661

SOURCE Academy Software Foundation

Read more:

Academy Software Foundation Adds OpenFX Image Processing Standard as Newest Hosted Project - PR Newswire

Solana Hack Could Have Been Prevented With Source Code Change – CoinGape

The exact loss incurred in the Solana hack is still unclear, and the cause behind it remains unknown. A huge hack in the Solana ecosystem affected over 8,000 wallets on Wednesday, draining at least $8 million and counting. The perpetrators withdrew assets in the form of SOL and USDC from the wallets.

Responding to the attack, the Solana management said several engineers and security expert firms were trying to find the cause of the hack. One of the many theories being floated is the possibility of a private key compromise. Meanwhile, Senor Doggo, a Twitter user going by that name, said the hack was avoidable with a different approach, arguing that open source code could have helped the management figure out what went wrong.

Doggo added that the closed source code is not helping researchers trying to figure out the issue. The intellectual property protection was unnecessary, he said, as it is leading to a loss of money.

"The Solana wallet hack demonstrates why it is irresponsible not to have open source code in crypto. Researchers have been working around the clock to discover what the issue is and can't because the code is closed source. Hundreds of millions lost due to unnecessary IP protection."

Earlier on Wednesday, the news of a security compromise on Solana led to a sharp fall in the asset's price. From trading at around $41, SOL dropped to just over $38 within the space of an hour. However, the price has been steadily recovering since then. As of writing, SOL is trading at $40.31, down 2.38% in the last 24 hours, according to CoinMarketCap.

On the other hand, assets stored in hardware wallets were not part of the compromise. Solana said there was no evidence of any impact on hardware wallets, and that an exploit had allowed a malicious actor to drain funds from a number of wallets on Solana.

The presented content may include the personal opinion of the author and is subject to market conditions. Do your own market research before investing in cryptocurrencies. Neither the author nor the publication holds any responsibility for your personal financial loss.

More:

Solana Hack Could Have Been Prevented With Source Code Change - CoinGape

OpenChrome, An Open Source Driver, Is Not Yet Ready To Be Integrated Into Linux 5.20 – Open Source For You

You might remember that one month ago, the sole developer still working on open source VIA x86 graphics support for Linux intended to mainline the OpenChrome DRM/KMS driver during the Linux 5.20 cycle. Yet even though Linux 5.19 is being published today and the Linux 5.20 merge window is now open, the OpenChrome DRM driver remains in development.

For the Linux 5.20 merge window, the OpenChrome DRM/KMS driver has not yet been queued into the DRM-Next tree. According to the most recent activity on the dri-devel list as of this weekend, at least one more patch series revision is required to resolve the outstanding problems identified during the current v3 round of review before the code can be merged.

Because new drivers rarely pose a risk to existing users, they are occasionally accepted after the merge window has closed. There is still a chance that could happen here, but it is more likely that the OpenChrome driver will be delayed by at least one more cycle.

This OpenChrome DRM/KMS driver has been in development for more than ten years, albeit intermittently, and is intended to support the VIA CLE266/KM400/K8M800/P4M800 Pro/PM800/P4M890/K8M890/P4M900/CX700/VX800/VX855/VX900 chipsets. Kevin Brace was the last developer actively advancing open source VIA x86 graphics driver support. However, even as of 2022, the OpenChrome driver does not yet provide 2D or 3D hardware acceleration, so it is essentially limited to kernel mode-setting and display functions.

When it is finally merged, the driver will be treated as experimental and will only load if the via.modeset=1 kernel option is passed, at least until 2D acceleration is implemented.
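For anyone wanting to test the driver once it lands, passing that option works like any other kernel command-line parameter. Assuming a Debian-style system with GRUB, and assuming the final parameter keeps the via.modeset name, the boot entry could be edited like this:

# /etc/default/grub: append the option to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash via.modeset=1"

# then regenerate the GRUB config and reboot
sudo update-grub

On distributions without update-grub, the equivalent step is regenerating grub.cfg with grub-mkconfig; either way, the option takes effect on the next boot.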

See more here:

OpenChrome, An Open Source Driver, Is Not Yet Ready To Be Integrated Into Linux 5.20 - Open Source For You