As DCPS Goes on Spring Break, the District to Offer Daytime and Evening Programming for Students | mayormb – Executive Office of the Mayor

(Washington, DC) As DC Public Schools (DCPS) prepares to go on Spring Break, Mayor Muriel Bowser is reminding families that the District will be hosting daytime and evening programming next week and on Monday, April 18, to help keep students engaged during their out-of-school time.

DPR is providing full-day options for parents of youth ages 3-5 and 6-12 at recreation centers across the District from April 11-14. The camps will offer a blend of enrichment activities, sports, and arts and crafts.

On Monday, April 18, when DCPS will be closed for staff development, DPR will offer Fun Day and Wee Fun Day camps for children ages 6-12 and 3-5, respectively, featuring enriching activities, sports, arts and crafts, and more.

Next week, the District will host #HealthyHeroesDC Kids Week, featuring fun activities and community events with opportunities for students and families to get vaccinated against COVID-19. DC Health will also offer vaccination pop-ups at DPR Spring Break camps throughout the week. Visit backtoschool.dc.gov/kidsweek to learn more and see the full list of events.

Hair Love StoryWalk @ Bellevue Library
What: Children are invited to participate in an outdoor interactive picture walk of the book Hair Love. The event will also feature face painting and other surprise activities.
When: Monday, April 11, 12 pm to 4 pm
Where: Bellevue Library

Rosedale Spring Fling
What: A fun-filled spring event where families will enjoy some food, games, and music.
When: Wednesday, April 13, 2 pm to 6 pm
Where: Rosedale Recreation Center Football Field

Hillcrest Easter Jamboree
What: A fun day of arts & crafts, scavenger hunts, and more.
When: Thursday, April 14, 5:30 pm to 7:30 pm
Where: Hillcrest Recreation Center

Easter Eggstravaganza and Movie Night @ Fort Stanton Recreation Center
What: Community Easter event with music, game stations, obstacle courses, arts and crafts tables, and an Easter egg hunt, followed by a special showing.
When: Thursday, April 14, 1 pm to 4 pm (Egg Hunt); 5 pm to 7 pm (Movie)
Where: Fort Stanton Recreation Center

Spring Fling Dive
What: Youth will participate in an egg dive in the swimming pool.
When: Saturday, April 16, 1 pm to 4:30 pm
Where: Turkey Thicket Aquatic Center

Mayor Bowser also continues to encourage DC youth ages 5-12 to participate in the #HealthyHeroesDC Youth Art Contest. Students can submit artwork showcasing how they are protecting their community, including getting vaccinated against COVID-19. Selected winners will have their artwork featured in a District-wide media campaign. Visit backtoschool.dc.gov/YouthArtContest to learn more.

Pastel Color Extravaganza
What: Youth will decorate eggs and baskets to take home, incorporating an egg hunt and candy.
When: Monday, April 11, 3 pm to 5 pm
Where: Barry Farm Recreation Center

Intergenerational Spring Walk-a-Thon
What: Come walk into spring! We will have a pre-game warm-up party, an all-in team stretch session, and cool down.
When: Wednesday, April 13, 11:30 am to 1:30 pm
Where: Fort Stevens Recreation Center Basketball Court

Intergenerational Spring Bling
What: Seniors and teens will come together for a meet and greet, listening to different genres of music on silent headphones.
When: Wednesday, April 13, 1 pm to 3 pm
Where: Columbia Heights Community Center

Spring Seekers Finders Keepers
What: A scavenger hunt for eggs with small prizes.
When: Wednesday, April 13, 10 am to 12 pm
Where: Arthur Capper Playground

Late Night Hype 2.0
What: A teen night and pool party from 7 to 11 pm that continues with open rec activities until sunrise.
When: Thursday, April 14, 7 pm to 11 pm (open rec: 11 pm to sunrise)
Where: Deanwood Recreation Center and Pool

Trinidad & Tobago Under-20 National Team Tryouts
What: Residents can watch invited North American-based Trinbagonian players scrimmage for a chance to join the Trinidad & Tobago Under-20 National Team that will compete in CONCACAF championships this summer in Honduras.
When: Saturday, April 16 & Sunday, April 17, 9 am to 12 pm
Where: Edgewood Recreation Center

Intergenerational Easter Egg Hunt
When: Monday, April 18, 4 pm to 5 pm
Where: Fort Stevens Seniors Center

Eagles Easter Egg-stravaganza & Egg-luminate Night Hunt
What: Annual Easter celebration event for the Bald Eagle & Ward 8 community.
When: Monday, April 18, 6:30 pm to 9:30 pm
Where: Bald Eagle Recreation Center

To learn more and register for events, please visit dpr.events.

Petalpalooza
What: Celebrate spring at Petalpalooza for a full day of live music and engaging activities. The evening is capped by a fireworks show.
When: Saturday, April 16, 1 pm to 9 pm
Where: Capitol Riverfront at the Yards

Admission is free of charge at all Smithsonian museums and the National Zoo. Timed-entry passes are only required at the National Museum of African American History and Culture and the National Zoo. A full schedule for Smithsonian Museums and the National Zoo can be found online.

The District's family success centers in Wards 7 and 8 will be open next week, Monday through Thursday, from 8:15 am to 4:45 pm. Vaccinations will be offered at the success centers throughout the week.

Nine indoor pools are open across the District next week, including Marie Reed, Wilson, Roosevelt, Turkey Thicket, Dunbar, William H. Rumsey, Deanwood, HD Woodson, and Barry Farm. For hours of individual pools, please visit dpr.dc.gov/page/indoor-pools (all DPR facilities are closed Friday, April 15, in observance of DC Emancipation Day).

DC Public Library (DCPL) locations will be open all week, with select locations open on Friday, April 15. The Anacostia Library, Benning Library, Mt. Pleasant Library, Petworth Library, Southwest Library, Tenley-Friendship Library, Woodridge Library, and the Martin Luther King Jr. Memorial Library will be open on Friday, April 15, from 10 am to 6 pm. All library locations will be closed on Saturday, April 16, for DC Emancipation Day and on Sunday, April 17, for Easter.

The District's Office of Out of School Time Grants and Youth Outcomes (OST) partners with community-based organizations to offer youth enriching opportunities to grow and thrive outside the classroom. A full list of available programs across all eight wards can be found here.

DPR meal distribution sites will offer meals from 12 pm to 2 pm at 18 sites from Monday, April 11, through Thursday, April 14. A full list of sites is available at dpr.dc.gov/afterschoolmeals.

Meal distribution sites at DCPS will be closed from April 11 through April 15. On Friday, April 8, from 10 am to 2 pm, open meal sites will distribute up to seven breakfast and lunch kits, since meal sites will be closed during Spring Break. Youth can visit Anacostia HS, Ballou HS, Columbia Heights EC, Dunbar HS, Eastern HS, Hardy MS, Jackson-Reed (Wilson) HS, Ron Brown HS, Roosevelt HS, and HD Woodson HS to receive meals.

During Spring Break, DC Central Kitchen's mobile meal truck will be making stops at select schools. The free lunch bags, which include two days' worth of healthy proteins, fruit, and milk, will be distributed on a first-come, first-served basis. The meal delivery schedule is below.

Monday, April 11; Wednesday, April 13; and Friday, April 15:

Tuesday, April 12; Thursday, April 14; and Monday, April 18

Social Media:
Mayor Bowser Twitter: @MayorBowser
Mayor Bowser Instagram: @Mayor_Bowser
Mayor Bowser Facebook: facebook.com/MayorMurielBowser
Mayor Bowser YouTube: https://www.bit.ly/eomvideos


BlackCat Is the Latest Successor of Ransomware Group, BlackMatter – Tech.co

BlackCat (ALPHV), an on-the-rise ransomware gang, has been linked to the previously defunct groups BlackMatter and REvil through the shared tools and techniques behind its sophisticated malware.

The cybercriminals have already launched a number of attacks on industrial companies and universities in the U.S. and are suspected of using some of the most advanced ransomware in circulation.

According to a recent report from the cybersecurity firm Kaspersky, the tools and techniques used by BlackCat bear a strong resemblance to those used by BlackMatter, the hacking circle responsible for the 2021 Colonial Pipeline attack. This revelation shows how hard it is to wipe out the use of this rapidly advancing malware.

BlackCat is a ransomware-as-a-service (RaaS) gang that has been active since December 2021. Since their inception, they've been targeting a number of global organizations, stealing sensitive data, extorting money, and threatening to launch distributed denial-of-service (DDoS) attacks if demands aren't met.

Far from being your run-of-the-mill cyber gang, BlackCat has attracted global attention because it relies on sophisticated ransomware of the same name.

Unlike most other ransomware, BlackCat is written in Rust, a programming language with cross-compilation capabilities, which means the same malware can run on both Windows and Linux systems. The use of Rust also makes it easier for the malware to find files to encrypt, while making it less detectable to security researchers.

But what does it actually look like to be targeted? Users hit by BlackCat can have their files locked and be told to pay for their decryption. The malicious program can also rename encrypted files to align with the attackers' specific requests.

Then, if users refuse to agree to the payout fees, which commonly exceed six figures, the ransomware group may add pressure by threatening to publish the compromised data publicly.

While the tactics BlackCat is using might seem novel, this isn't the first time they've been used to target victims.

The same tactics have also been used by notorious ransomware groups like BlackMatter, REvil, and DarkSide, a string of affiliate RaaS groups that have been responsible for thousands of high-profile attacks worldwide.

"After the REvil and BlackMatter groups shut down their operations, it was only a matter of time before another ransomware group took over their niche," said Dmitry Galov, security researcher at Kaspersky.

And this isn't just a coincidence. In Kaspersky's report A bad luck BlackCat released last Thursday, it was revealed that BlackCat is just the latest iteration of these groups, with the gang using near-identical tools and techniques to its predecessors.

Specifically, the research found that the new RaaS group was using a custom exfiltration tool called Fendr and the credential-dumping tool Mimikatz, both of which had been used by BlackMatter and REvil.

Additional research from Tripwire also suggests that the RaaS group's similarities may even extend to its members, with the software company finding that a number of criminals previously involved with these groups are now working with BlackCat.

BlackCat ransomware and similar threats cause unprecedented damage to businesses. To avoid being targeted by these breaches or to limit their impact, it's recommended that companies take note of the cybersecurity precautions below.

For a more detailed breakdown of how to stay safe online, read our top tips for managing cyber threats here.


MCA San Diego's $105 M. Expansion Is An Odd, But Often Stunning Attempt To Create A More Inclusive Museum – ARTnews

After a four-year wait and a $105 million expansion, the Museum of Contemporary Art San Diego's reopening is a study in the changing shape of institutions.

Overlooking the Pacific Ocean in the seaside neighborhood of La Jolla, the newly renovated complex is essentially two different buildings joined at the hip.

On the right, you'll find a composite of white-stuccoed boxes, punctuated by curved windows that riff on the surrounding buildings' Mediterranean-inspired archways. The first box was designed by celebrated modernist Irving Gill in 1916, and in later decades more boxes were added by architects Mosher & Drew and Venturi Scott Brown & Associates (VSBA).

On the left, meanwhile, architect Annabelle Selldorf's new expansion is roughly the same scale, but totally distinct in materiality. In lieu of stucco and curves, she chose a palette of glass walls, sandy-colored travertine, and aluminum beams joined at right angles.

All museum expansions, in a sense, are a type of rebranding, where new architecture coincides with a new public image. The two buildings' odd union is emblematic of both the museum's and the architect's task: to align contemporary culture with a canonical history.

"The goal of this project was to create a more inviting and inclusive museum with a greater connection to the community," the architect said at the ribbon-cutting ceremony last Tuesday.

When Selldorf joined the project in 2014, the MCASD had issues to resolve, primarily the lack of space for its 5,600-piece collection. But the building was also an iconic bit of architecture that had perplexed visitors for years. Its cartoonishly fat columns, designed in 1996 by the beloved postmodernists Robert Venturi and Denise Scott Brown, obscured the front door in a way that was both a practical and symbolic problem.

"A museum can feel somewhat hard for people to enter in the first place, and then we hid the entrance," MCASD Board Chair Mark Jacobs explained in his remarks.

Despite the outcry from Venturi Scott Brown fans, Selldorf replaced the columns with an entrance that, she said, represents a true welcome for everyone.

Its glass walls are unobscured by a column-less aluminum brise-soleil, and the ticket counter is always visible from the outside. She and her team added 46,400 square feet of new build, effectively doubling the museums footprint while quadrupling its exhibition space. Skirting height restrictions on new construction, the existing auditorium was repurposed as a 20-foot-tall, 7,000-square-foot gallery.

"If this isn't museum sized, I don't know what is," Selldorf said as she led a tour of the building.

A favorite of gallerists David Zwirner, Hauser & Wirth, and other high-profile members of the art world, Selldorf Architects operates with what's best described as an elegant pragmatism.

The MCASD's new galleries possess clear circulation paths and a minimalist's grandeur, where natural light fills generously proportioned, open spaces. Tall, thin windows frame exterior landmarks (individual palm trees, bell towers, and towering pines) alongside top-notch examples from the museum's collection.

Roughly organized by era, there's a triangular gallery of Color Field painters including Rothko, Morris, and Motherwell, and an enormous trapezoidal gallery for Light and Space artists like Gisela Colon, Larry Bell, and Peter Alexander. (Most galleries are normal rectangles, but these were pinched where the new construction connected to the old.)

Rather than construct a new traditional auditorium, Selldorf added a more current, flexible events space, a hallmark of contemporary museum architecture that provides a blank slate for more varied public programming. Here, that includes a luxurious floor-to-ceiling view of the ocean.

The museum's new luxurious Big Little Lies-esque views are not, in fact, distractions from the art, but complementary, Selldorf said twice during the museum preview, perhaps anticipating criticism.

"For all of you who live here, the incredible light of Southern California and the incredible view of the Pacific Ocean is something you may take for granted," the New York-based architect said. "We were thrilled to make it part and parcel of the experience. I think it will contribute to you remembering where you are, and what you have seen."

For the most part, the historically relevant architecture of the original building was left untouched, providing an interesting side-by-side study of how much the shape and culture of museums have changed. The interior has no demarcations between the old and new, though there is a distinct sensation of entering another era in the original space, a time when museums were perhaps considered less destinations than rarified containers for art.

On this older side, the relatively low-slung, windowless galleries with gray-and-white terrazzo floors form a warren that's decidedly confusing to navigate. And the original VSBA lobby, still adorned on the ceiling with the architects' metal-and-neon fins, is intact, but will likely be challenging to program. It still reads very much like a lobby, only without an entrance.

The museum approached Selldorf Architects in 2014 seeking a new architecture that would "reach our full potential as a community resource for culture and education," Kathryn Kanjo, MCASD's director and CEO, said during her walkthrough of the building.

Her sentiments and Selldorf's reflected the institutional reckoning that's been going on for a decade or more, as museums have acknowledged their own exclusivity and lack of representation. Corrective measures are architectural as well as curatorial. Honoring its proximity to the U.S.-Mexico border, MCASD emphasizes its commitment to showing and collecting artists in the region. Its first year of programming also emphasizes solo shows of women artists, starting with Niki de Saint Phalle, followed by Alexis Smith and Celia Álvarez Muñoz.

The now-headlining "Niki de Saint Phalle in the 1960s" is a sprawling survey of the late San Diego resident, co-presented with the Menil Collection, a Houston museum that houses the art collection of oil tycoons John and Dominique de Menil. The show fills the enormous former auditorium gallery with Nanas, Saint Phalle's sculptures of archetypal women in defiant poses, and large-scale Tirs, or shooting paintings, goopy assemblages where the artist buried bags of paint in globs of plaster and shot them with a rifle. The most fragile pieces took years to secure on loan from European institutions, according to Menil senior curator Michelle White.

"A lot of these works, which are being shown in the United States for the first time, may not come back," she said during the exhibition preview. "We feel very lucky to have been able to bring together this group of work."

In the former VSBA lobby, a suite of works by various artists responding to the social and political tension on the San Diego-Tijuana border unfortunately recedes behind the space's columns. Elsewhere, flanked by soaring galleries devoted to the movements of Pop Art and Hard-edge painting, the wall text in a modest mezzanine describes works from a group of Latinx artists from the broader Americas, made from the 1970s onward, as engaging in a range of issues; these span Felipe Almada's altar of religious and secular objects, including a figurine of Bart Simpson, to the surrealist portraiture of Daniela Gallois.

I do wonder: As we retrofit art history with the underrepresented, will we categorize them as we did in the past, based on specific movements of formal exploration? Or will they be grouped by shared politics of representation, and broadly defined ethnic categories?

As values evolve, the way that the art and architecture of the present will be perceived by the future is anyone's guess. When VSBA renovated the museum in 1996, critical of the previous Mosher & Drew overhaul, they described their own intervention, cartoon columns and all, as a restoration of Gill's original vision that would be more inviting for visitors. Two decades later, Selldorf removed those columns citing the exact same reason, completing the cycle of modern to postmodern and back again.

Trumping MCASD's exquisite new building, and even its Primetime Emmy-caliber views, the museum's must-see crown jewel remains the 1997 installation 1° 2° 3° 4° by San Diego's own Robert Irwin.

It's a simple premise: three squares cut from the brown-tinted glass of a gallery facing the beach, resulting in an extraordinary effect on the viewer's perception. The squares frame landmarks in the distance, somehow bringing them closer, while simultaneously making the sky bluer, as the ocean breeze and smell of salt permeate the gallery.

Selldorf was right: the windows here are extremely memorable.


Which are the best technology and stacks for blockchain development? – Medium


We're not going to try and sell you on anything; this is just a list of the best places to start learning blockchain development. It is meant for people who already know how to code in some form or are interested in finding out more about blockchain engineering. We'll be looking at popular blockchains and the stacks being used to develop various smart contracts.

This is not a list of the best programming languages, and you don't need to know SQL. I'm also pushing back on the idea that you need to know concurrency, multithreading, or any other aspect of what makes blockchain internals interesting. It just so happens that a lot of talks circle around things like state reconciliation and whatnot, but it isn't necessary. We won't be touching on those topics at all; there are plenty of guides and videos available online if you want to learn more about them.

Figure out what you want to create and then pick the stack that lets you create it. This list looks at common transactions, the technologies being used to build different smart contracts and dApps, and the general order of importance. I'll start by looking at basic blockchain concepts before moving on to more complex things like Turing-complete blockchains and RSK (Rootstock) smart contract development. We're not going to get into specific examples, but this should give you a good place to start when learning about blockchains.

This is where we should start looking at how blockchains work from an engineering perspective. A lot of blockchain technology is about the transactions that occur within each block, and you should start with the basics. You can find more advanced blockchain topics later on in this post.

Ethereum and Solidity: The Complete Developer's Guide

If you're interested in Ethereum smart contract development, then this is a must-read. It covers how to create a smart contract, as well as how to deploy it to the main Ethereum network. There are also plenty of Ethereum-specific examples to get your feet wet with Solidity (Ethereum's programming language).

Learn Blockchain By Building Your Own

This guide is a free course that teaches you the basics of blockchain development through the use of an IDE. You'll learn how to deploy your first smart contract, as well as blockchain testing and debugging.

In this section, we're going to look at some of the most popular alternatives for blockchain development in areas like smart contracts, IoT, Big Data, and more. Not only will we take a look at how specific dApps or contracts are built on different blockchains, but we'll also talk about what stack they utilize.

Ethereum and Solidity Development: The Complete Developer's Guide

We already talked about this book earlier, but we're repeating it because it is something you should read. It covers how to create smart contracts on the Ethereum network using Solidity, which is the language of choice for most decentralized applications. It will teach you how to deploy a smart contract, and even what to do if things go wrong. You can also read this guide as an ebook for free on Github or Amazon.

Ethereum Development Guide for Dummies

This Ethereum guide is meant for people who are interested in learning about the Ethereum blockchain from a developer's perspective. It goes into detail about how Ethereum works and what you can do with each block. This guide is not meant to be a deep dive into smart contract creation, but it will give you a good understanding of how the network functions overall. You can download the ebook here.

Zcash and Zcash Parity Tech Stack Guide: Everything You Need to Know

This guide explains how to utilize Parity to run the Zcash node implementation. This includes a brief overview of BigChainDB, which is used for storage purposes by many blockchains. In this guide, you'll also learn how to create a private and public chain.

Maidsafe and SAFE Network: Decentralized Cloud Technologies

The SAFE network is built on top of the MaidSafe stack, which is used for decentralized cloud storage solutions. This guide will introduce you to both the MaidSafe and SAFE networks. It covers topics like distributed hash table (DHT) operations, data redistribution, and more. You can find the source code here or download the ebook as a PDF.

Quorum: The Future of Private Transactions Technology in Financial Services?

Quorum is a distributed ledger technology that runs on the JVM (Java Virtual Machine). It was developed by JPMorgan and is used in conjunction with other financial institutions, and it can be used for on-chain and off-chain transactions. This guide talks about Quorum and how it could potentially be used for private financial transactions. You can also find the source code for this guide here.

BigchainDB: Blockchain Database Technology For Everyone

This overview of BigchainDB explains how exactly blockchain technology works to create an immutable database that can be easily accessed by anyone with a web browser. BigchainDB is a blockchain database that supports multiple companies and their private networks, as well as individual developers working on smart contracts. The bulk of this guide goes into explaining how to interact with BigchainDB using dApps like Ethereum, and it also explains chain interoperability. You can find the source code for this guide here.

So far, we've only looked at the Ethereum blockchain in a lot of detail. It is by far the most powerful Turing-complete blockchain that exists today. But just because Ethereum is particularly useful for development doesn't mean other blockchains aren't as well. The following sections go over some of the most popular alternatives for smart contract development today.

Ripple and XRP: A Guide for Newcomers

This guide goes into detail about Ripple technology and how it can be used to exchange currency. Ripple is considered a distributed ledger technology and is often compared to blockchains as well. This guide goes over the basics of how Ripple works and describes XRP (Ripple's native digital asset). You can find the source code for this guide here on Github.

Golem Project Development Status Report: Yellow Paper v0.10.0

The Golem Project is a decentralized global network that allows users to rent out spare computing resources from their own devices. It is built on top of Ethereum smart contracts and cryptography. This guide goes into detail about the Golem stack and what you can do with it. You can find the source code for this guide here on Github.

Ethereum Smart Contract Tutorials: The Bare Essentials

This guide covers the fundamentals of creating a smart contract using Solidity, which is the programming language for the Ethereum blockchain. In this tutorial, we cover the whole process, from beginning to end. We also cover contract creation, deployment, and testing in detail. You can find the source code for this tutorial here on Github.

Ethereum Book: Beginner's Guide to Smart Contracts

This is a very technical guide that covers all the ins and outs of writing smart contracts. It goes into detail about the Ethereum blockchain and how it works, as well as how smart contracts work. You can find the source code for this guide here on Github.

Before we break things down into individual projects and tools that work with specific blockchains, let's first look at what the most popular platforms are used for. There are many different open-source blockchain frameworks that developers use to create dApps, or decentralized applications. Some of the most popular platforms are:

Ethereum - Not to be confused with Ethereum Classic (a fork of the original chain), this is the most popular blockchain in existence. It allows developers to create smart contracts using Solidity, which is a programming language for smart contract developers.

Hyperledger Fabric - This is a permissioned blockchain framework originally developed by IBM, and it comes with its own ecosystem. The Hyperledger Project was created to enable diverse business blockchain technologies and platforms to connect and communicate with each other.

R3 Corda - R3 Corda is another permissioned distributed ledger (blockchain) that runs on Java, similar to Quorum. This platform enables users to create new financial systems using blockchain technology.

Developers need the proper tools to use the technology. They need to be able to access it and test changes, as well as create more advanced applications. This is why open source tools and platforms are so widely used. In this section, we will discuss the tools you can use to create blockchain applications:

Ethereum Wallet: Where To Store Your Ether & NEO-GAS Tokens

This guide will cover the different Ethereum wallet options you have today and how they can be used for storing your ETH (Ethereum's native token). It also covers NEO-GAS, which is also known as Red Pulse tokens. You can find the source code for this guide here on Github.


Synopsys: Enterprises struggling with open source software – TechTarget

While nearly every enterprise environment contains open source applications, organizations are still struggling to properly manage the code, according to a report by Synopsys.

The 2022 Open Source Security and Risk Analysis (OSSRA) report revealed the sheer volume of open source software used by enterprises across a variety of industries, as well as challenges with out-of-date code and high-risk vulnerabilities such as Log4Shell. While problems with visibility and prioritization persist, the report highlighted improvements in a few areas, most notably an increasing awareness of open source software.

To compile the report, the Synopsys Cybersecurity Research Center and Black Duck Audit Services examined findings from more than 2,400 commercial codebases across 17 industries. While analysis determined that 97% of the codebases contained open source software, a breakdown by industry showed that four sectors contained open source in 100% of their codebases: computer hardware and semiconductors, cybersecurity, energy and clean tech, and the internet of things.

"Even the sector with the lowest percentage -- healthcare, health tech, life sciences -- had 93%, which is still very high," the report said.

Additionally, the report found 78% of the code within codebases was also open source. Tim Mackey, principal security strategist at Synopsys Cybersecurity, contributed to the report and told SearchSecurity he was not surprised by the high percentage. It tracks with the last four or five years, during which more than two-thirds of the code within codebases was open source. In 2020, it was 75%. While the usage of open source software does vary by industry, Mackey said it is just the way the world works.

"I suspect that [we'll] probably creep into the 80s over time, but we're nearing the bifurcation of propriety and custom versus open source for most industries," he said.

One aspect that has accelerated the pace of innovation over the last 10 years is how developers can focus on unique value propositions and features for employers. From there, Mackey said they can access libraries that do the foundational work. The challenge, he said, is that a development team will follow a different set of security rules and release criteria for open source software.

While it can be beneficial that anyone can examine the source code, Mackey said in practice most people just focus on what it does, download it and use it. Therein lies the risk for companies.


"So, with all the open source that's powering our modern world, that makes it a prime target for being an attack vector," he said.

A recurring trend in the report is "that open source itself doesn't create business risk, but its management does."

Mackey reiterated that sentiment, and said enterprises that change vendors after an incident may be pointing the finger in the wrong direction. He referred to the issues with open source as a "process problem."

"The open source itself might have a bug, but any other piece of software will have a bug as well," Mackey said.

However, the high volume does make it tricky to maintain. The OSSRA determined that 81% of software used by enterprises contained at least one vulnerability. The jQuery and Lodash libraries contained the highest percentage of vulnerable components. Spring Framework, which caused issues last month after researchers reported two flaws in the development framework, also made the list in 2021.

Additionally, Black Duck Audit Services' risk assessments found that out of 2,000 codebases, 88% contained outdated versions of open source components, meaning "an update or patch was available but had not been applied."

More significantly, 85% contained open source code that was more than four years out of date. That percentage has been consistent over the years, according to Mackey.

He said that while it requires more digging to identify the issue, it highlights how the lack of an update process can make it easy to fall out of date. The sheer volume of open source code is also an issue -- there could be hundreds to thousands of applications, with hundreds of components per application.

"That's really one of the cruxes of what we're seeing on a consistent basis, is that companies struggle to figure out what the most efficient way to manage this stuff really is," he said.

One flaw that caused enterprises a management and scale nightmare last year was Log4Shell. While the report noted a "decrease in high-risk vulnerabilities, 2021 was still a year filled with open source issues." That included supply chain attacks and hacker exploits of Docker images, but "most notably" the zero-day vulnerability in Apache Log4j utility known as Log4Shell. It allowed attackers to execute arbitrary code on vulnerable servers, according to the report.

"What's most notable about Log4Shell, however, is not its ubiquity but the realizations it spurred. In the wake of its discovery, businesses and government agencies were compelled to re-examine how they use and secure open source software created and maintained largely by unpaid volunteers, not commercial vendors. What also came to light was that many organizations are simply unaware of the amount of open source used in their software," the report said.

Researchers analyzed the percentage of audited Java codebases and found 15% "contained vulnerable Log4j component." Though Mackey acknowledged the quantity of Java applications has changed and log data has improved, he said 15% was lower than he expected.

"My crystal ball says we'll be talking about this next year because that's actually one of the big problems that we see year over year is that people don't necessarily do a good job of patching the vulnerabilities that have been around for a few years," he said.

Differences between commercial and open source software hinder enterprises when it comes to patching. The report noted that commercial patching "usually requires the involvement of a procurement department, as well as review standards that are part of a vendor risk management program." On the other hand, "open source may simply have been downloaded and used at the developer's discretion."

Part of that management extends to security following a merger or acquisition. Mackey said one of the biggest challenges that acquirers have is a lack of visibility and the skill set to evaluate exactly what they are buying. It appears 2021 was a big year for M&As.

"The growth in the number of audited codebases -- 64% larger than last year's -- reflects the significant increase in merger and acquisition transactions throughout 2021," the report said.

Based on the statistics, Mackey said it's exceedingly difficult for enterprises not to use open source.

"I'd argue it's all but impossible," he said. "They'd also have to not be using companies like Amazon or Microsoft or Google, because they're all using open source. It's what powers their clouds. So, it's life today."

While there is work to be done to minimize open source risk, Mackey said Synopsys observed many improvements last year. Enterprises did a better job of managing licensing conflicts, the number of vulnerabilities decreased and the number of applications with high-severity flaws also decreased.

"People are recognizing they need to 'get with the program.' That may be Biden going about beating them over the head, that might be 'Oh wait, I don't want to be the next Colonial Pipeline,'" Mackey said. "We can't necessarily say, but those are good trends. I don't like to say open source is bad in any way; it's just managed differently."


Aqua Security (Argon) Recognized as a Representative Vendor in Gartner Innovation Insight for SBOMs Report – Yahoo Finance


Software Bills of Materials improve the visibility, transparency, security and integrity of proprietary and open source code in software supply chains

BOSTON, April 12, 2022 (GLOBE NEWSWIRE) -- Aqua Security, the leading pure-play cloud native security provider, today announced that it has been recognized as a Representative Vendor in the Gartner Innovation Insight for Software Bill of Materials (SBOMs) report, under Commercial SBOM Tools, for Argon.* To realize the full benefits of SBOMs, Gartner recommends software engineering leaders integrate SBOMs throughout the software delivery life cycle.

"The report highlights a critical visibility gap that is growing in frequency and severity: organizations are unable to accurately record and summarize the massive volume of software they produce, consume and operate," said Eran Orzel, Senior Director of Argon Sales at Aqua Security. "We agree with the assessment by Gartner that integrating SBOMs into software development workflows is key to achieving software supply chain security at scale. We believe that our technology aligns with their recommendations and was built to help organizations mitigate risks and eliminate security blind spots."

According to Gartner, SBOMs improve the visibility, transparency, security and integrity of proprietary and open source code in software supply chains. The firm predicts that "by 2025, 60% of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice."

Aqua's Argon released its SBOM manifest capability as part of its Integrity Gates. It enables companies to enforce strong security measures while checking their CI/CD pipeline and its output to improve quality and reduce runtime security issues. Argon's SBOM manifest identifies dependencies and key risks in the artifact development process. Organizations can use it to implement strict security evaluations of artifacts and mitigate security threats once they are discovered.


For more information on Aqua's Argon solution and to download the report courtesy of Aqua Security, visit Aquasec.com.

*Gartner, Innovation Insight for SBOMs, Manjunath Bhat, Dale Gardner, Mark Horvath, 14 February 2022.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Aqua's Argon Supply Chain Security Solution
Argon, an Aqua company, is a pioneer in software supply chain security and enables security and DevOps teams to protect their software supply chain against vulnerabilities, security risks and supply chain attacks. With Argon, Aqua offers the industry's first solution to secure all stages of software build and release and stop cloud native attacks. Aqua Security's Cloud Native Application Protection Platform (CNAPP) is the only solution that can protect the full software development life cycle (SDLC) from code through build to runtime, ensuring the end-to-end integrity of applications.

About Aqua Security
Aqua Security is the largest pure-play cloud native security company, providing customers the freedom to innovate and accelerate their digital transformations. The Aqua Platform is the leading Cloud Native Application Protection Platform (CNAPP) and provides prevention, detection and response automation across the entire application life cycle to secure the supply chain, secure cloud infrastructure and secure running workloads wherever they are deployed. Aqua customers are among the world's largest enterprises in financial services, software, media, manufacturing and retail, with implementations across a broad range of cloud providers and modern technology stacks spanning containers, serverless functions and cloud VMs. For more information, visit http://www.aquasec.com or follow us on twitter.com/AquaSecTeam.

Media Contact
Jennifer Tanner
Look Left Marketing
aqua@lookleftmarketing.com


The little-known open-source community behind the government’s new environmental justice tool – GCN.com

This story was originally published by Grist. You can subscribe to its weekly newsletter here.

In February, the White House published a beta version of its new environmental justice screening tool, a pivotal step toward achieving the administration's climate and equity goals. The interactive map analyzes every census tract in the U.S. using socioeconomic and environmental data, and designates some of those tracts as disadvantaged based on a complicated formula.

Once finalized, this map and formula will be used by government agencies to ensure that at least 40 percent of the overall benefits of certain federal climate, clean energy, affordable and sustainable housing, clean water, and other programs are directed to disadvantaged communities, an initiative known as Justice40.

But this new screening tool is not only essential to environmental justice goals. It's also a pioneering experiment in open governance. Since last May, the software development for the tool has been open source, meaning it was in the public domain even while it was a work in progress. Anyone could find it on GitHub, an online code management platform for developers, and then download it and explore exactly how it worked.

In addition, the government created a public Google Group where anyone who was interested in the project could share ideas, help troubleshoot issues, and discuss what kinds of data should be included in the tool. There were monthly community chats on Zoom to allow participants to have deeper discussions, regular office hours on Zoom for less formal conversations, and even a Slack channel that anyone could join.

All of this was led by the U.S. Digital Service, or USDS, the government's in-house staff of data scientists and web engineers. The office was tasked with gathering the data for the tool, building the map and user interface, and advising the Council on Environmental Quality, or CEQ, another White House agency, in developing the formula that determines which communities are deemed disadvantaged.

These were unprecedented efforts by a federal agency to work both transparently and collaboratively. They present a model for a more democratic, more participatory form of government, and reflect an attempt to incorporate environmental justice principles into a federal process.

"Environmental justice has a long history of participatory practices," said Shelby Switzer, the USDS open community engineer and technical advisor to Justice40, citing the Jemez Principles for Democratic Organizing, a sort of Bible for inclusivity in environmental justice work. "Running this project from the start in as open and participatory of a way as possible was important to the team as part of living environmental justice values."

The experiment gave birth to a lively community, and some participants lauded the agency's effort. But others were skeptical of how open and participatory it actually was. Despite being entirely public, it was not widely advertised and ultimately failed to reach key experts.

Open source doesn't just mean allowing the public to look into the mechanics of a given software or technology. It's an invitation to tinker around with it, add to it, and bend it to your own needs. If you use a web browser with extensions like an ad blocker or a password manager, you're benefiting from the fact that the browser is open source and allows savvy developers to build all sorts of add-ons to improve your experience.

The Justice40 map is intended to be used similarly. Environmental organizations or community groups can build off the existing code, adding more data points to the map that might help them visualize patterns of injustice and inform local solutions. The code isn't just accessible. The public can also report bugs, request features, and leave comments and questions that the USDS will respond to.

The USDS hoped to gather input from people with expertise in coding, mapping technology, and user experience, as well as environmental justice issues. Many similar screening tools have already been developed at the state level in places like California, New York, Washington, and Maryland.

"We know that we can learn from a wide variety of communities, including those who will use or will be impacted by the tool, who are experts in data science or technology, or who have experience in climate, economic, or environmental justice work," the agency wrote in a mission statement pinned to the Justice40 data repository.

Garry Harris, the founder of a nonprofit called the Center for Sustainable Communities, was one such participant. Harris' organization uses science and technology to implement community-based sustainability solutions, and he found out about the Google Group from a colleague while working on a project to map pollution in Virginia. "As a grassroots organization, I feel really special to be in the room," he said. "I know in the absence of folks like us who look at it both from a technology and an environmental justice lens, the outcomes are not going to be as beneficial."

Through the Google Group and monthly community chats, the agency solicited input on finding reliable data sources to measure things like a community's exposure to extreme heat and to pollution from animal feedlots.

"That level of transparency is not common," said Rohit Musti, the director of software and data engineering at the nonprofit American Forests. Musti found out about the open-source project through some federal forest policy work his organization was doing and became a regular participant. He said he felt the USDS did a lot of good outreach to people who work in this space, and made people like him feel like they could contribute.

Musti submitted American Forests' Tree Equity Score, a measure of how equitably trees are distributed across urban neighborhoods, to the Justice40 data repository. Although the Tree Equity Score data did not make it into the beta version of the Justice40 screening tool, it is included in a separate comparison tool that the USDS created.

Right now there's no user-friendly way to access this comparison tool, but if you're skilled in the programming language Python, you can generate reports that compare the government's environmental justice map to other established environmental justice screening methods, including the Tree Equity Score. You can also view all of the experiments the USDS ran to explore different approaches to identifying disadvantaged communities.

But to Jessie Mahr, director of technology at the nonprofit Environmental Policy Innovation Center, who was also active in the Justice40 open-source community, the Python fluency prerequisite signifies an underlying problem.

"You can call it open source," she said, "but to which community? If the community that's going to be using it cannot access that tool, does it matter that it's open source?"

Mahr said she respected what the USDS team was trying to do but was not convinced by the result. She said that relatively little of the discussion and information sharing that went on in the Google Group and monthly community chats seemed to make it into the tool. While the USDS staffers running the effort seemed genuinely interested in gathering outside expertise, they weren't the ones making the final decisions; CEQ was. And the open-source platforms did not offer any window into what was being conveyed to the decision-makers. Mahr was disappointed that the beta tool released to the public in February did not reflect the research that outside participants shared related to data on extreme heat and proximity to animal feedlots, for example.

Switzer, the USDS technical adviser, told Grist that CEQ was part of the effort from the start. They said that a senior advisor to CEQ regularly participated in the Google Group and that learnings from the group were brought to CEQ in various formats as relevant.

CEQ has not explained the logic behind the choices embedded in the tool, like which data sets were included, though it is planning to release more details on the methodology soon. The agency is also holding listening and training sessions where the public can learn more.

But it was also strange to Mahr that despite the high profile of the White House's Justice40 initiative in the environmental justice world, the open-source efforts were not advertised. "I never heard about it through any other channels working on Justice40 that I would have expected to," said Mahr. "I enjoyed participating in the USDS team's efforts and don't think they were trying to hide them," she added in an email. "I just think that they didn't have the license or capacity to really promote it." Like the other participants Grist spoke to, Mahr heard about the project through word of mouth, from a colleague who knew the USDS team.

Switzer confirmed that the USDS team largely relied on word of mouth to get the word out and noted that they did reach out to people who had expertise working on environmental justice screening tools.

But it's clear that the word-of-mouth system failed to reach key voices in the field. Esther Min, a researcher at the University of Washington who helped build Washington's state-level environmental justice screening tool, told Grist that she had met with folks from CEQ about a year ago to talk them through that project. But she hadn't heard anything about the Google Group until February, after the beta version of the federal tool was released. Alvaro Sanchez, the vice president of policy at the nonprofit Greenlining Institute and a participant in the development of California's environmental justice screening tool, said he had no idea about the group until Grist reached out to him in March.

Sanchez was frustrated, especially because for months the government offered very little information about the status of the tool. On one hand, he understands that the USDS team may not have had the capacity to reach out far and wide and invite every grassroots organization in the country. "But the bar that I'm setting is actually fairly low," he said. "The people who have been working on this stuff for such a long time, we didn't know what was happening with the tool? To me, that indicates that the level of engagement was actually really minimal."

Sacoby Wilson, a pioneer of environmental justice screening tools based at the University of Maryland, received an invite to the group from another White House agency, the Office of Management and Budget, last May. He said he didn't get the sense that the group was hidden but agreed that the USDS hadn't done a great job of getting the word out to either the data experts who build these environmental mapping tools at the state level, or the community organizations that actually work on the issues that the tool is trying to visualize.

But Wilson pointed out that the federal government used another channel to gather input from communities: the White House Environmental Justice Advisory Council, which is made up of leaders from grassroots organizations all over the country, submitted extensive recommendations to CEQ on which considerations should be reflected in the screening tool. To Wilson, an overlooked issue was that the Advisory Council didn't have enough environmental mapping experts.

In response to a question about whether USDS did enough outreach, Switzer said the agency was still working on it: "We hope to continue to broaden this kind of community engagement and making the open source group as inclusive and equitable as possible."

"Of course, it has been a learning experience as we're kind of pioneers in this as a government practice!" they also said.

The tool is still in beta form, and CEQ plans to update it based on public feedback and research. The public can attend CEQ listening sessions and submit comments through the Federal Register or through the screening tool website. The discussion in the open-source Google Group is also ongoing, and the USDS team will continue to host monthly community chats as well as weekly office hours.

In a recent email announcing upcoming office hours, Switzer encouraged people to attend "if you don't know how to use this Github thing and would like an intro 🙂"

This story has been updated to clarify the types of federal programs included in the Justice40 initiative.


Sorry, developers: Microsoft’s new tool fixes the bugs in software code written by AI – ZDNet

Microsoft reckons machine-generated code should be treated with a "mixture of optimism and caution" because programming can be automated with large language models, but the code also can't always be trusted.

These large pre-trained language models include OpenAI's Codex, Google's BERT natural language program and DeepMind's work on code generation. OpenAI's Codex, unveiled in August, is available through Microsoft-owned GitHub's Copilot tool.

To address the question of code quality from these language models, Microsoft researchers have created Jigsaw, a tool that can improve the performance of these models using "post-processing techniques that understand the programs' syntax and semantics and then leverages user feedback to improve future performance."


It's currently designed to synthesize code for the Python Pandas API using multi-modal inputs, says Microsoft. Pandas is a popular data manipulation and analysis library for data scientists who use the Python programming language.

Large language models like Codex allow a developer to write an English description of a snippet of code, and the model can synthesize the intended code in, say, Python or JavaScript. But, as Microsoft notes, that code might be incorrect or fail to compile or run, so the developer needs to check it before using it.

"With Project Jigsaw, we aim to automate some of this vetting to boost the productivity of developers who are using large language models like Codex for code synthesis," explains the Jigsaw team at Microsoft Research.

Microsoft reckons Jigsaw can "completely automate" the entire process of checking whether code compiles, addressing error messages, and testing whether the code produces what the developer wanted it to output.

"Jigsaw takes as input an English description of the intended code, as well as an I/O example. In this way, it pairs an input with the associated output, and provides the quality assurance that the output Python code will compile and generate the intended output on the provided input," they note.

The paper, Jigsaw: Large Language Models meet Program Synthesis, looks at the approach in Python Pandas.

Using Jigsaw, a data scientist or developer provides a description of the intended transformation in English, an input dataframe, and the corresponding output dataframe. Jigsaw then synthesizes the intended code.
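To make that workflow concrete, here is a small, hypothetical Pandas example of the three pieces a Jigsaw-style system works from: the English description, an input dataframe, and the expected output. The variable names are made up for this sketch and are not part of the tool's interface.

```python
import pandas as pd

# Hypothetical illustration of the three inputs a Jigsaw-style workflow uses;
# the names below are invented for this sketch, not the tool's actual API.

# 1. English description of the intended transformation
description = "Keep rows where sales exceed 100 and sort them by region"

# 2. Input dataframe
df_in = pd.DataFrame({
    "region": ["east", "west", "north"],
    "sales": [150, 90, 200],
})

# 3. Expected output dataframe for that input
df_out = pd.DataFrame(
    {"region": ["east", "north"], "sales": [150, 200]},
    index=[0, 2],
)

# Code a language model might synthesize from the description. Jigsaw's
# post-processing checks that running it on df_in reproduces df_out before
# handing it back to the developer.
candidate = df_in[df_in["sales"] > 100].sort_values("region")
assert candidate.equals(df_out)
```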


Microsoft found that Codex on its own produces the correct output about 30% of the time. In this system, natural language and other parameters are pre-processed, fed into Codex and GPT-3, and the post-processed output is then returned to the human for verification and editing. That final human check is fed back into the pre- and post-process mechanisms to improve them. If the code fails, Jigsaw repeats the repair process during the post-processing stage.
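That loop can be sketched in a few lines of self-contained Python. Every function below is a toy stand-in for illustration only; none of it is Jigsaw's or Codex's actual interface.

```python
# Toy sketch of the pre-process -> model -> post-process -> human-verify loop
# described above. All names are placeholders invented for this illustration.

def preprocess(description: str, io_example: tuple) -> str:
    """Fold the English description and the I/O pair into a single prompt."""
    example_input, expected = io_example
    return f"{description}\n# input: {example_input!r}\n# expected: {expected!r}"

def fake_model(prompt: str) -> str:
    """Stand-in for Codex/GPT-3: returns a (possibly wrong) candidate program."""
    return "result = [x * 2 for x in data]"

def passes_io_check(code: str, io_example: tuple) -> bool:
    """Run the candidate on the example input and compare with the expected output."""
    example_input, expected = io_example
    scope = {"data": example_input}
    try:
        exec(code, scope)  # toy example only; never exec untrusted code like this in production
    except Exception:
        return False
    return scope.get("result") == expected

def repair(code: str) -> str:
    """Stand-in for Jigsaw's syntax/semantics-aware post-processing repairs."""
    return code.replace("* 2", "* 3")  # pretend the repair step found the bug

description = "Triple every number in the list"
io_example = ([1, 2], [3, 6])

code = fake_model(preprocess(description, io_example))
for _ in range(3):  # bounded repair loop
    if passes_io_check(code, io_example):
        break
    code = repair(code)

print(code)  # the vetted snippet handed back to the developer for review and feedback
```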

Jigsaw improves the accuracy of output to greater than 60% and, through user feedback, the accuracy improves to greater than 80%, according to Microsoft Research.

Microsoft notes that several challenges need to be overcome before it has a true "pair programmer". For example, the team only tested the I/O quality of the synthesized code. In reality, code quality also covers whether the code performs well, is free of security flaws, and respects licensing attribution.

The rest is here:

Sorry, developers: Microsoft's new tool fixes the bugs in software code written by AI - ZDNet

What do you do when all your source walks out the door? – The Register

Who, Me? Who has got your back up? Forget comments in code, what do you do when all your source has been packed into the trunk of a family sedan? Welcome to Who, Me?

Today's story, from a reader Regomised as "Al", concerns his time at a company in the 1980s. The company was working on a project to replace thousands of ageing "dumb" terminals with PCs. "The Great PC Invasion and Distributed Computing Revolution were under way," Al observed.

"The company had hired a collection of experienced PC and minicomputer programmers who were led by a management team of Mainframe Gods (as they viewed themselves)."

We know just the type.

"As a bunch of hotshot PC and UN*X types," he went on, "we demanded a version control system and a tool for backing up the source tree. In their wisdom, the Mainframe Gods chose not to invest in spurious tech like backups and version control, therefore each programmer had a personal responsibility to back up their source code."

It went about as well as you might imagine. Some staff followed the process for a bit, but after a while nobody bothered. Nobody, that is, except for the person who did the builds. "Dave" (for that was not his name) had all the current production code on his PC. Everything. In one place.

It was fine at first. Dave worked hard and also wrote a lot of code. Al couldn't tell how good it was; the words "Code Review" were alien to the company. But the builds happened and the terminal emulation software was delivered. Everyone was happy. Even though Dave had the only copy of the "official" source code.

Al described Dave as "a big guy." He was softly spoken and tended to (mostly) keep his opinions to himself. "He had his eccentricities," remembered Al, "such as a fondness for rifles that he kept in the trunk of his car."

Of course, the inevitable happened. After what Al delicately described as a series of "issues," Dave quit or was asked to leave the company (both mysteriously happened at the same time).

However, rather than march Dave directly out of the building, the geniuses in management gave him the rest of the day to finish up his work.

"During the afternoon of that day, Dave's manager looked out the window to see Dave loading boxes and boxes of floppy disks into his car," said Al.

A curious thing to do on one's last day. Perhaps Dave was just doing a final clear-out of office stationery? Or perhaps...

The manager scurried over to Dave's PC and found it coming to the end of a FORMAT operation. The hard disk containing the only complete copy of the source had been wiped. The floppies Dave was loading up into his car, nestled among the rifles, were the only backups in existence.

"To his credit, Dave's manager talked him off the cliff and got the backups returned to the office," Al told us. Dave was then politely escorted to a coffee shop to complete his last day while a panicked staffer managed to restore the backups to the now blank PC and the project could continue.

"The project did eventually succeed, but we did have further harrowing moments with more conventional causes," said Al. "We did establish regular backups and duplication of key source code."

"The moral of the story: back up your data. Trust but verify."

We like to think Dave had no malicious intent and was simply tidying up after himself. Surely not a final act of vengeance? And considering what else was in the trunk, it could have gone very, very differently.

Ever been tempted to dash out a quick FORMAT C: or a sudo shred just for giggles? Confess all with an email to Who, Me?

Go here to read the rest:

What do you do when all your source walks out the door? - The Register

This Open-Source Library Accelerates AI Inference by 5-20x in a Few Lines of Code – hackernoon.com

The nebullvm library is an open-source tool to accelerate AI computing. It takes your AI model as input and outputs an optimized version that runs 5-20 times faster on your hardware. Nebullvm is quickly becoming popular, with 250+ GitHub stars on release day. The library aims to be deep learning model agnostic and easy to use: it takes a few lines of code to install the library and optimize your models, and everything runs locally on your machine.

Your one-stop shop for AI acceleration.

How does nebullvm work?

It takes your AI model as input and outputs an optimized version that runs 5-20 times faster on your hardware. In other words, nebullvm tests multiple deep learning compilers to identify the best possible way to execute your model on your specific machine, without impacting the accuracy of your model.

And that's it. In just a few lines of code.
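As a sketch of what those few lines look like, the snippet below follows the PyTorch entry point shown in the project's early README. The exact function name and arguments may have changed in later releases, so treat this as illustrative and check the GitHub repository for the current interface.

```python
import torch
import torchvision.models as models
from nebullvm import optimize_torch_model  # early PyTorch entry point; may differ in newer releases

# Load any PyTorch model, e.g. a pretrained ResNet-50.
model = models.resnet50(pretrained=True)

# nebullvm tries several deep learning compilers on this machine and returns
# the fastest working variant of the model, without changing its accuracy.
optimized_model = optimize_torch_model(
    model,
    batch_size=1,
    input_sizes=[(3, 224, 224)],  # one shape tuple per model input, batch dimension excluded
    save_dir=".",
)

# The optimized model is then used like the original one.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    prediction = optimized_model(x)
```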

And a big thank you to everyone for supporting this open-source project! The library received 250+ GitHub stars on release day, and that's just amazing.

Let's learn more about nebullvm and AI optimization. Where should we start? From...

Or let's jump straight to the library nebullvm

Finally, the adoption of Artificial Intelligence (AI) is growing rapidly, although we are still far from exploiting the full potential of this technology.

Indeed, what typically happens is that AI developers spend most of their time on data analysis, data cleaning, and model testing/training with the objective of building very accurate AI models.

Yet... few models make it into production. If they do, two situations arise:

AI models are developed by skilled data scientists and great AI engineers, who often have limited experience with cloud, compilers, hardware, and all the low-level matters. When their models are ready to be deployed, they select the first GPU or CPU they can think of on the cloud or their company/university server, unaware of the severe impact on model performance (i.e. much slower and more expensive computing) caused by uninformed hardware selection, poor cloud infrastructure configuration, and lack of model/hardware post-training optimization.

Other companies have developed in-house AI models that work robustly. AI inference is critical to these companies, so they often build a team of hardware/cloud engineers who spend hours looking for out-of-the-box methods to optimize model deployment.

Do you fall into one of these two groups? Then you might be interested in the nebullvm library, and below we explain why.

How does nebullvm work?

You import the library, nebullvm does some magic, and your AI model will run 5-20 times faster.

And that's it. In just a few lines of code.

The goal of nebullvm library is to let any developer benefit from deep learning compilers without having to waste tons of hours understanding, installing, testing and debugging this powerful technology.

Nebullvm is quickly becoming popular, with 250+ GitHub stars on release day and hundreds of active users from both startups and large tech companies. The library aims to be:

Deep learning model agnostic. nebullvm supports all the most popular architectures such as transformers, LSTMs, CNNs and FCNs.

Hardware agnostic. The library now works on most CPUs and GPUs and will soon support TPUs and other deep learning-specific ASICs.

Framework agnostic. nebullvm supports the most widely used frameworks (PyTorch, TensorFlow and Hugging Face) and will soon support many more.

Secure. Everything runs locally on your machine.

Easy-to-use. It takes a few lines of code to install the library and optimize your models.

Leveraging the best deep learning compilers. There are tons of DL compilers that optimize the way your AI models run on your hardware. It would take tons of hours for a developer to install and test them at every model deployment. The library does it for you!

Why is accelerating computing by 5-20x so valuable?

To save time: Accelerate your AI services and make them real-time.

To save money: Reduce cloud computing costs.

To save energy: Reduce the electricity consumption and carbon footprint of your AI services.

You can probably easily see how accelerated computing would benefit your specific use case. We'll also share some examples of how nebullvm is helping many in the community across different sectors:

Fast computing makes search and recommendation engines faster, which leads to a more enjoyable user experience on websites and platforms. Besides, near real-time AI is a strict requirement for many healthtech companies and for autonomous driving, when slow response time can put people's lives in danger. The metaverse and the gaming industry also require near-zero latency to allow people to interact seamlessly. Speed can also provide an edge in sectors such as crypto/NFT/fast trading.

Lowering costs with minimal effort never hurts anyone. There is little to explain about this.

Green AI is a topic that is becoming more popular over time. Everyone is well aware of the risks and implications of climate change and it is important to reduce energy consumption where possible. Widespread awareness of the issue is reflected in how purchasing behavior across sectors is moving toward greater sustainability. In addition, low power consumption is a system requirement in some cases, especially on IoT/edge devices that may not be connected to continuous power sources.

We suggest testing the library on your AI model right away by following the installation instructions on GitHub. If instead you want to get a hands-on sense of the library's capabilities, check out the notebooks at this link, where you can test nebullvm on popular deep learning models. Note that the notebooks still require you to install the library, just as you would to test nebullvm on your own models, which takes several minutes. Once it's installed, nebullvm will optimize your models in a short time.

We have also tested nebullvm on popular AI models and hardware from leading vendors.

At first glance, we can observe that acceleration varies greatly across hardware-model couplings. Overall, the library delivers strongly positive results, with most speedups ranging from 2 to 30 times.

To summarize, the results are:

Nebullvm provides positive acceleration to non-optimized AI models

The table below shows the response time in milliseconds (ms) of the non-optimized model and the optimized model for the various model-hardware couplings, as an average over 100 experiments. It also displays the speedup provided by nebullvm, where speedup is defined as the response time of the non-optimized model divided by the response time of the optimized model.
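To make that definition concrete, here is a tiny worked example with made-up numbers (not figures from the benchmark table):

```python
# Worked example of the speedup definition above, with assumed numbers:
# if the non-optimized model answers in 120 ms and the optimized one in 24 ms,
# the speedup is 120 / 24 = 5x.
baseline_ms = 120.0
optimized_ms = 24.0
speedup = baseline_ms / optimized_ms
print(f"{speedup:.0f}x")  # -> 5x
```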

Hardware used for the experiment is the following:

Nebullvm leverages the best deep learning compilers to accelerate AI models in inference.

So what exactly are deep learning compilers?

A deep learning compiler takes your model as input and produces an efficient version of it that runs the model computation graph faster on a specific hardware.

How?

There are several methods that, in principle, all attempt to rearrange the computations of neural networks to make better use of the hardware memory layout and optimize hardware utilization.

In very simplistic terms, deep learning optimization can be achieved by optimizing the entire end-to-end computation graph, as well as by restructuring operators (mainly the loops involved in matrix multiplications) within the graph [1,2]. Common examples of such techniques include operator fusion, constant folding and data layout transformations.
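As a toy illustration of the first of these, the sketch below shows what operator fusion buys at the level of a single subgraph. It is a conceptual NumPy example, not nebullvm code.

```python
import numpy as np

# Conceptual sketch of operator fusion: three element-wise operators
# collapsed into one expression over the data (not nebullvm code).

x = np.random.rand(1_000_000).astype(np.float32)
w, b = np.float32(2.0), np.float32(1.0)

def unfused(x):
    # Each operator makes its own pass over memory and materializes a
    # full-size intermediate array, like separate nodes in a computation graph.
    t1 = x * w
    t2 = t1 + b
    return np.maximum(t2, np.float32(0.0))  # ReLU

def fused(x):
    # Same arithmetic written as one expression. Plain NumPy still allocates
    # temporaries here, but a deep learning compiler would emit a single
    # kernel for this whole subgraph: one read of x, one write of the result.
    return np.maximum(x * w + b, np.float32(0.0))

assert np.allclose(unfused(x), fused(x))
```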

Deep learning optimization depends greatly on the specific hardware-software coupling, and specific compilers work best on specific couplings. So it is difficult to know a priori the performance of the many deep learning compilers on the market for each specific use case, and testing is necessary. This is exactly what nebullvm does, saving programmers countless hours.

The team behind nebullvm is a group of former MIT, ETH, and EPFL folks who teamed up and launched Nebuly. They developed this open-source library along with a lot of other great technologies to make AI more efficient. You can find out more about Nebuly on its website, LinkedIn, Twitter or Instagram.

Many kudos go to Diego Fiori, the library's main contributor. Diego is a curious person and always thirsty for knowledge, which he likes to consume as much as good food and wine. He is a versatile programmer, very jealous of his code, and never lets it look less than magnificent. In short, Diego is the CTO of Nebuly.

Huge thanks also go to the open-source community that has developed the numerous DL compilers that make it possible to accelerate AI models.

And finally, many thanks to all those who are supporting the nebullvm open-source community, finding and fixing bugs, and enabling the creation of this state-of-the-art, super-powerful AI accelerator.

Papers and articles about deep learning compilers.

Documentation of deep learning compilers used by nebullvm.

View post:

This Open-Source Library Accelerates AI Inference by 5-20x in a Few Lines of Code - hackernoon.com