Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited to those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role include designing and developing analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students is one of our utmost priorities, and COVID-vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where an on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on the NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully COVID-19 vaccinated to secure successful employment with NUS.

View post:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

Synopsys: Enterprises struggling with open source software – TechTarget

While nearly every enterprise environment contains open source applications, organizations are still struggling to properly manage the code, according to a report by Synopsys.

The 2022 Open Source Security and Risk Analysis (OSSRA) report revealed the sheer volume of open source software used by enterprises across a variety of industries, as well as challenges with out-of-date code and high-risk vulnerabilities such as Log4Shell. While problems with visibility and prioritization persist, the report highlighted improvements in a few areas, most notably an increasing awareness of open source software.

To compile the report, the Synopsys Cybersecurity Research Center and Black Duck Audit Services examined findings from more than 2,400 commercial codebases across 17 industries. While the analysis determined that 97% of the codebases contained open source software, a breakdown by industry showed four sectors contained open source in 100% of their codebases: computer hardware and semiconductors; cybersecurity; energy and clean tech; and the internet of things.

"Even the sector with the lowest percentage -- healthcare, health tech, life sciences -- had 93%, which is still very high," the report said.

Additionally, the report found 78% of the code within codebases was also open source. Tim Mackey, principal security strategist at Synopsys Cybersecurity, contributed to the report and told SearchSecurity he was not surprised by the high percentage. It tracks with the last four or five years, during which more than two-thirds of the code within codebases was open source. In 2020, it was 75%. While the usage of open source software does vary by industry, Mackey said it is just the way the world works.

"I suspect that [we'll] probably creep into the 80s over time, but we're nearing the bifurcation of propriety and custom versus open source for most industries," he said.

One aspect that has accelerated the pace of innovation over the last 10 years is how developers can focus on unique value propositions and features for employers. From there, Mackey said they can access libraries that do the foundational work. The challenge, he said, is that a development team will follow a different set of security rules and release criteria for open source software.

While it can be beneficial that anyone can examine the source code, Mackey said in practice most people just focus on what it does, download it and use it. Therein lies the risk for companies.


"So, with all the open source that's powering our modern world, that makes it a prime target for being an attack vector," he said.

A recurring trend in the report is "that open source itself doesn't create business risk, but its management does."

Mackey reiterated that sentiment, and said enterprises that change vendors after an incident may be pointing the finger in the wrong direction. He referred to the issues with open source as a "process problem."

"The open source itself might have a bug, but any other piece of software will have a bug as well," Mackey said.

However, the high volume does make it tricky to maintain. The OSSRA report determined that 81% of software used by enterprises contained at least one vulnerability. The jQuery and Lodash libraries contained the highest percentage of vulnerable components. Spring Framework, which caused issues last month after researchers reported two flaws in the development framework, also made the list in 2021.

Additionally, Black Duck Audit Services risk assessments found that out of 2,000 codebases, 88% contained outdated versions of open source components, meaning "an update or patch was available but had not been applied."

More significantly, 85% contained open source code that was more than four years out of date. That percentage has been consistent over the years, according to Mackey.

He said that while it requires more digging to identify the issue, it highlights how the lack of an update process can make it easy for components to fall out of date. The sheer volume of open source code is also an issue -- there could be hundreds to thousands of applications, with hundreds of components per application.

"That's really one of the cruxes of what we're seeing on a consistent basis, is that companies struggle to figure out what the most efficient way to manage this stuff really is," he said.

One flaw that caused enterprises a management and scale nightmare last year was Log4Shell. While the report noted a decrease in high-risk vulnerabilities, "2021 was still a year filled with open source issues." That included supply chain attacks and hacker exploits of Docker images, but "most notably" the zero-day vulnerability in the Apache Log4j utility known as Log4Shell. It allowed attackers to execute arbitrary code on vulnerable servers, according to the report.

"What's most notable about Log4Shell, however, is not its ubiquity but the realizations it spurred. In the wake of its discovery, businesses and government agencies were compelled to re-examine how they use and secure open source software created and maintained largely by unpaid volunteers, not commercial vendors. What also came to light was that many organizations are simply unaware of the amount of open source used in their software," the report said.

Researchers analyzed the audited Java codebases and found 15% contained a vulnerable Log4j component. Though Mackey acknowledged the quantity of Java applications has changed and log data has improved, he said 15% was lower than he expected.
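As a point of reference for how teams went hunting for the component, one common low-tech approach was simply to scan build output and deployment directories for bundled log4j-core jars and check their versions. The sketch below is a generic illustration of that idea, not the audit methodology Synopsys used; the root directory path is hypothetical.

```python
# Generic illustration: walk a directory tree and flag bundled log4j-core jars
# older than 2.17.1 (the release that closed out the Log4Shell-era CVEs).
# This is not the audit method used for the OSSRA report.
import re
from pathlib import Path

SAFE_VERSION = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")


def find_vulnerable_log4j(root: str) -> list:
    hits = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if match and tuple(int(g) for g in match.groups()) < SAFE_VERSION:
            hits.append(jar)
    return hits


if __name__ == "__main__":
    for path in find_vulnerable_log4j("/opt/apps"):  # hypothetical root directory
        print(f"Potentially vulnerable: {path}")
```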

"My crystal ball says we'll be talking about this next year because that's actually one of the big problems that we see year over year is that people don't necessarily do a good job of patching the vulnerabilities that have been around for a few years," he said.

Differences between commercial and open source software hinder enterprises when it comes to patching. The report noted that commercial patching "usually requires the involvement of a procurement department, as well as review standards that are part of a vendor risk management program." On the other hand, "open source may simply have been downloaded and used at the developer's discretion."

Part of that management extends to security following a merger or acquisition. Mackey said one of the biggest challenges that acquirers have is a lack of visibility and the skill set to evaluate exactly what they are buying. It appears 2021 was a big year for M&As.

"The growth in the number of audited codebases -- 64% larger than last year's -- reflects the significant increase in merger and acquisition transactions throughout 2021," the report said.

Based on the statistics, Mackey said it's exceedingly difficult for enterprises not to use open source.

"I'd argue it's all but impossible," he said. "They'd also have to not be using companies like Amazon or Microsoft or Google, because they're all using open source. It's what powers their clouds. So, it's life today."

While there is work to be done to minimize open source risk, Mackey said Synopsys observed many improvements last year. Enterprises did a better job of managing licensing conflicts, the number of vulnerabilities decreased and the number of applications with high-severity flaws also decreased.

"People are recognizing they need to 'get with the program.' That may be Biden going about beating them over the head, that might be 'Oh wait, I don't want to be the next Colonial Pipeline,'" Mackey said. "We can't necessarily say, but those are good trends. I don't like to say open source is bad in any way; it's just managed differently."

Read the rest here:

Synopsys: Enterprises struggling with open source software - TechTarget

Sorry, developers: Microsoft’s new tool fixes the bugs in software code written by AI – ZDNet

Microsoft reckons machine-generated code should be treated with a "mixture of optimism and caution" because programming can be automated with large language models, but the code also can't always be trusted.

These large pre-trained language models include OpenAI's Codex, Google's BERT natural language program and DeepMind's work on code generation. OpenAI's Codex, unveiled in August, is available through Microsoft-owned GitHub's Copilot tool.

To address the question of code quality from these language models, Microsoft researchers have created Jigsaw, a tool that can improve the performance of these models using "post-processing techniques that understand the programs' syntax and semantics and then leverages user feedback to improve future performance."

SEE: Software development is changing again. These are the skills companies are looking for

It's currently designed to synthesize code for the Python Pandas API using multi-modal inputs, says Microsoft. Pandas is a popular data manipulation and analysis library for data scientists who use the Python programming language.

Language models like Codex allow a developer to provide an English description of a snippet of code, and the model can synthesize the intended code in, say, Python or JavaScript. But, as Microsoft notes, that code might be incorrect or might fail to compile or run, so the developer needs to check it before using it.

"With Project Jigsaw, we aim to automate some of this vetting to boost the productivity of developers who are using large language models like Codex for code synthesis," explains the Jigsaw team at Microsoft Research.

Microsoft reckons Jigsaw can "completely automate" the entire process of checking whether code compiles, addressing error messages, and testing whether the code produces what the developer wanted it to output.

"Jigsaw takes as input an English description of the intended code, as well as an I/O example. In this way, it pairs an input with the associated output, and provides the quality assurance that the output Python code will compile and generate the intended output on the provided input," they note.

The paper, Jigsaw: Large Language Models meet Program Synthesis, looks at the approach in Python Pandas.

Using Jigsaw, a data scientist or developer provides a description of the intended transformation in English, an input dataframe, and the corresponding output dataframe. Jigsaw then synthesizes the intended code.
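As a hypothetical illustration of that workflow (this is not Jigsaw's actual API, which the article does not show), the example below pairs an English intent with an input/output dataframe pair and checks a candidate Pandas snippet, of the kind a model might synthesize, against them.

```python
# Hypothetical illustration of the Jigsaw-style loop described above: an
# English intent plus an input/output example, and a verification step that
# accepts or rejects the code a model synthesizes. Not Jigsaw's real API.
import pandas as pd

# Intent (in English): "Keep rows where sales exceed 100 and add a column
# with sales doubled."
input_df = pd.DataFrame({"store": ["A", "B", "C"], "sales": [90, 150, 220]})
expected_df = pd.DataFrame(
    {"store": ["B", "C"], "sales": [150, 220], "sales_x2": [300, 440]}
)


def candidate(df: pd.DataFrame) -> pd.DataFrame:
    # Code a model might synthesize from the description above.
    out = df[df["sales"] > 100].copy()
    out["sales_x2"] = out["sales"] * 2
    return out


# Post-processing check: the synthesized code must reproduce the provided output.
result = candidate(input_df).reset_index(drop=True)
assert result.equals(expected_df), "candidate rejected; a repair step would run here"
```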

SEE: Remote-working jobs vs back to the office: Why tech's Great Resignation may have only just begun

Microsoft found that the language models alone produce the correct output only about 30% of the time. In this system, natural language and other parameters are pre-processed, fed into Codex and GPT-3, and the post-processed output is returned to the human for verification and editing. That final human check is fed back into the pre- and post-processing mechanisms to improve them. If the code fails, Jigsaw repeats the repair process during the post-processing stage.

Jigsaw improves the accuracy of output to greater than 60% and, through user feedback, the accuracy improves to greater than 80%, according to Microsoft Research.

Microsoft notes that several challenges need to be overcome before it has a true "pair programmer". For example, the evaluation only tested the quality of the I/O behavior of synthesized code. In reality, code quality also covers whether the code performs well, is free of security flaws, and respects licensing attribution.

The rest is here:

Sorry, developers: Microsoft's new tool fixes the bugs in software code written by AI - ZDNet

The little-known open-source community behind the government’s new environmental justice tool – GCN.com

This story was originally published by Grist. You can subscribe to its weekly newsletter here.

In February, the White House published a beta version of its new environmental justice screening tool, a pivotal step toward achieving the administration's climate and equity goals. The interactive map analyzes every census tract in the U.S. using socioeconomic and environmental data, and designates some of those tracts as "disadvantaged" based on a complicated formula.

Once finalized, this map and formula will be used by government agencies to ensure that at least 40 percent of the overall benefits of certain federal climate, clean energy, affordable and sustainable housing, clean water, and other programs are directed to disadvantaged communities, an initiative known as Justice40.

But this new screening tool is not only essential to environmental justice goals. It's also a pioneering experiment in open governance. Since last May, the software development for the tool has been open source, meaning it was in the public domain even while it was a work in progress. Anyone could find it on GitHub, an online code management platform for developers, and then download it and explore exactly how it worked.

In addition, the government created a public Google Group where anyone who was interested in the project could share ideas, help troubleshoot issues, and discuss what kinds of data should be included in the tool. There were monthly community chats on Zoom to allow participants to have deeper discussions, regular office hours on Zoom for less formal conversations, and even a Slack channel that anyone could join.

All of this was led by the U.S. Digital Service, or USDS, the government's in-house staff of data scientists and web engineers. The office was tasked with gathering the data for the tool, building the map and user interface, and advising the Council on Environmental Quality, or CEQ, another White House agency, in developing the formula that determines which communities are deemed disadvantaged.

These were unprecedented efforts by a federal agency to work both transparently and collaboratively. They present a model for a more democratic, more participatory form of government, and reflect an attempt to incorporate environmental justice principles into a federal process.

"Environmental justice has a long history of participatory practices," said Shelby Switzer, the USDS open community engineer and technical advisor to Justice40, citing the Jemez Principles for Democratic Organizing, a sort of Bible for inclusivity in environmental justice work. "Running this project from the start in as open and participatory of a way as possible was important to the team as part of living environmental justice values."

The experiment gave birth to a lively community, and some participants lauded the agency's effort. But others were skeptical of how open and participatory it actually was. Despite being entirely public, it was not widely advertised and ultimately failed to reach key experts.

Open source doesn't just mean allowing the public to look into the mechanics of a given software or technology. It's an invitation to tinker around with it, add to it, and bend it to your own needs. If you use a web browser with extensions like an ad blocker or a password manager, you're benefiting from the fact that the browser is open source and allows savvy developers to build all sorts of add-ons to improve your experience.

The Justice40 map is intended to be used similarly. Environmental organizations or community groups can build off the existing code, adding more data points to the map that might help them visualize patterns of injustice and inform local solutions. The code isn't just accessible. The public can also report bugs, request features, and leave comments and questions that the USDS will respond to.

The USDS hoped to gather input from people with expertise in coding, mapping technology, and user experience, as well as environmental justice issues. Many similar screening tools have already been developed at the state level in places like California, New York, Washington, and Maryland.

"We know that we can learn from a wide variety of communities, including those who will use or will be impacted by the tool, who are experts in data science or technology, or who have experience in climate, economic, or environmental justice work," the agency wrote in a mission statement pinned to the Justice40 data repository.

Garry Harris, the founder of a nonprofit called the Center for Sustainable Communities, was one such participant. Harris' organization uses science and technology to implement community-based sustainability solutions, and he found out about the Google Group from a colleague while working on a project to map pollution in Virginia. "As a grassroots organization, I feel really special to be in the room," he said. "I know in the absence of folks like us who look at it both from a technology and an environmental justice lens, the outcomes are not going to be as beneficial."

Through the Google Group and monthly community chats, the agency solicited input on finding reliable data sources to measure things like a community's exposure to extreme heat and to pollution from animal feedlots.

"That level of transparency is not common," said Rohit Musti, the director of software and data engineering at the nonprofit American Forests. Musti found out about the open-source project through some federal forest policy work his organization was doing and became a regular participant. He said he felt the USDS did a lot of good outreach to people who work in this space, and made people like him feel like they could contribute.

Musti submitted American Forests' Tree Equity Score, a measure of how equitably trees are distributed across urban neighborhoods, to the Justice40 data repository. Although the Tree Equity Score data did not make it into the beta version of the Justice40 screening tool, it is included in a separate comparison tool that the USDS created.

Right now there's no user-friendly way to access this comparison tool, but if you're skilled in the programming language Python, you can generate reports that compare the government's environmental justice map to other established environmental justice screening methods, including the Tree Equity Score. You can also view all of the experiments the USDS ran to explore different approaches to identifying disadvantaged communities.
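For a flavor of what such a comparison involves, a Python-literate user could join two tract-level designation files with pandas along the following lines. The file names, column names and 0/1 encoding here are invented for illustration; this is not the USDS comparison tool's actual interface.

```python
# Hypothetical sketch: compare two environmental-justice screens by census
# tract. File and column names are invented; this is not the USDS tool.
import pandas as pd

justice40 = pd.read_csv("justice40_tracts.csv")      # columns: tract_id, disadvantaged (0/1)
state_tool = pd.read_csv("state_screen_tracts.csv")  # columns: tract_id, flagged (0/1)

merged = justice40.merge(state_tool, on="tract_id", how="inner")
merged["disadvantaged"] = merged["disadvantaged"].astype(bool)
merged["flagged"] = merged["flagged"].astype(bool)

agreement = (merged["disadvantaged"] == merged["flagged"]).mean()
only_federal = (merged["disadvantaged"] & ~merged["flagged"]).sum()
only_state = (~merged["disadvantaged"] & merged["flagged"]).sum()

print(f"Agreement rate: {agreement:.1%}")
print(f"Tracts flagged only by the federal tool: {only_federal}")
print(f"Tracts flagged only by the state tool: {only_state}")
```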

But to Jessie Mahr, director of technology at the nonprofit Environmental Policy Innovation Center, who was also active in the Justice40 open-source community, the Python fluency prerequisite signifies an underlying problem.

"You can call it open source," she said, "but to which community? If the community that's going to be using it cannot access that tool, does it matter that it's open source?"

Mahr said she respected what the USDS team was trying to do but was not convinced by the result. She said that relatively little of the discussion and information sharing that went on in the Google Group and monthly community chats seemed to make it into the tool. While the USDS staffers running the effort seemed genuinely interested in gathering outside expertise, they weren't the ones making the final decisions; CEQ was. And the open-source platforms did not offer any window into what was being conveyed to the decision-makers. Mahr was disappointed that the beta tool that was released to the public in February did not reflect the research that outside participants shared related to data on extreme heat and proximity to animal feedlots, for example.

Switzer, the USDS technical adviser, told Grist that CEQ was part of the effort from the start. They said that a senior advisor to CEQ regularly participated in the Google Group and that learnings from the group were brought to CEQ in various formats as relevant.

CEQ has not explained the logic behind the choices embedded in the tool, like which data sets were included, though it is planning to release more details on the methodology soon. The agency is also holding listening and training sessions where the public can learn more.

But it was also strange to Mahr that despite the high profile of the White House's Justice40 initiative in the environmental justice world, the open-source efforts were not advertised. "I never heard about it through any other channels working on Justice40 that I would have expected to," said Mahr. "I enjoyed participating in the USDS team's efforts and don't think they were trying to hide them," she added in an email. "I just think that they didn't have the license or capacity to really promote it." Like the other participants Grist spoke to, Mahr heard about the project through word of mouth, from a colleague who knew the USDS team.

Switzer confirmed that the USDS team largely relied on word of mouth to get the word out and noted that they did reach out to people who had expertise working on environmental justice screening tools.

But it's clear that the word-of-mouth system failed to reach key voices in the field. Esther Min, a researcher at the University of Washington who helped build Washington's state-level environmental justice screening tool, told Grist that she had met with folks from CEQ about a year ago to talk them through that project. But she hadn't heard anything about the Google Group until February, after the beta version of the federal tool was released. Alvaro Sanchez, the vice president of policy at the nonprofit Greenlining Institute and a participant in the development of California's environmental justice screening tool, said he had no idea about the group until Grist reached out to him in March.

Sanchez was frustrated, especially because for months the government offered very little information about the status of the tool. On one hand, he understands that the USDS team may not have had the capacity to reach out far and wide and invite every grassroots organization in the country. "But the bar that I'm setting is actually fairly low," he said. "The people who have been working on this stuff for such a long time, we didn't know what was happening with the tool? To me, that indicates that the level of engagement was actually really minimal."

Sacoby Wilson, a pioneer of environmental justice screening tools based at the University of Maryland, received an invite to the group from another White House agency called the Office of Management and Budget last May. He said he didn't get the sense that the group was hidden but agreed that the USDS hadn't done a great job of getting the word out to either the data experts who build these environmental mapping tools at the state level, or the community organizations that actually work on the issues the tool is trying to visualize.

But Wilson pointed out that the federal government used another channel to gather input from communities: The White House Environmental Justice Advisory Council, which is made up of leaders from grassroots organizations all over the country, submitted extensive recommendations to CEQ on which considerations should be reflected in the screening tool. To Wilson, an overlooked issue was that the Advisory Council didn't have enough environmental mapping experts.

In response to a question about whether USDS did enough outreach, Switzer said the agency was still working on it. "We hope to continue to broaden this kind of community engagement and making the open source group as inclusive and equitable as possible."

"Of course, it has been a learning experience as we're kind of pioneers in this as a government practice!" they also said.

The tool is still in beta form, and CEQ plans to update it based on public feedback and research. The public can attend CEQ listening sessions and submit comments through the Federal Register or through the screening tool website. The discussion in the open-source Google Group is also ongoing, and the USDS team will continue to host monthly community chats as well as weekly office hours.

In a recent email announcing upcoming office hours, Switzer encouraged people to attend "if you don't know how to use this Github thing and would like an intro 🙂"

This story has been updated to clarify the types of federal programs included in the Justice40 initiative.

See the rest here:

The little-known open-source community behind the government's new environmental justice tool - GCN.com

Aqua Security (Argon) Recognized as a Representative Vendor in Gartner Innovation Insight for SBOMs Report – Yahoo Finance

Aqua Security

Software Bills of Materials improve the visibility, transparency, security and integrity of proprietary and open source code in software supply chains

BOSTON, April 12, 2022 (GLOBE NEWSWIRE) -- Aqua Security, the leading pure-play cloud native security provider, today announced that Aqua Security has been recognized as a Representative Vendor in Gartner Innovation Insight for Software Bill of Materials (SBOMs) Report under Commercial SBOM Tools for Argon.* To realize the full benefits of SBOM, Gartner recommends software engineering leaders integrate SBOMs throughout the software delivery life cycle.

"The report highlights a critical visibility gap that is growing in frequency and severity: organizations are unable to accurately record and summarize the massive volume of software they produce, consume and operate," said Eran Orzel, Senior Director of Argon Sales at Aqua Security. "We agree with the assessment by Gartner that integrating SBOMs into software development workflows is key to achieving software supply chain security at scale. We believe that our technology aligns with their recommendations and was built to help organizations mitigate risks and eliminate security blind spots."

According to Gartner, SBOMs improve the visibility, transparency, security and integrity of proprietary and open source code in software supply chains. The firm predicts that "by 2025, 60% of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice."

Aqua's Argon released its SBOM manifest capability as part of its Integrity Gates. It enables companies to enforce strong security measures across their CI/CD pipeline and its output to improve quality and reduce runtime security issues. Argon's SBOM manifest identifies dependencies and key risks in the artifact development process. Organizations can use it to implement strict security evaluations of artifacts and to mitigate security threats once they are discovered.


For more information on Aqua's Argon solution and to download the report courtesy of Aqua Security, visit Aquasec.com.

*Gartner, Innovation Insight for SBOMs, Manjunath Bhat, Dale Gardner, Mark Horvath, 14 February 2022.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Aqua's Argon Supply Chain Security Solution

Argon, an Aqua company, is a pioneer in software supply chain security and enables security and DevOps teams to protect their software supply chain against vulnerabilities, security risks and supply chain attacks. With Argon, Aqua offers the industry's first solution to secure all stages of software build and release and stop cloud native attacks. Aqua Security's Cloud Native Application Protection Platform (CNAPP) is the only solution that can protect the full software development life cycle (SDLC) from code through build to runtime, ensuring the end-to-end integrity of applications.

About Aqua Security

Aqua Security is the largest pure-play cloud native security company, providing customers the freedom to innovate and accelerate their digital transformations. The Aqua Platform is the leading Cloud Native Application Protection Platform (CNAPP) and provides prevention, detection and response automation across the entire application life cycle to secure the supply chain, secure cloud infrastructure and secure running workloads wherever they are deployed. Aqua customers are among the world's largest enterprises in financial services, software, media, manufacturing and retail, with implementations across a broad range of cloud providers and modern technology stacks spanning containers, serverless functions and cloud VMs. For more information, visit http://www.aquasec.com or follow us on twitter.com/AquaSecTeam.

Media Contact
Jennifer Tanner
Look Left Marketing
aqua@lookleftmarketing.com

Read the original post:

Aqua Security (Argon) Recognized as a Representative Vendor in Gartner Innovation Insight for SBOMs Report - Yahoo Finance

This Open-Source Library Accelerates AI Inference by 5-20x in a Few Lines of Code – hackernoon.com

The nebullvm library is an open-source tool to accelerate AI computing. It takes your AI model as input and outputs an optimized version that runs 5-20 times faster on your hardware. Nebullvm is quickly becoming popular, with 250+ GitHub stars on release day. The library aims to be deep learning model agnostic and easy to use: it takes a few lines of code to install the library and optimize your models, and everything runs locally on your machine.

Your one-stop shop for AI acceleration.

How does nebullvm work?

It takes your AI model as input and outputs an optimized version that runs 5-20 times faster on your hardware. In other words, nebullvm tests multiple deep learning compilers to identify the best possible way to execute your model on your specific machine, without impacting the accuracy of your model.

And that's it. In just a few lines of code.

And a big thank you to everyone for supporting this open-source project! The library received 250+ GitHub stars on release day, and that's just amazing.

Let's learn more about nebullvm and AI optimization. Where should we start? From...

Or let's jump straight to the library nebullvm

The adoption of Artificial Intelligence (AI) is growing rapidly, although we are still far from exploiting the full potential of this technology.

Indeed, what typically happens is that AI developers spend most of their time on data analysis, data cleaning, and model testing/training with the objective of building very accurate AI models.

Yet... few models make it into production. If they do, two situations arise:

AI models are developed by skilled data scientists and great AI engineers, who often have limited experience with cloud, compilers, hardware, and all the low-level matters. When their models are ready to be deployed, they select the first GPU or CPU they can think of on the cloud or their company/university server, unaware of the severe impact on model performance (i.e. much slower and more expensive computing) caused by uninformed hardware selection, poor cloud infrastructure configuration, and lack of model/hardware post-training optimization.

Other companies have developed in-house AI models that work robustly. AI inference is critical to these companies, so they often build a team of hardware/cloud engineers who spend hours looking for out-of-the-box methods to optimize model deployment.

Do you fall into one of these two groups? Then you might be interested in the nebullvm library, and below we explain why.

How does nebullvm work?

You import the library, nebullvm does some magic, and your AI model will run 5-20 times faster.

And that's it. In just a few lines of code.
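As a rough sketch of what those few lines look like for a PyTorch model: the entry point and its arguments below are assumptions based on the project's early documentation, so check the README on GitHub for the current API before copying this.

```python
# Rough sketch of the advertised workflow for a PyTorch model. The entry
# point and arguments are assumptions drawn from the project's early docs;
# consult the nebullvm README for the current API.
import torch
import torchvision.models as models
from nebullvm import optimize_torch_model  # assumed entry point

model = models.resnet50(pretrained=True)

# nebullvm tries several deep learning compilers behind the scenes and keeps
# the fastest variant it finds for this machine, without changing accuracy.
optimized_model = optimize_torch_model(
    model,
    batch_size=1,
    input_sizes=[(3, 224, 224)],  # one shape per model input
    save_dir=".",
)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    prediction = optimized_model(x)
```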

The goal of the nebullvm library is to let any developer benefit from deep learning compilers without having to waste tons of hours understanding, installing, testing and debugging this powerful technology.

Nebullvm is quickly becoming popular, with 250+ GitHub stars on release day and hundreds of active users from both startups and large tech companies. The library aims to be:

Deep learning model agnostic. nebullvm supports all the most popular architectures such as transformers, LSTMs, CNNs and FCNs.

Hardware agnostic. The library now works on most CPUs and GPUs and will soon support TPUs and other deep learning-specific ASICs.

Framework agnostic. nebullvm supports the most widely used frameworks (PyTorch, TensorFlow and Hugging Face) and will soon support many more.

Secure. Everything runs locally on your machine.

Easy-to-use. It takes a few lines of code to install the library and optimize your models.

Leveraging the best deep learning compilers. There are tons of DL compilers that optimize the way your AI models run on your hardware. It would take tons of hours for a developer to install and test them at every model deployment. The library does it for you!

Why is accelerating computing by 5-20x so valuable?

To save time: Accelerate your AI services and make them real-time.

To save money: Reduce cloud computing costs.

To save energy: Reduce the electricity consumption and carbon footprint of your AI services.

Probably you can easily grasp how accelerated computing can benefit your specific use case. We'll also provide you with some use cases on how nebullvm is helping many in the community across different sectors:

Fast computing makes search and recommendation engines faster, which leads to a more enjoyable user experience on websites and platforms. Besides, near real-time AI is a strict requirement for many healthtech companies and for autonomous driving, when slow response time can put people's lives in danger. The metaverse and the gaming industry also require near-zero latency to allow people to interact seamlessly. Speed can also provide an edge in sectors such as crypto/NFT/fast trading.

Lowering costs with minimal effort never hurts anyone. There is little to explain about this.

Green AI is a topic that is becoming more popular over time. Everyone is well aware of the risks and implications of climate change and it is important to reduce energy consumption where possible. Widespread awareness of the issue is reflected in how purchasing behavior across sectors is moving toward greater sustainability. In addition, low power consumption is a system requirement in some cases, especially on IoT/edge devices that may not be connected to continuous power sources.

We suggest testing the library on your AI model right away by following the installation instructions on GitHub. If instead you want to get a hands-on sense of the library's capabilities, check out the notebooks at this link, where you can test nebullvm on popular deep learning models. Note that the notebooks still require you to install the library, just as you will to test nebullvm on your own models, which takes several minutes. Once it's installed, nebullvm will optimize your models in a short time.

We have also tested nebullvm on popular AI models and hardware from leading vendors.

At first glance, we can observe that acceleration varies greatly across hardware-model couplings. Overall, the library provides great positive results, most ranging from 2 to 30 times speedup.

To summarize, the results are:

Nebullvm provides positive acceleration to non-optimized AI models

The table below shows the response time in milliseconds (ms) of the non-optimized model and the optimized model for the various model-hardware couplings as an average value over 100 experiments. It also displays the speedup provided by nebullvm, where speedup is defined as the response time of the non-optimized model divided by the response time of the optimized model.

Hardware used for the experiment is the following:

Nebullvm leverages the best deep learning compilers to accelerate AI models in inference.

So what exactly are deep learning compilers?

A deep learning compiler takes your model as input and produces an efficient version of it that runs the model computation graph faster on a specific hardware.

How?

There are several methods that, in principle, all attempt to rearrange the computations of neural networks to make better use of the hardware memory layout and optimize hardware utilization.

In very simplistic terms, deep learning optimization can be achieved by optimizing the entire end-to-end computation graph, as well as by restructuring operators (mainly the loops involved in matrix multiplications) within the graph [1,2]. One common example is operator fusion, where several element-wise operations are collapsed into a single pass over the data; a toy sketch follows.
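The toy sketch below shows the idea in plain Python: the fused version makes one pass over the data and never materializes the intermediate array. A real compiler emits the fused loop as optimized native code rather than interpreted Python; the example is only meant to show the restructuring, not what any particular compiler produces.

```python
# Toy illustration of operator fusion. A deep learning compiler would emit
# the fused loop as native code; plain Python is used here only to show the
# single-pass structure.
import numpy as np


def scale_then_shift_unfused(x: np.ndarray, a: float, b: float) -> np.ndarray:
    tmp = x * a        # first pass: intermediate array is written to memory
    return tmp + b     # second pass: read it back and add


def scale_then_shift_fused(x: np.ndarray, a: float, b: float) -> np.ndarray:
    out = np.empty_like(x)
    for i in range(x.size):              # single pass over the data
        out.flat[i] = x.flat[i] * a + b  # fused multiply-add per element
    return out


x = np.random.rand(1024)
assert np.allclose(scale_then_shift_unfused(x, 2.0, 1.0),
                   scale_then_shift_fused(x, 2.0, 1.0))
```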

Deep learning optimization depends greatly on the specific hardware-software coupling, and specific compilers work best on specific couplings. So it is difficult to know a priori how the many deep learning compilers on the market will perform for each specific use case, and testing is necessary. This is exactly what nebullvm does, saving programmers countless hours.

The team behind nebullvm are a group of former MIT, ETH, and EPFL folks who teamed up and launched Nebuly. They developed this open-source library along with a lot of other great technologies to make AI more efficient. You can find out more about Nebuly on its website, LinkedIn, Twitter or Instagram.

Many kudos go to Diego Fiori, the library's main contributor. Diego is a curious person and always thirsty for knowledge, which he likes to consume as much as good food and wine. He is a versatile programmer, very jealous of his code, and never lets his code look less than magnificent. In short, Diego is the CTO of Nebuly.

Huge thanks also go to the open-source community that has developed the numerous DL compilers that make it possible to accelerate AI models.

And finally, many thanks to all those who are supporting the nebullvm open-source community, finding and fixing bugs, and enabling the creation of this state-of-the-art, super-powerful AI accelerator.

Papers and articles about deep learning compilers.

Documentation of deep learning compilers used by nebullvm.

View post:

This Open-Source Library Accelerates AI Inference by 5-20x in a Few Lines of Code - hackernoon.com

What Is a Software Bill of Materials (SBOM)? – BizTech Magazine

What Is the Purpose of a Software Bill Of Materials?

"The goal is to provide transparency into the composition and provenance of software," says Moyle. "For the customer, you can trace the provenance and composition of what you own, and the developer can keep track of what's in their dependencies so they can offer more transparency to their customers."

It's also important for managing potential risk. SBOMs are often compared with nutritional information labels that list all the ingredients contained in a food item. The reason for that? Consider the experience of someone allergic to a commonly occurring ingredient like soy. "Obviously, they'd avoid products that make first-order use of soy: products that are obviously made out of it, like tofu," says Moyle. "But what about second- and third-order usage? For example, a cake with chocolate icing where the chocolate in the icing uses soy lecithin as an emulsifier. They'd still need to know about that, right?" Even a small dose of the allergen can be problematic depending on the severity of the allergy.

How does that tie to SBOMs? "In some situations, dependencies in software can introduce risk in a similar way; for example, when there are severe vulnerabilities in commonly occurring and widely deployed software components," says Moyle.

MORE FROM BIZTECH: Learn key lessons about protecting your organization from cyberattack.

"SBOMs help make it possible to protect your supply chain because they identify what is included in your supply chain," says IDC's Al Gillen. It's like surveying your home for vulnerable access points to know where to install alarm sensors. By providing insights into software components, organizations may not merely identify potential risks but, ideally, identify them early enough that they don't make it to a final product.

That protection will be imperative as supply chains become increasingly vulnerable. "In 2021, the European Union Agency for Cybersecurity estimated that the number of attacks on the supply chain would increase fourfold from the previous year," Worthington says. When all it takes is a single vulnerability to disrupt a supply chain, knowing what that vulnerability might be and how to eliminate it is critical.

Experts caution, however, that SBOMs aren't foolproof. "Collecting, managing, inventorying, and making use of the data from them is a large, complicated exercise," says Moyle. "Yes, it makes the problem of software vulnerabilities potentially more manageable, but it's not magic and won't fix all problems." Worthington agrees: "An SBOM is only a piece of the puzzle. Securing your software supply chain requires people, process and technology."

Because SBOMs can be code heavy, reproducing a sample here would be difficult. However, those interested can find a few examples provided by Worthington at Github, SPDX and the NTIA.
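For a sense of the "ingredients label" structure without reproducing a full document, here is a minimal CycloneDX-style SBOM expressed as a Python dictionary. The field names follow the CycloneDX JSON format; the two components listed are invented examples, not taken from any real product.

```python
# Minimal CycloneDX-style SBOM, expressed as a Python dict for illustration.
# Field names follow the CycloneDX JSON format; the components are invented.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.17.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        },
        {
            "type": "library",
            "name": "lodash",
            "version": "4.17.21",
            "purl": "pkg:npm/lodash@4.17.21",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```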

See the rest here:

What Is a Software Bill of Materials (SBOM)? - BizTech Magazine

What do you do when all your source walks out the door? – The Register

Who, Me? Who has got your back up? Forget comments in code, what do you do when all your source has been packed into the trunk of a family sedan? Welcome to Who, Me?

Today's story, from a reader Regomised as "Al", concerns his time at a company in the 1980s. The company was working on a project to replace thousands of ageing "dumb" terminals with PCs. "The Great PC Invasion and Distributed Computing Revolution were under way," Al observed.

"The company had hired a collection of experienced PC and minicomputer programmers who were led by a management team of Mainframe Gods (as they viewed themselves)."

We know just the type.

"As a bunch of hotshot PC and UN*X types," he went on, "we demanded a version control system and a tool for backing up the source tree. In their wisdom, the Mainframe Gods chose not to invest in spurious tech like backups and version control, therefore each programmer had a personal responsibility to back up their source code."

It went about as well as you might imagine. Some staff followed the process for a bit, but after a while nobody bothered. Nobody, that is, except for the person who did the builds. "Dave" (for that was not his name) had all the current production code on his PC. Everything. In one place.

It was fine at first. Dave worked hard and also wrote a lot of code. Al couldn't tell how good it was; the words "Code Review" were alien to the company. But the builds happened and the terminal emulation software was delivered. Everyone was happy. Even though Dave had the only copy of the "official" source code.

Al described Dave as "a big guy." He was softly spoken and tended to (mostly) keep his opinions to himself. "He had his eccentricities," remembered Al, "such as a fondness for rifles that he kept in the trunk of his car."

Of course, the inevitable happened. After what Al delicately described as a series of "issues," Dave quit or was asked to leave the company (both mysteriously happened at the same time).

However, rather than march Dave directly out of the building, the geniuses in management gave him the rest of the day to finish up his work.

"During the afternoon of that day, Dave's manager looked out the window to see Dave loading boxes and boxes of floppy disks into his car," said Al.

A curious thing to do on one's last day. Perhaps Dave was just doing a final clear-out of office stationery? Or perhaps...

The manager scurried over to Dave's PC and found it coming to the end of a FORMAT operation. The hard disk containing the only complete copy of the source had been wiped. The floppies Dave was loading up into his car, nestled among the rifles, were the only backups in existence.

"To his credit, Dave's manager talked him off the cliff and got the backups returned to the office," Al told us. Dave was then politely escorted to a coffee shop to complete his last day while a panicked staffer managed to restore the backups to the now blank PC and the project could continue.

"The project did eventually succeed, but we did have further harrowing moments with more conventional causes," said Al. "We did establish regular backups and duplication of key source code."

"The moral of the story: back up your data. Trust but verify."

We like to think Dave had no malicious intent and was simply tidying up after himself. Surely not a final act of vengeance? And considering what else was in the trunk, it could have gone very, very differently.

Ever been tempted to dash out a quick FORMAT C: or a sudo shred just for giggles? Confess all with an email to Who, Me?

Go here to read the rest:

What do you do when all your source walks out the door? - The Register

The State Of The IBM i Base 2022: Third Party Software Conundrum – IT Jungle

April 11, 2022
Timothy Prickett Morgan

Aside from death, most problems are not intractable. But people surely can be, and sometimes are. But luckily not often, and the thing about people is that, generally speaking, they can be reasonable when they are reasoned with. It is with all of this in mind that we come to the next in the State of IBM i Base stories for 2022, where we want to talk about the software trap that the remaining OS/400, i5/OS, and some IBM i shops have gotten themselves into and how we might help them get out of it to the mutual benefit of all.

As best we can figure, based on the data from the annual HelpSystems survey of the IBM i base, about two-thirds of the companies that take the survey have consistently said that they have homegrown applications running on their systems, something that was not asked in the original surveys from years gone by and a thing that I pointed out to HelpSystems, and had the question changed, because I simply did not believe that most of the base had third-party applications. Over the decades, the readers of The Four Hundred have consistently been do-it-yourself application shops, and I simply did not believe that somewhere along the way that had changed so dramatically.

In theory, that means that two-thirds of the base is not facing what I will call the third-party software conundrum, where they are stuck on old releases without maintenance and no easy or cost-effective way to get those applications current. In practice, many IBM i shops are facing massive technical debt in their code, a lack of people with skills and insufficient funding to update their older RPG III, RPG IV, and ILE RPG applications to free form RPG, or worse yet, have lost the source code for their applications. And as a consequence they are just as stuck as anyone using an old suite of applications from a vendor that ended up inside of Infor or Oracle or one of the few remaining mid-sized ERP vendors catering to the IBM i crowd.

Part of the problem with regard to third-party application software, I think, is the fact that there is a long history of open source application code in the IBM midrange, and another part of the problem is the long practice of selling software with a perpetual use license that also has an annual software maintenance fee.

The fact that many of the thousands of application suites available for System/3X and AS/400 systems were available as source code meant that companies buying the software could indulge in customizing it at a level that we have generally not seen in the application space heretofore. There are plenty of IBM midrange shops that used a mix of custom code and heavily customized third-party code to create the systems that run their businesses, and at some point, the code has changed so much that there is no point in paying third-party maintenance on it. Companies could not upgrade to new application versions and suites from the vendor even if they wanted to, because all of those customizations would have to be done again. So it is not just a matter of people not wanting to pay maintenance on application software; it would not get them anything if they had.

There are, of course, IBM i shops that have done a modest amount of customization on third-party code and when the budget gets tight, they stop paying for maintenance on it because they are not changing it, even if they do have the source code. And these days, with modern ERP, CRM, and SCM suites, they probably are not getting the source code for the new software unless it is grandfathered into their vendor contracts.

But even absent that, the way these licenses are sold was always a budgetary headache, and the problem is that people costs rise with gross domestic product and do not have Moore's Law economic scaling, where things get cheaper per unit of capacity with each passing year, as happens with most elements of the system. This is why software maintenance really exists, and it is why it is set at 15 percent to 25 percent of the list price of the application software. That means every four to seven years, the maintenance fees are like buying the software all over again from an economic standpoint. And if the code is not changing because customers don't want it to, and all the vendor is really doing is supplying security patches, then you can understand why IBM i customers might resent these fees.

Yes, it is unfair that some customers stayed on maintenance and paid and others did not, and that some IBM i shops expect a break on after license maintenance fees if they return to the fold and upgrade to software that is certified for modern IBM i operating systems and modern Power Systems hardware. But as I explained to a reader on LinkedIn last week in response to the software release problem with IBM i, where so many customers are on 7.1 or earlier releases, we can all dig our heels in and go straight to hell together, or we can figure out some way that everyone gives a little and we all benefit.

The application ISVs can dig their heels in and say they are entitled to all the back maintenance before getting customers current, and they will probably not get very far. If customers are going to spend huge amounts of money and have to massively customize a newer version of the code anyway, they will very likely just move to a different platform for political reasons more than technical ones. The economics will suck no matter what.

The customers who just sit there on older releases of operating systems and applications are sitting on a ticking time bomb, but sometimes this is in fact the least risky behavior as well as the least costly, right up to the catastrophe where all of this comes home to roost.

IBM has a hand in this, too, and has shown the way with amnesties on after license charges for IBM i Software Maintenance in 2015 and again in 2020, which you can see in the Related Stories section below.

All I know is that IBM i shops, application ISVs, and Big Blue all have to work together to solve this problem for those companies that rely on third-party applications, and the fees that IBM i shops have to pay should be proportional to the amount of work it takes either to get old suites certified on new IBM hardware and operating systems or to get customizations ported to new suites that are already certified on them. (I think we all know which one is easier and cheaper.)

There is one more thing that we know. Real mitigation for the Log4j security vulnerabilities has to be done, and that means IBM has to write new and secure logging software that snaps in place of Log4j and allows IBM i releases from 7.1 forward, including the Heritage Navigator as well as the new IBM i Navigator, to work. Telling people with older releases to turn off Log4j and only turn it on to use Heritage Navigator at their own risk is not doing right by the customer, and IBM damned well knows it. When nearly half of the base is on IBM i 7.1, as I believe it is, and another fifth is on IBM i 6.1, and many of them are stuck, Big Blue simply cannot behave this way.

7.1 Flew Over The Cuckoo's Nest

The State Of The IBM Base 2022, Part Three: The Rusting Iron

The State Of The IBM i Base 2022, Part Two: Upgrade Plans

The State Of The IBM i Base 2022, Part One: The Operating System

IBM Grants Amnesty On Software Maintenance After License Charges

Where Is The Power Systems-IBM i Stimulus Package?

IBM Grants After License Amnesty For Software Maintenance

IBM i 7.1 Extended Out To 2024 And Up To The IBM Cloud

Big Blue Revives IBM i 7.1 With Power9 Support

IBM Further Extends Service Extension For IBM i 7.1

Service Extension Outlined For IBM i 7.1 And PowerHA 7.1

Say Sayonara To IBM i 7.1 Next Spring

Big Blue To Sunset IBM i 6.1 A Year From Now

Follow this link:

The State Of The IBM i Base 2022: Third Party Software Conundrum - IT Jungle

Can we solve the zero-day threat once and for all? No, but here's what we can do – The Register

Webinar Last December's Log4j crisis brought the danger of zero day vulnerabilities to the front pages. But while one key flaw has been put under the microscope, does that mean the problem is over?

Sadly, the answer is no. There is no way of knowing how many other open-source apps have zero day vulns, not to mention enterprise apps and APIs.

The fact is Log4j was a wake-up call, and remediating zero days is going to be an ongoing chore for security teams for the foreseeable future.

Which is why you should join this webcast, Mitigate Zero-Day Exploits, on April 26 at 5pm BST (9am PT), which doesn't just bring together experts in the field but takes you through the methods they use.

Our own Tim Phillips will be joined by Contrast Security's Larry Maccherone, previously head of DevSecOps at Comcast; as well as CM.com CISO Sandor Incze; Floor & Décor security architect Darius Radford; and Joe Zanchi, lead cyber security policy and standards at Humana.

This stellar panel will explain how they grappled with the Log4Shell crisis and continue to deal with vulnerabilities, whether in open-source code, enterprise web applications or APIs. And they'll show you how to understand your open-source estate and how to keep it close to the latest versions.

They'll also explain whole-app analysis, and why this is better at finding vulnerabilities. And they'll show you how to block attacks in the short term, without having to rely on a web application firewall.

Tapping into this cybersec brains trust is simple. Just head here, register, and we'll remind you on the day. The spectre of zero days isn't going away, but after this session you'll be far better placed to tackle it.

Sponsored by Contrast Security

Read the original post:

Can we solve the zero-day threat once and for all? No, but here's what we can do - The Register