End of open source: Dispelling the myth – ITProPortal

Following the Log4j vulnerability disclosure in December 2021 and the recent case of a developer sabotaging his own JavaScript libraries colors.js and faker.js, the state of open source software has been called into question.

With a high-profile meeting at the White House on open source, and Executive Orders from US President Biden, some have even suggested it is the end of open source. While it might be tempting to view a major vulnerability as an indication of open source somehow being deficient, the reality is far from that.

Open source software is neither more nor less secure than commercial software. In fact, most commercial software either includes or runs on open source technologies. Open source simply means that the software is developed in a manner where the source code is available to anyone who wants it.

What we have been seeing with the Log4j response from the Apache Log4j team is exactly what we would expect from a team that takes the software it produces seriously and is responsive to the needs of its install base. Considering that they are volunteers, such a response is indicative of the pride of ownership we often see within open source communities.

Rather than instigate the end of open source, an incident like Log4j is likely to improve open source development as a whole, much in the same way that Heartbleed improved the development practices of both open and closed source development teams. So, if open source is here to stay, what should organisations be doing moving forwards to more efficiently identify and mitigate vulnerabilities?

Well, the idea of identifying and mitigating vulnerabilities requires us to define some roles up-front. Most people expect their software suppliers - that is to say the people who produce the software they depend upon - to test that software. The outcome of that testing would be a set of findings highlighting the weaknesses in the software that the supplier produces. In an ideal world, each of those weaknesses would be resolved prior to the software shipping.

In the real world, however, some of those weaknesses will be fixed, some will be marked as "no plan to fix", and some will optimistically be fixed in a future release. What the list of weaknesses is, and which ones were fixed, is not something a supplier typically divulges. Moreover, no one tool can find all weaknesses: some only work if you have the source code, while others require a running application.

You will note that no mention was made of the word vulnerability here, as it has a special and simple meaning. In software, a vulnerability is simply a weakness that can be exploited, or that has a reasonable chance of exploitation.

Most, but not all, vulnerabilities are disclosed via a centralised system known as the National Vulnerability Database, or simply the NVD. While the NVD has roots in the US, and is maintained by the US Government, the contents of the NVD are available to all and replicated in multiple countries. From a governance perspective, monitoring for changes in the contents of the NVD is a good way of staying on top of new vulnerability disclosures.

The problem is that the NVD updates more slowly than media coverage, so with major vulnerabilities like Log4Shell, Heartbleed and Dirty Cow, the team discovering the vulnerability might create a branded name for it in an effort to broaden awareness of the issue. Creating a governance policy that relies on monitoring media coverage of these cyber-events is certainly not good practice.

If media coverage as an input to vulnerability management is a bad idea, and the NVD is a bit slow to provide all details, what is the best governance policy then? That comes from a type of security tool known as Software Composition Analysis, or SCA. An SCA tool looks at either the source code for an application, or the executable or libraries that define the application, and attempts to determine which open source libraries were used to create that application.

The listing of those libraries is known as an SBOM, or Software Bill of Materials. Assuming the SCA software does its job properly, a governance policy can then be created that maps the NVD data to the SBOM so you know what to patch. Except that there is still the latency of the NVD data to account for.
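To make the governance step described above concrete, here is a minimal, hypothetical sketch of matching an SBOM's component list against a vulnerability feed. The component names, versions, and data shapes are illustrative; a real SCA tool would work from NVD CPE records and affected version ranges rather than exact version strings.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class SbomMatcher {
    // Return the SBOM components whose pinned version appears in the
    // advisory feed (component name -> set of known-vulnerable versions).
    public static Set<String> vulnerableComponents(Map<String, String> sbom,
                                                   Map<String, Set<String>> advisories) {
        Set<String> hits = new TreeSet<>();
        for (Map.Entry<String, String> component : sbom.entrySet()) {
            Set<String> badVersions = advisories.get(component.getKey());
            if (badVersions != null && badVersions.contains(component.getValue())) {
                hits.add(component.getKey());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Illustrative SBOM: component name -> version used by the application.
        Map<String, String> sbom = Map.of(
                "log4j-core", "2.14.1",
                "commons-text", "1.10.0");
        // Illustrative advisory feed, e.g. derived from NVD data.
        Map<String, Set<String>> advisories = Map.of(
                "log4j-core", Set.of("2.14.1", "2.15.0"));
        System.out.println(vulnerableComponents(sbom, advisories)); // prints [log4j-core]
    }
}
```

In practice, the hard parts are everything this sketch glosses over: identifying the components in the first place, whether from source or binaries, and coping with the latency and version-range semantics of the NVD data.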

Some of the more advanced SCA tools solve that problem by creating advisories that proactively alert users when there is an NVD entry pending, but where the details of that NVD entry are augmented by the SCA vendor. Some of the most advanced tools also invest in testing or validating which versions of the software are impacted by the vulnerability disclosure.

Nevertheless, while SCA software can close the gap between disclosure and identification, it should be noted that it does have a fundamental limitation. If the SCA software has not scanned all of your applications, then at best it can only flag new vulnerability disclosures for a subset of your applications.

From a governance policy perspective, it then becomes an IT function to identify all software, and a procurement function to ensure that all software, including updates and free downloads, comes under an SBOM and that the SBOM is validated using SCA software. Since software is available in both source and binary formats, it is critical that governance teams heading down this path select SCA software that can effectively process software in all forms and formats. Such a governance policy would assist in identifying new vulnerability disclosures and their impact on the business, but would leave the matter of effective mitigation to a different policy, since mitigation requires application testing.

Ensuring the security of one's own technology is one thing, but the beauty of open source is that it is built to be collaborative.

To paraphrase Abraham Lincoln, open source is technology of the people, by the people and for the people. The modern open source movement was founded on the principle that if you did not like the way the code was working, then you were free to modify it and address whatever gaps in functionality that were perceived to exist.

Part of the problem that we face today is a sentiment that has consumers or users of open source projects behaving as if the open source project is a commercial software vendor.

If you look at the issues list of any reasonably popular open source project on GitHub, you will see feature requests and comments about when certain problems might be resolved. Such issue reports and complaints about serviceability carry an implicit expectation that a product manager is on the receiving end of those requests and that they will be added to a roadmap and eventually released, all for free.

In reality, gaps in functionality, and even perceived bugs, represent opportunities not to request free programming services, but instead to contribute to the future success of code that is clearly important to the person complaining.

Yes, some people won't know the programming language used by the project, but to expect other people to prioritise a complaint from an unknown third party over changes that solve problems for active contributors is not realistic. As much as anything, open source functions through the altruism of contributors.

Over recent years we have heard core contributors for popular open source projects express frustration about the profits made by large businesses from the use of their software. While it is easy to relate to someone putting their energy into a project only to have a third party profit from the efforts, the reality is that if that third party is profiting from the efforts of an open source development team, then they should be contributing to its future success.

If they don't, then they run the risk not only that the code in question might change in ways they didn't expect, but also that they might face delays in applying fixes when security issues are identified and resolved. After all, if a business isn't taking the time to engage with the teams creating the software that powers its business, then it likely does not know where all the software powering its business originates and cannot reliably patch it.

Finding vulnerabilities in open source is not itself a problem, but detecting the software defects that represent exploitable weaknesses is an important task. While open source and closed source software have an equal potential for security issues, with open source it is possible for anyone to identify those issues. With that in mind, organisations must take proactive steps, ones that do not rely on media coverage, to monitor the latest vulnerabilities.

Equally important, they must play a contributing role to the open source projects they benefit from, otherwise they might fall victim to unexpected code changes or delayed awareness of critical patches.

Tim Mackey is Principal Security Strategist at Synopsys.

See the original post:
End of open source: Dispelling the myth - ITProPortal

The wild world of non-C operating systems – The Register

Believe it or not, not everything is based on C. There are current, shipping, commercial OSes written before C was invented, and now others in both newer and older languages that don't involve C at any level or layer.

Computer hardware is technology, yet very few people can design their own processor or build a graphics card. But software is a form of culture. Open source is created by volunteers, even if they end up getting paid jobs doing it. Even rejecting open source is a choice: paying for Windows or macOS instead reflects a preference.

This is especially visible when it comes to text editors, and even more so with programming languages. People get passionate about this stuff. So statements such as "C isn't a programming language any more" can be upsetting. Most people live and work in the cultures that are Unix and Windows, and if they are all you've ever known, or know best, then it's easy to think they are the whole world.

But that doesn't make it true.

C fans tend to regard BCPL as an unimportant transitional step, but it was used in two celebrated OSes. One was the TRIPOS operating system, later used as the original basis of AmigaOS. The core system software of the original GUI workstation, the Xerox Alto, was also written in BCPL, although much of it was later rewritten in Mesa. The Alto OS survived into the late 1990s as GlobalView, which Xerox sold as a high-end desktop publishing tool.

In the 1960s, ALGOL was huge. It's the granddaddy of most modern imperative languages.

Burroughs Corporation designed a series of mainframes, the Burroughs Large Systems, around the pioneering idea of writing the OS and all applications in a high-level language, ALGOL. The first machine, the B5000, was launched in 1961. Burroughs merged with Sperry in 1986 to form Unisys. The Unisys ClearPath MCP OS is a direct descendant of the B5000's MCP, or Master Control Program. (Yes, the same name as the big baddie in Tron.)

MCP was the first commercial OS to support virtual memory and shared libraries. Its purely stack-based design inspired Chuck Moore to develop Forth, HP to design the HP3000 mid-range computers, and influenced Alan Kay in the development of Smalltalk at Xerox PARC.

The current version of ClearPath MCP is 20.0, released in May 2021.

Swiss boffin Niklaus Wirth worked at Xerox PARC for two one-year sabbaticals. Today he's best known for inventing Pascal, which happened to catch on as an applications language, notably as Borland's Delphi. An earlier implementation, the UCSD p-System, was a complete OS, one of the three IBM offered for the original PC in 1981.

Pascal was just one stage in a series of "Wirthian" languages. The successor to Pascal was Modula, but it was quickly superseded by Modula-2, based on Wirth's time working with Mesa on the Alto at Xerox PARC. Modula-2 was specifically designed for OSes as well as apps.

After his first sabbatical in Palo Alto in 1976-1977, once Wirth got back home to ETH Zürich in 1977, he and his team designed and built the Lilith workstation as a cheaper replacement for the $32,000 Alto. Its object-oriented OS, Medos-2, was entirely built in Modula-2.

Meanwhile, in the USSR, a team at the Soviet Academy of Sciences in Novosibirsk, inspired by Wirth's work but limited by COCOM import restrictions, built their own OS called Excelsior in Modula-2, which ran on a series of workstations called Kronos.

Wirth returned to Xerox PARC in 1984-1985, and after that trip, back at ETH, he designed the Ceres workstation, with an OS implemented in a new language, Oberon. The Oberon system's text-based tiling-window interface inspired Plan 9 from Bell Labs, the successor to Unix.

(The influence went both ways. A team at Xerox PARC developed Pascal into the Euclid language, and a variant was even used to develop a Unix-compatible OS called Tunis, the Toronto University System.)

Oberon has been called the "overlooked jewel" [PDF] of computer science. It has multiple descendants today. Wirth came out of retirement in 2013 to help Project Oberon, which runs a modernized version of the OS on modern FPGA-based hardware. Another version, Native Oberon, runs on x86-32 PCs and under QEMU.

If you want to try it without installing, there's also a JavaScript version that runs in your browser. Its UI is efficient but quite strange, because it was designed in the late 1980s and predates almost every other graphical desktop except the original Mac.

Later researchers at ETH built a language called Active Oberon, and using that, a newer OS with a slightly more conventional zooming GUI called "Bluebottle". Originally the new Oberon OS was called AOS, but so are multiple other projects, so now it's called A2.

Most of these systems are relatively obscure, though. C and its descendants and derivatives remain far more mainstream. Even if Linus Torvalds is famously not fond of it, "the world is built on C++." It's not usually thought of as a suitable language for implementing OS kernels, but two of the most-loved OSes ever were nonetheless written in it.

The Mac OS X might-have-been BeOS was entirely written in C++, and so is its modern FOSS re-implementation Haiku. It took the Haiku project 17 years to get to its first beta version, but it's now on Beta 3. Unlike its ancestor, these days it has a lot of ported FOSS apps from Linux and even some limited ability to run Windows programs.

The C++ OS that was by far the biggest commercial success was Symbian. Psion built it from scratch in the late 1990s, and for a while it was the dominant smartphone OS. We did an epic three-part feature, "Symbian, The Secret History" (part 2 and part 3) a decade ago.

But Nokia failed to capitalize on it, and Apple's iOS and Android killed it off. It eventually went open source, although not without the odd hitch.

Today, C++ is being used to build the Genode OS Framework. The latest version, release 22.02, came out just last month.

In terms of popularity, depending on which survey you look at, Rust is stalking or eclipsing C++, and there are several projects to build OSes in Rust, notably including the somewhat Minix-like Redox OS, the more experimental Theseus, and the embedded Tock.

Even Microsoft has experimented with Singularity, partly implemented in C# and other .NET-based languages, as is the FOSS COSMOS.

BCPL, C, C++, Rust, and the Pascal family all inherit from the design of ALGOL. All are imperative programming languages. There are other, different schools of language design. One of the oldest and best-loved programming languages around is Lisp, designed by John McCarthy in 1958.

Symbolics, the company which owned the first ever dotcom domain, built an entire OS in Lisp, called Genera. The last version, OpenGenera, ran on an emulator on DEC Alpha workstations, and today you can run it on Linux, but sadly the inheritors of the defunct company won't open-source it.

However, there was also a family of Lisp-based graphical operating systems that ran on Xerox's later Star workstations, written in a dialect called Interlisp. This is now FOSS and being actively modernized.

There are also several modern efforts to build OSes in Lisp, including Mezzano, Interim, and ChrysaLisp, the latter by one of the lead developers of Tao Systems' TAOS.

It would also be unfair not to include Urbit. This is a completely from-scratch effort that has reinvented some very Lisp-like technology, although the project's links to cryptocurrencies, and some of the politics of its founder Curtis Yarvin, make the author wary.

This is not intended to be a comprehensive list. There are too many such efforts to count, and that's even with intentionally excluding early OSes that were partly or wholly written in assembly language. There were several Java-based OSes, including Sun JavaOS and smartphone OS SavaJe. There are still stranger things out there, such as House, implemented in the purely functional Haskell.

This is just an overview, but I hope it makes a point: that OSes do not begin and end with C. If you can name a language that can run on bare metal, someone somewhere has probably tried to implement an OS in it, and some of them have become a lot more successful than you might expect.

Read more:
The wild world of non-C operating systems - The Register

New Spring4Shell Zero-Day Vulnerability Confirmed: What it is and how to be prepared – Security Boulevard

On March 29, 2022, a Chinese cybersecurity research firm leaked an attack that could impact most enterprise Java applications globally. An investigation of the issue showed that the root cause was a vulnerability in the widely used, free, community-developed, open-source programming framework called Spring Core.

The Spring Framework is the foundation for most enterprise applications written in the Java programming language. Our recent data showed Spring Core as being used by 74% of Java applications. Specifically, Spring provides the plumbing of enterprise applications so that teams can focus on application-level business logic, without unnecessary ties to specific deployment environments.

As of Wednesday, March 30, the Contrast Security Labs team had confirmed the zero-day vulnerability using a public proof of concept (PoC) for Spring4Shell, which could be the source of remote code execution (RCE).

Spring translates the body and parameters of an HTTP request and turns them into a domain object for developers to use. This makes their lives easier.
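As a rough sketch of what such data binding does under the hood (not Spring's actual implementation), a binder walks the request parameter names and invokes the matching JavaBean setters on the target object:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class BindDemo {
    // A trivial domain object of the kind a controller might accept.
    public static class User {
        private String name;
        public void setName(String n) { this.name = n; }
        public String getName() { return name; }
    }

    // Naive binder: for each request parameter, call the matching setter.
    public static void bind(Object target, Map<String, String> params) {
        for (Map.Entry<String, String> e : params.entrySet()) {
            String setter = "set" + Character.toUpperCase(e.getKey().charAt(0))
                    + e.getKey().substring(1);
            try {
                Method m = target.getClass().getMethod(setter, String.class);
                m.invoke(target, e.getValue());
            } catch (ReflectiveOperationException ex) {
                throw new IllegalStateException(ex);
            }
        }
    }

    public static void main(String[] args) {
        User u = new User();
        bind(u, Map.of("name", "alice")); // simulates ?name=alice
        System.out.println(u.getName());  // prints alice
    }
}
```

The danger arises when parameter names can address properties the developer never intended to expose: a real binder like Spring's follows dotted paths into nested objects, which is exactly what Spring4Shell abuses.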

In the process of building an object graph to give to the developer, Spring takes special care not to let attackers control any parts of the Class, ProtectionDomain, and ClassLoader of the instance being created. Unfortunately, changes to the Class object in Java 9 meant the checks Spring performed were no longer enough.

The code in question is shown here:

https://github.com/spring-projects/spring-framework/blob/b595dc1dfad9db534ca7b9e8f46bb9926b88ab5a/spring-beans/src/main/java/org/springframework/beans/CachedIntrospectionResults.java#L288

PropertyDescriptor[] pds = this.beanInfo.getPropertyDescriptors();
for (PropertyDescriptor pd : pds) {
    if (Class.class == beanClass && ("classLoader".equals(pd.getName()) ||
            "protectionDomain".equals(pd.getName()))) {
        // Ignore Class.getClassLoader() and getProtectionDomain() methods - nobody needs to bind to those
        continue;
    }

This code attempts to restrict access from overriding these object graph paths:

However, because the Class object now exposes a getModule() method, attackers can now take this slightly different path:

The introduction of Class#getModule() couldn't have been directly foreseen when this code was written, although we could have a spirited debate about the robustness of this style of check.
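The extra hop is easy to demonstrate in plain Java: the class loader that the old check guards against reaching via Class.getClassLoader() is also reachable through the Module type introduced in Java 9.

```java
public class PathDemo {
    public static void main(String[] args) {
        Class<?> c = PathDemo.class;
        // The direct hop that Spring's property-name check guards against:
        // class -> classLoader.
        ClassLoader direct = c.getClassLoader();
        // The Java 9+ indirect hop: class -> module -> classLoader.
        // A check that only looks for the property names "classLoader" and
        // "protectionDomain" never inspects the intermediate "module" step.
        ClassLoader viaModule = c.getModule().getClassLoader();
        System.out.println(direct == viaModule); // prints true
    }
}
```

Both expressions reach the same class loader object; only the property path differs, which is why a name-based denylist on the first hop was no longer sufficient.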

The consequences of handing users control of properties of the ClassLoader depend on the features of the ClassLoader being exploited.

The exploit and PoC being passed around show an attacker exploiting features of Tomcat 9's WebAppClassLoaderBase. The exploit works in a few stages:

Java 9 was released in September 2017, so this vulnerability has been exploitable in production apps and APIs for almost five years.

The video below shows the exploit in a few quick requests. The exploit posts a payload to the index of a basic Spring Boot application, taking advantage of the missing binding configuration to create a malicious JSP on the filesystem in a web-accessible directory. From there, a request is sent with the id command to query the identity of the user running the application; it returns uid=0(root) gid=0(root) groups=0(root), showing that in this case the application is running as root.

There are a few requirements for an application to be vulnerable:

All of the above, plus you must be running Tomcat (the vulnerable version range is not yet known, but it certainly includes 9), because the exploit takes advantage of Tomcat's ClassLoader and logging facility to write a malicious, backdoor JSP.

It's prudent to assume that exploits will be coming that take advantage of different class loaders or other environment contexts, and that any Spring applications satisfying the conditions of the first section will be exploitable.

For now, Contrast Labs recommends:

1) For all who are using Spring Core and binding to non-basic types such as POJOs, set the allowed fields to specify the only bindings you wish your application to use.

2) For Contrast customers, ensure Contrast Protect is enabled on your Spring applications (especially those on JDK 9+). As you can see in the video below, when Protect is configured properly, it blocks the attack.
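The first recommendation can be configured through Spring's standard binder hooks. The sketch below uses Spring's real WebDataBinder.setAllowedFields API, but the class and field names are hypothetical; list whichever POJO properties your application actually needs to bind.

```java
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.InitBinder;

// Applies the binding restriction to every controller in the application.
@ControllerAdvice
public class BindingRestrictionAdvice {

    @InitBinder
    public void restrictBinding(WebDataBinder binder) {
        // Only these request parameters may be bound to form objects;
        // anything else, including "class.module.classLoader..." paths,
        // is ignored by the binder.
        binder.setAllowedFields("firstName", "lastName", "email");
    }
}
```

An allowlist like this is preferable to denylisting dangerous paths, since it fails closed if a future JDK or framework change exposes a new traversal route.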

Spring4Shell with Protect Video:

To best protect your applications, these are the settings you should enable, in blocking mode, in your Protect-monitored applications.

(a) Command Injection in blocking mode

Example when Command Injection is in Blocking mode.

(b) For more visibility into attacks targeting your environment, enable CVE Shields for CVE-2014-0112 and CVE-2014-0114. (These specific CVE shields are for Struts issues; however, due to the similar nature of the payloads, they provide visibility into attacks through Probes.)

Example when CVE shields are enabled

We have published communication to our Support Portal letting customers know we are researching implications and we will let them know exactly how they can fix this issue in their systems. Our team is currently researching the ability to exploit this vulnerability outside of a Tomcat environment.

As always, Contrast will continue to monitor the situation with Spring4Shell. The security of our customers is of utmost importance to us. If you have any questions, concerns, or would like to discuss this issue further, please don't hesitate to reach out to us at [emailprotected].

*This blog article will include updates about Spring4Shell as they become available.

David Lindner & Arshan Dabirsiaghi



Read more:
New Spring4Shell Zero-Day Vulnerability Confirmed: What it is and how to be prepared - Security Boulevard

10 Podcasts on Programming to Learn How to Be a Better Developer – Business Insider

With so many free resources from websites to YouTube tutorials available to help teach new skills, it's a great time to be a developer.

Whether you're learning how to code, a self-taught programmer, or an industry veteran, podcasts in particular can be a convenient tool for enhancing your skills and technical knowledge. But being a software developer isn't just about complicated code and programming languages: developers create and maintain products that influence our everyday lives, so it's important to understand the human impact software creates, too.

While many shows help break down the technical, there are several popular podcasts that discuss that impact and touch on often overlooked aspects of development. Many are also hosted by developers themselves, who provide behind-the-scenes looks at careers in programming. Software developers also use podcasts to share career tips, thoughts on current tech news, and what it's like to be a tech worker during a huge moment of change within the industry.

Based on feedback from developers on social media, Insider compiled a list of podcasts every aspiring software developer can add to their queue to improve their programming skills.

Go here to see the original:
10 Podcasts on Programming to Learn How to Be a Better Developer - Business Insider

University Recreation Tennis Center Set to Open – University of Arkansas Newswire

Photo Submitted

Students test out the new tennis courts

University Recreation's (UREC) new tennis courts are scheduled to open on Thursday, March 31, at 11 a.m. The UREC Tennis Center is located at 1357 W. Indian Trail off Martin Luther King Blvd. in Fayetteville. The UREC Tennis Center features 12 tennis courts (four courts with backboards to allow for individual play) and a service facility that includes an equipment checkout center and restrooms. Pickleball can be played on the courts. Tennis racquets, pickleball racquets and balls can be checked out for use. Participants are encouraged to bring their own tennis balls.

"There have been so many positive experiences being a part of the Tennis Club at the University of Arkansas the past four years. Our freshman year we played on two courts, where we all took turns playing. We are so excited that this facility will allow us the opportunity to grow even more as a team," said Clare Kellough and Craig Owens, 2021-2022 UREC Tennis Club co-presidents.

"We are excited to open the UREC Tennis Center; this project will bring tennis back on campus for recreational users and expand options for programming and educational opportunities," said Becky Todd, executive director of University Recreation. "We look forward to welcoming the campus community to this beautiful facility."

A grand opening celebration is set for Tuesday, April 12.

Read more here:
University Recreation Tennis Center Set to Open - University of Arkansas Newswire

Bonitasoft introduces new tooling for developers to ease testing and deployment of automation projects – CIO Dive

SAN FRANCISCO

The Bonita digital process automation platform, intentionally designed for developers, offers a brand-new test kit and improved deployment technologies

March 31, 2022, San Francisco, CA: Bonitasoft, the leading open-source digital process automation company, today announced the launch of significant new developer-focused additions to its flagship Bonita digital process automation platform. With the new Bonita Test Toolkit and other technologies delivered with Bonita 2022.1, development teams now have the tooling they need to test, deploy and continuously deliver business process automation projects more easily. More and better testing supports sustainable automation, providing good stability for Bonita projects and allowing development teams to make changes with greater confidence.

The Bonita Test Toolkit accelerates testing and deployment of complex, core, and critical automation projects. Projects can be tested efficiently at each step (as well as end-to-end) with the tooling provided to write and execute tests locally - in the Bonita Studio development environment or in any remote non-production environment. The Bonita Test Toolkit can be executed from an IDE, a command line, or a continuous integration (CI) pipeline. It is also integrated into the Bonita Continuous Delivery add-on to include testing in the delivery pipeline.

Along with the new Bonita Test Toolkit, Bonita 2022.1 also brings a new Docker image, improved security with stable handling of user authorizations across updates, and an updated Bonita Update Tool.

Bonita 2022.1 also comes with an updated Docker image. Its healthcheck covers all Bonita levels, so if something goes wrong it is immediately visible and can be managed. Users can now either take the Bonita Docker image with its own database schema or use their own database configured for Bonita. The new Bonita Docker image also includes its own LDAP synchronizer, all adding up to a smoother, easier deployment.
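The announcement does not detail how the image's healthcheck is wired. As a generic illustration of the mechanism (not Bonita's actual configuration; the endpoint path and timings below are hypothetical), a Docker image can declare a HEALTHCHECK that periodically probes a status endpoint:

```dockerfile
# Illustrative sketch only: probe a hypothetical status endpoint every
# 30 seconds; after three consecutive failures Docker marks the
# container "unhealthy", making the fault immediately visible.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://localhost:8080/status || exit 1
```

An orchestrator such as Docker Compose or Kubernetes can then surface the unhealthy state and restart or route traffic away from the container.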

The Bonita 2022.1 platform also includes better performance and robustness in its Runtime engine, with more robust handling of security vulnerabilities. Dynamic permission authorizations are activated by default. Any custom code added for permissions remains intact during platform updates, so the development team needs no time-consuming rework.

The Bonita Update tool permits easy, automated migration of Bonita process applications from one Bonita version to another, and it comes with improved documentation.

"With these and other new and improved capabilities in and with Bonita, we are providing the tools we promised to developers to ease testing and deployment for enterprise-critical applications," said Miguel Valdes Faura, Bonitasoft CEO and co-founder. "Our strategy is to empower developers, and the release of Bonita 2022.1 is specifically intended to support their needs."

The Bonita Community open-source edition includes all capabilities required to develop and deploy process automation projects, and can be downloaded here.

###

About Bonitasoft

Bonitasoft fully supports digital operations and modernization of information systems with Bonita, an open-source and extensible platform for automation and optimization of business processes. The Bonita platform accelerates development and production with a clear separation between visual programming and coding capabilities. Bonita integrates with existing information systems, orchestrates heterogeneous systems and provides deep visibility across all enterprise processes.

Go here to read the rest:
Bonitasoft introduces new tooling for developers to ease testing and deployment of automation projects - CIO Dive

Daily Crunch: Intel will reportedly buy cloud-optimization startup Granulate for $650M – TechCrunch

To get a roundup of TechCrunch's biggest and most important stories delivered to your inbox every day at 3 p.m. PT, subscribe here.

Hello and welcome to Daily Crunch for Thursday, March 31, 2022!

It's a beautiful day in our neck of the woods, and we have a great lineup of news for you today, so let's goooooo.

Grab your calendar and add these two: We're doing a Data and Culture Transformation event on April 26 for the big data aficionados, and now is your last chance to buy discounted tickets for our in-person TC Sessions: Mobility event on May 18 and 19, as well as the virtual event on the 20th.

Don't worry, it's Thursday. The weekend is almost here. You can do it; we believe in you. Christine and Haje

We get a teensy bit excited whenever Y Combinator does a set of demo days. I recommend that you read all our coverage this week, obvz, but if you want a quick summary, read part 1 and part 2 of our "everything you need to know" posts, make yourself a cup of coffee, and follow that up with our favorite startups part 1 and part 2, then pour yourself an adult beverage and wrap it all up with Devin's irreverently irresistible (and irrationally ironic) review of his favorite YC logos.

'Tis the season for new venture funds, apparently. Freestyle closed its sixth fund, adding $130 million of dry powder to invest, while Gumi Cryptos Capital (gCC) has a $110 million block of cash in the form of its second fund to deploy into the crypto universe.

Docker was on the ropes for a little while there, but hooo boy did it make a comeback. The company just announced a whale of a round, raising $105 million of fresh capital on a $2 billion valuation.

More stories of up, up, and away:

Nothing beats experience like experience, which is why we were happy to run a column written by Zach DeWitt, winner of the 2013 TechCrunch Meetup and Pitch-off.

DeWitt, who became a VC after selling Drop, Inc. to Snapchat in 2016, shares five essential lessons for first-time founders wandering in the wilderness in search of an investor who'll be a true partner.

"There's an inherent power imbalance when asking a stranger for money, but VCs should work to earn your trust," writes DeWitt.

In many ways, it's like finding the right spouse.

(TechCrunch+ is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Read the original post:
Daily Crunch: Intel will reportedly buy cloud-optimization startup Granulate for $650M - TechCrunch

Top 10 Algorithms Helping the Superintelligent AI Growth in 2022 – Analytics Insight

Superintelligent AI is not here yet, but these top 10 algorithms are extensively working towards its growth.

Superintelligence, roughly defined as an AI algorithm that can solve all problems better than people, will be a watershed for humanity and tech. Even the best human experts have trouble making predictions about highly probabilistic, wicked problems. And yet those wicked problems surround us. We are all living through an immense change in complex systems that impact the climate, public health, geopolitics, and basic needs served by the supply chain. Even though the actual concept of superintelligent AI is yet to be materialized, several algorithms are working to help in its growth. Here are such top 10 algorithms that are building a future for the growth of superintelligent AI.

This is the beginning of a superintelligent AI system that translates natural language to code. Codex is the model that powers GitHub Copilot, which was built and launched in partnership with GitHub a month ago. Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user's behalf, making it possible to build a natural language interface to existing applications.

CLEVER (Combining Levels of Bug Prevention and Resolution techniques) was created in a joint effort between Ubisoft and Mozilla developers. Clever-Commit is an AI coding assistant that combines data from the bug-tracking system and the codebase to help find mistakes and bugs in code. The assistant is currently used within Ubisoft for game development. It is one of the best AI coding systems aiming for superintelligent AI.

AlphaCode was tested against challenges curated by Codeforces, a competitive coding platform that shares weekly problems and issues rankings for coders similar to the Elo rating system used in chess. These challenges are different from the sort of tasks a coder might face while making, say, a commercial app.

Built on AI and machine learning, Embold is an intelligent, multi-layered analyzer for software projects that looks toward the growth of superintelligent AI. It assesses the state of software quality, identifies issues, suggests fixes, and recommends code review for the specific issue. It analyses source code using techniques like natural language processing (NLP), machine learning, and a set of algorithms in order to find design issues, bugs, and so on.

Tabnine's Public Code AI algorithm is the foundation for all its code completion tools, and it's a strong algorithm set for the emergence of superintelligent AI. The Free, Advanced, and Business level solutions train on trusted open-source code with permissive licenses. Tabnine's AI Assistant anticipates your coding needs, providing code completions for you and your development team that boost your productivity.

mabl is a Software-as-a-Service (SaaS) provider with a unified DevTestOps platform for AI and machine learning-based test automation. The key highlights of this solution include auto-healing tests, AI-driven regression testing, visual anomaly detection, secure testing, data-driven functional testing, cross-browser testing, test output, integration with well-known tools, and much more.

Augmented Coding is a set of tools that leverage the power of AI to enhance the coding process, making it easier for developers to cover compliance needs over documentation, reuse of existing code, and code retrieval within your IDE. It is one of the best AI coding systems available in the market today.

Pylint is a Python source code analyzer that searches for programming mistakes, helps enforce a coding standard, and more. This quality checker for Python incorporates several features: for example, checking the length of code lines, error detection, and refactoring help by detecting duplicated code, among others. It is one of the best AI coding systems that is going to be a vital element in the growth of superintelligent AI.

Sketch2Code is a web-based solution that uses artificial intelligence to transform a handwritten user interface design from an image into valid HTML markup. It works by first detecting the design patterns, understanding the handwritten sketch or text, understanding the structure, and then generating valid HTML code that matches the detected layout and its design elements. It is one of the best AI coding systems available in the market today.

AI-assisted development. IntelliCode saves you time by putting what you're most likely to use at the top of your completion list. IntelliCode recommendations are based on thousands of open source projects on GitHub, each with over 100 stars. When combined with the context of your code, the completion list is tailored to promote common practices. It is one of the best AI coding systems that are as good as human programmers.


More:
Top 10 Algorithms Helping the Superintelligent AI Growth in 2022 - Analytics Insight

How this bootstrapped software company grew over 100 pc amidst COVID-19 helping SMEs, enterprises – YourStory

The COVID-19 pandemic enabled a never-before-seen digital adoption among Indian businesses from all sectors.

Be it micro, small, or even large enterprises embracing technology or digital means of operating business has been the highest amidst the pandemic.

Gautam Rege, Co-founder and Director of Josh Software Inc, a Pune-based software development company, says, "It might sound ambiguous, but the pandemic period was insanely good for tech companies. Our growth was 105 percent in FY 2020-21, and we are on the rising curve."

Founded in 2007, Josh Software provides sustainable tech solutions to SMEs, startups, and large enterprises across industries, including healthcare, manufacturing, insurance, education, sports, media, and more.

Besides India, the Pune-based company is present in Dubai and the US. In FY 2021-22, the company claims to have clocked Rs 70 crore in turnover.

In an interaction with SMBStory, Gautam discussed how Josh's product-based software service is helping businesses grow, scale, and streamline their operations.

Edited excerpts of the interview:

Gautam Rege [GR]: We started Josh Software from our shared passion for coding. My co-founder, Sethupathi Asokan (Sethu), and I, with our respective ten and eight years of experience in programming, decided to leave our corporate jobs and get into the sun.

We started Josh from ground zero, with absolutely no experience in marketing and finance. We had no investment, insurance, or family support, besides the initial seed money Sethu and I had invested.

Since its inception, we have believed in a collaborative and community approach to delivering technology solutions, thus preferring open source frameworks.

Our struggles and challenges were immense. We had no office, we had to form an entity and set up a bank account.

With a lack of trust in the industry, we had to build our community through continuous networking to get our first few requests. Additionally, we did not have any support or an earning member back home, and we had no option but to get onto the floor and start generating some revenue.

It helped us learn and define who we are today. Soon, our work started to speak for itself, and our business kept growing majorly through word of mouth and inbound requests.

Today, we are a team of close to 400 people and counting, helping customers disrupt their industry through our innovative IT solutions. We remain a bootstrapped company, and we have never raised funds.

GR: A software services company, Josh specialises in outsourced product development, building solutions for businesses to facilitate high performance, scalability, and high-standard code quality.

We are highly driven by innovation, disruption, and the opportunity to learn, which sets the criteria for whom we want to work with, irrespective of whether it is a funded startup, bootstrapped SME, large enterprise, or a large-scale business entity.

To date, we have worked with companies like Star TV, Tata Projects, and Rakuten, to name a few, and startups from the US, UK, Europe, Indonesia, Australia, India, etc.

Our core speciality lies in converting our customers' product visions into reality. We have built multiple products for clients.

For instance, we provided an end-to-end software service to develop a B2C model for QuickInsure, a specially designed platform for buying motor vehicle insurance and third-party insurance, where vehicle owners can buy insurance from the providers.

Moreover, Groupbuzz, another successful client, is a platform that organises various meetup activities for different groups.

GR: One of the perennial challenges in the software and product development industry is the lack of trained and industry-ready talent.

At Josh, we prioritise upskilling for our entire team for a straightforward reason: the technology landscape shifts very fast, and not upskilling means becoming irrelevant.

We also look at colleges and campuses for hiring young talent, providing exposure to the industry while we train and groom them into the Josh DNA.

While any Indian or global software development company can be termed our competitor, we are different. To explain with a simple analogy: we prefer to be a team of highly focused artists working on a million-dollar painting rather than painting over 10,000 buildings at a cheaper rate and earning the same amount of money.

We believe programming is an art. We provide services to our customers, supported by lean processes and performance visibility, offering freedom to our developers to choose the right open source technologies to help build the products better.

GR: The need for technology is critical in almost every sector today because of the rapid digitisation across the globe. The COVID-19 pandemic further accelerated this, and the demand continues to grow.

Traditional companies have found ways to navigate the digital world, while new-age startups are born tech natives.

According to IBEF, by 2025, the Indian IT and business services industry is expected to grow to $19.93 billion.

At Josh, we aim to make a difference as a game-changer in the software industry by giving our customers and developers the freedom to experiment and build the right product.

GR: Our revenue is directly linked to the customised and solution-driven services we offer to our customers.

As a policy, we ensure we do not create any IPs of our own, as it all belongs to our customers. We treat each new idea we want to invest in differently.

We set up a separate company and create specialised teams to look into the technology, growth, and strategy. At present, we have four such companies:

QuickInsure, an insurance broking agency

SimplySmart, an IoT-based smart home automation platform

BidWheelz, an online live bidding platform for the used car retail industry

Clipp.tv, a Singapore-based media-tech company for events and video processing.

GR: Over the years, we have seen ourselves grow from a tiny office to a highly talented workforce. We are now in a hyper-growth stage as we chart out our future vision.

By the end of 2025, we aim to be at least a $25 million revenue company, and we aspire to be IPO-ready by 2027. We have firm plans to scale in terms of people and revenue, but at the same time, we want to ensure we continue to stay grounded and work as one large family.

See the article here:
How this bootstrapped software company grew over 100 pc amidst COVID-19 helping SMEs, enterprises - YourStory

White House Meets With Software Firms and Open Source Orgs on Security – DARKReading

Driven by vulnerabilities in widespread software affecting organizations worldwide, the US government met with the open source community and major software firms on Jan. 13 at the White House to find ways to support the innovative software development community, while at the same time reducing the likelihood of future security flaws in common software components.

The White House Software Security Summit brought together officials from the various government agencies that deal with national security and technology with representatives from major software companies including Akamai, Amazon, Apple, GitHub, Google, Meta, Microsoft, and Red Hat, as well as members of the open source software community, such as the Apache Software Foundation and the Linux Foundation.

The summit aimed to find ways of "preventing security defects and vulnerabilities in code and open source packages, improving the process for finding defects and fixing them, and shortening the response time for distributing and implementing fixes," the Biden administration said in a statement.

At the heart of the discussion, however, is how the innovative development of open source communities can continue to flourish while improving efforts to create secure software and speed the patching in the face of vulnerabilities.

"Open source software brings unique value, and has unique security challenges, because of its breadth of use and the number of volunteers responsible for its ongoing security maintenance," the administration stated. "Participants had a substantive and constructive discussion on how to make a difference in the security of open source software, while effectively engaging with and supporting, the open source community."

The summit took place as companies continue to struggle to find and patch a significant vulnerability in the Log4j logging framework for Java applications, which is widely used in enterprise applications. More than 80% of the Java applications on the Maven Central Repository, a widely used package management repository, had Log4j as a dependency, meaning those Java applications and components are likely vulnerable. While the vulnerability has not yet led to a major compromise, according to US officials, the issue will likely take years to remediate because of its ubiquity.

A Long History of Widespread Vulns

Vulnerabilities in widespread software packages are not new. The 2014 Heartbleed vulnerability in OpenSSL and the 2018 Spectre and Meltdown vulnerabilities demonstrated that security issues found in ubiquitous software and firmware have long tails.

"The world runs on software, which in turn relies on open source, [which] means that vulnerabilities in open source code can have a global ripple effect across the billions of developers and services that rely on it," Mike Hanley, chief security officer at GitHub, said in a statement on the summit. "We've seen how just one or two lines of vulnerable code can have a dramatic impact on the health, safety, and trustworthiness of entire systems in the blink of an eye."

The summit aimed to find ways for government and industry to work together to improve the security of open source code, such as integrating security features into developer tools and services as well as ensuring the integrity of the platforms used to store and distribute packages. Initial efforts will likely focus on ways to improve the security of popular and critical open source software projects and packages and speed the adoption of software bills of materials to allow developers and companies to track their dependencies.

"This all begins with a common effort to increase visibility into the use of open source software," says Boaz Gelbord, chief security officer with Akamai. "Government and private sector organizations must invest in tools that reveal the reliance on open source technologies and, crucially, take action to mitigate and contain risks to strengthen the security of the ecosystem at large."
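As a toy illustration of what "visibility into the use of open source software" can mean in practice (this is not any specific vendor's tool; the package names, versions, and advisory entries below are hypothetical), a minimal first step is enumerating pinned dependencies and checking them against known-vulnerable versions:

```python
# Toy dependency-visibility sketch: parse a flat list of pinned
# dependencies and flag any pin whose exact version appears in a
# (hypothetical) advisory database.

# Hypothetical advisory data: package name -> set of vulnerable versions
ADVISORIES = {
    "log4j-core": {"2.14.1", "2.15.0"},
}

def parse_pins(text):
    """Parse 'name==version' lines into (name, version) pairs,
    skipping blank lines and '#' comments."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        pins.append((name, version))
    return pins

def audit(text):
    """Return the pins whose exact version appears in an advisory."""
    return [(n, v) for n, v in parse_pins(text)
            if v in ADVISORIES.get(n, ())]

deps = """
# third-party pins (illustrative)
log4j-core==2.14.1
commons-text==1.9
"""
print(audit(deps))  # [('log4j-core', '2.14.1')]
```

Real SBOM tooling goes much further (transitive dependencies, version ranges, standard formats such as SPDX or CycloneDX), but the core idea is the same: you cannot mitigate a vulnerable component you cannot see.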

The efforts will be a balance between maintaining the innovative and standards-setting efforts of independent open source development and enforcing secure development practices on projects and products that become part of the critical infrastructure on which industry and government rely, says Brian Behlendorf, executive director of the Open Source Security Foundation (OpenSSF).

"At the beginning of the supply chain is the raw, sometimes messy, but also often incredibly innovative processes of writing code in a group that so often leads to great software," he says. "That's precious and shouldn't be shackled by bureaucracy or requirements that create no value for those upstream core devs."

However, the OpenSSF recognizes that more secure development processes need to be added to each step in the chain from core developer to package manager to the development teams that eventually use the software component or library.

"What's important now, in a world of millions of software projects and developers, is to help scale up what used to be informal, high-trust processes along this chain into more rigorous, automatable tools and practices," Behlendorf says.

The industry has already started investing in securing open source software, as well as their own software products. At a similar summit in August, Google and Microsoft pledged to spend billions on software security and cybersecurity efforts in the next five years. Google, for example, has committed to an "invisible security" initiative to integrate protections so that developers and businesses reap the benefits, and also has worked with the OpenSSF to release tools for developers. Akamai committed to continuing to help the open source community find ways to detect vulnerabilities in software and contain attacks, but recognized that the work is only starting.

"While this executive order is a move in the right direction, more needs to be done to support the open source community to thrive within our ever-evolving threat landscape," Akamai's Gelbord says.

Last year, the Biden administration released an executive order on cybersecurity that was widely praised for being more detailed than past administrations. In addition, the administration announced in October that it would create the Bureau of Cyberspace and Digital Policy within the US Department of State to lead international diplomacy on the issue.

Read more:

White House Meets With Software Firms and Open Source Orgs on Security - DARKReading