IoT News | The Security Risks of Open Source Software – IoT Business News

The phrase "no person is an island" means that no one is completely self-sufficient; all of us rely on others to some extent in order to survive and thrive. The same is true of software. While it is technically possible for every piece of software to be built completely from scratch, this simply isn't practical in most cases.

Instead, developers frequently use modules or packages of code, often found in open source repositories such as GitHub, which they can piece together to build their software. Think of these as the pre-constructed window frames, doors, and bricks that a builder might use to construct a new house.

There are multiple reasons why developers might rely on open source code in this way. A big one is the speed at which developers must often work. A developer likely has a fixed budget and deadline that they're working to, making it impractical to spend time building every single component of the software they're working on. Using open source code also allows them to build their programs using code that they might not have the expertise to write themselves. To return to the house-building analogy, a person building a house may not have the expertise to create beautifully constructed doors. In addition, the crowdsourced nature of open source code, which has been contributed to and examined by large numbers of users, can help with spotting and fixing bugs and potential vulnerabilities.

With this in mind, it's no surprise to hear that open source ecosystems are booming, whether that's Java, JavaScript, .NET, or Python: hundreds of thousands of projects and millions of downloadable packages are available to developers. Those numbers are only going to increase over time.

But while open source software brings no shortage of benefits, it nonetheless poses potential risks to the developers who rely on it. That's where tools like a WAF can help. What is a WAF? Short for web application firewall, it's one of the many cybersecurity tools available to help devs tackle a growing problem. Consider it a must-have.

Open source, by its nature, attracts large numbers of users from all over the world. According to one report, open source code is found in upward of 30 percent of commercially released applications, and far more when considering tools such as software for internal use. Unfortunately, it's not just the good folks that are attracted to open source.

The number of attacks on open source projects has ramped up significantly. One piece of analysis suggests that attacks have increased by upward of 650 percent over the past year.

For attackers, one of the reasons for trying to target open source projects is because it allows them to poison the well that is then used by large numbers of applications. Rather than targeting proprietary or custom code, if an attacker can find a way to carry out malicious code injection or some other attack targeting open source projects, this tainted code could then be baked into legitimate software.

Although open source code is, by its nature, open and inspectable, many developers may not spend the necessary time carrying out this inspection process. Instead, they could assume that this bug-spotting has been carried out by other users, opting instead to spend that time developing new features or getting on with other projects.

Companies that do not do proper due diligence on the open source modules or packages they use could introduce serious vulnerabilities, making possible everything from large-scale data exfiltration to remote code execution. The damage could be major, whether that's non-compliance with data protection laws, operational risk, or damage to the reputation of the companies that use this open source code.

Protecting vulnerable open source code is essential. Luckily, there are tools that can help. A WAF or WAAP (web application and API protection) solution can virtually patch open source vulnerabilities, preventing them from being exploited: these tools detect and quickly block attempted exploitation of code vulnerabilities by hackers.

Adopting these tools is among the smartest moves organizations can make. This way, customers and users can continue to enjoy the myriad advantages the open source software community has to offer without having to worry about potential risks.

While it's still crucial that developers properly inspect the code they use, this is nonetheless a valuable safeguard for any potential vulnerabilities that slip through the cracks. Attacks on open source projects aren't going away. But by using solutions such as this, it's possible to mitigate the worst potential damage they can cause.


"Proactive security scanning of code is a must" – JAXenter

JAXenter: Hi Chris, thanks for taking the time to answer our questions. Can you tell us about the UA-Parser-JS NPM Open Source library hack? What happened and how many people were affected?

Chris Eng: UA-Parser-JS is a popular open source library that performs a simple but useful function: it determines the browser, engine, operating system, CPU, and device type by inspecting the User-Agent header sent by the end user's web browser. The library is downloaded millions of times per week and is used by projects at major enterprises including Facebook, Amazon, Google, and Microsoft.
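To make that concrete, here is a toy Python sketch of what a user-agent parser does. Real libraries such as UA-Parser-JS rely on large, maintained lists of patterns; the two patterns below are only illustrative:

```python
import re

def parse_user_agent(ua):
    """Extract a rough browser and OS from a User-Agent string."""
    browser = "unknown"
    m = re.search(r"(Firefox|Chrome|Safari)/([\d.]+)", ua)
    if m:
        browser = f"{m.group(1)} {m.group(2)}"
    os_name = "unknown"
    if "Windows" in ua:
        os_name = "Windows"
    elif "Mac OS X" in ua:
        os_name = "macOS"
    elif "Linux" in ua:
        os_name = "Linux"
    return {"browser": browser, "os": os_name}

ua = ("Mozilla/5.0 (X11; Linux x86_64; rv:94.0) "
      "Gecko/20100101 Firefox/94.0")
print(parse_user_agent(ua))  # {'browser': 'Firefox 94.0', 'os': 'Linux'}
```

A real parser maintains hundreds of such patterns and keeps them updated as new browsers and devices appear, which is exactly why projects pull in a library rather than writing this themselves.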

For a 4-hour period on Friday, October 22, the library contained malicious code that would infect devices with cryptominers and password-stealing malware. When a compromised package is installed on a user's device, a script checks the operating system of the device and launches a Linux shell script or Windows batch file that attempts to steal passwords stored on the device. Cryptominers are deployed in a similar way.

Any project using ua-parser-js that performed an automated build during this 4-hour window would be potentially impacted. Additionally, if a developer manually downloaded the infected library during that time frame and introduced it into a new or existing project, they would be affected as well. It's unclear at this time how many downstream users were impacted by the malware.


JAXenter: What was the response to the attack and how was it fixed?

Chris Eng: GitHub, the owner of npm, was quick to act. It removed the compromised packages and issued a security advisory for those using the affected versions. The package maintainer also quickly released patches so that any projects configured to pull down the most recent version of the library would receive a clean copy. Users have been advised to upgrade to newer versions of the library and to check their systems for compromise; a full list of indicators of compromise has been shared.
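A quick way to check exposure is to scan a lockfile for the compromised versions, which the advisory identified as 0.7.29, 0.8.0, and 1.0.0. This Python sketch assumes npm's lockfile "dependencies" shape and is simplified for illustration:

```python
import json

# Compromised versions as reported in the advisory at the time.
COMPROMISED = {"0.7.29", "0.8.0", "1.0.0"}

def affected(lockfile_text):
    """Return True if the lockfile pins a compromised ua-parser-js version."""
    lock = json.loads(lockfile_text)
    pkg = lock.get("dependencies", {}).get("ua-parser-js")
    return bool(pkg and pkg.get("version") in COMPROMISED)

lock = '{"dependencies": {"ua-parser-js": {"version": "0.7.29"}}}'
print(affected(lock))  # True -> rotate credentials and scan for malware
```

A real check would also walk nested dependencies, since the library can arrive transitively rather than as a direct dependency.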

JAXenter: How likely is it that another malicious package will be spread in a similar manner and what potential harm could it cause?

Chris Eng: Very likely. It's not the first time we've seen a supply chain attack, and it won't be the last. Threat actors, like the ones in this incident, could introduce any sort of malicious code. The outcome could be even worse if the library developers were less active, or if the injected code were stealthier; for example, if it had introduced a subtle vulnerability in the library that could be exploited later.

JAXenter: What are the unique security concerns that come with open source software?

Chris Eng: We often assume security has been considered by the developers behind an open source library, but that's not always the case. Developers publishing open source code may not have taken security into account at all. And these libraries are being updated all the time, sometimes with fixes to issues, and other times with more vulnerabilities added in. Proactive security scanning of code is a must when we can't ensure the software is taking security into account in development.

Seven of every 10 applications use at least one flawed open source library with a vulnerability, according to research from ESG last year. Despite the prevalence of vulnerabilities in open source code, organizations continue to use these libraries without much regard for security. Veracodes State of Software Security (SoSS) v11: Open Source edition from earlier this year found that nearly 80% of the time, third-party libraries are never updated once they are added to a code base.

A great example of the potential harm these malicious open source packages can bring is the 2017 Equifax hack. Failing to update vulnerable open source libraries was one of the contributing factors in that breach, which compromised social security numbers and other PII for over 143 million people.

JAXenter: Of course, we shouldn't trust packages based merely on their number of downloads. So, what can we do to boost our open source security practices to avoid malicious code?

Chris Eng: For starters, we need to improve on the stat that nearly 80% of third-party libraries are never updated after they are added. Regularly updating open source software allows new versions with security patches to replace older versions with exploitable vulnerabilities.
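The core of that update check is simple to sketch in Python. The version parsing below is deliberately naive (real tools follow full semver rules), and the package inventory and minimum-patched versions are illustrative:

```python
def version_tuple(v):
    """Naively parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed, min_patched):
    """True if the installed version is older than the minimum patched one."""
    return version_tuple(installed) < version_tuple(min_patched)

# Illustrative inventory of installed libraries vs. minimum patched versions.
inventory = {"ua-parser-js": "0.7.28", "lodash": "4.17.21"}
minimums = {"ua-parser-js": "0.7.30", "lodash": "4.17.21"}

stale = [name for name, v in inventory.items()
         if needs_update(v, minimums[name])]
print(stale)  # ['ua-parser-js']
```

Running a check like this in CI is one way to turn "regularly updating" from a policy statement into an enforced gate.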

Removing vulnerable code is as much a cultural issue as a technology issue, though. Leadership needs to carve out time for developers to scan their software for vulnerabilities before deployment and take action to remediate the findings. Usually there is an emphasis on pushing out new features as quickly as possible, without factoring in the risk associated with new vulnerabilities, not to mention existing security debt.

We need to add early and recurring security scanning for open source code into the day-to-day operations of DevSecOps. Additionally, teams should proactively scan after deployment as well, since new vulnerabilities in open source libraries are discovered all the time.


JAXenter: How dependable are security scanning tools? Should we spend more time analyzing manually, or is human error more of an issue?

Chris Eng: Security scanning tools are a must-have for today's enterprises. There are far too many vulnerabilities in software to rely on a manual approach. Security scanning tools can automatically identify both first-party and third-party software vulnerabilities. Enterprises should invest in scanners that take into account business objectives, levels of risk for each vulnerability, and flaws that can be fixed the fastest to create a clear path forward for remediation. Ideally, the best scanners will provide these results quickly, automatically, and holistically across the entire software lifecycle.

JAXenter: And finally, what's in your essential security toolkit?

Chris Eng: Lots of things, but let's focus on this particular incident. All of this likely began with the takeover of a developer account. Control of the account allowed the threat actor to inject malicious code, which then propagated to projects using the library. I'm willing to bet that the compromised developer account was not using multi-factor authentication (MFA) and was protected only by a password. And that password was likely compromised either via a phishing attack, or as a result of an unrelated breach where the developer had reused a common password. Had the developer enabled MFA on their npm account, this attack probably never would have happened.

For organizations building software with open source libraries, a Software Composition Analysis (SCA) tool is key. Being able to quickly identify which of your projects used this vulnerable library, either directly or transitively (i.e., via a different library that depends on the vulnerable library), is an important first step in understanding your organization's exposure to an incident like this.
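The direct-versus-transitive distinction can be sketched in a few lines of Python. The nested-dict dependency tree below is a simplified, hypothetical stand-in for a real lockfile, not a real SCA tool:

```python
def find_paths(tree, target, path=()):
    """Yield every dependency chain in the tree that ends at `target`."""
    for name, deps in tree.items():
        chain = path + (name,)
        if name == target:
            yield chain
        yield from find_paths(deps, target, chain)

# Hypothetical project: ua-parser-js appears both directly and
# transitively through a made-up analytics library.
deps = {
    "ua-parser-js": {},
    "some-analytics-lib": {"ua-parser-js": {}},
    "left-pad": {},
}

for chain in find_paths(deps, "ua-parser-js"):
    print(" -> ".join(chain))
# ua-parser-js
# some-analytics-lib -> ua-parser-js
```

An SCA tool does essentially this walk across every project's resolved dependency graph, then matches each node against a vulnerability database.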


Pantheon Kicks off Program to Give Back to Open Source Communities with Second Annual Gift of Open Source – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Pantheon, the SaaS-based website operations (WebOps) platform for developers, designers and marketers, today announced it will kick off its second annual Gift of Open Source on Dec. 1. The free, month-long event, which runs through Dec. 31, connects technical and non-technical audiences to opportunities to contribute to open source projects that are designed to make the open web more inclusive, efficient and impactful for all.

The program's ultimate goal is to provide resources and mentorship to engage and energize first-time contributors to give back to open source. Opportunities are broad, spanning code- and non-code-based contributions to Drupal or WordPress projects, Pantheon repositories, GitHub pull requests, or adjacent projects. For each contribution made, up to 500 contributions, Pantheon will donate $20 to the Drupal Association and WordPress Foundation, for a total potential of $5,000 to each organization to support their efforts.

"We are passionate advocates for the open web, and we believe deeply that it will play a critical role in solving many of today's most meaningful challenges," said Josh Koenig, Co-Founder and Chief Strategy Officer at Pantheon. "This event is all about promoting the open web's power, growing the community of contributors and celebrating the collective mark we can make on the future together."

During the inaugural Gift of Open Source in 2020, the event generated more than 140 technical and non-technical contributions across open source projects. Among these were contributions that helped non-profit organizations enable donation collection via their sites; added multi-tag filtering to support Terminus; and improved AI transcripts for diverse speaker training workshops, making the content easier to translate and extend to broader audiences. For many, the event marked their first contributions to the community.

"My name appeared in the WordPress credits for the first time ever thanks to the Gift of Open Source," said Joel Yoder, Director of Web Services at Saint Mary-of-the-Woods College. "I am eager to jump in and find a few projects to dig in on during this year's event."

Registration will remain open through Dec. 31, 2021. However, all participants must share their contributions here by 12:59pm PST on Dec. 31 to receive credit. Details on participation, available project matching resources, and incentives for participants are available on pantheon.io.

About Pantheon

Pantheon's WebOps Platform powers the open web, running sites in the cloud for customers including Stitch Fix, Okta, Home Depot, Pernod Ricard and The Barack Obama Foundation. Every day, thousands of developers and marketers create, iterate on, and scale websites on the open web to reach billions of people globally. Pantheon's SaaS model puts large and small web and digital teams in control of increasing the performance of their teams, websites, and marketing programs. Pantheon's cloud-native software includes governance, security and collaboration tools that make it easy to securely manage a single website or thousands of websites across multiple teams in one platform. The built-in ability to simultaneously create, test, deploy and run live sites with unrivaled hosting speed, scalability and uptime gives marketing teams the agility to win in the dynamic world of digital marketing.


In the ’80s, spaceflight sim Elite was nothing short of magic. The annotated source code shows how it was done – The Register

Just a fortnight under 40 years ago, the BBC Micro was released. Although it was never primarily a games machine (it was too expensive, for a start), one of its defining programs was nonetheless a video game: Elite.

Its source was released a few years ago, but your correspondent just discovered a lavishly described and documented online edition if you want to see exactly how it was done. The annotations were written by Mark Moxon, a web dev and journalist who among many other things was once editor of Acorn User magazine.

Elite was famous for several things, including its very considerable difficulty and its wireframe 3D graphics with hidden-line removal (amazing for 1984). These were displayed on a screen that combined high-resolution and multi-colour graphics in a way the BBC's hardware couldn't natively do: the game changed screen modes from Mode 4 (medium-resolution monochrome) to Mode 5 (low-resolution four-colour) two-thirds of the way through generating each screen. At 50Hz, on a 2MHz 6502.

Some of the remarkable features were not so obvious, though. For instance, the game contained eight galaxies of 256 planets each. A database of those 2,048 star systems would have filled the computer's tiny 22kB of free memory. (22kB? Yes: modes 4 and 5 both take up 10kB of the Beeb's standard puny 32kB of RAM.) This would have been far more obvious with the programmers' original 2^48, or 281,474,976,710,656, planets.

The answer was that the game generated the list of galaxies and planets on the fly, using a modified Fibonacci sequence, allowing for more places to explore than would fit into the program. A similar method was used to generate the 4,000 unique locations in Mike Singleton's Lords of Midnight, released the same year.

The difference being that, unlike Singleton's ZX Spectrum game, you can read about what Elite did on the Elite Wiki and then study the source code to see how developers Ian Bell and David Braben achieved it.
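To make the idea concrete, here is a simplified Python sketch of seeded procedural generation in the spirit of Elite's approach. This is not the actual Elite algorithm (Moxon's annotated source documents the real one); the seed values, the twist step, and the attribute derivations below are all illustrative:

```python
# Three 16-bit seeds are "twisted" Fibonacci-style: each step discards the
# oldest seed and appends the sum of all three, so an entire galaxy can be
# regenerated from a single starting seed instead of being stored in RAM.

def twist(seeds):
    s0, s1, s2 = seeds
    return (s1, s2, (s0 + s1 + s2) & 0xFFFF)  # keep values 16-bit

def generate_planet(seeds):
    """Derive a planet's attributes deterministically from the current seeds."""
    s0, s1, s2 = seeds
    return {
        "economy": s0 & 0x07,            # 8 illustrative economy types
        "government": (s1 >> 3) & 0x07,  # 8 illustrative government types
        "tech_level": (s2 >> 8) & 0x0F,
    }

def generate_galaxy(start_seeds, n_planets=256):
    planets, seeds = [], start_seeds
    for _ in range(n_planets):
        planets.append(generate_planet(seeds))
        seeds = twist(seeds)
    return planets

galaxy = generate_galaxy((0x5A4A, 0x0248, 0xB753))
# The same starting seeds always reproduce the same 256 planets,
# so nothing needs to be kept in memory between visits.
```

The trick is that the seed sequence is deterministic: the data is recomputed on demand rather than stored, which is how thousands of star systems fit into a machine with only a few kilobytes to spare.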

Moxon's site, which has been updated throughout 2021 and 2020, covers the original cassette and floppy-disc versions, as well as those for the Electron, BBC Master, 6502 Second Processor, unreleased versions and even the third-party enhancement Elite-A.


So you want to be a software developer? Advice on getting started from self-taught programmer-turned-CTO Eric Solender – Technical.ly

With the mass open source database that is the internet, you have all the resources you need to learn coding available to you, but often the problem with a sea of knowledge is distilling it into drinkable lessons.

Eric Solender is a self-taught computer programmer who teamed with then-fellow students to start Mindstand Technologies at UMBC, then left the university to become full-time CTO of the company, which uses AI to improve and measure diversity, equity and inclusion in online communities. This year, Solender was chosen as one of Technical.ly's RealLIST Engineers in Baltimore.

Now 23, Solender has been teaching himself the programming skills hes used to build his career since he was 15.

Of course, teaching himself is a relative term, as no man is an island. His own curiosity and industriousness were met with support along the way. Solender learned first from watching his dad work as a software engineer at Circleback, Inc., and from interning with the firm after passing a high school AP computer science class. At 17, he built a concussion detection tool using the motion controls of Xbox Kinect after suffering his own severe concussion. This led to a position with Columbia-based cybersecurity firm Masterpeace Solutions working with startups. When IoT security startup Zuul became a company after spinning out of Masterpeace Launchpad, Solender worked with that team.

The engine that powered Solender's growth from company to company was his desire to learn and fill the gaps in his knowledge. Below are the tips and tricks he's learned to maximize his acquisition of coding skills.

But before we get into the lessons and advice on how to maximize the journey of a self-taught programmer, let's acknowledge the counterargument to self-teaching: the risk of learning the skill wrong. Regardless of your primary method of learning to code, whether it be a bootcamp, a computer science class or the university of YouTube, it is extremely important to learn the fundamentals of a skillset or language correctly.

Codecademy is what Solender used years ago to learn the basics of Python when it was primarily free. It now has a paid model, but there are still plenty of free resources on the site.

These are the libraries Solender sees as the best "get started quickly" resources in application programming interface (API) development: FastAPI, which helps beginners build out a web framework using Python, and Flask, a quick way to learn and get into web development.
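A minimal Flask app gives a feel for why it is recommended as a quick way into web development. This sketch is illustrative (the route and message are invented here, not from the article), and Flask is a third-party package installed with `pip install flask`:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/hello/<name>")
def hello(name):
    # Flask handles routing and JSON serialization for us.
    return jsonify(message=f"Hello, {name}!")

# app.run(debug=True)  # uncomment to serve on http://127.0.0.1:5000
```

A handful of lines yields a working JSON endpoint, which is exactly the kind of small, complete project that suits a beginner.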

Here are a few more options:

Application of the material is one of the tried-and-true methods of learning. When getting up to speed with a new coding language, Solender always looks to do a project that'll make him laugh, to solidify a technique into memory.

"I don't just do coding exercises," said Solender. "I try to come up with some very small contained project I can write in that language that will exercise all the things I need to make sure I understand."

A Texas Hold 'Em project he completed in the AP computer science course in high school came to mind:

A coding project Eric Solender completed in high school (Courtesy photo)

Here's a demo of the concussion program he made with the Xbox Kinect:

Another way to work on those skills and create projects that lead to opportunities is to contribute to open source projects. Solender's most notable GitHub project is with Mindstand.

These are books that have survived changes in technology, like Design Patterns: Elements of Reusable Object-Oriented Software by a group of technologists now known as the Gang of Four. Although published over 20 years ago and centered around C++, design patterns such as the Singleton pattern and Factory pattern highlighted in that book are still relevant today.
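As a taste of why those patterns still translate, here is a minimal Python sketch of the two patterns named above. The class names and products are illustrative, not taken from the book's C++ examples:

```python
class Logger:
    """Singleton: at most one instance ever exists."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

class Circle:
    def area(self):
        return 3.14159  # unit circle, for illustration

class Square:
    def area(self):
        return 1.0      # unit square

def shape_factory(kind):
    """Factory: callers ask for a product by name instead of constructing it."""
    shapes = {"circle": Circle, "square": Square}
    return shapes[kind]()

assert Logger() is Logger()  # every call returns the same object
```

The pattern, not the syntax, is the durable part: the same Singleton and Factory shapes appear in Java, Go, or TypeScript with only surface changes.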

Another widely-read book is Introduction to Algorithms from MIT Press.

"My philosophy is if you can understand the patterns you can adapt them to whatever the modern language is," said Solender.

Learning an object-oriented language is fundamentally different from learning a functional one. But outside of that distinction, Solender has found that most programming languages are very similar: learning one after another is closer to picking up a new dialect than a whole new language.

That's what makes guides like Teach a Python Programmer to Use GO and Java to GO helpful.

"For pretty much every language there's a guide someone wrote that'll say here's this in language A, here's what it looks like in language B," said Solender. "And that gets me to a point that I can kind of code in that language. Then I dig really deep into the way you're supposed to [code] in that language."

Solender's journey in tech is an example of what early education and a more apprenticeship-style model of learning can achieve. He has been working in the tech industry since his sophomore year of high school, and that experience inspired his drive to learn from a variety of resources beyond formal education.

"If you get a little bit of encouragement and you know where to find the resources, you can pretty much teach yourself everything you need to know on your own," said Solender.


Addressing the Low-Code Security Elephant in the Room – Dark Reading

With all the hype around low-code/no-code platforms, many are now touting the benefits of adopting low-code/no-code development. Let's address the (security) elephant in the room: Anyone can spin up applications using these tools, but who is responsible for the security of those applications?

If, similar to cloud computing, it is a shared-responsibility model, then where do we draw the lines of responsibility among the different parties involved?

One Size Does Not Fit All

Low-code applications are diverse: They come in different forms, vary in how they are deployed, and solve a broad range of problems. When discussing the security responsibility model for low-code applications, we have to first understand the different layers of a low-code application. Here is a brief summary:

We can also consider the low-code platform development environment used to develop the application as Layer 0. Even if you do everything necessary to rigorously secure your application, a malicious user gaining access to your development console is just as bad.

Security Is a Shared Responsibility

Cloud computing's approach to the shared-responsibility model is straightforward: As you advance in your cloud journey and adopt higher levels of abstraction, the security responsibility shifts away from you and toward the cloud provider.

Should we consider low-code/no-code applications as yet another step in this evolution?

It depends. Where the responsibility lies depends on the choices you make when adopting low-code development. For example, with the infrastructure layer: Are you planning on hosting your application in the public cloud or in a private data center? Some low-code/no-code platforms are designed specifically for on-premises or hybrid cloud/on-premises deployments. If you decide to host your own applications, you will have full control over the underlying infrastructure, but that also means you are responsible for securing every aspect of the environment.

Application-Layer Choices

What are some development choices about the application layer that affect the security responsibility?

If the low-code application is strictly made up of low-code platform native capabilities or services, you only have to worry about the basics. That includes application design and business logic flaws, securing your data in transit and at rest, security misconfigurations, authentication, authorization and adherence to the principle of least privilege, providing security training for your citizen developers, and maintaining a secure deployment environment. These are the same elements any developer, low-code or traditional, would need to think about in order to secure the application. Everything else is handled by the low-code platform itself.

That is as basic as it gets.

But what if you are making use of additional widgets, components, or connectors provided by the low-code platform? Those components and the code used to build them are definitely out of your jurisdiction of responsibility. You may need to consider how they are configured or used in your application, though. It's possible that an incorrectly used component may lead to a vulnerability in your application.

For example, most low-code platforms provide a SQL database connector, which enables low-code app developers to run SQL queries against the data stored in their databases. In some common SQL connectors that we looked at, we saw several methods for interacting with databases: Some enforced strict security and allowed developers less flexibility, while others were more permissive. If used incorrectly, those more flexible connectors could lead to a disastrous SQL injection (SQLi) vulnerability. A successful SQLi attack against a low-code application can result in unauthorized access to the data; the attacker may be able to manipulate the data or even execute shell commands on the database server.
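The strict-versus-flexible distinction can be sketched with Python's standard sqlite3 module. The table, data, and payload below are illustrative; low-code connectors wrap the same underlying idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # classic SQLi payload

# Vulnerable (flexible): user input is interpolated into the query text,
# so the OR '1'='1' clause becomes SQL and matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows))  # 1 -- the payload leaked alice's row

# Safe (strict): a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "nobody' OR '1'='1"
```

A connector that only exposes the parameterized style removes this whole class of mistake from the citizen developer's hands, at the cost of flexibility.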

The third choice is to extend the components library with custom components because the low-code/no-code platform of choice does not provide all the needed (or desired) functionality. For example, you may create Mendix custom widgets to create dynamic menus in your application, Appian custom plug-in components to render a Google Maps object, or Canvas Apps in Microsoft Power Apps to integrate data from other Microsoft applications.

While custom-built components provide extensibility and the freedom to create functionality as you see fit, they also introduce more code and logic to your application. Just as with traditionally developed software, more code and logic means a greater chance of introducing defects, design flaws, and security vulnerabilities. When developing custom components, even in the low-code/no-code world, make sure you have the proper SDLC and security processes in place. Developers should follow your organization's security policy and guidelines for developing and deploying applications.

Finally, you may have to rely on third-party components because the functionality you are looking for does not exist as a native service or is offered as an add-on component by your low-code platform. In this case, you will be responsible for vetting and choosing third-party components based on several factors:

Similar to vetting third-party open source packages, you must have a process in place to make sure you are not turning these components into the weakest link of your application security chain.

Choosing Between the Cloud and On-Premises

It's quite common to integrate low-code applications with existing public cloud accounts in order to consume public cloud services, such as storage buckets, message queues, databases, and so forth. If that is the case, you have to add cloud security as an additional factor in the overall security posture of your application. You should make sure you are adopting a mature cloud security posture management approach.

Many low-code/no-code platforms offer connectivity to on-premises data and applications. As an example, organizations that use the Microsoft Power Apps low-code platform have the option to use an on-premises data gateway, which acts as a bridge to provide quick and secure data transfer between on-premises data (data not in the cloud) and several Microsoft cloud services. Another example is when using the Appian low-code platform with robotic process automation (RPA), which supports a hybrid cloud/on-premises deployment model.

When creating a bridge between the cloud and your organization's on-premises infrastructure, data, and applications, you are essentially opening up your private assets to access from the public Internet. Needless to say, in such cases security and privacy should be top-of-mind, and access should be as restricted as possible, encrypted, and monitored at all times.

Who Is Responsible? The Verdict

Given all the different options for low-code application development, there's really no simple answer. Neither is there a clear-cut line we can draw in some low-code stack security chart. Low-code/no-code is a paradigm shift in the way software is developed, from monolithic, to microservices, and now to low-code/no-code. It should not be viewed simply as a way to abstract away hardware and deployment models as part of the next phase in the evolution of cloud computing.

The bottom line is that low-code/no-code applications are another form of software. It is inevitable they will contain bugs, design flaws, vulnerabilities, and misconfigurations that will introduce risk. Even if you are giving away some of the control and responsibility to a low-code/no-code platform provider or other supplier, you are still the owner of your application and its data. You remain responsible for making sure the applications are secure and adhere to your corporate security policies and standards.

Regardless of how much abstraction you use and how much control you are giving up, always keep in mind the following two aspects: know your apps, and secure your business logic. You need to fully understand how your low-code applications are developed, deployed and maintained, make sure you have full visibility into them, and address any security concerns raised here. And regardless of how your application is developed, always make sure you apply secure design, secure development, and application security best practices. A simple flaw in business logic can make the most resilient application vulnerable.

View post:

Addressing the Low-Code Security Elephant in the Room - Dark Reading

IBM, David Clark Cause Award Saaf Water Top Call for Code Prize | The Weather Channel – Articles from The Weather Channel | weather.com – The Weather…

People collect drinking water from a tanker in Sanjay camp, Chanakyapuri, on July 10, 2021, in New Delhi, India. Call for Code winner Saaf Water created a solution to help identify unsafe drinking water.

An innovative water-quality monitoring device called Saaf Water is winner of the 2021 Call for Code Global Challenge.

The annual Call for Code competition, now in its fourth year, is a partnership between the David Clark Cause, United Nations Human Rights, the Linux Foundation and IBM, the parent company of weather.com and The Weather Company.

Winners were announced at a ceremony Tuesday night in New York.

Saaf Water will receive $200,000 and support to incubate, test, and deploy their solution from the IBM Service Corps and expert partners in the Call for Code ecosystem, and assistance from The Linux Foundation to open source their application.

(MORE: 2021 Call for Code Finalists Named)

Saaf Water is a cellular-enabled water-quality monitoring device designed to be universally compatible with various types of community water pumps. It tracks characteristics such as dissolved solids, turbidity, temperature and pH and uses artificial intelligence to predict when water quality can go bad.

If problems are detected, an onsite visual indicator is triggered to alert those using the system. There is also a dashboard that can be viewed in a web browser and sent through SMS messaging to subscribed users.
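The monitor-and-alert loop described above can be sketched as a simple threshold check over the parameters the article says the device tracks. This is an illustrative Python sketch only, not Saaf Water's actual model or thresholds; the limit values below are hypothetical stand-ins for real guideline figures.

```python
# Hypothetical safe ranges per parameter (illustrative values only,
# not the real thresholds or AI model used by Saaf Water).
LIMITS = {
    "tds_ppm": (0, 600),       # total dissolved solids
    "turbidity_ntu": (0, 5),   # cloudiness of the water
    "temperature_c": (0, 35),
    "ph": (6.5, 8.5),
}

def check_sample(sample: dict) -> list:
    """Return the list of parameters that fall outside their safe range."""
    violations = []
    for param, (low, high) in LIMITS.items():
        value = sample.get(param)
        if value is None or not (low <= value <= high):
            violations.append(param)
    return violations

def should_alert(sample: dict) -> bool:
    """Trigger the on-site indicator / SMS path if any parameter is unsafe."""
    return bool(check_sample(sample))

reading = {"tds_ppm": 850, "turbidity_ntu": 2.1, "temperature_c": 24.0, "ph": 7.2}
print(check_sample(reading))   # tds_ppm is out of range in this sample
print(should_alert(reading))
```

A real deployment would replace the fixed thresholds with the predictive model the article mentions, but the alerting path (detect, then notify) follows the same shape.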

"The groundwater quality monitoring tool developed by Saaf Water is promising, timely, and appears to have great potential for use by communities relying on groundwater for domestic use," Annapurna Vancheswaran, managing director of The Nature Conservancy India, said in a news release. "This open-source technology could help avoid water-related health risks by indicating unsafe water quality. We certainly look forward to the tool being scaled up for the benefit of communities."


Contaminated drinking water is estimated to contribute to at least 480,000 deaths a year worldwide and at least 2 billion people use a contaminated water source, according to the World Health Organization.

Saaf is a Hindi word for clean and the team behind it took inspiration from their native India and their own lives. Team member Hrishikesh Bhandari's mother became ill after drinking water from her village's groundwater supply, which was assumed to be safe. The other Saaf team members all have friends or family members impacted by contaminated water.

Fittingly, this year's Call for Code Global Challenge was launched on World Water Day.

(MORE: App Aims To Empower Female Farmers with Weather, Climate Data)

Saaf Water was one of five finalists in the competition. Green Farm, a platform designed to connect small farmers with consumers and help solve problems faced by community-supported agricultural organizations, was awarded second place and $25,000; Project Scavenger, an app that helps users safely dispose of e-waste, received third place and $25,000; Honestly, an online browser extension that alerts users to things like bad press on a brand they are shopping for, provides relevant ratings aggregated from outside sources, and lists carbon footprint and supply chain data, took fourth place and earned $10,000; Plenti, an app to help prevent food waste at home, was awarded fifth place and $10,000.

Trashtag, a technology to verify, track, and reward waste removal in outdoor areas, took the top prize in the university category and will receive $10,000 as well as an invitation for team members to interview for potential roles at IBM.

To date, more than 20,000 Call for Code applications have been built using open source-powered software, with more than 500,000 developers and problem solvers participating across 180 nations.

"What makes Call for Code unique is the impact it is making on the ground through our deployments in communities around the world," Bob Lord, senior vice president of worldwide ecosystems for IBM, said in a news release. "The potential of these technologies, like Saaf Water, is vast, and they have the potential to help save lives."

Visit IBM Developer to learn more about Call for Code.

The Weather Company's primary journalistic mission is to report on breaking weather news, the environment and the importance of science to our lives. This story does not necessarily represent the position of our parent company, IBM.

See the original post:

IBM, David Clark Cause Award Saaf Water Top Call for Code Prize | The Weather Channel - Articles from The Weather Channel | weather.com - The Weather...

Bilibili, China’s YouTube, joins the Open Invention Network – ZDNet

Even in 2021, I still hear people saying, "Open source is somehow suspicious, or it's not good for business." Multi-billion-dollar Chinese companies know better. Bilibili has joined other Chinese technology powerhouses such as ByteDance, TikTok's parent company, and its rival Kuaishou, in joining the Open Invention Network (OIN).

The OIN is the world's largest patent non-aggression consortium. It protects Linux and related open source software and the companies behind them from patent attacks and patent trolls. The OIN recently broadened its scope from core Linux programs and adjacent open source code by expanding its Linux System Definition to other patents, such as the Android Open Source Project (AOSP) and the Extended File Allocation Table (exFAT) file system.

The OIN does this by practicing patent non-aggression in core Linux and related open source technologies by cross-licensing Linux System patents to one another on a royalty-free basis. Patents owned by OIN are similarly licensed royalty-free to any organization that agrees not to assert its patents against the Linux System. Any company can do this by simply signing the OIN license online.

So, why is a company that makes its money from giving its young content creators a platform allying with open source? For the same reason almost all companies rely on open source for their software: it makes good, hard financial sense.

As Wang Hao, Bilibili's VP, explained, "We are committed to opening and sharing technologies and providing positive motivation in the innovation field of playback transmission, interactive entertainment, and cloud-native ecology through open source projects. Linux and open source are important software infrastructures that promote business developments. Our participation in the OIN community demonstrates our consistent and ongoing commitment to shared innovation. In the future, we will also firmly support Linux's open source innovation."

Related Stories:

Read more from the original source:

Bilibili, China's YouTube, joins the Open Invention Network - ZDNet

Pennacchio Introduces Bill Promoting Election Transparency and Integrity – InsiderNJ

Pennacchio Introduces Bill Promoting Election Transparency and Integrity

Legislation Requires Open-Source Paper Ballot Voting Systems

To restore the public's confidence, Senator Joe Pennacchio introduced legislation that would increase the transparency and reliability of elections in the state.

"Elections are the cornerstone of our democracy. Recently, the public has begun questioning the accuracy and security of the election process," said Pennacchio (R-26). "People have the right to demand elections that are fair and honest, and this bill would help restore faith in the process."

Pennacchio's bill, S-4162, would require paper ballots for in-person voters and mandate the use of open-source code for software controlling the optical scanners used to record the votes.

Similar bills are under consideration in state legislatures across the country. "Even elected officials in California, where the Secure the Vote Act would require open-source paper voting, realize our democracy is threatened when residents question the validity of elections," Pennacchio said.

"We want to ensure transparency of the mechanisms of voting software and hardware. Currently, the proprietary process is cloaked in secrecy, and neither the voting public nor the media have access to any preliminary data," said Pennacchio. "People want to know that their vote will be counted, and that they don't have to worry about vote tampering, or any other interference."

The Senator's legislation further addresses the public's distrust by requiring open-source coding for the software controlling scanning equipment and other gear.

The code used in commercial voting booths is proprietary and can obscure vulnerabilities that hackers may exploit. The code in open-source software, on the other hand, is accessible to the large community of developers who can uncover weaknesses in the system and, more importantly, can create transparency across the entire system.

"Requiring the use of open-source coding will allow developers and coding experts to comb through the programs and identify flaws and security vulnerabilities," said Pennacchio. "It will increase oversight and public confidence in the process."

In 2019, Switzerland was prepared to roll out a new voting system prior to a national election. When the private source code was leaked, however, researchers detected numerous weaknesses that went undiscovered by a team of professional auditors.

"Utilizing this unusual pairing of old-school process and evolving technology can help ensure the accuracy and reliability of election results and preserve our democracy for many generations to come," Pennacchio said.


Read the original post:

Pennacchio Introduces Bill Promoting Election Transparency and Integrity - InsiderNJ

Protecting today's web applications requires more than a firewall – Security Boulevard

The way organizations build web applications has changed dramatically over the last several years. As a result, many organizations are considering additional security strategies to augment the Web Application Firewall (WAF) on which they have relied to protect critical digital business operations from vulnerabilities. New technology has created a development environment where the web application threat landscape grows larger and more complex every day. Fortunately, there are solutions available to shore up your web application security and account for vulnerabilities you may not know you had. Implementing these solutions will require you to think differently about web application security.

In this post, we'll talk about what has changed in the web application development process that makes you vulnerable to different threats, and why relying on patching alone to address them may no longer be sufficient. We'll also explain how a shift-left, inside-out approach with a unified security model will augment the protection you get from your WAF and create a more sustainable and scalable application security strategy.

Every year, we discover new vulnerabilities in commercial off-the-shelf software, open source libraries, or in dependencies introduced by other applications. The number of vulnerabilities has exploded over the last few years: as we wrote more code to create more applications, we also used better tooling and better analysis to find the flaws in them. Another cause of the vulnerability explosion lies in the way organizations compose applications. Developers are moving beyond traditional web application building methods and introducing elements such as APIs and microservices, which bring their own vulnerabilities into the mix.

Legitimate web application developers are not the only people out there looking for vulnerabilities. Imperva Research Labs reported that in the first half of 2021, Imperva blocked 40% more web application incidents in the financial services industry than over the same time period in 2020. Given what we know, this is not a surprise. The COVID pandemic has left lots of prospective attackers sitting at home out of work, potentially looking for different avenues to get money. The tools required to carry out attacks have been democratized over the years, making it easier and cheaper than ever to constantly take pot shots at applications and see what sticks. This generation of bad actors has the time and motivation to look for and exploit vulnerabilities to launch attacks on high-value targets like financial services.

Applications may be categorized as first-party applications and third-party applications. First-party applications are those that your organization writes and develops itself. Let's say an eCommerce company has a website where people can add items to a shopping cart and buy them when they check out. The website is a first-party application. If this company doesn't do the checkout itself, but instead employs another system, component, microservice, or API to do it, then it is using a third-party application working in conjunction with the first-party application (the website). Some organizations have zero development capabilities and use only third-party applications, something as simple as off-the-shelf software they deploy in their environment.

Historically, organizations have had control over first-party applications (there are blind spots that we'll address later). You have control over your own code, you can audit some of the open-source code that's coming into your environment, and so you have some level of observability. However, with most tools, you don't really have visibility into the security of third-party applications; you find yourself virtually taking the vendor's word that they are protected.

The key element that stretches between first and third-party applications is risk, and both types of applications can introduce risk back into the overall environment that you need to manage to protect your applications. One risk mitigation technique is to draw a firewall between first and third-party applications. As the threat landscape grows, this is unlikely to be sufficient. Even if your vendor patches a zero-day vulnerability, the response may be inexact, take too long, and fail to prevent the code vulnerability from showing up everywhere in your environment.

In 2021, the non-profit Open Web Application Security Project (OWASP), which helps website owners and security experts protect web applications from cyber attacks, introduced for the first time the concept of insecure design as the 4th most important security risk affecting web applications. This represents an overall realization that developers should consider a shift-left approach to application security, putting a greater emphasis on more rigorous threat modeling and secure design upfront in the process. Going forward, application developers may need to consider compensating controls as part of their overall design to counter bringing in risky third-party code that has unknown vulnerabilities.

OWASP's greater emphasis this year on the risks of using components with known vulnerabilities is really a sign of the times in terms of what's going on with supply chain attacks. There are more bad actors than ever trying to compromise supply chain vulnerabilities. The most efficient way to wreak supply chain havoc is to compromise a library or component that many organizations use.

Traditional security methods are highly effective at stopping threats from the outside that are trying to break into an environment, but are ineffective at stopping supply chain attacks because you just don't see what's going on inside of the application. As the code base and the number of dependencies increase, developers need to understand exactly what's going on in or around applications in the software supply chain, whether it's bad dependencies, bad packages, or solutions that surround the core application, in either first-party or third-party applications.
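A first step toward that kind of dependency visibility can be sketched in a few lines of Python. This is a hedged illustration, not a real scanner: the advisory list below is fictional, and a production check would query a maintained vulnerability database (for example via a dedicated audit tool or an SBOM scanner).

```python
# Illustrative sketch: inventory installed packages and flag any that
# match a (fictional) advisory list. Real supply-chain checks consult
# live vulnerability databases, not a hard-coded dict like this one.
from importlib import metadata

# Hypothetical advisories: package name -> versions known to be affected.
ADVISORIES = {
    "leftpadlib": {"1.0.0", "1.0.1"},   # fictional package and versions
}

def inventory() -> dict:
    """Map every installed distribution in this environment to its version."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}

def flag_vulnerable(installed: dict) -> list:
    """Return (name, version) pairs that match a known advisory."""
    return [(name, ver) for name, ver in installed.items()
            if ver in ADVISORIES.get(name, set())]

if __name__ == "__main__":
    found = flag_vulnerable(inventory())
    print(found or "no flagged dependencies")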

In the positive security model, you allow the application to do what it does based on its current behavior, and anything outside of that behavior, or abnormal behavior, is just not going to work. You don't require any signatures or updates, and you can protect the entire stack. You can easily deploy it by flipping a switch, with no need to constantly update it with patches and no risk of degrading the application's performance. This is where runtime application self-protection (RASP) comes in. RASP is a lightweight agent that attaches to your applications and, unlike a firewall, which protects from the outside, offers code-level protection from the inside out. RASP lives squarely in the realm of development and security.
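The allowlist idea behind a positive security model can be illustrated with a toy sketch. This is not how any particular RASP product is implemented (real agents hook into the runtime itself); it only shows the learn-then-enforce principle, and every name and behavior below is invented for the example.

```python
# Toy positive security model: during a learning phase, record what the
# application normally does; in enforcement mode, deny anything that was
# never observed. This is the opposite of a denylist, which blocks only
# known-bad patterns.
class PositiveModel:
    def __init__(self):
        self.allowed = set()    # learned (operation, target) pairs
        self.learning = True

    def observe(self, operation: str, target: str):
        """During the learning phase, record normal application behavior."""
        if self.learning:
            self.allowed.add((operation, target))

    def permit(self, operation: str, target: str) -> bool:
        """In enforcement mode, anything never seen before is denied."""
        return (operation, target) in self.allowed

model = PositiveModel()
model.observe("sql_query", "SELECT * FROM products WHERE id = ?")
model.observe("file_read", "/app/templates/index.html")
model.learning = False          # switch from learning to enforcement

print(model.permit("sql_query", "SELECT * FROM products WHERE id = ?"))  # learned
print(model.permit("file_read", "/etc/passwd"))  # never observed, denied
```

The appeal noted in the article follows from this shape: no signature feed to update, because the model is defined by the application's own behavior rather than by a catalog of known attacks.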

A web application firewall is great at stopping attacks that exploit known vulnerabilities. In instances where an attack targets unknown vulnerabilities in the code of the applications themselves, a unified positive security model including RASP provides more targeted controls designed to automatically mitigate those types of attacks. The same thing applies to client-side attacks, which target users in their actual browser environments. With visibility from inside the application, Imperva RASP is able to understand how the individual components work with one another and offer automated mitigation, so that enterprises can focus on business logic without compromising on security.

Deficiencies in so many organizations' ability to protect the overall supply chain have driven changes in today's regulatory landscape, and RASP is now frequently identified in regulations as an effective control in delivering that protection.

This post was adapted from the webinar, Mitigate Vulnerabilities in Application Code Without Emergency Patching. Watch it on demand here.

See RASP in action yourself. Start a free trial today.


*** This is a Security Bloggers Network syndicated blog from Blog authored by Bruce Lynch. Read the original post at: https://www.imperva.com/blog/protecting-todays-web-applications-requires-more-than-a-firewall/

Read this article:

Protecting today's web applications requires more than a firewall - Security Boulevard