Public Key Signature: What It Is & Why It’s Everywhere – Hashed Out by The SSL Store

PKI digital signatures can be found virtually everywhere, from digitally signed emails and software to secure websites. We'll break down what a PKI signature is and how it helps protect your data's integrity.

Remember when you were a kid and your parents told you that if you put your mind to it, you can do or be anything you want? Well, on the internet, that is kind of true. You can pretty much make your own truth about yourself: you could be a teenager, an adult, or a company's CEO. Without a way to prove your claims are legitimate, no one will be any the wiser.

Cybercriminals know this and love to take advantage of it. That's why we have all the issues we do today relating to phishing and other sorts of predatory cyber attack techniques. Before the internet, you had to meet up with someone face to face to securely exchange information or send coded, encrypted messages.

But now that people are communicating and doing business with others across the world instantaneously, face-to-face meetups are no longer feasible in most cases. So, to protect yourself and your customers, you need to have a way to prove your identity online and help people know that your emails, files, and software are legitimate and haven't been faked. This is where PKI signatures come into play.

But what is a public key signature? How is a digital signature different from other electronic signatures? And where can you find PKI digital signatures in action?

Let's hash it out.

Before we can dive head-first into the nitty-gritty of public key signatures, it would be smart to at least briefly recap what a digital signature is as well as the role it plays in public key infrastructure (PKI). After all, you can't run the play if you don't know the rules.

A PKI signature is a form of verifiable digital identity that helps you prove that you (or something you create) are real. In a way, it's kind of like a fingerprint because it's something that uniquely identifies you. However, it's more than just identity. A digital signature is a way for your organization to affirm its legitimacy through the use of a digital certificate (such as a code signing certificate) and a cryptographic key.

In a nutshell, using a PKI digital signature enables you to attach your verifiable identity to software, code, emails, and other digital communications so people know they're not fake. This helps you prove both who you are (identity) and that what you've signed hasn't been altered (integrity).

If that all seems a bit complicated, let's break this down with a simpler analogy.

A PKI signature is the modern equivalent of a wax seal that people historically used to secure sensitive communications. Before the internet or the invention of the telephone, people would either meet up in person or communicate remotely via written letters. Of course, without digital communications, these messages would have to be delivered by hand via train, boat, or horseback riders, which means that these messages could be intercepted on their way to their intended recipients.

Say you want to send a sensitive message to a friend. You'd want to have a way to let them know that you signed it and that the message hasn't been tampered with in any way. Years ago, you'd use a wax seal to achieve this: writing your message, closing it up with melted wax, and pressing your unique signet into the wax before it hardened.

When your friend receives your message, they'll see that the wax seal is intact. This unbroken wax seal indicates that your message is legitimate in two crucial ways: the impression of your unique signet shows that the message really came from you, and the unbroken wax shows that no one opened or altered it along the way.

In much the same way, communications on the internet also need these same types of protections. While they're not being sent by horseback, digital communications pass through a lot of hands as they transmit across the internet in the form of servers, routers, and other intermediaries until they reach the right destination. This means that cybercriminals would have many opportunities to alter or manipulate your information in transit if there wasn't a way for the recipient to verify the message's integrity.

Here's a great video from Computerphile that helps to explain PKI digital signatures in another way:

People often mistakenly treat PKI digital signatures and electronic signatures as the same thing, but that's not quite true. Yes, a digital signature is a type of electronic signature, but not all electronic signatures are digital signatures. It's kind of like how all iPhones are smartphones but not all smartphones are iPhones. Sure, both are a way to say you're someone on the internet, but only one of them (*cough* PKI signature *cough*) can actually help you prove your identity, because it's more than just an online signature that can be altered.

It's kind of like getting an autograph from your favorite athlete, like, say, quarterback Tom Brady. (Sorry, Pats fans, Tom is ours now! #TampaBayBucs) Sure, you could just walk up to Tom at a bar and ask him to sign something. But without some way to authenticate that his signature is real, like, say, an official certificate of authenticity, someone could argue that anyone could have signed his name.

Or, for all they know, you really could have gotten Tom to autograph one item. But what would stop you from sitting at home on the weekends, using his signature as an example so that you can forge his autograph on a bunch of Buccaneers team gear that you want to sell? Well, nothing, unless your prospective buyers had a way to verify the autograph's legitimacy.

This is kind of like the difference between an electronic signature and a digital signature: an electronic signature is just a mark that says you agreed to something, and anyone could copy it, while a digital signature is backed by a certificate and cryptographic key that let others verify who signed and that nothing changed afterward.

To really get at the heart of understanding public key signatures, you need to know about two cryptographic processes that play pivotal roles in their creation: encryption and hashing.

This cryptographic process takes a mathematical algorithm and applies it to plaintext (readable) data to scramble it into an unreadable state. It can use either symmetric encryption (a single shared key) or asymmetric encryption (a public/private key pair).

As you can see, there are some key differences (excuse the pun) between asymmetric and symmetric encryption. Regardless of those differences, the process is, essentially, reversible (using the decryption key), which means that encryption is a two-way function.

In digital signatures, encryption is used specifically to encrypt the hash data to create the digital signature. (It doesn't encrypt the file or email you want to digitally sign; it only encrypts the hash value.)
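
To make that concrete, here's a minimal sketch of signing and verifying with Python's cryptography package. The article doesn't prescribe a tool or key type, so the RSA key, padding choice, and message below are illustrative assumptions rather than a recommended configuration:

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"An email body or file contents to be signed"

# sign() hashes the message (SHA-256 here), then encrypts that hash value
# with the private key; the result is the digital signature.
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Anyone with the public key can verify: the signature is decrypted and
# compared against a freshly computed hash of the received message.
private_key.public_key().verify(
    signature, message, padding.PKCS1v15(), hashes.SHA256()
)  # raises InvalidSignature if the message or signature was altered
print("signature verified")
```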

Hashing is a cryptographic function that also applies a mathematical algorithm to data and files. However, its purpose is different from that of an encryption algorithm: a hashing algorithm takes data of any length and maps it to an output (hash value) of a fixed length. For example, you can take a single sentence or an entire book, apply a hash function to it, and the result will be an output (hash value) of the same fixed length.

Because the process isn't reversible, there's no key that reverts or maps the hash value back to the original input. This means that hashing is a one-way cryptographic function. (You know, because hashing only works in one direction.)
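
Here's a quick illustration of that fixed-length property using Python's standard-library hashlib (SHA-256 is just one common choice of hash function):

```python
import hashlib

sentence = b"A single sentence."
book = b"An entire book. " * 50_000  # roughly 800 kB of text

# SHA-256 maps any input, short or enormous, to a 256-bit digest.
for data in (sentence, book):
    digest = hashlib.sha256(data).hexdigest()
    print(len(digest), digest[:16] + "...")  # always 64 hex characters
```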

In truth, digital signatures can be found all across the internet. For example, you can use digital signatures in the following applications:

A website security certificate, or what's known as an SSL/TLS certificate, is one of the most central components of security on the internet. Installing this certificate on your server enables you to secure your website using the secure HTTPS protocol. Enabling HTTPS means that whenever customers connect to your website, their individual connections (and any data they share during their session) will be secured using encryption. This is what makes that nifty little padlock icon appear in your browser.

A digital signature is a part of what's known as the TLS handshake (or what some people still call the SSL handshake). We won't get into all of the specifics here, but the first part of the handshake involves the website's server and user's browser exchanging information (including the server's SSL/TLS certificate and digital signatures) via an asymmetrically encrypted connection. Using a digital signature helps the server prove that it's the legitimate server for the website you're trying to visit.

A document signing certificate enables you to apply your digital signature to many types of documents, including Microsoft Office documents and PDFs (depending on the specific certificate you use). Here's a quick example of what a digital signature looks like:

Using an email signing certificate (i.e., an S/MIME certificate) allows you to apply your digital signature to your emails. This provides identity assurance and protects the integrity of your communications.

Note: For extra security, you can also use this certificate to send encrypted emails (to users who also use email signing certificates). This provides secure, end-to-end encryption that protects your data both while it's bouncing between servers and routers and while it's sitting on your recipient's email server.

Using a code signing certificate helps you to protect your supply chain. It also offers assurance to users who download your software that your software is both legitimate and unmodified.

When you sign your software using a code signing certificate, you'll display your verified organization information (as shown in the screenshot on the right):

Of course, unsigned software (and software signed using standard code signing certificates) can trigger Windows SmartScreen warning messages as well; the difference is that digitally signed software would display the verified publisher information instead of "Unknown publisher."

To avoid displaying Windows SmartScreen messages, be sure to sign your software, code, and other executables using an extended validation code signing certificate. Using this PKI digital signature ensures Microsoft and its browsers automatically trust your software.

Remember the SSL/TLS handshake that we mentioned earlier? Well, in two-way authentication, or what's known as mutual authentication, both the server and the client prove their identities to one another. This means that in addition to the server providing its information to the client, the client must do the same by providing information to the server.

This information includes a generated hash value, digital client certificate, and cryptographic public key. The client generates the hash using data it exchanges with the server and encrypts the fixed-length string using its private key (which is mathematically related to the public key it shares).

Here's a basic overview of how this process works:
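
In short: the two sides exchange handshake data, the client hashes that data and encrypts the hash with its private key, and the server checks the result using the public key from the client's certificate. Below is a minimal sketch of how each side of mutual TLS can be configured with Python's standard-library ssl module; the certificate and key file names are placeholder assumptions:

```python
import ssl

# Server side: require and verify a certificate from every client.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
server_ctx.verify_mode = ssl.CERT_REQUIRED           # demand a client cert
server_ctx.load_verify_locations(cafile="client-ca.pem")

# Client side: present a certificate in addition to checking the server's.
client_ctx = ssl.create_default_context(cafile="server-ca.pem")
client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
```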

Public key signatures are essential in an internet-oriented world. As more companies are moving to the cloud and relying on this public network to conduct business and provide services, the roles of identity and integrity in security become more important.

Of course, we've talked about the reasons why it's so important at length in a previous article. Be sure to check out our article on why you should use digital signatures to sign everything. But we'll quickly summarize the key reasons why digital signatures matter:

Thanks to all of you who've stuck with this article to get to this point. For those of you who've decided to skip to the end for the "too long; didn't read" portion of our article, welcome. We know your time is precious, so here's a quick overview of what we've covered in this article so you can skim and head out on your way.

All of this is to say that this cryptographic technique is all about helping companies prove their authenticity and giving users a way to verify that files, software, and other information haven't been manipulated or altered since they were digitally signed.

Stay tuned next week for a related article that will break down how digital signatures work.


IoT News | The Security Risks of Open Source Software – IoT Business News

The phrase "no person is an island" means that no person is completely self-sufficient; all of us rely on others to some extent in order to survive and thrive. The same is true of software. While it is technically possible for every piece of software to be built completely from scratch, this simply isn't practical in most cases.

Instead, developers frequently use modules or packages of code, often found in open source repositories such as GitHub, which they can piece together into their software. Think of these as the pre-constructed window frames, doors, and bricks that a builder might use to construct a new house.

There are multiple reasons why developers might rely on open source code in this way. A big one is the speed at which developers must often work. A developer likely has a fixed budget and deadline that they're working to, making it impractical to spend time building every single component of the software they're working on. Using open source code also allows them to build their programs using components they might not have the expertise to build themselves. To return to the house-building analogy, a person building a house may not have the expertise to create beautifully constructed doors. In addition, the crowdsourced nature of open source code, which has been contributed to and examined by large numbers of users, can help with spotting and fixing bugs and potential vulnerabilities.

With this in mind, it's no surprise to hear that open source ecosystems are booming, whether that's Java, JavaScript, .NET, or Python: hundreds of thousands of projects, drawing on millions of downloadable packages available to developers. Those numbers are only going to increase over time.

But while open source software brings no shortage of benefits, it nonetheless poses potential risks to developers. That's where tools like a WAF can help. What is a WAF? Short for web application firewall, it's one of the many cybersecurity tools available to help devs tackle a growing problem. Consider it a must-have.

Open source, by its nature, attracts large numbers of users from all over the world. According to one report, open source code is found in upward of 30 percent of commercially released applications, and far more when considering tools such as software for internal use. Unfortunately, it's not just the good folks that are attracted to open source.

The number of attacks on open source projects has ramped up significantly. One piece of analysis suggests that the number of attacks has increased by upward of 650 percent over the past year.

For attackers, one of the reasons for trying to target open source projects is because it allows them to poison the well that is then used by large numbers of applications. Rather than targeting proprietary or custom code, if an attacker can find a way to carry out malicious code injection or some other attack targeting open source projects, this tainted code could then be baked into legitimate software.

Although open source code is, by its nature, open and inspectable, many developers may not spend the necessary time carrying out this inspection process. Instead, they could assume that this bug-spotting has been carried out by other users, opting instead to spend that time developing new features or getting on with other projects.

Companies that do not do proper due diligence when it comes to the use of open source modules or packages in applications could introduce serious vulnerabilities, making possible everything from large-scale data exfiltration to remote code execution. The damage could be major, whether that's non-compliance with data protection laws, operational risks, or damage to the reputation of the companies that use this open source code.

Protecting vulnerable open source code is essential. Luckily, there are tools that can help. A WAF or WAAP (web application and API protection) solution can help to virtually patch open source vulnerabilities, preventing them from being exploited. These tools can offer protection against security issues that may plague open source code, detecting and quickly blocking any attempted exploitation of code vulnerabilities by hackers.
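
As a rough illustration of that "virtual patching" idea, here's a toy WSGI middleware in Python that rejects requests matching a known attack pattern before they reach the application. Real WAFs ship large, curated rule sets (the OWASP Core Rule Set, for example); the single regex below is only a sketch:

```python
import re
from urllib.parse import unquote

# One "virtual patch": a pattern for a known class of exploit attempts.
# Real WAF rule sets are far larger and regularly updated; this is a toy.
BLOCKED = re.compile(r"union\s+select|<script|\.\./", re.IGNORECASE)

def waf_middleware(app):
    """Wrap a WSGI app so suspicious query strings never reach it."""
    def wrapper(environ, start_response):
        query = unquote(environ.get("QUERY_STRING", ""))
        if BLOCKED.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by WAF rule\n"]
        return app(environ, start_response)
    return wrapper
```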

Adopting these tools is among the smartest moves organizations can make. This way, customers and users can continue to enjoy the myriad advantages the open source software community has to offer without having to worry about potential risks.

While it's still crucial that developers properly inspect the code they use, this is nonetheless a valuable safeguard for any potential vulnerabilities that slip through the cracks. Attacks on open source projects aren't going away. But by using solutions such as this, it's possible to mitigate the worst potential damages they can cause.


"Proactive security scanning of code is a must" – JAXenter

JAXenter: Hi Chris, thanks for taking the time to answer our questions. Can you tell us about the UA-Parser-JS NPM Open Source library hack? What happened and how many people were affected?

Chris Eng: UA-Parser-JS is a popular open source library that performs a simple but useful function: it determines the browser, engine, operating system, CPU, and device type by inspecting the User-Agent header sent by the end user's web browser. The library is downloaded millions of times per week and is used by projects at major enterprises including Facebook, Amazon, Google, and Microsoft.

For a four-hour period on Friday, October 22, the library contained malicious code that would infect devices with cryptominers and password-stealing malware. When a compromised package is installed on a user's device, a script checks the operating system and launches a Linux shell script or Windows batch file that attempts to steal passwords stored on the device. Cryptominers were deployed in a similar way.

Any project using ua-parser-js that performed an automated build during this four-hour window would be potentially impacted. Additionally, if a developer manually downloaded the infected library during that time frame and introduced it into a new or existing project, they would be affected as well. It's unclear at this time how many downstream users were impacted by the malware.
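
One quick way for a team to gauge its exposure is to scan lockfiles for the affected versions. Here's a rough Python sketch assuming an npm v2/v3 package-lock.json layout; the version set below reflects the versions reported as compromised, but confirm it against the official advisory before relying on it:

```python
import json

# Versions of ua-parser-js reported as compromised; verify against the
# official advisory before relying on this list.
COMPROMISED = {"0.7.29", "0.8.0", "1.0.0"}

def check_lockfile(path="package-lock.json"):
    with open(path) as f:
        lock = json.load(f)
    # npm v2/v3 lockfiles record every installed package under "packages",
    # keyed by its node_modules path (including nested copies).
    for name, meta in lock.get("packages", {}).items():
        if name.endswith("node_modules/ua-parser-js"):
            version = meta.get("version")
            status = "COMPROMISED" if version in COMPROMISED else "ok"
            print(f"{name} {version}: {status}")

check_lockfile()
```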


JAXenter: What was the response to the attack and how was it fixed?

Chris Eng: The owner of NPM, GitHub, was quick to act. They removed the compromised packages and issued a security advisory for those using the affected versions. The package maintainer also quickly released patches so that any projects configured to pull down the most recent version of the library would receive a clean copy. Users have been advised to upgrade to newer versions of the library, as well as check their systems for compromise, with a full list of indicators of compromise being shared.

JAXenter: How likely is it that another malicious package will be spread in a similar manner and what potential harm could it cause?

Chris Eng: Very likely. It's not the first time we've seen a supply chain attack, and it won't be the last. Threat actors, like the ones from this incident, could introduce any sort of malicious code. The outcome could be even worse if the library developers were less active, or if the injected code were stealthier; for example, if it had introduced a subtle vulnerability in the library that could be exploited later.

JAXenter: What are the unique security concerns that come with open source software?

Chris Eng: We often assume security has been considered by the developers behind an open source library, but that's not always the case. Developers publishing open source code may not have taken security into account at all. And these libraries are being updated all the time, sometimes with fixes to issues, and other times with more vulnerabilities added in. Proactive security scanning of code is a must when we can't ensure the software is taking security into account in development.

Seven of every 10 applications use at least one open source library with a vulnerability, according to research from ESG last year. Despite the prevalence of vulnerabilities in open source code, organizations continue to use these libraries without much regard for security. Veracode's State of Software Security (SoSS) v11: Open Source edition from earlier this year found that nearly 80% of the time, third-party libraries are never updated once they are added to a code base.

A great example of the potential harm these malicious open source packages can bring is the 2017 Equifax hack. Failing to update vulnerable open source libraries was one of the contributing factors in that breach, which compromised social security numbers and other PII for over 143 million people.

JAXenter: Of course, we shouldn't trust packages based merely on their number of downloads. So, what can we do to boost our open source security practices to avoid malicious code?

Chris Eng: For starters, we need to improve upon that stat that nearly 80% of third-party libraries are never updated after they are added. Regularly updating open source software allows new versions with security patches to replace older versions with exploitable vulnerabilities.

Removing vulnerable code is as much a cultural issue as a technology issue, though. Leadership needs to carve out time for developers to scan their software for vulnerabilities before deployment and take action to remediate the findings. Usually there is an emphasis on pushing out new features as quickly as possible, without factoring in the risk associated with new vulnerabilities, not to mention existing security debt.

We need to add early and recurring security scanning for open source code into the day-to-day operations of DevSecOps. Additionally, teams should proactively scan after deployment as well, since new vulnerabilities in open source libraries are discovered all the time.
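
For Python projects, one way to wire such a recurring check into a CI job is to gate on a dependency scanner's exit code. The sketch below uses pip-audit as one example of such a tool; the tool choice is an assumption, not something Eng names:

```python
import subprocess
import sys

# pip-audit (a PyPA tool) exits non-zero when it finds dependencies with
# known vulnerabilities, so a scheduled CI job can gate on its return code.
result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Dependency scan found vulnerabilities; failing the build.")
```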


JAXenter: How dependable are security scanning tools? Should we spend more time analyzing manually, or is human error more of an issue?

Chris Eng: Security scanning tools are a must-have for today's enterprises. There are far too many vulnerabilities in software to rely on a manual approach. Security scanning tools can automatically identify both first-party and third-party software vulnerabilities. Enterprises should invest in scanners that take into account business objectives, levels of risk for each vulnerability, and flaws that can be fixed the fastest to create a clear path forward for remediation. Ideally, the best scanners will provide these results quickly, automatically, and holistically across the entire software lifecycle.

JAXenter: And finally, what's in your essential security toolkit?

Chris Eng: Lots of things, but let's focus on this particular incident. All of this likely began with the takeover of a developer account. Control of the account allowed the threat actor to inject malicious code, which then propagated to projects using the library. I'm willing to bet that the compromised developer account was not using multi-factor authentication (MFA) and was protected only by a password. And that password was likely compromised either via a phishing attack or as a result of an unrelated breach where the developer had reused a common password. Had the developer enabled MFA on their npm account, this attack probably never would have happened.

For organizations building software with open source libraries, a Software Composition Analysis (SCA) tool is key. Being able to quickly identify which of your projects use this vulnerable library, either directly or transitively (i.e., via a different library which depends on the vulnerable library), is an important first step in understanding your organization's exposure to an incident like this.
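
At its core, that first step is a reachability question over the dependency graph. Here's a toy Python sketch; the package names and graph are invented for illustration, and a real SCA tool would derive the graph from lockfiles or build manifests:

```python
# package -> its direct dependencies (a made-up example graph)
graph = {
    "my-app": ["express", "analytics-widget"],
    "analytics-widget": ["ua-parser-js"],
    "express": [],
    "ua-parser-js": [],
}

def depends_on(root, target, graph, seen=None):
    """Return True if root uses target directly or transitively."""
    if seen is None:
        seen = set()
    for dep in graph.get(root, []):
        if dep == target:
            return True
        if dep not in seen:
            seen.add(dep)
            if depends_on(dep, target, graph, seen):
                return True
    return False

print(depends_on("my-app", "ua-parser-js", graph))  # True, via analytics-widget
```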


In the ’80s, spaceflight sim Elite was nothing short of magic. The annotated source code shows how it was done – The Register

Just a fortnight under 40 years ago, the BBC Micro was released. Although it was never primarily a games machine (it was too expensive, for a start), nonetheless one of its defining programs was a video game: Elite.

Its source was released a few years ago, but your correspondent just discovered a lavishly described and documented online edition, if you want to see exactly how it was done. The annotations were written by Mark Moxon, a web dev and journalist who, among many other things, was once editor of Acorn User magazine.

Elite was famous for several things, including its very considerable difficulty and its (amazing for 1984) wireframe 3D graphics with hidden-line removal. This was displayed on a screen which combined high-resolution and multi-colour graphics in a way the BBC's hardware couldn't natively do: the game changed screen modes from Mode 4 (medium-resolution monochrome) to Mode 5 (low-resolution four-colour) two-thirds of the way through generating each screen. At 50Hz, on a 2MHz 6502.

Some of the remarkable features were not so obvious, though. For instance, the game contained eight galaxies, each with 256 planets. A database of 2,048 star systems would have filled the computer's tiny 22kB of memory. (22kB? Yes: modes 4 and 5 both take up 10kB of the Beeb's standard puny 32kB of RAM.) This would have been far more obvious with the programmers' original 2^48, or 281,474,976,710,656, planets.

The answer was that the game generated the list of galaxies and planets on the fly, using a modified Fibonacci sequence, allowing for more places to explore than would fit into the program. A similar method was used to generate the 4,000 unique locations in Mike Singleton's Lords of Midnight, released the same year.

The difference being that unlike Singleton's ZX Spectrum game, you can read about what Elite did on the Elite Wiki and then study the source code to see how developers Ian Bell and David Braben achieved it.
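
To give a flavor of the technique, here's a small Python sketch of a Fibonacci-style "seed twisting" generator in the spirit of Elite's. The seed value and attribute extraction are illustrative only; Moxon's annotations document the real 6502 routines:

```python
def twist(seed):
    """Advance three 16-bit seed words with a Fibonacci-like recurrence."""
    s0, s1, s2 = seed
    return (s1, s2, (s0 + s1 + s2) & 0xFFFF)  # wrap at 16 bits, as on the 6502

seed = (0x5A4A, 0x0248, 0xB753)  # an illustrative 48-bit galaxy seed

for n in range(4):
    # Read some planet attributes straight off the current seed words.
    economy = seed[1] & 0x07
    tech_level = (seed[1] >> 3) & 0x0F
    print(f"planet {n}: economy={economy}, tech level={tech_level}")
    for _ in range(4):  # a few twists advance to the next planet's seed
        seed = twist(seed)
```

Because the sequence is deterministic, an entire galaxy can be regenerated from a few bytes of seed instead of being stored.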

Moxon's site, which has been updated throughout 2020 and 2021, covers the original cassette and floppy-disc versions, as well as those for the Electron, BBC Master, 6502 Second Processor, unreleased versions and even the third-party enhancement Elite-A.


Pantheon Kicks off Program to Give Back to Open Source Communities with Second Annual Gift of Open Source – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Pantheon, the SaaS-based website operations (WebOps) platform for developers, designers and marketers, today announced it will kick off its second annual Gift of Open Source on Dec. 1. The free, month-long event, which runs through Dec. 31, connects technical and non-technical audiences to opportunities to contribute to open source projects that are designed to make the open web more inclusive, efficient and impactful for all.

The program's ultimate goal is to provide resources and mentorship to engage and energize first-time contributors to give back to open source. Opportunities are broad, spanning code- and non-code-based contributions to Drupal or WordPress projects, Pantheon repositories, GitHub pull requests or adjacent projects. For each contribution made, up to 500 contributions, Pantheon will donate $20 to the Drupal Association and WordPress Foundation, for a total potential of $5,000 to each organization to support their efforts.

"We are passionate advocates for the open web, and we believe deeply that it will play a critical role in solving many of today's most meaningful challenges," said Josh Koenig, Co-Founder and Chief Strategy Officer at Pantheon. "This event is all about promoting the open web's power, growing the community of contributors and celebrating the collective mark we can make on the future together."

During the inaugural Gift of Open Source in 2020, the event generated more than 140 technical and non-technical contributions across open source projects. Among these were contributions that helped non-profit organizations enable donation collection via their sites; added multi-tag filtering support to Terminus; and improved AI transcripts for diverse speaker training workshops, making the content easier to translate and extend to broader audiences. For many, the event marked their first contributions to the community.

"My name appeared in the WordPress credits for the first time ever thanks to the Gift of Open Source," said Joel Yoder, Director of Web Services at Saint Mary-of-the-Woods College. I am eager to jump in and find a few projects to dig in on during this years event.

Registration will remain open through Dec. 31, 2021. However, all participants must share their contributions here by 12:59pm PST on Dec. 31 to receive credit. Details on participation, available project matching resources, and incentives for participants are available on pantheon.io.

About Pantheon

Pantheon's WebOps Platform powers the open web, running sites in the cloud for customers including Stitch Fix, Okta, Home Depot, Pernod Ricard and The Barack Obama Foundation. Every day, thousands of developers and marketers create, iterate on, and scale websites on the open web to reach billions of people globally. Pantheon's SaaS model puts large and small web and digital teams in control of increasing the performance of their teams, websites, and marketing programs. Pantheon's cloud-native software includes governance, security and collaboration tools that make it easy to securely manage a single website or thousands of websites across multiple teams in one platform. The built-in ability to simultaneously create, test, deploy and run live sites with unrivaled hosting speed, scalability and uptime gives marketing teams the agility to win in the dynamic world of digital marketing.


Addressing the Low-Code Security Elephant in the Room – Dark Reading

With all the hype around low-code/no-code platforms, many are now touting the benefits of adopting low-code/no-code development. Let's address the (security) elephant in the room: Anyone can spin up applications using these tools, but who is responsible for the security of these applications?

If, similar to cloud computing, it is a shared-responsibility model, then where do we draw the lines of responsibility among the different parties involved?

One Size Does Not Fit All

Low-code applications are diverse: They come in different forms, vary in how they are deployed, and solve a broad range of problems. When discussing the security responsibility model for low-code applications, we have to first understand the different layers of a low-code application. Here is a brief summary:

We can also consider the low-code platform development environment used to develop the application as Layer 0. Even if you do everything necessary to rigorously secure your application, if a malicious user gets access to your development console, that's just as bad.

Security Is a Shared Responsibility

Cloud computing's approach to the shared-responsibility model is straightforward: As you advance in your cloud journey and adopt higher levels of abstraction, the security responsibility shifts away from you and toward the cloud provider.

Should we consider low-code/no-code applications as yet another step in this evolution?

It depends. Where the responsibility lies depends on the choices you make when adopting low-code development. For example, with the infrastructure layer, are you planning on hosting your application in a private cloud or a public data center? Some low-code/no-code platforms are designed specifically for on-premises or hybrid cloud/on-premises deployments. If you decide to host your own applications, you will have full control over the underlying infrastructure, but that also means you are responsible for securing every aspect of the environment.

Application-Layer Choices

What are some development choices about the application layer that affect the security responsibility?

If the low-code application is strictly made up of low-code platform native capabilities or services, you only have to worry about the basics. That includes application design and business logic flaws, securing your data in transit and at rest, security misconfigurations, authentication, authorization and adherence to the principle of least privilege, providing security training for your citizen developers, and maintaining a secure deployment environment. These are the same elements any developer, low-code or traditional, would need to think about in order to secure the application. Everything else is handled by the low-code platform itself.

That is as basic as it gets.

But what if you are making use of additional widgets, components, or connectors provided by the low-code platform? Those components and the code used to build them are definitely out of your jurisdiction of responsibility. You may need to consider how they are configured or used in your application, though. It's possible that an incorrectly used component may lead to a potential vulnerability in your application.

For example, most low-code platforms provide a SQL database connector, which enables low-code app developers to run SQL queries to access the data stored in the databases. In some common SQL connectors that we looked at, we saw several methods for interacting with databases: Some provided strict security and allowed less flexibility to developers, while others were more flexible. If used incorrectly, those connectors with flexible methods could lead to a disastrous SQL injection (SQLi) vulnerability. For example, a successful SQLi attack against a low-code application can result in unauthorized access to the data. The attacker may be able to manipulate the data or even execute shell commands on the database server.
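
The classic defense applies to low-code connectors just as it does to hand-written code: bind user input as parameters rather than concatenating it into the query string. A minimal Python/sqlite3 illustration (the table and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # a classic injection payload

# UNSAFE: string concatenation lets the payload rewrite the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query leaked:", rows)     # returns every row

# SAFE: a parameterized query treats the payload as literal data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```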

The third choice is to extend the components library with custom components because the low-code/no-code platform of choice does not provide all the needed (or desired) functionality. For example, you may create Mendix custom widgets to create dynamic menus in your application, Appian custom plug-in components to render a Google Maps object, or Canvas Apps in Microsoft Power Apps to integrate data from other Microsoft applications.

While custom-built components provide extensibility and the freedom to create functionality as you see fit, they also introduce more code and logic to your application. Just like with traditionally developed software, more code and logic means a greater chance of introducing defects, design flaws, and security vulnerabilities. When developing custom components, even in the low-code/no-code world, make sure you have the proper SDLC and security processes in place. Developers should follow your organization's security policy and guidelines for developing and deploying applications.

Finally, you may have to rely on third-party components because the functionality you are looking for does not exist as a native service or is offered as an add-on component by your low-code platform. In this case, you will be responsible for vetting and choosing third-party components based on several factors:

Similar to vetting third-party open source packages, you must have a process in place to make sure you are not turning these components into the weakest link of your application security chain.

Choosing Between the Cloud and On-Premises

It's quite common to integrate low-code applications with existing public cloud accounts in order to consume public cloud services, such as storage buckets, message queues, databases, and so forth. If that is the case, you have to add cloud security as an additional factor in the overall security posture of your application. You should make sure you are adopting a mature cloud security posture management approach.

Many low-code/no-code platforms offer connectivity to on-premises data and applications. As an example, organizations that use the Microsoft Power Apps low-code platform have the option to use an on-premises data gateway, which acts as a bridge to provide quick and secure data transfer between on-premises data (data not in the cloud) and several Microsoft cloud services. Another example is when using the Appian low-code platform with robotic process automation (RPA), which supports a hybrid cloud/on-premises deployment model.

When creating a bridge between the cloud and your organization's on-premises infrastructure, data, and applications, you are essentially opening up your private assets to access from the public Internet. Needless to say, in such cases security and privacy should be top of mind, and access should be as restricted as possible, encrypted and monitored at all times.

Who Is Responsible? The Verdict

Given all the different options for low-code application development, there's really no simple answer. Neither is there a clear-cut line we can draw on some low-code stack security chart. Low-code/no-code is a paradigm shift in the way software is developed, from monolithic, to microservices, and now to low-code/no-code. It should not be viewed as just a way to abstract away hardware and deployment models as part of the next phase in the evolution of cloud computing.

The bottom line is that low-code/no-code applications are another form of software. It is inevitable they will contain bugs, design flaws, vulnerabilities, and misconfigurations that will introduce risk. Even if you are giving away some of the control and responsibility to a low-code/no-code platform provider or other supplier, you are still the owner of your application and its data. You remain responsible for making sure the applications are secure and adhere to your corporate security policies and standards.

Regardless of how much abstraction you use, and how much control you are giving up, always keep in mind the following two aspects: know your apps, and secure your business logic. You need to fully understand how your low-code applications are developed, deployed and maintained. Always make sure you have full visibility into your low-code applications, and address any security concerns raised here. And regardless of how your application is developed, you should always make sure that you apply secure design, development and application security best practices. A simple flaw in business logic can make the most resilient application vulnerable.


So you want to be a software developer? Advice on getting started from self-taught programmer-turned-CTO Eric Solender – Technical.ly

With the mass open source database that is the internet, you have all the resources you need to learn coding available to you, but often the problem with a sea of knowledge is distilling it into drinkable lessons.

Eric Solender is a self-taught computer programmer who teamed with then-fellow students to start Mindstand Technologies at UMBC, then left the university to become full-time CTO of the company, which uses AI to improve and measure diversity, equity and inclusion in online communities. This year, Solender was chosen as one of Technical.ly's RealLIST Engineers in Baltimore.

Now 23, Solender has been teaching himself the programming skills he's used to build his career since he was 15.

Of course, "teaching himself" is a relative term, as no man is an island. His own curiosity and industriousness were met with support along the way. Solender learned first from watching his dad work as a software engineer at Circleback, Inc., and interning with the firm after passing a high school AP computer science class. At 17, he built a concussion detection tool using the motion controls of Xbox Kinect after suffering his own severe concussion. This led to a position with Columbia-based cybersecurity firm Masterpeace Solutions working with startups. When IoT security startup Zuul became a company after spinning out of Masterpeace Launchpad, Solender worked with that team.

The engine that powered Solender's growth from company to company was his desire to learn and fill the gaps in his knowledge. Below are the tips and tricks he's learned to maximize his acquisition of coding skills.

But before we get into the lessons and advice on how to maximize the journey of a self-taught programmer, let's acknowledge the counterargument to self-teaching: learning the skill wrong. Regardless of your primary method of learning to code, whether it be a bootcamp, a computer science class or the university of YouTube, it is extremely important to learn the fundamentals of a skillset or language correctly.

Code Academy is what Solender used years ago to learn the basics of Python when it was primarily free. Now it has a paid model but there are still plenty of free resources on the site.

These are the libraries Solender sees as the best "get started quickly" resources in application programming interface (API) development: FastAPI, which helps beginners build out a web framework using Python, and Flask, a quick way to learn and get into web development.
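
For a taste of why FastAPI is considered quick to get started with, here's a minimal sketch; the route and names are arbitrary, and it runs under any ASGI server such as uvicorn:

```python
# Save as main.py and run with: uvicorn main:app --reload
from fastapi import FastAPI

app = FastAPI()

@app.get("/greet/{name}")
def greet(name: str):
    # The path parameter and type hint buy you request validation and
    # auto-generated interactive docs (served at /docs) for free.
    return {"message": f"Hello, {name}!"}
```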

Here are a few more options:

Application of the material is one of the tried-and-true methods of learning. When getting up to speed with a new coding language, Solender always looks to do a project that'll make him laugh, to solidify a technique into memory.

"I don't just do coding exercises," said Solender. "I try to come up with some very small contained project I can write in that language that will exercise all the things I need to make sure I understand."

A Texas Hold 'Em project he completed in the AP computer science course in high school came to mind:

A coding project Eric Solender completed in high school (Courtesy photo)

Here's a demo of the concussion program he made with the Xbox Kinect:

Another way to work on those skills and create projects that lead to opportunities is to contribute to open source projects. Solender's most notable GitHub project is with Mindstand.

These are books that have survived changes in technology, like Design Patterns: Elements of Reusable Object-Oriented Software by a group of technologists now known as the Gang of Four. Although published over 20 years ago and centered around C++, design patterns such as the Singleton pattern and Factory pattern highlighted in that book are still relevant today.
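
As a quick illustration of how those two patterns translate to a modern language, here's a minimal Python sketch (the class names are invented for the example):

```python
class Config:
    """Singleton: every instantiation returns the same shared object."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class Circle:
    pass


class Square:
    pass


def shape_factory(kind: str):
    """Factory: callers ask for a kind by name, not a concrete class."""
    return {"circle": Circle, "square": Square}[kind]()


assert Config() is Config()            # one shared instance
print(type(shape_factory("circle")))   # <class '__main__.Circle'>
```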

Another widely-read book is Introduction to Algorithms from MIT Press.

"My philosophy is if you can understand the patterns you can adapt them to whatever the modern language is," said Solender.

Learning an object-oriented language is fundamentally different from learning a functional language. But outside of that distinction, Solender has found that most programming languages are very similar. Learning one after learning another is closer to learning a new dialect than a whole new language.

That's what makes guides like "Teach a Python Programmer to Use GO" and "Java to GO" helpful.

"For pretty much every language there's a guide someone wrote that'll say, here's this in language A; here's what it looks like in language B," said Solender. "And that gets me to a point that I can kind of code in that language. Then I dig really deep into the way you're supposed to [code] in that language."

Solender's journey in tech is an example of what early education and more of an apprenticeship-style model of learning could achieve. Since his sophomore year of high school, he's been working in the tech industry, and it inspired that drive to learn from a variety of resources that aren't just formal education.

"If you get a little bit of encouragement and you know where to find the resources, you can pretty much teach yourself everything you need to know on your own," said Solender.


Bilibili, China’s YouTube, joins the Open Invention Network – ZDNet

Even in 2021, I still hear people saying, "Open source is somehow suspicious, or it's not good for business." Multi-billion-dollar Chinese companies know better. Bilibili has joined other Chinese technology powerhouses such as ByteDance, TikTok's parent company, and its rival Kuaishou, in joining the Open Invention Network (OIN).

The OIN is the world's largest patent non-aggression consortium. It protects Linux and related open source software and the companies behind them from patent attacks and patent trolls. The OIN recently broadened its scope from core Linux programs and adjacent open source code by expanding its Linux System Definition to other patents such as the Android Open Source Project (AOSP) and the Extended File Allocation Table exFAT file system.

The OIN does this by practicing patent non-aggression in core Linux and related open source technologies by cross-licensing Linux System patents to one another on a royalty-free basis. Patents owned by OIN are similarly licensed royalty-free to any organization that agrees not to assert its patents against the Linux System. Any company can do this by simply signing the OIN license online.

So, why is a company that makes its money from giving its young content creators a platform allying with open source? For the same reason almost all companies rely on open source for their software: it makes good, hard financial sense.

As Wang Hao, Bilibili's VP, explained, "We are committed to opening and sharing technologies and providing positive motivation in the innovation field of playback transmission, interactive entertainment, and cloud-native ecology through open source projects. Linux and open source are important software infrastructures that promote business developments. Our participation in the OIN community demonstrates our consistent and ongoing commitment to shared innovation. In the future, we will also firmly support Linux's open source innovation."



IBM, David Clark Cause Award Saaf Water Top Call for Code Prize | The Weather Channel – Articles from The Weather Channel | weather.com – The Weather…

People collect drinking water from a tanker in Sanjay camp, Chanakyapuri, on July 10, 2021, in New Delhi, India. Call for Code winner Saaf Water created a solution to help identify unsafe drinking water.

An innovative water-quality monitoring device called Saaf Water is the winner of the 2021 Call for Code Global Challenge.

The annual Call for Code competition, now in its fourth year, is a partnership between the David Clark Cause, United Nations Human Rights, the Linux Foundation and IBM, the parent company of weather.com and The Weather Company.

Winners were announced at a ceremony Tuesday night in New York.

Saaf Water will receive $200,000 and support to incubate, test, and deploy their solution from the IBM Service Corps and expert partners in the Call for Code ecosystem, and assistance from The Linux Foundation to open source their application.


Saaf Water is a cellular-enabled water-quality monitoring device designed to be universally compatible with various types of community water pumps. It tracks characteristics such as dissolved solids, turbidity, temperature and pH and uses artificial intelligence to predict when water quality can go bad.

If problems are detected, an onsite visual indicator is triggered to alert those using the system. There is also a dashboard that can be viewed in a web browser, and alerts can be sent through SMS messaging to subscribed users.
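
As a bare-bones illustration of the alerting idea, the Python sketch below compares each reading against guideline ranges and flags anything out of bounds. The thresholds are commonly cited guideline values (for example, pH 6.5 to 8.5), not Saaf Water's actual model, which the article doesn't publish:

```python
# Illustrative guideline ranges -- assumptions for this sketch, not
# Saaf Water's published parameters.
LIMITS = {
    "ph": (6.5, 8.5),
    "turbidity_ntu": (0.0, 5.0),
    "dissolved_solids_ppm": (0.0, 500.0),
}

def check_reading(reading):
    """Return the names of any characteristics outside their safe range."""
    return [name for name, (lo, hi) in LIMITS.items()
            if not lo <= reading[name] <= hi]

alerts = check_reading({"ph": 5.9, "turbidity_ntu": 7.2,
                        "dissolved_solids_ppm": 410.0})
print("ALERT:" if alerts else "ok", alerts)  # ALERT: ['ph', 'turbidity_ntu']
```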

"The groundwater quality monitoring tool developed by Saaf Water is promising, timely, and appears to have great potential for use by communities relying on groundwater for domestic use," Annapurna Vancheswaran, managing director of The Nature Conservancy India, said in a news release. "This open-source technology could help avoid water-related health risks by indicating unsafe water quality. We certainly look forward to the tool being scaled up for the benefit of communities."

The team behind Saaf Water.

Contaminated drinking water is estimated to contribute to at least 480,000 deaths a year worldwide and at least 2 billion people use a contaminated water source, according to the World Health Organization.

"Saaf" is a Hindi word for "clean," and the team behind it took inspiration from their native India and their own lives. Team member Hrishikesh Bhandari's mother became ill after drinking water from her village's groundwater supply, which was assumed to be safe. The other Saaf team members all have friends or family members impacted by contaminated water.

Fittingly, this year's Call for Code Global Challenge was launched on World Water Day.


Saaf Water was one of five finalists in the competition. Green Farm, a platform designed to connect small farmers with consumers and help solve problems faced by community-supported agricultural organizations, was awarded second place and $25,000; Project Scavenger, an app that helps users safely dispose of e-waste, received third place and $25,000; Honestly, a browser extension that alerts users to things like bad press about a brand they are shopping with, provides relevant ratings aggregated from outside sources, and lists carbon footprint and supply chain data, took fourth place and earned $10,000; and Plenti, an app to help prevent food waste at home, was awarded fifth place and $10,000.

Trashtag, a technology to verify, track, and reward waste removal in outdoor areas, took the top prize in the university category and will receive $10,000 as well as an invitation for team members to interview for potential roles at IBM.

To date, more than 20,000 Call for Code applications have been built using open source-powered software, with more than 500,000 developers and problem solvers participating across 180 nations.

"What makes Call for Code unique is the impact it is making on the ground through our deployments in communities around the world," Bob Lord, senior vice president of worldwide ecosystems for IBM, said in a news release. "The potential of these technologies, like Saaf Water, is vast, and they have the potential to help save lives."

Visit IBM Developer to learn more about Call for Code.

The Weather Company's primary journalistic mission is to report on breaking weather news, the environment and the importance of science to our lives. This story does not necessarily represent the position of our parent company, IBM.


Pennacchio Introduces Bill Promoting Election Transparency and Integrity – InsiderNJ


Legislation Requires Open-Source Paper Ballot Voting Systems

To restore the public's confidence, Senator Joe Pennacchio introduced legislation that would increase the transparency and reliability of elections in the state.

"Elections are the cornerstone of our democracy. Recently, the public has begun questioning the accuracy and security of the election process," said Pennacchio (R-26). "People have the right to demand elections that are fair and honest, and this bill would help restore faith in the process."

Pennacchio's bill, S-4162, would require paper ballots for in-person voters and mandate the use of open-source code for the software controlling optical scanners used to record the votes.

"Similar bills are under consideration in state legislatures across the country. Even elected officials in California, where the Secure the Vote Act would require open-source paper voting, realize our democracy is threatened when residents question the validity of elections," Pennacchio said.

"We want to ensure transparency of the mechanisms of voting software and hardware. Currently, the proprietary process is cloaked in secrecy, and neither the voting public nor the media have access to any preliminary data," said Pennacchio. "People want to know that their vote will be counted, and that they don't have to worry about vote tampering, or any other interference."

The Senator's legislation further addresses the public's distrust by requiring open-source coding for the software controlling scanning equipment and other gear.

The coding used in commercial voting booths is proprietary and can obscure vulnerabilities that hackers may exploit. The code in open-source software, on the other hand, is accessible to the large community of developers who can uncover weaknesses in the system and, more importantly, can create transparency across the entire system.

"Requiring the use of open-source coding will allow developers and coding experts to comb through the programs and identify flaws and security vulnerabilities," said Pennacchio. "It will increase oversight and public confidence in the process."

In 2019, Switzerland was prepared to roll out a new voting system prior to a national election. When the private source code was leaked, however, researchers detected numerous weaknesses that went undiscovered by a team of professional auditors.

"Utilizing this unusual pairing of old-school process and evolving technology can help ensure the accuracy and reliability of election results and preserve our democracy for many generations to come," Pennacchio said.

