The Most Comprehensive Guide to White Box Penetration Testing – Security Boulevard

The ultimate objective of any software developer is to create performant, secure, and usable applications. Realizing this goal requires every application to be tested thoroughly.

Testing is therefore a critical aspect of creating robust applications. It's what ensures the developed software meets the desired quality expectations.

This blog examines one of the vital testing methods: white box penetration testing. We'll review the following:

What Is White Box Penetration Testing?

How Is White Box Pen Testing Performed?

Types of White Box Pen Testing

White Box Pen Testing Techniques

White Box Pen Testing Tools

Advantages of White Box Pen Testing

Disadvantages of White Box Pen Testing

Differences Between White Box and Other Types of Pen Testing

Penetration testing, also referred to as ethical hacking or pen testing, is the process of performing an authorized attack on a system to identify security weaknesses or vulnerabilities.

White box is a type of penetration testing that assesses an application's internal working structure and identifies its potential security loopholes. The term white box is used because the tester can see through the program's outer covering (or box) into its inner structure. It's also called glass box pen testing, code-based pen testing, transparent box pen testing, open box pen testing, or clear box pen testing.

In this type of testing, the ethical hacker has full disclosure of the application's internal configurations, including source code, IP addresses, diagrams, and network protocols. White box pen testing aims to simulate a malicious intruder who has full familiarity with the target system's internal structure.

White box testing has three basic steps: prepare for testing, create and execute tests, and create the final report.

Preparation is the first step in the white box penetration testing technique. It involves learning and understanding the internal workings of the target application.

Performing successful white box testing requires the pen tester to have an in-depth knowledge of the inner functionalities powering the application. This way, it'll be easier to create test cases to uncover security loopholes in the target software.

In this preparation phase, the tester becomes acquainted with the application's source code, the programming language used to create it, and the tools used to deploy it.

After understanding how the application works internally, the pen tester then creates tests and executes them.

In this stage, the tester runs test cases that assess the software's source code for the existence of any anomalies. The tester may write scripts to test the application manually, use testing tools for performing automated tests, or use other testing methods.

In the last stage, the pen tester creates a report that communicates the results of the entire testing process. The report should be easy to understand, give a detailed description of the testing activity, and summarize the outputs of the testing tasks.

Creating the final report justifies the steps and strategies used, allows the team to analyze and improve the efficiency of the testing process, and provides a document for future reference.

There are several white box testing types that can be used to assess the internal functionalities of an application and reveal any security weaknesses.

The main ones include the following:

Unit testing. The individual units or components of the application's source code are tested. It aims to validate whether each unit of the application behaves as desired. This type of white box testing is essential in identifying security anomalies early in the software development life cycle. Defects discovered during unit testing are easier and cheaper to fix. (A short unit test sketch in Python follows this list.)

Integration testing. This type of open box testing involves combining individual units or components of the application's source code and testing them as a group. The purpose is to expose errors in the interactions of the different interfaces with one another. It takes place after unit testing.

Regression testing. In regression testing, the pen tester performs further tests to verify that a recent change in the application's code has not harmed existing functionalities. The already executed test cases are rerun to confirm that previously created and tested features are working as desired. It verifies that the old code still works even after fixing bugs, adding extra security features, or implementing any changes.
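
To make the unit testing idea concrete, here is a minimal sketch using Python's built-in unittest (PyUnit) framework. The sanitize_username function and its rules are hypothetical stand-ins for application code a white box tester would actually be reading:

```python
import re
import unittest


def sanitize_username(raw: str) -> str:
    """Hypothetical unit under test: keep only letters, digits, dot, dash, underscore."""
    cleaned = re.sub(r"[^A-Za-z0-9._-]", "", raw)
    if not cleaned:
        raise ValueError("username is empty after sanitization")
    return cleaned


class SanitizeUsernameTests(unittest.TestCase):
    def test_strips_markup_characters(self):
        # A unit-level check against a classic injection-style payload.
        self.assertEqual(sanitize_username("bob<script>"), "bobscript")

    def test_rejects_input_with_no_valid_characters(self):
        with self.assertRaises(ValueError):
            sanitize_username("<>!?")


if __name__ == "__main__":
    unittest.main()
```

Because each test exercises a single function in isolation, a failure points directly at the offending unit, which is why defects caught here are the cheapest to fix.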

A major technique for performing white box penetration testing is code coverage.

Code coverage is a metric that gauges the extent to which the source code has been tested. It computes the number of lines of code that have been validated successfully by a test scenario.

This is the formula for calculating it:

Code coverage = (Number of lines of code executed / Total number of lines of code) * 100
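
For example, if a test suite executes 1,100 of an application's 2,000 executable lines, code coverage is (1,100 / 2,000) * 100 = 55%.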

Suppose all your tests are passing with flying colors, but they cover only about 55% of your codebase. Do the test results give you enough confidence?

With code coverage, you can determine the efficiency of the test implementation, quantitatively measure how your code is exercised, and identify the areas of your program not executed by test cases.
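
As a rough illustration of how these numbers can be gathered in practice, the snippet below uses the open source coverage.py package (installed with `pip install coverage`) to measure a trivial, made-up function. In a real engagement you would wrap your actual test suite instead, or simply run `coverage run -m pytest` followed by `coverage report` from the command line:

```python
import coverage


def check_discount(age: int) -> float:
    """Made-up function under test with a single branch."""
    if age < 18:
        return 0.5
    return 1.0


cov = coverage.Coverage(branch=True)  # branch=True also records branch coverage
cov.start()

check_discount(12)  # exercises the under-18 branch
check_discount(40)  # exercises the adult branch

cov.stop()
cov.save()
cov.report(show_missing=True)  # prints per-file statement/branch percentages
```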

There are three main types of white box testing techniques and methods related to code coverage: statement, branch, and function coverage.

Statement coverage is the most basic form of code coverage analysis in white box pen testing. It measures the number of statements executed in an application's source code.

This is the formula for calculating it:

Statement coverage = (Number of statements executed / Total number of statements) * 100

Branch coverage is a white box pen testing technique that measures the number of branches of the control structures that have been executed.

It covers if statements, case statements, and other conditional constructs and loops present in the source code.

For example, in an if statement, branch coverage can determine if both the true and false branches have been exercised.

This is the formula for calculating it:

Branch coverage = (Number of branches executed / Total number of branches) * 100

Function coverage evaluates the number of defined functions that have been called. A pen tester can also provide different input parameters to assess if the logic of the functions could make them vulnerable to attacks.

This is the formula for calculating it:

Function coverage = (Number of functions executed / Total number of functions) * 100

Here are some common open source white box testing tools:

JUnit is a unit testing tool for pen testers using the Java programming language.

HtmlUnit is a Java-based headless browser that allows pen testers to make HTTP calls that simulate the browser functionality programmatically. It's mostly used for performing integration tests on web-based applications atop other unit testing tools like JUnit.

PyUnit is a unit testing tool for pen testers using the Python programming language.

Selenium is a suite of testing tools for automatically validating web applications across various platforms and browsers. It supports a wide range of programming languages, including Python, C#, and JavaScript.
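
For instance, a short browser-level check with Selenium's Python bindings might look like the sketch below; the URL, form field names, and expected page title are placeholders, and a matching WebDriver (here, ChromeDriver) must be installed locally:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a local ChromeDriver
try:
    driver.get("https://example.com/login")  # placeholder target
    driver.find_element(By.NAME, "username").send_keys("tester")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "Dashboard" in driver.title  # placeholder success condition
finally:
    driver.quit()
```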

Benefits of performing code-based penetration testing include the following:

The tests are deep and thorough, which maximizes the tester's efforts.

It allows for code optimization and identification of hidden security issues.

Automating test cases is easier. This greatly reduces the time and costs of running repetitive tests.

Since white box testers are acquainted with the internal workings, the communication overhead between them and developers is reduced.

It offers the ability to identify security threats from the developer's point of view.

Disadvantages of performing code-based penetration testing include the following:

White box testing is time-consuming and demanding because of its rigorous approach to penetration testing.

The tests are not done from the user's perspective, so they may not represent the realistic scenario of an uninformed external attacker.

White box penetration testing is often compared to black box penetration testing. In black box testing, the pen tester does not have a deep understanding of the application's internal structures or workings. The term black box is used because it's difficult to see through the program's outer covering (or box) when it's completely closed.

One major difference between the two testing strategies is that the black box pen tester does not have any prior information about the internal workings of the target system. A black box tester aims to penetrate the system just like an uninformed outside attacker would. Black box penetration testing is suitable when the pen tester wants to imitate an actual external attack scenario.

In penetration testing, white box is a useful approach to simulating the activities of an attacker who has full knowledge of the internal operations of the target system. It allows the pen tester to have exhaustive access to all the details about the system. This enables the pen tester to identify as many vulnerabilities as possible.

Of course, in some situations, you may opt for other pen testing methods, such as black box testing, to assume the stance of an uninformed outside attacker.

*** This is a Security Bloggers Network syndicated blog from Blog WhiteSource authored by Patricia Johnson. Read the original post at: https://resources.whitesourcesoftware.com/blog-whitesource/white-box-penetration-testing

Training Facial Recognition on Some New Furry Friends: Bears – The New York Times

From 4,675 fully labeled bear faces on DSLR photographs, taken from research and bear-viewing sites at Brooks River, Alaska, and Knight Inlet, they randomly split images into training and testing data sets. Once trained on 3,740 bear faces, deep learning went to work unsupervised, Dr. Clapham said, to see how well it could spot differences between known bears from 935 photographs.

First, the deep learning algorithm finds the bear face using distinctive landmarks like eyes, nose tip, ears and forehead top. Then the app rotates the face to extract, encode and classify facial features.

The system identified bears at an accuracy rate of 84 percent, correctly distinguishing between known bears such as Lucky, Toffee, Flora and Steve.
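
The study's actual model is not reproduced in this article, but the overall "encode, then classify" idea can be sketched generically. The snippet below uses scikit-learn with random placeholder arrays standing in for real bear-face encodings; the split sizes echo the study (3,740 training faces, 935 test faces), while the 10-bear label set and any numbers it prints are purely illustrative:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_bears = 3740, 935, 128, 10

# Placeholder face encodings and identities; a real pipeline would produce
# these from detected, rotated, and encoded bear faces.
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, n_bears, size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, n_bears, size=n_test)

clf = SVC(kernel="linear").fit(X_train, y_train)  # classify encodings by individual
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.1%}")
```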

But how does it actually tell those bears apart? "Before the era of deep learning, we tried to imagine how humans perceive faces and how we distinguish individuals," said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology, in Germany, who was not involved in the study but has collaborated with Dr. Clapham in the past. Programmers would manually input face descriptors into a computer.

But with deep learning, programmers input the images into a neural network that figures out how best to identify individuals. "The network itself extracts the features," Dr. Loos said, "which is a huge advantage."

He also cautioned that, "It's basically a black box. You don't know what it's doing," and that if the data set being examined is unintentionally biased, certain errors can emerge.

Re:Earth, a web service that allows anyone to publish their archives digitally, has been launched! – GlobeNewswire

TOKYO, JAPAN, Nov. 12, 2020 (GLOBE NEWSWIRE) -- Re:Earth supports a wide variety of visualization techniques and digital archiving for companies, NPOs, local governments, museums, art museums, and more. It can also be used as a visualization tool for activities working towards the SDGs!

Eukarya Inc. has released Re:Earth, a service that allows users to create and publish digital archives with location information without the need for coding.

Re:Earth

Official website: https://reearth.io/

Price: please contact info@eukarya.io for more information.

1. The 4 Key Features of Re:Earth

(1) No need for coding

You can easily create digital archives and publish them to the public without programming. All you need is your browser. Even if your organization has no engineers, or not enough of them, you can easily digitize your documents and data.

(2) Storytelling feature

This allows material to be viewed in a sequential fashion, as if the viewer were in an actual exhibition room. It also creates the chance for archivists to present the information they want to share in an easy-to-digest manner.

(3) Location information

Simply drop a pin on the digital globe and it will give you location information. It is ideal for introducing cross-border activities or materials, as well as conveying information that is closely related to location. This could be the distribution of flora and fauna, humanities or commerce activity, or even the history of migration, all presentable in an easy-to-understand manner.

(4) Import / Export

It is possible to import existing data and export data registered in the archive (CSV, KML, etc.). It is not only for archiving, but also for further development of activities and deepening of research.

2. Product Background

We have been in charge of many digital archive production projects (planning, design, development and operation).

Over time we found that the hurdle to create digital archives is getting higher due to the necessity of coding and engineers. We want to make it faster and far easier for users to create digital archives by themselves.

"Re:Earth" will start as an enterprise application for now, but in the future we plan to diversify the content of the application to include a basic application for the consumer market.

We are also planning to eventually open the Re:Earth system up to the public as OSS (Open Source Software). Starting with our own developers' home countries of Japan, China, Canada and Syria, we hope to form a worldwide OSS community with engineers based around Re:Earth.

3. Usage examples

(1) General companies:

Use digital archives to promote their social activities!

e.g. SDGs activities and fair trade can be visualized on the web by using GIS (Geographic Information System) technology and the digital globe. This is perfect for corporate branding.

(2) Local governments and tourism associations:

Digital maps of seasonal festivities and tourism information!

e.g. Users can edit the information directly from the UI, so you can create digital maps and archives with greater immediacy that can attract customers as well as share important information, such as details on congestion, specialty products, seasonal scenery, etc.

(3) Museums and archives:

Integrate with modern technology trends as the coronavirus pandemic pushes more and more activities to go digital!

e.g. Using the storytelling function allows for the creation of unique online exhibitions, digital outreach programs, and more.

4. Contract Details & Options

We make use of the knowledge we have accumulated in the production of archives to provide a wide range of support for our customers, from problem analysis, strategy and concept planning, to design and operation.

For more information, please feel free to contact us through the contact form on the official website or by email.

(1) Service

Basic Pack
- Meeting (general explanation and Q&A)
- Re:Earth demonstration
- Re:Earth account issuance
- Maintenance and Operation Support
- Technical Support

Optional Services

- Creation of a digital archive production plan
- Collection of materials and compilation of databases
- Design of the digital archive (logo, icons, etc.)
- Development of Re:Earth plug-ins (SNS integration, AR support, etc.)

(2) Contracted

After signing the contract we will issue you a Re:Earth account and show you how to use the service.

(3) Support

- When you first set up the service, we will give you free training on how to use it.

*Using remote meeting tools such as Zoom and Google Meet.
**Please contact us if this is difficult for security reasons.

- During the contract period, our engineers will support you via email or business chat tools such as Slack.

(4) Price

It depends on the scale of the project and the scope of our support.

Please feel free to contact us for more information.

(5) Official Website

Re:Earth Official Website

https://reearth.io/

A sample project made in Re:Earth

https://earth.reearth.io/

5. Service Operator

Eukarya Inc.

Location: Yebisu Garden Place 27F COREBISU, 4-20-3 Ebisu, Shibuya-ku, Tokyo, JAPAN

Representative: Kenya Tamura, CEO

Established: 24 July 2017

URL:https://eukarya.io/

Email:info@eukarya.io

Phone: +81-90-6063-6784

Microsoft: New VS Code update is out plus here’s what GitHub Codespaces will cost – ZDNet

Microsoft has released a new update of its Visual Studio Code (VS Code) code editor for Windows, Windows on Arm, macOS and Linux.

The latest update brings VS Code to version 1.51, which contains fixes for "housekeeping GitHub issues" that have emerged since GitHub Codespaces was released.

Microsoft in September aired its plans to kill off Visual Studio Codespaces, the rebranded version of Visual Studio Online, and merge it with GitHub's take on the online code-editing service, GitHub Codespaces.

Microsoft opted to consolidate Visual Studio Codespaces with GitHub Codespaces to "eliminate confusion, simplify the experience for everyone, and make more rapid progress to address customer feedback".

Visual Studio Codespaces users have until February 2021 to move to GitHub Codespaces. After that, the Visual Studio Codespaces offering on Azure will end.

The VS Code team said it has "worked with our partners at GitHub on GitHub Codespaces, which ended up being more involved than originally anticipated".

The team says it will continue working on GitHub housekeeping for part of the November iteration of VS Code.

Microsoft unveiled GitHub Codespaces in May, offering developers a cloud-hosted development environment that launches quickly inside GitHub so that developers can start contributing to projects immediately.

It offers developers a containerized, browser-based version of the VS Code editor, but developers can also opt to use their desktop IDEs instead to start a codespace in GitHub and connect to it from their desktops via VS Code.

Codespaces in GitHub supports VS Code's code completion and navigation, extensions, and terminal access.

GitHub Codespaces is still in a limited public beta. It's described as an "integrated development environment (IDE) on GitHub". During the beta phase, GitHub Codespaces is free to use. However, when it becomes generally available, users will be billed for storage and compute resources.

GitHub has now listed pricing details that will apply when GitHub Codespaces reaches general availability.

The 'basic' Linux package with two CPU cores, 4GB of RAM, and 32GB of SSD storage costs $0.085 per hour. The 'standard' option with four CPU cores, 8GB of RAM, and 32GB of SSD costs $0.169 per hour, while the 'premium' option with eight CPU cores, 16GB of RAM, and 32GB of SSD costs $0.339 per hour.

Additionally, each codespace incurs monthly storage costs until users delete the codespace. Storage costs for all instance types are $0.10 per GB per month.
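
As a back-of-the-envelope illustration of how those rates combine, the short calculation below assumes a hypothetical 60 active hours per month and the 32GB of storage quoted for each tier; the usage figures are assumptions, not GitHub's numbers:

```python
# Listed general-availability prices (USD).
HOURLY_RATE = {"basic": 0.085, "standard": 0.169, "premium": 0.339}
STORAGE_PER_GB_MONTH = 0.10

hours_used = 60   # assumed active hours in a month
storage_gb = 32   # SSD size quoted for all three instance types

for tier, rate in HOURLY_RATE.items():
    total = hours_used * rate + storage_gb * STORAGE_PER_GB_MONTH
    print(f"{tier}: ${total:.2f} per month")
```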

The proposed pricing is consistent with the recently discounted prices of instances for Visual Studio Codespaces in Azure, which were basically halved.

Besides GitHub housekeeping, Microsoft has introduced more prominent pinned tabs for the VS Code workbench, a custom hover for extension trees, and the ability to install a VS Code extension without synchronizing it while settings sync is enabled.

Users can also now move the cursor while suggestions are showing, allowing users to trigger suggestions at the end of a word, move left to see more suggestions, and then use replace to overwrite the word.

Cyber Actors Stole Source Code Of U.S. Government Agencies And Businesses: FBI – Mashable India

The U.S. Federal Bureau of Investigation (FBI) issued a security alert warning that hackers have stolen source code from U.S. government agencies and private businesses. These hackers are abusing misconfigured SonarQube applications to access sensitive data, reports ZDNet.

The alert was issued by the FBI back in October and states that unidentified threat actors have been actively targeting vulnerable SonarQube applications since April 2020. The attack is being conducted with the purpose of gaining access to source code repositories of U.S. government agencies and private businesses in the technology, finance, retail, food, eCommerce, and manufacturing sectors.

It further states that these hackers exploit known configuration vulnerabilities that give them access to proprietary code. For the uninitiated, SonarQube is an open-source automatic code review tool that detects bugs and security vulnerabilities in source code. The FBI states that these unidentified threat actors leaked internal data from two organizations.

The stolen data was sourced from SonarQube instances that used default port settings and admin credentials running on the affected organizations' networks, states the FBI.

The FBI has also listed a slew of mitigations for organizations to protect themselves from these threats, including changing the SonarQube default settings, changing the default administrator username, password, and port (9000), placing SonarQube instances behind a login screen, and so on.
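
As a rough sketch of how an organization might audit its own instance for the first of those issues, the script below probes a placeholder SonarQube host on the default port 9000 with the factory-default admin/admin credentials. The /api/authentication/validate endpoint is taken from SonarQube's public Web API documentation, but verify it against your own version before relying on this check:

```python
import requests

SONARQUBE_URL = "http://sonarqube.example.internal:9000"  # placeholder host

resp = requests.get(
    f"{SONARQUBE_URL}/api/authentication/validate",
    auth=("admin", "admin"),  # factory-default credentials the FBI alert warns about
    timeout=10,
)
if resp.ok and resp.json().get("valid"):
    print("WARNING: instance still accepts the default admin credentials")
else:
    print("Default credentials rejected (or endpoint not reachable as expected)")
```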

You Don’t Have to Be a Moodle Expert to Make the Most Out of Your Moodle – Moodle

Video Conferencing

BigBlueButton is an open source web conferencing system that supports real-time sharing of slides (including a whiteboard), audio, video, chat, breakout rooms and more, and is available as a plugin to Moodle.

Another option is Zoom, a cloud-based video conferencing solution that can be integrated with Moodle using the Zoom meeting plugin or via an LTI.

Gamification

Gamifying your courses can help you keep your learners motivated and engaged during their learning journey. From literally turning your quiz into a video game, where answers to questions come down as spaceships for learners to hit the correct one, to rewarding your students with coins or items to collect, there are many options to bring gamification into Moodle courses.

The options to extend your Moodle's functionality are many. You can start by checking out Moodle's Certified Integrations for add-ons that work seamlessly with Moodle, and also the community's favorite plugins for more inspiration on how you can transform your Moodle.

It is also worth noting that with Moodle you own your site data, meaning that the configurations, the integrations, and all your course content are yours to do with as you please, which is important when you think of future-proofing your learning platform.

With open-source Moodle, you have a number of options when it comes to how you host your site. You can self-host your Moodle instance or work with a Certified Moodle Partner or a number of vendors. Because Moodle is open source and all users own their data, it also makes it easy to change how you host your site if needed.

The benefit of this is that if you are ever dissatisfied with your current instance, need more support, or are just looking for a change, you can bring your Moodle site with you. You don't have to worry about starting from scratch or losing any of the hard work you've already put into configuring your site.

Like we've said, you don't have to be a technical expert to create and set up your Moodle in a way that works for your organisation. In saying that, if you need extra help with configuration or with certain aspects key to keeping your Moodle site running smoothly (taking care of hosting, maintenance, updates), we have a network of Certified Moodle Partners that can help.

Beyond customisation, our Certified Partner network is there to support you with a range of Moodle services to help you meet your learning goals. Working with a Certified Moodle Partner ensures your Moodle is configured to meet your needs and budget, while taking the heavy lifting off your shoulders, so you have more time to focus on what you do best.

As Moodle is an open source solution, there are many companies that provide Moodle-based online learning solutions. However, only Certified Moodle Partners have our seal of approval and a guarantee for excellent solutions and services, and ensure you are actively contributing back to our open-source Moodle project.

If you're ready to begin with Moodle LMS, learn how to get started on our website, and chat with other Moodlers in our community forums.

Our worldwide network of Certified Moodle Partners can also help you with customised LMS hosting, maintenance and training.

GCHQ takes action against Russian COVID-19 vaccine disinformation. Source code theft. OceanLotus has a network of fake sites. – The CyberWire

Britain's GCHQ has gone on the offensive against anti-vaccine propaganda. The Times says the SIGINT agency is using techniques proved against Islamic State online activity against state-sponsored purveyors of vaccine disinformation. It's not a comprehensive rumor-control effort, but operates against state-directed disinformation only, not ordinary grassroots craziness.

The campaign against which GCHQ's efforts are directed is Russian, Engineering and Technology reports. One of the disinformation campaign's central claims seems unlikely to convince anyone: that a COVID-19 vaccine developed in the UK by AstraZeneca and Oxford University is bound to turn anyone who gets it into an ape, because that vaccine used a chimpanzee virus somewhere in its development. According to Reuters, GCHQ is taking down hostile state-linked content and disrupting the communications of the cyberactors responsible. The Week suggests the motive for the disinformation is at least partly commercial: Russia is pushing widespread adoption of two vaccines developed there.

The US FBI last week made public an alert issued on a restricted basis back in October to the effect that unknown actors had exploited insecurely configured instances of the SonarQube code review tool to steal source code from companies and Government agencies. ZDNet summarizes the research into and remediation of the issue.

Volexity researchers report that OceanLotus, the Vietnamese cyberespionage crew also known as APT32, is using an array of bogus Web sites and Facebook pages to attract victims. CyberScoop notes that OceanLotus has, since its discovery in 2017, been particularly active against foreign corporations doing business in Vietnam.

IBM Food Trust Delivers Traceability, Quality Assurance to Major Olive Oil Brands with Blockchain – cnweekly

ARMONK, N.Y., Nov. 11, 2020 /PRNewswire/ -- IBM (NYSE: IBM) and olive oil producers Conde de Benalua, a cooperative in Spain made up of more than 2,000 farmers, and Rolar de Cuyo, an olive oil supplier in Argentina, today announced they are using IBM Food Trust on IBM Cloud to trace the lifecycle of their product and provide traceability, authenticity and quality for consumers. They join CHO, a Tunisia-based producer that makes Terra Delyssa brand olive oil, and I Potti de Fratini, a family-run oil mill in Italy, which joined IBM Food Trust earlier in 2020.

Using blockchain technology, these companies from around the world are promoting greater consumer trust in their olive oil and working to create a more efficient and transparent supply chain.

Consumers' demand for transparency and general distrust have been driven by recent reports of olive oil counterfeits and adulteration. That trend is reflected in a broader context, according to a recent IBM Institute for Business Value study, which found that 73% of consumers will pay a premium for full transparency into the products they buy.

"Our mission is to provide customers quality olive oil so they can enjoy a genuine and healthy product. Rolar de Cuyo's objective in using blockchain technology is to ensure olive oil packers worldwide trust us and choose us. IBM blockchain technology provides the transparency we need to trace the origin of our products, complying with all quality processes to reach consumers' tables," said Guillermo Jos Albornoz, Rolar de Cuyo director.

IBM Food Trust uses IBM Blockchain technology and IBM Cloud to close the information gap for customers. By scanning a QR code on each bottle of olive oil, consumers can trace its production from the groves where the olives were grown, to the mills where they were processed into oil, to the stores where it is sold. They can see images of where the olives were picked and pressed, get to know the farmers and workers behind the scenes, and even review what criteria were met for the oil in each bottle. For example, the tracing will show whether the olives were processed to the standards required to be labeled extra virgin olive oil.

On the production side, members of the supply chain can work together with greater confidence and efficiency, creating a permanent digital record of transactions that can be easily shared with permissioned parties. This data within IBM Food Trust can also be used to help ensure the freshness of food, control storage times and reduce waste.

"Our Terra Delyssa brand of premium olive oil has seen a spike in demand since bottles of traceable olive oil reached stores shelves earlier this year. Consumers in the US and Canada can now buy Terra Delyssa premium extra virgin olive oil in more than 10,000 grocery stores and online platforms, with more retailers adding Terra Delyssa's premium, traceable olive oil to their shelves," said Chris Fowler, Sales Manager at CHO America.

The growing demand in early January helped CHO anticipate a spike in sales due to its new consumer traceability app. Supply chains had ample products on store shelves throughout the pandemic, during which time demand rose 30% due to an increase in consumers cooking at home.

CHO is now working on creating a separate enterprise application for distributors and retailers. This app will provide access to in-depth information about each processing and control stage that a certain lot has passed through, including whether it was first cold-pressed, extra virgin or organic, with analysis from CHO's International Olive Council-accredited laboratory and third-party auditors.

"Our continuing work with olive oil producers demonstrates the growing momentum around Food Trust and our commitment to strengthening the chain that connects food from farm to table around the world,"said Raj Rao, general manager, IBM Blockchain Platforms."There's a growing desire among consumers to know where their food comes from and an increased business motivation to optimize processes with better supply insights. We're able to work with olive oil producers and distributors provide a single source of secured and transparent information through IBM Blockchain technology."

For more information on IBM Food Trust, please visit here.

About IBM Blockchain

IBM is recognized as the leading enterprise blockchain provider. The company's research, technical and business experts have broken barriers in transaction processing speeds, developed the most advanced cryptography to secure transactions, and are contributing millions of lines of open source code to advance blockchain for businesses. IBM is the leader in open-source blockchain solutions built for the enterprise. Since 2016, IBM has worked with hundreds of clients across financial services, supply chain, government, retail, digital rights management and healthcare to implement blockchain applications, and operates a number of networks running live and in production. The cloud-based IBM Blockchain Platform delivers the end-to-end capabilities that clients need to quickly activate and successfully develop, operate, govern and secure their own business networks. IBM is an early member of Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies. For more information about IBM Blockchain, visit https://www.ibm.com/blockchain/ or follow us on Twitter at @ibmblockchain.

Media Contact: Anthony Colucci, IBM External Relations, Anthony.colucci@ibm.com

The future of programming languages: What to expect in this new Infrastructure as Code world – TechRepublic

Commentary: New declarative programming languages like HCL and Polar might just be the perfect way to boost productivity with IaC.

There are a lot of programming languages--over 700, as Wikipedia lists them. And yet, we arguably don't have nearly enough programming languages. Not since cloud upended the way applications get built.

Developers are moving away from managing physical servers to calling APIs that touch storage, compute, and networking resources. In turn, developers are trying to automate everything as code through static configurations, scripts, and files. Such automation would be easier if developers had programming languages that matched the task at hand, but they don't. So, using a general purpose language like Java, a developer might invest thousands of lines of code to try to express business logic...and mostly fail.

To solve for this, we're seeing companies like HashiCorp (HCL) and oso (Polar) release special-purpose declarative languages. Even at the risk of programming language proliferation, this feels like the right way forward: Purpose-built instead of general-purpose languages. However, we're likely to see many of these programming languages rise and fall before we settle into a useful set of standard declarative languages.

The irony is that the "novel" approach taken by special-purpose declarative languages really isn't very novel. Years ago, programming languages split between functional (declarative) programming languages like Lisp and imperative programming languages like C. While the latter dominated for decades, functional declarative languages are making a comeback, Jared Rosoff, a software executive who has built product at VMware, MongoDB, and more, said in an interview.

"Imperative languages were better suited to encoding business logic for apps," Rosoff noted. "But in Infrastructure as Code [IoC], the world isn't imperative. It's rule-driven. And this world gets much easier when we change out the languages we use to program it."

Even Polar, a declarative logic programming language specialized for making authorization decisions and tightly integrating with an application's native language, really isn't new. As Sam Scott, cofounder and CTO of oso, suggested in an interview, Polar has its roots in Prolog, which was developed way back in 1972, yet has the feel of imperative languages like Python. (Here's an example of what Polar looks like.) This is important because it's difficult to encode authorization logic in traditional, general-purpose programming languages. Doing so in a declarative language like Polar is more expressive and concise--think "tens of lines of code" instead of "thousands of lines of code."

And yet, many will question whether creating new programming languages is the right approach. How many do we really need? The short answer is "more." Here's the longer answer.

While we still use COBOL and other older programming languages, we also keep inventing new languages, each with its own advantages and disadvantages. For example, we have Rust and C++ for low-level, performance-sensitive systems programming (with Rust adding the benefit of safety); Python and R for machine learning, data manipulation, and more; and so on. Different tools for different needs.

But as we move into this Everything-as-Code world, why can't we just keep using the same programming languages? After all, wouldn't it be better to use the Ruby you know (with all its built-in tooling) rather than starting from scratch?

The answer is "no," as Graham Neray, cofounder and CEO of oso, told me. Why? Because there is often a "mismatch between the language and the purpose." These general-purpose, imperative languages "were designed for people to build apps and scripts from the ground up, as opposed to defining configurations, policies, etc."

Further, mixing declarative tools with an imperative language doesn't make things any easier to debug. Consider Pulumi, which bills itself as an "open source infrastructure-as-code SDK [that] enables you to create, deploy, and manage infrastructure on any cloud, using your favorite languages." Sounds awesome, right?

Unfortunately, while the program may be executed, this is simply used to build a data structure for Pulumi to feed into its engine, which operates in a more declarative way (i.e., take the data structure, diff it with the current infrastructure state, and apply changes). Although existing language tools exist (e.g., JavaScript debuggers), they aren't very useful because debugging Pulumi would require an intimate knowledge of that codebase. The Pulumi engine is still very opaque and tough to debug. This isn't a critique of Pulumi--it's just indicative of the problems inherent in trying to apply existing, imperative languages to Everything-as-Code.
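
To see why the engine, rather than the host language, carries the real logic, here is a toy illustration of the declarative pattern described above: the user only states the desired resources, and a small reconcile step diffs that declaration against current state and decides what to create, update, or delete. The resource names and dictionaries are purely illustrative and have nothing to do with Pulumi's actual internals:

```python
desired = {"web-server": {"size": "t3.small"}, "db": {"size": "t3.medium"}}
current = {"web-server": {"size": "t3.micro"}, "old-cache": {"size": "t3.micro"}}


def reconcile(desired: dict, current: dict) -> None:
    """Print the plan needed to move current state to the desired declaration."""
    for name, spec in desired.items():
        if name not in current:
            print(f"create {name} with {spec}")
        elif current[name] != spec:
            print(f"update {name}: {current[name]} -> {spec}")
    for name in current:
        if name not in desired:
            print(f"delete {name}")


reconcile(desired, current)
```

Debugging such a system means understanding the engine's diff rules, not stepping through the user's program, which is exactly the opacity issue described above.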

The same problem crops up when trying to skirt the issue with data as config. This is a bit like using an existing language (Hey! I already know JSON). As Scott explained, to make this approach work, a vendor typically needs to dress up the data format with conditions or custom rules (e.g., GitHub Actions) to make it work for the use case. Or maybe they use templating (e.g., Helm or how Ansible uses Jinja2). Plus, while the appeal often starts with the data format being human readable, the "files have a nasty habit of getting long and unwieldy," he said, leading to posts like this and this and this and this.

This brings us back to declarative programming languages.

Declarative languages like Polar and HCL are great for use cases like configuration because they allow you to just declare what you want the world to look like and not have to worry about what you need to do to make that happen. The downside is that it's new: New learning curve, new need to build out an ecosystem of tools around it, etc. It's still early for declarative programming languages, but that's ok--it's also still early for our Everything-as-Code world.

And while declarative programming languages aren't perfect, they offer significant benefits over imperative programming languages, as iRobot's Ben Kehoe called out. Over the next few years, I suspect we'll see declarative programming languages proliferate, with the industry standardizing around those that do best at making themselves accessible to newbies through tooling and approachability (e.g., embracing a familiar syntax). If "developers are the new kingmakers," it's time for the declarative programming language designers to start crowning some new kings.

Disclosure: I work for AWS, but the views expressed herein are mine.

Open Source: The IIoT Security You’re Looking For? | RFID JOURNAL – RFID Journal

Nov. 8, 2020

As the Industrial Internet of Things (IIoT) market continues to mature, new devices flood onto networks that also contain a host of legacy and early-generation devices. This combination is increasing the complexity of network traffic, as well as raising integration questions, forcing enterprises across the spectrum to reappraise the best security approaches, with open-source solutions increasingly coming to the fore.

IoT security has become one of the hot topics of today, with a Gartner report predicting a total market value of $3.1 billion by 2021. While there is an element of fear, uncertainty and doubt to some of the more doom-laden predictions, the fact is that IIoT security presents some significant challenges.

OT Plus IT: A Heady Mix

In just one example, a study from Trend Micro, in association with Politecnico di Milano, conducted in its Industry 4.0 lab, has identified a variety of methods by which attackers are able to leverage unconventional new attack vectors to sabotage smart manufacturing environments. The security firm highlights two key problems. Firstly, IIoT systems were originally designed to be isolated from traditional IT infrastructure, so network trust is high and there are few integrity checks. Secondly, many IIoT platforms utilize proprietary languages that, while more niche than widespread languages, can still be effectively exploited to input malicious code, traverse through the network or steal confidential information.

That increasing erosion of IIoT isolation is, indeed, at the heart of the next wave of IIoT security concerns. As OT and IT systems are integrated more widely, those underlying security issues will be enhanced. There is also a significant issue with regard to legacy systems: the simple fact is that many pilot projects and early-adopter enterprises did not have security at the forefront of their thinking.

LoRaWAN: Pros and Cons

The LoRaWAN protocol has been widely deployed across the globe in applications ranging from IIoT climate-control systems to smart meters and asset tracking. As a non-cellular protocol, it has been popular; there are approximately 142 countries with LoRaWAN deployments and 121 network operators in 58 countries, with around 100 million LoRaWAN-connected devices online, a figure projected to hit 730 million or more by 2023.

However, a recent study released by IOActive found that the root keys used for encrypting communications between LoRaWAN smart devices, gateways and network servers are often poorly protected and easily obtainable through several common hacking methods. The researchers found that many deployments simply used default keys in their enthusiasm to test out the technology, leaving the door open.

Moreover, another core issue with LoRaWAN is managing security revisions, a particularly problematic question throughout the IIoT due to power limitations and access difficulties. In the case of LoRaWAN, 1.0.3 devices can't be updated to version 1.1 due to hardware limitations, locking an entire generation of devices into outdated software. This is something that hackers are well aware of and know how to exploit.

Limitations of the PLC

Another specific battleground is the industrial programmable logic controller (PLC), which has been a core part of industrial automation applications for decades. These were never built with security in mind, creating the difficult scenario of either updating the PLCs, creating open-source gateways to secure them or replacing them with custom IIoT devices.

Any of these options requires in-house developers or a third-party systems integrator to build something bespoke, that "something" being reliant on a wide range of software libraries used to program the devices. The gateway route has been explored by developers using the open-source Apache Mynewt, Apache's first RTOS built for systems too small to run Linux.

Open-Sourced Trust?

Of course, open-source technology is not entirely invulnerable to security flaws and vulnerabilities, as demonstrated by the recent Heartbleed security bug affecting OpenSSL. However, the open-source community is taking the initiative in many ways, perhaps most visibly in the shape of Project Alvarium. Set up by the Linux Foundation in October 2019, Alvarium is dedicated to building a data confidence fabric (DCF) to facilitate trust and confidence in data and applications spanning IIoT/IoT and traditional IT systems. The game plan is to collaborate on the baseline open-source framework and related APIs that bind together the various ingredients that constitute trust fabrics, as well as to define the algorithms that drive confidence scores.

The idea of introducing and quantifying trust in IIoT networks is not entirely new, but it does potentially offer a more scalable and robust solution than traditional IT approaches. Another leading light in developing IIoT trust frameworks is, of course, blockchain stalwart IOTA, which has been pushing the adoption of its distributed ledger technology (DLT) for some years. Recent announcements include collaborating on the E.U.-funded Dig_IT project to use DLT for increasing sustainability (via the IIoT) in the mining industry, as well as joining the Eclipse Open-Source Foundation.

Future Values

Of course, the road of open source is littered with failures, as well as notable successes, and whether Project Alvarium and IOTA will thrive and prosper remains to be seen. However, it's increasingly clear that traditional IT-style approaches to IIoT security are not able to scale cost-effectively, and new approaches will be required as the sheer volume of devices and applications continues to increase exponentially. Open source also has the major inbuilt requirement of good collaboration between enterprises, a critical element in cementing the future of the IIoT.

Martin Keenan is the technical director at Avnet Abacus, which assists and informs design engineers in the latest technological challenges, including designing for Industry 4.0 and Industrial IoT manufacturing.
