Scribe Security Releases Tools for Integrity Validation – The New Stack

From hijacked updates to compromised open source code, software supply chain attacks don't seem to be slowing down. Over the course of 2021, 62% of organizations faced such attacks. Securing the supply chain can be challenging due to its many components and the numerous opportunities it offers cybercriminals for exploitation. Scribe Security, a cybersecurity company specializing in the software supply chain, is aiming to make security a standard that's easy to uphold.

Scribe is releasing a code integrity validator (Scribe Integrity) that verifies and authenticates proprietary and open source code. For developers, this will provide more transparency for ensuring code doesn't have any malicious components. In an interview with The New Stack, Scribe Security CTO and founder Danny Nebenzahl said, "It's not something in the current toolbox of DevSec. Unfortunately, in many areas, security does not come first."

Paired with Scribe Integrity's release is an open source GitHub security project from the company. GitGat is a free policy-as-code tool whose features allow users to run reports that supply an overarching view of a business's security posture using the OPA (Open Policy Agent) policy manager. Both products are in early stages, but given the state of security in open source software, CEO and co-founder Rubi Arbel says the market is long overdue for these tools. Better security is crucial for open source technology's survival: if people don't trust open source, they won't use it.

According to Nebenzahl, Scribe's approach to securing against open source and supply chain attacks focuses on the artifacts. Regarding code with a never-ending suspicion, Nebenzahl says, "When an artifact is created, we tell it that it's guilty unless it can prove otherwise. At that point, metaphorically, the artifact should collect evidence that will prove its innocence. Along that pipeline, policies can be evaluated."
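The "guilty until proven innocent" idea can be sketched in a few lines: the pipeline records evidence about an artifact at build time, and a later stage re-derives the digest to check that nothing was modified. This is a minimal illustration of the shape of the check, not Scribe's actual API; the function names and evidence fields are invented.

```python
import hashlib
import time

def collect_evidence(artifact_bytes: bytes, builder: str) -> dict:
    """Record a digest plus build metadata when the artifact is created."""
    return {
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "size": len(artifact_bytes),
        "builder": builder,
        "collected_at": time.time(),
    }

def verify_evidence(artifact_bytes: bytes, evidence: dict) -> bool:
    """Re-hash the artifact and compare against the recorded digest."""
    return hashlib.sha256(artifact_bytes).hexdigest() == evidence["sha256"]

artifact = b"compiled binary contents"
evidence = collect_evidence(artifact, builder="ci-runner-1")
print(verify_evidence(artifact, evidence))         # True: untouched
print(verify_evidence(artifact + b"!", evidence))  # False: tampered
```

Real attestation formats (in-toto, SLSA provenance) carry far more context and are cryptographically signed; this only shows the basic comparison.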

What can be classified as evidence? Nebenzahl says it varies. "Integrity of materials and processes, or of the final artifacts: proof that nothing was modified. It could also be things that have to do with processes, like: did the right people sign off on what they needed to? It could have to do with the security of the factory: are the gates locked?" This evidence-collection capability is part of what Scribe calls its bottom-up concept. On the other side is the top-down description, where employees in higher roles can use insights from the data for compliance and other matters.

These insights are what connect the bottom-up and top-down approaches. "The DevSecOps guy is worried about whether the code was modified. The CISO is more worried about 'Did we comply with the SSDF?', which requires integrity and preservation along the pipeline," Nebenzahl said. Arbel agreed. The tool's main goal is to give users a sense of what integrity along the pipeline should look like. He continued, "Suppose you have a Node project. How would verification of the pipeline's integrity look if you had only two points, the beginning and the end, including verifying the open source components?"

The road to release was not easily traveled, Arbel says. Software integrity is inherently a difficult problem, and creating the technology behind Scribe Integrity was filled with roadblocks. The evidence collectors, or sensors as Arbel calls them, were a complex puzzle to solve. "We had to develop sensors whose main task is to collect evidence that isn't being collected by anyone today. It's not just application logs from GitHub or Jenkins; it's a new kind of data. We need to generate the data, collect it, and then transfer it to a secure place where we can run our rule engines on it. And that's the second challenge."

Deciding what is and isn't suspicious isn't always as easy for a machine as one would think. Arbel went on, "Let's say that the data is metadata in the hash in a cryptographically hard signature. So now you've got it, but now you need to decide what is a normal process. What is an anomaly? And when the integrity changes, you need to understand whether a certain specific change is legitimate or not."
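One naive way to frame the decision Arbel describes is as a lookup against recorded baselines: a digest is either unchanged, an approved change, or an anomaly. The sets below are hypothetical stand-ins for signed build and release records; a real engine would evaluate far richer policy.

```python
# Hypothetical baselines: digests recorded at build time, and digests
# that went through an approved release process.
KNOWN_GOOD = {"a1b2c3"}
APPROVED_CHANGES = {"d4e5f6"}

def classify(digest: str) -> str:
    """Label a digest as unchanged, a legitimate change, or an anomaly."""
    if digest in KNOWN_GOOD:
        return "unchanged"
    if digest in APPROVED_CHANGES:
        return "legitimate change"
    return "anomaly"

print(classify("a1b2c3"))  # unchanged
print(classify("d4e5f6"))  # legitimate change
print(classify("9f9f9f"))  # anomaly
```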

Now that Scribe Integrity is ready for public use, Arbel is confident in the uniqueness of the technology. "There is no good technology for software integrity today that we're familiar with, especially one capable of doing it in an automatic way towards pipelines."

The open source bug spread pretty fast. Though it's been an astronomical help in advancing technology, Nebenzahl says security tends to be an afterthought.

"The open source movement, which started from a more volunteer-driven ecosystem, is now more business-oriented, with business-related activities inside. What was driving it at first was community building, and now we are seeing business and technology building," he said.

While he acknowledged that this isn't a bad thing, Nebenzahl says users have to be mindful of the lack of security. "Whoever is building an open source project has not currently committed to any security requirements," he noted. "He's not building a product; he's not giving a service. He's just writing code. The requirements of security and regulation become irrelevant when you start using this technology. However, when it gets to real-world scenarios and into real products, or real companies that are liable, people scratch their heads and say, 'Hey, what about the security of these pieces?'"

Lax security oversight has been the cause of millions of dollars in hacker theft and the nail in the coffin for otherwise strong businesses. The developer community continues to see growth and change in the way code is shared, and it's more necessary than ever to stay vigilant about the software supply chain's security. As the open source community expands and attacks continue, expect to see tools like Scribe Security's at the forefront of the fight.

Pulling security to the left: How to think about security before writing code – TechRepublic

Involving everyone in security, and pushing crucial conversations to the left, will not only better protect your organization but also make the process of writing secure code easier.

Technology has transformed everything from how we run our businesses to how we live our lives. But with that convenience come new threats. High-profile security breaches at companies like Target, Facebook and Equifax are reminders that no one is immune. As technology leaders, we have a responsibility to create a culture where securing digital applications and ecosystems is everyone's responsibility.

One approach to writing, building and deploying secure applications is known as security by design, or SbD. Taking the cloud by storm after the publication of an Amazon white paper in 2015, SbD is still Amazon's recommended framework today for systematically approaching security from the outset. SbD is a security assurance approach that formalizes security design, automates security controls and streamlines auditing. The framework breaks securing an application down into four steps.

Outline your policies and document the controls. Decide what security rules you want to enforce. Know which security controls you inherit from any of the external service providers in your ecosystem and which you own yourself.

As you begin to define the infrastructure that will support your application, record your security requirements as configuration variables and note them on each component.

For example, if your application requires encryption of data at rest, mark any data stores with an "encrypted = true" tag. If you are required to log all authentication activity, then tag your authentication components with "log = true". These tags will keep security top of mind and later inform you of what to templatize.
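As a sketch of the tagging idea (the component names and inventory format here are invented for illustration, not tied to any particular cloud provider), each component carries its security tags, and a simple query surfaces data stores that still lack the required encryption flag:

```python
# Hypothetical component inventory: each piece of infrastructure carries
# the security tags noted during design (encrypted, log, ...).
components = {
    "user-db":   {"type": "data-store", "encrypted": True,  "log": False},
    "auth-api":  {"type": "service",    "encrypted": False, "log": True},
    "file-blob": {"type": "data-store", "encrypted": False, "log": False},
}

def untagged_data_stores(inventory: dict) -> list:
    """Return data stores missing the required encrypted=true tag."""
    return [name for name, cfg in inventory.items()
            if cfg["type"] == "data-store" and not cfg["encrypted"]]

print(untagged_data_stores(components))  # ['file-blob']
```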

Once you know what your security controls are and where they should be applied, you won't want to leave anything to human error. That's where your templates come in. By automating infrastructure as code, you can rest easy knowing the system itself prevents anyone from creating an environment that doesn't adhere to the security rules you've defined. No matter how trivial the configuration may seem, you don't want admins configuring machines by hand, whether in the cloud or on-premises. The scripts you write to make these changes will pay for themselves a thousand times over.

The last step in the security-by-design framework is to define, schedule and perform regular validations of your security controls. This too can be automated in most cases, not just periodically but continuously. The key thing to remember is that you want a system that is always compliant and, as a result, always audit-ready.
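A continuous-validation step might look like the sketch below, assuming a hypothetical component inventory and two of the rules mentioned earlier (encrypt data stores, log authentication components). Run on a schedule or on every change, a check like this is what keeps the system audit-ready:

```python
# Hypothetical rules: every data store must be encrypted, every
# authentication component must log.
RULES = [
    ("data stores encrypted",
     lambda cfg: cfg["type"] != "data-store" or cfg["encrypted"]),
    ("auth components logged",
     lambda cfg: cfg["type"] != "auth" or cfg["log"]),
]

def validate(inventory: dict) -> list:
    """Return (component, rule) pairs that violate the policy."""
    return [(name, rule_name)
            for name, cfg in inventory.items()
            for rule_name, check in RULES
            if not check(cfg)]

inventory = {
    "user-db": {"type": "data-store", "encrypted": True, "log": False},
    "login":   {"type": "auth",       "encrypted": True, "log": False},
}
print(validate(inventory))  # [('login', 'auth components logged')]
```

In practice this role is filled by policy engines and cloud-config scanners rather than hand-rolled scripts, but the always-on evaluation loop is the same.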

When properly executed, the SbD approach provides a number of tangible benefits.

Additionally, whether on-premises or in the cloud, make sure your security policies address the following vectors:

When it comes to the actual application development, be aware of the OWASP Top 10. This is a standard awareness document for developers and web application security, representing a broad consensus about the most critical security risks to web applications. It changes over time, but below we've compiled the 2022 list of top threats.

While it's important for your developers to understand these threats (step one of the SbD process) so that they can identify proper controls and implement them accordingly (steps two and three), it's equally important that the validation activities (step four) are applied during and after the development process. There are a number of commercial and open source tools that can assist with this validation.

The OWASP project keeps an updated list of these tools, and even maintains a few of these open source projects directly. You'll find these tools mostly targeted at a particular technology and the attacks unique to it.

No organization can be truly secure without mitigating the largest risk to security: the users. This is where account best practices come in. By enforcing them, organizations can make sure their users don't inadvertently compromise the overall security of the system. Make sure that, as an organization, you are following best security practices around account management:

In some industries or geographies, you will need to conform to additional security controls. Common ones include PCI for payments and HIPAA for medical records. It's crucial that you do your homework, and if you find yourself subject to any of these additional security requirements, it may be worth contacting a security consultant who specializes in the particular controls needed, as violations often carry stiff fines.

It's important to remember that while organizations are the targets of cyberattacks, the victims are individuals: they are your customers; they are your employees; they are real people who have put their trust in you and your technology. That's why it's paramount that organizations lean into securing applications from the outset.

Reactive security measures will not succeed in today's fast-paced digital environment. Savvy CIOs are taking a proactive approach: pulling security conversations to the left, involving the entire business and embedding best practices in every step of the software development lifecycle.

We test out Bittle, a pet robot dog that will teach you how to code – review – BBC Science Focus Magazine

What is Bittle?

Bittle is a DIY, servo-based robot dog from Petoi, controllable via Bluetooth, infrared and Wi-Fi. It comes disassembled in kit form (although there is a pre-built option available), and once assembled it's remarkably agile, operating via remote control, the app or a number of different programming options (more on that below). You can run demo code before writing your own, and it's a great little tool for learning robotics with the open source software.

The robot is built on Petoi's OpenCat open source platform and features a customised Arduino board to coordinate movements. And, being open source, users can add on different smart sensors, accessories or even AI chips. After playing with Bittle for around a week, I've found it to be a practical and engaging introduction for those keen to get into coding and/or robotics. Not to mention it's great fun to build.

This is the build-it-yourself version of Bittle.

Bittle is a small robot, just a little bigger than your palm, and weighs in at just 290g. It's a versatile little gadget that can detect its orientation and will even right itself, should it end up on its back (and it will). The highly customisable NyBoard (essentially Bittle's motherboard) can support a whole host of additional hardware, including an intelligent camera module, sound sensor, light sensor, touch sensor, PIR sensor and even an OLED display.

Bittle is the result of a crowdfunding campaign on Kickstarter back in 2020, when it smashed the original target of $50,000, raising instead just over $500,000.

Think of servos as tiny motors. But unlike motors that continuously turn over, servos can be precisely manipulated so that you can adjust limbs by the merest fraction. Servos are commonly found in robotics, as they are small, powerful and easy to program. Nine servos are used to actuate Bittle; eight for the walking joints and one for head panning.
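That precision comes from pulse-width modulation: the controller sends a pulse roughly every 20 ms, and the width of the pulse selects the angle. Here is a minimal sketch of the mapping, assuming the common 500-2500 microsecond convention for a 270° servo; Bittle's servos may use different endpoints, so treat the numbers as illustrative.

```python
def angle_to_pulse_us(angle: float,
                      max_angle: float = 270.0,
                      min_us: float = 500.0,
                      max_us: float = 2500.0) -> float:
    """Map a target angle to a PWM pulse width in microseconds.

    Linear mapping over the servo's travel; the 500-2500 us endpoints
    are a common hobby-servo convention, not documented Bittle values.
    """
    if not 0.0 <= angle <= max_angle:
        raise ValueError("angle out of range")
    return min_us + (angle / max_angle) * (max_us - min_us)

print(angle_to_pulse_us(0))    # 500.0
print(angle_to_pulse_us(135))  # 1500.0 (centre of travel)
print(angle_to_pulse_us(270))  # 2500.0
```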

Open source software (OSS) is a type of computer software with source code that can be seen, modified or enhanced by anyone. OSS is non-proprietary software that is publicly accessible, and the code can also be distributed to anyone, for any reason.

Source code is essentially a set of instructions that a programmer writes, and it's usually written in plain text.

Building the robot can be accomplished in as little as 30-60 minutes. However, if you don't fancy putting it together yourself, there's also a pre-built option you can buy, although I found that building Bittle was very much part of the fun.

The design has been modified since the original version was released in 2020. Now, the body chassis comes fully assembled, with the neck already fixed in place. Gone, too, are the seven RGB LEDs on the NyBoard; although useful for debugging, they were covered up (for the most part) by the bodyframe cover anyway. The legs have also had a modification: gone are the days of breaking your fingers trying to get the shock-absorbent springs in place. Instead, the legs only require you to install the servos and secure them with the self-tapping screws.

This, however, brings me to a minor gripe with the design: the included screwdriver is simply too small. It's near impossible to get enough leverage to fasten the screws all the way. And, being self-tapping screws, they require more pressure to drive into the robot. I had success using both a standard-sized Phillips screwdriver and an electric one.

The battery fits easily into the bottom of the bodyframe and slides up and down a pre-cut channel. This is particularly useful for changing batteries or shifting the centre of mass if you decide to opt for the add-on components (as the heavier, chunkier part of the battery is at the front). For reference, the cable comes out of the back of the battery and loops around so that the connector faces the front.

The first step in the construction process is to attach the servos to the lower leg pieces, paying attention to the direction of the cables.

Bittle uses nine P1S micro servos (eight for the walking joints and one for the neck) specially designed for Bittle, plus a spare. Having this many servos drastically increases the versatility of movement that can be achieved, and each has a controllable angle of 270°, which is very welcome in a robot at this price point.

The general construction process begins with slotting four long-cable servos into the lower leg pieces, then fixing them into position with the self-tapping screws; your own screwdriver is recommended. A short-cable servo forms the neck joint, slotting into the head and securing as before. Once the neck servo is in place, the head can be popped on and off the main frame of the robot with ease.

With the battery removed, you can slot the four short-cable servos easily into the bodyframe to create Bittle's shoulders.

Four short-cable servos are used for the shoulders and slot easily into position on the bodyframe. Once you've secured these with the screws, the cables on the legs and head are easy enough to feed through the body. It does get a bit fiddly at this point, as there are lots of wires hanging loose, so it's important to keep them as organised and tidy as possible.

The NyBoard is on the right of this picture, and you can see the infrared sensor that I've pulled out (compare this to the picture above, where the sensor is not pulled out).

There's a small infrared sensor at the rear of the NyBoard, which needs to be pulled down before attaching the board to the body. It's just wire and unfolds easily, although I was careful not to ding it too much.

Make sure that you keep your cables neat so that the NyBoard has room to slot on top.

After that, it's just a case of connecting the servos to the correct pin set on the NyBoard. Once all the servos are connected, it's important to tuck the cables away neatly so that the NyBoard can be screwed onto the pillars.

Bittle with the NyBoard fitted, ready for the upper leg pieces.

The last thing that you'll need to do before calibration is to attach the upper leg pieces to complete the model. You'll need to do this as precisely as possible, and I found I needed to make several adjustments before I was happy.

Before you start making Bittle do tricks, it will need to be calibrated. There are three ways to calibrate Bittle: through the mobile app, through the desktop app or through the Arduino IDE. The easiest (and quickest) way is via the mobile app, and I recommend it if you're pushed for time.

But be warned: whichever method you choose to calibrate Bittle, it's a fiddly process!

I decided to calibrate via the app, which is compatible with both iOS and Android. Plug the blue-coloured Bluetooth adaptor into the six-pin socket on the NyBoard, connect the battery and long-press the button on the back to turn it on. Try not to jump as Bittle suddenly comes to life and, in my case, starts flailing its limbs all over the place.

You'll need to calibrate each of Bittle's joints separately. It's a fiddly process, but the Petoi app will guide you through it.

After it calms down, you'll need to calibrate each joint separately. The aim is to fine-tune the position of each servo so that the limbs move smoothly during operation and the robot doesn't fall over.

Although fiddly (and it does take a while to get right), the process itself is relatively straightforward.

This is the calibration pose. You can see that the front left limb is not perfectly at a 90° angle and needs to be calibrated further.

The app will display an image of Bittle, showing each of the nine servos in position. Select each servo in turn (by tapping on the image) and adjust the angle of that joint using the plus or minus buttons in the app. You're aiming for a perfect 90° angle for each of the legs, and there's an L-shaped calibration tool (essentially a set square) in the box to help you.

Once you think it's calibrated, use the commands to instruct Bittle to stand and rest (lie down). The real test comes when you instruct it to walk. If it doesn't walk straight, walks in a circle, or falls over when walking, then one (or more) of the legs is not at a perfect 90°.

Unfortunately, you've only got your eyes as a way of determining the angle; the app doesn't tell you. Although fiddly, once I was happy with the calibration, I found that I had actually enjoyed the process.

Once you're happy that you don't need to make any further coarse adjustments (taking the limbs off again), you'll need to lock the legs in place using the flat-head screws. After that, you'll need to pop the cable cover off the lower leg piece, before hiding the remaining loose wires inside the leg.

As far as the electronics go, Bittle is powered by the NyBoard V1, a customised Arduino Uno board with sockets for external modules. It can drive up to 12 pulse-width modulation (PWM) servos, and an IMU (inertial measurement unit) is used for balancing. When Bittle is turned on, you can see the IMU in action when you tilt Bittle to one side: his face and limbs will turn towards you!

Like the software itself, the operation of Bittle is flexible. There are multiple methods available to give you control of your new friend, depending on your experience level and the time you have available. For immediate use straight after building (or after taking it out of the box if you bought the pre-built version), it comes with a standard infrared remote control.

Alternatively, you can use the Petoi mobile app or the Petoi desktop app. If you're keen to dive into the world of coding or hone your skills further, it also supports the Arduino IDE (essentially a text editor for writing code) and Python, one of the most popular programming languages (often used to build websites and software, and to automate tasks).

It's also compatible with the Raspberry Pi, a credit card-sized computer that plugs into the NyBoard and allows Bittle to analyse more data and even make decisions by itself. But if you don't fancy writing your own code, you can download demo code from GitHub.
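For a flavour of what driving Bittle from Python can look like, here's a minimal sketch that frames a text command for the serial link. The token "ksit" mimics the shape of OpenCat-style skill commands, but treat both the token and the port settings as assumptions rather than Petoi's documented protocol.

```python
def encode_command(token: str) -> bytes:
    """Frame a text command for the robot's serial link.

    The 'ksit' token below imitates OpenCat-style skill commands;
    it is illustrative, not a documented Bittle command.
    """
    return token.encode("ascii") + b"\n"

if __name__ == "__main__":
    # Actually sending the command needs pyserial and real hardware, e.g.:
    #   import serial
    #   with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    #       port.write(encode_command("ksit"))  # ask Bittle to sit
    print(encode_command("ksit"))
```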

The basic control panel for Bittle in the Petoi app is clean and simple to use.

Using the app, you have full control over the robot, and you can even create your own customised commands. From the off, you can instruct Bittle to step, crawl (very cute), walk or trot. There are two pre-set speed settings, fast and slow, with directional arrows to control Bittle like you would a drone or a remote-control car.

There are also a number of pre-programmed controls, including stand, rest, sit, stretch, say hi, hip up, push up and play dead. You can even make Bittle raise his back leg and pee.

The app is clean and straightforward to use, although you do need to reconnect Bittle every time you turn it on.

Bittle in the stand pose

Bittle in the sit pose

Bittle in the stretch pose

Bittle in the rest pose

The build quality of Bittle is surprisingly sturdy. According to the manufacturers, an assembled Bittle can support the weight of an adult standing on its back, although I'm not quite brave enough to test that particular claim!

Bittle is made with high-strength, injection-moulded 3D interlocking parts, so there are only a handful of screws in the whole robot. As the screws that you install as part of the build process are self-tapping (meaning they cut a thread into the plastic as you're screwing them in), once assembled it's a solid little thing. And it feels it, too.

The tail is made from silicone, which provides a nice little bounce when the robot is in operation, and the NyBoard cover helps add crossbody support.

The exception to this is perhaps the head: the head attachment feels a little flimsy compared to the rest of the robot. However, I've not had any problems yet; time will tell.

Everything considered, I really like Bittle. It's the sort of gadget where the more you use it, and the more you dive into the coding side of things, the more you'll get out of it. Going by the plethora of complaints in the community regarding the difficulty of assembling the legs in the first iteration, the tweaks that Petoi has made to the build-it-yourself model are certainly for the better.

If you're just after a remote-control dog and you're not fussed about coding, then with a £300 price tag Bittle is probably not worth the investment. However, as a STEM learning tool, or for anyone interested in robotics and programming, Bittle is a fantastic little gadget, especially when you consider the variety of additional hardware.

Bittle is advertised for ages 14 and up, and although the build is fairly straightforward for bright children, the coding element makes it better suited to adults or older teen tech enthusiasts.

If you're after cute, it's hard to beat the iconic Vector robot from Anki. Powered by AI and advanced robotics, Vector is alive and engaged by sight, sound and touch. Vector can independently navigate and self-charge, but does require a compatible iOS or Android device, as well as the free Vector app, for setup only.

Vector is a curious and attentive companion, who will answer questions, take pictures, and even time your dinner with the built-in Amazon Alexa.

Using an HD camera, he can identify people, see and remember faces, and navigate his environment without bumping into objects (or pets). He's also got a powerful four-microphone array for directional hearing, and communicates in a unique language made up of hundreds of synthesised sounds.

If you're a fan of the Raspberry Pi, then the Freenove Robot Dog Kit could be a nice open source alternative. Like Bittle, it requires assembly, although the design is somewhat less polished. It's controlled wirelessly by your Android phone or tablet, but the actual Raspberry Pi and battery are sold separately.

The ELEGOO Smart Robot Car V4.0 has the added bonus of a camera, something that Bittle doesn't have as standard. This robot car is another DIY robot that requires assembly and, like Bittle, runs on the Arduino IDE.

It has multiple modes, including auto-go, infrared control, obstacle avoidance and line tracking. In each mode, you learn how to load programs and command the car to run as instructed. It is, however, significantly bigger and heavier than Bittle, measuring 26.3 x 14.5 x 8cm and weighing in at 1,140g, but it costs less than a third of the price of Bittle.

JFrog’s revenue jumps almost 40% as it beats Wall Street’s expectations – SiliconANGLE News

DevOps company JFrog Ltd. delivered solid second-quarter financial results today, beating Wall Street's expectations, but its stock fell slightly in after-hours trading when it offered guidance for the next quarter that was only in line with analysts' targets.

The company reported a net loss of $23.7 million for the period, amounting to a loss before certain costs such as stock compensation of two cents per share. Revenue came to $67.8 million, up 39% from a year earlier. Wall Street had been targeting a loss of three cents per share on sales of $65.5 million.

JFrog is a provider of software developer tools, best known for its open-source binary repository manager Artifactory. The offering is somewhat similar to GitHub, which is used by developers to store their code. But it caters to a different part of the development lifecycle, storing the binary files that are created when engineers compile code into a functioning program.

The JFrog Platform also includes JFrog Pipelines, a continuous integration and continuous delivery platform. It's used to create automated software workflows that transform raw code into binaries before deploying them automatically.

JFrog co-founder and Chief Executive Shlomi Ben Haim (pictured) said revenue from the company's cloud offerings accelerated on a sequential basis, showing the importance of hybrid and multicloud DevOps among big enterprises.

"We believe that our success in the second quarter provides further validation that the JFrog platform is the backbone of their software supply chain," Ben Haim said. "We remain laser-focused on making our Liquid Software vision a reality."

JFrog said its cloud revenue grew by 68% from a year ago, to $19.2 million, representing 28% of its total sales. That suggests its cloud offerings are growing in importance, because cloud accounted for just 24% of sales one year earlier.
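Those percentages hang together arithmetically. Backing the prior-year figures out of the reported growth rates ($19.2 million of cloud revenue up 68%, $67.8 million total up 39%) lands close to the 24% share quoted:

```python
cloud_now, total_now = 19.2, 67.8          # $M, this quarter
cloud_share_now = cloud_now / total_now    # ~0.283, the reported 28%

cloud_prior = cloud_now / 1.68             # back out 68% cloud growth
total_prior = total_now / 1.39             # back out 39% total growth
cloud_share_prior = cloud_prior / total_prior  # ~0.234, roughly the 24% quoted

print(round(cloud_share_now * 100))        # 28
print(round(cloud_share_prior * 100, 1))   # 23.4
```

The small gap from 24% comes from the growth rates themselves being rounded in the report.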

The company showed plenty of other positive growth metrics too. Its net dollar retention rate, which is a measure of its ability to retain customers and the revenue they provide, ended the quarter at 132%. Meanwhile, customers that deliver at least $100,000 in annual revenue grew to 647, up from 415 one year earlier. Of those, 36% have adopted the complete JFrog Platform, as opposed to just 32% a year ago.

For the third quarter, JFrog said it's anticipating earnings of between a one-cent loss and a one-cent profit per share, and revenue of $70.5 million to $71.5 million. That's more or less in line with Wall Street's forecast of a penny profit on sales of $70.9 million.

JFrog's stock slipped just over 1% on the report, having made gains of more than 5% in the regular trading session.

The Industry Handbook: Software Industry – Global Online Money

Software program is differentiated from {hardware} because the algorithm that enable providers to be carried out on the bodily system. The software program business is absolutely solely a small a part of the general pc programming exercise that takes place, because it pertains to software program traded between software program producers and software program shoppers. Many software program packages created in-house for very particular makes use of are by no means bought exterior of the corporate. For the reason that businesss starting within the Nineteen Fifties, it has gone by a variety of revolutionary adjustments, from easy punch-card programming providers provided to these few firms that had computer systems in 1955 to revolutionary traits similar to software program as a service (SaaS), system programming for the Web of Issues (IoT) and open-source options acceptance by main firms.

The software program business could be separated into 4 foremost classes: programming providers, system providers, open supply and SaaS. The next describes the classes of enterprise software program used within the business.

Programming Providers this sector has traditionally been the most important sector and contains names similar to Microsoft Company (NASDAQ: MSFT), Automated Knowledge Processing, Inc. (NASDAQ: ADP), Oracle Company (NYSE: ORCL) and SDC Applied sciences, Inc. These firms usually pioneered options to wants by companies to research information, retailer and arrange information, or present packages to run equipment.

System Providers though programming was the most important software program sector early in pc historical past, system providers grew quickly by the Sixties and Nineteen Seventies, after which exploded within the Eighties with the rise of non-public computer systems (PCs) and the necessity for an encompassing working system similar to Microsofts unique disk working system (DOS) that was launched in 1981.

Open Supply programming or software program engineering has change into an enormous in-demand occupation with the expansion of the Web, cloud methods and companies prepared to enterprise extra willingly into open-source environments such because the Linux working system. Open supply refers to a code base that was created and is free to amass. Nevertheless, most companies require adjustments to be made to the code bases to swimsuit their wants. One other open-source code base is the Android working system.

Software program as a service with the rise of cloud computing and the motion of most companies massive and small to the cloud, SaaS has change into extra well-liked than system software program for companies particular wants. This software program is saved on the creators servers and shoppers entry the software program by the Web, additionally known as the cloud. All upgrades, patches and points are dealt with on the creator facet with a subscription-based mannequin for the consumer. The SaaS sector is forecast for steady progress over the following decade, representing nearly 30% by 2018. By the tip of 2016, its forecast that over 80% of all companies will incorporate at the least one element of cloud computing inside their info expertise (IT) infrastructures, similar to infrastructure as a service (IaaS), platform as a service (PaaS) or SaaS packages.

SaaS providers are vying for market share by trying to offer the most services within their offerings to cater to as many situations as possible. Zoho's suite of apps and Oracle's move into software modules are great examples of how software companies are developing into large modular-based systems where businesses can plug in the necessary components for their situation. The model is attractive to businesses of all sizes, since a business only needs to pay for the modules, such as programs and apps, it requires to run its operations, and most of these SaaS products are almost instantly scalable if the business needs to grow.

With the arrival of the internet and cloud computing, the computer software industry has radically changed how companies interact with, develop, and use software. Software was once a product that was purchased, installed, and maintained. In 2016, more and more companies are using software under a subscription model in which all development, maintenance, and upkeep of the program is done by the original creator. (For related reading, see 8 Software Skills Currently in Demand.)

Read the original:

The Industry Handbook: Software Industry | Global Online Money - Global Online Money

The Microsoft Team Racing to Catch Bugs Before They Happen – WIRED

As a rush of cybercriminals, state-backed hackers, and scammers continues to flood the zone with digital attacks and aggressive campaigns worldwide, it's no surprise that the maker of the ubiquitous Windows operating system is focused on security defense. Microsoft's Patch Tuesday update releases frequently contain fixes for critical vulnerabilities, including those that are actively being exploited by attackers out in the world.

The company already has the requisite groups to hunt for weaknesses in its code (the "red team") and develop mitigations (the "blue team"). But recently, that format evolved again to promote more collaboration and interdisciplinary work in the hopes of catching even more mistakes and flaws before things start to spiral. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, blue team, and so-called green team, which focuses on finding flaws or taking weaknesses the red team has found and fixing them more systemically through changes to how things are done within an organization.

"People are convinced that you cannot move forward without investing in security," says David Weston, Microsoft's vice president of enterprise and operating system security, who has been at the company for 10 years. "I've been in security for a very long time. For most of my career, we were thought of as annoying. Now, if anything, leaders are coming to me and saying, 'Dave, am I OK? Have we done everything we can?' That's been a significant change."

Morse has been working to promote safe coding practices across Microsoft so fewer bugs end up in the company's software in the first place. OneFuzz, an open source Azure testing framework, allows Microsoft developers to constantly and automatically pelt their code with all sorts of unusual use cases to ferret out flaws that wouldn't be noticeable if the software were only being used exactly as intended.
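To illustrate the idea (this is not OneFuzz's actual API, just a minimal sketch of the fuzzing concept): generate randomized inputs, throw them at a target function, and record any that make it crash. The buggy firstWordUpper function here is hypothetical.

```typescript
// A deliberately buggy function: it crashes when the input contains no words.
function firstWordUpper(s: string): string {
  const words = s.split(" ").filter((w) => w.length > 0);
  return words[0].toUpperCase(); // TypeError when words is empty
}

// A tiny fuzz loop: pelt the target with random short strings and
// collect any inputs that make it throw.
function fuzz(target: (s: string) => unknown, runs: number = 500): string[] {
  const crashes: string[] = [];
  for (let i = 0; i < runs; i++) {
    const len = Math.floor(Math.random() * 5); // lengths 0 through 4
    let input = "";
    for (let j = 0; j < len; j++) {
      input += " ab"[Math.floor(Math.random() * 3)]; // spaces included on purpose
    }
    try {
      target(input);
    } catch {
      crashes.push(JSON.stringify(input)); // keep the crashing input visible
    }
  }
  return crashes;
}

console.log(`crashing inputs found: ${fuzz(firstWordUpper).length}`);
```

Real fuzzers add coverage feedback and seed corpora on top of this loop, but the core payoff is the same: inputs nobody would write in a unit test surface the crash automatically.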

The combined team has also been at the forefront of promoting the use of safer programming languages (like Rust) across the company. And they've advocated embedding security analysis tools directly into the real software compiler used in the company's production workflow. That change has been impactful, Weston says, because it means developers aren't doing hypothetical analysis in a simulated environment, where some bugs might be overlooked at a step removed from real production.

The Morse team says the shift toward proactive security has led to real progress. In a recent example, Morse members were vetting historic software, an important part of the group's job, since so much of the Windows codebase was developed before these expanded security reviews. While examining how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used across networks like the internet for secure communication, Morse discovered a remotely exploitable bug that could have allowed attackers to access targets' devices.

As Mitch Adair, Microsoft's principal security lead for Cloud Security, put it: "It would have been as bad as it gets. TLS is used to secure basically every single service product that Microsoft uses."

Read the original:

The Microsoft Team Racing to Catch Bugs Before They Happen - WIRED

The State of Software Security Testing Tools in 2022 – ITPro Today

Supply chain attacks, injection attacks, server-side request forgery attacks: all these threats, and more, prey on software vulnerabilities. Vulnerabilities can range from misconfigurations to faulty design and software integrity failures. Overall, applications are the most common attack vector, with 35% of attacks exploiting some type of software vulnerability, according to Forrester Research.

The focus on software security, along with the proliferation of software security testing tools, has grown over the past few years, thanks in part to supply chain attacks like Stuxnet and the SolarWinds compromise. And as organizations expand their web presence, there is more risk than ever. Finally, the move toward DevSecOps has encouraged more organizations to include security testing in the software development phase.

Related: App Development: Staying Secure Using Low-Code Platforms

Keeping software attacks at bay requires increasing efforts around testing -- and not only at the end of development. For those developing software in house, software should be tested early and often. Doing so can reduce the delays and extra expenses that occur when software must be rewritten toward the end of a production cycle.

In the case of software developed externally, the wisest approach is to test via multiple methods before putting it into full-scale production.

"It's always easier to prevent problems than it is to find issues during production, so baking in security testing from the beginning makes a lot of sense," said Janet Worthington, senior analyst for security and risk at Forrester.

One of the most important testing tools to prevent the escalation of threats is static analysis testing.

Also called static application security testing (SAST), this type of testing analyzes either the software's source code or its application binaries to model the application for code security weaknesses. It's especially good at rooting out injection flaws. SQL injection is a common attack vector that inserts a SQL query through the input data sent from the client to the application. It is often used to access or delete sensitive information.
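A minimal sketch of the pattern SAST tools flag versus the parameterized alternative they push you toward (the query, table, and driver shape here are hypothetical, not any particular database library):

```typescript
// Vulnerable pattern: user input is concatenated directly into the SQL text,
// so the input can rewrite the query itself.
function unsafeQuery(userInput: string): string {
  return `SELECT * FROM users WHERE name = '${userInput}'`;
}

// Parameterized pattern: the driver receives the input separately and
// never interprets it as SQL.
function safeQuery(userInput: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [userInput] };
}

const payload = "' OR '1'='1";
console.log(unsafeQuery(payload)); // the WHERE clause now matches every row
console.log(safeQuery(payload));   // the payload stays an inert string value
```

A static analyzer can spot the first pattern purely from the code's structure, which is why this class of bug is such a good fit for SAST.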

SAST tools also can help identify server-side request forgery (SSRF) vulnerabilities, where attackers can force servers to send forged HTTP requests to a third-party system or device. SAST tools can help catch these vulnerabilities before they reach production.
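One common mitigation that SSRF findings lead to is an allowlist check before any server-side fetch. A minimal sketch, with hypothetical hostnames:

```typescript
// Only fetch URLs whose scheme and host are explicitly approved.
// The hostnames here are placeholders, not a recommended list.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

function isSafeToFetch(rawUrl: string): boolean {
  try {
    const url = new URL(rawUrl);
    // Rejecting unknown hosts blocks forged requests aimed at internal
    // services, such as a cloud provider's metadata endpoint.
    return url.protocol === "https:" && ALLOWED_HOSTS.has(url.hostname);
  } catch {
    return false; // unparseable input is never fetched
  }
}

console.log(isSafeToFetch("https://api.example.com/v1/data"));         // allowed
console.log(isSafeToFetch("http://169.254.169.254/latest/meta-data")); // blocked
```

Validating the parsed hostname, rather than matching substrings of the raw URL, avoids a whole family of bypasses that string checks miss.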

Another critical testing tool is software composition analysis. These tools help block malicious components from entering the pipeline altogether. They look for known vulnerabilities in all components, including those in open-source and third-party libraries. Vulnerabilities like Log4J have contributed to the popularity of this type of testing tool. Forty-six percent of developers now use software composition analysis tools for testing, according to Forrester.
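At its core, software composition analysis compares an application's declared dependencies against a database of known-vulnerable versions. A toy sketch of that matching step (the package names, versions, and vulnerability list are illustrative, not a real advisory feed):

```typescript
// A tiny, illustrative "advisory database" mapping packages to
// versions with known vulnerabilities.
const knownVulnerable: Record<string, string[]> = {
  "log4j-core": ["2.14.1"],
};

// Flag any declared dependency whose exact version appears in the list.
function scanDeps(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([name, version]) => knownVulnerable[name]?.includes(version))
    .map(([name, version]) => `${name}@${version} has known vulnerabilities`);
}

console.log(scanDeps({ "log4j-core": "2.14.1", "express": "4.18.2" }));
```

Real SCA tools also resolve transitive dependencies and match version ranges rather than exact versions, but the matching idea is the same.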

Other important types of software security testing tools include:

Depending on the type of threat, the platform, and other factors, organizations may choose to employ various types of testing tools. Some applications may also need testing tools that aren't on the list above. For example, an application that includes cryptographic signing will probably require a cryptographic analysis tool. That's why today, more than ever before, it's important to use more than one type of software testing tool.

"If you want to be as thorough as possible, you'll want to do SAST testing to find vulnerabilities in source code, SCA for open-source components, and DAST to test the running web application," said Ray Kelly, a fellow at Synopsys, which provides software security and testing tools. "It's really about finding the right tools for your specific situation."

There is no shortage of tools, and it can be confusing to sift through the options. Overall, there are open-source tools, best-of-breed tools from vendors, and proprietary software testing platforms.

Open-source tools tend to be very tactical in nature, focused on one thing. Examples include OWASP ZAP, a free web application security scanner; Snyk's free code quality and vulnerability checker; SQLmap or Metasploit for penetration testing; SonarQube for code security; and FOSSA for open-source dependency testing.

There are, of course, many best-of-breed tools available for a fee from various vendors.

And then there are proprietary software testing platforms, like HCL AppScan and HP Fortify, as well as platforms from vendors like Veracode, Checkmarx, Synopsys, Palo Alto Networks, and Aqua Security.

In most cases, organizations would do best to blend different types of tools from different sources, said Aaron Turner, a vice president at Vectra AI, a threat detection and response vendor. "If you combine a software testing platform with select best-of-breed testing tools, whether open source or proprietary, you can be sure to hit all of your marks, because there is no one platform that can do everything."

If budget is an issue, Worthington recommended starting with the free version of a testing tool, which many vendors now offer. For example, Snyk, which is known for its software composition analysis tool, has a free open-source version. After the tool has proven valuable, the organization can decide whether to pay for the full-featured version.

Advice From the Experts

Know your team and its capabilities before diving into software security testing, Kelly advised.

"In many cases, software development [or evaluation] teams are overwhelmed by features, product requests, and agile deployment methodologies," Kelly explained. "Often, they are shipping a new product every week, if not every day, and sometimes security takes a backseat. It's worth taking the time to really analyze what applications are actually running in your environment today, what their risks are, and what the threat landscape is. Take the time to take that inventory and get a baseline."

And before committing to any testing tool or methodology, make sure you're considering the relative importance of the software in your environment. "If you're a natural gas pipeline operator and you rely on a specific piece of software to keep the pipeline running, you'll probably spend a lot more time and effort testing that piece of industrial control software than you would testing WordPress, which runs your website," Turner said.

Finally, it's important to keep up with developments in software security. That means not only subscribing to relevant blogs and podcasts, but also staying on top of government advisories (e.g., via the Cybersecurity and Infrastructure Security Agency) and NIST's National Vulnerability Database.

Read more:

The State of Software Security Testing Tools in 2022 - ITPro Today

Bridging the security gap in continuous testing and the CI/CD pipeline – Security Boulevard

Learn why Synopsys earned the highest score for the Continuous Testing Use Case in Gartner's latest report.

Gartner recently released its 2022 Critical Capabilities for Application Security Testing (AST) report, and I am delighted to see that Synopsys received the highest score across each of the five Use Cases. Let's look at the Continuous Testing Use Case, dive into how Gartner ranks and rates it, and see why the Synopsys portfolio of offerings is well suited for organizations that are looking to implement continuous testing or are already doing it.

When it comes to the criteria used to rate the top 14 tools' ability to deliver continuous testing, Gartner places slightly more weight on a tool's ability to perform dynamic application security testing (DAST), interactive application security testing (IAST), and API security testing and discovery. It places less or equal weight on a tool's ability to perform static application security testing (SAST) and software composition analysis (SCA). To understand why, let's look at the role continuous testing plays in today's software ecosystem.


First, we need to understand what exactly continuous testing is. As the name implies, continuous testing refers to the execution of automated tests every time code changes are made. These tests are carried out continuously and iteratively across the software development life cycle (SDLC). They are conducted as a part of the software delivery pipeline to drive faster feedback on changes pushed to the code and/or binary repository.

Continuous testing is especially important in an organization's drive toward DevOps continuous integration/continuous delivery (CI/CD). While CI/CD enables product innovation at lightning speed (which is crucial for businesses to stay ahead of the curve), continuous testing helps build trust in the quality. Continuous testing provides the much-needed peace of mind that products perform as expected and are reliable and secure. Continuous testing in a delivery pipeline allows the team to introduce any number of quality gates anywhere they want, to achieve the degree of quality that they need.
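A quality gate can be pictured as a small pipeline step that passes or blocks a release based on a measured threshold. A sketch of that idea, with a hypothetical coverage metric and cutoff:

```typescript
// A pipeline gate either passes or blocks with a human-readable reason.
interface GateResult {
  passed: boolean;
  reason: string;
}

// Example gate: block the build when test coverage falls below a threshold.
// The 80% default is an illustrative policy choice, not a standard.
function coverageGate(coveragePercent: number, minimum: number = 80): GateResult {
  if (coveragePercent >= minimum) {
    return { passed: true, reason: "coverage meets the bar" };
  }
  return {
    passed: false,
    reason: `coverage ${coveragePercent}% is below the required ${minimum}%`,
  };
}

// A delivery pipeline would evaluate gates like this after each test stage.
console.log(coverageGate(92));
console.log(coverageGate(61));
```

Other gates (open critical vulnerabilities, failed DAST checks, performance budgets) follow the same pass/block shape, which is what makes them easy to drop in "anywhere" along the pipeline.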

Although continuous testing is becoming a standard practice today, embedding another layer of security oversight is something not readily undertaken by most organizations. It is simple to understand why.

Implementing continuous testing is already a massive undertaking without adding another layer of security on top of it. For continuous testing to work, both development and QA test teams need to get together to define the tests early, develop the test-driven or behavior-driven test cases, and ensure good test coverage. To run a successful continuous testing operation, they will also need a complete test environment on demand, with dev-friendly tools (such as code, CI/CD integrations, and supported open source) for the various development and test teams to use. These environments should ideally be ready for various on-demand needs, from unit tests to integration, functional, regression, and acceptance tests, and have the ability to provision the right test data so teams can perform comprehensive tests with production-like data. With continuous testing, the various types of tests are executed seamlessly at each stage of the continuous pipeline and in each environment the code gets deployed to. Tests are triggered automatically by events such as code check-ins or code changes. The aim of continuous testing is to provide prompt feedback that alerts the team to problems as quickly as possible.

Continuous testing becomes tougher and longer as it progresses toward the production environment. The depth of testing also increases as the simulation environment gets closer to production. You need to slowly add more tests, and more complicated tests, as the code matures and environment complexity advances. Chances are the same test cases developed early on will not be run unchanged throughout the SDLC. The test cases need to be updated each time significant changes are introduced, and the automated scripts will need to be updated at the different phases of testing as the code matures and progresses to higher-level environments, where configurations and infrastructure also advance, until it reaches production.

Even the time needed to run the tests increases as testing progresses toward the release point. For example, a unit test might take very little time to run, whereas some integration tests or system/load tests might take hours or days. With the amount of time and effort required to execute end-to-end continuous testing, it's no wonder automated security tests lag behind other types of automation efforts (e.g., automating build and release), according to Google's State of DevOps report.

For organizations that have security test practices and tools built into their continuous testing and delivery pipeline, it's common to find SAST and/or SCA tools deployed in the automated pipeline. These tools have their own place in the SDLC; in fact, they are necessary early in the SDLC to help secure proprietary codebases and external dependencies such as open source and third-party code. This may suffice in a controlled environment, with controlled codebases that ensure predictable user experiences.

Unfortunately, the software app development and delivery paradigm has shifted from monolithic to today's highly distributed computing model. There are innumerable software components and event-driven triggers thanks to technologies such as microservices architecture, the cloud, APIs, and serverless functions in today's modern, composite applications. And some critical vulnerabilities and exploits cannot be anticipated or caught in early development phases; they don't get triggered until application runtime tests, when the various components are integrated. The sheer volume of apps that an organization owns and must manage today, from internal proprietary codebases and applications to third-party components and APIs, contributes to the growth of unanticipated attack surfaces.

Therefore, it's more critical than ever to incorporate modern DAST approaches to testing, particularly those that can augment the continuous testing and CI/CD pipeline with the least friction.

Synopsys has the broadest and most comprehensive portfolio for your application security needs. Our AST tools provide seamless life cycle integration with end-to-end app security test coverage across the continuous pipeline.

Some key benefits of Synopsys solutions include

Continuous security testing and continuous delivery are processes that can take time to implement successfully. But close collaboration between development, security, and DevOps teams, along with continuous security feedback based on highly accurate data and the right tool set, will help bulletproof your critical applications.


Here is the original post:

Bridging the security gap in continuous testing and the CI/CD pipeline - Security Boulevard

Paladin Cloud lands $3.3 million seed funding with T-Mobile – SC Media

Paladin Cloud on Monday announced a $3.3 million seed round with T-Mobile Ventures that aims to equip developers with a strong platform to detect, visualize, and remediate important risks in their multi-cloud environments across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform.

Developers can use Paladin Cloud to continuously monitor their cloud services in real time. The open source platform promises to identify and eliminate misconfigurations, thus reducing security risks while automating workflow and remediation.
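Conceptually, this kind of misconfiguration detection boils down to evaluating policy rules against resource configurations. A toy sketch of the idea (the config shape and rules are hypothetical, not Paladin Cloud's actual model):

```typescript
// A simplified stand-in for a cloud storage resource's configuration.
interface BucketConfig {
  name: string;
  publicRead: boolean;
  encrypted: boolean;
}

// Evaluate two example policy rules against every resource and report
// each violation as a human-readable finding.
function findViolations(buckets: BucketConfig[]): string[] {
  const violations: string[] = [];
  for (const b of buckets) {
    if (b.publicRead) violations.push(`${b.name}: public read access enabled`);
    if (!b.encrypted) violations.push(`${b.name}: encryption at rest disabled`);
  }
  return violations;
}

console.log(findViolations([
  { name: "logs", publicRead: false, encrypted: true },
  { name: "backups", publicRead: true, encrypted: false },
]));
```

A real platform runs checks like these continuously against live cloud APIs across providers and feeds the findings into remediation workflows; the rule-evaluation core is the same.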

Leveraging T-Mobiles PacBot framework, Paladin Cloud aims to build a new open source community for developers dedicated to holistically improving cloud security.

"It's become very important to incorporate security into development, both by setting policies as guardrails to block coding misconfigurations from being deployed, and by automating testing of apps to quickly identify and fix security issues," explained Melinda Marks, a senior analyst at the Enterprise Strategy Group.

"This has been a challenge with modern software development, to build these processes into development in a non-disruptive way," Marks said. "An investment from T-Mobile shows they are interested in helping developers get the resources they need to produce secure code for more secure applications. And because so many more transactions are happening now via mobile, giving developers the right tools can help secure their apps; otherwise anything disruptive will make them skip the security measures."

Frank Dickson, who covers security and trust at IDC, said misconfigurations have become the primary risk vector for cloud applications, much worse than vulnerabilities, adding that developers desperately need offerings that address misconfigurations.

"I question the open source approach, though," Dickson said. "I realize that open source is the rage in software, but open source also means that the customer owns the outcome. It also means yet another vendor to manage. The market demands more integrated platform solutions that create outcomes for customers."

View post:

Paladin Cloud lands $3.3 million seed funding with T-Mobile - SC Media

TypeScript Tutorial: A Guide to Using the Programming Language – thenewstack.io

JavaScript is one of the most widely used programming languages for frontend web development on the planet. Developed by Microsoft, TypeScript serves as a strict syntactical superset of JavaScript that aims to extend the language, make it more user-friendly, and apply to modern development. TypeScript is an open source language and can be used on nearly any platform (Linux, macOS, and Windows).

TypeScript is an object-oriented language that includes features like class, interface, Arrow functions, ambient declaration, and class inheritance. Some of the advantages of using TypeScript include:

Some of the features that TypeScript offers over JavaScript include:

One of the biggest advantages of using TypeScript is that it offers a robust environment to help you spot errors in your code as you type. This feature can dramatically cut down on testing and debugging time, which means you can deliver working code faster.

Ultimately, TypeScript is best used to build and manage large-scale JavaScript projects. It is neither a frontend nor backend language, but a means to extend the feature set of JavaScript.

I'm going to walk you through the installation of TypeScript and get you started by creating a very basic Hello, World! application.

Let's get TypeScript installed on Linux (specifically, Ubuntu 22.04). In order to do this, we must first install Node.js. Log in to your Ubuntu Desktop instance, open a terminal window, and install both Node.js and npm with the command:

sudo apt-get install nodejs npm -y

With Node.js and npm installed, we can now install TypeScript with npm using the command:

npm install -g typescript

If that errors out, you might have to run the above command with sudo privileges like so:

sudo npm install -g typescript

To verify that the installation was successful, issue the command:

tsc -v

You should see the version number of TypeScript that was installed, such as:

Version 4.7.4

Now that you have TypeScript installed, let's add an IDE into the mix. We'll install VSCode (because it has TypeScript support built in). For this we can use Snap like so:

sudo snap install code --classic

Once the installation is complete, you can fire up VSCode from your desktop menu.

The first thing we're going to do is create a folder to house our Hello, World! application. On your Linux machine, open a terminal window and issue the command:

mkdir helloworld

Change into that directory with:

cd helloworld

Next, we'll create the app file with:

nano hw.ts

In that new file, add the first line of the app like so:

let message: string = 'Hello, New Stack!';

Above you see we use let, which is similar to the var variable declaration but avoids some of the more common gotchas found in JavaScript (such as variable capturing and strange scoping rules). In our example, we set the variable message to a string that reads Hello, New Stack!. Pretty simple.
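The capture gotcha mentioned above is easy to demonstrate: var is function-scoped, so every callback created in a loop sees the same final value, while let creates a fresh binding per iteration:

```typescript
// With var, all three closures share one binding of i, which ends at 3.
const withVar: Array<() => number> = [];
for (var i = 0; i < 3; i++) {
  withVar.push(() => i);
}
console.log(withVar.map((f) => f())); // every callback returns 3

// With let, each iteration gets its own binding of j.
const withLet: Array<() => number> = [];
for (let j = 0; j < 3; j++) {
  withLet.push(() => j);
}
console.log(withLet.map((f) => f())); // callbacks return 0, 1, 2
```

This is exactly the kind of surprise that let (and TypeScript's stricter checks around it) was designed to eliminate.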

The second line of our Hello, World! app looks like this:

console.log(message);

What this does is print to the console log whatever the variable message has been set to (in our case, Hello, New Stack!).

Our entire app will look like this:

let message: string = 'Hello, New Stack!';

console.log(message);

Save and close the file.

With VSCode open, click Terminal > New Terminal, which will open a terminal in the bottom half of the window (Figure 1).

Figure 1: We've opened a new terminal within VSCode.

At the terminal, change into the helloworld folder with the command:

cd helloworld

Next, we'll generate a JavaScript file from our TypeScript file with the command:

tsc hw.ts

Open the VSCode Explorer and you should see both hw.js and hw.ts (Figure 2).

Figure 2: Both of our files as shown in the VSCode Explorer.

Select hw.js and then click Run > Run Without Debugging. When prompted (Figure 3), select node.js as your debugger.

Figure 3: Selecting the correct debugger is a crucial step.

Once you do that, VSCode will do its thing and output the results of the run (Figure 4).

Figure 4: Our Hello, World! app run was a success.

What if you want to do all of this from the terminal window (and not use an IDE)? That's even easier. Go back to the same terminal you used to write the Hello, World! app and make sure you're still in the helloworld directory. You should still see both the TypeScript and JavaScript files.

To run the Hello, World! app from the command line, you use node like so:

node hw.js

The output should be:

Hello, New Stack!

Congratulations, you've installed TypeScript and written your first application with the language. Next time around we'll go a bit more in-depth with what you can do with the language.

Go here to read the rest:
TypeScript Tutorial: A Guide to Using the Programming Language - thenewstack.io