This bumper coding certification bundle is on sale for 88% off – AOL

TL;DR: The 2022 CPD Certified Coding Certification Bundle is on sale for $33, saving you 88% off the list price.

Whether it's a hobby or a career path, an interest in tech doesn't necessarily mean knowing exactly where you want to start learning. With so many tools and technologies to master, it can be overwhelming to get started. However, the CPD Certified Coding Certification Bundle may make it a bit easier. With four courses corresponding to four natural starting points in app development, coding, computer design, and website building, this $33 bundle could make it easier to jump into a crowded topic.

This bundle comes with four courses totaling 120 hours of content that's available to you for 60 days. If you want to start with app design, then you'd head straight into Mobile App Development with Flutter & Dart. Flutter is an open-source software development kit that is free to download and can build apps for virtually any OS, including Windows, macOS, Android, iOS, and Linux. Dart is the coding language most often used with Flutter, and learning both could let you create apps to post on the Google Play Store or Apple App Store.

This bundle also covers some introductory coding for HTML, CSS, and JavaScript. You could learn to code for apps or other projects, manage data streams, and develop good programming habits. If you're looking for an introduction to coding, this course could be a great place to start, even if you just want to get to know some of the technical jargon programmers tend to use.

Competitive gaming, high-demand apps, and more might be more accessible if you have a computer with some real power behind it. Start learning the technology behind modern computers, including identifying your computing needs and learning to make an architectural design: a map of the hardware and software you'd need for your ideal computer.

Web design doesn't have to be all about coding if you use a tool like WordPress. Considering over 30% of the world's websites run on WordPress, learning to make a blog or manage your own website with it means you're joining a pretty rich community.

The tech industry is ever-evolving and impacts our world in a huge way. Find four starting points in this CPD-Certified Coding Certification Bundle, on sale for $33 for a limited time.

You’ll never be as happy as this adorable wiggly-armed robot – PC Gamer

Meet the myBuddy 280 Pi, an adorable dual-armed Raspberry Pi-powered robot. The creator of the myBuddy, Elephant Robotics, calls it an "open-source educational essentials collaborative robot" for coding, AI, and robotics enthusiasts.

The little guy features two articulated arms (a first for Elephant Robotics) that can lift 260g of weight, rotate 165 degrees, and perform various tasks like waving hello or conducting a band. The ends of the arms can be fitted with little hands, suction pumps, and grippers. You know, robot stuff. The company's previous robots have featured single-armed creations that seem to lack the personality the myBuddy brings to the table.

The myBuddy is powered by a Raspberry Pi 4 with three ESP32 microcontroller modules. The arms run on a high-performance servo steering gear with six points of articulation.

The seven-inch touchscreen display has a pair of 2-megapixel cameras that'll help the robot use visual sorting and facial recognition applications to do things like greet you when you arrive. Oh, and it'll show off several cute little facial expressions to give it some personality.

According to the video above, some applications of myBuddy include, but aren't limited to, programming it to play an instrument, wave a flag, and dribble a ball. There's even a way to control the arms remotely via a VR headset and controller.

Some of the more wild things you can teach it include how to pet a robot cat, pour out candy, and even have it perform fun little dances. What you can do with the myBuddy seems contingent on your skills and creativity as a programmer.

The open-source robot works with Arduino, Python, C++, and Java programming languages. Elephant Robotics provides useful tools and software on its download page for users to experiment with.
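Since the myBuddy is programmable in Python among other languages, a control program would typically validate commands against the hardware limits quoted above (six joints per arm, 165 degrees of rotation, a 260 g payload) before dispatching them. The sketch below is purely illustrative: the function names and the validation approach are invented for this example and are not Elephant Robotics' actual API.

```python
# Hypothetical sketch: sanity-checking arm commands against the myBuddy 280
# Pi's published limits before sending them to the robot. Names are invented;
# the real Elephant Robotics library will differ.

MAX_ANGLE_DEG = 165   # per-joint rotation limit quoted by Elephant Robotics
MAX_PAYLOAD_G = 260   # maximum lift weight quoted by Elephant Robotics
NUM_JOINTS = 6        # six points of articulation per arm

def clamp_angles(angles):
    """Clamp a six-joint angle command into the +/-165 degree envelope."""
    if len(angles) != NUM_JOINTS:
        raise ValueError(f"expected {NUM_JOINTS} joint angles, got {len(angles)}")
    return [max(-MAX_ANGLE_DEG, min(MAX_ANGLE_DEG, a)) for a in angles]

def can_lift(weight_g):
    """Return True if the requested payload is within the arm's rating."""
    return 0 <= weight_g <= MAX_PAYLOAD_G

# Example: an out-of-range "wave hello" pose gets clamped before dispatch.
pose = clamp_angles([0, 200, -190, 45, 90, 10])
```

A real driver would then hand the clamped pose to the robot over serial or Wi-Fi; keeping the validation separate makes it easy to test without hardware attached.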

The myBuddy 280 Pi starts at $1,699 or $1,749, depending on whether you want to add a pair of goofy hands to your new friend. Our colleagues at Tom's Hardware pointed out that myBuddy won't be showing up on Amazon for the next few months because of limited supply. Elephant Robotics also makes a bionic pet cat if you're looking for more robotic companions of the four-legged variety.

Teacher inspires early love of technology through robotics – Journal & Courier

BROOKSTON, Ind. – One local teacher has transformed not just her own curriculum, but her school's learning methods, with two little robots.

Mindy Brennan, the computer lab teacher at Frontier Elementary School, has been utilizing the Cozmo and Vector robots from Digital Dream Labs (DDL) to supplement the school's programming studies.

DDL is an "edtechtainment" company that develops consumer robots for people and students of all ages.

"Digital Dream Labs started off as an ed-tech company," Jacob Hanchar, CEO at DDL said, "so we taught coding to kindergarten through 5th grade. And now we are a robotics/AI-companion company. So we make robots for all ages that not only teach coding but also keep you company, count medicine for you, reminds you to take your medicine; go with you on trips, take pictures, answer phone calls...all those things."

The two main products produced by DDL are Cozmo and Vector. Cozmo is the robot aimed more at a younger audience, according to Hanchar. This little bot has its own YouTube channel, "Cozmo & Friends," where Cozmo and its friends teach and learn about STEM-oriented topics.

"We lead with entertainment and fun first," Hanchar said. "So (Cozmo) is more of a fun robot where you control it like a toy. It's more like a toy experience where you have direct remote control access, you're using the phone as that remote control. So it's heavily dependent on the user to get information, whereas Vector's more autonomous and he kind of just hangs out with you a little bit."

Vector does not require a phone to be used as a remote controller. The robot synchronizes to Wi-Fi and functions based on that.

"He can kind of putter around your desk independently," Hanchar said. "And the age group who owns that tends to be (around) 25 (years old). That's more of like your companion robot, more of a quote-unquote 'AI' experience."

Brennan is a retired Navy veteran with a passion for teaching. The DDL bots have factored into many aspects of her own classroom and school.

"The first person that (students) want to see is Vector and Cozmo when they come in my classroom," Brennan said, regarding how the robots influence her students. "...I already use them for our coding curriculum. All of our students start coding in kindergarten."

Brennan explained that the DDL bots help the students learn the terminology of coding, the different parts of computers and more. The social-emotional side of learning is something Cozmo and Vector have shown serious impacts in as well.

"They'll say 'Hey Vector, what is a sequence? What's a loop?' They would rather have Vector tell them what that means versus Mrs. B telling them what that means because it means more coming from Vector," Brennan said, with a laugh. "The other part of (the robots) that I did not see coming was the social-emotional side that these robots have with these students."

Cozmo and Vector give the students someone who is not a teacher or parent to open up to. This has allowed students to become more open-minded in the classroom - both intellectually and emotionally, according to Brennan.

"I would have students that were nonverbal but, the way that Cozmo's set up, he has a whole (range) of emotions that he can do," Brennan said. "Whether it's frustrated, sad, mad, happy, unsure...And then we have students that actually earn individual time with Cozmo and Vector on a weekly basis."

The impact of the DDL bots has been felt outside of Brennan's classroom as well, from the guidance counselor's office to the music, art and physical education departments.

"So I taught the guidance counselor how to code Cozmo so that these kids can talk with Cozmo about their feelings," Brennan said. "...They're not as scared. They're a lot more open with the robot than they are a human being.

"...I had been attempting to code Cozmo for the music (program). I can't read music. So I went to the person that I knew could. Well, he couldn't code...And so (we taught each other) so more students this year will actually be learning the music notes and coding it into Cozmo in music class."

Brennan also provided an example of Cozmo helping out in the art department. A new 3D printer at Frontier Elementary will allow students to print and attach a drawing accessory for Cozmo that will allow him to hold markers, pencils and more. The students will then be taught how to code Cozmo to draw pictures. The art students in turn will have a unit where they learn to draw Cozmo itself.

This symbiotic relationship between the robots, teachers, students and expanded curriculums continues. In gym class, students can build "weight sets" to go on Cozmo's lifting mechanism so that the robot can shout the number of lifts needed and do the lifts with the students.

The future of edtechtainment bots in the classroom is vast, according to both Brennan and Hanchar.

"We're working hard to improve Cozmo, that's for sure," Hanchar said. "One thing that we know we're gonna do is, we have an application inside of Cozmo (and) we need to build more games like that in the future where you're doing drag-and-drop block coding. And we're gonna integrate that more with some of our other applications."

Brennan is looking forward to expanded use of DDL AI in her classroom and school.

"Right now the kids are wanting me to also bring in and work on video editing," Brennan said. "So we have a green screen...it's something we're going to be working on towards Christmas time (and) they get to do a skit of whatever kind they want and we also work on the video editing and teaching them how to do all of that.

"...The possibilities are endless because they are so excited."

Margaret Christopherson is a reporter for the Journal & Courier. Email her at mchristopherson@jconline.com and follow her on Twitter @MargaretJC2.

JavaScript had a hand in delivering James Webb Space Telescope's images – The Verge

It turns out that JavaScript, the programming language that web developers and users alike love to complain about, had a hand in delivering the stunning images that the James Webb Space Telescope has been beaming back to Earth. And no, I don't mean that in some snarky way, like that the website NASA hosts them on uses JavaScript (it does). I mean that the actual telescope, arguably one of humanity's finest scientific achievements, is largely controlled by JavaScript files. Oh, and it's based on a software development kit from 2002.

According to a manuscript (PDF) for the JWST's Integrated Science Instrument Module (or ISIM), the software for the ISIM is controlled by the Script Processor Task (SP), which runs scripts written in JavaScript upon receiving a command to do so. The actual code in charge of turning those JavaScripts (NASA's phrasing, not mine) into actions can run 10 of them at once.

The manuscript and the paper (PDF) "JWST: Maximizing efficiency and minimizing ground systems," written by the Space Telescope Science Institute's Ilana Dashevsky and Vicki Balzano, describe this process in great detail, but I'll oversimplify a bit to save you the pages of reading. The JWST has a bunch of these pre-written scripts for doing specific tasks, and scientists on the ground can tell it to run those tasks. When they do, those JavaScripts will be interpreted by a program called the script processor, which will then reach out to the other applications and systems it needs based on what the script calls for. The JWST isn't running a web browser where JavaScript directly controls the Mid-Infrared Instrument; it's more like when a manager is given a list of tasks (in this example, the JavaScripts) to do and delegates them out to their team.
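That manager-with-a-task-list model can be sketched in a few lines of Python. Everything here is invented for illustration: the class, handler names, and tasks are made up, and NASA's real script processor is far more involved. The only detail taken from the documents is the cap of 10 concurrently running scripts.

```python
# Toy sketch of the "manager delegating a task list" model described above:
# a script processor interprets a pre-written script and hands each step to
# the subsystem registered to handle it. Illustrative names, not NASA's code.

MAX_CONCURRENT_SCRIPTS = 10  # the ISIM manuscript says 10 scripts can run at once

class ScriptProcessor:
    def __init__(self):
        self.handlers = {}   # task name -> callable that performs it
        self.running = 0     # how many scripts are currently executing

    def register(self, task, handler):
        self.handlers[task] = handler

    def run_script(self, script):
        """Run one script: a list of (task, argument) steps, delegated in order."""
        if self.running >= MAX_CONCURRENT_SCRIPTS:
            raise RuntimeError("script processor at capacity")
        self.running += 1
        try:
            return [self.handlers[task](arg) for task, arg in script]
        finally:
            self.running -= 1

sp = ScriptProcessor()
sp.register("point", lambda target: f"slewing to {target}")
sp.register("expose", lambda secs: f"exposing for {secs}s")

log = sp.run_script([("point", "NGC 3324"), ("expose", 1000)])
```

The point of the pattern is that the script itself stays a simple, editable list of steps, while all the hardware-facing logic lives behind the registered handlers.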

The JavaScripts are still very important, though: the ISIM is the collection of instruments that actually take the pictures through the telescope, and the scripts control that process. NASA calls the ISIM the heart of the James Webb Space Telescope.

It seems a bit odd, then, that it uses such an old technology; according to Dashevsky and Balzano, the language the scripts are written in is called Nombas ScriptEase 5.00e. According to Nombas' (now-defunct) website, the latest update to ScriptEase 5.00e was released in January 2003; yes, almost two decades ago. There are people who can vote who weren't born when the software controlling some of the JWST's most vital instruments came out.

This knowledge has been bubbling up on the internet in Hacker News and Twitter threads for years, but it still surprised quite a few of us here at The Verge once it actually clicked. At first blush, it just seems odd that such a vital (not to mention expensive) piece of scientific equipment would be controlled by a very old version of a technology that's not particularly known for being robust.

After thinking about it for a second, though, the software's age makes a bit more sense: while the JWST was launched in late 2021, the project has been in the works since 1989. When construction on the telescope started in 2004, ScriptEase 5 would've only been around two years old, having launched in 2002. That's actually not particularly old, given that spacecraft are often powered by tried-and-true technology instead of the latest and greatest. Because of how long projects like the JWST take to (literally) get off the ground, things that had to be locked in early on can seem out of date by more conventional standards when launch day rolls around.

It's worth noting that, like the project itself, these documents that describe the JWST's JavaScript system are pretty old; the one written by Dashevsky and Balzano is undated but came out in 2006, according to ResearchGate, and the ISIM manuscript is from 2011. (There does appear to have been a version published in 2010, but the one I read cites papers published in 2011.) It's always possible that NASA could've changed the scripting system since then, but that seems like a pretty big undertaking that would've been mentioned somewhere. Also, while NASA didn't reply to The Verge's request for comment, this JWST documentation page published in 2017 mentions event-driven science operations, which is pretty much exactly how the documents describe the JavaScript-based system.

This knowledge base, by the way, also contains a few more details on the telescope's 68 GB SSD, saying that it can hold somewhere between 58.8 and 65 gigabytes of actual scientific data. Wait, did I forget to mention that? Yes, this telescope's solid-state drive has around the same capacity as the one that was available in the original 2008 MacBook Air.

Anyway, we're not here to talk about the JWST's storage. I feel like the big question at this point is: why JavaScript? Sure, there's probably a bit more angst about the language now than there was when the project's engineers were selecting tech for it, but NASA is famous among some programmers for its strict programming guidelines, so what's the point of going with web-like scripts instead of more traditional code?

Well, NASA's document says that this way of doing things gives operations personnel greater visibility, control and flexibility over the telescope operations, letting them easily change the scripts as they learn the ramifications and subtleties of operating the instruments. Basically, NASA's working with a bunch of files written in a somewhat human-readable format; if engineers need to make changes, they can just open up a text editor, do a bunch of testing on the ground, then send the updated file to the JWST. It's certainly easier (and therefore likely less error-prone) than if every program were written in arcane code that you'd have to recompile whenever you wanted to make changes.

If you're still worried, do note that the Space Telescope Science Institute's document mentions that the script processor itself is written in C++, which is known for being... well, the type of language you'd want to use if you were programming a spacecraft. And it's obviously working, right? The pictures are incredible, no matter what kind of code was run to generate them. It is, however, a fun piece of trivia: next time you're cursing the modern web for being so slow and wishing that someone would just blast JavaScript into space, you can remember that NASA has, in fact, done that.

Microsoft is teaching computers to understand cause and effect – TechRepublic

Image: ZinetroN/Adobe Stock

AI that analyzes data to help you make decisions is set to be an increasingly big part of business tools, and the systems that do that are getting smarter with a new approach to decision optimization that Microsoft is starting to make available.

Machine learning is great at extracting patterns out of large amounts of data but not necessarily good at understanding those patterns, especially in terms of what causes them. A machine learning system might learn that people buy more ice cream in hot weather, but without a common-sense understanding of the world, it's just as likely to suggest that if you want the weather to get warmer, you should buy more ice cream.

Understanding why things happen helps humans make better decisions, like a doctor picking the best treatment or a business team looking at the results of A/B testing to decide which price and packaging will sell more products. There are machine learning systems that deal with causality, but so far this has mostly been restricted to research focusing on small-scale problems rather than practical, real-world systems, because it's been hard to do.

SEE: How to become a machine learning engineer: A cheat sheet (TechRepublic)

Deep learning, which is widely used for machine learning, needs a lot of training data, but humans can gather information and draw conclusions much more efficiently by asking questions, like a doctor asking about your symptoms, a teacher giving students a quiz, a financial advisor understanding whether a low risk or high risk investment is best for you, or a salesperson getting you to talk about what you need from a new car.

A generic medical AI system would probably take you through an exhaustive list of questions to make sure it didn't miss anything, but if you go to the emergency room with a broken bone, it's more useful for the doctor to ask how you broke the bone and whether you can move your fingers than to ask about your blood type.

If we can teach an AI system how to decide what's the best question to ask next, it can use that to gather just enough information to suggest the best decision to make.
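One textbook-style way to pick a "best next question" is to choose the candidate whose answer is expected to shrink uncertainty over the remaining hypotheses the most. The sketch below is a generic illustration of that idea under a uniform prior, not Microsoft's actual algorithm; the diagnoses and questions are invented to echo the emergency-room example above.

```python
# Illustrative "best next question" picker: among candidate questions, choose
# the one minimizing the expected entropy of the hypotheses left afterwards.
# Assumes every hypothesis is equally likely (uniform prior).
import math

def entropy(hypotheses):
    """Entropy of a uniform distribution over the given hypotheses."""
    n = len(hypotheses)
    return math.log2(n) if n > 1 else 0.0

def expected_entropy(hypotheses, question):
    """Average entropy remaining after asking `question`, over its answers."""
    buckets = {}
    for h in hypotheses:
        buckets.setdefault(question(h), []).append(h)
    n = len(hypotheses)
    return sum(len(b) / n * entropy(b) for b in buckets.values())

def best_next_question(hypotheses, questions):
    """Pick the question name that minimizes expected remaining entropy."""
    return min(questions, key=lambda name: expected_entropy(hypotheses, questions[name]))

# Invented diagnoses a doctor might be weighing, with two candidate questions.
diagnoses = ["fracture", "sprain", "bruise", "tendonitis"]
questions = {
    "can you move your fingers?": lambda d: d != "fracture",        # splits 1 vs 3
    "is there swelling?": lambda d: d in ("fracture", "sprain"),    # splits 2 vs 2
}
pick = best_next_question(diagnoses, questions)
```

Under a uniform prior, the even 2-vs-2 split is the more informative first question, which is why a good diagnostic system asks the question that best halves the possibilities rather than working down an exhaustive checklist.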

For AI tools to help us make better decisions, they need to handle both those kinds of decisions, Cheng Zhang, a principal researcher at Microsoft, explained.

"Say you want to judge something, or you want to get the information on how to diagnose something or classify something properly: [the way to do that] is what I call Best Next Question," said Zhang. "But if you want to do something, you want to make things better, you want to give students new teaching material so they can learn better, you want to give a patient a treatment so they can get better, I call that Best Next Action. And for all of these, scalability and personalization is important."

Put all that together, and you get efficient decision making, like the dynamic quizzes that online math tutoring service Eedi uses to find out what students understand well and what they are struggling with, so it can give them the right mix of lessons to cover the topics they need help with, rather than boring them with areas they can already handle.

The multiple choice questions have only one right answer, but the wrong answers are carefully designed to show exactly what the misunderstanding is: Is someone confusing the mean of a group of numbers for the mode or the median, or do they just not know all the steps for working out the mean?

Eedi already had the questions, but it built the dynamic quizzes and personalized lesson recommendations using a decision optimization API (application programming interface) created by Zhang and her team that combines different types of machine learning to handle both kinds of decisions, in what she calls end-to-end causal inferencing.

"I think we're the first team in the world to bridge causal discovery, causal inference and deep learning together," said Zhang. "We enable a user who has data to find out the relationship between all these different variables, like what causes what. And then we also understand their relationship: for example, how much the dose [of medicine] you gave will increase someone's health, by how much which topic you teach will increase the students' general understanding."

"We use deep learning to answer causal questions, suggest what's the next best action in a really scalable way and make it real-world usable."

Businesses routinely use A/B testing to guide important decisions, but it has limitations, Zhang points out.

"You can only do it at a high level, not an individual level," said Zhang. "You can get to know that for this population, in general, treatment A is better than treatment B, but you cannot say for each individual which is best."

"Sometimes it's extremely costly and time consuming, and for some scenarios, you cannot do it at all. What we're trying to do is replace A/B testing."
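Zhang's point about population-level versus individual-level conclusions is easy to show with made-up numbers: in the toy data below, treatment A wins on average across everyone, yet treatment B is clearly better for one subgroup, which a single aggregate A/B comparison would hide.

```python
# Invented outcome scores by (subgroup, treatment); higher is better. The
# aggregate verdict ("A wins") contradicts the right choice for the "old"
# subgroup - the limitation of population-level A/B testing.

outcomes = {
    ("young", "A"): [9, 9, 9],
    ("young", "B"): [6, 5, 7],
    ("old", "A"):   [4, 3, 4],
    ("old", "B"):   [6, 7, 6],
}

def avg(xs):
    return sum(xs) / len(xs)

def population_winner():
    """The verdict a whole-population A/B test would give."""
    a = [x for (g, t), xs in outcomes.items() if t == "A" for x in xs]
    b = [x for (g, t), xs in outcomes.items() if t == "B" for x in xs]
    return "A" if avg(a) > avg(b) else "B"

def winner_for(group):
    """The verdict when outcomes are compared within one subgroup."""
    return "A" if avg(outcomes[(group, "A")]) > avg(outcomes[(group, "B")]) else "B"
```

Here `population_winner()` returns "A", yet `winner_for("old")` returns "B": exactly the gap between "in general, treatment A is better" and "for each individual, which is best" that causal, personalized methods aim to close.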

The API to do that, currently called Best Next Question, is available in the Azure Marketplace, but it's in private preview, so organizations wanting to use the service in their own tools the way Eedi has need to contact Microsoft.

For data scientists and machine learning experts, the service will eventually be available either through Azure Marketplace or as an option in Azure Machine Learning or possibly as one of the packaged Cognitive Services in the same way Microsoft offers services like image recognition and translation. The name might also change to something more descriptive, like decision optimization.

Microsoft is already looking at using it for its own sales and marketing, starting with the many different partner programs it offers.

"We have so many engagement programs to help Microsoft partners to grow," said Zhang. "But we really want to find out which type of engagement program is the treatment that helps a partner grow most. So that's a causal question, and we also need to do it in a personalized way."

The researchers are also talking to the Viva Learning team.

"Training is definitely a scenario we want to make personalized: we want people to get taught with the material that will help them best for their job," said Zhang.

And if you want to use this to help you make better decisions with your own data: "We want people to have an intuitive way to use it. We don't want people to have to be data scientists."

The open-source ShowWhy tool that Microsoft built to make causal reasoning easier to use doesn't yet use these new models, but it has a no-code interface, and the researchers are working with that team to build prototypes, Zhang said.

"Before the end of this year, we're going to release a demo for the deep end-to-end causal inference," said Zhang.

She suggests that in the longer term, business users might get the benefit of these models inside systems they already use, like Microsoft Dynamics and the Power Platform.

"For general decision-making people, they need something very visual: a no-code interface where I load data, I click a button and [I see] what are the insights," said Zhang.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

Humans are good at thinking causally, but building the graph that shows how things are connected and what's a cause and what's an effect is hard. These decision optimization models build that graph for you, which fits the way people think and lets you ask what-if questions and experiment with what happens if you change different values. That's something very natural, Zhang said.
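A minimal sketch of the kind of what-if question a causal graph supports, using an invented three-variable chain (price affects demand, demand affects revenue) and a toy linear model. The `do_demand` argument plays the role of a causal intervention: it overrides the normal mechanism for demand and lets the change propagate downstream.

```python
# Toy causal chain: price -> demand -> revenue. Passing do_demand fixes the
# demand variable (an intervention, in the spirit of Pearl's do-operator)
# instead of letting the price equation determine it. Made-up linear model.

def simulate(price, do_demand=None):
    """Compute demand and revenue, optionally intervening on demand."""
    demand = 100 - 2 * price if do_demand is None else do_demand
    revenue = price * demand
    return {"price": price, "demand": demand, "revenue": revenue}

baseline = simulate(price=20)               # demand follows the price equation
what_if = simulate(price=20, do_demand=80)  # "what if demand were forced to 80?"
```

This is the "if I do this, what happens" interaction Zhang describes: the user fixes one variable, and the graph tells them which downstream quantities change and by how much.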

"I feel humans fundamentally want something to help them understand 'if I do this, what happens; if I do that, what happens,' because that's what aids decision making," said Zhang.

Some years ago, she built a machine learning system for doctors to predict how patients would recover in different scenarios.

"When the doctors started to use the system, they would play with it to see 'if I do this or if I do that, what happens,'" said Zhang. "But to do that, you need a causal AI system."

Once you have causal AI, you can build a system with two-way correction where humans teach the AI what they know about cause and effect, and the AI can check whether thats really true.

In the U.K., schoolchildren learn about Venn diagrams in year 11. But when Zhang worked with Eedi and the Oxford University Press to find the causal relationships between different topics in mathematics, the teachers suddenly realized they'd been using Venn diagrams to make quizzes for students in years 8 and 9, long before they'd told them what a Venn diagram is.

"If we use data, we discover the causal relationship, and we show it to humans, it's an opportunity for them to reflect, and suddenly these kinds of really interesting insights show up," said Zhang.

Making causal reasoning end to end and scalable is just a first step: there's still a lot of work to do to make it as reliable and accurate as possible, but Zhang is excited about the potential.

"40% of jobs in our society are about decision making, and we need to make high-quality decisions," she pointed out. "Our goal is to use AI to help decision making."

DreamWorks Animation To Release Renderer As Open-Source Software – Slashdot

With annual CG confab SIGGRAPH slated to start Monday in Vancouver, DreamWorks Animation announced its intent to release its proprietary renderer, MoonRay, as open-source software later this year. Hollywood Reporter reports: MoonRay has been used on feature films such as How to Train Your Dragon: The Hidden World, Croods: A New Age, The Bad Guys and upcoming Puss in Boots: The Last Wish. MoonRay uses DreamWorks' distributed computation framework, Arras, also to be included in the open-source code base.

"We are thrilled to share with the industry over 10 years of innovation and development on MoonRay's vectorized, threaded, parallel, and distributed code base," said Andrew Pearce, DWA's VP of global technology. "The appetite for rendering at scale grows each year, and MoonRay is set to meet that need. We expect to see the code base grow stronger with community involvement as DreamWorks continues to demonstrate our commitment to open source."

This Mac hacker’s code is so good, corporations keep stealing it – The Verge

Patrick Wardle is known for being a Mac malware specialist, but his work has traveled farther than he realized.

A former employee of the NSA and NASA, he is also the founder of the Objective-See Foundation: a nonprofit that creates open-source security tools for macOS. The latter role means that a lot of Wardle's software code is now freely available to download and decompile, and some of this code has apparently caught the eye of technology companies that are using it without his permission.

Wardle will lay out his case in a presentation on Thursday at the Black Hat cybersecurity conference with Tom McGuire, a cybersecurity researcher at Johns Hopkins University. The researchers found that code written by Wardle and released as open source has made its way into a number of commercial products over the years, all without the users crediting him, licensing the code, or paying for the work.

The problem, Wardle says, is that it's difficult to prove that the code was stolen rather than implemented in a similar way by coincidence. Fortunately, because of Wardle's skill in reverse-engineering software, he was able to make more progress than most.

"I was only able to figure [the code theft] out because I both write tools and reverse-engineer software, which is not super common," Wardle told The Verge in a call before the talk. "Because I straddle both of these disciplines I could find it happening to my tools, but other indie developers might not be able to, which is the concern."

The thefts are a reminder of the precarious status of open-source code, which undergirds enormous portions of the internet. Open-source developers typically make their work available under specific licensing conditions, but since the code is often already public, there are few protections against unscrupulous developers who decide to take advantage. In one recent example, the Donald Trump-backed Truth Social app allegedly lifted significant portions of code from the open-source Mastodon project, resulting in a formal complaint from Mastodon's founder.

One of the central examples in Wardle's case is a software tool called OverSight, which Wardle released in 2016. OverSight was developed as a way to monitor whether any macOS applications were surreptitiously accessing the microphone or webcam, with much success: it was effective not only as a way to find Mac malware that was surveilling users but also to uncover the fact that a legitimate application like Shazam was always listening in the background.

Wardle, whose cousin Josh Wardle created the popular Wordle game, says he built OverSight because there wasn't a simple way for a Mac user to confirm which applications were activating the recording hardware at a given time, especially if the applications were designed to run in secret. To solve this challenge, his software used a combination of analysis techniques that turned out to be unusual and, thus, unique.
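OverSight itself is macOS-specific and works nothing like this internally, but the core monitoring idea can be illustrated with a simulated event log: replay device open/close events and report which processes currently hold the microphone or camera, including ones that quietly never let go. Everything below, including the process names, is invented for illustration.

```python
# Simplified, simulated version of the OverSight idea: given a stream of
# (process, device, action) events, report which processes currently hold
# each recording device. Not how the real tool works on macOS.

def active_listeners(events):
    """Replay open/close events; return who currently holds each device."""
    holders = {"mic": set(), "camera": set()}
    for process, device, action in events:
        if action == "open":
            holders[device].add(process)
        elif action == "close":
            holders[device].discard(process)
    return holders

events = [
    ("FaceTime", "camera", "open"),
    ("Shazam", "mic", "open"),       # opened in the background...
    ("FaceTime", "camera", "close"),
    # ...and never closed: exactly what a user would want flagged.
]
snapshot = active_listeners(events)
```

The hard part Wardle solved is getting a trustworthy event stream out of the operating system in the first place; once you have it, surfacing the always-listening process is simple bookkeeping like this.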

But years after OverSight was released, he was surprised to find a number of commercial applications incorporating similar application logic in their own products, even down to replicating the same bugs that Wardle's code had.

Three different companies were found to be incorporating techniques lifted from Wardle's work in their own commercially sold software. None of the offending companies are named in the Black Hat talk, as Wardle says he believes the code theft was likely the work of an individual employee rather than a top-down strategy.

The companies also reacted positively when confronted about it, Wardle says: all three vendors he approached reportedly acknowledged that his code had been used in their products without authorization, and all eventually paid him directly or donated money to the Objective-See Foundation.

Code theft is an unfortunate reality, but by bringing attention to it, Wardle hopes to help both developers and companies protect their interests. For software developers, he advises that anyone writing code (whether open or closed source) should assume it will be stolen and learn how to apply techniques that can help uncover instances where this has happened.

For corporations, he suggests they better educate employees on the legal frameworks surrounding reverse engineering another product for commercial gain. And ultimately, he hopes they'll just stop stealing.

Read more:

This Mac hacker's code is so good, corporations keep stealing it - The Verge

Secrets in the Code: Open-Source API Security Risks – BankInfoSecurity.com

This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.

Steve King 00:13
Good day, everyone. This is Steve King, the managing director at CyberTheory. We are running our podcast today around a topic that we call secrets in the code. Today's episode will focus on zero-day supply chain vulnerability. With me today is Moshe Zioni, the VP of security research at Apiiro, an early-stage cybersecurity company founded in 2019 whose purpose is to help security and development teams proactively fix risk across the software supply chain before releasing to the cloud, which is very cool in my estimation. Backed by Greylock and Kleiner Perkins with a $35 million A round, I think they are well on the way to a market leadership position in the space. Some of what they've done so far: they are the current winner of the pretty prestigious RSA Innovation Sandbox award, they were named a Gartner 2021 Cool Vendor in DevSecOps, they detected a zero-day supply chain security vulnerability in the Kubernetes-based Argo CD platform, and they've been a frequent contributor to the NIST 800-218 Secure Software Development Framework. Moshe has been researching security for over 20 years in multiple industries, specializing in penetration testing, detection algorithms, and incident response. He is a constant contributor to the hacking community and has been a co-founder of the Shabak-on security conference for the past six years. So welcome to the show, Moshe. I'm glad you could join me today.

Moshe Zioni 02:08
Thank you, Steve. I'm very happy to be here. Thank you for having me.

Steve King 02:11
Sure. Let's jump right in. We all know that traditional AppSec is failing modern enterprises, and that we've got many hidden risks in open source API security.
In fact, you guys published a report, I think, entitled Secrets in the Code, which eloquently describes the business and industry impact of your research, along with some actionable insights for practitioners. Can you give us an overview of that?

Moshe Zioni 02:40
Sure. So as a backdrop, secrets in code is something that many developers and security professionals have been pointing out in recent years, but of course it is as old as code itself. Simply put, it is the fact that developers put into their code some strings or artifacts that are there without a real reason, or at least not a secure reason, when they should do the same thing with a secure string or one of the alternatives we have today, like vaults. So instead, they're using hardcoded secrets. A secret can be a password, or a token that can be used against a cloud service, something in that spirit. Sometimes they neglect it in the code, and once this code is open-sourced to the world, some other hacker can pick it up from the source itself and utilize it for their own good. The permissions or authorization that you get from those tokens of course vary between different suppliers and providers, but the most common examples are tokens to a specific API service that can give you credentials to access cloud services and cloud resources of the organization. So this is the backdrop of why we actually went through the research and eventually produced the report that you've just mentioned. In this report, we took something around 20 different organizations, at different scales and in different industries, and we scanned pretty rigorously all of their commits. Commits are the single pieces of code that are pushed into a source repository. And we reached two million commits overall.
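The commit scanning Zioni describes can be approximated with pattern matching. A minimal sketch follows; the two rules below are simplified illustrations, not the patterns any real scanner uses, and production tools add hundreds of rules plus entropy checks.

```python
import re

# Illustrative patterns only -- production secret scanners ship hundreds of
# rules and entropy heuristics; these two are simplified for the sketch.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(?:api[_-]?key|token|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs found in a blob of committed code."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

commit_diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = load_from_vault("db")'
print(scan_text(commit_diff))  # [('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]
```

Note that the vault-backed line produces no finding: the fix for a hardcoded secret is exactly that kind of indirection to a secrets store.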
From those commits, we got a very good grasp of how secrets behave in code, how developers wrongly put their secrets in their code, and what we can learn from those kinds of behaviors: is there something you can point out as a pattern? And of course, the result is the report. So you can guess there are some patterns that are most interesting to explore and to add to the decision-making processes of security professionals and organizations once they have their strategic plan put into place.

Steve King 05:21
Yeah. And are there quite a few downstream dependencies on other open-source programs that are called by some of these APIs, and other open-source code that no one has any idea what those are? I guess the question is: how do we vet, if that's even possible, the percentage of code that we reuse from these libraries?

Moshe Zioni 05:52
Wow, that's a great question, and of course a very complex answer; I'll try to do it briefly. The short answer is that you can assess at least the risk of having a specific package or dependency that you use and import into your code. There is a limit to it, of course, because everything can be seen as a risk. And what we are proposing, and we actually have an open-source project for it, named Dependency Combobulator, does exactly that: it takes into account multiple intelligence feeds and metadata about the packages and tries to assess the risk of using this kind of import, this kind of package. There are different ways to go about this kind of intelligence over packages: you can scan them, or you can actually go through a code review practice with them.
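The idea of aggregating intelligence feeds and package metadata into a risk assessment can be sketched with a toy scoring function. Every field name and weight below is invented for illustration and is not how the real tool models risk.

```python
# Toy risk score for a dependency, loosely in the spirit of aggregating
# intelligence feeds and package metadata. All field names and weights
# here are invented for illustration.
def dependency_risk(meta):
    """Higher score = riskier package to import."""
    score = 0
    if meta.get("maintainers", 0) < 2:
        score += 2                      # bus-factor risk
    if meta.get("days_since_release", 0) > 730:
        score += 2                      # stale, likely unpatched
    if not meta.get("has_security_policy", False):
        score += 1                      # no documented disclosure channel
    score += 3 * meta.get("known_cves", 0)  # known vulnerabilities dominate
    return score

risky = {"maintainers": 1, "days_since_release": 900,
         "has_security_policy": False, "known_cves": 1}
print(dependency_risk(risky))  # 8
```

A real assessment would weigh many more signals (typosquatting distance, publish anomalies, maintainer reputation), but the shape is the same: turn metadata into a comparable score so triage can be automated.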
But that is, of course, a very laborious and resource-expensive effort to go through for every open-source dependency that you're using, and this number just accumulates over time and, from our perspective, never goes down. We all see the trend of using more and more open source, and there is good reason for it: it saves a lot of time, it has become a standard, and with it you can produce better and faster software for production. So we don't see a retraction from this trend; quite the opposite.

Steve King 07:29
Yeah, and I understand the need: if we're driving so desperately toward digitalization and the fourth industrial revolution and all of that, I see the need for agile development, of course. But at some point, to do it in a safe context, doesn't the cost far outweigh the benefit? It's amazing to me. I know you guys have also developed some best practices when it comes to ethically reporting and patching these vulnerabilities. Can you help our audience understand what a few of these might be? And do they include, if we run into a secret, for example, or in a dependency that you're working on, alerting the DevSecOps team? How does that work?

Moshe Zioni 08:25
Again, this is a very good point in both cases: when you find a vulnerability, or when you find a secret, which can be seen as a subset of a vulnerability in code, some kind of weakness that you are exposing. So in general, yes, there is a responsible disclosure process. If you are internal to the organization, this should be easy for you: you should contact your immediate AppSec engineer or AppSec representative, and by that make them aware so they can respond to this kind of incident.
They then need to, first of all, remediate, meaning revoke the token, after which they rotate it in a more secure way and fix the code to support that. Dependencies are quite the same: if you find a dependency with a vulnerability, you report it to your closest representative. If you are external to the organization, that's a bit more complicated, but fortunately we have many processes around that, collectively called responsible disclosure, meaning that you disclose a vulnerability, or maybe a weakness such as the secret we mentioned, to an organization: "Hey, listen, you have this kind of an issue." You also sometimes want to explain why this is an issue and what kind of business impact it has on the organization. Once you have that, you fill out a short report, maybe an email. Maybe they have some kind of bug bounty program, which is just another way to support these kinds of disclosures. And by that you can go about disclosing this kind of information safely to the organization. More mature organizations will have their security contact right on the front page, and of course every respectable corporation will have this kind of process one way or another.

Steve King 10:25
Yeah. And I assume that means we want to work only with mature organizations that have ways of interacting and making contact, to make sure we're able to do that responsible disclosure and have them act on it, right?

Moshe Zioni 10:44
Yeah, absolutely.
This is one measurement for you, with the dependencies I just mentioned, to gauge whether a dependency is mature enough in terms of security: you can see whether there were any vulnerabilities in the past, and you can see whether they have a process installed for contacting their security advisory or security board. By that you can assess at least their seriousness and their maturity in terms of security processes. This is a great indicator.

Steve King 11:14
Yeah, I must agree. So are you attempting to do that in an automated context? Or do you simply return the discovered dependency to a manual process, so that people don't have to look it up?

Moshe Zioni 11:30
We do both; it really depends on what the customer needs, and you can set it up as you will. If you'd like, you can have just an alert or something that notifies you about this kind of discrepancy, maybe a vulnerability found in a dependency, so you'll be able to act on it manually. And for many vulnerabilities there are automation processes in place, so you can just forget about it and say you want it to be automatic. Most organizations will have some kind of mix: for high-impact vulnerabilities, excuse me, high impact on the business, they would like to assess them manually either way, because updates can break things. For example, if you just need to update the dependency version, you will need to test it first with a human being. Maybe in the future that will be even better, so we'll be able to reduce this kind of effort as well, but currently every high-business-impact application will have to have some kind of manual analysis and manual testing before releasing it to a stable state. You can choose, at least for the time being, to have it, for example, just as a beta for testing, or maybe for something cutting edge.
And someone who is more willing to take that risk in return will be able to update automatically to the latest version and just use it as is.

Steve King 12:55
Yeah, I got it. Ransomware continues to be a thorn in everybody's side and is growing like crazy, for all the obvious reasons. You've got advice on how organizations can best mitigate future ransomware attacks, specifically around supply chain and open source security. I know a lot of people who would love to hear the answer to that question: how do you mitigate future ransomware attacks?

Moshe Zioni 13:22
When we are discussing ransomware, or if we generalize it a bit to any kind of malware activity: malware can be directed and implanted not just, of course, as ransomware, though I agree with you that the trend is that ransomware is the most prominent attack vector once attackers have a foothold in the organization. What we are foreseeing and what we are proposing, especially around the supply chain and supply chain ransomware attacks, is to defend your code as early as you can. There is a trend called "shift left," meaning that you want as many of those checks and validations done as soon as possible, not just once you are going to production. And the second rule of thumb here, for anything closer to the actual production systems, is to lock down the versions, lock down the specific dependencies that you have. By that, even if someone runs, let's say, a man-in-the-middle attack over your dependencies, you'll be able to validate, by the signature and by the fingerprint of those dependencies, that you actually get what you're expecting.
So, for example, a very common mistake in these cases, one that can potentially lead to these kinds of attacks, is to leave it to the dependency manager to pull down the latest version instead of the specific version you know is safe to use. By that, every time a build goes up, it will request the latest version without checking what kind of certificate, what kind of fingerprint, this version should have. The remedy is called version locking: you lock the version, and with many package managers you can also add the actual fingerprint of the package. By that you ensure that at least you won't be harmed by a new kind of attack through the supply chain, through dependencies, if that makes sense.

Steve King 15:27
Okay. How much post-sales support and training do you have to provide to get your customers to fully extract value from the solution?

Moshe Zioni 15:42
I would say not much. First of all, we are in very close contact with our customers; as a startup, of course, we have the agility to fit their needs pretty quickly. And we go by the rule of thumb that if it doesn't make sense the first time you look at it, maybe it will make sense the third or fourth time, but that's something we refrain from. We try to make the system approachable, meaning that the user experience itself should reflect the native flows of organizations, not force organizations into our will, our own processes, and what we think they should do. The second thing we do: the whole system is interconnected with your current processes, so it won't make up new processes if you don't want it to. The workflows that we can build for you are automatic and suitable for your ticketing system, and maybe for your instant messaging systems like Slack, like Teams, etc.
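The version-locking defense Zioni outlined earlier, pinning both the version and a cryptographic fingerprint of each dependency, is supported natively by package managers such as pip and npm through hash-pinned lock files. The underlying check can be illustrated in a few lines; the package name and tarball bytes below are made up for the demo.

```python
import hashlib

def verify_artifact(name, data, pinned):
    """Check a downloaded dependency against its pinned fingerprint."""
    expected = pinned.get(name)
    if expected is None:
        raise ValueError(f"{name} is not pinned; refusing to install")
    return hashlib.sha256(data).hexdigest() == expected

# Demo: record the hash of a known-good artifact, then verify downloads.
good_archive = b"pretend this is the package tarball"
pinned = {"example-pkg-1.2.3.tar.gz": hashlib.sha256(good_archive).hexdigest()}

print(verify_artifact("example-pkg-1.2.3.tar.gz", good_archive, pinned))       # True
print(verify_artifact("example-pkg-1.2.3.tar.gz", b"tampered bytes", pinned))  # False
```

Because the fingerprint is recorded at the time the dependency is vetted, a build that later receives a tampered or substituted archive fails verification even if the version number matches.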
And by that we live within the ecosystem instead of instructing it.

Steve King 16:45
Do you think you can scale that as you grow?

Moshe Zioni 16:49
Absolutely. Currently, the way we do that: first of all, we are cloud native ourselves, so if we have any kind of scalability request, it's pretty easy to handle; DevOps teams are pretty used to that. We are also always preparing ourselves to do much more than we are currently handling. And of course we are taking on more and more customers; we have huge customers in our portfolio, and by that we are pretty confident. But we are always checking those kinds of assumptions; we don't want anyone to be held back by resources or anything similar. The process itself is pretty easy: you can be ramped up onto the Apiiro platform in less than a day, or sometimes even in a matter of hours, depending on your size. And the analysis itself kicks in as soon as possible, so you will have your repositories analyzed.

Steve King 17:53
What size customer is your ideal prospect or ideal end user in terms of, you know, number of people? Obviously they have to have a DevSecOps team; how big does that have to be?

Moshe Zioni 18:09
So this is the funny thing. First of all, we see a lot of different customers in terms of structure. Sometimes they will have their own DevSecOps team, sometimes they will have a DevOps team and not a DevSecOps team, and sometimes they won't have either and will maybe have a single person, an AppSec engineer or AppSec professional, to go about and do the work of application security.
And the whole purpose of the Apiiro system is to save you those kinds of resources. Where before you needed, let's say, 10 people to exercise application security throughout your supply chain, Apiiro diminishes those numbers to a single digit. And at the low end, the purpose is to cut the clutter of alerts and alarms, all the bells and whistles that go off every time, down to the minimum amount that you need, and a very focused one, dealing with deduplication and with automation of those kinds of processes. So in general, our idea of an organization is one that has at least one application security person; that can be a DevSecOps engineer, a DevOps engineer, or an AppSec professional. In terms of number of developers, you can go up to the hundreds of thousands; in general, the whole idea is that the system is scalable. We learn as much as we can from developer behavior, so if you have more developers, it produces much more value. But even if you have quite a few, even in the tens of developers, it's still going to be very valuable information and insights: who is doing what and how, what the timeline of each material change in the code is, what kind of code impacts you more than something else, and the risks that every code commit contributes to your repositories. And of course, you decide what to do with it, and we aid you with our workflows and automations around remediation and measurement.

Steve King 20:21
Yeah, I see. And that's got to be one of your key value propositions as well, right? People don't have to stand up a whole DevSecOps team; if they don't have one, that's fine too, because you're actually doing that work.

Moshe Zioni 20:38
Exactly.
We have some very good indications of that from customers; they have applauded us on several occasions. In recent months, everyone has had those kinds of incidents, vulnerabilities with a very high impact on their data streams, and instead of spending hours, maybe days, maybe a week (some customers said that their peers in the industry spent two weeks to discover all of the weaknesses they had), it took them, with far fewer application security professionals, only a few hours to have all the information they needed to mitigate and to spot every weakness and every vulnerability discussed in those events. So this is very good assurance of the impact and of the philosophy that we are taking.

Steve King 21:31
It really validates your platform. Yeah, sounds like it. That's great. We've talked about numbers a little bit here. On the difference between private and public repositories: you've discovered, I don't know, it was like eight times the number of exposed secrets in private ones. Can you give our listeners the difference between private and public repositories, and why we'd have eight times the number of exposed secrets in private repositories?

Moshe Zioni 22:00
Yeah, sure. There is a technical answer to that, and there is, I would say, a psychological aspect to it. First of all, the technical answer, private versus public: a public repository is something that you, not surprisingly, open up to the world and to the public, so everyone can see your code.
The reasons for that vary: sometimes it's something you would like to share with the community, maybe some kind of support for other customers of yours, or open-source repositories that you are maintaining. Private repositories, which, funnily enough, are much more common than public ones in organizations, hold the code that you don't want to expose to the world. So that's the technical aspect of private versus public repositories. The other aspect is more at the psychological and organizational level: what you do with those private repositories. Those private repositories hold your crown jewels. Another difference is that those private repositories may have a different threat actor attacking them or influencing their risk. And what we found in the research is that, as you said, you have eight times the number of secrets in those private repositories. This is the first report of any kind that has covered internal repositories to this breadth. And from that you can also infer, or at least correlate, the fact that developers in every organization feel much safer keeping their code within their own realm, and because of that, secrets can slip in much more heavily. They also never expect those secrets to go out, so they assume this is safer and maybe that they shouldn't act on it as furiously as they would on public repositories. But this is completely false. First of all, in many incidents that we've encountered and aided with, we try to convey the message that some of these exposures begin in private repositories, but then, sometime in the future, the code snippet, or maybe the whole repository, becomes public.
The second thing is that a repository being private doesn't mean no one can see it except a specific developer; quite the opposite. In those organizations, many people have that kind of access, and something like a snippet can slip through: someone can copy-paste something to an unsecured device, and by that those private repositories are exposed. Maybe the most notorious case of the past year was the Twitch leak: the streaming service was hacked at the end of 2021, and we saw the leak itself, a few gigabytes of code. And we saw how many secrets there were in Twitch's code, which is pretty confirming of this aspect. It doesn't mean that Twitch is any different from any other shop; it just confirms that these kinds of secrets are much more prevalent in internal repositories.

Steve King 25:19
Wow. You know, as things get more complicated, the human factor gets more important, doesn't it? Across the board, whether it's server configurations, or open-source code, or the kinds of mistakes that humans make just naturally. I mean, people are people. So it's always interesting to me. It is also interesting that you said over a third of the secrets that your research detected happened in the first quarter of the year. What is the correlation between that time of year and the number of secrets?

Moshe Zioni 26:01
Yeah, I'm happy to bring that up, because for me it's maybe the most revealing fact from the report, and maybe the most surprising to many. But when you think about it, what the report actually stated is that 34.34% of the secrets that were found were added to those repositories during the first few months, the first quarter, of the year. The research itself spanned multiple years.
So we saw this very clear cadence from the beginning of the year to the end of it; you have some kind of a sine wave throughout. And the correlation that we found, and we also discussed it with experts and with organizations themselves... By the way, I haven't mentioned until now that the report itself has been vetted, validated, and discussed with 15 different external experts in the field of application security. Some of them are our customers, some of them are champions of application security globally, and they reviewed it and gave their insights as well. Part of what we heard there is that many organizations have this kind of rotation cadence for secrets within their organization. Quite naturally, maybe it's at the beginning of the year, maybe at some other point inside the fiscal year, that secrets need to be rotated because you are renewing licenses. Or maybe they just had a very good year and ran a very aggressive recruitment drive, so they have many more new employees, and new developers make many more mistakes; that's another fact we put in the report itself, by the way. So we see this kind of seasonality, first of all, because of organizational cadences outside of secrets that affect secrets indirectly. We can also think of the holidays, especially the US holidays at the end of the year; the time people take off and then return from can also have an effect. Maybe it overburdens the application security team, which is always under the stress of accomplishing more, so they have less time for code reviews and can't really stop the whole flood of secrets at those times of year. Those are, of course, assumptions and correlations, and we can't really prove them one to one.
But we see these correlations pretty strongly, especially the seasonality and rotation factors that I mentioned.

Steve King 28:41
Yeah, that makes sense. I'd love to get a copy of that report, if it's now public; perhaps you can email me some version. That'd be great; it's worth promoting, for sure. This is a huge problem. It's right up there in my mind with all of the other complicating factors: our networks being way too complicated at the moment, and our approach relying way too much on the human factor. I think we're near the end of our time here, and I wanted to have you confirm what I think is a brief way to summarize Apiiro: you guys discover, remediate, and measure every API, service, dependency, and piece of sensitive data in the CI/CD pipeline to map the application attack surface, right?

Moshe Zioni 29:42
Right, together with contextual knowledge about the risks themselves: What is the material change? What kind of technologies are you using? Was the actual code change affecting authorization, authentication, storage, or anything along those lines? And much more. All this contextual knowledge gives us the power to really recommend and to score risks according to the norms of your organization, and not just by something ad hoc and agnostic to your kind of organization. Context is everything, and it's no different with these kinds of risks.

Steve King 30:15
Yeah, sure. And this all happens pre-production, right? Before entry into the production stream and the cloud?

Moshe Zioni
Yeah, correct.

Steve King
Okay. So who are some of your more notable customers that folks would recognize? And what competitors would folks expect to find when looking for a code risk platform? Is that a category, by the way, "code risk platform"? Is that a Gartner thing?
Or did you guys coin it?

Moshe Zioni 30:46
I don't think it's a Gartner category; the closest Gartner thing is CNAPP, the cloud native application protection platform. And I can mention a few customers, though of course I can't mention every customer that we have. Just to name a few, we have Platica, Chegg, TripActions, Imperva, Rivian, MindGeek, Rakuten, and many more on our platform. And if you just look down the whole line there, these are diverse customers from many industries, of every shape and size. This, of course, gives us a lot of joy: working with big customers that know how to run application security programs, and by that they enjoy the Apiiro platform that gives them this kind of contextual power.

Steve King 31:37
Yeah, I'm sure. In terms of competitors, I know you guys are early; have there been a bunch of competitors sort of creeping up? Or do you have any serious competitors that you worry about?

Moshe Zioni 31:51
I think it's too early to really designate a competitor. There are a lot of cloud-related startups and solutions, but everyone is doing their thing very differently, and we are not excluded there. So I don't see any direct competitor, but the area is still fresh. Let me put it that way: ask me again in one year.

Steve King 32:19
You know I will. I believe I'll have you back in a year, and we'll have the same conversation and see where you are, which is great. I mean, when you sold to Imperva, there must have been competitors there that you beat out, right?

Moshe Zioni 32:37
Again, we have a very unique approach and philosophy toward the market and toward application security in general. To be honest, the first time I heard from the founders, Idan Plotnik and Yonatan Eldar, about the company and the solution, my jaw just dropped. As a veteran of the application security industry, this was not just news but earth-shaking, a paradigm shift in the way organizations should deal with application security from now on. And even now, so much time after that, I still feel there is no competitor at the same scale and the same maturity, and nothing with even the same method that we are using. That's why I'm struggling to find the direct competitor you are looking for.

Steve King 33:26
Yeah, no, I know. I don't believe you're being evasive at all. I think you're right; I don't know of any competitors here either. That's why, when Alex originally contacted me, I was floored. I was like, can this be for real? Because you're absolutely right: this is a solution I haven't seen before, and it is revolutionary, absolutely, in terms of security by design. No question about it. So thank you, Moshe, for taking the time out of your crazy schedule, I'm sure, to join us today. This is Moshe Zioni, the VP of security research at Apiiro, and we will ask you to come back, not in a year but maybe in six months, to have another one of these conversations and see what's happened in the market. We're heading into a challenging moment over the next few months, but cybersecurity is not going to stop, and people still need to protect their PII and IP and all the rest of it. So I'm sure you'll have a fantastically successful quarter.

Moshe Zioni 34:41
Thank you very much, Steve. I'm looking forward to the next invitation. It was a very pleasant discussion, and these were great questions. Thank you very much.

Steve King 34:49
Good, thank you. And thank you to our listeners for joining us in another one of our unplugged reviews of the stuff that matters in cybersecurity, in technology, and in our new digital landscape. Until next time, I'm your host, Steve King, signing out.

See original here:

Secrets in the Code: Open-Source API Security Risks - BankInfoSecurity.com

Application Security Tools: Which solution is best? – IDG Connect

As the threat of cybercrime continues to grow, it is more important than ever for business leaders to ensure the security of their applications. For many, this means utilising application security tools tailored to the demands of today. However, selecting a product isn't always easy, and there are many to choose from.

Over 540,000 professionals have used Peerspot research to inform their purchasing decisions. Its latest paper looks at the highest-rated application security tool vendors, profiling each and examining what they can offer the enterprise.

Here's a breakdown of the key players currently active in the market:

Average Rating: 7.6

Top Comparison: SonarQube

Overview: Highly accurate and flexible static code analysis product that allows organisations to automatically scan uncompiled code and identify hundreds of security vulnerabilities in all major coding languages and software frameworks.
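To illustrate the kind of issue a static analysis tool flags in uncompiled code, here is a minimal, hypothetical Python example (not taken from any vendor's documentation): a SQL query built by string concatenation, which analyzers report as an injection risk, next to the parameterized form they accept.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged by static analysis: user input concatenated into SQL (injection risk)
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value safely, so analyzers pass it
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

# Demonstrate the difference with a classic injection payload
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- the payload matches every row
print(len(find_user_safe(conn, payload)))    # 0 -- treated as a literal name
```

The point a scanner makes is exactly this contrast: the unsafe variant lets attacker-controlled input rewrite the query's logic, while the bound parameter cannot.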

Average Rating: 8.8

Top Comparison: Veracode

Overview: A breakthrough technology that enables highly accurate assessment and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts.

Average Rating: 8.9

Top Comparison: Snyk

Overview: Helps organisations detect and fix vulnerabilities in source code at every step of the software development lifecycle.

Average Rating: 7.7

Top Comparison: Black Duck

Overview: Effortlessly secures what developers create and uniquely removes the burden of application security, allowing development teams to deliver quality, secure code faster.

Average Rating: 7.7

Top Comparison: SonarQube

Overview: A web application security testing tool that enables continuous monitoring. The solution is designed to help organisations with security testing, vulnerability management, and tailored expertise.

Average Rating: 8.6

Top Comparison: OWASP Zap

Overview: The world's leading toolkit for web security testing. Over 52,000 users worldwide, across all industries and organisation sizes, trust the solution to find more vulnerabilities, faster.

Average Rating: 8.4

Top Comparison: SonarQube

Overview: User-friendly security solution that enables users to safely develop and use open source code. Users can create automatic scans that allow them to keep a close eye on their code and prevent bad actors from exploiting vulnerabilities.

Average Rating: 8.0

Top Comparison: Veracode

Overview: The leading tool for continuously inspecting code quality and code security, and guiding development teams during code reviews.

Average Rating: 8.6

Top Comparison: SonarQube

Overview: An open-source security and dependency management software that uses only one tool to automatically find open-source vulnerabilities at every stage of the system development life cycle.
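At their core, dependency scanners like the ones described above compare a project's pinned versions against a vulnerability database. The sketch below shows that matching step in Python; the package names, version thresholds, and advisory IDs are all made up for illustration.

```python
# Toy model of dependency scanning: report advisories whose affected
# package is pinned below the first fixed version.
from dataclasses import dataclass

@dataclass
class Advisory:
    package: str
    fixed_in: tuple  # first safe version, e.g. (2, 31, 0)
    advisory_id: str

def parse_version(v: str) -> tuple:
    """Turn '2.25.1' into (2, 25, 1) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def scan(pinned: dict, advisories: list) -> list:
    """Return IDs of advisories that apply to the pinned dependency set."""
    findings = []
    for adv in advisories:
        if adv.package in pinned and parse_version(pinned[adv.package]) < adv.fixed_in:
            findings.append(adv.advisory_id)
    return findings

# Hypothetical lockfile contents and advisories
pinned = {"requests": "2.25.1", "flask": "2.3.0"}
advisories = [
    Advisory("requests", (2, 31, 0), "EXAMPLE-2023-0001"),  # 2.25.1 is affected
    Advisory("flask", (2, 2, 5), "EXAMPLE-2023-0002"),      # 2.3.0 already has the fix
]
print(scan(pinned, advisories))  # ['EXAMPLE-2023-0001']
```

Real tools add much more (version ranges, transitive dependencies, ecosystem-specific version semantics), but the basic comparison is the same.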

Average Rating: 8.1

Top Comparison: SonarQube

Overview: A unique combination of SaaS technology and on-demand expertise that enables DevSecOps through integration with enterprise pipelines and empowers developers to find and fix security defects.

Originally posted here:

Application Security Tools: Which solution is best? - IDG Connect

OFRAK, an Open Source IoT Reverse Engineering Tool, Is Finally Here – WIRED

At the 2012 DefCon security conference in Las Vegas, Ang Cui, an embedded device security researcher, previewed a tool for analyzing firmware, the foundational software that underpins any computer and coordinates between hardware and software. The tool was specifically designed to elucidate internet-of-things (IoT) device firmware and the compiled binaries running on anything from a home printer to an industrial door controller. Dubbed FRAK, the Firmware Reverse Analysis Console aimed to reduce overhead so security researchers could make progress assessing the vast and ever-growing population of buggy and vulnerable embedded devices rather than getting bogged down in tedious reverse engineering prep work. Cui promised that the tool would soon be open source and available for anyone to use.

"This is really useful if you want to understand how a mysterious embedded device works, whether there are vulnerabilities inside, and how you can protect these embedded devices against exploitation," Cui explained in 2012. "FRAK will be open source very soon, so we're working hard to get that out there. I want to do one more pass, an internal code review, before you guys see my dirty laundry."

He was nothing if not thorough. A decade later, Cui and his company, Red Balloon Security, are launching Ofrak, or OpenFRAK, at DefCon in Las Vegas this week.

"In 2012 I thought, here's a framework that would help researchers move embedded security forward. And I went on stage and said, I think the community should have it. And I got a number of emails from a number of lawyers," Cui told WIRED ahead of the release. "Embedded security is a space that we absolutely need to have more good eyes and brains on. We needed it 10 years ago, and we finally found a way to give this capability out. So here it is."

Though it hadn't yet fulfilled its destiny as a publicly available tool, FRAK hasn't been languishing all these years either. Red Balloon Security continued refining and expanding the platform for internal use in its work with both IoT device makers and customers who need a high level of security from the embedded devices they buy and deploy. Jacob Strieb, a software engineer at Red Balloon, says the company has always used FRAK in its workflow, but that Ofrak is an overhauled and streamlined version that Red Balloon itself has switched to.

Cui's 2012 demo of FRAK raised some hackles because the concept included tailored firmware unpackers for specific vendors' products. Today, Ofrak is simply a general tool that doesn't wade into potential trade secrets or intellectual property concerns. Like other reverse-engineering platforms, including the NSA's open source Ghidra tool, the stalwart disassembler IDA, or the firmware analysis tool Binwalk, Ofrak is a neutral investigative framework. And Red Balloon's new offering is designed to integrate with these other platforms for easier collaboration among multiple people.

"What makes it unique is it's designed to provide a common interface for other tools, so the benefit is that you can use all different tools depending on what you have at your disposal or what works best for a certain project," Strieb says.
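A first step such tools take when unpacking a firmware image is signature scanning: searching the binary for the magic bytes of embedded executables, filesystems, and compressed streams. The toy Python sketch below shows the idea; the signature table is a tiny illustrative subset, not Ofrak's or Binwalk's actual database.

```python
# Toy magic-byte scanner in the spirit of firmware carving tools:
# find known signatures anywhere in a binary blob and report offsets.
SIGNATURES = {
    b"\x7fELF": "ELF executable",
    b"\x1f\x8b\x08": "gzip stream",
    b"hsqs": "SquashFS filesystem (little-endian)",
}

def scan_firmware(blob: bytes):
    """Yield (offset, description) for every signature occurrence."""
    for magic, desc in SIGNATURES.items():
        start = 0
        while (idx := blob.find(magic, start)) != -1:
            yield idx, desc
            start = idx + 1  # keep searching past this hit

# Fake firmware image: padding, an ELF header, filler, then a gzip stream
image = b"\x00" * 16 + b"\x7fELF" + b"\xff" * 8 + b"\x1f\x8b\x08\x00"
for offset, what in sorted(scan_firmware(image)):
    print(f"0x{offset:06x}  {what}")
```

Finding a signature is only a hint, of course; real frameworks then try to unpack the candidate region and recurse into whatever it contains.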

Go here to see the original:

OFRAK, an Open Source IoT Reverse Engineering Tool, Is Finally Here - WIRED