Acronis SCS and Leading Academics Partner to Develop AI-based Risk Scoring Model – Unite.AI

U.S. cyber protection company Acronis SCS has partnered with leading academics to improve software through the use of artificial intelligence (AI). The collaboration developed an AI-based risk scoring model capable of quantitatively assessing software code vulnerability.

The new model demonstrated a 41% improvement in detecting common vulnerabilities and exposures (CVEs) during its first stage of analysis. Subsequent tests produced equally impressive results, and Acronis SCS is set to share the model upon its completion.

One of the most promising aspects of this technology is that it can be used by other software vendors and public sector organizations. It gives them an affordable way to improve software supply chain validation without stifling innovation or small-business opportunity.

Acronis SCS' AI-based model relies on a deep learning neural network that scans both open-source and proprietary source code. It provides impartial, quantitative risk scores that IT administrators can then use to make informed decisions about deploying new software packages and updating existing ones.

The company uses a language model to embed code. A form of deep learning, the language model combines an embedding layer with a recurrent neural network (RNN). Because positive examples are rare, up-sampling techniques and classification algorithms such as boosting, random forests, and neural networks are used, and the model's quality is measured with ROC/AUC and percentile lift.

Dr. Joe Barr is Acronis SCS' Senior Director of Research.

"We use a language model to embed code. A language model is a form of deep learning which combines an embedding layer with a recurrent neural network (RNN)," Dr. Barr told Unite.AI.

"The input consists of function pairs (function, tag) and the output is a probability P(y=1 | x) that a function is vulnerable to hack (buggy). Because positive tags are rare, we use various up-sampling techniques and classification algorithms (like boosting, random forests and neural networks). We measure goodness by ROC/AUC and a percentile lift (number of bads in top k percentile, k=1,2,3,4,5)."
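
As a rough illustration of the kind of pipeline Dr. Barr describes (rare positive tags, up-sampling, tree-based classifiers, ROC/AUC and percentile lift), here is a minimal sketch using scikit-learn. The random feature vectors stand in for learned code embeddings; none of this is Acronis SCS' actual model or data.

```python
# Sketch of a "rare positive" code-vulnerability classifier: embed functions,
# up-sample positives, train a classifier, then score with ROC/AUC and
# top-k-percentile lift. The embedding here is a random stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Placeholder data: 10,000 "functions" as 64-dim embeddings, ~2% tagged buggy.
X = rng.normal(size=(10_000, 64))
y = (rng.random(10_000) < 0.02).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Up-sample the rare positive class in the training set only.
pos = np.flatnonzero(y_tr == 1)
neg = np.flatnonzero(y_tr == 0)
pos_up = resample(pos, n_samples=len(neg), replace=True, random_state=0)
idx = np.concatenate([neg, pos_up])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[idx], y_tr[idx])

# P(y=1 | x): probability a function is vulnerable.
scores = clf.predict_proba(X_te)[:, 1]
print("ROC/AUC:", roc_auc_score(y_te, scores))

def percentile_lift(y_true, y_score, k):
    """Number of true positives captured in the top k percent of scores."""
    n_top = max(1, int(len(y_score) * k / 100))
    top = np.argsort(y_score)[::-1][:n_top]
    return int(y_true[top].sum())

for k in (1, 2, 3, 4, 5):
    print(f"bads in top {k}%:", percentile_lift(y_te, scores, k))
```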

Another great opportunity for this technology is its ability to make the validation process far more efficient.

"Supply chain validation, placed inside a validation process, will help identify buggy/vulnerable code and will make the validation process more efficient by several orders of magnitude," he continued.

As with all AI and software, it is crucial to understand and address any potential risks. When asked if there are any risks unique to open source software (OSS), Dr. Barr said there are both generic and specific.

"There are generic risks and specific risks," he said. "The generic risk includes innocent bugs in the code which may be exploited by a nefarious actor. Specific risks relate to an adversarial actor (like a state-sponsored agency) who deliberately introduces bugs into open source to be exploited at some point."

The initial results of the analysis were published in an IEEE paper titled "Combinatorial Code Classification & Vulnerability."

See the rest here:

Acronis SCS and Leading Academics Partner to Develop AI-based Risk Scoring Model - Unite.AI

AWS wants to tempt customers into switching to Linux – TechRadar

Another tech giant has thrown its weight behind Linux partnerships after Amazon Web Services (AWS) praised the system when launching the source code for its latest open source tool on GitHub.

The open source Porting Assistant for .NET is designed to scan .NET apps and list the things that need to be fixed in order to port the app to Linux. This, AWS argues, will help customers take advantage of the performance, cost savings, and robust ecosystem of Linux.

This choice of words has to be taken in context with the release of the AWS UI, which the company describes as just the first step in a larger process of creating a new open source design system.

As per reports, these recent releases are part of a larger move within the company to switch to JavaScript/TypeScript and React in order to build cross-platform user interface components, gaining the ability to share libraries between web and desktop.

The basis for this assumption is two-pronged. First is the fact that the user interface for the Porting Assistant for .NET is written in React, although it could have just as easily been developed in .NET.

The second is the release of AWS UI, which the company describes as "a collection of React components that help create intuitive, responsive, and accessible user experiences for web applications."

While AWS doesn't create client applications, its embrace of React and this move toward what it describes as a new open source design system is perhaps intended to ease access to its services.

It's argued that switching to a new open source, platform-agnostic design methodology will make AWS services easier to consume and increase their adoption.

Via: The Register

Read this article:

AWS wants to tempt customers into switching to Linux - TechRadar

Elevate your security posture and readiness for 2021 – GCN.com

INDUSTRY INSIGHT

For some agencies, the SolarWinds attack was simply a wake-up call. For untold thousands of others, it was a tangible threat to digital assets with the potential for real-world consequences. While only 50 such organizations are thought to be genuinely impacted by the breach -- and the ramifications may be years or decades from full discovery -- it is clear that agencies must strongly reconsider their security posture and organizational readiness in light of the attack.

What does that mean for government IT personnel and related stakeholders? As the people keeping vital information systems safe, the best thing agencies and staff can do is find ways to apply these lessons in day-to-day operations.

The software supply chain matters more than ever

The potential for supply chain attacks and breaches is far from a new concept, one ComplianceWeek piece noted, but recent examples remind us that attackers can leverage third-party code to directly compromise agency systems. Software supply chain attacks are up more than 400%, pointing to an increasingly attractive avenue of attack.

Also of concern is the practice of using free or open-source tools. While it is tempting to use free solutions, the risk of breach is quite high. By nature, open-source supply chain software is even more vulnerable to compromise by nefarious nation-state-sponsored hackers intent on breaching U.S. homeland defense and public safety organizations.

Organizations prioritizing security should avoid open-source software altogether, and those using prepackaged application programming interfaces and other third-party components must make a stronger commitment to testing, verifying and securing code integrated from outside sources. An initial breach in one system can allow attackers to gain increasing control over time, leapfrog to other systems and ultimately infect those outside the agency via a compromised update.

Agencies must likewise verify the safety of any third-party systems that integrate with or use core agency computing or infrastructure systems -- such as a vendor's scheduling program sending automated update emails over the network -- and confirm the security of the vendors used by their third-party partners as much as possible.

Even within local government, every agency's digital topography will consist of dozens or even hundreds of third-party products, themselves composed of hundreds more underlying third-party components.

Using guidance from the Federal Risk and Authorization Management Program and Federal Information Security Modernization Act, agencies can conduct a thorough audit of their third-party contractors by asking these questions:

Knowing these answers can make life much easier both during normal operations and in the event of a breach. Strong organizational readiness requires deep knowledge into the systems, processes and organizations with which agencies work.

Move from blacklisting to a whitelisting strategy

Think of blacklisting -- banning malicious or untrustworthy activity -- as a reactive approach to security. In contrast, whitelisting is a proactive strategy that assigns trust to reliable sources instead of revoking trust when things go wrong.

How do things look when an agency approaches security from a trust-giving perspective instead of a trust-taking one? Agencies can model the idea over any number of digital activities, from web traffic to application data to inbound network requests from presumably trustworthy sources.
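
As a rough sketch of what a deny-by-default allowlist check might look like for inbound requests, consider the following; the networks and service identifiers are purely illustrative, not any agency's actual policy.

```python
# Minimal deny-by-default (allowlist) gate for inbound requests.
# Anything not explicitly trusted is rejected; trust is granted, not revoked.
import ipaddress

ALLOWED_NETWORKS = [ipaddress.ip_network("10.20.0.0/16")]       # illustrative
ALLOWED_SERVICE_IDS = {"scheduler-vendor-prod", "patch-mirror"}  # illustrative

def is_allowed(source_ip: str, service_id: str) -> bool:
    """Return True only when both the network and the caller are on the allowlist."""
    addr = ipaddress.ip_address(source_ip)
    in_network = any(addr in net for net in ALLOWED_NETWORKS)
    return in_network and service_id in ALLOWED_SERVICE_IDS

# Default posture: reject.
print(is_allowed("10.20.4.7", "patch-mirror"))       # True  (explicitly trusted)
print(is_allowed("203.0.113.9", "patch-mirror"))     # False (unknown network)
print(is_allowed("10.20.4.7", "unknown-service"))    # False (unknown caller)
```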

Embrace the zero-trust model

In a technology environment with so many moving parts, it can be difficult to monitor all suspicious activity. Instead of trying to identify all potentially nefarious actors, consider a zero-trust security model -- a system of governance aligned to the trust-giving perspective. Having caught the IT world by storm, the idea, as described by one expert in a CSO piece, is quite simple: "Cut off all access until the network knows who you are. Don't allow access to IP addresses, machines, etc. until you know who that user is and whether they're authorized."

In a public-safety context, for example, the concept of inside vs. outside is key. While older castle-and-moat governance styles give a large degree of freedom to devices and users once they've been permitted past the initial moat, zero trust regards interior users with a consistent level of wariness.

With a castle-and-moat model, hackers can leverage the trust allocated to vendors to compromise agency systems more easily -- executing remote commands, sniffing passwords and more. A system that instead requires components to be identified, justified and authenticated at all points is one that can more easily catch compromises and prevent further access. This makes a zero-trust model a serious consideration for IT managers trying to keep operations secure with minimal manual intervention.

Check weak points before it's too late

Knowing about potential (or even confirmed) breaches has obvious value and is also a boon for an agencys overall security posture -- understanding weaknesses and points of entry means they can be addressed.

See the article here:

Elevate your security posture and readiness for 2021 - GCN.com

Ryan Abernathey: Helping to Open a Universe of Data to the World – State of the Planet

Ryan Abernathey is a physical oceanographer in the Department of Earth and Environmental Sciences and the Lamont-Doherty Earth Observatory at Columbia University. The Oceanography Society named Abernathey among three recipients of its Early Career Award.

Earths climate system is experiencing unprecedented change as human-made greenhouse gas emissions continue to perturb the global energy balance. Understanding and forecasting the nature of this change, and its impact on human welfare, is both a profound scientific problem and an urgent societal need. Embedded in that scientific task is a technological challenge. New observational technologies are bringing in a flood of new information. Applying data science to this immense stream of information allows science to more deeply explore aspects of climate. However, this astonishing volume of data creates a different challenge: the need for tools that can scale to the size of our ever-expanding datasets.

Unraveling and interpreting that data is of particular fascination to Ryan Abernathey. The physical oceanographer is an associate professor in Columbia University's Department of Earth and Environmental Sciences who also leads the Ocean Transport Group at Lamont-Doherty Earth Observatory. His research focuses on the role of ocean circulation in the climate system, particularly mesoscale ocean dynamics, the processes that occur at horizontal scales of less than 100 kilometers. A computer modeler as well as a physical oceanographer, Abernathey uses satellite data, computer models, and supercomputing clusters to study the impacts of mesoscale turbulence on the larger circulation of heat, water, and nutrients in the global oceans.

This week, The Oceanography Society named Abernathey among three recipients of its very first Early Career Award. The award recognizes individuals who have demonstrated extraordinary scientific excellence and possess the potential to shape the future of oceanography. The Early Career Award also recognizes individuals who have made significant contributions toward educating and mentoring in the ocean sciences community and/or who have a record of outstanding outreach and/or science communication beyond the scientific community. Abernathey is creating a unique impact in these areas. Below, he discusses the award, his work, the role of big data, and what it all means for future research.

Q: Congratulations, Ryan. You say your work has two parallel threads; how would you describe your objectives?

A: The central mission for our research group is to understand ocean transport, or how stuff moves around in the ocean. By stuff we mean, first and foremost, just the water itself, the ocean currents and the ways those currents transport things we care about. For example, the way they help heat enter the ocean as part of global warming. This matters a lot for the climate and ocean ecosystems. The way we do that is by using two main tools: satellite observations and high-resolution simulations or models. What both of these tools allow us to do is see small-scale ocean processes with more clarity so we can understand them better. And that leads to the data and computing side of our work. We need to see these small-scale processes better. That means we need high-quality images with more detail, but that amounts to much bigger files. These satellite observations and high-resolution images create a whole lot of data to deal with.

Q: Why is it important to get a better understanding of the role of small-scale ocean processes in the ocean?

A: A specific example is phytoplankton. These tiny organisms are the lungs of the ocean; they consume CO2, photosynthesize, and breathe out oxygen. But they also need nutrients in order to grow. There is growing evidence that the supply of nutrients from small-scale ocean features, like eddies and fronts, is a really important source of nutrients for these organisms. But the global climate models we use to project future climate change are too coarse to properly represent these features, which means those projections may be missing something. By studying these processes in detail, we can get a sense of what might be missing.

Q: How have you dealt with the problem of having so much data to process that it can overwhelm available computational systems?

A: I've discovered I just love building tools for working with data and putting them into the hands of as many people as possible, and seeing those people use those tools to do their own research. That's really satisfying to me, personally. This is not necessarily the most common activity of a scientist. Typically, researchers are expected to produce more and more papers detailing their scientific findings, so this focus on building tools has really been a pivot in my career. It's been incredibly satisfying. It's really kind of a community effort.

Q: Community is a big focus for you. For instance, the work you did to bring about and now lead Pangeo: An Open Source Big Data Climate Science Platform. Why is creating open source code so important to you?

A: I just feel that it's a place I can contribute and I like doing it and it's going to have a real, broad impact. I think a lot of people recognize the challenge of working with these really large data sets, but the unique thing our project brings to the table is a vision for what to do about it, an idea of what the future infrastructure for data and computing will look like for oceanography. Participating in data-intensive research requires a lot of expensive infrastructure, and that is exclusionary. So, there's also a sort of democratizing aspect to what we're trying to do: to make it possible for anyone, at any institution anywhere in the world, to do this data- and computationally intensive research.
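
Pangeo builds on open-source Python tools such as xarray, Dask, and Zarr; a minimal sketch of the cloud-native workflow it promotes might look like the following. The Zarr store path and variable name here are hypothetical.

```python
# Sketch: open a large cloud-hosted ocean dataset lazily with xarray,
# so the same code scales from a laptop to a cluster of Dask workers.
# The store URL and "sst" variable are placeholders (gs:// access needs gcsfs).
import xarray as xr

ds = xr.open_zarr(
    "gs://example-bucket/ocean/sea-surface-temperature.zarr",  # hypothetical store
    consolidated=True,
)

# Lazy computation: nothing is loaded until .compute() is called.
monthly_mean = ds["sst"].resample(time="1MS").mean()
print(monthly_mean)                # still lazy: a Dask-backed DataArray
# result = monthly_mean.compute()  # triggers the distributed computation
```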

Q: Clearly, the award takes into account your specific approach to science. Was that important to you?

A: I'm glad the award did cite my work on open software and tools because it's something that's traditionally undervalued by the academic reward system. The fact that it can be recognized is a sign of progress. It's not just about publishing papers. I'm pleased that this output of mine is recognized. That is indicative of a cultural evolution in the incentive structure in academia.

Q: What is most exciting to you about your work?

A: I love the data. I genuinely love looking at ocean data sets, particularly really large, complex, and beautiful ones that reveal these turbulent ocean processes. On a very aesthetic level, I just love to look at and work with ocean data. It's sort of a unifying thread throughout all this work. The day-to-day motivation is about truth and beauty and these more abstract scientific ideals.

More:

Ryan Abernathey: Helping to Open a Universe of Data to the World - State of the Planet

CTO power panel: Shaping the future of cloud at the edge – SiliconANGLE News

Edge computing is an adolescent market just starting a growth spurt. Its predicted surge from $3.6 billion in 2020 to $15.7 billion by 2025 comes from the enormous diversity of potential use cases. But like any talented teen, edge technology has to decide exactly what it is, where it belongs, and how it's going to get there.

When defining the edge, it's easier to define what it isn't: "It's anywhere that you're going to have IT capacity that isn't aggregated into a public or private cloud data center," said John Roese (pictured, left), global chief technology officer of products and operations at Dell Technologies Inc.

"The edge is really the place where data is created, processed and/or consumed," said Chris Wolf (pictured, right), vice president of the Advanced Technology Group, Office of the CTO, at VMware Inc. "What's interesting here is that you have a number of challenges in that edges are different. You have all these different use cases. So what we're seeing is you can't just say 'this is our edge platform' and go consume it, because it won't work. You have to have multiple flavors of your edge platform."

Wolf and Roese spoke with Dave Vellante and John Furrier, co-hosts of theCUBE, SiliconANGLE Media's livestreaming studio, during theCUBE on Cloud event. They discussed key technology trends that will shape the future of cloud at the edge, including what belongs at the edge, issues with security and latency, and how to define a software framework for the edge.

There may be many use cases for edge, but not all potential uses are productive ones. After a year of testing with customers, Dell has come up with four major reasons why a company should build an edge platform.

The first is latency: "If you need real-time responsiveness in the full closed loop of processing data, you might want to put it in an edge," Roese said.

But then comes the question of defining the real-time responsiveness necessary for each specific use case. "The latency around real-time processing matters," Roese stated. "Real-time might be one millisecond; it might be 30 milliseconds; it might be 50 milliseconds. If it turns out that it's 50 milliseconds, you probably can do that in a colocated data center pretty far away from those devices. [If it's] one millisecond, you better be doing it on the device itself."

The second revolves around requirements for data flow. "There's so much data being created at the edge that if you just flow it all the way across the internet, you'll overwhelm the internet," Roese said. "So we need to pre-process and post-process data and control the flow across the world."

The third question on edge relevancy centers on whether the use case requires the convergence of information technology and operations technology. "The IT/OT boundary that we all know, that was the IoT thing that we were dealing with for a long time," Roese added.

Fourth and potentially most important is security.

"[Edge] is a place where you might want to inject your security boundaries because security tends to be a huge problem in connected things," Roese stated, mentioning the security-enabled edge, or as Gartner named it, secure access service edge, aka SASE. "If data's everything, the flow of data ultimately turns into the flow of information, the knowledge and wisdom and action. If you pollute the data, if you can compromise it at the most rudimentary levels by putting bad data into a sensor or tricking the sensor, which lots of people can do, or simulating a sensor, you can actually distort things like AI algorithms."

Agility is key to edge, with the COVID pandemic demonstrating how companies with established edge platforms were able to react at speed, according to Wolf.

"When you have a truly software-defined edge, you can make some of these rapid pivots quite quickly," he said, giving the example of how Vanderbilt University adapted one of its parking garages into a thousand-bed hospital ward. "They needed dynamic network and security to be able to accommodate that."

The software behind the edge needs to be open, according to Wolf. "We see open source as the key enabler for driving edge innovation and driving an ISV ecosystem around that edge innovation," he said.

The first step in defining this currently very complex area is to separate edge platforms from the edge workload, according to Roese.

"You do not build your cloud, your edge platform, co-mingled with the thing that runs on it. That's like building your app into the OS, and that's just dumb," he stated.

Recognizing that humans are bad when it comes to really complex distributed systems is also important, according to Roese, who advocates the use of low-code architectures interfaced via APIs through CI/CD pipelines.

"What we're finding is that most of the code being pushed into production benefits from using things like Kubernetes or container orchestration or even functional frameworks," he said. "It turns out that those actually work reasonably well."

This links with VMware's bet on open source as the software of choice for the edge. Multiple Kubernetes open-source projects are currently addressing a variety of edge use cases.

"Whether it's k3s or KubeEdge or OpenYurt or superedge, the list goes on and on," Wolf said. However, he pointed out that Kubernetes is perhaps not always the best approach, as it was designed for data center infrastructure, not edge computing.

Open-source projects that take a different approach include the open software platform EdgeX Foundry, which is "about giving you a PaaS for some of your IoT apps and services," Wolf said, noting that the solution is currently seeing growth in China.

The open-source FATE project addresses machine learning at the edge through a federated machine learning model, and Wolf and VMware are laying bets on this approach. "We think this is going to be the long-term dominant model for localized machine learning training as we continue to see massive scale-out to these edge sites," Wolf stated.
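
FATE's own APIs are not shown here, but the federated idea it implements can be sketched in a few lines: each edge site trains on its local data, and only model parameters, never raw data, travel to a coordinator for averaging. The toy example below illustrates that concept only; it is not the FATE project's interface.

```python
# Toy federated averaging: each edge site fits a local linear model on its own
# data, and a coordinator averages the weights. No raw data leaves a site.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

def make_site_data(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site_data(n) for n in (200, 500, 350)]   # three edge sites

def local_fit(X, y):
    """Ordinary least squares solved locally at one site."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Each site computes weights locally; the coordinator averages them,
# weighting by the number of samples each site holds.
local_weights = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(y) for _, y in sites], dtype=float)
global_w = np.average(local_weights, axis=0, weights=sizes)

print("federated estimate:", np.round(global_w, 3))
print("true weights:      ", true_w)
```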

Dell's long-term vision for edge software is that it really needs to be the same code base that we're using in data centers and public clouds, according to Roese. "It needs to be the same cloud stack, the same orchestration level, the same automation level," he said. "Because what you're doing at the edge is not something bespoke. You're taking a piece of your data pipeline and you're pushing it to the edge, and the other pieces are living in private data centers and public clouds, and you'd like them to all operate under the same framework."

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of theCUBE on Cloud event.

Read more:

CTO power panel: Shaping the future of cloud at the edge - SiliconANGLE News

How the Digital DIY movement thrived in 2020 – TechRadar

Living through an extended period of lockdown this year forced many of us into becoming DIY triers, either tackling jobs in the house we would normally avoid or simply trying something new as a way to be creative. For many, this mindset was entirely novel, but for an army of online builders, it was merely an extension of what they were already doing.

Active before the pandemic, but able to operate through it, the collection of brains behind Digital DIY is continually focused on making the latest applications and software packages. From the smart home to novel robotics, these tinkerers and makers, as they are known, are taking advantage of open source software and online communities to solve a whole host of challenges.

There are two main ways in which lockdown has made people more interested in tinkering. First, being stuck at home has led to more free time; instead of going out and socializing or losing time to commuting, people have had more time to themselves, and so have been able to take on new hobbies or learn new skills. For the technical and the curious out there, this has turned into capacity for new projects. For some, this will mean using software to solve challenges for the first time, while those already involved in Digital DIY have had more time to get to the end of existing projects, solving the problems they first set out to tackle.

An example of this is the creation of a Raspberry Pi-powered sous chef, which is configured to automate pan-cooking tasks so that the user can focus on more advanced culinary activity. This tinkerer created an Onion Bot to autonomously control the temperature of the pan on the stove using a PID control system, and set it up to remind the chef if they haven't stirred the pan after a designated time. This type of project is typical of the home-based 'Digital DIY' we have seen during lockdown, with people taking on more challenges than they have before.
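
The Onion Bot's actual code isn't reproduced here, but the PID idea behind it is simple to sketch: compare the measured pan temperature to a setpoint and drive the heater from the proportional, integral, and derivative terms of the error. The gains and the toy pan model below are purely illustrative, not the project's real tuning.

```python
# Minimal PID temperature controller of the kind a Raspberry Pi sous chef
# could use to hold a pan at a setpoint.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=120.0)   # target pan temperature in °C

temp, dt = 20.0, 1.0          # start at room temperature, 1-second steps
for step in range(60):
    heater = max(0.0, min(100.0, pid.update(temp, dt)))  # clamp to 0-100% power
    # Toy pan model: heating proportional to power, plus loss to the room.
    temp += 0.05 * heater - 0.02 * (temp - 20.0)
print(f"temperature after 60 s: {temp:.1f} °C")
```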

The second reason for the expansion during lockdown is more online socializing. More people are signing up to forums or looking for communities surrounding their hobbies and interests and are involving themselves in these communities. For the tinkerers out there, this has led to a boost in traffic and greater visibility of their pages and discussions. The increase in traffic has led to more contributors, more sign ups and more tutorials. The size of the community has expanded because of this and led to more people getting involved in open source software.

Digital DIY is an umbrella term for countless different projects that people are involved in, whereby anyone can build things or learn about technology. This could be someone learning on their own or by interacting with hundreds of different communities of like-minded, curious, enthusiastic people who share their passion. This isn't a new movement; digital creation and tinkering have been around for decades. The availability of affordable hardware, meanwhile, has played a huge part in Digital DIY from the beginning. Hardware like the Raspberry Pi, or similar derivatives, opened the door to a new realm of digital making. Before that, digital making was done predominantly in code for web, desktop, or mobile apps, and only tech companies with a sizable research and development budget could work on digital device projects.

With the addition of affordable hardware, individuals could suddenly make their own devices, or products, adding a whole new dimension to what digital making was. This makes Digital DIY possible even with something as obscure as e-paper (the material used for Kindle devices), which has been used to create an IoT-controlled message board using a Raspberry Pi. This is configured by connecting the e-paper to a Google Docs API, so that the message board can poll a Google Sheet and update itself whenever there is new data. This is an example of small-scale DIY that can be done within the home and is increasingly possible even without a significant budget or hardware.
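
As a rough sketch of the polling idea behind such a message board, the snippet below fetches a sheet exposed at a URL and refreshes the (slow) e-paper display only when the message changes. The URL and the display function are placeholders rather than the project's actual setup.

```python
# Sketch of the poll-and-update loop behind an IoT message board.
import time
import requests

SHEET_CSV_URL = "https://example.com/my-sheet.csv"  # placeholder for a published sheet

def fetch_message():
    resp = requests.get(SHEET_CSV_URL, timeout=10)
    resp.raise_for_status()
    # Assume the message lives in the first cell of the first row.
    return resp.text.splitlines()[0].split(",")[0].strip()

def draw_on_epaper(text):
    print("refreshing e-paper with:", text)  # stand-in for the real display driver

last = None
while True:
    try:
        message = fetch_message()
        if message != last:          # e-paper refreshes are slow, so skip no-ops
            draw_on_epaper(message)
            last = message
    except requests.RequestException as exc:
        print("poll failed:", exc)
    time.sleep(60)                   # poll once a minute
```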

A key aspect of this is people's ability to solve problems through open source software and online communities. These communities enable the sharing of support, advice, ideas, and general encouragement. This simplifies the flow of ideas and means people can share their experiences more quickly, as more people working on open source code or ideas results in quicker turnarounds and better solutions.

Countless enterprises have emerged from DIY and tinkering; start-ups appear every year based on one form of digital making or another. Enterprises are realizing the vast untapped potential within tinkering and the tinkering community. Big businesses are good for platforms: for creating, maintaining and supporting the platforms on which tinkerers and digital makers work. The more that big business turns to open source and invests in these tools and platforms, the easier and greater the quality of digital making becomes. Because of this, it's easier for people to make their own projects and products, and form communities that carry innovative weight.

See original here:

How the Digital DIY movement thrived in 2020 - TechRadar

Why Developers Shun Security and What You Can Do about It – Security Boulevard

The Linux Foundation and the Laboratory for Innovation Science at Harvard recently released a Report on the 2020 Free/Open-Source Software Contributor Survey. One of the primary conclusions of this report was the fact that free/open-source software developers often have a very negative approach to security. They spend very little time resolving security issues (an average of 2.27% of their total time spent) and they express no willingness to spend more.

Some of the quotes from the survey were simply disturbing. For example: "I find the enterprise of security a soul-withering chore and a subject best left for the lawyers and process freaks. I am an application developer." Another example: "I find security an insufferably boring procedural hindrance."

While the report contains the authors' strategic recommendations, here are our thoughts about what this situation means for application security and what you can do about it.

The original report focuses only on free/open-source software (FOSS) but we believe it is important to consider whether this is only a FOSS problem or a problem with all developers.

Based on the survey, most FOSS developers (74.87%) are employed full-time and more than half (51.65%) are specifically paid to develop FOSS. This means that FOSS is often developed by the same people who develop commercial software. We do not believe that developers change their attitude depending on whether the software they work on is free or commercial. Therefore, we believe that this bad attitude towards security extends to all developers.

We also believe that the underlying cause of this attitude is the fact that developers are either taught badly or not taught at all. Most online resources that teach programming completely skip the issue of secure coding practices. Books about programming languages rarely even mention secure coding. Schools also often treat security as an optional subject instead of a core course that should be a prerequisite to all other programming classes.

Therefore, we conclude that the results of this survey may be assumed to apply to all software developers. While in the case of commercial software some security measures may be added by the presence of dedicated security teams, the root is still rotten.

While 86.3% of the survey respondents received formal development training, only 39.8% stated that they have formal training in developing secure software. This means that more than half of developers were never formally taught secure development.

Another shock comes from the responses to the following question: "When developing software, what are your main sources for security best practices?" It turns out that only 10.73% learned such best practices from formal classes and courses and 15.51% from corporate training. Nearly half of developers use online articles/blogs (46.54%) and forums (50.66%) as their primary sources of information on best practices, which again shows the horrid state of education and the lack of resources about secure coding. And while we at Acunetix pride ourselves on filling the gap and being the teachers (thanks to our articles that explain how vulnerabilities work and how to avoid them), we would much rather have developers learn first from sources that are more reliable than a search engine.

Last but not least, the survey results show that free/open-source software is often released with little or no security testing. While 36.63% of respondents use a SAST tool to scan FOSS source code, only 15.87% use a DAST tool to test applications. This situation is probably better in the case of commercial software because security teams usually introduce SAST/DAST into the SDLC.

If your application developers have a bad attitude towards security, it is not only due to their education. It may also be because of your business organization, which causes them to feel that they're not involved in security at all.

Developers don't feel responsible for security primarily due to the existence of dedicated security teams. If security personnel work in separate organizational units, developers assume that security is not their problem and expect the security researchers to take care of it instead.

Developers also don't feel responsible because in a traditional organization they are rarely expected to fix their own security-related mistakes. A typical developer writes a piece of code, gets a code review from another developer (probably just as clueless about security), and then forgets about it. Later, a security researcher finds a vulnerability and creates a ticket to fix it. This ticket is assigned to the first available developer, usually not the one who originally introduced the vulnerability.

Such an organization promotes the lack of responsibility for security and fuels negative feelings between developers and security teams. They may view one another as the ones that cause problems and this is what you must aim to change first.

Automating the process of finding and reporting security vulnerabilities as early as possible solves this problem. First of all, errors are reported by a piece of software, not a human, so there is no other person to blame. Secondly, the error is reported immediately, usually after the first build attempt, and the build fails, so the developer must fix their own mistake right away. And thirdly, every time developers are forced to fix their own errors, they learn a little more about how to write secure code and how important it is.
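
As a sketch of that "fail the build immediately" idea, a small CI gate script might invoke a scanner and exit non-zero when it reports serious findings. The scanner command and report format below are hypothetical stand-ins, not Acunetix's actual CLI.

```python
# Hypothetical CI gate: run a security scanner, fail the build on findings.
# "scan-tool" and its JSON report format are stand-ins, not a real CLI.
import json
import subprocess
import sys

result = subprocess.run(
    ["scan-tool", "--target", "./build", "--output", "report.json"],  # hypothetical
    check=False,
)
if result.returncode != 0:
    sys.exit("scanner itself failed to run")

with open("report.json") as fh:
    findings = json.load(fh).get("findings", [])

serious = [f for f in findings if f.get("severity") in ("high", "critical")]
for f in serious:
    print(f"{f.get('severity')}: {f.get('title')} in {f.get('location')}")

# A non-zero exit makes the pipeline fail, so the developer who introduced the
# issue fixes it right away instead of a ticket landing on someone else.
sys.exit(1 if serious else 0)
```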

The only problem that remains is finding software that can be trusted with this task. Unfortunately, the limited capabilities of SAST/DAST software have been the cause of many failures in the past, and this is why many developers do not want to use a SAST or DAST tool.

SAST tools point to potential problems, but they report quite a few false positives: the developer spends a lot of time researching something that turns out not to be a vulnerability at all. In the end, developers stop trusting the tool and start hating it. On the other hand, DAST tools report fewer false positives but often don't provide enough information for the developer to be sure where the vulnerability is and what it can lead to.

Acunetix helps solve such problems. The advantage is that, in the case of the most serious vulnerabilities, Acunetix provides proof of the vulnerability. For example, the developer may receive a report showing that their code exposed sensitive files from the server, including the content of those sensitive files as evidence.

The most worrying conclusion from this article is that most free/open-source software is inherently insecure and if you want to feel safe using it, you need to do regular security testing yourself.

Another worrying conclusion is that people who should be your first line of defense in IT security are not educated about security and have a bad attitude toward it. This is not something that is going to be easy or quick to change.

Long-term strategic resolutions are needed to solve these major problems and simply implementing an automated solution cannot be perceived as a magic wand. However, if you introduce a reliable automated testing solution such as Acunetix into your DevSecOps SDLC at the earliest stage possible, you will ensure that your software is safe and you will teach your own developers that they need to take responsibility for the security of their code.

Tomasz Andrzej Nidecki, Technical Content Writer

Tomasz Andrzej Nidecki (also known as tonid) is a Technical Content Writer working for Acunetix. A journalist, translator, and technical writer with 25 years of IT experience, Tomasz has been the Managing Editor of the hakin9 IT Security magazine in its early years and used to run a major technical blog dedicated to email security.

Read more from the original source:

Why Developers Shun Security and What You Can Do about It - Security Boulevard

Hack together your own e-paper smartwatch with this $50 open-source kit – The Verge

If you've ever wanted to be like Steve Wozniak and have your own custom-made, geeky watch, Squarofumi (stylized SQFMI) may have the product for you: an open-source, Arduino-powered smartwatch with a 1.54-inch e-paper screen (via Gizmodo). It's called the Watchy, and both the hardware and software are completely customizable. You can, however, use it right out of the box, as the PCB acts as the body and has points to attach a watch strap. And to top it all off, it's only $50, on sale for $45 at the time of writing.

The SQFMI site has sections for watch faces and cases, but at the moment they both only say "Coming Soon," so if you're thinking about this watch, you'll definitely want to make sure you're ready for a DIY project. Oh, and there's also the fact that the watch doesn't come assembled: you have to put it together yourself, hooking the 200x200 display, PCB, and 200mAh battery together. There's Wi-Fi, Bluetooth, a 3-axis accelerometer, and four buttons that can be used for navigation or whatever other functions you can dream up.

If having to put together the hardware yourself doesn't intimidate you, there's one last thing to note: while the watch does come with pre-loaded software, if you want to make any changes to the watch face you'll have to download the Arduino IDE and program them yourself.

While some people may be turned off by all the work required to get the watch working, for certain people the build-it-yourself approach means that they'll be able to get exactly what they want. If you want a watch with a case that looks like an iPod or a Game Boy, with an interface to match, you can 3D print a case and code the watch face yourself. It's the type of freedom you're not likely to get from most commercial smartwatches, though Tizen and Wear OS watches do offer downloadable watch faces.

The battery life SQFMI estimates depends on your use case: it says if you're just keeping time you should get five to seven days, but if you're fetching data frequently you may only see two to three. Its open-source nature, however, means that you could always fit a larger battery into it, or try to make some software optimizations if there are features you're willing to cut.

If you're looking for this kind of coding/DIY project, the Watchy is being sold on Tindie. I'd just recommend you look at SQFMI's website to make sure that the amount of documentation available is enough for you to get started.

View post:

Hack together your own e-paper smartwatch with this $50 open-source kit - The Verge

What must be done to bring Linux to the Apple M1 chip – ZDNet

Everyone loves Apple's new M1 chip Macs. Even Linux's creator, Linus Torvalds, has said "I'd absolutely love to have one if it just ran Linux." And recently, Hector Martin, a Tokyo-based IT security consultant and hacker, began leading the crowd-funded Asahi Linux project to bring the Arch Linux distro to Apple's ARM-based M1 architecture. But in an e-mail interview, Greg Kroah-Hartman (gregkh), the Linux kernel maintainer for the stable branch and leader of the Linux Driver Project, said Asahi's programmers will face "lots of work in figuring out the hardware connected to the CPU (i.e. driver stuff)."

Why would that be so hard, you ask? Doesn't Linux run on almost every processor in the world, from 80386s to IBM s390x to SPARC? Hasn't Linux been running on the ARM family since 1995? Yes and yes. But in those earlier cases, Linux developers had access to the chip's firmware, microcode, and documentation. That's not the case with the M1.

Torvalds would love to run Linux on these next-generation Macs. "I've been waiting for an ARM laptop that can run Linux for a long time. The new Air would be almost perfect, except for the OS. And I don't have the time to tinker with it, or the inclination to fight companies that don't want to help."

In an interview, Torvalds told me, "The main problem with the M1 for me is the GPU and other devices around it because that's likely what would hold me off using it because it wouldn't have any Linux support unless Apple opens up."

Apple isn't opening up. So, Linux developers have to do it the hard way. And the hard way is really hard.

Even gregkh, who's long been the Linux driver developer leader, finds the M1 daunting:

"I'm not going to lay out all of the individual things that need to happen here, as the people involved should already know this (hopefully). It's no different from porting Linux to any other hardware platform where we already have CPU support for it. People do it all the time, but usually, they do it with the specs for how the hardware works. Here no one seems to have specs, so it will take a lot more effort on their part."

Can it be done? Sure. The M1 starts from a well-known architecture.

Fortunately, Asahi has Alyssa Rosenzweig to help with the port. Rosenzweig has been working with Collabora on Panfrost, a free and open-source graphics stack for Android Arm Mali GPUs. Her work with these proprietary GPUs will serve her in good stead in dealing with M1's built-in GPU.

Rosenzweig has already been successful in some M1 reverse engineering. While macOS has open-source roots in the BSD Unix variant Darwin and some open-source code, that's not as much help as you might think.

Rosenzweig explained that, for example, "While the standard Linux/BSD system calls do exist on macOS, they are not used for graphics drivers. Instead, Apple's own IOKit framework is used for both kernel and userspace drivers, with the critical entry point of IOConnectCallMethod, an analog of ioctl." In short, no one's porting Linux to this processor over the weekend, or possibly this year.

But it's not impossible either. Martin said: "Apple allows booting unsigned/custom kernels on Apple Silicon macs without a jailbreak! This isn't a hack or an omission, but an actual feature that Apple built into these devices. That means that, unlike iOS devices, Apple does not intend to lock down what OS you can use on Macs (though they probably won't help with the development)."

No, no they won't. But, let us, with gregkh, wish Rosenzweig, Martin, and the rest of the Asahi Linux crew good luck. Macs have long been popular with Linux users. With some luck and a lot of hard work, Linux users may eventually run their favorite operating system on the next-generation of their favorite Apple hardware.

Link:

What must be done to bring Linux to the Apple M1 chip - ZDNet

[YS Learn] How Zerodha, ERPNext collaborated to build FOSS United Foundation to push for open-source projects – YourStory

The free and open-source software (FOSS) movement started in the 90s in India. However, it was not until 2016 that Zerodha and ERPNext laid the groundwork for what became the FOSS United Foundation.

Founded in 2008, ERPNext was developed by Founder and CEO Rushabh Mehta to manage his family business. Bootstrapped since inception, the company provides FOSS ERP (enterprise resource planning) systems to its clients.

In a conversation with YourStory, Zerodha CTO Kailash Nadh and Rushabh Mehta talk about the different ways FOSS United Foundation helps developers and hackers with its solutions.

Edited excerpts from the conversation:

Kailash Nadh [KN]: FOSS United was originally founded by Rushabh Mehta in 2016 as the ERPNext Foundation. Around the same time, Zerodha discovered ERPNext and decided to build its system on top of the ERPNext software.

This got us talking to the ERPNext team, and we gradually realised that we shared similar views on FOSS.

In early 2020, we co-organised a small FOSS conference in Bengaluru and realised we had the right intent and resources to do a lot more activities around FOSS. Thus, FOSS United Foundation was launched as a collaboration between ERPNext and Zerodha, where we subsumed the ERPNext Foundation into a new entity, broadening its goals.

Rushabh Mehta [RM]: India is now a hub of startups, innovative consumer software, developer communities, and large-scale technological infrastructure. However, somewhere down the line, the spirit of FOSS and hacking has been overshadowed.

This is illustrated by the disproportionately low number of quality FOSS projects coming out of India despite a thriving industry, compared to the explosion of projects that has happened globally over the last decade.

In the Indian context, our goals are:

KN: There is a huge developer base in India that consumes large amounts of FOSS from all over the world. However, there are very few projects that originate from the country. We want to see a local ecosystem of useful FOSS projects emerge and thrive in India, by creating a platform to encourage developers to create and contribute to FOSS projects, both for fun and profit. This could be in the form of project incubation, funding and grants, volunteer networking, and help with legal aspects, etc.

RM: It is not designed to help individual coders so much as to support FOSS projects in India. Recently, we conducted an online hackathon with decent prize money that had over 600 participants building and learning about FOSS. We plan to hold events and hackathons throughout the year and build projects for the Indian developer community.

RM: Zerodha started using ERPNext for some of its applications. The team had discovered the community in Mumbai and had come down for a FOSS conference in 2018. Over the next two years, both the tech teams realised that they are very passionate about FOSS.

We experimented with a FOSS conference (IndiaOS in February 2020), which garnered positive responses. And we realised the need for community-wide FOSS activities in India.

The goal of FOSS United Foundation is multi-fold. We want to bring the community together to conduct events and support projects. We want to bring back the joy of hacking into the profession, where developers should be inspired to develop just for the fun of it. We also aspire to build products under the FOSS United banner.

We expect FOSS United to become a platform like Mozilla that creates community products. Being beneficiaries of the FOSS projects, we feel we must try and give back to the community.

We don't have an active roadmap, but we hope in the next year, there will be some really exciting products that will come out of FOSS United Foundation.

RM: India is second only to the US in the number of software developers. However, we still do not rank anywhere in the open-source segment. While there are a few active projects, they are in their early days. India is also slowly moving from a service to a "product" mindset, but there is very little innovation involved.

We believe that FOSS is the backbone of technology innovation globally. While some people believe Intellectual Property Rights (IPR) spurs innovation, this is mostly a stale thought. The phenomenal rise of FOSS, aided by the collaboration on the internet, has led to unprecedented growth in technology.

There is only one way to change it: to become the change ourselves. In the words of Mahatma Gandhi, "Be the change you wish to see in the world."

We hope through our example, more people will be inspired to push projects online, and we want to be there if they need a platform to share their learning or help them on the path to sustainability.

KN: While India is a startup hub and has seen massive technological innovation over the last decade, we seem to have developed a culture of consuming and not giving back.

A disproportionately small number of FOSS projects originate from India, while the mainstream focus is on building startups with high valuations rather than re-usable technology. There is very little talk of actual engineering and technology in the tech industry.

RM: Hackathons provide a great way for developers to test their skills against others. They can inspire a competitive spirit that spurs developers to think more deeply about their skills.

While hackathons alone can't do anything, they do provide a space for people to connect, talk, and discover good projects. During our last hackathon, we discovered an attempt to build a new programming language for the next-generation web architectures, and we were able to provide some support to the developer to take it forward.

KN: Hackathons are just a means of getting developers excited to network and tinker. Most projects built at a hackathon may not get developed further, but some of the ideas that are discussed, or one of the many micro-experiences that are had, could set a hobbyist on the path to becoming a good developer.

For India to have strong technological self-reliance, we need developers to be encouraged to build and contribute to FOSS, the industry to openly support and fund FOSS, and the government to set a strong mandate on using FOSS for its needs.

These measures can create local technological capacity, highly skilled technical jobs, humongous cost savings, and of course, highly re-usable technology.

See the original post here:

[YS Learn] How Zerodha, ERPNext collaborated to build FOSS United Foundation to push for open-source projects - YourStory