Open-Source Tool ‘Gitjacker’ Detects Leaking .git Repositories On Websites – Fossbytes

Web developers can accidentally upload their entire Git repository, including the /.git folder, to a live website, or forget to remove it afterwards, exposing sensitive information to attackers. This is where a new open-source tool dubbed Gitjacker can help.

A .git directory stores all of your Git repository data, such as configuration, commit history, and the actual content of each file in the repository. As a rule of thumb, /.git folders should never be uploaded online.

If someone can access the entire contents of a website's .git directory, they can retrieve the raw source code for that site as well as sensitive configuration data such as database passwords, password salts, and more.

So, Gitjacker helps developers detect leaking .git repositories on websites. It was created by British software engineer Liam Galvin in the Go programming language. You can download Gitjacker for free from GitHub.

To explain how Gitjacker works in the simplest terms: it lets users scan a domain and detect any locations where a /.git folder is exposed on their production systems.

It can also identify /.git folders that are included in automated build chains and added to Docker containers later deployed as web servers.
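Gitjacker itself is written in Go, but the basic check it automates is easy to illustrate. The following minimal Python sketch (not Gitjacker's actual code; the target URL is hypothetical) simply probes a site for a readable .git/HEAD file, which is a strong hint that the repository is exposed:

import requests  # third-party HTTP client: pip install requests

def git_exposed(base_url: str) -> bool:
    # A readable .git/HEAD that starts with "ref:" (or is a bare 40-character
    # commit hash) strongly suggests the whole repository is reachable.
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        return False
    body = resp.text.strip()
    return resp.status_code == 200 and (
        body.startswith("ref:")
        or (len(body) == 40 and all(c in "0123456789abcdef" for c in body))
    )

print(git_exposed("https://example.com"))  # hypothetical target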

The tool can not only find /.git folders but also fetch their contents, such as sensitive configuration files, within a few keystrokes.

Hackers tend to scan the internet for such folders on accidentally exposed systems. They download the contents to gain access to configuration data or application source code.

Web servers that have directory listings enabled are particularly vulnerable to this kind of attack. With directory listings disabled, retrieving a complete repository becomes more difficult.

But Gitjacker can handle the download and extraction of a git repository for users, even in cases where web directory listings are disabled.
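To see why disabled directory listings are only a speed bump, note that Git stores every object at a predictable path under .git/objects, zlib-compressed. The sketch below (a simplification, not Gitjacker's implementation, and covering only loose objects rather than packfiles; the target is hypothetical) walks from .git/HEAD to the branch ref to the commit object it points at:

import zlib
import requests

def fetch(base: str, path: str) -> bytes:
    # Request a known .git path directly; no directory listing is needed.
    resp = requests.get(base.rstrip("/") + "/.git/" + path, timeout=5)
    resp.raise_for_status()
    return resp.content

def latest_commit(base: str) -> str:
    head = fetch(base, "HEAD").decode().strip()    # e.g. "ref: refs/heads/main"
    sha = (fetch(base, head.split(" ", 1)[1]).decode().strip()
           if head.startswith("ref:") else head)
    # Loose objects live at .git/objects/<first 2 chars>/<remaining 38 chars>.
    raw = zlib.decompress(fetch(base, f"objects/{sha[:2]}/{sha[2:]}"))
    return raw.decode(errors="replace")            # commit header: tree, parent, author...

print(latest_commit("https://example.com"))  # hypothetical target

From the commit object, the same pattern recursively recovers trees and file blobs, which is essentially the extraction work the tool automates.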

The developer of Gitjacker says he made the tool for use in penetration tests. Owing to its abilities, however, Gitjacker can also be abused by malicious actors; hackers have a long history of misusing open-source tools for their own purposes.

See more here:
Open-Source Tool 'Gitjacker' Detects Leaking .git Repositories On Websites - Fossbytes

Does Python Have to Change? – Editorials 360

The Python programming language is a big hit for machine learning, read a headline this week at ZDNet, adding, "But now it needs to change."

Python is the top language according to IEEE Spectrum's electrical engineering audience, but you can't run Python in a browser and you can't easily run it on a smartphone. Plus, nobody builds games in Python these days. To build browser applications, developers tend to go for JavaScript, Microsoft's type-safety take on it, TypeScript, Google-made Go, or even old but trusty PHP. On mobile, why would app developers use Python when there's Java, Java-compatible Kotlin, Apple's Swift, or Google's Dart? Python doesn't even support compilation to the WebAssembly runtime, a web application standard supported by Mozilla, Microsoft, Google, Apple, Intel, Fastly, Red Hat and others.

These are just some of the limitations raised by Armin Ronacher, a developer with a long history in Python who 10 years ago created the popular Flask Python microframework to solve problems he had when writing web applications in Python. Austria-based Ronacher is the director of engineering at US startup Sentry, an open-source project and tech company used by engineering and product teams at GitHub, Atlassian, Reddit and others to monitor user app crashes caused by glitches on the frontend, backend or in the mobile app itself. Despite Python's success as a language, Ronacher reckons it risks losing its appeal as a general-purpose programming language and being relegated to a specific domain, much like Wolfram's Mathematica, which has also found a niche in data science and machine learning.

Peter Wang, co-founder and CEO of Anaconda, maker of the popular Anaconda Python distribution for data science, cringes at Python's limitations for building desktop and mobile applications. "It's an embarrassing admission, but it's extremely awkward to use Python to build and distribute any applications that have actual graphical user interfaces," he tells ZDNet. "On desktops, Python isn't the first-class language of the operating system, and it must resort to third-party frameworks like Qt or wxPython. Packaging and redistribution of Python desktop applications are also really difficult," he says.

Read more of this story at Slashdot.

Here is the original post:
Does Python Have to Change? - Editorials 360

SoloLearn Announces Partnership with Nonprofit Techqueria to Bridge Skills Gaps and Promote Latinx Careers in Tech – PRNewswire

SAN FRANCISCO, Oct. 22, 2020 /PRNewswire/ -- SoloLearn, the leading e-platform for coding education, is proud to announce a year-long partnership with Techqueria, a 501(c)(3) nonprofit dedicated to empowering Latinx professionals with the resources and support they need to become leaders in the tech industry. This Community Partnership signals SoloLearn's commitment to providing resources, support and up-skilling within the Techqueria community to encourage education for greater diversity in a tech industry in which the Latinx community represents only 6.8% of workers, according to a Brookings Institution study.

"The mission of SoloLearn is to make coding education accessible to everyone and create a more diverse tech community," said Yeva Hyusyan, co-founder and CEO of SoloLearn. "We are thrilled to partner with Techqueria to make that a reality within the Latinx community by providing a fun, accessible and effective technical learning experience that bridges the skills gap to future career opportunities."

Latinx workers are underrepresented in 88 out of 90 large metro areas, and 40 of the largest metros have seen representation decline since 2010. As part of this partnership, SoloLearn has committed to making a change in the right direction by using its self-paced learning platform to upskill Techqueria community members based on their technology interests and future career goals. It will provide full platform access to Techqueria members who are interested in learning new programming languages and pursuing careers in technology.

"We are excited to have SoloLearn as a partner in supporting our mission in elevating the careers of Latinx professionals in tech. We look forward to having our members benefit from SoloLearn's courses so they can further elevate their software engineering careers. The more of our members who can grow through SoloLearn, the more Latinx engineering leaders we can cultivate and create." (Frances Coronel, Executive Director at Techqueria)

For more information about SoloLearn visit https://sololearn.com.

About SoloLearn: SoloLearn is bridging the skills gap to future careers by building the most fun, accessible, and effective technical learning experience. With more than 40 million installs worldwide, SoloLearn has built the world's largest online mobile coding learning community. The app guides learners through a comprehensive and immersive experience with bite-sized lessons, code coaching exercises, Q&A, a built-in IDE called "Coding Playground", and a vibrant community where individuals can challenge each other head-to-head. The service is available on both mobile and web, and offers free coding courses in over 15 programming languages, with over 2,000 lessons and 15,000 quizzes. The company has been distinguished with the Google Play Editor's Choice Award and Facebook's FbStart Global App of the Year Award, and is the #1 learn-to-code search result in Google Play and the App Store.

About Techqueria: Techqueria is a 501(c)(3) nonprofit that empowers Latinx professionals with the resources and support that they need to thrive and become leaders in the tech industry.

To that end, we work with both tech companies and employee resource groups (ERGs) to build Latinx-centered spaces that revolve around career advice, technical talks, mentorship, open jobs, networking events, speaking opportunities, and open source in order to comprehensively effect change in the tech industry.

Coming from all walks of life, we believe that the diversity of our community is the most reliable asset we have. Our space aims to be inclusive so we invite Latinx from the regions of the Caribbean, Haiti, and Brazil as well as those who identify as Afro-Latinx, Asian-Latinx or LGBTQIA. The term Latinx is used instead of Latino or Latina because it is a gender-neutral and inclusive term.

Media Contact: Andrea Toch, Colter Communications, [email protected], Phone: 602.405.8335

SOURCE SoloLearn

https://sololearn.com

Original post:
SoloLearn Announces Partnership with Nonprofit Techqueria to Bridge Skills Gaps and Promote Latinx Careers in Tech - PRNewswire

Understanding Is Crucial for Voice and AI: Testing and Training are Key To Monitoring and Improving It – Voicebot.ai

on October 24, 2020 at 12:00 pm

Editor's Note: This is a guest post written by Bespoken.io CEO John Kelvie

How well does your voice assistant understand and answer complex questions? It is often said that making complex things simple is the hardest task in programming, as well as the highest aim for any software creator. The same holds true for building for voice. And the key to ensuring an effortlessly simple voice experience is accuracy of understanding, achieved through testing and training.

To dig deeper into the process of testing and training for accuracy, Bespoken undertook a benchmark to test the Amazon Echo Show 5, Apple iPad Mini, and Google Nest Home Hub. This article explores what we learned through this research and the implications it has for the larger voice industry and other products and services.

For the benchmark, we took a set of nearly 1,000 questions from the ComQA dataset and ran them against the three most popular voice assistants: Amazon Alexa, Apple Siri, and Google Assistant. The results were impressive: these questions were not easy, and the assistants often handled them with aplomb.

Google Assistant did especially well. Given Google's preeminence in the search space, this perhaps does not come as a surprise. As important, and perhaps more surprising, Google also excels in UNDERSTANDING the user.

To explain further, we probably expect Google to know: "What is the least populated county in the state of Georgia?"

Not an easy question for a regular person, but eminently knowable, and the type of thing that Google and search engines excel at.

Then what about this: "What year did the first men on the moon?"

Without the word "land" in there, it's not clear what is being asked. But most of us could figure it out. And so could the assistants.

But then how about this one: "What largest us state is closer canada?"

You can read this a few times without making sense of it; the question may even seem rather unfair. If your head starts to hurt, there's a quick remedy. Just pop it into Google Assistant, which replies: "There are 13 states that share a border with Canada. With 1,538 miles (2,475 km), Alaska shares the longest border."

Ahh, that feels right. Our benchmark shows that though all the assistants have room for improvement, they are clearly able, in some ways, to exceed our own human ability to parse and answer questions quickly and correctly.

And this accuracy of understanding is critical to delivering great voice and AI experiences. Thankfully, the pathway to achieving it is straightforward: testing and training. Testing means looking at all the ways users might interact with a voice application and seeing how well they work, just as we did with our benchmark. We commonly run these tests on behalf of our customers and typically see voice experiences with error rates of 20% or greater during initial baseline assessments.

But that is not a reason to despair! Training and tuning is the process of reducing these errors: it means revising and re-testing the model until it reaches an optimal level. This is an ongoing process that is essential to building for AI. We typically see reductions in errors of between 75% and 95% using simple techniques, and even further improvements with more advanced but still straightforward techniques.
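As an illustration of the testing half, the sketch below computes a baseline error rate for a small set of test utterances in Python. The test cases, the stubbed assistant, and the loose keyword-matching rule are all invented for the example; a real harness would call the assistant under test through an API or a device-in-the-loop setup:

# Minimal error-rate baseline for a set of test utterances (illustrative only).
test_cases = [
    ("what year did the first men land on the moon", "1969"),
    ("what largest us state is closer canada", "alaska"),
]

def matches(actual: str, expected: str) -> bool:
    # Very loose check: the expected keyword appears somewhere in the reply.
    return expected.lower() in actual.lower()

def error_rate(run_assistant, cases) -> float:
    failures = sum(0 if matches(run_assistant(utterance), expected) else 1
                   for utterance, expected in cases)
    return failures / len(cases)

stub_assistant = lambda q: "Alaska shares the longest border with Canada."  # stand-in
print(f"error rate: {error_rate(stub_assistant, test_cases):.0%}")

Re-running the same test set after each round of tuning shows whether the error rate is actually trending down.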

For example, in our work with the Mars Agency on behalf of a major cosmetics brand, testing and training reduced errors by more than 80% simply by adding sounds-alike phrases to the Google Action speech model. These could be things such as: when the user says "Ageless", it is understood as "age list". We don't need to know all the complex algorithms involved in speech recognition to add "age list" as a synonym for "ageless"; once we know that one phrase is commonly mistaken for the other, it's as easy as that to make dramatic improvements to accuracy. And further improvements, with more advanced approaches, are also readily achievable.
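A platform-independent sketch of the same idea: normalize known sounds-alike transcriptions before the text reaches intent matching. In a real Google Action this is done by adding synonyms to the speech model itself, as described above; the mapping table here contains only the article's "age list"/"ageless" example, and the function name is made up for illustration:

# Map common mis-transcriptions to the phrase the model should have heard.
SOUNDS_ALIKE = {
    "age list": "ageless",  # example from the article
}

def normalize_transcript(transcript: str, table: dict = SOUNDS_ALIKE) -> str:
    # Replace known sounds-alike phrases before intent matching.
    text = transcript.lower()
    for heard, meant in table.items():
        text = text.replace(heard, meant)
    return text

print(normalize_transcript("tell me about the age list serum"))
# -> "tell me about the ageless serum"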

What's more, this all works best when tied into a continuous integration/continuous testing process, facilitated by DevOps tools such as GitHub, GitLab, and Datadog, which we used in our benchmark findings. When paired with in-depth testing and training regimens, these tools ensure voice experiences improve over time to consistently delight users. You can see the workflow we put together for our benchmark here; it's all open-source. And the diagram below summarizes what we recommend as best practices for pulling these tools together for best-in-class accuracy and performance.

In summary, to apply this for your own projects, we recommend building off the five points listed below. Start with them to build a great testing, training, and monitoring regimen:

See more here:
Understanding Is Crucial for Voice and AI: Testing and Training are Key To Monitoring and Improving It - Voicebot.ai

Esports Organization Cloud9 Unveils All-Female Roster – Hollywood Reporter

11:00 AM PDT 10/25/2020 by Trilby Beresford

Esports organization Cloud9 unveiled on Sunday an all-female professional roster of athletes.

The team, dubbed Cloud9 White, will compete in Riot Games' tactical first-person shooter Valorant and in the same tournaments as the men's team, Cloud9 Blue, with the long-term goal being for men and women to compete on teams together.

Members of the women's team, formerly known as MAJKL, include Alexis Guarrasi, Annie Roberts, Jasmine Manankil, Katsumi and Melanie Capone.

Their experience includes securing a place in the Counter Logic Gaming tournament at the Blitz Open Cup, while last month they won first place at the FTW Summer Showdown tournament, part of the Valorant Ignition Series. The win gave them $25,000 and wider visibility.

Riot Games' First Strike tournament qualifiers begin Oct. 25. AT&T will be the presenting partner.

Cloud9 was founded in 2013 and has professional teams in Fortnite, League of Legends, Hearthstone, Teamfight Tactics, Counter-Strike: Global Offensive and more.

Go here to see the original:
Esports Organization Cloud9 Unveils All-Female Roster - Hollywood Reporter

How to Make DevOps Work with SAFe and On-Premise Software – InfoQ.com

Key Takeaways

There can be no agile software delivery without the right DevOps infrastructure. In this article, we would like to share our experience on our DevOps and agile transformation journey. We have a big, distributed team structure and we deliver on-premise software, which makes delivery different from cloud practices. We have been using many tools that are almost standard in the agile world. The challenge was bringing all the teams together in one pipeline for faster delivery. Our first release took us three years! After establishing SAFe, we were able to release at semi-regular intervals, 3-4 times per year. And currently, we are laying the groundwork for even faster delivery, basically trying to do the "release on demand" defined by SAFe: delivering a feature as soon as it is ready to be delivered.

We have managed to create 2 release trains so far, and are currently working on dividing them into more pieces to help enable faster delivery. This isn't as easy as it sounds, because it's not just about technically creating the trains and thinking about their domains, dependencies, etc. It is also about people and teams. The team members are used to working with each other and sometimes can show resistance to joining other teams and working with different individuals. There is no silver bullet for this situation; only clear, transparent, and bi-directional communication can make things move.

Like other software development teams, we have been using many different tools for our DevOps. The fragmented DevOps landscape resulted in a lack of visibility and difficulty in dealing with problems. These problems were blocking us from releasing because we were observing the problems too late, and resolving them took time which delayed the delivery further.

The main issues we dealt with in speeding up our delivery from a DevOps perspective were: testing (unit and integration), pipeline security checks, licensing (open source and other), builds, static code analysis, and deployment of the current release version. For some of these problems we had the tools; for others we didn't, and we had to integrate new tools.

Another issue was the lack of general visibility into the pipeline. We were unable to get a glimpse of our DevOps status at any given moment. This was because we were using many tools for different purposes and there was no consolidated place where someone could take a look and see the complete status for a particular component or the broader project. Having distributed teams is always challenging: getting them to come to the same understanding of and visibility into the development status is hard. We implemented a tool to enable standard visibility into how each team was doing and how the SAFe train(s) were doing in general. This tool provided us with a good overview of pipeline health.

The QA department has been working as the key-holder of the releases. Its responsibility is to check the releases against all bugs and not allow a version to be released if there are critical bugs. As standard as this sounds, it doesn't follow agile principles. We have been trying to "inspect quality in" instead of building it in. This is why we followed DevOps principles to enable the teams to deliver quality in the first place, as well as getting help from QA on expectations and automation to speed up many processes. This is taking time, but the direction is clear and teams are constantly working toward it. When we analyze our release cycles, we can see where we spend the most time: staging, and this is what we are working on reducing.

Finally, we are enabling the release-on-demand concept in SAFe, because we want to release any feature, bug resolution, or tweak as soon as it is ready, if the Product Owner and the team say it can be released. This is a big paradigm change compared to the very long staging times for releasing a fixed scope, which was usually huge and required a long testing time just to ensure everything worked.

The current definition of DevOps in Wikipedia is:

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology.

This definition and the practical reality tell us that no software development project can be really agile without proper DevOps. Without healthy DevOps in place, it would be very difficult to have a fast, reliable delivery pipeline. Let's map some of the key features of DevOps to some key agile principles to show the relationship more clearly.

Although there is no manifesto for DevOps, let's list the most commonly used DevOps practices:

Besides these more technical best practices, some culture-related practices are very commonly mentioned with DevOps:

The principles that are linked to the technical best practices of DevOps are emphasized in bold above. These principles are focused on delivery, and so are the DevOps practices. Different DevOps practices focus on different parts of the delivery cycle, but together they complement a cycle of faster delivery.

For example, a team can only deliver continuously if it sticks to DevOps practices: continuous build/integration/deployment/release, test automation, secure delivery, and continuous monitoring of metrics. The rest of the agile principles still depend heavily on DevOps practices, but the dependencies are a bit less obvious, which is why we preferred not to link them directly. However, just to give an example, let's look at "continuous attention to technical excellence and good design enhances agility." To pay close attention to technical excellence, the code itself should also be subject to change when the team decides to do some refactoring, and this can be MUCH faster and more robust only if the DevOps infrastructure is ready for it, providing automated tests, continuous integration and deployment, etc. Otherwise refactoring work can be a big pain, and very costly due to the many bugs discovered in later stages.

If we think about the cultural aspects of DevOps, we see a lot of transition from agile principles. We won't delve deeper into those, as we believe agile and DevOps complement each other from a cultural perspective anyway, and our focus for this article is more on DevOps technicalities. We will focus on relevant cultural elements in the rest of the article, only from an agile perspective.

DevOps is much more commonly used for cloud-based software nowadays, but DevOps was there before there was cloud. DevOps principles can be applied to any kind of software development project.

Workplace Hub (WPH) is also one of those projects that deliver software to the endpoints, not to the cloud. Although the project has some cloud elements, the majority of the software being developed runs on-premise. Actually, from a DevOps perspective, this makes no big difference. DevOps is about automation and enabling fast delivery. As long as the team can deliver fast releases, we can say that DevOps is successfully utilized.

Let's try to explain what we mean. Going back to some of the most common DevOps best practices we've been doing:

Each of these items will be explained in detail in the "How to use DevOps to enable faster delivery cycles" section. But here, we want to show that all these practices are applicable to on-premise software too. The key difference lies in the delivery method. For cloud software, the distinction between the release and the deployment is usually blurred. However, in our case, the endpoints are actually at client sites, so what the development team is responsible for is delivering the release to the deployment team, which plans the deployment, notifying the customers when necessary. Some of the updates might cause downtime, due to their nature (i.e. when the firmware for the server is updated, there has to be a restart). This requires careful planning on the deployment team's part. Following this, the development team has to deliver the releases faster, so that the deployment teams have enough time to plan the deployment.

The development team has to ensure the release is stable, can be readily deployed without issues, doesn't have known security issues, and so on. Solving a problem at a customer site is always more difficult than in cloud scenarios, where the delivery team is in charge of the infrastructure anyway. In cloud scenarios, when there is a problem with the deployment it is much easier to roll back, or diagnose and troubleshoot the problem. The cloud environments provide the necessary tools, and they can be used comparatively easily, especially by experts. In most cases, the development team is in charge of deployment too, and even if it isn't, it can work together with the deployment team much more easily because nothing needs to be planned with customers. In cloud scenarios, there is high availability, blue-green deployments, or similar strategies that can be used to avoid downtime.

But in our scenario, the deployment is being done to the WPHs at the customer site, and the network infrastructure and the other servers existing in the network aren't under our control. This requires carefully planning, deploying, and monitoring the upgrade/deployment of the new release to the endpoints. Solving any problems that occur at the customer site is costly and time-consuming, and usually not as easy as solving problems in a cloud environment. This is why DevOps becomes even more critical to ensure that the release is stable, secure, tested, and delivered faster.

A sample schema of how DevOps can work in a cloud environment vs an on-premise delivery environment can be seen below.

Following through on the definition of a release train, each train should be able to interact with its stakeholders and deliver more or less independently from the other trains.

Deciding how to construct a release train isn't as easy as it sounds. Some key questions to keep in mind are:

With these and other similar considerations in mind, organizations try to find and create the optimal release trains, which can act fairly independently from one another (from a technical as well as a customer point of view). I have to emphasize here that being completely independent is impossible for the majority of projects. The goal is to be as independent as possible; otherwise, this might turn into a game of a cat chasing its tail.

The goal of establishing release trains is ensuring there is consistent and fast delivery of customer expectations. Each customer (group) can work with different release trains to ensure they get what they want.

Another thing to keep in mind is that software projects are live systems. This means that the software will evolve with the advance of technology and new customer requirements. Changes to technology and requirements will mean that the release trains, as well as the teams, need to adapt to new situations and reorganize to cover the new status.

With this short introduction, let's take a look at our example, and how we reshaped our release trains. First of all, let me emphasize that we have 2 different sets of customers in this example. The first is, of course, our end users, the clients. WPH is delivered to the customer site with the software on it and is updated and supported remotely, or when necessary on-site. The second customer group is our support teams, who are the users of the support functionalities we deliver in our data centers and public cloud environments. We initially created 2 release trains which had different deployment environments and different customers: the Platform and Support trains. The Platform train is responsible for implementing and delivering the core functionalities we expect from the on-premise WPH, whereas the Support train covers support team requirements. Due to different deployment environments and different customers, these 2 trains have different deployment methodologies. Even though we do deployments with different frequencies for these 2 trains, we use one single Program Increment (PI, as defined by SAFe) event to cover the planning for all of the teams. As mentioned before, the teams are NOT completely independent from one another and the planning has to be done together.

Establishing the release trains, or (re)forming teams, isn't an easy task. The goal is clear: enabling faster and better delivery cycles (which in turn bring faster customer benefit and feedback), but it is also about human beings, who need to be informed about and reminded of the goals of the reorganization and listened to for their suggestions and thoughts. Some team members might be used to working with each other and sometimes can show resistance to joining other teams and working with different individuals. There is no silver bullet for this situation; only clear, transparent, and bi-directional communication can make things move. After all, these individuals are still part of the same broader team, and everyone is working for the single purpose of delivering a solution that works for the customer. The members and the trains will still have to work together. Many agile principles imply self-organizing teams. We haven't been successful with this principle so far. We think this has to be a cultural principle as well. Some companies are more successful with this approach because they start teaching and reminding people of this principle of reorganizing from hiring on. They continue to encourage their employees to reorganize or come up with their own ideas for reorganizing. The goal is always two-fold: to deliver a better solution for the customers, and to make developers' lives easier through clearer targets and fewer dependencies. All the team members need to keep in mind at all times that the broader team has to be in the best shape to deliver the best solution to the customer, and they must be willing to reorganize when necessary. Suggestions should come from the team members, who can see situations clearly by looking at the backlog, dependencies, etc. If the company culture isn't designed as such, it is all too easy to fall into the trap of "staying in the comfort zone" and implementing no changes to the team structure, thus delivering sub-optimally.

In our situation, we had cases where team members suggested creating new trains themselves (like the applications train, which will be mentioned in the next paragraph), and other cases where teams continued to deliver sub-optimally due to the lack of a clear backlog or of enough capacity to deliver. In some of these cases, some team members saw the situation and spoke up, but not all were willing to change. It was up to management to take action: to reorganize some teams to speed up the delivery, to split up teams with more capacity, etc. What we did was make the intention clear in each case (like "we need more team members for team A" or "team B isn't able to deliver a clear customer benefit so must be disbanded", etc.). Once the intention was clear, the team members came up with suggestions of which teams they could move to, or what kind of split they could apply to a growing team, from a technical standpoint. This helped us reshape the teams in a more meaningful way with much less frustration for the team members (although there were emotions, which is normal of course).

Now, we are on our way to creating a 3rd release train for applications that can run on different environments, be it WPH, cloud, or our data centers. This train will be responsible for delivering applications that run on or connect to WPH and can be used by our end users.

Here is a schema that shows how our release trains are shaping up. Team names and the number of teams aren't shared, but the concept should be quite clear.

Having different release trains enables us to package different solutions in their own separate way and deliver them separately. This allows us to deliver faster in general, because each train can deliver separately and doesn't have to wait for another. To make this work, the infrastructure of the release mechanism has to be addressed and designed so that different trains can deliver without causing each other significant disruptions, ideally no disruptions at all. We are avoiding absolute terms (like saying absolutely no disruptions) because, especially with software running on-premise, there can be challenges quite difficult to overcome, like version dependencies between software running on-premise and compatibility with the version running in the data center. The goal here is to minimize the disruption and to design the system as close to the ideal state as possible.

In this section, we'll highlight what we have done to enable faster delivery cycles from a DevOps perspective. It should be noted that the topics mentioned in all other sections of this article complement this picture. Without establishing proper release trains, or without organizing QA to contribute to faster delivery with quality, this couldn't have been possible.

We have to underline that the biggest problem we had with our releases was the huge scope. A huge scope means a very long staging time, and many bugs found and resolved, which all adds to the release time. We are now changing our release schema to deliver a smaller scope, which requires very little extra testing and yields fewer bugs due to the size of the scope. To enable this, we have been changing the architecture of our software and utilizing container-updating capabilities, which is more industry-standard and causes little to no downtime when upgrading.

Our teams have used various DevOps tools. However, governance and consolidation were missing. The lack of governance resulted in the teams using the tools as they "saw fit." There is nothing necessarily wrong with this if there aren't too many teams and there are no big integration challenges. However, in our case and in many other cases, the teams' output needed to be integrated to deliver a complete product, which means that each piece should fit into the puzzle properly. Without some governance around DevOps tools, this was proving to be impossible for us.

We decided to standardize our processes. Without going into tool details, we'll explain our guiding principles per DevOps practice.

We were using many different programming languages and different technologies, which made pipeline standardization difficult. To avoid issues with build configuration errors and manually set-up build processes, we started to treat build definitions the same way as production code. Having pipelines declared in code and versioned allowed us to make adjustments at scale (across more than 100 pipelines) safely. Adding a new step to all of our builds, like a vulnerability check, is a matter of opening a merge request and completing a review process.

The WPH development environment is quite diverse. We have different programming languages being used, which was one of the challenges in creating standardization in the first place. We have created a set of rules for each programming language and encouraged the teams to use this toolset (with additional rules from their part if they so desire), to check the code quality.

Static code analysis has a 2-fold benefit for us. First, as mentioned above, is following the coding rules which results in a successful integration. Second is making the code reviews and handovers easier. It is a very common scenario that teams change their domains, or people change their teams. In this kind of scenario, the recipient of the code is now more comfortable with the code received because it follows the defined set of rules.

We integrated a tool into our pipeline to check which open-source libraries (OSS) we were using and what their license types were. This tool is used to list the libraries in our release notes as well as take care of license-specific issues.

Depending on the license type, there are specific actions the team had to take. For instance, using an LGPL licensed library might mean the company has to expose its code too. By using this integrated tool, we have more visibility into our OSS landscape and cooperate with the legal teams about what we need to do for different cases.
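The article does not name the tool used, but the kind of inventory such tools produce is easy to approximate for a Python environment: list each installed package with the license declared in its metadata and flag the ones that may need a closer legal look. A rough sketch (the REVIEW set is illustrative, not legal guidance):

from importlib.metadata import distributions

# License families that often trigger a closer legal review (illustrative list).
REVIEW = {"GPL", "LGPL", "AGPL"}

def license_inventory():
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        lic = dist.metadata.get("License") or "UNKNOWN"
        flag = "REVIEW" if any(tag in lic.upper() for tag in REVIEW) else "ok"
        yield name, lic, flag

for name, lic, flag in sorted(license_inventory()):
    print(f"{flag:6} {name}: {lic}")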

We have been running penetration tests for our releases to check for any security issues we might have and address them before the release. This is a waterfall approach and has been costing us time. This is why we have been trying to shift-left this approach and find security flaws as early as possible in our development lifecycle.

There are also risks stemming from dependencies such as the libraries or other components being used. The libraries can bring their own risks with them.

For this reason, we have integrated a tool into our pipeline, which runs a vulnerability check and lists its findings. Using these results, we can address some critical gaps much faster, leading to faster delivery.
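In practice, "integrating a vulnerability check into the pipeline" often boils down to a gate step that parses the scanner's report and fails the build above a severity threshold. A minimal sketch is below; the report format is invented for illustration, since real scanners each emit their own JSON schema:

import json
import sys

# Fail the pipeline when the scan report contains findings at or above this severity.
THRESHOLD = "HIGH"
ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)   # assumed: a list of {"id", "severity", "package"}
    blocking = [f for f in findings
                if ORDER.index(f["severity"].upper()) >= ORDER.index(THRESHOLD)]
    for f in blocking:
        print(f"BLOCKING {f['severity']}: {f['id']} in {f['package']}")
    return 1 if blocking else 0    # a non-zero exit code fails the CI job

sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))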

Single Component View (Developer names are hidden)

Report View (Component names are hidden)

Last, but not least, we would like to emphasize how critical it is to monitor the whole delivery pipeline. Using some custom tools we have developed and are constantly improving, we now look at the status of each release train and can see whether any red flags need to be addressed by teams. Of course, each team looks at its own page and takes the necessary actions on its part, but for governance (standardizing the approach), as well as letting all stakeholders monitor the general health of the delivery, we have an overview monitoring page that shows the general status of the delivery.

The Release Portal

We used the SAFe definition of Release on Demand as another guiding principle for our delivery pipelines and have the goal of delivering in increments to customers.

This is not a new concept in agile, but just a rephrasing of it. As we've mentioned before, one of the pillars of agile is releasing faster to get customer feedback faster, learn from the feedback, and constantly evolve the software. Long waiting times and a huge scope aren't desirable in this sense because they make getting direct feedback very complicated.

SAFe, like any other agile methodology, aims to be able to release whenever a feature is ready to be released and to get feedback as soon as possible. To realize this, we have been shaping our release trains accordingly and preparing our release process and technical architecture to support releasing in small increments. Instead of releasing a fixed scope (which is generally big and subject to scope creep), we are now moving towards releasing on demand.

Here, let's define what can be released at each increment. There is no static rule for this. We can only say that the Product Owner(s) (or the customers, if you happen to have them as part of your team directly) are in charge of defining what should be released, because they are the ones who know the effect of releasing a finished implementation. This completed piece can be a completely new feature, an update to an already existing feature, maybe the removal of a feature, some bug resolution, or even some refactoring or security update to the code that isn't visible to the end-user at all but is critical from a technical standpoint. The Product Owner is in charge of judging when something is worth releasing, and potentially getting some feedback from it.

Other reasons for releasing faster have already been covered in the sections above. Here's the SAFe perspective on DevOps and Release on Demand.

Agile simply doesn't work without DevOps. Whatever kind of software you might be producing, take the DevOps principles to heart and apply them as they fit your deliverables. But even this isn't enough, as it implies that you use whatever you can use. Let's be more specific here: agile teams need to go out of their way to change their organizational structures, deliverables, and customer interactions to follow DevOps principles as much as possible. This will result in faster, more secure delivery cycles, which will enable the team to get faster feedback from the customer, which in turn feeds the next cycles' improvements.

Burak Ilter worked as the Head of Engineering at Konica Minolta. He is an IT professional with a long and diverse career. He's worked for major companies in different roles, ranging from software engineering to system engineering, and from architecture to engineering management. He has practical experience with different programming languages, methodologies, and business domains, including the public sector, defense, finance, healthcare, and productization. He is married with two children. He enjoys reading science fiction and history, and is interested in cycling and running. He is also an avid Japanese anime fan.

Follow this link:
How to Make DevOps Work with SAFe and On-Premise Software - InfoQ.com

Web Frameworks Software Market 2020 Global Share, Trend, Segmentation, Analysis and Forecast to 2026 – Virtual-Strategy Magazine

Global Web Frameworks Software Market 2020 Analysis

Wiseguyreports.Com Publishes New Market Research Report On Web Frameworks Software Market 2020: Global Analysis, Size, Share, Trends, Opportunities and Growth, Forecast 2026

Web Frameworks Software Market 2020

Market Overview

Web frameworks are critical for every organization and individual, as they provide developers with a generic foundation of particular functionality modules that can be used directly or modified for application-specific software. Web frameworks are efficient software frameworks that can be preferred specifically for developing web-based software applications, websites, and web APIs. Web frameworks software offers a potential platform for developing high-performance and efficient software applications. Web frameworks software basically defines the structure of a programming system. Apart from prominent life-simplifying features, web frameworks software offers several benefits to the developer, organization, and other end-users.

Market Segment by Top Companies, this report covers: Ruby on Rails, Django, Angular JS, ASP.NET, Meteor, Laravel, Express, Spring, PLAY, CodeIgniter

Request Free Sample Report @ https://www.wiseguyreports.com/sample-request/4451376-global-web-frameworks-software-market-2019-by-company

Most of the popular web frameworks are open-source and deliver functionality to a large number of users. Web frameworks software can also come with licensing that is not restrictive and enables developers to design a commercial product. In most cases, web frameworks software has good documentation and client support. Web frameworks software eliminates the requirement to write a lot of redundant code that developers find being used in numerous different applications. The benefit of efficiency can never be underestimated. Software developers can expect to accomplish a particular project in much less time than would be attained writing programs without web frameworks software.

The modules provided by the web frameworks software are usually developed and tested by different skilled developers, which ensures a strong level of security. It is possible that different security risks are properly addressed and tested when frameworks are being developed. The advanced source code modules offered by web frameworks software solutions have high-level integration capabilities. Web frameworks software allows developers to build almost any kind of application, including dynamic and responsive websites. There also exist several specialized tools that are critical for web development. Efficient web frameworks software makes it easier for programmers to link to these specialized tools and communicate with them.

Market Segmentation

The global web frameworks software market can be analyzed on the basis of the following segments.

Major product types: web-based web frameworks; cloud-based frameworks

Web-based framework software is developed with the objective of supporting the design and development of web-based applications, including web resources, web services, and web-based application program interfaces. Web frameworks software is, in short, a set of source code repositories/libraries that enable programmers to develop their own applications in a much faster and smarter way. Cloud-based frameworks save a lot of time and considerably reduce software development costs. These solutions enable developers to get services from a remote location and any device, just with an internet connection. Web frameworks software is critical for every small, medium, and large-sized software development organization.

Regional Analysis

North America, Europe, Asia Pacific, South America, and the Middle East and Africa are the major regions contributing to the growth of the web frameworks software market. Major IT and web development organizations operating in the Asia-Pacific region are expected to offer potential opportunities to web frameworks software vendors. A growing number of IT organizations, the rising demand for dynamic and responsive websites, and the availability of skilled web developers are some factors that are expected to support the growth of the Asia Pacific web frameworks software market. North America is another leading market for web frameworks software. Factors such as the presence of prominent market players and ongoing innovation and technological advancement in web-development procedures are expected to fuel product demand in the North American countries.

Industry News

Cloud9, the cloud-based framework, is considered the best solution for web development. Cloud9 serves as an efficient platform enabling developers to write operational code in a specialized Ubuntu-based workspace in the cloud for Python, Ruby, Node.js, PHP, and HTML. These workspaces are powered by Docker Ubuntu containers. The framework includes a chat feature that enables web developers to communicate with each other inside a particular IDE.

Complete Report Details @https://www.wiseguyreports.com/reports/4451376-global-web-frameworks-software-market-2019-by-company

Table of Contents Analysis of Key Points

1 Web Frameworks Software Market Overview

2 Manufacturers Profiles

3 Global Web Frameworks Software Market Competition, by Players

4 Global Web Frameworks Software Market Size by Regions

5 North America Web Frameworks Software Revenue by Countries

6 Europe Web Frameworks Software Revenue by Countries

7 Asia-Pacific Web Frameworks Software Revenue by Countries

8 South America Web Frameworks Software Revenue by Countries

9 Middle East and Africa Revenue Web Frameworks Software by Countries

10 Global Web Frameworks Software Market Segment by Type

11 Global Web Frameworks Software Market Segment by Application

12 Global Web Frameworks Software Market Size Forecast (2019-2024)

13 Research Findings and Conclusion

14 Appendix

List of Tables and Figures

Continued..

Media Contact
Company Name: Wiseguyreports.com
Contact Person: Norah Trent
Email: Send Email
Phone: +1 646 845 9349, +44 208 133 9349
City: Pune
State: Maharashtra
Country: India
Website: https://www.wiseguyreports.com

See the original post here:
Web Frameworks Software Market 2020 Global Share, Trend, Segmentation, Analysis and Forecast to 2026 - Virtual-Strategy Magazine

Days after AOC's Among Us stream, the game is hit with a pro-Trump hack – Dazed

This week, hundreds of thousands of viewers tuned in to watch Alexandria Ocasio-Cortez play the viral video game Among Us on Twitch, with an all-star cast brought together with the help of Chelsea Manning and Twitch streamer Hasan Piker. Just days after the wholesome footage aired, though, Among Us has been targeted with a massive hack that spams players with messages in support of Donald Trump.

In case you're still catching up, Among Us is a murder mystery-style game set in space, which sees up to ten players join a lobby. One or two of these players are impostors and try to murder the others without getting found out; the innocent have to work together to complete tasks and vote to kick suspicious players off the ship.

When AOC joined a lobby earlier this week, alongside Minnesota Congresswoman Ilhan Omar and her daughter, the climate activist Isra Hirsi, it provided the perfect opportunity to garner support for the Democrats in the upcoming election, with jokes about voting (namely: "orange sus, vote him out").

First reported on Thursday evening, however, the hacker spammed pro-Trump messages in the in-game chat through other players' avatars. Besides political messages, they also promoted various online accounts under the same handle: "subscribe to eris loris".

In response to hundreds of screenshots of the hack shared via Twitter, the Among Us developers InnerSloth have rolled out emergency maintenance. In a Twitter post, they advise users to remain careful for the time being, writing: "Please play private games or with people that you trust!"

More here:
Days after AOCs Among Us stream, the game is hit with a pro-Trump hack - Dazed

Is Encryption the Answer to Data Security Post Lockdown? #NCSAM – Infosecurity Magazine

Remote work and working from home has grown exponentially over the past decade. In fact, a 2018 study from Apricorn found that 100 per cent of surveyed IT decision makers noted that they had employees who work remotely at least some of the time.

However, the COVID-19 pandemic and resulting lockdown have forced a large number of employees into unfamiliar territory, not just remote work, but full-time working from home (WFH). While some businesses may have long adopted remote work strategies as part of increased flexibility, others have resisted due to the risks posed to data security and compliance efforts.

Worryingly, a more recent (2020) survey by Apricorn found that more than half (57 percent) of UK IT decision makers still believe that remote workers will expose their organization to the risk of a data breach. Employees unintentionally putting data at risk remains the leading cause of a data breach, with lost or misplaced devices the second biggest cause.

More than a remote risk

Whilst some are already transitioning back into the workplace, many are questioning whether WFH could become the new norm. The issue remains, however, that remote working brings a number of challenges to data protection: be it an increased risk of external attacks, or employees' tendency to relax security practices when working from home. Whatever the case, sensitive information leaving the confines of the office walls will always be more vulnerable than when it is safely secured on the corporate network.

Employees may well be tempted to use personal devices when working from home, or businesses may have introduced the need for video conferencing tools or document sharing services, but it is critical that businesses take responsibility for securing information before employees put data further at risk.

Our survey found that, of those with an information security strategy that covers employees' use of their own IT equipment for mobile/remote working, forty-two per cent said they permitted only corporate IT provisioned/approved devices, and have strict security measures in place to enforce this with endpoint control. Additionally, seven per cent tell employees they're not allowed to use removable media, but don't have technology in place to prevent this.

Every organization should cover the use of employees' own IT equipment for mobile and remote working in their information security strategy. If businesses want to secure data on the move, it is essential that encryption and endpoint control are applied to all devices, whether laptops, mobile phones, or removable devices such as USBs.

Data must remain on lockdown

Despite COVID restrictions showing some signs of easing, data must always remain on lockdown. Whether working from home or not, the GDPR has clear mandates for data encryption: firstly for compliance (Article 32); secondly to mitigate the impact on any organization that suffers a breach (Article 34), which removes the obligation to individually inform each affected citizen if the data remains unintelligible.

Additionally, article 83 suggests that fines will be moderated where the company has been responsible and mitigated any damage suffered by data subjects. Businesses will find that they are in a stronger position to defend themselves in the event of a breach should they be able to demonstrate the use of encryption practices.

The good news is that we have seen an increase in encryption and endpoint control. Nearly all survey respondents (94%) say their organization has a policy that requires encryption of all data held on removable media. Of those that encrypt all data held on removable media, more than half (57%) hardware encrypt all information as standard.

Businesses are seeing the value of encryption, but this is an ongoing process and it needs to cover all devices. The research highlighted that a number of those surveyed have no further plans to expand encryption on USB sticks (38%), laptops (32%), desktops (37%), mobiles (31%) and portable hard drives (40%). With so much data now moving beyond the corporate perimeter, it's imperative to address the importance of encryption in protecting sensitive information, whilst giving staff the flexibility required to work remotely.

The value of encryption

Hardware encryption offers much greater security than software encryption, and PIN pad authenticated, hardware encrypted USB storage devices offer additional, significant benefits. Being software-free eliminates the risk of keylogging and doesn't restrict usage to specific operating systems; all authentication and encryption processes take place within the device itself, so passwords and key data are never shared with a host computer. This makes it particularly suited for use in highly regulated sectors such as defense, finance, government and healthcare.

By deploying removable storage devices with built-in hardware encryption, a business can roll this approach out across the workforce, ensuring all data can be stored or moved around safely offline. Even if the device is lost or stolen, the information will be unintelligible to anyone not authorized to access it.
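Where hardware-encrypted drives are not available, software encryption is still far better than writing plaintext to removable media. Below is a minimal sketch using the widely used Python cryptography library; the file name is hypothetical, and this is illustrative only, not a substitute for the hardware-encrypted, PIN-authenticated devices recommended above, since the key itself still has to be stored and handled securely on the host:

from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and store it securely; anyone holding it can decrypt the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a file before copying it to removable media.
with open("report.xlsx", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())
with open("report.xlsx.enc", "wb") as fh:
    fh.write(ciphertext)

# If the drive is lost or stolen, the .enc file is unintelligible without the key.
plaintext = fernet.decrypt(ciphertext)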

The pandemic has thrown up many challenges this year, but data protection should not have been one of them. It should not be an afterthought, something incorporated into the business strategy as a result of an incident, but one thats core to business operations and security best practice.

Organizations should analyze their data, identify everything that should be protected, understand where it exists and how it is transported, and ensure that it is encrypted at all stages of its lifecycle. Encryption and endpoint control can ensure that data remains secure and businesses can be prepared for the risks that come with an enduring remote workforce.

See the article here:
Is Encryption the Answer to Data Security Post Lockdown? #NCSAM - Infosecurity Magazine

The Police Can Probably Break Into Your Phone – The New York Times

The companies frequently turn over data to the police that customers store on the companies' servers. But all iPhones and many newer Android phones now come encrypted, a layer of security that generally requires a customer's passcode to defeat. Apple and Google have refused to create a way in for law enforcement, arguing that criminals and authoritarian governments would exploit such a back door.

The dispute flared up after the mass shootings in San Bernardino, Calif., in 2015 and in Pensacola, Fla., last year. The F.B.I. couldn't get into the killers' iPhones, and Apple refused to help. But both spats quickly sputtered after the bureau broke into the phones.

"Phone-hacking tools have served as a kind of a safety valve for the encryption debate," said Riana Pfefferkorn, a Stanford University researcher who studies encryption policy.

Yet the police have continued to demand an easier way in. Instead of saying, "We are unable to get into devices," they now say, "We are unable to get into these devices expeditiously," Ms. Pfefferkorn said.

Congress is considering legislation that would effectively force Apple and Google to create a back door for law enforcement. The bill, proposed in June by three Republican senators, remains in the Senate Judiciary Committee, but lobbyists on both sides believe another test case could prompt action.

Phone-hacking tools typically exploit security flaws to remove a phone's limit on passcode attempts and then enter passcodes until the phone unlocks. Because of all the possible combinations, a six-digit iPhone passcode takes on average about 11 hours to guess, while a 10-digit code takes 12.5 years.
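The arithmetic behind those figures is simple to reproduce: assume the tool tries passcodes at a fixed rate and that, on average, half of the keyspace must be searched. The guess rate below (about 12.6 per second) is inferred from the times quoted in the article, not a documented specification:

# Rough brute-force time estimate for numeric passcodes (illustrative).
GUESSES_PER_SECOND = 12.6  # inferred from ~11 hours for 6 digits, ~12.5 years for 10

def average_crack_time_seconds(digits: int) -> float:
    keyspace = 10 ** digits
    return (keyspace / 2) / GUESSES_PER_SECOND  # on average half the codes are tried

for digits in (4, 6, 10):
    seconds = average_crack_time_seconds(digits)
    print(f"{digits}-digit passcode: {seconds / 3600:,.1f} hours "
          f"({seconds / (3600 * 24 * 365):,.1f} years)")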

The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan's Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

Original post:
The Police Can Probably Break Into Your Phone - The New York Times