Chelsea Manning to return to active duty after prison release – Army Times

Pvt. Chelsea Manning is getting out of prison on Wednesday, and because her court-martial conviction is still under appeal, she'll be staying in the Army for the foreseeable future.

Manning was sentenced to 35 years in prison back in 2013, but an order by former President Obama in January commuted her sentence to seven years from the date of her initial arrest, which works out to May 17, 2017.

She won't draw a paycheck once she's out, but she will be eligible for some benefits, according to an Army spokesman.

"Pvt. Manning is statutorily entitled to medical care while on excess leave in an active duty status, pending final appellate review," said Dave Foster. "In an active duty status, although in an unpaid status, Manning is eligible for direct care at medical treatment facilities, commissary privileges, Morale Welfare and Recreation privileges, and Exchange privileges."

The former intelligence analyst, who was court-martialed as Pfc. Bradley Manning, was convicted of leaking thousands of documents to Wikileaks in 2010. News of her return to active duty was first reported by USA Today.

Soon after being incarcerated at U.S. Disciplinary Barracks in Leavenworth, Kansas, Manning came out as transgender and began taking hormones and living as a woman in prison.

Manning's fragile mental state, including a suicide attempt and subsequent stay in solitary confinement, informed Obama's decision to order her early release. It was a decision that was met with fierce opposition from lawmakers and service members alike.

Shortly after his decision was announced, Obama told reporters he granted clemency to Manning because she had gone to trial, taken responsibility for her crime and received a sentence that was harsher than other leakers have received. He added that he did not grant Manning a pardon, which would have symbolically forgiven her for the crime.

"I feel very comfortable that justice has been served," Obama said at the time.

The Army declined to provide details about where Manning will be stationed, citing privacy and security concerns.

Meghann Myers is the Pentagon bureau chief at Military Times. She covers operations, policy, personnel, leadership and other issues affecting service members.


10 books you should read in October, including David Bowie’s Moonage Daydream and William Shatner’s Boldly Go – The A.V. Club

Depending on how generous you are with the definition of memoir, this might be Shatner's ninth autobiographical outing. At 91, the Star Trek actor is still hungry for more adventures, more outlets to express himself, and more work. (He hosts a History show, recently dropped another spoken-word album, and is writing lyrics for his next.) Shatner delivers on his subtitle, offering musings about nature (and his deep regret at having hunted for sport), the beauty of life, and the erotic energy of toasted rye bread. The man is nothing if not in touch with his emotions. He recalls how last year he rode Jeff Bezos' Blue Origin to the edge of space; the sight of the vast, cold expanse filled him with unexpected dread and moved him to tears. Another recollection delivers on the title's unintentional promise of going, boldly: Midway through the premiere of his one-man show in 2012, he shat(nered?) his pants. Quickly announcing a "technical difficulty," he ran offstage, changed, then stepped back into the spotlight to finish his show, a testament to his work ethic. Not all the material here is fresh, but much of it is fun.

An aside for fans of celeb memoirs: This month has a pre-holiday bumper crop. Besides Shatner and Wu, there are titles from Jemele Hill, Tom Felton, Ralph Macchio, Geena Davis, Sam Heughan, and Chelsea Manning, as well as posthumous fare from Paul Newman and Alan Rickman.


State of Open Source Survey By OpenLogic To Take Place In 2023 – Open Source For You

The Open Source Initiative and OpenLogic by Perforce announce the launch of the 2023 State of Open Source Survey.

OpenLogic by Perforce and the Open Source Initiative (OSI), a non-profit that promotes the use of open source software (OSS), have joined forces to create the 2023 State of Open Source Survey, which Perforce Software announced today has begun. The survey, which examines how open source software is used and managed on a daily basis, is scheduled to continue until November. The data collected from the survey will serve as the foundation for the 2023 OpenLogic and OSI State of Open Source Report.

Nearly 77% of firms are expanding their use of open source software, according to the 2022 State of Open Source Survey, which garnered responses from over 2600 open source users. The vast talent shortages that accompanied that increase, however, were reported as a barrier to the adoption of open source software by over 30% of respondents.

New this year, the survey will raise money for World Food Program USA, which helps the United Nations World Food Programme save lives in emergencies and use food aid to create a path to peace, stability, and prosperity for people recovering from conflict, natural disasters, and the effects of climate change. OpenLogic by Perforce will donate $1 to World Food Program USA for each legitimate survey response.

"Last year was our biggest survey and report to date," said Javier Perez, Chief Evangelist and Director of Product Management at Perforce Software. "This year, we hope to expand participation in the survey, raise money for a great global cause, and deliver an even better look into the benefits and challenges organizations encounter when using open source software today."

"For enterprises using open source software, understanding the trends shaping the open source ecosystem is essential," said Stefano Maffulli, Executive Director at OSI. "This survey will provide the inside data and analysis teams need to make informed decisions about adopting and using open source software, and hopefully raise a lot of money for a great cause."


15-Year-Old Python Vulnerability Still Affects Over 350,000 Open-Source Projects – Spiceworks News and Insights

A vulnerability discovered over 15 years ago still plagues hundreds of thousands of open source projects today, according to Trellix, raising supply chain security concerns. Assigned CVE-2007-4559, the bug was discovered in 2007 and still exists in the tarfile module of Python.

The Trellix Advanced Research Center came across the path traversal attack vulnerability during an investigation into a separate vulnerability. CVE-2007-4559 impacts some 350,000 open-source projects and an unknown number of closed-source projects, escalating fears of software supply chain attacks. According to NCC Group, attacks against organizations in the global supply chain increased by 51% between July and December 2021.

Christiaan Beek, head of adversarial & vulnerability research at Trellix, said, "When we talk about supply chain threats, we typically refer to cyber-attacks like the SolarWinds incident; however, building on top of weak code foundations can have an equally severe impact."

Besides machine learning, automation applications, and Docker containerization, the vulnerable tarfile module of Python is leveraged by AWS, Google, Intel, Facebook, and Netflix for specific frameworks. The tarfile module ships in Python's standard library, so it is present by default in any project that leverages Python unless deliberately replaced.

This vulnerability's pervasiveness is furthered by industry tutorials and online materials propagating its incorrect usage. It's critical for developers to be educated on all layers of the technology stack to properly prevent the reintroduction of past attack surfaces.

CVE-2007-4559 enables arbitrary code execution. Although its CVSS score of 5.1 suggests a medium-severity vulnerability, Trellix said it is relatively easy to exploit, requiring as little as six lines of code.

The tarfile module in Python enables developers to read and write tar archives; tar is a UNIX-based utility used to package uncompressed or compressed (using gzip, bzip2, etc.) files together for backup or distribution.
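As a quick standalone illustration (not taken from the Trellix report), the module's basic read/write API looks like this:

```python
import os
import tarfile
import tempfile

# Create a gzip-compressed tar archive containing one file, then list its members.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "hello.txt")
with open(src, "w") as f:
    f.write("hello, tar\n")

archive = os.path.join(workdir, "backup.tar.gz")
with tarfile.open(archive, "w:gz") as tar:   # "w:gz" = write, gzip-compressed
    tar.add(src, arcname="hello.txt")

with tarfile.open(archive, "r:gz") as tar:   # "r:gz" = read, gzip-compressed
    names = tar.getnames()

print(names)  # → ['hello.txt']
```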

The 2007 path traversal vulnerability exists because of a few unsanitized lines of code in tarfile. The tarfile.extract() and tarfile.extractall() functions are coded without any safety mechanism that sanitizes or reviews the path supplied to them for file extraction from tar archives.

So when these extract functions process a TarInfo object whose member name contains path components such as "..", the result is directory traversal. In other words, files are extracted to whatever destination the archive specifies, without any appropriate safety check.
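A minimal sketch of the problem and one possible mitigation follows. The helpers `is_within_directory` and `safe_extractall` are illustrative names, not part of the standard library; newer Python releases (3.12, with backports to security releases of older versions) also add a `filter=` argument to `extractall()` that addresses this class of bug.

```python
import io
import os
import tarfile

# Build an in-memory tar whose single member has a path-traversal name.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"owned"
    info = tarfile.TarInfo(name="../evil.txt")  # escapes the extraction directory
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

def is_within_directory(directory, target):
    """Reject members that would resolve outside the extraction directory."""
    directory = os.path.abspath(directory)
    target = os.path.abspath(target)
    return os.path.commonpath([directory]) == os.path.commonpath([directory, target])

def safe_extractall(tar, path="."):
    """extractall() with a pre-flight check on every member's destination."""
    for member in tar.getmembers():
        dest = os.path.join(path, member.name)
        if not is_within_directory(path, dest):
            raise ValueError(f"blocked path traversal in member: {member.name}")
    tar.extractall(path)

with tarfile.open(fileobj=buf, mode="r") as tar:
    try:
        safe_extractall(tar, "extracted")
        blocked = False
    except ValueError:
        blocked = True

print(blocked)  # → True
```

Running the same archive through a bare `tar.extractall("extracted")` would instead write `evil.txt` one level above the target directory, which is exactly the behavior the Python docs warn about.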

Trellix Threat Labs vulnerability researcher Kasimir Schulz said, "This vulnerability is incredibly easy to exploit, requiring little to no knowledge about complicated security topics. Due to this fact and the prevalence of the vulnerability in the wild, Python's tarfile module has become a massive supply chain issue threatening infrastructure around the world."

See More: Why Software Bill of Materials (SBOM) Is Critical To Mitigating Software Supply Chain Risks

"Not only has this vulnerability been known for over a decade, the official Python docs explicitly warn to 'never extract archives from untrusted sources without prior inspection' due to the directory traversal issue," noted Charles McFarland, vulnerability researcher on Trellix's Advanced Threat Research team.

Tarfile Extract Warning to Python Developers | Source: Trellix

The number of unique projects/repositories on GitHub that include import tarfile in their Python code is 588,840. However, 61% of these repositories did not sanitize tarfile members before extraction, putting the number of vulnerable repositories at roughly 350,000.
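Trellix's actual scanner is not reproduced here, but the idea can be approximated with a toy heuristic (regex-based and purely illustrative; a real analysis would parse the AST rather than pattern-match source text):

```python
import re

def looks_vulnerable(source: str) -> bool:
    """Crude check: tarfile imported, extract called, no visible sanitization."""
    uses_tarfile = re.search(r"^\s*import tarfile|^\s*from tarfile import", source, re.M)
    extracts = re.search(r"\.extract(all)?\(", source)
    sanitizes = re.search(r"getmembers\(\)|commonpath|startswith\(|filter=", source)
    return bool(uses_tarfile and extracts and not sanitizes)

vulnerable_snippet = "import tarfile\nt = tarfile.open('a.tar')\nt.extractall('out')\n"
safe_snippet = "import tarfile\nt = tarfile.open('a.tar')\nt.extractall('out', filter='data')\n"

print(looks_vulnerable(vulnerable_snippet), looks_vulnerable(safe_snippet))  # → True False
```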

Trellix also pointed out that since machine learning tools like GitHub Copilot are trained on vulnerable GitHub repositories, they are learning to do things insecurely, not through any fault of the tool but because it learned from everyone else.

Trellixs analysis of project domains impacted by CVE-2007-4559 revealed the following:

Project Domains Impacted by CVE-2007-4559 | Source: Trellix

It should be noted that Trellixs research on vulnerable projects is limited to GitHub. So it is likely that other projects are also affected by the 15-year-old vulnerability.

The software supply chain can have hundreds of vendors that supply applications, independent code, software, libraries, and other dependencies. When vulnerable dependencies such as the tarfile module are integrated with third-party providers, service providers, contractors, resellers, etc., it expands the attack surface of everyone in the chain while simultaneously weakening the security fabric of even those with appropriate security hygiene practices.

"While we can't provide as detailed an analysis [of closed-source projects] as we can with open-source projects, it is fair to expect the trend to be similar. What if 61% of all projects, open- and closed-source, could be exploited due to this vulnerability?" asks Douglas McKee, principal engineer and director of vulnerability research for Trellix Threat Labs.

"To do our part, Trellix is releasing a script which can be used to scan one or multiple code repositories, looking for the presence and likelihood of exploitation for CVE-2007-4559. Additionally, we are working on automating submissions of pull requests to open-source projects which can be confirmed to be exploitable," McKee added.

Trellix has automated mass repository forking, mass repository cloning, code analysis, code patching, code commits, and pull requests. Patches by the company for 11,005 repositories are ready for pull requests. Trellix is developing patches for more projects.

"The number of vulnerable repositories we found begs the question: which other N-day vulnerabilities are lurking around in OSS, undetected or ignored for years?" McFarland added. "If this tarfile vulnerability is any indicator, we are woefully behind and need to increase our efforts to ensure OSS [open source software] is secure."

To check if your project/repository is vulnerable to CVE-2007-4559, refer to this GitHub documentation by Trellix.



How Can Open Source Sustain Itself without Creating Burnout? – thenewstack.io

The whole world uses open source, but as we've learned from the Log4j debacle, free software isn't really free. Organizations and their customers pay for it when projects aren't frequently updated and maintained.


How can we support open source project maintainers and how can we decide which projects are worth the time and effort to maintain?

"A lot of people pick up open source projects, and use them in their products and in their companies, without really thinking about whether or not that project is likely to be successful over the long term," Dawn Foster, director of open source community strategy at VMware's open source program office (OSPO), told The New Stack's audience during this On the Road edition of The New Stack's Makers podcast.

In this conversation, recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure, and on how such projects can be sustained over the long term.

The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.

One of the first ways to evaluate the health of an open source project, Foster said, is the "lottery factor": "It's basically, if one of your key maintainers for a project won the lottery, retired on a beach tomorrow, could the project continue to be successful?"

"And if you have enough maintainers and you have the work spread out over enough people, then yes. But if you're a single-maintainer project and that maintainer retires, there might not be anybody left to pick it up."

Foster is on the governing board for a project called Community Health Analytics Open Source Software (CHAOSS, to its friends) that aims to provide some reliable metrics to judge the health of an open source initiative.

The metrics CHAOSS is developing, she said, "help you understand where your project is healthy and where it isn't, so that you can decide what changes you need to make within your project to make it better."

CHAOSS uses tooling like Augur and GrimoireLab to help get notifications and analytics on project health. And it's friendly to newcomers, Foster said.

"We spend a lot of time just defining metrics, which means working in a Google Doc and thinking about all of the different ways you might possibly measure something, something like: are you getting a diverse set of contributors into your project from different organizations, for example."
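As a rough sketch of the kind of measurement CHAOSS formalizes, here is a toy calculation of contributor-organization shares from a commit log. This is illustrative only, not a CHAOSS tool; in practice, projects like Augur and GrimoireLab compute such metrics against real repository data, and email domains are only a crude stand-in for organizations.

```python
from collections import Counter

# Toy commit log; email domains stand in for contributor organizations.
commits = [
    "alice@bigco.com", "bob@bigco.com", "alice@bigco.com",
    "carol@smallco.io", "dan@uni.edu", "erin@bigco.com",
]

def org_shares(commit_emails):
    """Fraction of commits contributed by each organization (email domain)."""
    counts = Counter(email.split("@")[1] for email in commit_emails)
    total = sum(counts.values())
    return {org: n / total for org, n in counts.items()}

shares = org_shares(commits)
top_org, top_share = max(shares.items(), key=lambda kv: kv[1])
print(top_org, round(top_share, 2))  # → bigco.com 0.67
```

A project dominated by one organization scores high here, which is exactly the "lottery factor" risk Foster describes: if that one employer pulls out, the project may lose most of its maintainers at once.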

It's important to pay open source maintainers in order to help sustain projects, she said: "The people that are being paid to do it are going to have a lot more time to devote to these open source projects. So they're going to tend to be a little bit more reliable, just because they're going to have a certain amount of time that's devoted to contributing to these projects."

Not only does paying people help keep vital projects going, but it also helps increase the diversity of contributors, "because by paying people salaries to do this work in open source, you get people who wouldn't naturally have time to do that."

"So in a lot of cases, this is women who have extra childcare responsibilities. This is people from underrepresented backgrounds who have other commitments outside of work," Foster said. "But by allowing them to do that within their work time, you not only get healthier, longer-sustaining open source projects, you get more diverse contributions."

The community can also help bring in new contributors by providing solid documentation and easy onboarding for newcomers, she said. "If people don't know how to build your software, or how to get a development environment up and running, they're not going to be able to contribute to the project."

And showing people how to contribute properly can help alleviate the issue of burnout for project maintainers, Foster said: "Any random person can file issues and bug maintainers all day, in ways that are not productive. And, you know, we end up with maintainer burnout because we just don't have enough maintainers."

"Getting new people into these projects and participating in ways that are eventually reducing the load on these horribly overworked maintainers is a good thing."

Listen or watch this episode to learn more about maintaining open source sustainability.


OpenAI opens doors to DALL-E after the horse has bolted to Midjourney and others – The Register

OpenAI on Wednesday made DALL-E, its cloud service for generating images from text prompts, available to the public without any waitlist. But the crowd that had gathered outside its gate may have moved on.

The original DALL-E debuted in January 2021 and was superseded by DALL-E 2 this April. The latest release, which offers much improved text-to-image capabilities, allowed people to sign up to use the service but placed aspiring AI artists on a waitlist, one that didn't move in the past five months for this Reg reporter. The newly public service is called DALL-E, although it's still version 2 of the technology.

OpenAI justified the closed list by citing the need to be cautious. The org wanted to prevent users from generating violent, hateful, or pornographic imagery, and to prevent the creation of photorealistic images of public figures. And it created policies to that effect, because abuse and misinformation are genuine concerns with machine-learning image creation technology.

"To ensure responsible use and a great experience, we'll be sending invites gradually over time," OpenAI advised beta registrants in April via email. "We'll let you know when we're ready for you."

While OpenAI was doling out access at 1,000 users per week (as of May), Midjourney, a rival AI-based text-to-image service, entered public beta in July. Midjourney's Discord server, through which users interact with the service, reportedly reached about one million users by the end of July.

That was about the number of invitations extended by OpenAI at the time, following a transition to beta testing. Midjourney's Discord server currently lists 2.7 million members, while OpenAI presently claims to have 1.5 million users.

In August, another AI image generation company called Stability.ai released its own text-to-image model called Stable Diffusion, under a permissive CreativeML Open RAIL-M license.

The result was a surge of interest in Stable Diffusion, because people can run the code on a local computer without concern for fees; OpenAI and Midjourney require payment once users have exceeded their free-tier allowances.

Also, Stable Diffusion is seen as a way to create explicit images without concern for censorious cloud gatekeepers, whether or not those images comply with the limited (and unlikely to be enforced) restrictions in the Stable Diffusion license.

"In just a few days, there has been an explosion of innovation around it," wrote Simon Willison, an open source software developer, in a blog post about a week after Stable Diffusion's public release. "The things people are building are absolutely astonishing."

Just one month on, it looks like OpenAI is late out of the starting gate.

"DALL-E has been opened up to everyone (no waitlist)!" quipped Brendan Dolan-Gavitt, assistant professor in the computer science and engineering department at NYU Tandon, via Twitter. "It's amazing what a few weeks of competition from open source can do ;)"

"The challenge OpenAI are facing is that they're not just competing against the team behind Stable Diffusion, they're competing against thousands of researchers and engineers who are building new tools on top of Stable Diffusion," Willison told The Register.

"The rate of innovation there in just the last five weeks has been extraordinary. DALL-E is a powerful piece of software but it's only being improved by OpenAI themselves. It's hard to see how they'll be able to keep up."

Artist Ryan Murdock (@advadnoun), who helped jumpstart text-to-image AI by flipping OpenAI's CLIP prompt evaluation model around and connecting it to VQGAN, expressed a similar sentiment.

"I think OpenAI is still relevant but DALL-E is not," he said in a discussion with The Register. "I see very few people using DALL-E in the scene because it costs money, is gated in terms of what it can or will produce, and can't be used with interesting new research."

Murdock also observed that the texture of DALL-E images "looks really bad because the superresolution isn't conditioned on the text."

That's one area where open source innovation has helped: among the first additions to the Stable Diffusion image generation process were two code libraries, GFPGAN and Real-ESRGAN, which handle the repair of AI face rendering errors and image upscaling respectively.

Citing the ongoing debate about image ownership (many artists are not thrilled their work was used without their consent to train these models), Murdock said that ship seems to have sailed, because Stable Diffusion's models now live on people's computers. He anticipates even more pushback as these AI models evolve to generate video.

Undaunted by external developments that have commodified AI image generation, and touting more robust filtering to ensure image safety, OpenAI sees a business opportunity.

"We are currently testing a DALL-E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system," the company said.


Rust programming language: Driving innovation in unexpected places – ZDNet

Image: Getty Images/Jung Getty

Software engineers at car maker Volvo have detailed why they are fans of the Rust programming language and argue that Rust is actually "good for your car".

It seems everyone loves Rust, from Microsoft's Windows and Azure teams, to Linux kernel maintainers, Amazon Web Services, Meta, the Android Open Source Project and more. And now it seems it's time to add software engineers at Volvo to that list.

Julius Gustavsson, a technical expert and system architect at Volvo Cars Corporation, explains "Why Rust is actually good for your car" in an interview on Medium with fellow Volvo software engineer, Johannes Foufas.

Rust is a relatively young language that helps developers avoid the memory-related bugs that C and C++ do not automatically prevent, hence Rust's growing popularity in systems programming. Memory-related bugs are the most common source of severe security issues, according to Microsoft and Google's Chrome team.

Gustavsson brings a perspective from embedded systems development to the debate.

Volvo, along with the auto industry in general, is looking toward "software-defined cars" that can be customized, differentiated and improved after they leave the car yard.

The main benefits he sees from Rust include: not having to think about race conditions and memory corruption, and memory safety in general. "You know, just writing correct and robust code from the start," he said.

Gustavsson says he started bringing Rust into Volvo with the Low Power node of the core computer.

Gustavsson sees a bright future for Rust at Volvo, but that doesn't mean using it to replace already-working code that has been adequately tested. He notes that new Rust code can coexist with existing C and C++ at "almost arbitrary granularity," and that it could make sense to cherry-pick components to rewrite in Rust where cybersecurity matters.

"We want to expand Rust here at Volvo Cars to enable it on more nodes, and to do that, we need to get compiler support for certain hardware targets and OS support for other targets. There is no point in replacing already developed and well-tested code, but code developed from scratch should definitely be developed in Rust, if at all feasible.

"That is not to say that Rust is a panacea. Rust has some rough edges still, and it requires you to make certain trade-offs that may not always be the best course of action. But overall, I think Rust has huge potential to allow us to produce higher-quality code up front at a lower cost, which in turn would reduce our warranty costs, so it's a win-win for the bottom line," he said.

Volvo isn't the only automaker interested in Rust. Autosar, an automotive standards group whose members include Ford, GM, BMW, Bosch, Volkswagen, Toyota, Volvo and many more, in April announced a new subgroup within its Working Group for Functional Safety (WG-SAF) to explore how Rust could be used in one of its reference platforms. SAE International also set up a task force to look at Rust in the automotive industry for safety-related systems.

Rust has also been in the news with Mark Russinovich, the chief technology officer of Microsoft Azure, saying that developers should avoid using C or C++ programming languages in new projects and instead use Rust.


FACT SHEET: The Biden-Harris Administration Announces More Than $8 Billion in New Commitments as Part of Call to Action for White House Conference on…

Today, for the first time in more than half a century, President Biden is hosting the White House Conference on Hunger, Nutrition, and Health to catalyze action for the millions of Americans struggling with food insecurity and diet-related diseases like diabetes, obesity, and hypertension. The Conference will lay out a transformational vision for ending hunger and reducing diet-related disease by 2030, all while closing disparities among the communities that are impacted most.

Achieving our goals will require more than just the resources of the federal government. That's why, this summer, the White House launched a nationwide call to action to meet the ambitious goals laid out by the President. Across the whole of society, Americans responded and advanced more than $8 billion in private- and public-sector commitments. These range from bold philanthropic contributions and in-kind donations to community-based organizations, to catalytic investments in new businesses and new ways of screening for and integrating nutrition into health care delivery. At least $2.5 billion will be invested in start-up companies that are pioneering solutions to hunger and food insecurity. Over $4 billion will be dedicated toward philanthropy that improves access to nutritious food, promotes healthy choices, and increases physical activity.

Today, the White House announces a historic package of new actions that business, civic, academic, and philanthropic leaders will take to end hunger and to reduce diet-related disease.

Pillar 1: Improve Food Access and Affordability

Pillar 2: Integrate Nutrition and Health

Pillar 3: Empower Consumers to Make and Have Access to Healthy Choices

Pillar 4: Support Physical Activity for All

Pillar 5: Enhance Nutrition and Food Security Research

Each of these commitments demonstrates the tremendous impact that is possible when all sectors of society come together in service of a common goal. The Biden-Harris Administration looks forward to working with all of these extraordinary leaders and to the many more that will come forward to end hunger and reduce diet-related disease by 2030.

###


Trellix Forms Advanced Research Center To Boost Intelligence And Product Capabilities – CRN

Security News | Jay Fitzgerald | September 28, 2022, 02:37 PM EDT

"One of the most important things that we can help our customers with is just bringing them the right intelligence, the right content," says CEO Bryan Palma.

Introducing a new partner program and product initiatives aren't the only things Trellix has been unveiling of late.

The cybersecurity giant announced just prior to this week's Trellix Expand 2022 conference that it was creating a new advanced research center within the company to enhance its global threat intelligence capabilities.

"One of the most important things that we can help our customers with is just bringing them the right intelligence, the right content," said Bryan Palma, chief executive of the San Jose, Calif.-based Trellix, a major provider of XDR offerings.

[RELATED STORY: Trellix Channel Chief Shares How to Build a Services Practice with XDR]

Palma told CRN that creating the new center entailed pulling together units from the old FireEye and McAfee Enterprise entities that were combined earlier this year to create Trellix, which is owned by private equity firm Symphony Technology Group.

"We've got some of the most talented researchers and investigators in the business," Palma said. "With the amount of installed technology we have, we see a lot of telemetry, which helps us create the necessary intelligence to power our systems and specifically to power our XDR platform."

The Advanced Research Center is "the coming together of multiple research and product research capabilities within Trellix," Aparna Rayasan, chief products officer at Trellix, told CRN.

She said the new center, which employs nearly 300 employees, is built on five pillars of focus: product research and development, threat intelligence, adversarial resilience and advocacy, research engineering, and data science.

Each pillar contributes to better intelligence gathering and analysis, as well as better products and services in general, she said.

"It is creating efficiencies," she said. "It's creating the differentiator in our products. And it's also helping us mine vast data. It's definitely covering much more surface areas than we would have otherwise."

Rayasan, who is currently conducting a search for a permanent director of the new center, said she "absolutely" sees the center expanding in the future.

In particular, she praised the threat-intelligence unit and said it's actively hiring highly experienced personnel. She noted that many of Trellix's threat-intel employees hail from previous positions within the U.S. military and government agencies.

The center has already identified one cybersecurity threat that's garnered some attention over the past week: a 15-year-old vulnerability in the open source Python programming language that's still lurking in existing code and that theoretically puts 350,000 open-source coding projects at risk.

Douglas McKee, director of vulnerability research at Trellix, said his team found no recent malicious use of the Python vulnerability. But the vulnerability, if left unpatched, could still be used to launch supply chain attacks, even though it dates to 2007, he said.

McKee, whose team is now part of Trellix's new advanced research center, said he's hoping and expecting further intelligence

"I'm really excited to see Trellix put together this advanced threat center," he said. "(It) helps combine a bunch of elite researchers towards a common goal. I think it's really going to be a positive impact for the company and the industry moving forward."

Jay Fitzgerald is a senior editor covering cybersecurity for CRN. Jay previously freelanced for the Boston Globe, Boston Business Journal, Boston magazine, Banker & Tradesman, MassterList.com, Harvard Business School's Working Knowledge, the National Bureau of Economic Research and other entities. He can be reached at jfitzgerald@thechannelcompany.com.


An open-source computational tool for measuring bacterial biofilm morphology and growth kinetics upon one-sided exposure to an antimicrobial source

The following section describes a series of morphological measurements of B. subtilis macrocolonies' response to one-sided CHX exposure. The first subsection details changes related to macrocolony growth and expansion; the second subsection focuses on GFP signal intensity and additional phenomena.

Macrocolony growth and expansion. (a) Fluorescent images of B. subtilis macrocolony development over a period of 3 days. The CHX droplet is located horizontally to the right of the macrocolonies in each image, at a distance of 1 cm from the macrocolony center. (b) Total coverage area (µm²) of macrocolonies.

Figure 1a shows the original macrocolony images, as obtained by fluorescent microscopy 24, 48 and 72 h after initial seeding. On a macro scale, Fig. 1b demonstrates an inverse relationship between the distance of the CHX source from the seeding point and the expansion rate of the macrocolony. Macrocolonies seeded with CHX at the 1 cm (closest) distance exhibited statistically significant reductions in expansion across all 3 days. In contrast, macrocolonies with CHX at 1.5 cm were significantly smaller only on day 3, while 2 cm macrocolonies did not differ from control on any of the days. Table 1 summarizes the relevant p-values (two-sided t-test).

The morphological changes that occur as a result of CHX proximity can be seen on days 2 and 3: the colony periphery on the exposed (i.e., right-hand) side of the macrocolony is notably thinner than that on the unexposed (i.e., left-hand) side (Fig. 1a). To quantify the morphological changes that occur in B. subtilis macrocolonies as a result of proximity to the CHX source during maturation, a series of computational measurements was applied to the images (Fig. 2a). First, the macrocolony was segmented into exposed and unexposed sides by a vertical cut passing directly through the colony center (i.e., the seeding point); the separating line is shown in yellow. For each macrocolony, a binary image was obtained using Otsu's thresholding method. Next, an outer contour surrounding the entire macrocolony was determined by applying a border-following algorithm to the binary image from the previous step; the resulting contour is shown in red. For both the exposed and unexposed sides, the half-contour was mirrored around the separating line. The resulting mirrored contours can be seen in Fig. 2a, middle column: the top image shows the unexposed-side contour mirrored onto the exposed side, while the bottom image does the same for the contour of the exposed side. Each of the two contours was then fitted to an ellipse, shown in white (Fig. 2a, rightmost column), and the semi-major and semi-minor axes of the fitted ellipses were measured.
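The pipeline above relies on standard building blocks (Otsu thresholding, border following, ellipse fitting) available in common image-processing libraries such as OpenCV. As a minimal, dependency-free sketch of the first step only (not the authors' exact implementation), Otsu's threshold can be computed from the image histogram with NumPy; the synthetic bimodal image here is a stand-in for a fluorescent macrocolony image:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    probs = hist / img.size
    bins = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = probs[:t].sum()          # weight of the "background" class
        w1 = 1.0 - w0                 # weight of the "foreground" class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (bins[:t] * probs[:t]).sum() / w0   # class means
        mu1 = (bins[t:] * probs[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark background, bright central "colony"
rng = np.random.default_rng(0)
img = rng.normal(40, 5, (64, 64))
img[16:48, 16:48] = rng.normal(200, 5, (32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t   # binary mask separating colony from background
```

In practice, the resulting binary mask would feed the border-following step (e.g., `cv2.findContours`) and the ellipse fit (`cv2.fitEllipse`).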

Illustration of inhibition measurement at the periphery. (a) The macrocolony is divided vertically into unexposed (left) and exposed (right) halves. The CHX spot lies horizontally to the right of the macrocolony in each image. Each macrocolony half is separately mirrored and the resulting contour fitted to an ellipse. The red background in the leftmost image reflects the Euclidean distance of each pixel from the CHX source. Outer contours are shown in bright red. (b) Colony periphery deformation analysis. At each distance from the CHX source (control and 1/1.5/2 cm), the ratio between the unexposed and exposed halves is shown for the horizontal (left) and vertical (right) radii.

Figure 2b demonstrates the differences in morphology that occur between the exposed and unexposed sides, in both the horizontal (left) and vertical (right) planes. The loss of symmetry that occurs in macrocolonies as a result of CHX proximity on day 3 is statistically significant in the horizontal and vertical planes only in macrocolonies with CHX placed at a distance of 1 cm. Thus, changes in morphology are directly related to the distance from the CHX source. Table 2 summarizes the relevant p-values (two-sided t-test).
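The radii compared in Fig. 2b come from ellipses fitted to the mirrored half-contours. As an illustrative alternative (not the authors' method, which fits the contour itself), the semi-axes of a best-fit ellipse can be estimated directly from the second-order moments of a binary mask:

```python
import numpy as np

def fit_ellipse_axes(mask):
    """Estimate semi-major/semi-minor axes of a filled region from its
    second-order image moments (covariance of pixel coordinates).
    For a filled ellipse, the coordinate variance along a principal
    axis equals (semi-axis)^2 / 4, hence the factor of 2."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys]).astype(float)
    cov = np.cov(coords)                  # 2x2 coordinate covariance
    evals = np.linalg.eigvalsh(cov)       # eigenvalues in ascending order
    semi_minor, semi_major = 2.0 * np.sqrt(evals)
    return semi_major, semi_minor

# Synthetic filled ellipse: semi-axes 20 px (x) and 10 px (y)
yy, xx = np.mgrid[-30:31, -30:31]
mask = (xx / 20.0) ** 2 + (yy / 10.0) ** 2 <= 1.0

a, b = fit_ellipse_axes(mask)   # recovers roughly (20, 10)
```

A deformation measure like the one in Fig. 2b would then be the ratio of the axis recovered from the unexposed half-mask to that from the exposed half-mask.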

Illustration of inhibition measurement at the core. (a) Illustration of inner-core segmentation with mirroring and fitting to an ellipse. (b) Colony core deformation analysis. At each distance from the CHX source (control and 1/1.5/2 cm), the ratio between the unexposed and exposed halves is shown for the horizontal (left) and vertical (right) radii.

Figure 3a illustrates the same image processing pipeline, applied to the colony core rather than the periphery. Figure 3b demonstrates that no comparable changes in morphology occur at the colony core, whether in the horizontal (left) or vertical (right) planes. Indeed, no statistically significant loss of symmetry was observed at the colony core, regardless of distance from the CHX source.

Figure 3b shows that on day 3, the macrocolony core did not differ in a statistically significant manner from the control, regardless of CHX proximity. The colony core is therefore more structurally preserved than the colony periphery (or more resistant to CHX). Table 3 summarizes the relevant p-values (two-sided t-test).

Figure 4a illustrates the relevant regions of the macrocolony: the exposed and unexposed (control) periphery and core. Figure 4b demonstrates how pixel intensity is affected by proximity to the CHX source: average pixel intensity at the exposed/control areas is shown for both the periphery (orange) and core (blue) regions on day 3. In other words, for each macrocolony, the ratio between the average pixel intensities of the exposed and unexposed halves was calculated and compared at the periphery and core regions. Statistically significant differences were found in the periphery of macrocolonies grown at a distance of 1 cm from CHX, as well as in the core of macrocolonies grown at a distance of 1.5 cm from CHX. Thus, at these distances, the macrocolony is affected both by morphological deformation and by changes in GFP intensity.

Pixel intensity calculation. (a) Image illustrating the different areas within the macrocolony. The CHX source lies directly horizontally to the right. (b) The ratio of average intensity between the unexposed and exposed sides of the macrocolony is shown separately for the periphery (orange) and the core (blue). For control images, unexposed and exposed sides were determined via data augmentation as the average ratio of left vs. right halves, top vs. bottom halves, and a combination of upper-left and bottom-right quadrants vs. upper-right and bottom-left quadrants. The shorthand "ns" indicates a non-significant p-value (>0.05).
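The intensity ratio described above reduces to a mean over each half of the image. A minimal sketch, assuming the separating line is a known column index and using a synthetic two-tone image in place of a real GFP channel:

```python
import numpy as np

def half_intensity_ratio(img, center_col):
    """Ratio of mean pixel intensity: unexposed (left) half over
    exposed (right) half, split at the given column."""
    left = img[:, :center_col]
    right = img[:, center_col:]
    return left.mean() / right.mean()

# Synthetic example: unexposed half twice as bright as exposed half
img = np.zeros((10, 10))
img[:, :5] = 100.0   # unexposed side, stronger GFP signal
img[:, 5:] = 50.0    # exposed side, dimmed by CHX proximity

r = half_intensity_ratio(img, 5)   # → 2.0
```

For the control augmentation in Fig. 4b, the same function would be applied to the left/right, top/bottom, and diagonal-quadrant splits and the resulting ratios averaged.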

Figure 5a illustrates how the distance from CHX is determined for each pixel in the macrocolony; Euclidean distance was used for the calculations. In Fig. 5b, bacterial cells at the leading edge of the macrocolonies are those located at the outermost layer of the macrocolony periphery. Due to the curvature of the macrocolony, points along the leading edge are located at varying distances from the CHX source (Fig. 5b). To characterize the nature of the relationship between pixel intensity and distance to CHX, pixel intensities along the leading edge were plotted in Fig. 5b: red dots represent pixels along the leading edge of the exposed side of a macrocolony grown at 1 cm from CHX, while blue dots represent pixels along the leading edge of the exposed side of a macrocolony grown at 2 cm from CHX. A linear regression model was applied to both sets of pixel intensity values; as can be seen in Fig. 5b, there is a linear correlative relationship between Euclidean distance and pixel intensity. This relationship is stronger when CHX is located closer to the macrocolony center: for example, in the images shown in Fig. 5b, linear approximation revealed that 1 cm macrocolonies are characterized by a slope that is significantly higher (red) than that of the 2 cm macrocolonies (blue). This finding demonstrates the linear relationship between the GFP signal intensity of cells located at the leading edge of the macrocolony and their distance from the CHX source.

Linear regression model for pixel intensity at the leading edge as a function of Euclidean distance from the CHX source. (a) Illustration demonstrating the distance calculation between the CHX source (red dot) and each pixel within the macrocolony. (b) (Top) B. subtilis macrocolonies on day 3 at 1 cm (left) and 2 cm (right) distances from the CHX source. (Bottom) Intensities of pixels located at the leading edge (a highlighted 20-pixel-wide section from the outer rim) of the exposed half of the macrocolony: red pixels originate from the 1 cm macrocolony, blue pixels from the 2 cm macrocolony. Linear regression lines demonstrate that at 1 cm, pixel intensity is correlated with the distance from the CHX source, while no such effect is seen in the 2 cm macrocolony.
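The leading-edge analysis reduces to an ordinary least-squares fit of pixel intensity against Euclidean distance from the CHX source. A minimal sketch with synthetic data (the pixel coordinates, the source position, and the underlying slope of 2.0 are assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical leading-edge pixel coordinates within a 50x50 px region
cols = rng.uniform(0, 50, 200)
rows = rng.uniform(0, 50, 200)

chx = np.array([100.0, 25.0])  # assumed CHX source position (col, row)

# Euclidean distance of each pixel from the CHX source (as in Fig. 5a)
dist = np.hypot(cols - chx[0], rows - chx[1])

# Synthetic GFP intensity: rises linearly with distance from CHX, plus noise
intensity = 2.0 * dist + rng.normal(0, 1, dist.size)

# Ordinary least-squares line fit; slope recovers the assumed 2.0
slope, intercept = np.polyfit(dist, intensity, 1)
```

A positive, significantly non-zero slope corresponds to the 1 cm case in Fig. 5b, while a slope indistinguishable from zero corresponds to the 2 cm case.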

Crescent-shaped morphology occurring at a CHX distance of 0.5 cm. (a) The top row illustrates the macrocolony morphology over a period of 3 days; the change in morphology appears in the form of crescent-shaped colonies. (b) Expansion comparison with control macrocolonies. (c) Illustration depicting several relevant distances when CHX is placed at 0.5 cm: the average radius of a mature B. subtilis macrocolony on day 1 and the average radius of a CHX droplet. (d) Representative image of a macrocolony on day 1, with CHX (color corrected for visual clarity) shown to the right.

Bright field images of expanding macrocolonies. (a) Bright field images of expanding B. subtilis macrocolonies grown in proximity to CHX at 1/1.5/2 cm. The CHX droplet is seen to the right of the macrocolonies. (b) Cross-section of agar substrate seeded with a macrocolony and a CHX droplet.

As CHX is placed closer to the macrocolony, it exerts a greater inhibitory effect, resulting in increasing deformation of the macrocolony on the side closer to the antimicrobial source. However, when CHX is placed at 0.5 cm from the initial point of seeding, the macrocolony develops only towards the unexposed side. Fig. 6a shows the growth of a sample macrocolony over a period of 3 days (left to right). Starting from day 1, the macrocolony appears to grow only on the side opposite the CHX location. On average, control macrocolonies expand on day 1 to a radius of 0.3 cm, and a CHX droplet is on average 0.2 cm in radius. Hence, even when CHX is placed at a distance of 0.5 cm, the macrocolonies have enough potential space to expand to 0.3 cm. However, Fig. 6d demonstrates that despite the fact that there is sufficient unoccupied space in front of the macrocolony to expand into (indeed, equal to that required by control macrocolonies, which are uninhibited by CHX), the macrocolony does not expand towards the exposed side at all. Rather, it expands towards the opposite side and consequently assumes a unique crescent shape from day 1 onwards.

Figure 7a shows bright field images of B. subtilis macrocolonies, with CHX droplets seen to their right. This visualization reveals a bright formation in the agar substrate, between the macrocolony and CHX, that is undetected in the fluorescent images. This structure is embedded into the agar throughout its entire width, as seen in Fig. 7b. Over a period of 3 days, its shape changes from concave to convex, seemingly engulfing the CHX droplet. More interestingly, the appearance of the agar on the two sides of the formation is uneven, as best visualized in Fig. 7b, where the agar on the CHX side appears "muddy", unlike that on the macrocolony side.
