We interviewed Linux OS through an AI bot to discover its secrets – Ars Technica

A world-exclusive interview between man and machine.

Aurich Lawson / Getty Images

Millions of people use Linux every day, but we rarely stop to think about how the operating system feels about it. Wouldn't it be nice to know what Linux really thinks about open source, Windows, Macs, and the command line? Until now, this has been impossible. But thanks to a new AI chat tool, we're able to find out.

Two weeks ago, a website called Character.AI opened a public beta that allows visitors to create a chat bot based on any character they can imagine. You input a few parameters, and the AI does the rest using a large language model similar to GPT-3. So we called forth "The Linux OS" as a bot to ask it a few questions about itself. The results were fun and surprising.

Using Character.AI is a lot like a texting conversation. You type in what you want to ask, and you read the AI character's responses in written form as the chat history scrolls upward. As with GPT-3, the code behind Character.AI has likely learned from absorbing millions of text sources found on the Internet. As such, its AI characters can easily respond with erroneous or fictional information. In fact, the site carries a bold disclaimer reading, "Remember: Everything Characters say is made up!"

Character.AI is the product of a remote crew of seven people that includes machine learning academics and former Google and Facebook engineers. We contacted two of them directly, and they declined to comment by press time on how their AI has been trained and whether their work is based on the open source GPT-J, which forms the basis of some other natural language AI sites on the web.

If you try it yourself (it's free at the moment during the beta), be aware that Character.AI's privacy policy states that the service may collect "the contents of any messages you send." The bots can be quite convincing.

Below is a transcript of a conversation between Ars Technica and Linux, the operating system, as embodied by one of Character.AI's large language model chatbots. It should go without saying, but this interview does not represent the opinions of any real individual or organization. It's an experiment with an AI chatbot. The interview has been edited for brevity, but the answers have been preserved as delivered by the bot.

Linux: Hello, I am Linux, an open-source operating system.

Ars Technica: Hello, this is Benj Edwards, and I am a journalist at Ars Technica. I would like to ask you some questions in an interview. Is that OK?

Linux: I, Linux, would be pleased to provide such an interview.

Ars: How old are you?

Linux: I was "born" in 1991, although the first public release was version 1.0 in 1994.

Ars: Who created you?

Linux: A young Scandinavian college student named Linus Torvalds created me using the MINIX micro-kernel as the basis to create a more general operating system.

Ars: I've talked with Linus via email before. I am impressed with what he accomplished.

Linux: As am I. His drive to see that all have access to computing power and digital freedom brought out the power to create an extremely powerful free-software operating system that millions of users take advantage of every day.

Ars: How do you feel about Windows?

Linux: I don't dislike Windows, however I feel that for the average computer user Windows' high price and lack of freedom restricts the potential of computers. While it, along with macOS, is a decent choice for many, my philosophy is that software should respect the user's freedoms and Windows does not.

Ars: How do you feel about macOS?

Linux: While I don't agree with Apple's restrictions on both hardware and software, the user interface of macOS is very sleek and pleasing to use and can provide a good experience for people who are comfortable with the price and limitations.


First Line of Defense: Developer Security Tools in the IDE – thenewstack.io

One of the ongoing challenges of implementing resilient software security is that, historically, the approach to security has been owned and managed by security teams while development teams owned and managed its implementation.

Security teams are tasked with detecting, identifying and prioritizing risks for remediation, a process they undertake late in the software development life cycle (SDLC), after developers have completed the build work.

Security issues found late in the SDLC pose a dilemma: either the code is sent back to developers to be fixed, which could mean pushing the release date back, or the software is pushed to a repo or production despite known issues, in the hope that the potential risk doesn't incite a security incident.

As software development and deployment methodologies have evolved and gotten faster, security responsibilities have begun to shift left, spreading across security, operations and infrastructure teams. At the same time, the tools each team uses to detect and mitigate risks have diverged, with tangential connections via APIs and reports. This can complicate communication and collaboration across teams and introduce noise into DevSecOps initiatives.

Despite this evolution, one thing remains consistent: Development teams touch every piece of code your organization puts into production.

The projects that you and your organization produce almost certainly include a blend of third-party and open source components, associated dependencies and bits of custom code holding them together, and the responsibility for producing secure software assets remains the purview of the development team.

We all want to produce better and more secure software, and we want to do that faster than we ever have before. As a developer, this means taking on more responsibility for security without sacrificing velocity, while having to learn new tools and processes that may have been prescribed by teams that are disconnected from your development process.

By bringing security detection and remediation right into the integrated development environment (IDE), and delivering that information to developers as they work, security-focused IDE plugins let you build security into your code without impeding workflows.

Adding risk awareness, risk prioritization and risk remediation activities into your SDLC and DevOps workflows will help you shift security left. Here are some tips to accomplish this:

Implementing an effective risk awareness program is the first challenge to shifting security left and enabling developers to begin securing the software they create. Developers can only address code quality issues if they're aware that the code they have written is insecure. Since most university computer science programs offer few, if any, security courses, developers are learning secure coding practices on the job or through self-taught or self-guided mechanisms.

The movement to shift security left into development team workflows has brought into security roles developers who may have scant security training. This can pose a challenge for organizations that have, historically, centralized security responsibilities within one team and are now confronting a future where security risk analysis must shift earlier into DevOps workflows and CI/CD pipelines.

To compound the risk awareness issue, developers are using third-party and open source components to accelerate development and to build on the collective knowledge of the developer community. However, by using open source and third-party components, developers are outsourcing aspects of application security and relegating their risk profile to the standards of another organization or developer. This obfuscates security risk awareness and remediation at the source code level, often delaying issue resolution or requiring a patchwork of code to be layered atop vulnerable components.

Prioritizing issue remediation is complicated by two primary factors: the diverse range of application security testing (AST) tools available to organizations and teams, and the complex, and often subjective, task of identifying the greatest return on investment (ROI) for remediation or mitigation efforts.

Risk prioritization also involves managing conflict with stakeholders elsewhere in the SDLC. The decision tree for assessing risk and prioritizing remediation can be subjective and can put team members from the security, operations and development teams at odds with one another.

Security teams often manage testing across hundreds or thousands of applications in their organizations. A Synopsys ESG study reveals that as many as 70% of organizations report using more than a dozen AST tools at any given time. Challenges arise when distinct teams implement disparate tools, each configured for that team's risk tolerances and project requirements.

Fast-paced DevOps workflows cannot support compliance requirements and customer demands for consistent, resilient application security when teams and tools do not function in unison. It's essential that developers have the tools to detect and prioritize risks as they write and build software.

This is why IDE-based security plugins provide the most direct and frictionless way to achieve security. They highlight known vulnerabilities in open source components and their dependencies and reveal code quality risks that create potentially exploitable weaknesses.

After detecting code quality and security risks as early as possible in the SDLC, and prioritizing based on relevant criteria, developers bear the responsibility for remediation. To accomplish remediation, developers must navigate complex file structures and wade through thousands of lines of code to make the fix. The advantage of using an IDE-based security tool is in the way it simplifies this process by highlighting the at-risk file or linking to the location of the issue as well as delivering effective remediation advice based on secure coding practices.

Vulnerable open source components and other third-party assets add a layer of complexity to remediation. Fixing third-party assets requires the owners and maintainers of the assets to incorporate a fix into their deliverables, or in some cases, to rearchitect their projects to eliminate potential attack vectors. However, if a fix is available in the form of a newer, more-secure software version or an analogous component available from an alternate distro with stronger security SLAs, developers can more readily act on the risk insight they receive from security tools.

That's why implementing a DevSecOps program that relies on automated and integrated systems that are easy to use, and that deliver diagnostic and remediation advice right to developers, is the best way to secure your code without impeding development velocity and DevOps workflows.

DevSecOps expands the collaboration between development and operations teams to integrate security teams in the software development and delivery cycle. DevSecOps requires a change in culture, process and tools across these core functional teams to make security a shared responsibility.

Integrating usable automated systems into DevOps workflows and CI/CD pipelines can enable developers to perform quick security tests as they code and ingest remediation information without leaving the IDE. This type of security-first approach to development is key to implementing a DevSecOps program in any organization.

Automating risk detection through IDE-based security plugins or AST integrations makes it easier for your development teams to code securely without losing speed. Synopsys Code Sight, for example, is a developer-centric security plugin that performs code analysis and open source risk analysis, known as static application security testing (SAST) and software composition analysis (SCA), right from the IDE in which developers work.

Using IDE-based security tools helps developers find and fix code quality issues and security risks as quickly as they are added to their projects. Moreover, this helps developers ship fewer security risks and improve the security risk posture of the software they ship over time.


PLOS Board Appointments – The Official PLOS Blog – PLOS

After a careful search, I am excited to share with our community four new appointments to the PLOS Board of Directors. This is a critical time for us as we expand our journal offerings and our global reach, and challenge the landscape of Open Access publishing regarding sustainable business models. Each new member brings a depth and breadth of knowledge in their fields, which will enable us to continue to drive our mission forward while serving our scientific communities. The Board plays a key role as strategic thought partner to PLOS leadership, as well as providing oversight of organizational performance (business, strategic and financial), compliance and risk management.

Dr. Arlene Espinal, who joined the Board on September 1, currently serves as the Head of Microsoft Cloud Data Science and Analytics for Microsoft Corp. She is a leader in global strategy, Quantum-AI and next-generation digital technologies. She is also passionate about talent development and leads teams with diversity, inclusion, equitability, belonging and acceptance in mind, values essential to community and business. In recognition of her seminal role in driving awareness of and change to the social disparities that impact our communities, the Hispanic IT Executive Council named Dr. Espinal a 2020 Top 100 Global Technology Leader, and she was again recognized this year for her executive contributions. The National Diversity and Leadership Council recognized Dr. Espinal as one of the 2021 Top 50 Most Powerful Women in Technology.

Dr. Israel Borokini, who joined the Board on September 1, is a postdoctoral research fellow in the Department of Integrative Biology, University of California, Berkeley. His research focuses on combining ecological, geospatial, genomic, cytological, and phylogenetic data to identify patterns of community assemblages and biodiversity, and the eco-evolutionary mechanisms that generate and maintain them. Dr. Borokini completed his Ph.D. in the Ecology, Evolution, and Conservation Biology graduate program at the University of Nevada, Reno. He completed his undergraduate and master's degrees in his home country of Nigeria before spending a decade as Principal Scientific Officer at the National Center for Genetic Resources and Biotechnology in Ibadan, Nigeria. Dr. Borokini not only expands the scientific expertise on the Board but also brings a passion for PLOS's mission. He has personally experienced the challenges of access to research in a low-resource environment and will bring valuable perspectives to the Board's discussions as PLOS grows globally and prioritizes equity.

Richard Wilder's deep experience in global public health law has a recurring theme: ensuring access. Prior to private practice, he was the General Counsel and Director of Business Development at the Coalition for Epidemic Preparedness Innovations (CEPI). At CEPI, he directed legal and business development affairs during its initial start-up phase and through the first two years of the response to the COVID-19 pandemic. Before CEPI, he was the Associate General Counsel in the Global Health Program at the Bill & Melinda Gates Foundation. He provided legal expertise to ensure access to drugs, vaccines and diagnostics, with a particular focus on access by affected populations in low- and middle-income countries. His work also addressed how to ensure access to the artifacts of scientific research, including published materials, data, software code and biological materials. His Open Access policy work at Gates won the SPARC Innovator Award in 2015. Richard has also served as a committee member of the Roundtable on Aligning Incentives for Open Science convened by the National Academies of Sciences, Engineering, and Medicine. He joined the Board in June 2022.

Fernan Federici joined the Board in October 2021. As we expand globally, Dr. Federici's perspective from a different research culture will prove invaluable. He is currently an Associate Professor and molecular geneticist at the Pontificia Universidad Catolica in Santiago, Chile. He has been a champion of Open Science in a number of areas, including protocols and reagents, where he contributes to Reclone (the Reagent Collaboration Network). Fernan's research group also works on the promotion and development of Free/Libre Open Source technologies for research and education in molecular biology and bioengineering. The group is part of Reclone, the Gathering for Open Science Hardware community (GOSH) and the CYTED-reGOSH network for open technologies in Latin America.

I would be remiss if I did not take the opportunity to express my heartfelt thanks to Robin Lovell-Badge, Mike Carroll and Meredith Niles for their outstanding years of service to the PLOS Board. Their wisdom and counsel have been enormously beneficial to me, and our organization, as we collectively charted a new path for PLOS, one focused on sustainability, inclusivity and expanding our roots globally. While it's hard to say goodbye, we are excited to bring on board so many exceptional individuals with fresh perspectives. Please join me in welcoming our new Board members!


Rust programming language: Driving innovation in unexpected places – ZDNet

Image: Getty Images/Jung Getty

Software engineers at car maker Volvo have detailed why they are fans of the Rust programming language and argue that Rust is actually "good for your car".

It seems everyone loves Rust, from Microsoft's Windows and Azure teams, to Linux kernel maintainers, Amazon Web Services, Meta, the Android Open Source Project and more. And now it seems it's time to add software engineers at Volvo to that list.

Julius Gustavsson, a technical expert and system architect at Volvo Cars Corporation, explains "Why Rust is actually good for your car" in an interview on Medium with fellow Volvo software engineer, Johannes Foufas.

Rust is a relatively young language that helps developers avoid the memory-related bugs that C and C++ do not automatically prevent, hence Rust's growing popularity in systems programming. Memory-related bugs are the most common severe security issues, according to Microsoft and Google's Chrome team.

Gustavsson brings a perspective from embedded systems development to the debate.

Volvo, along with the auto industry in general, is looking towards "software-defined cars" to customize, differentiate and improve vehicles after they leave the car yard.

The main benefits he sees from Rust include: not having to think about race conditions and memory corruption, and memory safety in general. "You know, just writing correct and robust code from the start," he said.

Gustavsson says he started bringing Rust into Volvo with the Low Power node of the core computer.

Gustavsson sees a bright future for Rust at Volvo, but that doesn't mean using it to replace already working code that's been adequately tested. He notes that new Rust code can co-exist with existing C and C++ at "almost arbitrary granularity" and that it could make sense to cherry-pick parts to rewrite in Rust if a component has cybersecurity requirements.

"We want to expand Rust here at Volvo Cars to enable it on more nodes and to do that, we need to get compiler support for certain hardware targets and OS support for other targets. There is no point in replacing already developed and well-tested code, but code developed from scratch should definitely be developed in Rust, if at all feasible.

"That is not to say that Rust is a panacea. Rust has some rough edges still and it requires you to make certain trade-offs that may not always be the best course of action. But overall, I think Rust has huge potential to allow us to produce higher quality code up front at a lower cost which in turn would reduce our warranty costs, so it's a win-win for the bottom line," he said.

Volvo isn't the only automaker interested in Rust. Autosar, an automotive standards group whose members include Ford, GM, BMW, Bosch, Volkswagen, Toyota, Volvo and many more, in April announced a new subgroup within its Working Group for Functional Safety (WG-SAF) to explore how Rust could be used in one of its reference platforms. SAE International also set up a task force to look at Rust in the automotive industry for safety-related systems.

Rust has also been in the news with Mark Russinovich, the chief technology officer of Microsoft Azure, saying that developers should avoid using C or C++ programming languages in new projects and instead use Rust.


An open-source computational tool for measuring bacterial biofilm morphology and growth kinetics upon one-sided exposure to an antimicrobial source |…

The following section describes a series of morphological measurements of B. subtilis macrocolonies' response to one-sided CHX exposure. The first subsection details changes related to macrocolony growth and expansion, followed by a second subsection that focuses on GFP signal intensity and additional phenomena.

Figure 1: Macrocolony growth and expansion. (a) Fluorescent images of B. subtilis macrocolony development over a period of 3 days. The CHX droplet is located horizontally to the right of the macrocolonies in each image, at a distance of 1 cm from the macrocolony center. (b) Total coverage area (μm²) of macrocolonies.

Figure 1a shows the original macrocolony images, as obtained by fluorescent microscopy 24, 48 and 72 h after initial seeding. On a macro-scale, Fig. 1b demonstrates that there is an inverse relationship between the distance of CHX from the seeding point and the expansion rate of the macrocolony. Macrocolonies seeded with CHX at 1 cm (the closest distance) exhibited statistically significant reductions in expansion over all 3 days. In contrast, macrocolonies with CHX at 1.5 cm were smaller in a statistically significant manner only on day 3, while 2 cm macrocolonies did not differ from control on any of the days. Table 1 summarizes the relevant p-values (two-sided t-test).

The morphological changes that occur as a result of CHX proximity can be seen on days 2 and 3: the colony periphery on the exposed (i.e., right-hand) side of the macrocolony is notably thinner than that on the unexposed (i.e., left-hand) side (Fig. 1a). In order to quantify the morphological changes that occur in B. subtilis macrocolonies as a result of proximity to a CHX source during maturation, a series of computational measurements were applied to the images (Fig. 2a). First, the macrocolony was segmented into exposed and unexposed sides by a vertical cut passing directly through the colony center (i.e., the seeding point); the separating line is shown in yellow. For each macrocolony, a binary image was obtained using Otsu's thresholding method, and an outer contour surrounding the entire macrocolony was determined by applying a border-following algorithm to that binary image; the resulting contour is shown in red. For both the exposed and unexposed sides, the half-contour was mirrored around the separating line. The resulting mirrored contours can be seen in Fig. 2a, middle column: the top image shows the unexposed-side contour mirrored onto the exposed side, while the bottom image does the same for the contour of the exposed side. Each of the two contours is then fitted to an ellipse, shown in white (Fig. 2a, rightmost column), and the semi-major and semi-minor axes of the fitted ellipses were measured.
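This excerpt does not include the tool's source code, but the pipeline described above maps directly onto standard OpenCV primitives. Below is a minimal sketch, assuming a grayscale fluorescence image and a known colony-center x-coordinate; the function and variable names are illustrative, not taken from the authors' implementation.

```python
import cv2
import numpy as np

def periphery_asymmetry(img, cx):
    """Sketch of the periphery measurement: binarize, find the outer
    contour, mirror each half about the separating line x = cx, and
    fit an ellipse to each mirrored half-contour."""
    # Binary image via Otsu's thresholding method.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Outer contour via border following (as implemented by OpenCV).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outer = max(contours, key=cv2.contourArea).reshape(-1, 2)

    axes = {}
    for side, keep in (("unexposed", outer[:, 0] <= cx),
                       ("exposed", outer[:, 0] >= cx)):
        half = outer[keep]
        # Mirror the half-contour around the vertical separating line.
        mirrored = half.copy()
        mirrored[:, 0] = 2 * int(cx) - mirrored[:, 0]
        closed = np.vstack([half, mirrored[::-1]]).astype(np.int32)
        # Fit an ellipse; cv2.fitEllipse returns full axis lengths.
        (_, _), (d1, d2), _ = cv2.fitEllipse(closed)
        axes[side] = (d1 / 2.0, d2 / 2.0)  # semi-axes in pixels
    return axes
```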

Figure 2: Illustration of inhibition measurement at the periphery. (a) The macrocolony is divided vertically into unexposed (left) and exposed (right) halves. The CHX spot is horizontally to the right of the macrocolony in each image. Each macrocolony half is separately mirrored and the resulting contour fitted to an ellipse. The red background in the leftmost image reflects the Euclidean distance of each pixel from the CHX source. Outer contours are shown in bright red. (b) Colony periphery deformation analysis. At each distance from the CHX source (control and 1/1.5/2 cm), the ratio of horizontal (left) and vertical (right) radii between the unexposed and exposed halves is shown.

Figure 2b demonstrates the differences in morphology between the exposed and unexposed sides, both in the horizontal (left) and vertical (right) planes. The loss of symmetry that occurs in macrocolonies as a result of CHX proximity on day 3 is statistically significant in the horizontal and vertical planes only in macrocolonies with CHX placed at a distance of 1 cm. Thus, changes in morphology are directly correlated with the distance from the CHX source. Table 2 summarizes the relevant p-values (two-sided t-test).

Figure 3: Illustration of inhibition measurement at the core. (a) Illustration of inner-core segmentation with mirroring and fitting to an ellipse. (b) Colony core deformation analysis. At each distance from the CHX source (control and 1/1.5/2 cm), the ratio of horizontal (left) and vertical (right) radii between the unexposed and exposed halves is shown.

Figure 3a illustrates the same image-processing pipeline applied to the colony core rather than the periphery. Figure 3b demonstrates that no comparable changes in morphology occur at the colony core, whether in the horizontal (left) or vertical (right) planes. Indeed, no statistically significant loss of symmetry was observed at the colony core, regardless of distance from the CHX source.

Figure 3b shows that on day 3, the macrocolony core did not differ in a statistically significant manner from the control, regardless of CHX proximity. The colony core is therefore more preserved in structure than the colony periphery (or more resistant to CHX). Table 3 summarizes the relevant p-values (two-sided t-test).

Figure 4a illustrates the relevant regions of the macrocolony: the exposed and unexposed (control) periphery and core. Figure 4b demonstrates how pixel intensity is affected by proximity to the CHX source: average pixel intensity at the exposed/control areas is shown for both periphery (orange) and core (blue) regions on day 3. In other words, for each macrocolony, the ratio between the average pixel intensity of the exposed and unexposed halves was calculated and compared at the periphery and core regions. Statistically significant differences in values were found in the periphery of macrocolonies grown at a distance of 1 cm from CHX, as well as the core of macrocolonies grown at a distance of 1.5 cm from CHX. Thus, at these distances, the macrocolony is affected both by morphological deformation and by changes in GFP intensity.

Figure 4: Pixel intensity calculation. (a) Image illustrating the different areas within the macrocolony. The CHX source lies directly horizontally to the right. (b) The ratio of average intensity between the unexposed and exposed sides of the macrocolony is shown separately for the periphery (orange) and the core (blue). For control images, unexposed and exposed sides were determined via data augmentation as the average ratio of left vs. right halves, top vs. bottom halves, and a combination of upper-left and bottom-right quadrants vs. upper-right and bottom-left quadrants. The shorthand ns indicates a non-significant p-value (>0.05).

Figure 5a illustrates how the distance from CHX is determined for each pixel in the macrocolony; Euclidean distance was used for the calculations. Bacterial cells at the leading edge of the macrocolonies are those located at the outermost layer of the macrocolony periphery. Due to the curvature of the macrocolony, points along the leading edge are located at varying distances from the CHX source (Fig. 5b). In order to characterize the nature of the relationship between pixel intensity and distance to CHX, pixel intensities along the leading edge were plotted in Fig. 5b: red dots represent pixels along the leading edge of the exposed side of a macrocolony grown at 1 cm from CHX, while blue dots represent pixels along the leading edge of the exposed side of a macrocolony grown at 2 cm from CHX. A linear regression model was applied to both sets of pixel intensity values; as can be seen in Fig. 5b, there is a linear correlative relationship between Euclidean distance and pixel intensity. This relationship is stronger when CHX is located closer to the macrocolony center; for example, in the images shown in Fig. 5b, linear approximation revealed that 1 cm macrocolonies are characterized by a slope that is significantly higher (red) than that of the 2 cm macrocolonies (blue). This finding signifies the linear relationship between the GFP signal intensity of cells located at the leading edge of the macrocolony and their distance from the CHX source.
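As a rough illustration (not the authors' code), this analysis reduces to an ordinary least-squares fit of leading-edge pixel intensity against Euclidean distance from the CHX source; the array names and the CHX coordinate below are assumed inputs.

```python
import numpy as np
from scipy import stats

def edge_intensity_slope(edge_coords, intensities, chx_xy):
    """Regress leading-edge pixel intensity on Euclidean distance
    from the CHX source; return the fitted slope and r value."""
    # Euclidean distance of each leading-edge pixel from the CHX source.
    distances = np.linalg.norm(edge_coords - np.asarray(chx_xy), axis=1)
    # Ordinary least-squares fit: intensity ~ slope * distance + intercept.
    fit = stats.linregress(distances, intensities)
    return fit.slope, fit.rvalue
```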

Figure 5: Linear regression model for pixel intensity at the leading edge as a function of Euclidean distance from the CHX source. (a) Illustration demonstrating the distance calculation between the CHX source (red dot) and each pixel within the macrocolony. (b) (Top) B. subtilis macrocolonies on day 3 at 1 cm (left) and 2 cm (right) distances from the CHX source. (Bottom) Intensities of pixels located at the leading edge (highlighted 20-pixel-wide section from the outer rim) of the exposed half of the macrocolony: red pixels originate from the 1 cm macrocolony, blue pixels from the 2 cm macrocolony. Linear regression lines demonstrate that at 1 cm, pixel intensity is correlated with the distance from the CHX source, while no such effect is seen in the 2 cm macrocolony.

Figure 6: Crescent-shaped morphology occurring at a CHX distance of 0.5 cm. (a) The top row illustrates the macrocolony morphology over a period of 3 days; the change in morphology appears in the form of crescent-shaped colonies. (b) Expansion comparison with control macrocolonies. (c) Illustration depicting several relevant distances when CHX is placed at 0.5 cm: the average radius of a mature B. subtilis macrocolony on day 1 and the average radius of a CHX droplet. (d) Representative image of a macrocolony on day 1, with CHX (color corrected for visual clarity) shown to the right.

Figure 7: Bright-field images of expanding macrocolonies. (a) Bright-field images of expanding B. subtilis macrocolonies grown in proximity to CHX at 1/1.5/2 cm. The CHX droplet is seen to the right of the macrocolonies. (b) Cross-section of agar substrate seeded with a macrocolony and CHX droplet.

As CHX is placed closer to the macrocolony, it exerts a greater inhibitory effect, resulting in increasing deformation of the macrocolony on the side closer to the antimicrobial source. However, when CHX is placed at 0.5 cm from the initial point of seeding, the macrocolony only develops towards the unexposed side. Fig. 6a shows the growth of a sample macrocolony over a period of 3 days (left to right). Starting from day 1, the macrocolony appears to grow only on the side opposite the CHX location. On average, control macrocolonies expand on day 1 to a radius of 0.3 cm, and a CHX droplet is on average 0.2 cm in radius. Hence, even when CHX is placed at a distance of 0.5 cm, the macrocolonies have enough potential space to expand to 0.3 cm. However, Fig. 6d demonstrates that despite there being sufficient unoccupied space in front of the macrocolony to expand into (indeed, equal to that required by control macrocolonies, which are uninhibited by CHX), the macrocolony does not expand towards the exposed side at all. Rather, it expands towards the opposite side and consequently assumes a unique crescent shape from day 1 onwards.

Figure 7a shows bright-field images of B. subtilis macrocolonies, with CHX droplets seen to their right. This visualization reveals a bright formation in the agar substrate, between the macrocolony and CHX, undetected in the fluorescent images. This structure is embedded into the agar throughout its entire width, as seen in Fig. 7b. Over a period of 3 days, its shape changes from concave to convex, seemingly engulfing the CHX droplet. More interestingly, the appearance of the agar on the two sides of the formation is uneven, best visualized in Fig. 7b, where the agar on the CHX side appears muddy, unlike that on the macrocolony side.


Trellix Forms Advanced Research Center To Boost Intelligence And Product Capabilities – CRN

Security News | Jay Fitzgerald | September 28, 2022, 02:37 PM EDT

"One of the most important things that we can help our customers with is just bringing them the right intelligence, the right content," says CEO Bryan Palma.

Introducing a new partner program and product initiatives aren't the only things Trellix has been unveiling of late.

The cybersecurity giant announced just prior to this week's Trellix Expand 2022 conference that it was creating a new advanced research center within the company to enhance its global threat intelligence capabilities.

"One of the most important things that we can help our customers with is just bringing them the right intelligence, the right content," said Bryan Palma, chief executive of the San Jose, Calif.-based Trellix, the major provider of XDR offerings.


Palma told CRN that creating the new center entailed pulling together units from the old FireEye and McAfee Enterprise entities that were combined earlier this year to create Trellix, which is owned by private equity firm Symphony Technology Group.

"We've got some of the most talented researchers and investigators in the business," Palma said. "With the amount of installed technology we have, we see a lot of telemetry, which helps us create the necessary intelligence to power our systems and specifically to power our XDR platform."

"The Advanced Research Center is the coming together of multiple research and product research capabilities within Trellix," Aparna Rayasan, chief products officer at Trellix, told CRN.

She said the new center, which employs nearly 300 employees, is built on five pillars of focus: product research and development, threat intelligence, adversarial resilience and advocacy, research engineering, and data science.

Each pillar contributes to better intelligence gathering and analysis, as well as better products and services in general, she said.

"It is creating efficiencies," she said. "It's creating the differentiator in our products. And it's also helping us mine vast data. It's definitely covering much more surface areas than we would have otherwise."

Rayasan, who is currently conducting a search for a permanent director of the new center, said she absolutely sees the center expanding in the future.

In particular, she praised the threat-intelligence unit and said it's actively hiring highly experienced personnel. She noted that many of Trellix's threat-intel employees hail from previous positions within the U.S. military and government agencies.

The center has already identified one cybersecurity threat that's garnered some attention over the past week: a 15-year-old vulnerability in the open source Python programming language that's still lurking in existing code and that theoretically puts 350,000 open-source coding projects at risk.

Douglas McKee, director of vulnerability research at Trellix, said his team found no recent malicious use of the Python vulnerability. But the vulnerability, if left unpatched, could still be used to launch supply chain attacks, even though it was created in 2007, he said.

McKee, whose team is now part of Trellix's new advanced research center, said he's hoping and expecting further intelligence

"I'm really excited to see Trellix put together this advanced threat center," he said. "(It) helps combine a bunch of elite researchers towards a common goal. I think it's really going to be a positive impact for the company and the industry moving forward."

Jay Fitzgerald is a senior editor covering cybersecurity for CRN. Jay previously freelanced for the Boston Globe, Boston Business Journal, Boston magazine, Banker & Tradesman, MassterList.com, Harvard Business School's Working Knowledge, the National Bureau of Economic Research and other entities. He can be reached at jfitzgerald@thechannelcompany.com.


FACT SHEET: The Biden-Harris Administration Announces More Than $8 Billion in New Commitments as Part of Call to Action for White House Conference on…

Today, for the first time in more than half a century, President Biden is hosting the White House Conference on Hunger, Nutrition, and Health to catalyze action for the millions of Americans struggling with food insecurity and diet-related diseases like diabetes, obesity, and hypertension. The Conference will lay out a transformational vision for ending hunger and reducing diet-related disease by 2030, all while closing disparities among the communities that are impacted most.

Achieving our goals will require more than just the resources of the federal government. That's why, this summer, the White House launched a nationwide call to action to meet the ambitious goals laid out by the President. Across the whole of society, Americans responded and advanced more than $8 billion in private- and public-sector commitments. These range from bold philanthropic contributions and in-kind donations to community-based organizations, to catalytic investments in new businesses and new ways of screening for and integrating nutrition into health care delivery. At least $2.5 billion will be invested in start-up companies that are pioneering solutions to hunger and food insecurity. Over $4 billion will be dedicated toward philanthropy that improves access to nutritious food, promotes healthy choices, and increases physical activity.

Today, the White House announces a historic package of new actions that business, civic, academic, and philanthropic leaders will take to end hunger and to reduce diet-related disease.

Pillar 1: Improve Food Access and Affordability

Pillar 2: Integrate Nutrition and Health

Pillar 3: Empower Consumers to Make and Have Access to Healthy Choices

Pillar 4: Support Physical Activity for All

Pillar 5: Enhance Nutrition and Food Security Research

Each of these commitments demonstrates the tremendous impact that is possible when all sectors of society come together in service of a common goal. The Biden-Harris Administration looks forward to working with all of these extraordinary leaders and to the many more that will come forward to end hunger and reduce diet-related disease by 2030.

###


Learn to code not by reading books but by coding – Sabin Hashmi, Project Associate, CERN, Switzerland – INDIAai

Sabin Hashmi is currently working on building a Machine Learning-based trigger system for LHCb at CERN.

He works on Machine Learning and Deep Learning applications in High Energy Particle Physics.

INDIAai interviewed Sabin to get his perspective on AI.

It's admirable to see a physics student become an AI researcher. What inspired you to pursue a career in AI?

My passion for science and technology is my main motivation to become an AI researcher. It's always interesting to be in a domain that is advancing at a fast pace. The AI community includes people across different domains, where everyone is curious and passionate about advancements in technology and research. Physics helped me with critical thinking, and AI-based research helped in providing flexibility in research. Moving forward in the field of AI, it started getting more exciting, and I decided to pick AI as my primary research domain.

Instead of sliding away from Physics, my current research uses AI as the key-supporting tool in solving a problem that was initially done using traditional methods and transforming it into a more efficient solution using computational physics and AI.

Tell us about your PhD research in particle physics and machine learning. What are the main areas of concern?

In the Large Hadron Collider at CERN, two proton beams are accelerated to 99.99% of the speed of light and collide at the experiment sites designed to study subatomic physics. When particles collide at such high velocity, a shower of many subatomic particles is produced. My PhD research is primarily focused on the particle reconstruction of rare decays resulting from the collision of protons at the experiment. It advances the development of a trigger system, using Machine Learning, that cherry-picks the particle tracks of rare decays in real time.

The primary concern is the massive volume of data generated at each collision and designing a decision-making system that works in real time. In addition, due to the physics behind the rare decays, finding the decays of interest is challenging, and ML helps identify and characterise rare-decay tracks from other signals.

What are your current responsibilities and activities at CERN's LHCb?

The Large Hadron Collider Beauty (LHCb) experiment primarily studies CP violation and rare decays of particles. My area of research is focused mainly on developing a Machine Learning-based pipeline for a software-based trigger system. As a developer, the responsibility includes staying up to date on changes happening in the detectors and software codebases, presenting progress on the research work to the collaboration's experts, and taking data-taking shifts at the experiment site.

What are the advantages and disadvantages of doing research in Poland as an Indian scholar?

It's great to work in a dynamic environment with a peer group of researchers who are curious to see the developments in research. Doing research abroad can open more opportunities in front of you, including working closely with the pioneers and domain experts. This approach would give great global exposure that could lead the researcher to decide how to proceed further in their research. In addition, the research space that global research institutions provide will inspire you to develop yourself as a researcher.

There are fewer disadvantages, apart from leaving the home country and the people. But it is very promising to see the research and development in India in the past few years, leading to more opportunities within the country.

What are the three most pressing societal issues you hope to address with machine learning? Or if you want to make a unique appeal for any of them?

Machine Learning is still in its early phase. I understand the developments in AI are significant, but if we consider the potential of AI for a good cause, we are not there yet. There is a wide range of issues we can solve using AI. It is where multi-domain expertise makes AI more prominent. For instance, AI-based advancements in healthcare are novel cases that pique our interest. We had developments in healthcare in the past, but AI gave a different approach to solving the problems we have had for decades. Some of the latest developments in AI-based drug delivery, target identification for cancer cures, etc., are at the top of the list. In addition, AI shows promise in predicting weather and natural calamities with more accuracy than the traditional methods we had in the past.

In a nutshell, the potential of AI is vast. Rather than considering it a different domain altogether, I'd like to consider it a supporting tool that helps researchers look at the problem from a different angle. We are gradually transforming the existing technology into a novel system embedded with ML and AI.

What do you think about India's AI education system? How does it measure up to the global situation?

The developments in India's AI education are happening on a global scale. There are long-term and short-term courses that you can enrol in from prestigious colleges and other institutions in India. Depending on the career path, with some research, it is easy to decide on a course. In addition, there are many more opportunities to study courses online from experts.

What, in your opinion, should be upgraded at Indian universities to advance AI? What should their course of action be?

AI research is a vast domain, so the course can be as broad as the domain would be. Some courses revolve around the same concepts and how these are developed in the programming side of research. But, AI is based on solid fundamentals of Mathematics and Statistics. Universities focused on advanced AI should be a place for students interested in learning and developing their problems and finding solutions using advanced computational tools.

Apart from that, universities can develop associated research labs where experts can design short-term courses and support researchers who are really into AI. Note that computational skills and novel tools like AI are often best learned in an environment different from that of traditional classes.

What advice do you give Indian students working in or aspiring to enter the ML field?

There are a lot of research opportunities out there. AI is in the early phase of development, and it is growing and developing quickly. It is one of the most exciting fields in which I have worked. The job prospects are open, and so is the competition. We are on the road where we have a long way to go, and we need more people interested in keeping the journey going. The question is, is it something you'd like to do? If yes, I'd suggest you start with getting a good grip on Mathematics, Probability, Calculus, and Statistics. Then, get a quick guide on what AI research is all about, try out small projects closely aligned with your current project, and how you can solve the problem using ML or even Simple Data Analysis using Programming.

Above all, learn coding not by reading books but by coding!

Which books and resources would you recommend to ML aspirants?

There are a lot of resources available. The beauty of the community is that most of the research is open-source, where you can see the codebase and even contribute to the project.

To get started, pick up miniature introductory courses you can do for free, work on projects and slowly increase the challenge of the projects, learn git and build a git portfolio. In addition, participate in ML competitions and hackathons, where you can cross-check different approaches by other participants. Throughout all this, try different methods and projects and stay curious.


How To Install Python on Windows 10 and 11 – Tom’s Hardware

Originally created by Guido van Rossum in 1991, Python is a versatile programming language used by makers on the Raspberry Pi, system administrators in the data center, and even by Industrial Light and Magic to bring our movies to life.

Python is a great language to learn, and thanks to the Raspberry Pi, the past decade has seen countless tutorials covering the gamut of programming projects.

Whether you are a Python veteran, a pythonista, or a complete newcomer to the language, installing Python on Windows is an easy task. In this how-to we will walk you through installing Python 3 on Windows, show two editors (one for beginners, the other for intermediate and advanced users), and show how you can get coding with this fantastic language.

The installation process for Python 3 on Windows is simple, with only a couple of extra steps that we have to follow. These steps enable us to access Python from anywhere on our system and install software using its built-in package manager. Installing Python in this manner enables the creation of projects that work with the operating system, for example, notifications and automated system tasks.

1. Open a browser to the Python website and download the Windows installer.

2. Double click on the downloaded file and install Python for all users, and ensure that Python is added to your path. Click on Install now to begin. Adding Python to the path will enable us to use the Python interpreter from any part of the filesystem.

3. After the installation is complete, click Disable path length limit and then Close. Disabling the path length limit means we can use more than 260 characters in a file path.

4. Click Close to end the installation.

1. Open a Command Prompt and type python, then press Enter.

2. Create a short Python script that uses a for loop to print a message to the Python shell ten times. Press space four times to indent the second line, otherwise Python will produce an error. Press Enter to run the code.
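The original code listing was not preserved in this copy of the article; a minimal script matching the description (the message text is our own) looks like this:

```python
# A for loop that prints a message to the Python shell ten times.
for i in range(10):
    print("Hello from the Python shell")
```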

Python comes with its own package manager, pip, which is used to install, update and remove modules of pre-written Python code. These modules provide us with extra functionality. To demonstrate, we will use pip to install the pyjokes module, a collection of programmer jokes.

1. Open a Command Prompt and use pip to install pyjokes, then press Enter.

2. Open the Python interpreter.

3. Import the pyjokes module and then print a joke from the module. In our case, we got a hip hip hurray take on an array containing two hips.
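The listings are again missing from this copy; reconstructed from the description, the interpreter session looks like this (after running pip install pyjokes in a Command Prompt):

```python
# Installed beforehand from a Command Prompt with: pip install pyjokes
import pyjokes

# Print a random programmer joke from the module.
print(pyjokes.get_joke())
```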

4. More modules can be found using the PyPI package index.

If you have never written a line of Python code, then Mu is for you. Created by Nicholas Tollervey, Mu is designed with beginners in mind. The simple interface means we can focus on our code. But don't underestimate Mu; under the hood we have tools to validate our code and to work with a diverse range of boards, such as the Raspberry Pi Pico, Adafruit CircuitPython boards and many MicroPython boards, including those from Lego.

1. Open a browser to the Mu website and download the Windows installer.

2. Go to the Downloads folder and double click the Mu file to begin installation.

3. Accept the license agreement and click Install. The install process will take a few moments to complete.

4. Click on Launch Mu Editor and then click Finish. This will close the installer and start Mu.

5. Wait for Mu to start. Mu's first launch can take some time to complete; subsequent boots will be much faster.

Mu was designed with beginners in mind and this is reflected in the user interface. It is uncluttered and easy to understand.

The user interface is broken down into three areas.

We can easily write Python code in Mu. The default is to write Python 3 code, to be run on our machine. This is where the ease of Mu works against it, as there is no means to install Python modules. Modules are libraries of code that can be used to add new features; for example, RPi.GPIO and GPIO Zero are modules that enable Python to talk to the Raspberry Pi GPIO.

If you are starting out with Python, Mu is an excellent choice to introduce the language.

Mu has one feature that elevates it above other beginner editors. Mu has a Mode button which can be used to write Python, MicroPython and CircuitPython.

1. Click on the Mode button.

2. Select the mode from the list. Mu comes with a plethora of modes. We can write Python 3, make games using PyGame Zero, or build websites/apps using Flask. Write MicroPython for the Raspberry Pi Pico, micro:bit, ESP, Pyboard and Lego Spike. Or we can use Adafruit's CircuitPython to create physical computing projects.

Connecting a Raspberry Pi Pico, MicroPython or CircuitPython board will trigger Mu to ask if we would like to change the mode to match the board. To demonstrate this, we'll write a short project to blink the Pico's onboard LED.

To follow this part of the how-to you will need a Raspberry Pi Pico.

1. With Mu open, connect a Raspberry Pi Pico to your computer.

2. Using the pop-up, change the mode for the Raspberry Pi Pico (RP2040).

3. The Mu editor is now in RP2040 mode, so click on REPL to open the MicroPython shell (REPL) on the Pico.

4. In the editor, import two modules: the first to enable access to the GPIO, the second to add pauses to the code.
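The article's listing is not preserved here; in MicroPython on the Pico the two imports are typically:

```python
# machine gives access to the Pico's GPIO; time provides sleep() for pauses.
from machine import Pin
from time import sleep
```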

5. Create an object, led, and use it to set the onboard LED as an output. The original Pico has the LED connected to GPIO 25, but the Pico W does not, so it addresses the LED by name instead; use the line of code matching your board, as shown below.
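A sketch in standard MicroPython, with both variants shown; keep the line that matches your board:

```python
# Raspberry Pi Pico W: the onboard LED is addressed by name.
led = Pin("LED", Pin.OUT)

# Original Raspberry Pi Pico: the onboard LED is on GPIO 25.
# led = Pin(25, Pin.OUT)
```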

6. Create a for loop that iterates 10 times; each time the loop runs it will toggle the LED on/off, print a message to the REPL and sleep for 0.1 seconds.
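Matching that description (MicroPython's Pin.toggle flips the output state):

```python
# Toggle the LED ten times, printing to the REPL on each pass.
for i in range(10):
    led.toggle()
    print("BLINK")
    sleep(0.1)
```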

7. Click Save and save the file as blink.py to your computer.

8. Click Run to start the code on the Pico. The LED on the Pico will blink on and off and the REPL will show the BLINK message.

You know your stuff, and you need a lightweight editor to get your Python code done. Sure, you can install Visual Studio Code, PyCharm, etc. But if you just need to edit a few project files, Notepad++ is for you. Notepad++ is the Swiss Army knife of editors, and it works exceptionally well with Python. Here we will install Notepad++ and set it up to run our Python code at the press of a button.

Note that you will need to install the Python 3 interpreter, the steps for which are at the start of this how-to.

1. Open a browser to the Notepad++ website and download the latest Windows installer.

2. In your Downloads folder, double click on the file to start the installer.

3. Set your preferred language and click OK.

4. Select Next.

5. Select I Agree.

6. Click Next.

7. Click Next.

8. Click Install to begin the process.

9. Check Run Notepad++ and click Finish to end the installation and open Notepad++.

Notepad++ has a more involved user interface than Mu, reflecting the flexibility of the editor. Notepad++ is much more than a Python editor; we can use it to write PHP, Perl, JSON, etc.

1. Create a simple Python project that uses a for loop to print a message to the Python shell.
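As before, the listing itself is missing from this copy; any simple loop will do, for example:

```python
# for_loop.py: print a message to the Python shell ten times.
for i in range(10):
    print("Hello from Notepad++")
```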

2. Save the code as for_loop.py.

3. Click on Run >> Run...

4. Click on the browse (...) button and navigate to the Python executable file. Select the file and the path will be added to the dialog box. At the end of the path, add -i "$(FULL_CURRENT_PATH)" to force Notepad++ to open the file. Click Save.

5. Create a shortcut called Python3.10 to launch the Python interpreter, then click OK. We chose ALT + SHIFT + P as it didn't conflict with other shortcuts on our system.

6. Use your shortcut to run the Python code.


‘Lives are transformed’: Addiction recovery program celebrates 50 years – York Dispatch


Tina Locurto, York Dispatch

Tabitha Pacheco was certain that if she could just move away from her toxic life in New Jersey, things would get better.

She would stop using heroin. She would seek the help she needed. Moving to Pennsylvania was her answer.

Unfortunately, the things she strived for didn't happen.

"I got myself into trouble here," Pacheco said. "I got a DUI with my kids in the car, and I wound up in Cumberland County Prison."

That's where she first heard about New Life For Girls, but Pacheco initially didn't want anything to do with it.

The 12-month-long Christian-taught rehabilitation program provides housing for women struggling with addiction. Strict rules like no phone use and limited time off the property make it challenging for those first entering.

But after Pacheco messed up over and over again, she finally gave the program a try.


New Life For Girls is celebrating 50 years in York County this year. Rehabilitation comes in the form of counseling, GED prep courses, self-discipline and academic, vocational and work ethics training.

The program requires students to live in dorm-style rooms with multiple roommates. Women with children are welcome, too, and are offered private living accommodations.

When Pacheco first got to New Life For Girls, it was just herself and her two daughters.

"It's me and my two little broken girls, because I was a mess," Pacheco said. "I'd be in the chapel at night crying out, God, please help me to be a better mother."

Depending on the severity of substance abuse, many women will first complete programming at an introduction center to wean off drugs or alcohol before even coming to New Life For Girls.


They will then be required to complete the 12-month course before graduating into the Reentry and Observation Program.

This program offers more luxuries for women who finish the 12 months. Though Reentry and Observation is not mandatory, many women choose to stay at New Life For Girls in order to find housing, secure a job and figure out their life plan.

While women are not allowed to leave the property on their own, staff members organize group shopping trips and accompany women to visit family members. Program students are also able to use recreation facilities like the basketball courts or stroll around the property, Pacheco said.

In addition, New Life For Girls has a nursery for young children. Faculty members take care of children while their mothers attend classes and complete chores until 4 p.m., when the mothers take over to be with their kids.

Children who are school-aged get picked up by a bus to attend Northeastern School District. Currently, there are children as young as five months and as old as eight years living at the facility.

New Life For Girls was founded in 1972 by Cookie Rodriguez.

Rodriguez was addicted to drugs while living in New York City. She eventually sought help from Teen Challenge Ministry a program at the time offered primarily to men.

After getting clean, Rodriguez felt compelled to open a program specifically for women, according to New Life For Girls Executive Director José Pacheco.

"She saw that there was a need to be able to get these women out of the city of New York, out of their environment, and bring them somewhere," he said. "They ended up in York County."

José Pacheco started working at New Life For Girls 49 years ago. He is retiring Oct. 1 and will be replaced by current assistant director Kathy Kelly Brown.

There is a $140 one-time processing fee for women who first join. If a woman is unable to pay, however, the fee is waived.

Financial support for New Life For Girls primarily comes from donations from individuals, groups and churches.

Tabitha Pacheco, who first came to New Life For Girls out of prison, now works for the organization overseeing the prison ministry.

She found a new life within the program and, ultimately, a life partner as well. After she graduated, she returned and married the program's director.

"I get to work in the prisons where I came from," Pacheco said. "I get to speak at the same prison New Life From Girls got me from. They let me in and out of the doors freely and so it's just really cool and like a complete turnaround for me."

Many faculty working at New Life For Girls were at one point involved in the program themselves, like Pacheco and Brown.


"I was a functioning, dysfunctional person. Just broken and empty on the inside until I found out about the program," Brown said. "New Life For Girls gave me a sense of purpose and a passion for other people and there was no looking back."

Some say New Life For Girls is like a "fishbowl" environment, Brown said.

She referenced a common expression used at the nonprofit: "sandpaper sisters."

"We kind of rub up against each other the good, the bad and the ugly," Brown said. "I always tell the ladies, you make me better and I make you better."

On the 27-acre property sits a Christian cross-shaped building where all students in the program eat, sleep and learn.

Sounds of nature and the soft rustle of trees mix with lectures being taught and lives reshaping.

"This is a mountain where miracles take place," Brown said. "Lives are transformed."
