Jenkins: The Ultimate Guide to Automating Your Development Process – Techstry

Do you spend hours manually running build scripts and deploying applications to servers each day? If so, you should consider using Jenkins to automate your development process. Jenkins is a powerful open-source tool that can help you speed up the deployment of new code changes to production environments. With Jenkins, you can automate the build and push of a Docker image to Docker Hub. This will save time on your end as well!

Jenkins is written in Java and has a rich plugin ecosystem that integrates with the most popular software development tools.

If you want to get started with Jenkins and begin automating your development process, there are a few things you need to do.

First, you need to install Jenkins on your server. You can do this by downloading the latest version of Jenkins from the official website (jenkins.io).

Once you have installed Jenkins, you need to create a new job. To do this, click on the New Item link in the left-hand navigation menu.

On the job configuration page, you need to specify the job's source code repository, build triggers, and build or deployment steps.

Once you have saved your job, Jenkins will begin automatically building your code and deploying it to your production environment.

You can view the status of your builds by going to the Build History page in the left-hand navigation menu.
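To make the walkthrough above concrete, here is a minimal sketch of driving a Jenkins job from a script through Jenkins' JSON REST API. It is an illustration only: the server URL, job name, username, and API token are placeholders, and a production script would track the queue item returned when the build is triggered rather than simply polling the last build.

```python
# Minimal sketch: trigger a Jenkins job and poll its result via the REST API.
# JENKINS_URL, JOB_NAME, and AUTH are placeholders you must supply. An API
# token (created under your Jenkins user profile) lets basic auth work
# without a CSRF crumb on recent Jenkins versions.
import time
import requests

JENKINS_URL = "http://jenkins.example.com:8080"   # placeholder
JOB_NAME = "my-app-build"                         # placeholder job created via "New Item"
AUTH = ("alice", "11abcdef...")                   # username + API token (placeholders)

def trigger_build():
    # POST /job/<name>/build queues a new run of the job.
    resp = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH)
    resp.raise_for_status()

def wait_for_last_build(poll_seconds=10):
    # Poll the most recent build until Jenkins reports a final result.
    # Note: right after triggering, "lastBuild" may still point at the
    # previous run; a robust script would follow the queue item instead.
    while True:
        info = requests.get(
            f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json", auth=AUTH
        ).json()
        if not info.get("building") and info.get("result"):
            return info["result"]   # e.g. "SUCCESS" or "FAILURE"
        time.sleep(poll_seconds)

if __name__ == "__main__":
    trigger_build()
    print("Last build finished with:", wait_for_last_build())
```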


The global single cell bioinformatics software and services market was estimated to be at $205.2 million in 2020, which is expected to grow with a…


The growth in the global single cell bioinformatics software and services market is expected to be driven by an increasing number of bioinformatics services being offered for computational analysis and a rising number of open-source free platform providers offering single cell analysis software.

New York, April 27, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Single Cell Bioinformatics Software and Services Market - A Global and Regional Analysis: Focus on Product, Application, End-User, and Region - Analysis and Forecast, 2021-2031" - https://www.reportlinker.com/p06272146/?utm_source=GNW

Market Lifecycle Stage

The single cell bioinformatics software and services market is still in the nascent phase. Significant increases in research and development activities pertaining to single cell analysis are underway to develop single cell bioinformatics software and services-based products, which are expected to increase due to the rising burden of chronic diseases such as cancer.

Researchers are generating data that have the potential to lead to unprecedented biological insight, albeit at the cost of greater complexity in data analysis. Increasing investment in the R&D of cell bioinformatics software and services, along with various research funding, is one of the major opportunities in the global cell bioinformatics software and services market.

Impact

The presence of major service providers of single cell bioinformatics and services in regions such as North America and Europe has a major impact on the market. For instance, Illumina, Inc. provides clinical sequencing bioinformatics services. The service includes TruGenome undiagnosed disease test, a clinical whole-genome sequencing test for patients with a suspected rare and undiagnosed genetic disease. Companies such as QIAGEN provide QIAGEN discovery bioinformatics services. The services are a reliable and convenient way to expand in-house resources with expertise and perfectly tailored bioinformatics services that ensure quality results. The presence of these companies has a positive impact on the market growth.

Impact of COVID-19

Most of the single cell studies focused on the progression of COVID-19 and have provided important molecular and immune characteristics. Researchers have analyzed large-scale integrated single cell data, which included single cell sequencing data of 140 different types of samples from 104 COVID-19 patients.

Through this, they gained insights into the immune cell proportions of peripheral blood mononuclear cells (PBMCs), T cell receptor (TCR) clone diversity, and other characteristics in convalescent patients. Another research study profiled the adaptive immune cells of PBMCs from recovered COVID-19 patients with varying disease severity using single cell RNA sequencing, single cell TCR sequencing, and single cell BCR sequencing.

Through these studies, scientists have discovered a phenomenon that is hard to explain: convalescent patients with high serum anti-spike titers produce a higher proportion of non-neutralizing antibodies. Researchers have also performed single cell RNA sequencing on COVID-19 patient samples, which helped them compare differences in gene expression patterns across patients of varying infection severity and across organ types to analyze organ injury.

They also found that COVID-19 infection activates inflammatory genes and pathways, which results in tissue damage to the liver, lungs, kidneys, and heart. Through their study, they concluded that treating these symptoms at the organ level with therapeutic targets along the inflammatory pathway axis can lead to a better prognosis for severely infected patients.

Market Segmentation:

Segmentation 1: by Product
- Bioinformatics Software
- Bioinformatics Services

The global single cell bioinformatics software and services market in the products segment is expected to be dominated by the bioinformatics services segment. This is due to an increasing number of bioinformatics service providers offering services to their end users.

Segmentation 2: by Application
- Oncology
- Immunology
- Neurology
- Non-invasive Prenatal Diagnosis (NIPD)
- Microbiology
- Other Applications

The global single cell bioinformatics software and services market is dominated by the oncology segment owing to an increasing number of patients suffering from cancer. According to data published by the World Health Organization, cancer is a leading cause of death, with nearly 10 million deaths reported in 2020.

Segmentation 3: by End-User
- Research and Academic Institutes
- Biopharmaceutical Companies
- Others

The research and academic institutes segment dominates the global single cell bioinformatics software and services market due to the increasing research and development activities in academic institutes and the focus of researchers on single cell analysis.

Segmentation 4: by Region
- North America: U.S., Canada
- Europe: U.K., Germany, France, Italy, Spain, and Rest-of-Europe
- Asia-Pacific: Japan, China, India, South Korea, Australia, and Rest-of-Asia-Pacific
- Latin America: Brazil, Mexico, and Rest of Latin America
- Rest-of-the-World

North America generated the highest revenue of $103.0 million in 2020, which is attributed to the R&D advancements in the field of single cell analysis and the presence of dominating players operating in the single cell bioinformatics and services market.

Recent Developments in Global Single Cell Bioinformatics Software and Services Market

- In December 2019, Nanostring developed a strategic license for diagnostic equipment based on nCounter for single cell and rights to Veracyte for $50 million and Veracyte stock, plus up to $10 million after satisfying future events.
- In June 2021, Bruker Corporation launched the Tims TOF SCP system to provide quantitative single-cell 4D-Proteomics.
- In September 2019, IsoPlexis expanded the sales of its single-cell proteomic research platforms, IsoLight and IsoCode Chip, to the Japanese pharmaceutical and life sciences industry by partnering with BioStream Co.
- In January 2021, the company launched a new standard in spatial biology for high-speed imaging of whole slides at single-cell and sub-cellular resolution.

Demand Drivers and Limitations

Following are the demand drivers for the single cell bioinformatics software and services market:
- Rapid Development in the Single Cell Technologies
- Advancing Field of Disease Diagnosis and Drug Discovery
- Investments by the Biopharmaceutical Companies

The market is expected to face some limitations too due to the following challenges:
- Analytical Challenges in Metabolite Analysis
- Lack of Spatial-Temporal Context
- Barriers to Carry Out Effective Single Cell Analysis

How Can This Report Add Value to an Organization?

Product/Innovation Strategy: The product segment helps the reader understand the two types of products, i.e., single cell bioinformatics software and services. These services are the major focus of the study as these services are the target of market players in terms of revenue generation. Moreover, the study provides the reader with a detailed understanding of the different applications such as oncology, immunology, NIPT, microbiology, and others.

Growth/Marketing Strategy: Single cell RNA sequencing requires a lengthy pipeline comprising RNA extraction, single cell sorting, reverse transcription, library construction, amplification, sequencing, and subsequent bioinformatic analysis. Bioinformatics analysis is carried out with the help of various open-source platforms along with various bioinformatics services.

Providing these services is key for market players looking to excel in the current bioinformatics software and services market.

Competitive Strategy: The key players in the global single cell bioinformatics software and services market analyzed and profiled in the study include manufacturers that provide single cell bioinformatics software and services. Moreover, a detailed competitive benchmarking of the players operating in this market has been done to help the reader understand how players stack up against each other, presenting a clear market landscape.

Additionally, comprehensive competitive strategies such as partnerships, agreements, and collaborations will aid the reader in understanding the untapped revenue pockets in the market.

Key Market Players and Competition Synopsis

The companies that are profiled have been selected based on inputs gathered from primary experts and analyzing company coverage, product portfolio, and market penetration.

The leading segment players are single cell bioinformatics service providers, which capture around 95% of the market presence. Bioinformatics software contributes around 5% of the market presence.

Some of the prominent names established in this market are:

Company Type 1: Bioinformatics Services
- Fluidigm Corporation
- QIAGEN
- Mission Bio
- Illumina, Inc.

Company Type 2: Bioinformatics Software
- Takara Inc.
- BD
- PacBio

Companies that are not a part of the above-mentioned pool have been well represented across different sections of the report (wherever applicable).

Countries Covered
- North America: U.S., Canada
- Europe: Germany, Italy, France, U.K., Spain, Rest-of-Europe
- Asia-Pacific: Japan, China, India, Australia, South Korea, Rest-of-Asia-Pacific
- Latin America: Brazil, Mexico, Rest-of-Latin America
- Rest-of-the-World

Read the full report: https://www.reportlinker.com/p06272146/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Open data is a blessing for science, but it comes with its own curses – Popular Science

Imagine that you're hiking, and you encounter an odd-looking winged bug that's almost bird-like. If you open the Seek app by iNaturalist and point it at the mystery critter, the camera screen will inform you that what you're looking at is called a hummingbird clearwing, a type of moth active during the day. In a sense, the Seek app works a lot like Pokémon Go, the popular augmented reality game from 2016 that had users searching outdoors for elusive fictional critters to capture.

Launched in 2018, Seek has a similar feel. Except when users point their camera to their surroundings, instead of encountering a Bulbasaur or a Butterfree, they might encounter real world plant bulbs and butterflies that their camera identifies in real-time. Users can learn about the types of plants and animals they come across, and can collect badges for finding different species, like reptiles, insects, birds, plants, and mushrooms.

How iNaturalist can correctly recognize (most of the time, at least) different living organisms is thanks to a machine-learning model that works off of data collected by its original app, which first debuted in 2008 and is simply called iNaturalist. Its goal is to help people connect to the richly animated natural world around them.

The iNaturalist platform, which boasts around 2 million users, is a mashup of social networking and citizen science where people can observe, document, share, discuss, learn more about nature, and create data for science and conservation. Outside of taking photos, the iNaturalist app has extended capabilities compared to the gamified Seek. It has a news tab, local wildlife guides, and organizations can also use the platform to host data collection projects that focus on certain areas or certain species of interest.

When new users join iNaturalist, they're prompted to check a box that allows them to share their data with scientists (although you can still join if you don't check the box). Images and information about their location that users agree to share are tagged with a creative commons license; otherwise, it's held under an all-rights-reserved license. About 70 percent of the app's data on the platform is classified as creative commons. "You can think of iNaturalist as this big open data pipe that just goes out there into the scientific community and is used by scientists in many ways that we're totally surprised by," says Scott Loarie, co-director of iNaturalist.

This means that every time a user logs or photographs an animal, plant, or other organism, that becomes a data point that's streamed to a hub in the Amazon Web Services cloud. It's one out of over 300 datasets in the AWS open data registry. Currently, the hub for iNaturalist holds around 160 terabytes of images. The data collection is updated regularly and open for anyone to find and use. iNaturalist's dataset is also part of the Global Biodiversity Information Facility, which brings together open datasets from around the world.

iNaturalist's Seek is a great example of an organization doing something interesting and otherwise impossible without a large, open dataset. These kinds of datasets are both a hallmark and a driving force of scientific research in the information age, a period defined by the widespread use of powerful computers. They have become a new lens through which scientists view the world around us, and have enabled the creation of tools that also make science accessible to the public.

[Related: Your Flickr photos could help scientists keep tabs on wildlife]

iNaturalist's machine learning model, for one, can help its users identify around 60,000 different species. "There's two million species living around the world, we've observed about one-sixth of them with at least one data point and one photo," says Loarie. "But in order to do any sort of modeling or real synthesis or insight, you need about 100 data points [per species]." The team's goal is to have 2 million species represented. But that means they need more data and more users. They're trying to create new tools, as well, that help them spot weird data, correct errors, or even identify emerging invasive species. "This goes along with open data. The best way to promote it is to get as little friction as possible in the movement of the data and the tools to access it," he adds.

Loarie believes that sharing data, software code, and ideas more openly can create further opportunities for science to advance. "My background is in academia. When I was doing it, it was very much this publish or perish, your data stays on your laptop, and you hope no one else steals your data or scoops you [mindset]," he says. "One of the things that's really cool to see is how much more collaborative science has gotten over the last few decades. You can do science so much faster and at such bigger scales if you're more collaborative with it. And I think journals and institutions are becoming more amenable to it."

Over the last decade, open data (data that can be used, adapted, and shared by anyone) has been a boon in the scientific community, riding on a growing trend of more open science. Open science means that any raw data, analysis software, algorithms, papers, and documents used in a project are shared early as part of the scientific process. In theory, this would make studies easier to reproduce.

In fact, many government organizations and city offices are releasing open datasets to the public. A 2012 law requires New York City to share all of its non-confidential data collected by various agencies for city operations through an accessible web portal. In early spring, NYC hosts an open data week highlighting datasets and research that has used them. A central team at the Office of Technology and Information, along with data coordinators from each agency, helps establish standards and best practices, and maintain and manage the infrastructure for the open data program. But for researchers who want to outsource their data infrastructure, places like Amazon and CERN offer services to help organize and manage data.

[Related: The Ten Most Amazing Databases in the World]

This push toward open science was greatly accelerated during the recent COVID-19 pandemic, during which an unprecedented number of discoveries were shared near-instantaneously for COVID-related research and equipment designs. Scientists rapidly publicized genetic information on the virus, which aided vaccine development efforts.

"If the folks who had done the sequencing had held it and guarded it, it would've slowed the whole process down," says John Durant, a science historian and director of the MIT Museum.

The move to open data is partly about trying to ensure transparency and reliability, he adds. "How are you going to be confident that results being reported are reliable if they come out of a dataset you can't see, or an algorithmic process you can't explain, or a statistical analysis that you don't really understand? Then it's very hard to have confidence in the results."

Open data cannot exist without lots and lots of data in the first place. In this glorious age of big data, this is an opportunity. "From the time when I trained in biology, way back, you were using traditional techniques, the amount of information you had, they were quite important, but they were small," says Durant. "But today, you can generate information on an almost bewildering scale." Our ability to collect and accrue data has increased exponentially in the last few decades thanks to better computers, smarter software, and cheaper sensors.

"A big dataset is almost like a universe of its own," Durant says. "It has a potentially infinite number of internal mathematical features, correlations, and you can go fishing in this until you find something that looks interesting." Having the dataset open to the public means that different researchers can derive all kinds of insights from varying perspectives that deviate from the original intention for the data.

"All sorts of new disciplines, or sub-disciplines, have emerged in the last few years which are derived from a change in the role of data," he adds, with data scientists and bioinformaticians as just two out of numerous examples. "There are whole branches of science that are now sort of meta-scientific, where people don't actually collect data, but they go into a number of datasets and look for higher level generalizations."

Many of the traditional fields have also undergone technological revamps. Take the environmental sciences. "If you want to cover more ground, more species, over a longer period of time, that becomes intractable for one person to manage without using technology tools or collaboration tools," says Loarie. "That definitely pushed the ecology field more into the technical space. I'm sure every field has a similar story like that."

[Related: Project Icarus is creating a living map of Earths animals]

But with an ever-growing amount of data, our ability to wrangle these numbers and stats manually becomes virtually impossible. "You would only be able to handle these quantities of data using very advanced computing techniques. This is part of the scientific world we live in today," Durant adds.

That's where machine learning algorithms come in. These are software or computer commands that can calculate statistical relationships in the data. Simple algorithms using limited amounts of data are still fairly comprehensible. If the computer makes an error, you can likely trace back to where the error occurred in the calculation. And if these are open source, then other scientists can look at the code instructions to see how the computer got the output from the input. But more often than not, AI algorithms are described as a black box, meaning that the researchers who created it don't even fully understand what's going on inside and how the machine is arriving at the decision it's making. And that can lead to harmful biases.

This is one of the core challenges that the field faces. "Algorithmic bias is a product of an age where we are using big data systems in ways that we do or sometimes don't fully have control over, or fully know and understand the implications of," Durant says. This is where making data and code open can help.

[Related: Artificial intelligence is everywhere now. This report shows how we got here.]

Another problem that researchers have to consider is maintaining the quality of big datasets, which can impinge on the effectiveness of analytics tools. This is where the peer-review process plays an important role. Loarie has observed that the field of data and computer science moves incredibly fast with publishing and getting findings out on the internet, whether it's through preprints, electronic conference papers, or some other form. "I do think that the one thing that the electronic version of science struggles with is how to scale the peer-review process, which keeps misinformation at bay," he says. This kind of peer review is important, for example, in iNaturalist's data processing, too. Loarie notes that although the quality of data from iNaturalist as a whole is very high, there's still a small amount of misinformation they have to check through community management.

Lastly, having science that is open creates a whole set of questions around how funding and incentives might change, an issue that experts have been actively exploring. Storing huge amounts of data certainly is not free.

"What people don't think about, that for us is almost more important, is that to move data around the internet, there's bandwidth charges," Loarie says. "So, if someone were to download a million photos from the iNaturalist open data bucket, and wanted to do an analysis of it, just downloading that data incurs charges."
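As a rough illustration of the kind of access Loarie is describing, the sketch below lists a few objects in a public AWS Open Data bucket using anonymous (unsigned) requests with boto3. The bucket name and key prefix are assumptions made for illustration; check the dataset's Registry of Open Data page for the actual layout.

```python
# Minimal sketch of browsing a public AWS Open Data bucket anonymously.
# The bucket name and prefix are assumptions for illustration only.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "inaturalist-open-data"   # assumed bucket name
PREFIX = "photos/"                 # assumed key prefix

# An unsigned config lets you read a public bucket without AWS credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")

# Downloading is one call per object; at scale, this is exactly the kind
# of egress that generates the bandwidth charges described above.
# s3.download_file(BUCKET, "photos/example.jpg", "example.jpg")
```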

iNaturalist is a small nonprofit that's part of the California Academy of Sciences and National Geographic Society. That's where Amazon is helping. The AWS Open Data Sponsorship Program, launched in 2009, covers the cost of storage and the bandwidth charges for datasets it deems of high value to user communities, Maggie Carter, global lead of AWS Global Social Impact, says in an email. They also provide the computer codes needed to access the data and send out notifications when datasets are updated. Currently, they sponsor around 300 datasets through this program, ranging from audio recordings of rainforests and whales to satellite imagery to DNA sequences to US Census data.

At a time where big data centers are getting closely scrutinized for their energy use, Amazon sees a centralized open data hub as more energy-efficient compared to everyone in the program hosting their own local storage infrastructure. "We see natural efficiencies with an open data model. The whole premise of the AWS Open Data program is to store the data once, and then have everyone work on top of that one authoritative dataset. This means less duplicate data that needs to be stored elsewhere," Carter says, which she claims can result in a lower overall carbon footprint. Additionally, AWS is trying to run their operations with 100 percent renewable energy by 2025.

Despite challenges, Loarie thinks that useful and applicable data should be shared whenever possible. Many other scientists are on board with this idea. Another platform from Cornell University, eBird, uses citizen science efforts as well to accrue open data for the scientific community; eBird data has also been translated back into tools for its users, like bird song ID, that aim to make it easier and more engaging to interact with wildlife in nature. Outside of citizen science, some researchers, like those working to establish a Global Library of Underwater Biological Sound, are seeking to pool professionally collected data from several institutions and research groups together into a massive open dataset.

"A lot of people hold onto data, and they hold onto proprietary algorithms, because they think that's the key to getting the revenue and the recognition that's going to help their program be sustainable," says Loarie. "I think all of us who are involved in the open data world, we're kinda taking a leap of faith that the advantages of this outweigh the cost."


When It Comes To Header Bidding, Will Google Play Fair With FLEDGE? – AdExchanger

The Sell Sider is a column written by the sell side of the digital media community.

Today's column is written by Lukasz Wlodarczyk, VP of programmatic ecosystem growth and innovation at RTB House.

Google and Meta (formerly Facebook) have come under fire for a secret agreement known as Jedi Blue. Back in 2018, Google allegedly promised Facebook preferential treatment in ad exchange auctions in return for Facebook withdrawing its support for header-bidding auction solutions, which directly competed with Google's own.

Google has never had a warm relationship with header-bidding solutions. Indeed, Google implemented Exchange Bidding in Dynamic Allocation (EBDA), also known as open bidding, which became a server-side competitor to header bidding.

Now, a key proposal from Google Chrome's Privacy Sandbox, FLEDGE, presents an opportunity for a more transparent process. Moving two-level auctions to the browser seems perfect for client-side auctions, just like the header-bidding solutions that publishers like. However, there are concerns around whether the FLEDGE proposals will treat all supply-side platforms (SSPs) equally in programmatic auctions within Google's marketplace.

In theory, the FLEDGE proposal offers a solution comparing all bidding partners on a level playing field, similar to how today's header-bidding solutions work. However, there are question marks around whether this is the path that the Google Ad Manager team will choose to go down.

Can Google play fairly?

Current header-bidding solutions are a work-around to allow external demand to compete in Google Ad Manager. Header-bidding demand is transparent and easily auditable in a browser. The logic to select from that demand is governed by an open-source consortium and configured by the publisher. As a result, the header-bidding auction is auditable by the publisher and auction participants.

However, as it stands, the header-bidding auction winner is passed into Google Ad Manager so that it can compete in an (opaque) ad server auction. Google Ad Manager performs the final ad selection, choosing between header-bidding demand and Google Ad Exchange demand. This leads to accusations that Google sometimes unfairly favors its own demand during auctions.

In the future, will Google try to retain its position of top-level auctioneer? There's no straightforward answer yet.

Industry experts like Joel Meyer, chief architect at OpenX, and Aram Zucker-Scharff, engineering lead for privacy and security compliance at The Washington Post, agree that transparency of auctions and equal treatment of all market participants should be a top priority for Google. And Google has already shown its willingness to adapt its proposals based on the feedback of the global ad tech community.

But the debate surrounding the future of ad auctions is still a live one.

A more level playing field

To move forward, the industry needs an independent top-level auction handler (independent from a specific ad server) that will guarantee a level playing field for advertisers, DSPs, SSPs and publishers. The auction should be equal for all participants.

There is no specific reason why Google Ad Manager should fill the role of top-level auctioneer. It could just as easily participate in component auctions on an equal footing with other SSPs. Other entities, be it the SSPs or, for example, Prebid, should be just as capable of taking on the role of top-level auctioneer.
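For illustration only, here is a toy sketch of the fairness property being argued for: each SSP's component auction produces a winner, and a neutral top-level step simply picks the highest bid, with no participant given special treatment. This is not the FLEDGE/Protected Audience API or Prebid code, just a simplified model of a two-level auction.

```python
# Toy model of a two-level auction with a neutral top-level auctioneer.
# Not the FLEDGE API -- just an illustration of "equal treatment": the
# top level only compares component winners, using one rule for everyone.
from dataclasses import dataclass

@dataclass
class Bid:
    buyer: str
    price_cpm: float  # bid expressed as CPM

def component_auction(bids: list[Bid]) -> Bid | None:
    # Each SSP runs its own component auction (highest bid wins here).
    return max(bids, key=lambda b: b.price_cpm, default=None)

def top_level_auction(component_winners: dict[str, Bid]) -> tuple[str, Bid]:
    # The independent top-level auctioneer sees only the component winners
    # and applies the same rule to every seller, including Google's exchange.
    seller, bid = max(component_winners.items(), key=lambda kv: kv[1].price_cpm)
    return seller, bid

if __name__ == "__main__":
    winners = {
        "ssp_a": component_auction([Bid("dsp1", 2.10), Bid("dsp2", 2.45)]),
        "ssp_b": component_auction([Bid("dsp3", 2.60)]),
        "google_ad_exchange": component_auction([Bid("dsp4", 2.30)]),
    }
    seller, bid = top_level_auction(winners)
    print(f"Impression goes to {bid.buyer} via {seller} at ${bid.price_cpm} CPM")
```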

The Prebid model as an open-source code is desirable due to its impartiality. An impartial top-level auctioneer would allow publishers and buyers to transact openly and fairly, with assurances that no party is preferred, a statement that can be audited in an open codebase if needed. Support from the Prebid community and an open Prebid model would guarantee higher adoption, better trust, and broader support from publishers and ad tech vendors in the origin trials. This would lead to a multi-SSP auction landscape, which could deliver real benefits for the entire digital ad ecosystem.

Under this model, Google Authorized Buyers would participate in component auctions equally with other SSPs and have no way to skip bids directly to the top-level auction. Likewise, Google would not have any means to create artificial technical blocks to other SSPs willing to participate directly in FLEDGE auctions, which would, in turn, incentivize them to participate via EBDA (open bidding).

Publishers and their vendors should also have the technical means to compare demand in GAM without using EBDA.

Developing a solution based on these proposals will lay the foundation for a more transparent and equal playing field for the entire industry to the benefit of everyone.

Follow RTB House (@RTBHouse) and AdExchanger (@adexchanger) on Twitter.


Synopsys Named a Leader in the 2022 Gartner Magic Quadrant for Application Security Testing for Sixth Consecutive Year – Yahoo Finance

Synopsys Consistently Placed Highest in Ability to Execute and Completeness of Vision Four Years in a Row

MOUNTAIN VIEW, Calif., April 21, 2022 /PRNewswire/ -- Synopsys, Inc. (Nasdaq: SNPS), today announced it has been named by Gartner, Inc. as a Leader in the "Magic Quadrant for Application Security Testing" for the sixth consecutive year.1 In the report, Gartner evaluated 14 application security testing vendors based on their Completeness of Vision and Ability to Execute. Synopsys placed highest in Ability to Execute and Completeness of Vision for the fourth year in a row.

As the speed and complexity of development increases and the occurrence of high-impact application security breaches becomes more frequent, security and development teams are looking to integrate and automate security testing as part of their software development activities.

According to the authors of the report, "Gartner continues to observe that the major driver in the evolution of the AST market is the need to support enterprise DevSecOps and cloud-native application initiatives. Customers require offerings that provide high-assurance, high-value findings, while not unnecessarily slowing down development efforts. Clients expect offerings to fit earlier into the development process, with testing often driven by developers, rather than security specialists. As a result, this market evaluation focuses heavily on the buyer's needs involving support of rapid and accurate testing for various application types, capable of integration in an increasingly automated fashion throughout software delivery workflows."

"Recent high-profile vulnerabilities and software supply chain attacks have highlighted that managing software risk is becoming increasingly complex," said Jason Schmitt, general manager of the Synopsys Software Integrity Group. "Organizations need a variety of integrated and interoperable application security solutions to address risks across the SDLC and the broader software supply chainsolutions that help them prioritize their remediation efforts while maintaining the velocity of their development workflows. We have made significant investments in these areas over the past year, including the release of new Rapid Scan capabilities for Coverity SAST and Black Duck SCA, the launch of Code Sight Standard Edition, a standalone version of our IDE plugin for developer-driven testing, and the acquisition of Code Dx, an open platform that helps security and development teams correlate and prioritize security findings across their AST tool portfolio. We believe our continued recognition by Gartner as a Leader in application security testing validates our strategy and ability to address the evolving needs of the market."


Download a complimentary copy of the 2022 Gartner Magic Quadrant for Application Security Testing to learn more.

Over the past year, the Synopsys Software Integrity Group has announced several new offerings and initiatives that have contributed to the business's growth and momentum:

In June of 2021, Synopsys acquired Code Dx, the provider of an award-winning application security risk management solution that automates and accelerates the aggregation, correlation, deduplication, and prioritization of software vulnerabilities from Synopsys' broad portfolio of solutions as well as more than 100 third-party commercial and open source products. Code Dx provides consolidated risk reporting that creates a system of record for application security testing and enables a unique view into the risk associated with an organization's software.

In July of 2021, Synopsys announced the availability of new Rapid Scan capabilities within the company's Coverity static application security testing (SAST) and Black Duck software composition analysis (SCA) solutions. The Rapid Scan features provide fast, lightweight vulnerability detection for both proprietary and open source code. Rapid Scan is optimized for the early stages of development, particularly for cloud-native applications and infrastructure-as-code (IaC).

In February of 2022, Synopsys announced the general availability of Code Sight Standard Edition, a standalone version of the Code Sight plugin for integrated development environments (IDE) that enables developers to quickly find and fix security defects in source code, open source dependencies, infrastructure-as-code files, and more before they commit their code.

In October of 2021, Synopsys enhanced its Black Duck software composition analysis solution to address customers' emerging needs around software supply chain security. The enhancements enable Black Duck customers to produce a software bill of materials (SBOM) in the standardized SPDX 2.2 format approved by NIST, a capability that is increasingly important for software vendors looking to comply with Executive Order 14028.

Synopsys continues to invest in its "partner first" go-to-market approach by expanding its global channel partner network and enhancing the benefits and operational support in its partner program to better serve the channel. As a result, Synopsys has experienced significant growth and momentum in indirect sales through an expanded ecosystem of resellers, managed service providers, system integrators and consulting firms providing solutions and services to our customers. Synopsys recently received a 5-star rating in the 2022 CRN Partner Program Guide.

1. Gartner, Inc. "Magic Quadrant for Application Security Testing" by Dale Gardner, Mark Horvath, and Dionisio Zumerle, April 18, 2022.

GARTNER and Magic Quadrant are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About the Synopsys Software Integrity Group

Synopsys Software Integrity Group provides integrated solutions that transform the way development teams build and deliver software, accelerating innovation while addressing business risk. Our industry-leading portfolio of software security products and services is the most comprehensive in the world and interoperates with third-party and open source tools, allowing organizations to leverage existing investments to build the security program that's best for them. Only Synopsys offers everything you need to build trust in your software. Learn more at http://www.synopsys.com/software.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry's broadest portfolio of application security testing tools and services. Whether you're a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at http://www.synopsys.com.

Editorial Contacts:
Mark Van Elderen
Synopsys, Inc.
650-793-7450
mark.vanelderen@synopsys.com


View original content:https://www.prnewswire.com/news-releases/synopsys-named-a-leader-in-the-2022-gartner-magic-quadrant-for-application-security-testing-for-sixth-consecutive-year-301530185.html

SOURCE Synopsys, Inc.


Google Trends Study Shows SHIB Is the Most Popular Crypto in the UK – Bitcoin News

22 days ago, Bitcoin.com News wrote about a Coin Insider trends study that combed through Google Trends data in the United States. According to the report, dogecoin was the most Googled cryptocurrency in the country. Another study published by askgamblers.com has covered similar data, but concentrated on the U.K.'s and Europe's Google searches. According to the report, while bitcoin is the most popular crypto asset in Europe, the study of the trends shows that the meme token shiba inu is the most popular in the United Kingdom.

This week Bitcoin.com News was sent a report from askgamblers.com that analyzes Google Trends (GT) data over the last year in order to find out what the most popular crypto assets are in the U.K. and Europe. According to the findings, bitcoin (BTC) is the most popular digital currency in Europe as it was the most searched crypto in 21 countries. BTC outpaced the competitors in the askgamblers.com study, as the leading crypto asset rules the roost in countries like Germany, Finland, Norway, Poland, Romania, and Belgium.

While bitcoin (BTC) was the top crypto across Europe, shiba inu (SHIB) is the most popular cryptocurrency in the U.K., according to the researchers' collected Google searches. The meme token SHIB saw a significant increase in popularity during the last 12 months. The study's findings show SHIB commands six different countries and the United Kingdom. In fact, SHIB is huge in Russia, France, Spain, Ukraine, Italy, Hungary, and Switzerland, in terms of GT searches.

Additionally, ethereum (ETH) was the third most popular in the study capturing interest from Sweden, Czechia, Latvia, and Slovenia. Then cardano (ADA) held the fourth position in terms of GT search data, as Andorra, the Netherlands, and Bulgaria showed a lot of interest in ADA. With dogecoin (DOGE) being the most popular in the U.S., it is the fifth in Europe as the meme crypto is popular in Albania and Greece.

"With 38 million crypto users in Europe, and thousands of cryptocurrencies on the market to choose from, it is fascinating to see which one people are the most interested in investing in," a spokesperson from askgamblers.com told Bitcoin.com's newsdesk. "Although bitcoin is the most popular overall, the interest in shiba inu has grown to surpass bitcoin in major countries such as Russia and the U.K."

In the U.S. research study published by Coin Insider, shiba inu (SHIB) only captured seven states across the country. Dogecoin was named the leader in that study as DOGE was the most popular in 23 states in the U.S., in terms of GT searches. According to the data in that specific report, SHIB ranked as the fourth most popular crypto in the country.

What do you think about the popularity of bitcoin in Europe and the shiba inu interest in the U.K.? Let us know what you think about this research study in the comments section below.

Jamie Redman is the News Lead at Bitcoin.com News and a financial tech journalist living in Florida. Redman has been an active member of the cryptocurrency community since 2011. He has a passion for Bitcoin, open-source code, and decentralized applications. Since September 2015, Redman has written more than 5,000 articles for Bitcoin.com News about the disruptive protocols emerging today.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.


A European approach to artificial intelligence | Shaping …

The European approach to artificial intelligence (AI) will help build a resilient Europe for the Digital Decade where people and businesses can enjoy the benefits of AI. It focuses on 2 areas: excellence in AI and trustworthy AI. The European approach to AI will ensure that any AI improvements are based on rules that safeguard the functioning of markets and the public sector, and people's safety and fundamental rights.

To help further define its vision for AI, the European Commission developed an AI strategy to go hand in hand with the European approach to AI. The AI strategy proposed measures to streamline research, as well as policy options for AI regulation, which fed into work on the AI package.

The Commission published its AI package in April 2021, proposing new rules and actions to turn Europe into the global hub for trustworthy AI. This package consisted of:

Fostering excellence in AI will strengthen Europe's potential to compete globally.

The EU will achieve this by:

The Commission and Member States agreed to boost excellence in AI by joining forces on AI policy and investment. The revised Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action.

Maximising resources and coordinating investments is a critical component of the Commission's AI strategy. Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade.

The newly adopted Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high quality data is an essential factor in building high performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.


Artificial intelligence in factory maintenance is no longer a matter of the future – ReadWrite

Undetected machine failures are the most expensive ones. That is why many manufacturing companies are looking for solutions that automate and reduce maintenance costs. Traditional vibrodiagnostic methods can be too late in many cases: taking readings only occasionally, in the presence of a diagnostician, may not detect a fault in advance. A 2017 position paper from Deloitte (Deloitte Analytics Institute 7/2017) examined maintenance in the environment of Industry 4.0. The benefits of predictive maintenance depend on the industry or the specific processes that it is applied to. However, Deloitte analyses at that time had already concluded that material cost savings amount to 5 to 10% on average. Equipment uptime increases by 10 to 20%. Overall maintenance costs are reduced by 5 to 10%, and maintenance planning time is even reduced by 20 to 50%! Neuron Soundware has developed an artificial intelligence-powered technology for predictive maintenance.

Stories from companies that have embarked on the digital journey are no longer just science fiction. They are real examples of how companies are coping with the lack of skilled labor on the market, typically the mechanic-maintainer who regularly goes around all the machines and diagnoses their condition by listening to them. Some companies are now looking for new maintenance technologies to take over that role.

A failure without early identification means replacing the entire piece of equipment or one of its parts, then waiting for a spare that may not be in stock, because it is expensive to keep replacement equipment on hand. It can also mean devaluation of the components currently in production and thus the discarding of an entire production run. Finally, yet importantly, it can represent up to XY hours of production downtime. The losses might run into tens of thousands of euros.

Such a critical scenario is not possible if the maintenance technology is equipped with artificial intelligence in addition to mechanical knowledge of the machines. The system applies this knowledge to the current state of the machine, recognizes which anomalous behavior is occurring, and, based on that, sends the corresponding alert with precise maintenance instructions. Manufacturers of mechanical equipment such as lifts, escalators, and mobile equipment already use this today.
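As a very rough sketch of the underlying idea (not Neuron Soundware's actual system), the snippet below learns a baseline of simple acoustic features from recordings of a healthy machine and flags later recordings whose features drift too far from that baseline, which is the point at which such an alert would be raised.

```python
# Illustrative sound-based anomaly detection using only NumPy.
# A simplified stand-in for the AI described above: it compares short-term
# RMS energy statistics of new audio against a baseline learned from
# known-good recordings of the machine.
import numpy as np

def frame_rms(signal: np.ndarray, frame_len: int = 2048) -> np.ndarray:
    # Root-mean-square energy per non-overlapping frame.
    n = len(signal) // frame_len * frame_len
    frames = signal[:n].reshape(-1, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def fit_baseline(healthy_clips: list[np.ndarray]) -> tuple[float, float]:
    # Mean and standard deviation of frame energies across healthy recordings.
    energies = np.concatenate([frame_rms(c) for c in healthy_clips])
    return float(energies.mean()), float(energies.std())

def is_anomalous(clip: np.ndarray, baseline: tuple[float, float], z_thresh: float = 4.0) -> bool:
    mean, std = baseline
    z = np.abs(frame_rms(clip) - mean) / (std + 1e-9)
    # Alert if a meaningful fraction of frames deviates strongly from baseline.
    return float((z > z_thresh).mean()) > 0.05

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = [rng.normal(0, 0.1, 48000) for _ in range(5)]      # stand-in audio clips
    # Simulated fault: rare loud impacts added on top of the normal hum.
    faulty = rng.normal(0, 0.1, 48000) + 0.5 * rng.normal(0, 1, 48000) * (rng.random(48000) > 0.99)
    baseline = fit_baseline(healthy)
    print("healthy clip anomalous?", is_anomalous(healthy[0], baseline))
    print("faulty clip anomalous? ", is_anomalous(faulty, baseline))
```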

However, predictive maintenance technologies have much wider applications. Thanks to the learning capabilities of artificial intelligence, they are very versatile. For instance, the technology can assist in end-of-line testing, identifying defective parts of produced goods that are invisible to the eye and appear randomly.

The second area of application lies in the monitoring of production processes. We can imagine this with the example of a gravel crusher. A conveyor delivers different sized pieces of stone into grinders, which are to yield a given granularity of gravel. Previously, the manufacturer would run the crusher for a predetermined amount of time to make sure that sufficient crushing occurred even in the presence of the largest pieces of rock. With artificial intelligence listening to the size of the gravel, the crushing process can be stopped at the right point. This means not only saving wear and tear on the crushing equipment but, more importantly, saving time and increasing the volume of gravel delivered per shift. This brings great financial benefit to the producer.

When implementing predictive maintenance technology, it does not matter how big the company is. The most common decision criterion is the scalability of the deployed solution. In companies with a large number of mechanically similar devices, it is possible to quickly collect samples that represent individual problems, from which the neural network learns. It can then handle any number of machines at once. The more machines, the more opportunities for the neural network to learn and apply detection of unwanted sounds.

Condition monitoring technologies are usually designed for larger plants rather than for workshops with a few machine tools. However, as hardware and data transmission and processing get progressively cheaper, the technology is getting there too. So even a home marmalade maker will soon have the confidence that their machines will make enough produce, deliver orders to customers on time, and not ruin their reputation.

In the future, predictive maintenance will be a necessity, in industry as well as in larger electronic appliances such as refrigerators and coffee machines, or in cars. For example, we can all recognize a damaged exhaust or an unusual-sounding engine. Nevertheless, by then it is often too late to drive the car safely home from a holiday without a visit to the workshop. With the installation of an AI-driven detection device, we will know about an impending breakdown in time and be able to resolve the problem before the engine seizes up and we have to call a towing service.

Pavel is a tech visionary, speaker, and founder of AI and IoT startup Neuron Soundware. He started his career at Accenture, where he took part in 35+ technology and strategy projects on 3 continents over 11 years. He got into entrepreneurship in 2016 when he founded a company focused on predictive machine maintenance using sound analysis.


Artificial Intelligence and Chemical and Biological Weapons – Lawfare

Sometimes reality is a cold slap in the face. Consider, as a particularly salient example, a recently published article concerning the use of artificial intelligence (AI) in the creation of chemical and biological weapons (the original publication, in Nature, is behind a paywall, but this link is a copy of the full paper). Anyone unfamiliar with recent innovations in the use of AI to model new drugs will be unpleasantly surprised.

Here's the background: In the modern pharmaceutical industry, the discovery of new drugs is rapidly becoming easier through the use of artificial intelligence/machine learning systems. As the authors of the article describe their work, they have spent decades building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.

In other words, computer scientists can use AI systems to model what new beneficial drugs may look like for specifically targeted afflictions and then task the AI to work on discovering possible new drug molecules to use. Those results are then given to the chemists and biologists who synthesize and test the proposed new drugs.

Given how AI systems work, the benefits in speed and accuracy are significant. As one study put it:

The vast chemical space, comprising >10^60 molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide a quicker validation of the drug target and optimization of the drug structure design.

Specifically, AI gives society a guide to the quicker creation of newer, better pharmaceuticals.

The benefits of these innovations are clear. Unfortunately, the possibilities for malicious uses are also becoming clear. The paper referenced above is titled "Dual Use of Artificial-Intelligence-Powered Drug Discovery." And the dual use in question is the creation of novel chemical warfare agents.

One of the factors investigators use to guide AI systems and narrow down the search for beneficial drugs is a toxicity measure, known as LD50 (where LD stands for lethal dose and the 50 is an indicator of how large a dose would be necessary to kill half the population). For a drug to be practical, designers need to screen out new compounds that might be toxic to users and, thus, avoid wasting time trying to synthesize them in the real world. And so, drug developers can train and instruct an AI system to work with a very low LD50 threshold and have the AI screen out and discard possible new compounds that it predicts would have harmful effects. As the authors put it, the normal process is to use a generative model [that is, an AI system, which] penalizes predicted toxicity and rewards predicted target activity. When used in this traditional way, the AI system is directed to generate new molecules for investigation that are likely to be safe and effective.
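A toy sketch makes this scoring logic concrete. In a real system the two terms would come from trained predictive models; here they are just placeholder functions, and the only point is that flipping the sign of the toxicity weight turns a drug-discovery objective into a toxin-discovery objective.

```python
# Toy illustration of the scoring logic described above -- not the actual
# generative model from the paper. predicted_activity and predicted_toxicity
# stand in for trained ML predictors (e.g., target binding and LD50-based risk).
def score(molecule, predicted_activity, predicted_toxicity, toxicity_weight=+1.0):
    # Conventional drug design: reward predicted activity, penalize predicted toxicity.
    return predicted_activity(molecule) - toxicity_weight * predicted_toxicity(molecule)

def best_candidates(molecules, predicted_activity, predicted_toxicity, toxicity_weight, k=5):
    # Rank candidate molecules by the combined objective and keep the top k.
    ranked = sorted(
        molecules,
        key=lambda m: score(m, predicted_activity, predicted_toxicity, toxicity_weight),
        reverse=True,
    )
    return ranked[:k]

# With toxicity_weight = +1.0 the search is steered toward safe, active molecules.
# Flipping it to -1.0 rewards predicted toxicity instead -- the "dual use"
# inversion the authors describe, with no other change to the pipeline.
```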

But what happens if you reverse the process? What happens if instead of selecting for a low LD50 threshold, a generative model is created to preferentially develop molecules with a high LD50 threshold?

One rediscovers VX gas, one of the most lethal substances known to humans. And one predictively creates many new substances that are even worse than VX.

One wishes this were science fiction. But it is not. As the authors put the bad news:

In less than 6 hours ... our model generated 40,000 [new] molecules ... In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents.

In other words, the developers started from scratch and did not artificially jump-start the process by using a training dataset that included known nerve agents. Instead, the investigators simply pointed the AI system in the general direction of looking for effective lethal compounds (with standard definitions of effectiveness and lethality). Their AI program then discovered a host of known chemical warfare agents and also proposed thousands of new ones for possible synthesis that were not previously known to humankind.

The authors stopped at the theoretical point of their work. They did not, in fact, attempt to synthesize any of the newly discovered toxins. And, to be fair, synthesis is not trivial. But the entire point of AI-driven drug development is to point drug developers in the right direction: toward readily synthesizable, safe and effective new drugs. And while synthesis is not easy, it is a pathway that is well trod in the market today. There is no reason, none at all, to think that the synthesis path is not equally feasible for lethal toxins.

And so, AI opens the possibility of creating new catastrophic biological and chemical weapons. Some commentators condemn new technology as inherently evil. The better view, however, is that new technology is neutral and can be used for good or ill. But that does not mean nothing can be done to avoid the malignant uses of technology. And there is real risk when technologists run ahead with what is possible before human systems of control and ethical assessment catch up. Using artificial intelligence to develop toxic biological and chemical weapons would seem to be one of those use cases where severe problems may lie ahead.

Go here to see the original:
Artificial Intelligence and Chemical and Biological Weapons - Lawfare

LEFT TO MY OWN DEVICES: Be smart. Welcome new artificial intelligence solutions. – Times Tribune of Corbin

The vast list of artificial intelligence applications continually increases as researchers, technologists, and scientists try to leverage computing power to gain competitive edges over the more slowly adopting set. Today I want to traipse across the American business and tech landscape and present a few of the new and hopefully intriguing upgrades of these mostly familiar devices and services being brought into the 21st century via AI.

First, a quick overview of the concept of AI and where it's come from over the past years and decades. Earlier writings comingled two phrases to identify the technology: artificial intelligence, which has become the well-known, marketable way to talk about the tech, and computational intelligence, which might be useful amongst a group of AI (err, CI?) subject matter experts but doesn't carry the cachet of the more widely accepted phrase. For anyone who uses either phrase, it's generally understood to refer to some sort of machine-based intelligence. Natural intelligence is the way to describe us humans' intellect. Depending on one's level, there are other ways to describe intelligence: "lacking," or "too good for one's own good," come to mind, for example.

Machines that perform AI functions are programmed to take in the various and sundry inputs of their surrounding environment, analyze the data, and perform some action that, when it all works, tends to be the best action considering those inputs. From a textbook perspective, you might see natural intelligence described similarly. We're strolling down Main Street, about to reach an intersection. We take in sights, sounds, all sorts of information and inputs. Then, we decide whether to wait or continue. Assuming de minimis human intelligence, the action we take will have maximized survival first, pace and progress too, and other complex results, all based on a process of intelligent decision-making.
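As a purely illustrative aside (mine, not the columnist's), that textbook sense-analyze-act description is often sketched as a simple loop; a toy version of the street-crossing example, with made-up inputs and rules, might look like this:

```python
# Toy sense-decide-act loop illustrating the textbook description above.
# The inputs and the decision rule are invented for illustration only.

def sense() -> dict:
    """Gather inputs from the environment (stubbed with fixed values here)."""
    return {"walk_signal": True, "approaching_car": False, "siren_heard": False}


def decide(inputs: dict) -> str:
    """Choose the action that best fits the inputs: safety first, then progress."""
    if inputs["siren_heard"] or inputs["approaching_car"]:
        return "wait"
    return "cross" if inputs["walk_signal"] else "wait"


def act(action: str) -> None:
    print(f"Action taken: {action}")


if __name__ == "__main__":
    act(decide(sense()))
```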

The foundational descriptions and ideas about AI go back around 20 to 25 years for most purposes of contemporary discussion, though I and others have, in the past, gone way back to the nineteenth century and Shelley's Frankenstein to demonstrate a variant of the two-word phrase: Dr. Frankenstein's monster displaying artificial intelligence in this sense. We might agree that, from whence it came to where AI is headed, the descriptive thread woven throughout is that something other than a sentient being considers information before taking some action. That rather generic description gives a wide range to what parts of modern-day living may benefit from AI technologies.

You reap those benefits every day already: Google searches, Amazon shopping, Netflix, Hulu, or any streaming platform. You're really enveloped in the AI landscape at home, at work, and even simply being out and about in your community. Anything, for example, that presents as a "Smart [Thing]" implicates AI. Also, you have likely enjoyed AI's functionality for longer than you might at first think. Remember the Ken Jennings Jeopardy! era? IBM's Watson computer was, essentially, an AI device earning millions of viewers a dozen years ago. If you've picked up a U.S. passport during the past 15 or so years, the facial recognition pieces of the process were driven by AI in ways that may be considered intrusive but, as you can imagine, definitely bolster national security. The ethics of AI is an entirely different, albeit important and ongoing, discussion.

To me, biased as I can be about tech advances, nearly everything that incorporates artificial intelligence is intriguing or even exciting. The future altogether, generally, drums up the same sentiments, though I'll admit that over time I catch myself going into "but in my day" mode such that I might pooh-pooh something new and improved. Oftentimes I get schooled. For a year I've had a barely functioning thermostat that I knew needed replacement. I resisted the continual advice to get a smart one. I finally buckled, and to my surprise, delight, and chagrin, I've truly enjoyed this tiny component of living a more comfortable life. But wait, there's more.

Forget room temps; how about AI functionality that senses your emotion? Consider a sales team that, whether due to the pandemic or just the new way of doing business, meets new clients via Zoom or some other video conferencing application. After a pitch meeting, where frankly both sides protect their interests by putting on a show of sorts, the team can analyze the call and see which hot issues garnered certain emotional reactions. Again, I'm passing on the ethical dilemmas evident in the tech.

Maybe more agreeable, scientists are developing a fascinating concept: AI colonialism. In New Zealand, researchers are trying to reverse the disproportionate negative effects of colonialism on minorities. Their angle? The AI functions to retain and increase the use of the Māori language. Contra the effects of human-intelligence colonialism (entering a political division from afar and imposing foreign ways on its people), AI colonialism is meant to create the opposite effects. Data, its proponents say, is the last realm of colonialism.

Within infrastructure, smart highways are on the horizon and, to some degree, in action already. Autonomous vehicles will require this marriage of roadways and tech, but the applications are nearer and wider still. In Sweden, road surfaces are being rebuilt with underlying charging capabilities, similar to the contactless charging of an iPhone, so electric vehicles charge as they travel. In the U.S., some metropolitan areas are experimenting with smart roadways in which, depending on traffic flows, emergencies, and other factors, lanes are reassigned by signage or powered barricades.

Nary an industry or sector is immune from these developments. From agriculture to litigation, AI is being taken up, or it soon will be. When we also take the time to consider the ethical implications, and find that they are in fact sound, it becomes a genuinely exciting time to see innovations come to life. It'd be smart of you to welcome these developments, or at least give them a chance.

Ed is a professor of cybersecurity, an attorney, and a trained ethicist. Reach him at edzugeresq@gmail.com.


View post:
LEFT TO MY OWN DEVICES: Be smart. Welcome new artificial intelligence solutions. - Times Tribune of Corbin