What Does SSL Stand For? A 10-Minute Look at the Secure Sockets Layer – Hashed Out by The SSL Store

What's SSL? SSL, or secure sockets layer, is the standard technology used to secure online communications. Let's take a quick look at what SSL is and what it does to enable your secure transactions online.

You know when you go to a website and see a padlock icon in your browser's address bar? That means the website is using SSL, or secure sockets layer. SSL secures your communication with the website so hackers can't eavesdrop and see your credit card number or password.

(Technically speaking, SSL is an outdated term because it's been replaced by a very similar but updated technology known as transport layer security, or TLS. But people still like to use the term SSL because it's been around longer and, therefore, is easier to remember.)

Today, we're taking a step back from more in-depth technical articles to take a quick look at the basics: what does SSL stand for? What is SSL? How does it work? And, of course, how can you protect your own website with SSL?

Let's hash it out.

SSL stands for secure sockets layer. In the simplest terms, SSL is a technology that's commonly used to securely send data (for example, credit card numbers or passwords) between a user's computer and a website. The term also describes a specific type of digital certificate (SSL certificate) that companies use to prove they own their domain. (We'll speak more about that a little later.)

SSL is a protocol (i.e., a set of rules computer systems follow when communicating with each other) that was created in the 1990s to allow web browsers to securely send sensitive info to and from a website. Nowadays, we rely on transport layer security (TLS) to handle these tasks, but the term SSL has stuck around, and that's the term most people use. We'll talk more about SSL certificates and TLS a little later in the article. But since you'll commonly see SSL and SSL/TLS used interchangeably across the internet, we're just going to use the term here as well to keep things simple.

If you're looking for a quick rundown of what SSL is and why it's important, check out our TL;DR overview section.

If you want to learn how to enable SSL/TLS on your website, just click on this link and we'll take you to that section of the article. But if you're interested in learning more about what SSL/TLS does and how you use it, then keep reading.

How do you know whether a website is using SSL? The answer is easy: your browser will tell you, usually in at least two ways:

The good news is that more and more websites are using SSL to keep site visitors like you and me secure. W3Techs reports that HTTPS is the default protocol for 79.6% of all websites, up from around 75% back in September 2021. Nice. It looks like we're moving in the right direction.

Here's a quick visual comparison of a website that's transmitting via the secure HTTPS protocol (using SSL/TLS) versus one that's using the insecure HTTP protocol:

If the website is using HTTP, any data sent from your browser to the server hosting the website risks being read, modified, or stolen in transit. As a website owner, that's really bad news for you and your customers because it means their data is exposed and you may be liable for not securing it in the first place.

Now that you understand the basics of what SSL stands for and what it does, let's take a brief look under the hood. How exactly does SSL protect website users and data against hackers?

SSL protects data while it's in transit (travelling between the user's browser and the website/web server). There are actually three different things SSL does to protect website users: it enables secure authentication, data encryption, and data integrity assurance. This allows you to:

All of these things are made possible through a cryptographic process known as an SSL handshake (AKA a TLS handshake). Much like how you introduce yourself to someone and shake their hand, your computer does the same with a website's server:

From there, some other technical steps take place that we aren't going to get into right now. (Check out the previously linked article for a more in-depth look at how different versions of the SSL/TLS handshake work.) Bada bing, bada boom: the end result is that your browser and the website server establish a secure connection through which you can transmit sensitive data (such as using your credentials to log in to a website).
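For the curious, the handshake described above can be sketched in a few lines of Python using the standard library's ssl module. This is a minimal client-side illustration, not a production tool, and the hostname is just an illustrative choice; wrapping the TCP socket in TLS performs the handshake, after which we can inspect what was negotiated.

```python
# A rough sketch of a client-side SSL/TLS handshake in Python.
# Wrapping the socket performs the handshake; afterward we can
# inspect the negotiated protocol version and cipher suite.
import socket
import ssl

hostname = "www.example.com"  # illustrative; any HTTPS-enabled site works

# The default context verifies the server's certificate chain and
# hostname, much as a browser would before showing the padlock.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The handshake has already completed by this point.
        print("Negotiated protocol:", tls.version())  # e.g. TLSv1.2 or TLSv1.3
        print("Cipher suite:", tls.cipher()[0])
```

If the server's certificate is invalid or expired, `wrap_socket` raises an `ssl.SSLCertVerificationError` instead of silently connecting, which is exactly the protection the padlock represents.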

Pretty cool, huh?

Remember how we mentioned an SSL certificate is part of the SSL handshake? Yep, that's a mandatory step: every website needs an SSL certificate before it can enable SSL/TLS. An SSL certificate is a digital file (issued to the website owner by a certificate authority such as DigiCert or Sectigo) that verifies them as the legitimate owner of the website.

What's the point of that? To help you assert your digital identity in a way that other entities (users, browsers, operating systems, etc.) can verify, proving you're legitimate and not an imposter. This way, when a user connects to your website, they know it's legitimate and can establish a secure, encrypted connection.

Here's a quick example of what the SSL certificate looks like for TheSSLstore.com:

For those of you who'd like a little more technical knowledge about what SSL stands for: the term SSL refers to the technology (a cryptographic protocol, or set of instructions) that makes secure communications possible. However, people sometimes use the same term to refer to a type of data file known as an SSL certificate (AKA a TLS certificate). This digital certificate is an X.509 file containing data that ties your or your organization's verifiable information to the domain.

As such, it's also known as a website security certificate because this information (along with other key cryptographic info it contains) helps to increase the security of your website's connections.

Ever visited a website and weren't sure whether it was legitimate or trustworthy? Knowing how to view the details in its SSL certificate can help you figure out what company is running the website, who they are, and whether they're a legit entity. (After all, you don't want to share your personal and sensitive details with a potential cybercriminal!)
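You can also pull those details yourself. As a sketch in Python (the hostname is illustrative; substitute the site you want to check), `getpeercert()` returns the certificate's parsed X.509 fields, including who it was issued to and by whom:

```python
# Fetch a site's SSL/TLS certificate and print the fields a browser
# shows in its certificate viewer: subject, issuer, and validity dates.
import socket
import ssl

hostname = "www.example.com"  # illustrative; substitute the site to check
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # parsed X.509 fields as a dict

# subject/issuer are tuples of relative distinguished names (RDNs),
# each itself a tuple of (field, value) pairs.
subject = dict(rdn[0] for rdn in cert["subject"])
issuer = dict(rdn[0] for rdn in cert["issuer"])

print("Issued to:", subject.get("commonName"))
print("Issued by:", issuer.get("organizationName"))
print("Valid from:", cert["notBefore"])
print("Valid until:", cert["notAfter"])
```

This surfaces the same subject and issuer information discussed below, just without the browser UI.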

As you can see, the left part of the above image provides general information about what the certificate is used for and which entity it was issued to. The right half of the image shows the Subject details, which provide additional verifiable information about our company. In this case, it provides the following information:

All of this information can easily be verified using official resources, such as the State of Florida's Division of Businesses website:

Of course, that's not all of the information that this type of digital certificate provides. It also informs you:

Now, let's really throw a wrench into things by talking more about a term we touched on earlier. TLS, or transport layer security, is an internet protocol so closely related to SSL that it's actually considered its official successor. There are some technical differences in how SSL and TLS work, but we're not going to dive into all of that here.

What you need to know is that when you're on a website that's secured by SSL, it's technically secured by TLS. Unfortunately, people often use the terms SSL and TLS interchangeably. This gets confusing because so many people and organizations (ours included) still tend to use the term SSL to describe both.

So, why do we still call it SSL? After all, it's a deprecated security protocol that was replaced by TLS back in 1999 after multiple iterations (SSL 1.0, 2.0, and 3.0). Frankly, it's most likely because people are slow to change. There's a strange tendency to stick to the terms we're familiar with, so it's easier for people to just call it SSL instead of TLS. (I guess, to quote a common adage, if it ain't broke, don't fix it.)

So, whether someone calls it SSL or TLS, unless they're talking about it at a highly technical level, they're generally referring to the same secure protocol that makes the padlock icon appear in your browser, or to the digital certificate file that plays a central role in making that happen.

Now that we've gotten all of that info out of the way, answering "what does SSL stand for?", you may be wondering how you can put SSL/TLS to use on your website. Good news: it's really easy. Just follow these five steps to make your secure website a reality:

Of course, once all of this is done, use an SSL/TLS checker tool to ensure that your certificate is properly installed and configured. This can help prevent surprise issues from coming your way.
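If you'd rather script that check than use a web tool, here's a hedged sketch in Python (the helper name and hostname are ours, for illustration, not from any particular checker tool) that reports how many days remain before a site's certificate expires:

```python
# Sketch of a simple certificate-expiry check: connect over TLS,
# read the certificate's notAfter field, and report days remaining.
import datetime
import socket
import ssl

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Return the number of days before hostname's certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like "Oct  6 12:00:00 2030 GMT"
    expires = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).days

if __name__ == "__main__":
    # Hostname is illustrative; point it at your own site.
    print(days_until_expiry("www.example.com"), "days left")
```

Running something like this on a schedule is a cheap way to catch the "surprise issues" mentioned above, such as a certificate quietly approaching its expiry date.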

Alright, that brings us to the end of this article, which we hope helped you better understand what SSL stands for. But if you've skipped ahead and are now just joining us for a quick overview, SSL (or, really, TLS) is a secure internet protocol that allows users to share their data securely with websites.

The three key processes that SSL facilitates are:

SSL, as a protocol, uses information provided by digital certificates that go by the same name (SSL certificates). Nowadays, these are technically TLS certificates, but hardly anyone actually bothers calling them that. (You know, because we're all a tad lazy and it's easier to call them what we've been calling them for years.) So, there you have it. Now you can show off your technical chops around the water cooler or during the next trivia night by being able to answer the question, "What does SSL stand for?"


Jailed for 11 years, Class A drug conspirator who attempted to outsmart police using encryption – About Manchester

Joseph McCormick, aged 41, of Bob Massey Close, Manchester, was this week jailed for 11 years and 4 months for his role in a conspiracy to supply Class A drugs.

He was found to have been using the alias Butternoon while messaging other Organised Crime Group members using encrypted mobile devices to prevent police from detecting their conversations.

Encrypted mobile phones have been used by Organised Crime Groups for a number of years.

When the phones first came into operation, they were usually reserved for use by top-level OCG members only. Since then, encrypted devices have become a common accessory for criminals networking within their groups.

Unfortunately for the organised criminals, UK law enforcement accessed the EncroChat data through a lawful hack, with the resulting illicit communications provided by the NCA. This has enabled police to secure the conviction of McCormick and other criminals.

Yesterday the court heard that Joseph McCormick played a leading role, regularly communicating with several handles on the EncroChat system and referring to dropping large packages of Class A drugs at multiple safe houses located across Greater Manchester.

Further implicating himself, messages show that in May 2020, McCormick went out to collect the drugs instead of his runners, despite having been released from prison less than a year earlier for previous drug offences.

During the initial questioning, he was asked why he continued to commit similar offences following his previous 80-month sentence in 2015, despite knowing their illegality, to which he answered, "no comment."

Detective Constable Chris Cotton of the Challenger South City of Manchester team said: "McCormick played a leading role by using encrypted communications within an organised network of criminals, and I hope that the sentencing yesterday will reassure the public that we are committed to making our communities a safer place by disrupting this type of serious criminality. We understand the scourge that drug dealing and the supply of Class A drugs bring to our communities, and we are committed in our mission to keep drugs from the streets of Greater Manchester.

"Our team worked meticulously with other agencies to piece together a timeline of McCormick's actions to bring about charges and then a conviction for his crimes. The severity of his offences should not be underestimated. The supply of drugs in our area fuels further criminality and violence across Manchester, and we are committed to disrupting these networks.

"Another important aspect of these investigations is the intelligence passed to us by members of the public, which often plays a crucial part in our work."


WhatsApp's latest campaign highlights built-in layers of protection – The Week

WhatsApp today launched the second film in its ongoing privacy campaign in India, focused on message privacy and WhatsApp's layers of protection, which come together to offer users more control and privacy over their conversations.

The film follows the launch of WhatsApp's global brand campaign earlier in August, which highlights the built-in layers of privacy protection WhatsApp has added over the years and how multiple privacy features enable people to have meaningful conversations in their most vulnerable moments.

Conceptualised by WhatsApp and directed by Jess Kohl, the film features Indian badminton star H.S. Prannoy, who was part of the team that created history by winning the Thomas Cup title in May this year. The film captures the essence of privacy and how the team's WhatsApp group, ingeniously called "It's coming home", gave them a safe and private space where they could not only strategise their game plan but also share their feelings in moments of self-doubt. The film highlights WhatsApp's privacy controls, like end-to-end encrypted video calls, and privacy settings, like last seen and hidden online presence, that promise users the privacy and security to share their most vulnerable moments and larger-than-life dreams, empowering them to live their dreams in private until they are ready to be shared with the world.

Talking about the campaign, Avinash Pant, director of marketing at Meta India, said, "WhatsApp's mission is to connect the world privately, and this campaign highlights the multiple ways we defend privacy so users can feel free and confident with their messages. Through this film, we want to celebrate our national champions who brought home the coveted Thomas Cup and demonstrate how WhatsApp provided them a safe space where they felt empowered to have private conversations even during vulnerable moments, because they knew their messages were always protected and secure, no matter where they are. We want to show people the closeness that's possible with WhatsApp's built-in layers of protection, without compromising on the assurance of privacy and personal space to live your dreams in private until they're ready to be shared with the world."

Commenting on the film, Prannoy said, "Being a part of the Thomas Cup squad was an honour, and I knew that to win the title as a team, we had to communicate as a team and go through those moments of hardship, emotional vulnerability and self-doubt together. WhatsApp was that safe space for us where we could have conversations, strategise and share our most private moments, thoughts and ambitions, away from the public eye. Every time I looked at the WhatsApp group name 'It's coming home', it gave me the confidence and fervour to make my dreams a reality. In a country that loves cricket, my dream was to make people love badminton as much as I do and to inspire the next generation of players to believe in themselves and the sport."

Over the last month, WhatsApp has launched an integrated brand campaign, including innovative print, OOH and digital activations to create awareness about the privacy features available on WhatsApp that provide users multiple layers of protection when messaging.

WhatsApp's built-in layers of protection and privacy controls include:

1. Leave groups silently: Users will be able to exit a group privately without having to notify everyone.

2. Choose who can see when you're online: For the times you want to keep your online presence private, WhatsApp is introducing the ability to select who can and can't see when you're online.

3. Screenshot blocking for view once messages: WhatsApp is enabling screenshot blocking for View Once messages for an added layer of protection. This feature is being tested and will be rolled out to users soon.

4. Default E2EE: WhatsApp always protects your personal conversations with end-to-end encryption by default (regardless of device), so that no one except your intended recipient, not even WhatsApp, can see or hear your private personal conversations.

5. Encrypted backups: WhatsApp offers the ability to back up your chat history with end-to-end encryption so it's secure and only accessible to you with a password or encryption key. No other messaging service at our scale provides this level of additional security for your messages.

6. Disappearing messages: WhatsApp's disappearing messages offer peace of mind with the ability to set durations for disappearing messages (24 hours, 7 days, or 90 days) so users can send photos and videos that disappear after they have been opened.

7. Block and report: Users can choose to stop receiving messages and calls from certain contacts by blocking them and reporting them if they are sending problematic content or spam.

8. Two-step verification: For more protection, the two-step verification feature gives users the option to set a unique six-digit PIN that must be entered when registering their phone number with WhatsApp again. This optional feature adds another layer of security to their WhatsApp account.


Sonatype and Cloud Native Computing Foundation Partner to – GlobeNewswire

Fulton, Md., Oct. 06, 2022 (GLOBE NEWSWIRE) -- Sonatype, the pioneer of software supply chain management, in partnership with the Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, has announced an inaugural virtual Security Slam event to help improve CNCF projects' security posture while raising $50,000, donated by Google, for the CNCF Diversity Scholarship Fund.

Security Slam is a virtual event aimed at improving the security posture of all CNCF open source projects. This new event will use CNCF's automated CLOMonitor, which measures project security, enabling maintainers and contributors to work together and improve participating projects' overall security. Every CNCF project that reaches 100% Security status will win prizes for its top participating maintainers and contributors, including free Linux Foundation courses, gift cards to the CNCF online store, and more.

"From our ongoing stewardship of Maven Central to the creation of our free developer solutions like OSS Index, Sonatype has a long history of supporting the open source community," says Brian Fox, co-founder and CTO of Sonatype. "We are ecstatic to partner with CNCF and Google on this event to improve CNCF projects' security, while raising funds that can help expand our community to include more individuals from historically underrepresented groups."

Additionally, the top overall contributor will win free airfare and hotel to the next KubeCon + CloudNativeCon, courtesy of the Open Source Travel Fund by Community Classroom. Plus, for every project that achieves 100% Security, Google will donate $2,500 to CNCF's Diversity Scholarship Fund, which helps underrepresented individuals become valuable members of the CNCF community. The event will culminate at KubeCon + CloudNativeCon North America 2022 in Detroit, where winners will be announced October 24-26, 2022.

"We're thrilled to be putting on this event that will help our projects become even more secure, while garnering the largest donation we've ever received for the CNCF Diversity Scholarship Fund and giving prizes to our valued contributors and maintainers," said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation.

To learn more about the Security Slam, visit community.cncf.io/cloud-native-security-slam/.

Open source maintainers can sign their project up for participation here, and open source contributors can sign up to participate here.

About Sonatype

Sonatype is the software supply chain management company. We empower developers and security professionals with intelligent tools to innovate more securely at scale. Our platform addresses every element of an organization's entire software development life cycle, including third-party open source code, first-party source code and containerized code. Sonatype identifies critical security vulnerabilities and code quality issues and reports results directly to developers when they can most effectively fix them. This helps organizations develop consistently high-quality, secure software which fully meets their business needs and those of their end-customers and partners. More than 2,000 organizations, including 70% of the Fortune 100, and 15 million software developers already rely on our tools and guidance to help them deliver and maintain exceptional and secure software.


How Citrix dropped the ball on Xen … according to Citrix – The Register

Open Source Summit What's the difference between the Citrix Hypervisor and Xen? Well, one has quite a big crowd of upset current and former community members.

One of the more interesting talks at the Open Source Summit was from Jonathan Headland, software development manager at Citrix, with the unusual title "How to Disengage the Open-Source Community: The Citrix Hypervisor Experience." Given all the usual fist-pumping so many companies' marketing teams like to engage in, especially at an event like the Open Source Summit, The Reg FOSS desk was intrigued.

Among other things, these days Citrix offers the Citrix Hypervisor, the product formerly known as XenServer, which it has owned since it acquired XenSource in 2007. The focus of Headland's talk [PDF] was how Citrix mismanaged the relationships between its commercial version of XenServer, the free open source version, and both its upstream and its user community. His opening line was:

He went on to carefully itemize the mistakes the company made, and the four lessons he suggests for others to avoid doing the same.

Citrix originally offered XenServer under the "freemium" model: one product was free, the other commercial for enterprise users. Only paying customers received maintenance. The model was successful, and the revenue funded a full-time team of eight engineers and a community manager who worked on the upstream project.

According to Headland, Citrix's first significant misstep was in 2011, when it decided to open source the full feature set of the enterprise product, with revenue to be made from support. The goal, he said, was to get more community buy-in, but the company learned some tough lessons: "We had a very poor understanding of our customers and what they'd actually pay for. Customers happily took the new features, but it turned out that they weren't so keen to pay for the maintenance."

The result was crashes: in revenue, in the reputations of the people who made the decisions, and in that of open source itself within the company.

Another problem was that when Citrix gave away the source code to the enterprise product, it didn't provide the accompanying tooling. "Even if we had remembered, or thought, to make all these tools available, we'd still have needed to teach people how to use them."

The result was disappointing Xen enthusiasts, and "rather than increasing contributions, it inhibited them."

In 2017, in "an atmosphere of mistrust of open source projects, and with the feeling that many of its customers were free-riding and not making any contributions," the company announced a change of direction. Product management reintroduced limits, cutting or reducing the features available for free, some to lower than the free product had back in 2010. For example, the number of hosts in a server cluster went from 64 down to just three.

However, this hit one particular sub-community of free users, which one engineer Headland interviewed called the "weird systems users": hobbyists who offer virtualization to non-profit and charity users, using old, out-of-maintenance hardware that had been inherited or passed on to them. These were enthusiast users with no funds to buy licenses, but they had been among the most important finders and fixers of bugs. Unable to use the free version any more, they were forced to move to other products or create them.

The result was a whole new product: XCP-ng. "We did regain revenue from our paying customers, the big ones, the enterprises who were never going to make contributions, but we lost the ones who were keeping the project alive."

Headland's talk ended with some confessions, and the four lessons he felt were the most important things for any company selling products based on open source.

He said that Citrix misunderstood and underestimated the breadth and richness of the community, and the number and types of stakeholders in it. That it hadn't identified the most important members, and didn't do enough to support even the ones it did know about. It also never thought about working with the other "commercial peer contributors to Xen: Red Hat, Oracle, SUSE, and Amazon, to name a few."

His takeaways?

It is both fascinating and wonderful to see such openness and honesty from any commercial entity. Even before Citrix changed the product's name, Xen wasn't the most well-known commercial hypervisor: that's always been VMware, the company that pretty much created the industry. But as the references to Amazon and EC2 hint, Xen has some very big users and is a more important competitor in the space than you might think.

Whether Citrix's candor is going to win it more trust is uncertain, but it's an astonishingly big olive branch, and we applaud it.


Red Hat CEO on OpenShift roadmap, competitive play – ComputerWeekly.com

Red Hat, the open source juggernaut known for its enterprise-grade Linux distribution and OpenShift container application platform in more recent years, undertook a leadership change in July 2022, when it appointed Matt Hicks as president and CEO.

Hicks, who previously served as Red Hat's executive vice-president of products and technologies, took over the top job from Paul Cormier, who will serve as chairman of the company.

In a wide-ranging interview with Computer Weekly in Asia-Pacific (APAC), the newly minted CEO said he hopes to continue building on Red Hat's core open source model and tap new opportunities in edge computing, with OpenShift as the underlying technology platform.

Having taken over as Red Hat CEO recently, could you tell us more about how you'd like to take the company forward?

Hicks: I've been at Red Hat for a long time, and what drew me to Red Hat was its core open source model, which is very unique and empowering. I distil it down to two fundamental things. One, we genuinely want to innovate and evolve on the shoulders of giants, because there are thousands of creative minds across the world who are building and contributing to the products that we refine.

The second piece is that customers also have access to the code, and they understand what we're doing. They can see our roadmaps, and our ability to innovate and co-create with them is unique. Those two things go back a long time and make us special. For me, that's the core mentality we want to hold on to at Red Hat, because that's what differentiates us in the industry.

In terms of where we want to go with that open source model, we've talked about the open hybrid cloud for quite a while, because we think customers are going to get the best of being able to run what they have today as well as where they want to be tomorrow. We want to help customers be productive in the cloud and on-premise, and use the best that those environments offer, whether it's from regional providers, hyperscalers, or specialised hardware. We see hybrid cloud as a huge, trillion-dollar opportunity, with just 25% of workloads having moved to the cloud today.

Potentially, there are even more exciting opportunities with the extension to edge. We're seeing this accelerate with technologies such as 5G, where you still need computing reach to move workloads closer to users while pushing technologies like AI [artificial intelligence] to the point of interaction with users.

So, it's going from the on-premise excellence we have today, extending that reach into the public cloud and eventually into edge use cases. That's Red Hat's three- to five-year challenge, and an opportunity we are addressing with the same strategy of open source-based innovation that we've had in the past.

"We're involved in practically every SBOM effort at this point, but when we make that final choice, we want to make sure it's the most applicable choice at the time" – Matt Hicks, Red Hat

Against the backdrop of what you've just described, what is your outlook for APAC, given that the region is very diverse, with varying maturities in adopting cloud and open source technologies?

Hicks: If we look at APAC as a market, I think the core fundamental of using software to drive digital transformation and innovation is key, and that could be for a lot of reasons. It could be controlling costs due to inflation. It could be tighter labour markets, where we need to drive automation. It could be adjusting to the Covid-19 situation, where you might not be able to access workers. For all of these reasons, we've seen the drive to software innovation in APAC, similar to other markets.

DBS Bank is a good example in Singapore. They pride themselves on driving innovation, and by using OpenShift and adopting open source and cloud technologies, they were able to cut operating costs by about 80%. But they're not just trying to cut costs; they also want to push innovation, and I think that's very similar to other customers we have across the globe.

Kasikorn Business Technology Group in Thailand has a very similar approach: they're using technologies such as OpenShift to cut development times from a month to two weeks while increasing scale. Another example is Tsingtao Alana, which is using Ansible to drive network automation and improve efficiencies.

Like other regions, the core theme of using software innovation, getting more comfortable with open source, and leveraging cloud technologies is similar in APAC. But one area where we might see an acceleration in APAC, more so than in the US, is the push to edge technologies, driven by innovation from telcos.

You spoke a lot about OpenShift, which has been a priority for Red Hat for a number of years. Moving forward, what's the balance in priorities between OpenShift and Red Hat Enterprise Linux (RHEL), which Red Hat is known for among many companies in APAC?

Hicks: It's a great question, and here's how I tend to explain the balance between OpenShift and RHEL to customers who are new to it.

The core innovation capability that RHEL provides on a single server is still the foundation that we build on. It's done really well for decades at providing that link to open source innovation in the operating system space. I call it the Rosetta Stone between development and hardware, and being able to get the most out of that is what we aspire to do with RHEL.

That said, if you look at what modern applications need (and I've been in this space for more than 20 years), they far exceed the resources of a single computer today. And in many cases, they far exceed the resources of a dozen, 100, or 1,000 computers. OpenShift is like going from a single bee to a swarm of bees: it gives you all the innovation in RHEL and lets you operate hundreds or thousands of those machines as a single unit so you can build a new class of applications.

RHEL is part and parcel of OpenShift, but it's not a single-server model anymore. It's that distributed computing model. For me, that's exciting because I started my open source journey with Linux and then with RHEL when I was in consulting. Since then, the power of RHEL has expanded across datacentres and helps you drive some incredible innovation. That's why the pull to OpenShift doesn't really change our investment footprint, as RHEL offers a great model to leverage all of those servers more efficiently.

Could you dive deeper into the product roadmap for OpenShift? Over the years, OpenShift has been building up more capabilities, including software as a service (SaaS) for data science, for example. Are we expecting more SaaS applications in the future?

Hicks: When we think about OpenShift, or platforms in general, we try to focus on the types of workloads that customers are using with them and how we can help make that work easier.

One of the popular trends is AI-based workloads, and that comes down to the training aspects, which require GPU rather than CPU acceleration. Being able to take trained models and incorporate them into traditional development is something that companies struggle with. So, getting your Nvidia GPUs to work with your stack, and then getting your data scientists and developers working together, is our goal with OpenShift Data Science.

We know hardware enablement, we have a great platform to leverage both training and deployment, and we know developers and data scientists, so that MLOps space is a very natural fit. What you will see more from us in the portfolio is what we call the operating model, where for decades, the prevalent model in the industry was having customers run their own software supplied and supported by us.

The public cloud has changed some of the expectations around that. While there's still going to be a ton of software run by customers, they are also increasingly leveraging managed platforms and cloud services. Once we know the workloads that we need to get to, we will try to offer that in multiple models where customers can run the software themselves if they have a unique use case.

But at the same time, we want to improve our ability to run that software for them. One area where you'll see a lot of innovation is managed services, in addition to the software and edge components.


If you look at telcos, for example, they run big datacentres with lots of layers in between where the technology stack gets smaller and smaller. They also have embedded devices, which may have RHEL on them even if they are running containers. In the middle, we're seeing a pull for OpenShift to get smaller and smaller. You can think of it as the telephone pole use case for 5G, or maybe it's closer to the metropolitan base station that runs MicroShift, a flavour of OpenShift optimised for the device edge.

That ability to run OpenShift on lightweight hardware is key, as edge devices don't have the same power and compute capabilities of a datacentre. So, those areas, coupled with specific use cases like AI or distributed networking-based applications, are where you'll see a lot of the innovation around OpenShift.

Red Hat has done some security work in OpenShift to support DevSecOps processes. I understand that currently there isn't any kind of software bill of materials (SBOM) capability embedded in OpenShift. What are your thoughts around that?

Hicks: If we picked one of the most important security trends that we try to cater to, it is understanding your supply chain and being confident in the security of it. Arguably, this is what we do: we take open source, where you might not have that understanding of its provenance or the expertise to understand it, and add a layer of provenance so you know where it's coming from.

I would argue that for the past 20 years, whether it was the driving decision or not, you are subscribing to security in your supply chain if you are a Red Hat customer. And we're excited about efforts around how you build that bill of materials when you're not only running Red Hat software but also combining Red Hat software with other things.

There are a few different approaches, and this is always Red Hat's challenge: when we make a bet, we have to stick with it for a while. We're involved in practically every SBOM effort at this point, but when we make that final choice, we want to make sure it's the most applicable choice at the time.

So, while we haven't pulled the trigger on a single approach or said what we will support, the core foundation behind SBOM is absolutely critical and we invest a lot there. We're excited about this, and honestly, before the SolarWinds incident, this was an area that was overlooked as a risk of consuming software that you don't understand.

With open source continuing to drive innovation, I think it's critical for customers to understand where they're getting that open source code from, whether it's tied to suppliers or whether they're responsible for understanding it themselves. But we haven't made that final call on the SBOM format to support right now. I fully expect, in the next year or so, that we start to converge as an industry on a couple of approaches.

What are your thoughts on the competitive landscape, particularly around VMware with its Tanzu Application Platform?

Hicks: It's really about choosing the right technology architecture to get the most out of hybrid cloud. About a year ago, most customers were drawn to a single public cloud, and that trend was certainly strong, at least in the US and Europe, for a variety of reasons.

I think enterprises have realised that they might still have that desire, but it's not practical for them. They're going to end up in multiple public clouds, maybe through acquisition or geopolitical challenges. And your on-premise environments, whether it's mainframe technology or others, are not going away quickly. The need for hybrid has therefore become much more recognised today than it was even a year or two ago.

The second piece on that is, what is the technology platform that enterprises are going to leverage to build and structure their application footprint for hybrid? VMware certainly has their traditional investment in virtualisation and the topology around that.

We at Red Hat, along with IBM, have put our bet on containers. VMware, I think, has tried or was sort of a late entrant to that party around Tanzu. For us, our core is innovation in Linux, which is an extension to containers. We're pretty comfortable with that and we see a lot of traction because all the hyperscalers have adopted that model.

Personally, I think we have a great position on a technology that lets customers leverage public clouds natively and get the most out of their on-premise environments. I don't know if virtualisation will have the same reach and flexibility of being able to run on the International Space Station as well as power DBS Bank's financial transactions, as containers do.

VMware, I think, will be more drawn to their core strength in virtualisation, but we still have 75% of workloads remaining that have yet to move, so we'll see how that really shakes out. But I'm pretty comfortable with the containers and OpenShift bet on our side.

Red Hat has a strategic partnership with Nutanix to deliver open hybrid cloud offerings. In light of the uncertainty around Broadcom's acquisition of VMware, are you seeing more interest from VMware customers?

Hicks: Acquisitions are tricky and it's hard to predict the outcome of an acquisition like that. What I would say is that we partner pretty deeply with VMware today, as virtualisation still provides a good operating model for containers. I would expect us to partner with VMware as part of Broadcom.

That said, there's a bit of uncertainty in an area like this, and it does create a decision point around architecture. We're neutral to that because for us, if customers choose to stay on that core vSphere base, we will continue to serve them, even if containers are their technology going forward.

We also partner closely with companies like Nutanix, which will compete at that core layer. For us, we really run on the infrastructure tier, and we want to let customers run applications whether they are on Nutanix, vSphere or Amazon EC2.

We don't really care too much where that substrate lies. We want to make sure we serve customers at that decision point, and I think we have a lot of options to deliver to customers regardless of how the acquisition ends or how the landscape changes with other partners.

Continued here:

Red Hat CEO on OpenShift roadmap, competitive play - ComputerWeekly.com

[Interview] Next Generation Connected Experiences: Experts Share the Story Behind Tizen’s 10 Year Development – Samsung

On October 12, Samsung Electronics will host the Samsung Developer Conference 2022 (SDC 2022) in the U.S. Through this year's SDC, Samsung will showcase its latest updates that seek to create even smarter user experiences by intuitively and organically connecting various devices. As Samsung provides upgraded, next-generation connected experiences, the role of the operating system (OS) has become even more important.

Samsung recognized the importance of OS early on and subsequently began research and development. In April of 2012, Samsung unveiled the first version of Tizen, a Linux-based open-source platform. 10 years later, at this year's SDC, Samsung will unveil its new vision for Tizen 7.0.

The team behind the research and development of Tizen OS at Samsung Research: VP Jinmin Chung, Head of the Platform Team (center), and researchers Seonah Moon of the MDE Lab (left) and Jin Yoon of the Tizen Platform Lab (right)

Since the first version of Tizen was released, much time has passed and Tizen has evolved in a variety of ways. To learn the details behind the development of Tizen, Samsung Newsroom met with Samsung Research's Vice President Jinmin Chung and researchers Jin Yoon and Seonah Moon, who have been working on Tizen since the beginning.

Tizen is a Linux-based open-source platform led by Samsung that supports all types of smart devices. With the aim of being utilized in various types of Samsung products to support smooth product operation, Tizen had been equipped in about 330 million smart devices as of the end of 2021.

We needed Tizen to differentiate Samsung's devices from others and to provide a different service and user experience, VP Chung said. It's already been 10 years since Tizen was first developed. We experienced challenges in the initial development stage, but we felt supported by the people who believed in the possibility and usability of Tizen and rooted for us. We focused on the research proudly knowing that we were leading Samsung's own independent OS development project, he added.

In 2014, Tizen was equipped for the first time in Samsung's Gear 2, a wearable device, proving its viability through commercialization. Furthermore, a year later, Tizen was used in the 2015 Samsung Smart TV product lineup, which set a new bar for smart TVs.

Tizen has many advantages that enable it to offer the highest performance across Samsung's devices. First, Tizen's flexibility allows it to be easily applied to a variety of smart devices. In order to make this possible, Tizen went through multiple platform improvement processes. Several profiles were established based on the different types of products. Then, Tizen Common, which is the common module for all products, and the Specialized Module, which is needed for certain products only, were created. The structure is designed in a way that allows the platform to be quickly modified and applied to new products as well. This enables Tizen to be utilized in a wide range of products, including smart TVs, refrigerators and air conditioners.
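The common-plus-specialized module structure described above can be illustrated with a toy sketch. This is purely illustrative: the module names and the compose-a-profile function are my own invention, not Samsung's actual build system.

```python
# Illustrative sketch of a "common module + specialized modules" platform
# structure like the one Tizen is described as using. All module and
# profile names here are invented for illustration.

COMMON_MODULE = {"kernel", "window-manager", "app-framework"}

SPECIALIZED_MODULES = {
    "tv": {"broadcast-stack", "remote-input"},
    "refrigerator": {"inventory-ui", "low-power-display"},
}

def build_profile(device_type: str) -> set:
    """Every device profile starts from the shared common module and
    adds only the modules that product category needs."""
    return COMMON_MODULE | SPECIALIZED_MODULES.get(device_type, set())

print(sorted(build_profile("tv")))
```

The design point is that adding a new product category only means writing its specialized module; the common core is reused unchanged.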

Additionally, Samsung utilized its advanced know-how and experience in commercializing embedded system software when developing Tizen. The Tizen platform is optimized to perform well while using minimal memory and low power. It's an open-source platform that can be used by anyone, and it supports optimized performance for immediate commercialization.

Tizen is also convenient for new product development because it is Samsung's own independent OS. The platform can be modified as desired to add new features and services to products in a timely manner.

Across the world, only a handful of companies own their own independent OS, Chung said. The fact that Samsung has its own OS called Tizen means that Samsung has become a company proficient in developing not only hardware but software as well, he emphasized.

Many developers put much effort into the development and evolution of Tizen.

Researcher Jin Yoon has been participating in the Tizen project since its early stages, meaning he has witnessed the growth of Tizen firsthand. Starting with Smart TVs, the applications for Tizen are gradually increasing, and the system is evolving and advancing further, Yoon said. In addition, the implementation of a new development language, framework and infrastructure makes development more convenient and increases the productivity of developers. Now, we're working hard to secure usability that is appropriate for each product group that uses Tizen, he continued.

The code sources of Tizen are very stable because they've gone through actual commercialization. On top of that, they come with performance-specific details and security as well. This means third-party developers can trust and use these sources, Yoon said.

Expanding the application of Tizen to more devices and creating an ecosystem for Tizen is important for improving the usability of a product, but active participation in open-source communities is also crucial. This is because open-source communities enable community members to share problems and come up with solutions together, directly contributing to the improvement of software. In order to manage this, Seonah Moon from the MDE Lab, who's been developing Tizen for eight years, is responsible for tasks involving open-source maintenance. Countless open-source components were also used in Tizen's development. Moon monitors each one, analyzes its errors and shares her opinions on them to help outside developers access Tizen more easily.

Tizen is more than just an OS for Samsung's developers and researchers.

Since the platform's development requires constant maintenance, developers must continue to hone their skills. Tizen motivated each member of the development team to continue learning and improving their software skills, and the developers of Tizen have grown into experts specializing in different areas. Since the team's initial start, they have accumulated many platform codes over the last 10 years. They also constantly learned by voluntarily participating in study group meetings.

Tizen is like a bridge that connects all of Samsung's products together, Moon said. Cooperation among business divisions is a must for utilizing the OS in different products. Through this cooperation, we can share our development knowledge with one another and also create a new service based on our OS, Moon continued.

Tizen has continued to evolve to allow all devices around us, including wearables, TVs, refrigerators and even robot vacuum cleaners, to provide new user experiences. When asked about what the future of Tizen will look like, the developers explained their ambitions to continue connecting devices using Tizen.

We're now living in an era where everything is connected through the Internet of Things (IoT), said Yoon. By increasing the productivity of Tizen app development, I'd like to provide an innovative user experience where all Samsung products are connected to one another, creating an interconnected product ecosystem, he explained.

I'd like to lead the efforts in expanding Tizen's use in a variety of ways by discovering new scenarios and utilizing even more advanced technologies, said Moon when explaining her ambitions. By growing together with Tizen, I'd like to become the maintainer or the key contributor to the open-source project, she continued.

I dream of a future in which Tizen is equipped in all the devices that people use in their daily lives, enabling various devices to operate organically as if they're one and providing intelligent services, like the metaverse, Chung said. To enable this, Samsung Research is developing various technologies with many teams in order to manifest a future in which various Tizen devices are all connected through the OS, providing a Multi-Device Experience (MDE), modular AI and more.

The infinite possibilities of Tizen will be showcased at SDC 2022 this year. At the conference, Samsung Research will share how easy it is to make new devices based on the flexibility of Tizen 7.0, and how the new version of Tizen can strengthen intelligent services. As it continues to evolve in line with the era of hyper-connectivity and intelligence, the future of Tizen is bright and its applications are limitless.

View original post here:

[Interview] Next Generation Connected Experiences: Experts Share the Story Behind Tizen's 10 Year Development - Samsung

First Hand analysis: a good open source demo for hand-based interactions – The Ghost Howls

I finally managed (with some delay) to find the time to try First Hand, Meta's open-source demo of the Interaction SDK, which shows how to properly develop hand-tracked applications. I've tested it for you, and I want to tell you my opinions about it, both as a user and as a developer.

First Hand is a small application that Meta has developed and released on App Lab. It is not a commercial app, but an experience developed as a showcase of the Interaction SDK inside the Presence Platform of Meta Quest 2.

It is called First Hand because it has been roughly inspired by Oculus First Contact, the showcase demo for Oculus Touch controller released for the Rift CV1. Actually, it is just a very vague inspiration, because the experiences are totally different, and the only similarity is the presence of a cute robot.

The experience is open source, so developers wanting to implement hand-based interactions in their applications can literally copy-paste the source code from this sample.

Here follows my opinion on this application.

As a user, I've found First Hand a cute and relaxing experience. There is no plot, no challenge, nothing to worry about. I just had to use my bare hands and interact with objects in a natural and satisfying way. The only problem for me is that it was really short: in 10 minutes, it was already finished.

I expected a clone of the Oculus First Contact experience that made me discover the wonders of Oculus Touch controllers, but actually, it was a totally different experience. The only similarities are the presence of a cute little robot and of some targets to shoot. This makes sense considering that controllers shine with different kinds of interactions than the bare hands, so the applications must be different. Anyway, the purpose and the positive mood were similar, and this is enough to justify the similar name.

As I've said, in First Hand there is not a real plot. You find yourself in a clock tower, and a package gets delivered to you. Inside there are some gloves (which are a mix between Alyx's and Thanos's ones) which you can assemble and then use to play a minigame with a robot. Over.

What is important here is not the plot, but the interactions that you perform in the application in a natural way using just your bare hands. And some of them are really cool, like for instance:

There were many other interactions, with some of them being simple but very natural and well made. What is incredible about this demo is that all the interactions are very polished and designed to be ideal for the user.

For instance, pressing buttons just requires tapping on them with the index finger, but the detection is very reliable, and the button never gets trespassed by the fingertips, making it feel well made and realistic.

Shooting with the palm alone would be very unreliable because you can't take good aim with your palm, so the palm shoots a laser for a few seconds, giving you time to adjust its aim until you hit the object you want to destroy.

And force-grabbing is smart enough to work on a cone of vision centered on your palm. When you stretch your palm, the system detects the closest object in the area you are aiming at with your hand, and automatically shows you a curved line that goes from your palm to it. This is smart for two reasons:

All the interactions are polished to cope with the unreliability of hand tracking. I loved how things were designed, and I'm sure that Meta's UX designers ran many experiments before arriving at this final version. I think the power of this experience as a showcase is given by this level of polish, so that developers and designers can take inspiration from it to create their own experiences.
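The cone-based force-grab selection described above can be sketched roughly as follows. This is a hedged sketch of the general technique, not Meta's actual code: the function names, the vector math, and the 25-degree cone half-angle are all my own assumptions.

```python
# Sketch of cone-based target selection for a "force grab":
# pick the closest object whose direction from the palm falls
# inside a cone centered on the palm's aiming direction.
import math

def pick_target(palm_pos, palm_dir, objects, cone_half_angle_deg=25.0):
    """palm_dir is assumed to be a unit vector. Returns the name of the
    closest object inside the aiming cone, or None if nothing qualifies."""
    cos_limit = math.cos(math.radians(cone_half_angle_deg))
    best, best_dist = None, float("inf")
    for name, pos in objects.items():
        offset = [p - q for p, q in zip(pos, palm_pos)]
        dist = math.dist(pos, palm_pos)
        if dist == 0:
            continue
        # Cosine of the angle between the aim direction and the object.
        cos_angle = sum(d * o for d, o in zip(palm_dir, offset)) / dist
        if cos_angle >= cos_limit and dist < best_dist:
            best, best_dist = name, dist
    return best

targets = {"cube": (0.1, 0.0, 2.0), "ball": (2.0, 0.0, 0.5)}
print(pick_target((0, 0, 0), (0, 0, 1), targets))  # cube: inside the forward cone
```

Using a cone rather than a thin ray is what makes the interaction forgiving: slightly shaky aim still selects the intended object.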

As I've told you, as a user I found the experience cute and relaxing. But at the same time, I was quite frustrated by hand tracking. I used the experience both with artificial and natural light, and in both scenarios I had issues, with my hands losing tracking a lot of times. It's very strange, because I found tracking more unreliable than in other experiences like Hand Physics Lab. This made my experience a bit frustrating: for instance, when I went to grab the first object to take the remote and activate the elevator, lots of times my virtual hand froze when I was close to it, and it took me many tries before being able to actually grab it.

I also noticed more how hand tracking can be imprecise: Meta did a great job in masking its problems by creating a smart UX, but it is true that these smart tricks described above were necessary because hand tracking is still unreliable. Furthermore, not having triggers to press, you have to invent other kinds of interactions to activate objects, which may be more complex. Controllers are much more precise than hands, and so for instance shooting would have been much easier with the controllers, exactly as force grab.

This showed me that hand tracking is not very reliable and can't be the primary controlling method for most VR experiences, yet.

But at the same time, when things worked, performing actions with the hands felt much better. I don't know how to describe it: it is like when using controllers, my brain knows that I have a tool in my hands, while just using my bare hands, it feels more like it's truly me making that virtual action. After I assembled the Thanos gloves, I watched my hands with them on, and I had this weird sensation that those gloves were on my real hands. It never happened to me with controllers. Other interactions were satisfying to me, like for instance scrolling the Minority Report screen, or force grabbing. Force grabbing is really well implemented here, and it felt very fun to do. All of this showed me that hand tracking may be unreliable, but when it works, it gives you a totally different level of immersion than controllers. For sure, in the future hand tracking will be very important for XR experiences, especially the ones that don't require you to have tools in your hands (e.g. a gun in FPS games).

Since the experience is just 10 minutes long, I suggest everyone give it a try on App Lab, to better understand these sensations I am talking about.

The Unity project of First Hand is available inside this GitHub repo. If you are an Unreal Engine guy, well, I'm sorry for you.

The project is very well organized inside folders, and the scene has a very tidy tree of gameobjects. This is clearly a project made with production quality, something that doesn't surprise me because this is a common characteristic of all samples released by Meta.

Since the Interaction SDK can be quite tricky, my suggestion is to read its documentation first, and only afterwards check out the sample. Or have a look at the sample while reading the documentation, because some parts are not easily understandable just by looking at the sample.

Since the demo is full of many different interactions, the sample provides the source code for all of them, and this is very precious if you, as a developer, want to create a hand-tracked application. You just copy a few prefabs, and you can use them in your experience. Kudos to Meta for making this material available to the community.

I've not used the Interaction SDK in one of my projects yet, so this sample was a good occasion for me to see how easy it is to implement.

My impression is that the Interaction SDK is very modular and very powerful, but also not that easy to employ. For instance, if you want to generate a cube every time the user does a thumbs-up with either hand, you have to:

As you can see, the structure is very tidy and modular, but it is also quite heavy. Creating a UI button that generates the cube when you point-and-click with the controllers is much easier. And talking about buttons, to have those fancy buttons you can poke with your fingertip, you need various scripts and various child gameobjects, too. The structure is very flexible and customizable, but it also seems quite complicated to master.
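The modular detector-to-action wiring the SDK encourages can be illustrated with a toy example. To be clear, every name below is invented for illustration; these are not Meta's actual components, just the general "pose detector fires an event, an action component listens" pattern expressed in plain Python.

```python
# Toy illustration of the detector -> listener -> action pattern used by
# gesture-driven interaction frameworks. Names are invented, not Meta's.

class ThumbsUpDetector:
    """Stands in for a pose-recognition component watching one hand."""
    def __init__(self):
        self.listeners = []

    def on_detected(self, callback):
        """Register an action to run when the pose is recognized."""
        self.listeners.append(callback)

    def feed(self, pose: str):
        """Simulates a frame of hand-tracking input."""
        if pose == "thumbs_up":
            for cb in self.listeners:
                cb()

spawned = []
detector = ThumbsUpDetector()
detector.on_detected(lambda: spawned.append("cube"))  # the "spawn a cube" action

detector.feed("fist")        # no reaction
detector.feed("thumbs_up")   # spawns a cube
print(spawned)  # ['cube']
```

The indirection (detector, event, separate action) is exactly what makes the real SDK flexible but heavy: even this trivial behavior needs three cooperating pieces.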

For this reason, it is very good that there is a sample that gives developers prefabs ready out of the box that can be copied and pasted, because I guess that developing all of this from scratch can be quite frustrating.

The Interaction SDK, which is part of the Presence Platform of the Meta Quest, is interesting. But sometimes I wonder: what is the point of having such a detailed Oculus/Meta SDK if it works on only one platform? What if someone wants to build a cross-platform experience?

I mean, if I base all my experience on these Oculus classes, and this fantastic SDK with fantastic samples, how can I port all of this to Pico? Most likely, I cannot. So I have to rewrite the whole application, maybe managing the two versions in two different branches of a Git repository, with all the management hell that comes out of it.

This is why I am usually not that excited about updates to the Oculus SDK: I want my applications to be cross-platform and work on the Quest, Pico, and Vive Focus, and committing to only one SDK is a problem. For this reason, I'm a big fan of Unity.XR and the Unity XR Interaction Toolkit, even if these tools are much rougher and less mature than the Oculus SDK or even the old Steam Unity plugin.

Probably OpenXR may save us, letting us develop something with the Oculus SDK and make it run on the Pico 4, too. But for now, the OpenXR implementation in Unity is not very polished yet, so developers still have to choose between going for only one platform with a very powerful SDK and going cross-platform with rougher tools or third-party plugins, like for instance AutoHand, which works well in giving physics-based interactions in Unity. I hope the situation will improve in the future.

I hope you liked this deep dive into First Hand, and if so, please subscribe to my newsletter and share this post on your social media channels!

(Header image by Meta)

Read the original here:

First Hand analysis: a good open source demo for hand-based interactions - The Ghost Howls

How the blockchain helps a whisky and rum producer protect his brand – Fortune

Standing in a vineyard in the Alsace region of France, after the three-day-long Whisky Live Paris conference, longtime master distiller Mark Reynier wanted to discuss something else: the blockchain.

Although combining the age-old craft of distilling with such comparatively nascent technology may seem odd to some, Reynier, the CEO and founder of Waterford Whisky Distillery and of Renegade Rum Distillery, is no stranger to unconventional ideas.

Well known in the industry for helping revive the shuttered Bruichladdich Distillery, located on the isle of Islay in Scotland, Reynier helped pioneer applying the concept of terroir (or teireoir, the term the company trademarked that includes the Irish Gaelic word for Ireland) to whisky.

Terroir has been used in wine production for millennia, with winemakers obsessing over how environmental factors like microclimate, soil, and topography interact to create a wine's flavor profile. But the practice typically hadn't been applied to whiskey or rum, which are mostly mass produced by large corporations like Paris-based Pernod Ricard, which controls 80% of the global Irish whiskey market.

Using terroir to produce alcohol means rejecting the homogenization of industrial distillation or industrial manufacture, and extolling the virtues of going au naturel, Reynier told Fortune.

Through a proprietary blockchain-enabled system called ProTrace that validates their record-keeping system for manufacturing, Waterford and Renegade Rum are proving the effectiveness of terroir for spirits and presenting the details in digital form, tracking and compiling every step of the growing and distilling process. Cian Dirrane, the group head of technology for both distilleries, said he worked with his team to create ProTrace as a custom blockchain after researching open source code on GitHub, and it was implemented in 2019.

On the back of every bottle of whisky or rum distilled by Reynier's companies is a nine-digit code that customers can enter online to reveal myriad details. Although the company could have used another technology or system to accomplish the same goal, Dirrane said using the blockchain ensures the recorded data can't be changed.

It's not just marketing bullshit, Reynier added. It's a validation, as well as a proof of concept.

For one such bottle of whisky from Waterford Distillery, part of a bottling of 21,000, the company reported the names of the growers and when they harvested the grain, when the product was distilled and bottled, and how long the bottle's contents had matured, down to the day.

To make the data more visual, along with the processing information collected by Waterford's blockchain system, each bottle's unique report includes a map with the location of the farm where the barley was grown, a video of the field and the farmers, and ambient sounds.

It's really a counter to the nonsense that's spouted around the world by different sales guys and brand owners or whatever, Reynier said. Our process is so specific. And because we're small guys in a world of multinational companies, I have to be able to verify what I say.

The blockchain verification involves more than 800 validation points spanning from when the grain is received by the distillery to the end of distillation when the spirit is put in casks, according to Dirrane. These validation points include the amount of malt brought to the distillery by trucks and the temperatures reached when the fermented liquid is heated into vapor and condensed back into a liquid.

Every data point is validated and logged on the digital ledger, which can't be tampered with, Dirrane said.

The whole production (from the raw product intake to the distillation process to the casking and aging) and then the finished product is all on the blockchain, Dirrane said. If anybody wants to validate externally, they can see all the processes that happened.
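The tamper-evidence Dirrane describes comes from the basic mechanics of an append-only hash chain: each record embeds the hash of the one before it, so changing any logged value breaks every later link. The sketch below shows that generic mechanism only; it is not ProTrace's actual implementation, and the field names are invented.

```python
# Minimal sketch of a tamper-evident, append-only ledger (a generic
# hash chain, not the distillery's actual ProTrace code).
import hashlib
import json

def append_entry(ledger, data):
    """Append a record whose hash covers both its data and the previous hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    ledger.append({
        "data": data,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(ledger):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for rec in ledger:
        payload = json.dumps({"data": rec["data"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

ledger = []
append_entry(ledger, {"step": "malt-intake", "kg": 12000})
append_entry(ledger, {"step": "distillation", "temp_c": 78.4})
print(verify(ledger))            # True
ledger[0]["data"]["kg"] = 9000   # tamper with a logged value
print(verify(ledger))            # False
```

This is why external parties can audit the record: re-running the verification exposes any after-the-fact edit to the 800-plus validation points.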

Those processes do add to production costs. A typical bottle of whisky from Waterford Distillery may cost $80 to $120, whereas a bottle of Jameson, a well-known Irish whisky brand produced by Pernod Ricard, may retail for just $25. Renegade Rum's first mature bottling will be released by the end of the month, and one of the first bottles will likely cost about $55, compared with a $20 price tag on a bottle of Captain Morgan from London-based Diageo.

But Waterford Distillery and Renegade Rum could soon have some company. Dirrane and his team, instead of keeping the tech to themselves, have written a white paper and are planning to make the code and ledger open source, and publish it online next year. Dirrane said that in addition to Reynier's commitment to transparency, as a software engineer, he's eager to see the program reviewed by peers.

Public or not, the blockchain has been essential for Reynier in an industry sometimes known for obfuscation.

"This is taking on a completely Wild West drink sector," Reynier said, "and trying to establish and verify my way of doing it so everybody can see the traceability and the transparency."

More here:

How the blockchain helps a whisky and rum producer protect his brand - Fortune

Google Chrome is the most vulnerable browser in 2022 – General Discussion Discussions on AppleInsider Forums – AppleInsider

New data reveals that Google Chrome users need to be careful when browsing the web, but Safari users don't get off scot-free.

According to a report by Atlas VPN on Wednesday, Google Chrome is the most vulnerable browser on the market. So far in 2022, the browser has had 303 vulnerabilities, bringing its cumulative total to 3,159.

These figures are based on data from the VulDB vulnerability database, covering January 1, 2022 to October 5, 2022.

Google Chrome is the only browser with new vulnerabilities in the first five days of October. Recent ones include CVE-2022-3318, CVE-2022-3314, CVE-2022-3311, CVE-2022-3309, and CVE-2022-3307.

The CVE program tracks security flaws and vulnerabilities across multiple platforms. The database doesn't list details for these flaws yet, but Atlas VPN says they can lead to memory corruption on a computer.

Users can fix these by updating to Google Chrome version 106.0.5249.61.

Mozilla's Firefox browser is in second place for vulnerabilities, with 117 of them. Microsoft Edge had 103 vulnerabilities as of October 5, 61% more than the entire year of 2021. Overall, it has had 806 vulnerabilities since its release.

Next is Safari, which has some of the lowest vulnerability counts. In the first three quarters of 2022, it had 26 vulnerabilities, and its cumulative total since release is 1,139.

Meanwhile, the Opera browser has had no documented vulnerabilities so far in 2022 and only 344 in total.

Google Chrome, Microsoft Edge, and Opera all share the Chromium browser engine. Vulnerabilities in Chromium may affect all three browsers.

The Chromium open-source project generates the source code used by all Chromium-based browsers. Not every flaw affects all of them, however, because each company builds its browser differently.

As of May 2022, Safari reached over a billion users, and Apple has been working hard to make sure its browser is secure and safe to use.

To stay safe on the web, people should keep their browsers updated to the latest version and be careful when downloading plug-ins and extensions, especially from lesser-known sources or developers.


Continued here:

Google Chrome is the most vulnerable browser in 2022 - General Discussion Discussions on AppleInsider Forums - AppleInsider