
Category Archives: Cloud Computing

The Metaverse Will Need 1,000x More Computing Power, Says Intel – Singularity Hub

Posted: December 19, 2021 at 6:46 pm

With land in virtual worlds selling for millions of dollars, NFTs flooding the internet, and Meta (formerly Facebook) employees saying the word metaverse over 80 times during a keynote presentation last month, it seems like the metaverse is taking off, or at least buzz around the idea of it is. Is it all hype? Does anyone really understand it? When will it get here, if it hasn't already?

According to information released this week by chipmaking giant Intel, the metaverse is on its way, but it's going to take a lot more technology than we currently have to make it a reality, and the company plans to be at the forefront of the effort.

Right now, virtual and game worlds like Second Life, The Sandbox, Roblox, or Decentraland are being conflated with early iterations of the metaverse, but hype aside, we're actually nowhere near having the technology to build a persistent 3D virtual world. And what does a persistent 3D virtual world even mean?

Neal Stephenson's book Snow Crash, published in 1992, was where the term metaverse first appeared; there, it described a 3D virtual world people could visit as avatars, which they accessed with virtual reality headsets connected to a worldwide fiber-optic network. Another well-known reference is the 2011 book and 2018 film Ready Player One.

As far as how these fictional accounts translate into real life, the simplest way to describe the metaverse is as a connected network of 3D virtual worlds that is always on and happening alongside our real-world lives. Venture capitalist and writer Matthew Ball says we can think of the metaverse as a quasi-successor state to the mobile internet, which will build on and transform the internet as we currently experience it.

"We will constantly be within the internet, rather than have access to it, and within the billions of interconnected computers around us," Ball wrote in his Metaverse Primer. Mark Zuckerberg described the metaverse similarly, calling it an even more immersive and embodied internet. Picture this: you strap on a headset or pair of goggles, flick a switch, and boom: you're still standing in your living room, but you're also walking through a 3D world as an avatar of yourself, and you can interact with other people who are doing the same thing from their living rooms.

Being constantly within the internet doesn't sound all that appealing to me personally (in fact, it sounds pretty terrible), but the good news for those with a similar sentiment is that the full vision of the metaverse, according to Ball, is still decades away, primarily because of the advances in computing power, networking, and hardware necessary to enable and support it.

In fact, according to Raja Koduri, VP of Intel's accelerated computing systems and graphics group, powering the metaverse will require a 1,000-fold improvement on the computational infrastructure we have today. "You need access to petaflops [one thousand teraflops] of computing in less than a millisecond, less than ten milliseconds for real-time uses," Koduri told Quartz. "Your PCs, your phones, your edge networks, your cell stations that have some compute, and your cloud computing need to be kind of working in conjunction like an orchestra."

Koduri pointed out in a press release this week that even just putting two people in a realistic virtual environment together requires realistic-looking avatars with detailed and unique clothing, hair, and skin. Giving these avatars real-time speech and motion capabilities means we need sensors that can pick up audio and physical data, including 3D objects in users' real-world environments. This data then needs to be transferred at high bandwidth with low latency, for hundreds of millions of users at the same time.

Intel says it's developing chips designed to help power the metaverse, and plans to release a new series of graphics processors early next year. Other key components of the company's metaverse-focused work include specialized algorithms, an architecture called Xe, and open software development tools and libraries.

It's uncertain exactly how or when the metaverse will arrive; it's a process that will take place incrementally over years or decades. Koduri, though, is highly optimistic. "We believe that the dream of providing a petaflop of compute power and a petabyte of data within a millisecond of every human on the planet is within our reach," he wrote.

Image Credit: Andrush/Shutterstock.com


What Is Cloud Computing? How Does Cloud Computing Work …

Posted: December 17, 2021 at 11:42 am

What Is Cloud Computing?

Cloud computing refers to any kind of hosted service delivered over the internet. These services often include servers, databases, software, networks, analytics, and other computing functions that can be operated through the cloud.

Files and programs stored in the cloud can be accessed anywhere by users on the service, eliminating the need to always be near physical hardware. In the past, for example, user-created documents and spreadsheets had to be saved to a physical hard drive, USB drive, or disk. Without some kind of hardware component, the files were completely inaccessible outside the computer they originated on. Thanks to the cloud, few people worry anymore about fried hard drives or lost or corrupted USB drives. Cloud computing makes the documents available everywhere because the data actually lives on a network of hosted servers that transmit data over the internet.
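To make the idea concrete, here is a minimal sketch of saving and retrieving a document through a cloud storage service, using Amazon S3's Python SDK (boto3). The bucket and file names are hypothetical placeholders, and real use would require AWS credentials.

    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")

    # Upload a local spreadsheet to a (hypothetical) cloud bucket.
    s3.upload_file("budget.xlsx", "example-docs-bucket", "finance/budget.xlsx")

    # Later, from any machine with credentials and an internet connection,
    # pull the same document back down -- no USB drive required.
    s3.download_file("example-docs-bucket", "finance/budget.xlsx", "budget.xlsx")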

Cloud companies, sometimes referred to as Cloud Service Providers (CSPs), are companies that offer services or applications on the cloud. These cloud companies essentially host tools and data centers that allow customers to retrieve and utilize information in a flexible, manageable and cost-effective manner. Through cloud companies, customers can easily access their cloud-based data via any network connection.

Cloud computing services are broken down into three major categories: software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS).

Software-as-a-Service: SaaS is the most common cloud service type, and many of us use it on a daily basis. The SaaS model makes software accessible through an app or web browser. Some SaaS programs are free, but many require a monthly or annual subscription to maintain the service. Requiring no hardware installation or management, SaaS solutions are a big hit in the business world. Notable examples include Salesforce, Dropbox, and Google Docs.

Platform-as-a-Service: PaaS is a cloud environment supporting web application development and deployment. PaaS supports the full lifecycle of applications, helping users build, test, deploy, manage, and update all in one place. The service also includes development tools, middleware, and business intelligence solutions. Notable examples include Windows Azure, AWS Elastic Beanstalk, and Google App Engine.

Infrastructure-as-a-Service: IaaS provides users with basic computer infrastructure capabilities like data storage, servers, and hardware, all in the cloud. IaaS gives businesses access to large platforms and applications without the need for large onsite physical infrastructure. Notable examples of IaaS include DigitalOcean, Amazon EC2, and Google Compute Engine.
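As an illustration of the IaaS model, the sketch below rents a single virtual server from Amazon EC2 via boto3. The AMI ID is a hypothetical placeholder; real values vary by account and region.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provision one small virtual server from the provider's infrastructure.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])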

The cloud is basically a decentralized place to share information over networks of remote servers. Every cloud application has a host, and the hosting company is responsible for maintaining the massive data centers that provide the security, storage capacity, and computing power needed to maintain all of the information users send to the cloud.

The most prominent companies hosting the cloud are major players like Amazon (Amazon Web Services), Microsoft (Azure), Apple (iCloud), and Google (Google Drive), but there's also a plethora of other players, large and small. These hosting companies can sell the rights to use their clouds and store data on their networks, while also offering the end user an ecosystem that can communicate between devices and programs (e.g., download a song on your laptop and it's instantly synced to the iTunes software on your iPhone).


Generally, cloud computing follows three delivery models:

Public: This is the most common model, and all of the players mentioned above (Amazon, Microsoft, Apple & Google) run public clouds that are accessible anywhere with login credentials and the right web app.

Private: This model offers the same kind of flexibility as the public cloud, but with the infrastructure needs (hosting, data storage, IT staff, etc.) provided by the companies or users of the service. Additionally, the restricted access and hands-on management of hosting give the private model an extra layer of security.

Hybrid: Hybrid cloud computing is a combination of the public and private models. The two cloud types are linked over the internet and can share resources when needed (e.g., if the private cloud reaches storage capacity or becomes corrupted, the public cloud can step in and save the day).
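A toy sketch of that "step in and save the day" logic: write to the private cloud until it nears capacity, then spill over to the public cloud. The capacity figure and tier names are invented for illustration.

    PRIVATE_CAPACITY_GB = 10_000  # hypothetical private cloud limit

    def choose_tier(private_used_gb: float, object_size_gb: float) -> str:
        """Pick which cloud should store the next object."""
        if private_used_gb + object_size_gb <= PRIVATE_CAPACITY_GB:
            return "private"
        return "public"  # burst to the public cloud when private is full

    print(choose_tier(9_000, 100))  # private
    print(choose_tier(9_950, 100))  # public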


What is Cloud Computing? – Computer Notes

Posted: at 11:42 am

Cloud computing is defined as a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) available on demand over a network, usually the internet. Cloud computing is, literally, the use of remote servers (usually accessible via the internet) to process or store information, with access typically through a web browser. Saving files on a server via the internet is one example.

Cloud computing also spares you from managing applications yourself; the cloud is a shared multi-tenant platform that is maintained for you. When using an application running in the cloud, you simply connect to it, customize it, and use it.

Today, millions of us are happy to use a variety of applications in the cloud, such as CRM, HR, accounting, and even custom business applications. These cloud-based applications can be operational in a few days, which is not possible with traditional enterprise software. They are cheap because you do not have to invest in hardware and software, spend money on configuring and maintaining complex layers of technology, or finance the facilities to run them. And they are more scalable, more secure, and more reliable than most applications. In addition, upgrades are handled for you, so your applications automatically benefit from all available security and performance improvements, as well as new features.

We'll be covering the following topics in this tutorial:

The advantage of cloud computing is twofold. It is a form of file backup, and it also allows the same document to be worked on from several devices of various types (PC, tablet, or smartphone), whether by one person at a desk or by someone traveling.

Cloud computing simplifies usage by overcoming the constraints of traditional computing tools (installation and updating of software, storage, data portability). Cloud computing also provides more elasticity and agility, because it allows faster access to IT resources (servers, storage, or bandwidth) via a simple web portal, and thus without investing in additional hardware.

The National Institute of Standards and Technology (NIST) describes cloud computing as a model for on-demand network access to computing resources (e.g., networks, servers, storage, applications, and services). Common cloud service models are:

Cloud Software as a Service (SaaS): The user has the ability to use the service provider's applications over the network. These applications are accessed via different interfaces: thin clients, web browsers, mobile devices. The customer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, databases, and storage, but may have access to restricted configuration settings specific to certain user categories.

Cloud Platform as a Service (PaaS): The consumer can deploy its own applications onto the cloud infrastructure. The user does not manage or control the underlying cloud infrastructure (network, servers, operating systems, databases, storage), but has control over the deployed applications and the ability to configure the application-hosting environment.

Cloud Infrastructure as a Service (IaaS): The client can rent storage, processing power, network, and other computing resources. The user does not manage or control the underlying cloud infrastructure, but has control over operating systems, databases, and deployed applications.

Public cloud: This type of infrastructure is accessible to a wide audience and belongs to a provider of cloud services.

Private cloud: The cloud infrastructure works for one organization. It can be managed by the company itself (internal private cloud) or by a third party; in the latter case, the infrastructure is dedicated to the company and accessible via secure VPN-type networks.

Community cloud: The infrastructure is shared by several organizations that have common interests (e.g., safety requirements, compliance). As with the private cloud, it can be managed by the organizations themselves or by third parties.

Hybrid cloud: The infrastructure consists of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology, enabling data or application portability.

Cost reduction: Because cloud computing is an incremental investment, companies can save money in the long term by obtaining resources only as they need them.

Storage increase: Instead of purchasing large amounts of storage in advance of need, organizations can increase storage incrementally, requesting additional disk space from the service provider when the need is recognized.

Resource pooling: In the IT industry, this feature is also known as multi-tenancy, where many users/clients share varied types and levels of resources.

Highly automated: As the software and hardware requirements are hosted by a cloud provider, on-site IT departments no longer have to worry about keeping things up to date and available.

Greater mobility: Once information is stored in the cloud, accessing it is quite simple; all you need is an internet connection, regardless of where you are located.

Change of IT focus: Once responsibility for the computing environment has essentially shifted to the cloud provider, IT departments can focus more on the organization's needs and on developing strategic applications and tactics, rather than on day-to-day operational needs.

Towards green IT: By freeing up physical space, the virtualization of applications and servers reduces the amount of equipment as well as the need for air conditioning and, consequently, wasted energy.

Keeping things updated: Similar to the change of IT focus, this benefit follows from the business model of cloud service providers; the provider's focus is to monitor and maintain the most recent tools and techniques for the customer.

Quick elasticity: This characteristic has to do with the fundamental flexibility and elasticity of the cloud. For example, web shops carry a standard volume of transactions during the year but need more capacity near Christmas, and of course these stores do not want to pay for that peak capacity during the rest of the year.

Measured service: Services are monitored, controlled, and reported. This feature allows a pay-per-use service model. It has similarities with telephone service packages, where you pay a standard subscription for basic levels and pay extra for additional services, without changing the contract.
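The measured-service idea maps directly onto a telephone-style tariff, and a few lines of Python make the billing arithmetic explicit; the prices and quota below are invented for illustration.

    BASE_FEE = 20.00       # monthly subscription covering the included quota
    INCLUDED_HOURS = 100   # compute hours included in the base fee
    OVERAGE_RATE = 0.35    # price per additional compute hour

    def monthly_bill(hours_used: float) -> float:
        overage = max(0.0, hours_used - INCLUDED_HOURS)
        return BASE_FEE + overage * OVERAGE_RATE

    print(monthly_bill(80))   # 20.0 -- within the included quota
    print(monthly_bill(250))  # 72.5 -- base fee plus 150 extra hours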

The various problem areas for cloud computing environments are:

Security: As the data is no longer held within the organization itself, security becomes a major issue, and questions must be answered, such as: Is the data adequately protected? Is the system hacker-proof? Can you meet government and regulatory requirements for privacy? How would you discover an information leak? Note also that corporate governance is always very concerned about data stored outside the organization.

Location and data privacy: Where is the data stored? How is it stored? Does the provider have adequate security in the places where the data is stored?

Internet dependence: Since cloud resources are not available on the local network, you have to worry about the availability of the internet. If you lose internet access, what happens to your cloud computing environment? If your service provider suffers an extended period of unavailability, what do you do about your employees and customers? What do you do in case of increased latency or delayed responses?

Levels of availability and service: Most organizations are familiar with service level agreements. A service level agreement specifies the amount of service capacity that someone has to provide, along with the penalties for not providing that level of service. How can you be sure that the cloud service provider has sufficient resources to maintain the service level agreement you signed with them?
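One way to reason about a service level agreement is to translate its availability percentage into permitted downtime, as in this small sketch:

    def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
        """Minutes of downtime permitted per period at a given availability."""
        total_minutes = days * 24 * 60
        return total_minutes * (1 - availability_pct / 100)

    print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes per 30 days
    print(round(allowed_downtime_minutes(99.99), 1))  # 4.3 minutes per 30 days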


Top Cloud Computing Trends in 2021: Beyond the ‘Big 3’ – ITPro Today

Posted: at 11:42 am

2021 was another strong year for the cloud computing industry, not just for the big three public cloud providers but also for a number of other players looking to take advantage of the growing demand for the cloud.

It's not surprising that the COVID-19 pandemic contributed to the cloud computing trends in 2021. As the pandemic continues to affect organizations of all sizes, the cloud has been helping them transition to remote work and move forward with their digital transformation plans. With this growth in cloud usage came an expansion of new capabilities from different providers, to help organizations with their digital transformation and to ease migration to the cloud.

The big three public cloud providers, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, all boasted significant revenue growth in 2021, thanks to the growing demand for cloud services. In February, cloud spend was already up by 35% year-over-year, according to an analysis by Synergy Research Group. That trend continued into May, when the big three once again reported double-digit cloud revenue gains.

One of the main drivers of growth during the pandemic was the fact that organizations no longer wanted to manage their own technology infrastructure, said Amazon CFO Brian Olsavsky during his company's first-quarter fiscal 2021 earnings call.

The big three cloud providers ended 2021 the way they began it, with strong revenue growth that was driven by demand. And Microsoft CEO Satya Nadella expects cloud growth to continue, explaining why during his company's third-quarter fiscal 2021 earnings call: "Digital technology is a deflationary force in an inflationary economy."

While 2021 was a big year for cloud computing's market leader, AWS, in terms of revenue, the company also had a change in leadership.

Since AWS' creation in the early 2000s, Andy Jassy had led the Amazon cloud business unit, but that changed in 2021. Jassy took the reins of all of Amazon, with former Tableau CEO Adam Selipsky taking over as CEO of AWS on May 17. During his short tenure as AWS CEO, Selipsky has not introduced any dramatic changes in the way that AWS operates and has largely stayed on the same course that was set by Jassy.

It wasn't until Nov. 30 that Selipsky's first truly big public moment occurred, when he delivered the keynote at the AWS re:Invent 2021 conference. At the conference, AWS announced a series of new technologies, including Private 5G, Cloud WAN, and new processor services.

AWS may have new leadership, but it's clear that it is not taking a new direction; rather, it is continuing on the same path of aggressive pricing and a constant stream of new service introductions.

While AWS, Azure and Google Cloud dominate the cloud industry by revenue and market share, 2021 reminded us that there are other players that are more than capable of meeting user demands.

Among the alternative cloud providers that had a big year in 2021 was DigitalOcean, which had its initial public offering on March 24, listing on the New York Stock Exchange (NYSE) under the symbol DOCN. DigitalOcean's focus on enabling developers has brought the company success in 2021.

Enterprise stalwarts IBM and Oracle also made significant cloud moves in 2021. IBM opened a series of new cloud computing regions in 2021, with publicly announced plans to expand even more into 2022. IBM is focusing on its enterprise customers and is seeking to differentiate its cloud services with quantum computing capabilities.

Not to be outdone, Oracle has also been aggressive with its cloud strategy this year as it grows its Oracle Cloud Infrastructure (OCI) platform. Oracle founder Larry Ellison has been pushing his company's cloud migration services and cost benefits as it seeks to take share from AWS.

There has also been greater demand for open-source cloud technologies, including OpenStack and Apache CloudStack. The OpenStack open-source cloud platform reported growth in usage over the course of 2021 and rolled out two major releases: OpenStack Wallaby in May and OpenStack Xena in October.

Apache CloudStack also advanced in 2021, with the CloudStack 4.15 update in January and CloudStack 4.16 in November. One major improvement to CloudStack in 2021 was an enhanced visual interface to manage, monitor and configure cloud deployments.

We can't talk about 2021 cloud computing trends without mentioning multicloud. It was another big year for multicloud, thanks not only to the continued demand for cloud services but also to the number of strong options users have. The advantage of multicloud when there are multiple strong options available is that an organization can run workloads across several clouds, instead of choosing just one cloud for all of its needs.

The multicloud trend in 2021 also extended to hybrid cloud deployments, a mix of public cloud resources alongside on-premises deployments. Moving from one deployment model to another, either from on-premises to the public cloud or from one public cloud to another, has been a common topic of discussion across vendors in 2021 as they try to grow share.

What 2021 proved was that organizations are continuing, and will continue, to turn to the cloud, and that there is no shortage of options, technologies, and vendors from which IT pros can choose.

What other 2021 cloud computing trends did you notice? Sound off in the comments below.


Global Cloud Computing in Industrial IoT Market (2021 to 2026) – Increased Use of Cloud Computing Platforms is Driving Growth – ResearchAndMarkets.com…

Posted: at 11:42 am

DUBLIN--(BUSINESS WIRE)--The "Cloud Computing in Industrial IoT: Market for Cloud support of IIoT by Software, Platforms, Infrastructure (SaaS, PaaS, and IaaS) including Centralized Cloud Edge Computing for Devices and Things 2021 - 2026" report has been added to ResearchAndMarkets.com's offering.

This report evaluates the technologies, players, and solutions relied upon for cloud computing in IIoT. The report analyzes the impact of SaaS, PaaS, and IaaS upon IIoT as well as cloud computing software, platforms, and infrastructure in support of edge computing.

The report also assesses market opportunities for cloud computing support of IIoT devices and the objects that will be monitored, actuated, and controlled through IoT enabled processes. The report includes detailed forecasts for the global and regional outlook as well as by industry verticals, devices, and things from 2021 to 2026.

Companies Mentioned

Select Report Findings:

Cloud computing is moving beyond the consumer and enterprise markets into support for manufacturing and industrial automation across other industry verticals. The Industrial Internet of Things (IIoT) represents a substantial opportunity both for the centralized cloud "as a service" model for software, platforms, and infrastructure, and for distributed computing, wherein IIoT edge computing will enable the ICT industry to leverage real-time processing and analytics.

Key Topics Covered:

1 Executive Summary

2 Overview

3 IIoT Cloud Computing Ecosystem

3.1 IIoT Cloud Computing Services

3.2 Cloud Computing Deployment

3.3 IIoT Cloud Computing Applications

3.4 Cloud Manufacturing

3.5 Software Defined IIoT and Industry 4.0

3.6 Smart Connected Enterprise and Workplace

3.7 Cloud Technology in Robotics

3.8 Artificial Intelligence and IIoT Solutions

3.9 IIoT Cloud Computing Challenges

3.10 IIoT Cloud Computing Pricing Models

4 Cloud Computing in IIoT Market Dynamics

4.1 Drivers

4.1.1 Increased Use of Cloud Computing Platforms

4.1.2 Government Policies, Initiatives and Innovative Efforts

4.1.3 Optimization of Operational Efficiency and Automation

4.2 Challenges

4.2.1 High Initial Cost

4.2.2 Data Security and Privacy Breaches

5 Case Study: Cloud Computing in IIoT Market

5.1 IoT Use Cases of Kemppi

5.2 Smarter Systems for Increasing Customer Productivity Case Study

5.3 Caterpillar's NextGen Human-Machine Interface (HMI) Software Platform

5.4 Creating Smarter Heating and Cooling Systems with Cloud

5.5 Prototyping the Future Automotive Cloud

5.6 Oil and Gas Production Smart Case Study

5.7 Rockwell Adapted Microsoft Azure Case Study

5.8 Cloud-First Digital Transformation

5.9 Eastman Case Study for Cloud Migration

5.10 Data Analytics Improves Transportation Equipment Utilization

6 Industrial IoT Cloud Computing Market

7 IIoT Cloud Connected Devices/Things Forecasts

8 Company Analysis

9 Conclusions and Recommendations

For more information about this report visit https://www.researchandmarkets.com/r/5cn589


Enabling the new enterprise IT stack for the cloud's Third Wave – Diginomica

Posted: at 11:42 am

Two decades after cloud computing first came into our vernacular, more than 70% of companies have migrated at least some workloads into the public cloud. This is much more than a lift-and-shift of the same applications as before; in many cases the workloads are running on entirely new, cloud-native platforms.

This mainstream adoption and the acceleration of digital across all aspects of a business have completely reinvented the enterprise IT stack and given rise to a whole different ecosystem of consultancies. These cloud-savvy consultancies offer everything from experience design to digital engineering services alongside the more traditional data management, security, implementation, and integration services. They are becoming the glue holding the new enterprise stack together.

Here is a quick snapshot of how the cloud has evolved over the last 20+ years, and how we see the Third Wave progressing.

Many of the same characteristics that have made cloud computing so pervasive remain true in the Third Wave, namely the access, speed, agility, performance, and innovation it enables. After all, waves do tend to build on each other. However, some things have evolved.

It's a hybrid mix. While the trend toward public cloud is still accelerating, it's easy to forget that 98% of businesses still rely on on-premise IT infrastructures. Not every legacy application can - or should - be pushed to the cloud, so there will always be at least some that remain on-premise. When you consider that IBM mainframes are still used by 44 of the top 50 banks and all top 10 insurers worldwide, it's clear that a hybrid approach that can accommodate on-prem assets will be a reality for many years to come.

It's multi-cloud and multi-app. Business demands a mix of large platforms, dozens of apps, and hundreds of tools. According to the Netskope 2021 Cloud and Threat report, app adoption increased 22% during the first six months of 2021, and the average company with 500 - 2,000 users now uses 805 distinct apps and cloud services, and has to manage the API-led integration associated with this best-of-breed portfolio. On the infrastructure side, when even the mighty Amazon Web Services (AWS) can suffer an hours-long outage that took down a wide swath of the internet as it did earlier this month, it's a reminder that a multi-cloud approach often makes sense.

It's data intensive. More than any other factor, the Third Wave is about connection and creating seamless digital experiences. In their 2019 book, Connected Strategy: Building Continuous Customer Relationships for Competitive Advantage, authors Christian Terwiesch and Nicolaj Siggelkow identified four connected customer experiences that firms can create: 1) "response-to-desire" journeys, 2) curated offerings, 3) coach behaviors, and 4) automatic execution. Each of these experiences can help turn episodic interactions into continuous relationships - and not coincidentally, revenue.

But you can't create those connections without a Third Wave enterprise IT stack, and it's not easy to keep those connections working reliably and consistently without a rock-solid service ecosystem supporting it.

As in each prior wave, there is a set of go-to Third Wave platforms that act as a foundation for many organizations' IT and business operations. Some, like Salesforce, ServiceNow, AWS, Google and Microsoft, are household names that have grown their platforms and market share substantially over the last 10 years. According to Synergy Research, AWS, Microsoft Azure, Google Cloud, Alibaba and IBM comprise 80% of the public cloud market.

However, there are hundreds of smaller players and emerging platforms vying for success, including names like Okta, Cloudflare, Commerce Tools, Snowflake, Twilio, DataDog, DataBricks, and UIPath, to name just a few.

These new firms are driving a tremendous amount of innovation in the space, helping companies automate, connect, secure, and manage all those digital interactions that are so critical in the Third Wave. Many of these tools have made it far easier than in the past to interconnect and work together; however, those connections don't build and evolve themselves. After all, the cloud never stands still. It's constantly evolving.

Customers and vendors alike are looking for help navigating the growing complexity and interconnectedness of the Third Wave, to drive usage and deliver value on an ongoing basis. Suddenly technology services look less and less like a cost center, and more and more like the engine of customer success.

The new Enterprise IT Stack for the cloud's Third Wave depends on a vibrant service ecosystem. Gartner predicts that 85% of large organizations will engage external service providers to migrate applications to the cloud (up from 43% in 2019). And that's just to get started. There's also a growing need to manage those cloud-based solutions.

Are there a growing number of firms looking to compete for that budget, especially with valuations nudging upward? You bet there are.

But not all Third Wave cloud consultancies are equal. Nor will all grow fast enough to capitalize on the unprecedented opportunities of this next wave.

Giant ecosystems like Salesforce have 1,700 service providers, and all the emerging players in that new IT stack are working hard to build out their own set of trusted providers. Growth and diversification matter as much in services as they do in software. Customers sometimes need industry giants like Accenture, Deloitte, and Wipro, and other times they need the flexibility to work with smart boutiques that are agile, fast, and know platforms or domains inside and out.

At Tercera, we focus exclusively on these Third Wave ecosystems, and our goal is to identify and help grow the top services players for these emerging platforms. Here's our view of what distinguishes the services winners in the Third Wave.

We look for firms that not only specialize in the platforms, but bring something unique to the space. Some of our portfolio companies have a deep vertical specialty, others have unique IP/accelerators, or a unique methodology.

A consultancy with broad but shallow experience across many industries isn't nearly as valuable as a firm with deep expertise serving large financial services companies.

Look at Salesforce's acquisition of Vlocity in 2020 for $1.3 billion (approximately 8x forward looking revenue), and Wipro's acquisition of Capco for $1.45 billion.

Focus matters.

Winning Third Wave cloud consultancies will offer a lifecycle of services that help their customers conceive, construct, and utilize digital processes with ongoing insight and support across the entire journey. Tech firms need some design and strategy chops (or amazing partners). Strategy firms need deep technical skills. Firms that help deliver real business outcomes, not just individual projects, are the ones that will be seen as a strategic partner, not just a one-and-done service provider.

Central to all of this is talent, especially in today's market. The best services firms must develop new techniques to find the best talent, train a diverse team, and motivate them in different ways than in the past. This will require an investment in culture, in developing people (stop calling them heads or resources!), and in new tools to facilitate virtual collaboration, communication, and connectivity across the organization and with customers.

I've spent the majority of my career building and investing in cloud and professional services, and a lot has changed. But a lot has stayed the same. Here's what I can tell you:

The vendors that are driving innovation in the space today will evolve, and they will require a diverse service ecosystem. The number of services firms serving this space is going to grow substantially over the next few years as cloud adoption continues, but there will always be the stand-outs - the firms that attract the best talent and the most interesting customers. If you think you're one of those firms, we'd love to hear from you!


Cloud Computing Vs. Edge Computing: Who Wins the Race? – Analytics Insight

Posted: at 11:42 am


The cloud's primary notion of providing a centralized data source that can be accessed from anywhere in the globe appears to be the polar opposite of edge computing's local data handling concept. In many respects, though, edge computing was created by the cloud. The big data movement would never have grown to such proportions without centralized data storage. Many internet payment providers, for example, would not exist, and companies like Microsoft and Amazon would be very different from what they are now. We've spent some time attempting to sift out the benefits of edge and cloud computing. Which is the most effective? The answer isn't as simple as one would believe.

Cloud computing refers to the storage, processing, computing, and analysis of large amounts of data on remote servers or data centers. It also refers to the supply of many internet-based services, such as data storage, servers, databases, networking, and software. Because data centers are frequently located in faraway locations, there is a time lag between data gathering and processing, which is usually undetectable in most use cases. In time-sensitive applications, however, this latency, despite being measured in milliseconds, becomes critical. Consider real-time data collection for a self-driving automobile, where delays might have disastrous implications.

Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) are the three basic categories of cloud computing. High infrastructure availability, self-service provisioning, elasticity, mobility, workload resilience, migration flexibility, broad network access, disaster recovery, and pay-per-use are just a few of the advantages of cloud computing in the form of IaaS.

The back-and-forth movement of data from the point where it is created to the central server for processing and subsequently to the end-user requires a lot of bandwidth, which slows down data processing and transfer. Because emerging technologies and IoT devices require sub-second reaction times, the tendency is to locate data processing and analytics as near to the data source as feasible.

Edge computing, as opposed to cloud computing, brings computation, storage, and networking closer to the data source, lowering travel time and latency dramatically. The procedures take place near the device or at the network's edge, allowing for speedier reaction times. Edge applications limit the amount of data that has to be moved, as well as the traffic generated by those transfers and the distance that data has to travel.

The exponential rise of IoT devices necessitates a shift in how we collect and analyze data. Consider how many smart home gadgets you possess, and then consider how many are used in healthcare, transportation, and manufacturing. The amount of data these devices regularly send to servers is enormous, and it frequently surpasses network bandwidth. Traditional centralized cloud architectures, no matter how strong or performant, can't keep up with these devices' real-time requirements.

While organizations employ content delivery networks (CDNs) to decentralize data and service provisioning by copying data closer to the user, edge computing uses smart devices, mobile phones, or network gateways to conduct tasks on behalf of the cloud, bringing computing power closer to the user. Edge applications enable lower latency and cheaper transmission costs by lowering data quantities and associated traffic. Edge computing's content caching, storage, and service delivery lead to faster response times and transfer rates.
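The division of labor can be expressed as a simple placement rule: send a task to the edge when its latency budget is tighter than the round trip to the cloud. The latency figures here are invented for illustration.

    EDGE_RTT_MS = 5    # hypothetical round trip to a nearby edge node
    CLOUD_RTT_MS = 80  # hypothetical round trip to a distant cloud region

    def place_workload(latency_budget_ms: float) -> str:
        """Pick the nearest tier that can satisfy the latency budget."""
        if latency_budget_ms >= CLOUD_RTT_MS:
            return "cloud"  # non-time-sensitive work can stay centralized
        if latency_budget_ms >= EDGE_RTT_MS:
            return "edge"
        return "on-device"

    print(place_workload(10))   # edge  -- e.g., a vehicle sensor loop
    print(place_workload(500))  # cloud -- e.g., a nightly analytics batch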

Edge computing, according to some observers, may eventually supplant cloud computing because computing will become decentralized and the necessity for a centralized cloud will diminish. However, because their duties are distinct, this will not be the case. Edge computing devices are built to swiftly capture and process data on-site, as well as analyze data in real time; they are not concerned with long-term data storage. Cloud computing, on the other hand, is built on scalable infrastructure and can be quickly expanded to meet a variety of requirements. As a result, edge computing is appropriate for applications where every millisecond matters, whereas cloud computing is best for non-time-sensitive applications. Edge computing will most likely complement cloud computing rather than replace it.

The benefits of cloud computing are obvious. However, for some applications, relocating activities from a central place to the edge and bringing bandwidth-intensive data and latency-sensitive apps closer to the end user is critical. Because setting up an edge computing infrastructure requires in-depth professional skills, though, it will be some time before mainstream adoption occurs.


5 Biggest Cloud Computing Trends to look out for in 2022 – Analytics Insight

Posted: at 11:42 am

This article features the top five trends, outlined below, that are defining the future of cloud computing.

Cloud computing is becoming more popular than ever as businesses adopt data-driven business models, remote and hybrid work environments, and global supply networks. New capabilities and deployment patterns continue to develop, giving organizations of all sizes and sectors more options for consuming, and benefiting from, their cloud investments. Cloud computing boomed in 2020 as the workforce turned virtual and businesses reacted to the worldwide pandemic by focusing on the supply of digital services. Gartner predicts that global spending on public cloud services will reach $1 trillion by 2024.

Cloud computing is increasingly seen as a critical component for firms seeking to work smarter and accomplish projects more quickly. With access to on-demand processing capacity, highly scalable platforms, and a more flexible approach to IT expenditure, the cloud has progressed from cutting-edge technology to an essential IT resource. Cloud computing trends portray how new technology is altering the way firms function and spend their IT budgets.

Cloud services are offered in different ways. The delivery model that a firm adopts depends on its functional requirements and the maturity of its IT and data governance. As businesses look for more flexibility and choice in IT solutions, hybrid cloud and serverless cloud are trending.

a) Hybrid Cloud: Many businesses choose a hybrid cloud approach, which combines public cloud services with the placement of a private cloud devoted to a specific organization. This is especially true for companies that collect sensitive information or work in highly regulated areas like insurance, where data privacy is critical. A hybrid strategy is popular because it gives enterprises the control they need while also adapting and evolving as they roll out new services for their customers.

b) Serverless cloud: Serverless computing is a type of cloud computing that allows businesses to access IT infrastructure on demand without having to invest in infrastructure or manage it. Serverless models are gaining popularity among large and small businesses that want to create new applications fast but lack the time, resources, and/or funding to deal with infrastructure. This allows growing firms to make use of greater computing power at a lower cost, while large corporations may launch new digital services without adding to the workload of their already overburdened IT personnel.
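As a sketch of what the serverless model looks like to a developer, here is a minimal AWS Lambda handler in Python: the provider runs it on demand and bills per invocation, and no server is provisioned or managed by the author. The event field is a hypothetical input.

    import json

    def lambda_handler(event, context):
        """Entry point the cloud provider invokes on each request."""
        name = event.get("name", "world")  # hypothetical request field
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }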

The cloud has evolved into more than just a storage facility or a source of computing power. Organizations are keen on extracting insights from their data through machine learning and artificial intelligence, and on boosting efficiency through automation best practices.

Machine learning and artificial intelligence: Cloud-based artificial intelligence (AI) technologies, such as machine learning, are assisting organizations in extracting more value from the ever-increasing amounts of data they gather. AI algorithms enable organizations to discover new insights from their data and enhance the way they work. Companies that don't have the means or talent to construct their own AI infrastructure (and many don't) can nevertheless benefit from it by using cloud service providers' systems.

Automation: Automation is a crucial driver of cloud adoption, particularly when it comes to boosting the efficiency of corporate operations. Companies can automate many internal procedures if their data and systems are centralized in the cloud. In addition, many businesses are striving to tighten connections between various pieces of software to manage their expanding cloud footprints better and ensure that solutions from diverse suppliers operate seamlessly together.

Delegation of IT operations: As more manufacturers provide solutions that can be hosted on external servers, some organizations prefer to outsource parts of their IT operations to third parties. Companies can reduce operational expenses by focusing on their core product or service rather than engaging specialist teams to create, operate, and maintain their systems. However, they must keep sensitive data and technology in mind when determining which functions to outsource, to avoid jeopardizing their governance or compliance policies.

Businesses and customers are concerned about IT security and data compliance, and today's cloud solutions are developed to resolve these concerns. This has created strong demand for Secure Access Service Edge and cloud-based disaster recovery practices.

a) Secure Access Service Edge (SASE): Businesses are reconsidering their approach to security and risk management as employees access more services and data from personal devices outside of their organizations' IT networks. SASE is a strong approach to IT security that allows organizations to swiftly launch new cloud services and ensure that their systems are secure.

b) Cloud-based disaster recovery: Cloud-based disaster recovery backs up a company's data on an external cloud server. It is less expensive and more time-efficient, with the added benefit of being handled by an outside party. Businesses frequently use cloud-based disaster recovery for critical servers and applications like huge databases and ERP systems.

Cloud-based platforms are rapidly expanding to serve companies' development needs as they seek to differentiate themselves by quickly launching new goods and services. Cloud computing has opened new opportunities in application development, from purpose-built coding environments to decentralized data storage. This has given impetus to technologies like containers and Kubernetes, edge computing, and cloud-native application development.

a) Containers and Kubernetes: Containers provide enterprises with a specialized cloud-based environment to develop, test, and deploy new applications. As a result, developers can concentrate on the intricacies of their applications, while IT teams can focus on delivering and managing solutions, making the entire process faster and more efficient. Kubernetes is an open-source container orchestration technology that makes deploying and managing containerized applications easier. The software scales apps based on client demand and monitors the performance of new services, so firms can address concerns before they become a problem (see the scaling sketch after this list).

b) Edge computing: This type of cloud computing puts data processing (collection, storage, and analysis) closer to the sources of the data. This lowers latency while also enabling the usage of edge devices. By 2025, Gartner expects that 75% of data generated by businesses will be created and handled outside of a centralized cloud.

c) Cloud-native: Cloud-native apps allow enterprises to design and deploy new software to their consumers more quickly than traditional cloud applications. Cloud-native apps are constructed as a network of distributed containers and microservices. As a result, various teams may work on new features simultaneously, speeding up the innovation process.
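As a sketch of the demand-driven scaling mentioned above, the snippet below uses the official Kubernetes Python client to raise a deployment's replica count. The deployment name and namespace are hypothetical, and a production cluster would more likely delegate this to a HorizontalPodAutoscaler.

    from kubernetes import client, config

    config.load_kube_config()  # read credentials from the local kubeconfig
    apps = client.AppsV1Api()

    # Scale a hypothetical deployment up to five replicas to meet demand.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )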

We are going to witness a huge explosion in cloud gaming in the coming years. Platforms such as Google's Stadia and Amazon Luna are going to define the direction the cloud gaming realm takes in 2022. The arrival of cloud virtual reality and augmented reality (VR/AR) has made headsets more affordable and is fostering the growth of cloud gaming across various sections of society.

Cloud computing applications appear to be limitless, with 25% of organizations planning to move all of their software to the cloud in the next year. Increased cloud computing adoption and the discovery of new methods to leverage cloud-based systems to produce insights and efficiency are the upcoming trends to be seen in 2022. As more organizations embrace the increase in processing power, scalability, and flexibility that cloud-based systems provide, cloud adoption is expected to continue to expand. The road to adoption and the timeframe for doing so may vary for each company, but one thing is certain: there will be no going back to the old ways.

Author

Bhavesh Goswami, Founder & CEO, CloudThat


AWS: Here’s what went wrong in our big cloud-computing outage – ZDNet

Posted: at 11:42 am

Amazon Web Services (AWS) rarely goes down unexpectedly, but you can expect a detailed explainer when a major outage does happen.

12/15 update: AWS misfires once more, just days after a massive failure

The latest of AWS's major outages occurred at 7:30AM PST on Tuesday, December 7, lasted five hours and affected customers using certain application interfaces in the US-EAST-1 Region. In a public cloud of AWS's scale, a five-hour outage is a major incident.


According to AWS's explanation of what went wrong, the source of the outage was a glitch in its internal network, which hosts "foundational services" such as application/service monitoring, the AWS internal Domain Name Service (DNS), authorization, and parts of the Elastic Compute Cloud (EC2) network control plane. DNS was important in this case, as it's the system used to translate human-readable domain names to numeric internet (IP) addresses.
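DNS's role is easy to demonstrate: translating a human-readable name into the numeric addresses machines actually route to is a single standard-library call in Python.

    import socket

    # Resolve a domain name to its numeric IP addresses.
    for family, _, _, _, sockaddr in socket.getaddrinfo("aws.amazon.com", 443):
        print(sockaddr[0])  # an IPv4 or IPv6 address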


AWS's internal network underpins parts of the main AWS network that most customers connect with in order to deliver their content services. Normally, when the main network scales up to meet a surge in resource demand, the internal network should scale up proportionally via networking devices that handle network address translation (NAT) between the two networks.

However, on Tuesday last week, the cross-network scaling didn't go smoothly: AWS NAT devices on the internal network became "overwhelmed", blocking translation messages between the networks, with severe knock-on effects for several customer-facing services that, technically, were not directly impacted.

"At 7:30 AM PST, an automated activity to scale capacity of one of the AWS services hosted in the main AWS network triggered an unexpected behavior from a large number of clients inside the internal network," AWS says in its postmortem.

"This resulted in a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network, resulting in delays for communication between these networks."

The delays spurred latency and errors for foundational services talking between the networks, triggering even more failing connection attempts that ultimately led to "persistent congestion and performance issues" on the internal network devices.
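The feedback loop described here, where failures trigger retries that deepen the congestion, can be sketched in a few lines. The numbers are invented; the point is only that naive retries multiply offered load exactly when capacity is exceeded.

    CAPACITY = 1000  # hypothetical requests/second the NAT devices can pass

    def offered_load(base_load: float, rounds: int) -> float:
        """Model clients that retry every request the congested path drops."""
        load = base_load
        for _ in range(rounds):
            dropped = max(0.0, load - CAPACITY)
            load = base_load + dropped  # retries pile onto the next round
        return load

    # A 20% surge over capacity snowballs once clients start retrying.
    print(offered_load(1200, rounds=5))  # 2200.0 and still climbing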

With the connection between the two networks blocked up, the AWS internal operations team quickly lost visibility into its real-time monitoring services and was forced to rely on past-event logs to figure out the cause of the congestion. After identifying a spike in internal DNS errors, the team diverted internal DNS traffic away from blocked paths. This work was completed at 9:28AM PST, two hours after the initial outage began.

This alleviated impact on customer-facing services but didn't fully fix affected AWS services or unblock NAT device congestion. Moreover, the AWS internal ops team still lacked real-time monitoring data, subsequently slowing recovery and restoration.

Besides the loss of real-time visibility, AWS's internal deployment systems were hampered, again slowing remediation. The third major cause of its non-optimal response was concern that a fix for internal-to-main network communications would disrupt other customer-facing AWS services that weren't affected.

"Because many AWS services on the main AWS network and AWS customer applications were still operating normally, we wanted to be extremely deliberate while making changes to avoid impacting functioning workloads," AWS said.

First, the main AWS network was not affected, so AWS customer workloads were "not directly impacted", AWS says. Rather, customers were affected by AWS services that rely on its internal network.

However, the knock-on effects from the internal network glitch were far and wide for customer-facing AWS services, affecting everything from compute, container and content distribution services to databases, desktop virtualization and network optimization tools.

AWS control planes are used to create and manage AWS resources. These control planes were affected because they are hosted on the internal network. So, while EC2 instances were not affected, the EC2 APIs customers use to launch new EC2 instances were. Higher latency and error rates were the first impacts customers saw at 7:30AM PST.
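For client code, the standard defense against control-plane error spikes like these is retry with exponential backoff, sketched here around an EC2 API call via boto3; a production version would also add jitter and log each failure.

    import time
    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def describe_with_backoff(max_attempts: int = 5):
        """Retry a control-plane call with exponentially growing pauses."""
        for attempt in range(max_attempts):
            try:
                return ec2.describe_instances()
            except ClientError:
                time.sleep(2 ** attempt)  # pause 1s, 2s, 4s, 8s, ...
        raise RuntimeError("EC2 control plane unavailable after retries")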


With instance provisioning impaired, customers had trouble with Amazon RDS (Relational Database Service) and the Amazon EMR big data platform, while customers of Amazon WorkSpaces' managed desktop virtualization service couldn't create new resources.

Similarly, AWS's Elastic Load Balancers (ELB) were not directly affected, but since the ELB APIs were, customers couldn't add new instances to existing ELBs as quickly as usual.

Route 53 (DNS) APIs were also impaired for five hours, preventing customers from changing DNS entries. There were also login failures to the AWS Console, latency affecting the AWS Security Token Service used by third-party identity services, delays to CloudWatch, impaired access to Amazon S3 buckets and DynamoDB tables via VPC endpoints, and problems invoking serverless Lambda functions.

The December 7 incident shared at least one trait with a major outage that occurred this time last year: it stopped AWS from communicating swiftly with customers about the incident via the AWS Service Health Dashboard.

"The impairment to our monitoring systems delayed our understanding of this event, and the networking congestion impaired our Service Health Dashboard tooling from appropriately failing over to our standby region," AWS explained.

Additionally, the AWS support contact center relies on the AWS internal network, so staff couldn't create new cases at normal speed during the five-hour disruption.

AWS says it will release a new version of its Service Health Dashboard in early 2022, which will run across multiple regions to "ensure we do not have delays in communicating with customers."

Cloud outages do happen. Google Cloud has had its fair share, and Microsoft in October had to explain its eight-hour outage. While rare, these outages are a reminder that the public cloud might be more reliable than conventional data centers, but things do go wrong, sometimes catastrophically, and can impact a wide number of critical services.

"Finally, we want to apologize for the impact this event caused for our customers," said AWS. "While we are proud of our track record of availability, we know how critical our services are to our customers, their applications and end users, and their businesses. We know this event impacted many customers in significant ways. We will do everything we can to learn from this event and use it to improve our availability even further."


Why the healthcare cloud may demand zero trust architecture – Healthcare IT News

Posted: at 11:42 am

One of the most pressing issues in healthcare information technology today is the challenge of securing organizations that operate in the cloud.

Healthcare provider organizations increasingly are turning to the cloud to store sensitive data and backup confidential assets, as doing so enables them to save money on IT infrastructure and operations.

In fact, research shows that the healthcare cloud computing market is projected to grow by $33.49 billion between 2021 and 2025, registering a compound annual growth rate of 23.18%.

To many in healthcare, the shift to cloud computing seems inevitable. But it also brings unique security risks in the age of ransomware. Indeed, moving to the cloud does not insulate organizations from risk.

More than a third of healthcare organizations were hit by a ransomware attack in 2020, and the healthcare sector remains a top target for cybercriminals due to the wealth of sensitive information it stores.

Healthcare IT News sat down with P.J. Kirner, chief technology officer at Illumio, a cybersecurity company, to discuss securing a cloud environment in healthcare, and how the zero trust security model may be key.

Q. Healthcare provider organizations increasingly are turning to the cloud. That is clear. What are the security challenges that the cloud poses to healthcare provider organizations?

A. While healthcare cloud growth comes with certain advantages (for example, more information sharing, lower costs, and faster innovation), the proliferation of multi-cloud and hybrid-cloud environments has also complicated cloud security for healthcare providers in myriad ways. And things will likely stay complicated.

Unlike companies that can move to the cloud entirely, healthcare organizations with physical addresses and physical equipment (for example, hospital beds and medical devices) will permanently remain hybrid.

Though going hybrid might seem like a transient state for some organizations, most healthcare organizations will find that they need to continuously adapt to a permanent hybrid state and all the evolving security risks that come with it.

In a cloud environment, it's often difficult to see and detect security risks before they become problems. Hybrid-multi-cloud environments contain blind spots between infrastructure types that allow vulnerabilities to creep in, potentially exposing an organization to outside threats.

Healthcare providers that share sensitive data with third-party organizations over the cloud, for example, may also be impacted if their partner experiences a breach. Additionally, these heterogeneous environments also involve more stakeholders who can influence how a company operates in the cloud.

Because those stakeholders might be in different silos depending on their specialties and organizational needs (for example, the expertise needed for Azure is not the same as the expertise needed for AWS), the infrastructure is even more challenging to protect.

If you're a healthcare provider, you handle sensitive information, such as personally identifiable information and health records, on a daily basis, which all represent prime real estate for bad actors hoping to make a profit.

These high-value assets often live in data center or cloud environments, which an attacker can access once they breach the perimeter of an environment. Because of this, as more healthcare organizations move to the cloud, we're also going to see more attackers take advantage of the inherent flaws and vulnerabilities in this complex environment to gain access to sensitive data.

Q. When it comes to securing healthcare organizations in the cloud, you contend that adopting a zero trust architecture (an approach that assumes breach and verifies every connection) is vital. Why?

A. We're living in an age where cyberattacks are a given, not a hypothetical inconvenience. To adopt zero trust, security teams need to first change how they think about cybersecurity; it's no longer about just keeping attackers out, but also knowing what to do once they are in your system. Once security teams embrace an "assume breach" mindset, they can begin their zero trust journey in a meaningful way.

Zero trust strategies apply least privilege access controls, providing only the necessary information and access to a user. This makes it substantially more difficult for an attacker to reach their intended target in any attempted breach.

In practice, this means that ransomware cannot spread once it enters a system, because, by default, it doesn't have the access it needs to move far beyond the initial point of entry.
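As a toy illustration of that default-deny behavior (a minimal sketch, not any vendor's actual product; the principals, resources, and actions below are hypothetical), least-privilege enforcement can be reduced to an explicit allowlist in which anything not granted, including the lateral movement a ransomware payload attempts, is denied:

    # Default-deny, least-privilege check: only explicitly granted
    # (principal, resource, action) tuples are allowed.
    ALLOWED = {
        ("ehr-frontend", "patients-db", "read"),
        ("ehr-frontend", "patients-db", "write"),
        ("billing-service", "patients-db", "read"),
    }

    def is_allowed(principal: str, resource: str, action: str) -> bool:
        """Deny everything that has not been explicitly granted."""
        return (principal, resource, action) in ALLOWED

    # Ransomware that compromises the billing service still cannot encrypt
    # the patient database: no write grant was ever issued for it.
    assert not is_allowed("billing-service", "patients-db", "write")
    assert is_allowed("ehr-frontend", "patients-db", "read")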

Another crucial component in a zero trust architecture is visibility. As I mentioned, it's difficult to see everything in a cloud environment and detect risks before they occur. The weak spots in an organization's security posture often appear in the gaps between infrastructure types, such as between the cloud and the data center, or between one cloud service provider and another.

With enhanced visibility (for example, visibility that spans your hybrid, multi-cloud, and data center environments), however, organizations are able to identify niche risks at the boundaries of environments where different applications and workloads interact, which gives them a more holistic view of all activity.

This information is vital for cyber resiliency and for a zero trust strategy to succeed; only with improved insights can we better manage and mitigate risk.
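As a rough sketch of what boundary-focused visibility can look like in practice (the field names, environment labels, and approved paths are invented for illustration), a monitoring step can flag any flow that crosses environments without an approved path:

    from dataclasses import dataclass

    @dataclass
    class Flow:
        src_env: str       # e.g. "aws", "azure", "datacenter"
        dst_env: str
        src_workload: str
        dst_workload: str

    # Approved cross-environment paths; anything else at a boundary
    # is surfaced for review.
    ALLOWED_CROSSINGS = {("datacenter", "aws"), ("aws", "azure")}

    def boundary_risks(flows: list[Flow]) -> list[Flow]:
        """Return flows that cross environment boundaries without approval."""
        return [
            f for f in flows
            if f.src_env != f.dst_env
            and (f.src_env, f.dst_env) not in ALLOWED_CROSSINGS
        ]

    observed = [
        Flow("datacenter", "aws", "pacs-server", "imaging-api"),     # approved
        Flow("azure", "datacenter", "analytics-vm", "pacs-server"),  # flagged
    ]
    for f in boundary_risks(observed):
        print(f"review: {f.src_workload} ({f.src_env}) -> {f.dst_workload} ({f.dst_env})")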

In a year where more than 40 million patient records have already been compromised by attacks, it's more imperative than ever for healthcare organizations to make accurate assessments in regard to the integrity of their security posture.

We'll see more healthcare organizations leverage zero trust architecture as we head into the new year and reflect on the ways the cybersecurity landscape has changed in 2021.

Q. Zero trust strategies have gained traction in the past year, especially in tandem with the Biden Administration's federal stamp of approval. From your perspective, what do you think it will take for more healthcare CISOs and CIOs to go zero trust?

A. While the awareness of and the importance placed on zero trust strategies have grown in the last year, organizations still have a long way to go in implementing their strategies. In 2020, only 19% of organizations had fully implemented a least-privilege model, although nearly half of IT leaders surveyed believed zero trust to be critical to their organizational security model.

Unfortunately, a ransomware attack is often the wake-up call that ultimately prompts CISOs and CIOs to rethink their security model and adopt zero trust architecture. We've seen an upsurge in cyberattacks on hospitals over the course of the pandemic, threatening patient data.

By leveraging zero trust solutions for breach containment, healthcare organizations can mitigate the impact of a breach; that way, an attacker cannot access patient data even if they manage to breach the system initially.

Healthcare teams are starting to understand that proactive cybersecurity is essential for avoiding outcomes that may be even worse than compromised data: If a hospital system is impacted by a ransomware attack and needs to shut down, they're forced to turn patients away, neglecting urgent healthcare needs.

Healthcare CISOs and CIOs are beginning to realize that the traditional security measures they've had in place (detection and protecting only the perimeter) aren't enough to make them resilient to a cyberattack.

Even if you haven't been breached yet, you're seeing attacks seriously impact other hospital systems and realizing that could happen to you, too.

Healthcare CISOs and CIOs who recognize the limitations of a legacy security model against today's ransomware threats will understand the need to adopt a strategy that assumes breach and can isolate attacks, which is what the zero trust philosophy is all about.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Follow this link:

Why the healthcare cloud may demand zero trust architecture - Healthcare IT News

Posted in Cloud Computing | Comments Off on Why the healthcare cloud may demand zero trust architecture - Healthcare IT News
