Some of the world’s biggest cloud computing firms want to make millions of servers last longer doing so will save … – Yahoo! Voices

Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.

Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.

These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.

Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.
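To see why a one-year life extension moves the income statement, consider a rough straight-line depreciation sketch. The per-server cost and fleet size below are made-up illustrative figures, not numbers from Amazon or Alphabet.

```python
# Straight-line depreciation sketch with hypothetical numbers (not actual
# Amazon or Alphabet figures): extending useful life lowers the annual expense.
server_cost = 10_000          # hypothetical purchase price of one server, USD
old_life_years = 5
new_life_years = 6

old_annual_expense = server_cost / old_life_years   # $2,000 per year
new_annual_expense = server_cost / new_life_years   # ~$1,667 per year
saving_per_server = old_annual_expense - new_annual_expense

print(f"Annual depreciation per server: ${old_annual_expense:,.0f} -> ${new_annual_expense:,.0f}")
print(f"Pre-tax income gain per server per year: ${saving_per_server:,.0f}")

# The effect scales with fleet size (again, a hypothetical count):
fleet = 1_000_000
print(f"Across {fleet:,} servers: ${saving_per_server * fleet / 1e9:.2f} billion per year")
```

Lower depreciation expense does not change cash flow; it simply spreads the same purchase cost over more years, which is why it flows straight through to reported net income.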

Cloudflare followed a similar path, extending the useful life of its server and network equipment from four to five years starting in 2024. This decision is expected to result in a more modest impact of around $20 million.

Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center designs.

Continue reading here:

Some of the world's biggest cloud computing firms want to make millions of servers last longer doing so will save ... - Yahoo! Voices

Some of the world’s biggest cloud computing firms want to make millions of servers last longer doing so will save … – TechRadar


Read more from the original source:

Some of the world's biggest cloud computing firms want to make millions of servers last longer doing so will save ... - TechRadar

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as in emerging devices like software-defined vehicles (SDVs), there is a global trend toward custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of her alleged business division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it certainly would like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

See more here:

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More - AnandTech

Confidential Computing and Cloud Sovereignty in Europe – The New Stack

Confidential computing is emerging as a potential game-changer in the cloud landscape, especially in Europe, where data sovereignty and privacy concerns take center stage. Will confidential computing be the future of cloud in Europe? Does it solve cloud sovereignty issues and adequately address privacy concerns?

At its core, confidential computing empowers organizations to safeguard their sensitive data even while it's being processed. Unlike traditional security measures that focus on securing data at rest or in transit, confidential computing ensures end-to-end protection, including during computation. This is achieved by creating secure enclaves: isolated areas within a computer's memory where sensitive data can be processed without exposure to the broader system.

Cloud sovereignty, or the idea of retaining control and ownership over data within a country or region, is gaining traction as a critical aspect of digital autonomy. Europe, in its pursuit of technological independence, is embracing confidential computing as a cornerstone in building a robust cloud infrastructure that aligns with its values of privacy and security.

While the promise of confidential computing is monumental, challenges such as widespread adoption, standardization and education need to be addressed. Collaborative efforts between governments, industries and technology providers will be crucial in overcoming these challenges and unlocking the full potential of this transformative technology.

As Europe marches toward a future where data is not just a commodity but a sacred trust, confidential computing emerges as the key to unlocking the full spectrum of possibilities. By combining robust security measures with the principles of cloud sovereignty, Europe is poised to become a global leader in shaping a trustworthy and resilient digital future.

"The era of confidential computing calls, and Europe stands prepared to respond." – Margrethe Vestager, the European Commission's executive vice president for a Europe Fit for the Digital Age.

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe in Paris from Mar. 19-22, 2024.


Follow this link:

Confidential Computing and Cloud Sovereignty in Europe - The New Stack

Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per-core performance is good if you license software by core, but there are a wide variety of applications that need cores, but not fast ones. Today's smaller efficient cores tend to be on the order of the performance of a mainstream Skylake/Cascade Lake Xeon from 2017-2021, yet they can be packed more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM, and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but are not ones that need massive amounts of compute. Good examples are websites that serve a specific line-of-business function but do not have hundreds of thousands of visitors. Also, these tend to be workloads that are already in cloud instances, VMs, or containers. As the industry has started to move away from hypervisors with per-core licensing or per-socket license constraints, scaling up to bigger, faster cores that are going underutilized makes little sense.

As a result, the industry realized it needed lower-cost-to-produce chips that chase density instead of per-core performance. An awesome way to think about this is to imagine trying to fit the maximum number of instances of those small line-of-business applications developed over the years, sitting in 2-8 core VMs, into as few servers as possible. There are other applications like this as well that commonly come up, such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes just having more cores is, well, more cores = more better.
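As a toy illustration of that packing exercise, here is a minimal first-fit-decreasing sketch; the VM sizes and the 128-core host are invented for the example, not a real inventory or scheduler.

```python
# First-fit-decreasing sketch: pack small 2-8 vCPU VMs onto dense hosts.
# VM sizes and the 128-core host are illustrative, not a real inventory.
vms = [8, 4, 2, 6, 4, 2, 2, 8, 4, 2] * 20   # 200 small line-of-business VMs
host_cores = 128                             # e.g., one dense cloud-native socket

hosts = []                                   # each entry = free cores left on a host
for vm in sorted(vms, reverse=True):
    for i, free in enumerate(hosts):
        if free >= vm:
            hosts[i] -= vm                   # place the VM on an existing host
            break
    else:
        hosts.append(host_cores - vm)        # otherwise open a new host

print(f"{len(vms)} VMs ({sum(vms)} vCPUs total) fit on {len(hosts)} hosts of {host_cores} cores")
```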

Once the constraints of legacy hypervisor per-core/per-socket licensing are removed, the question becomes how to fit as many cores as possible on a package, and then how densely those packages can be deployed in a rack. One other trend we are seeing is not just more cores, but also lower clock speed cores. CPUs with a maximum frequency in the 2-3GHz range today tend to be considerably more power efficient than P-core-only server parts in the 4GHz+ range and desktop CPUs now pushing well over 5GHz. This is the voltage-frequency curve at work. If your goal is to have more cores, but you do not need maximum per-core performance, then lowering per-core performance by 25% while decreasing power by 40% or more means that all of those applications are being serviced with less power.
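A quick back-of-the-envelope calculation with the 25% and 40% figures above shows why this trade-off pays off (the ratios are illustrative, not measurements of any specific part).

```python
# Perf-per-watt arithmetic using the illustrative 25%/40% figures above.
perf_ratio = 0.75    # per-core performance drops by 25%
power_ratio = 0.60   # per-core power drops by 40%

perf_per_watt_gain = perf_ratio / power_ratio        # 1.25x better perf/W
power_for_same_work = power_ratio / perf_ratio       # ~0.80x power for the same work

print(f"Performance per watt: {perf_per_watt_gain:.2f}x")
print(f"Power to service the same workload: {power_for_same_work:.0%} of the original")
```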

Less power is important for a number of reasons. Today, the biggest reason is the AI infrastructure build-out. If you, for example, saw our 49ers Levi's Stadium tour video, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It is also a prime example of a location that needs AI servers for sports analytics.

That type of constraint where the same traditional work needs to get done, in a data center footprint that is not changing, while adding more high-power AI servers is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.
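Using the 4-5x density and ~2x power figures quoted above, the consolidation math works out roughly as follows; the server count and per-server wattage are hypothetical placeholders.

```python
# Rough consolidation math using the density/power ratios quoted above.
# Legacy server count and per-server wattage are hypothetical.
legacy_servers = 100
legacy_power_w = 500          # watts per legacy 2017-2021 era server

density_gain = 4              # conservative end of the 4-5x range
power_multiplier = 2          # a modern dense server draws ~2x the power

new_servers = legacy_servers / density_gain
legacy_total_kw = legacy_servers * legacy_power_w / 1000
new_total_kw = new_servers * legacy_power_w * power_multiplier / 1000

print(f"Servers: {legacy_servers} -> {new_servers:.0f}")
print(f"Power:   {legacy_total_kw:.0f} kW -> {new_total_kw:.0f} kW "
      f"({1 - new_total_kw / legacy_total_kw:.0%} less for the same work)")
```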

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market that you can buy for your data centers/colocation, here is a quick rundown.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. Onboard, it has up to 128 cores/256 threads and is currently the densest publicly available x86 server CPU.

AMD removed L3 cache from its P-core design, lowered the maximum all core frequencies to decrease the overall power, and did extra work to decrease the core size. The result is the same Zen 4 core IP, with less L3 cache and less die area. Less die area means more can be packaged together onto a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c, but with half the memory channels, less PCIe Gen5 I/O, and single-socket-only operation.

Some organizations that are upgrading from popular dual 16-core Xeon servers can move to single-socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as the edge/embedded part, but we need to recognize that it is in the vein of current-gen cloud-native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many former Intel Xeon team members.

Ampere has two main chips: the Ampere Altra (up to 80 cores) and the Altra Max (up to 128 cores). These use the same socket, so most servers can support either; the Max simply came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating-point compute capabilities, Ampere is using Arm Neoverse N1 cores that focus on low-power integer performance. It turns out a huge number of workloads, like serving web pages, are mostly integer-performance driven. While these may not be the cores you would want for building a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up will be the AmpereOne. This is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs 128 cores), but you would get fewer threads (192 vs 256 threads). If you had 1 vCPU VMs, AmpereOne would be denser. If you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.
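A small sketch makes that comparison concrete. It assumes the allocation policy implied by the SMT concern: a VM never shares a physical core with another tenant, so on an SMT part a 1 vCPU VM still occupies a whole core. The core and thread counts are the ones quoted above.

```python
# VM density per socket under a no-core-sharing policy (SMT security concern).
# Core/thread counts are the AmpereOne and Bergamo figures quoted above.
def vms_per_socket(cores: int, threads_per_core: int, vcpus_per_vm: int) -> int:
    # Each VM occupies whole cores; a VM may use both threads of its own cores.
    cores_per_vm = max(1, -(-vcpus_per_vm // threads_per_core))  # ceiling division
    return cores // cores_per_vm

for vcpus in (1, 2):
    ampere_one = vms_per_socket(cores=192, threads_per_core=1, vcpus_per_vm=vcpus)
    bergamo = vms_per_socket(cores=128, threads_per_core=2, vcpus_per_vm=vcpus)
    print(f"{vcpus}-vCPU VMs per socket: AmpereOne {ampere_one} vs Bergamo {bergamo}")
```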

Next in the market will be the Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power-per-core metric while also maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in both embedded and lower-power lines like Alder Lake-N, where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute-intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids as an update to the current 5th Gen Xeon Emerald Rapids for all P-core designs later in 2024. Sierra Forest will be the first generation all E-core design and is planned for the first half of 2024. Intel already has announced the next generation Clearwater Forest will continue the all E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged along with LPDDR memory.

While at 500W and using Arm Neoverse V2 performance cores one would not think of this as a cloud-native processor, it does have something really different. The Grace Superchip has onboard memory packaged alongside its Arm CPUs. As a result, that 500W is actually for CPU and memory. There are applications that are primarily memory bandwidth bound, not necessarily core count bound. For those applications, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get, and they are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller, more efficient footprint, then the Grace Superchip might actually fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.

More:

Cloud Native Efficient Computing is the Way in 2024 and Beyond - ServeTheHome

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential – InvestorPlace

In a world continually reshaped by technology, cloud computing stands as a pivotal force driving transformation. With its rapid ascent, early investors in cloud computing stocks have seen their investments significantly outperform the S&P 500. This highlights the sector's explosive growth and its vital impact on business and consumer landscapes.

2024 shouldn't be any different, which is why, seizing this momentum, I turned to ChatGPT, initiating my research on the top cloud computing picks with a precise ask:

"Kindly conduct an in-depth exploration of the current dynamics and trends characterizing the United States stock market as of February 2024."

I proceeded with a targeted request to unearth gems within the cloud computing arena.

"Based on this, suggest three cloud computing stocks that have 10 times potential."

The crucial insights provided by ChatGPT lay the foundation for our piece covering the three cloud computing stocks pinpointed by AI as top contenders poised to deliver stellar returns.


Datadog Inc. (NASDAQ:DDOG) has emerged as a stalwart in the observability and security platform sector for cloud applications. It witnessed an impressive 61.76% stock surge in the past year and currently trades at $134.91.

Further, the company's third quarter 2023 financial report underscores its robust performance. It showed 25% year-over-year (YOY) revenue growth, reaching $547.5 million. Additionally compelling is the significant uptick in customers from 22,200 to 26,800. This signals the firm's efficiency in expanding its client base and driving revenue.

Simultaneously, Datadog expects generative artificial intelligence (AI) and large language models (LLMs) to drive further growth in cloud workloads. AI-related usage comprised 2.5% of third-quarter annual recurring revenue. This resonates notably with next-gen AI-native customers and positions the company for sustained growth in this dynamic landscape.

The projected $568 million revenue for the fourth quarter of 2024 reflects a commitment to sustained expansion. Also, it underlines the company's ability to adapt to market dynamics and capitalize on emerging opportunities.


Zscaler, Inc. (NASDAQ:ZS) is a pioneer in providing cloud-based information security solutions.

The company made a noteworthy shift to 100% renewable energy for its offices and data centers in November 2021. This solidifies its standing as an environmental steward and leader in the market. Also, CEO Jay Chaudhry emphasizes that beyond providing top-notch cybersecurity, Zscaler's cloud services contribute to environmental conservation by eliminating the need for on-premises hardware.

Beyond sustainability, Zscaler thrives financially, boasting 7,700 customers, including 468 contributing over $1 million in annual recurring revenue (ARR). In the first quarter, non-GAAP earnings per share exceeded expectations at 67 cents, beating estimates by 18 cents. And revenue soared to $496.7 million, a remarkable 39.7% YOY bump.

Looking forward, second-quarter guidance forecasts revenue between $505 million and $507 million, indicating a robust 30.5% YOY growth. Also, it has an ambitious target of $2.09 billion to $2.10 billion for the entire fiscal year. Thus, Zscaler attributes its success to a potent combination of technology and financial acumen.


Snowflake (NASDAQ:SNOW) stands resilient amid market fluctuations, emerging as a top performer in the cloud stock landscape over the past year.

Moreover, while yet to reach previous all-time highs, its strategic focus on AI integrations has propelled its recent success. Positioned at the intersection of the enduring narrative around AI and the high-interest cloud computing sector, Snowflake captures attention with its forward-looking approach.

Financially, Snowflake demonstrates robust figures with a gross profit margin of 67.09%, signaling financial strength. Additionally, the impressive 40.87% revenue growth significantly outpaces the sector median by 773.93%. This attests to the company's agility in navigating market dynamics.

Peering into the future, Snowflake's fourth-quarter guidance paints a promising picture, with anticipated product revenue falling between $716 million and $721 million. Elevating the outlook, the fiscal year 2024 projection boldly sets a target of $2.65 billion in product revenue. Therefore, this ambitious trajectory demonstrates Snowflake's adept market navigation, savvy AI integration, and steadfast commitment to robust financial performance.

On the publication date, Muslim Farooque did not have (directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A life-long gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a Bachelor of Science degree in applied accounting from Oxford Brookes University.

See the rest here:

ChatGPT Stock Predictions: 3 Cloud Computing Companies the AI Bot Thinks Have 10X Potential - InvestorPlace

The 3 Best Cloud Computing Stocks to Buy in February 2024 – InvestorPlace

These cloud computing stocks can march higher in 2024


Cloud computing has helped corporations increase productivity and reduce costs. Once a business adopts cloud computing, it continues to pay annual fees to keep its digital infrastructure running.

Cloud solutions can quickly turn into a company's backbone. It's one of the last costs some companies will think of removing. Firms that operate in the cloud computing industry often benefit from high renewal rates, recurring revenue and the ability to raise prices in the future. Investors can capitalize on the trend with these cloud computing stocks.


Amazon (NASDAQ:AMZN) had a record-breaking Black Friday and optimized its logistics to offer the fastest delivery speeds ever for Amazon Prime members. Over seven billion products arrived at people's doors the same or the next day after the order. It's a testament to Amazon's vast same-day delivery network that encompasses 110 U.S. metro areas and more than 55 dedicated same-day sites across the United States.

The delivery network makes Amazon Prime more enticing for current members and people on the fence. The company's efforts paid off and resulted in 14% year-over-year (YoY) revenue growth in the fourth quarter of 2023.

Amazon's ventures into artificial intelligence (AI) can also lead to meaningful stock appreciation. The company's generative AI investments have paid off and strengthened Amazon Web Services' value proposition. Developers can easily scale AI apps with Amazon's Bedrock. These resources can help corporations increase productivity and generate more sales.

Innovations like these will help Amazon generate more traction for its e-commerce and cloud computing segments. The AI sector has many tailwinds that can help Amazon stock march higher for long-term investors.


Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) is a staple in many funds. The equity has outperformed the broader market with a 58% gain over the past year. Shares are up by 170% over the past five years.

Shares trade at a reasonable 22x forward P/E ratio. The stock initially lost some value after earnings but has recouped some of its losses. The earnings report wasn't too bad, with 13% YoY revenue growth and 52% YoY net income growth.

Investors may have wanted higher numbers since Meta Platforms (NASDAQ:META) reported better results. However, a 7% post-earnings drop in the stock didn't make much sense. The business model is still robust and is accelerating revenue and earnings growth. Alphabet also has a lengthy history of rewarding long-term investors.

Many analysts believe the equity looks like a solid long-term buy. The average price target implies a 9% upside. The highest price target of $175 per share suggests the equity can rally 16.5% from current levels.


ServiceNow (NYSE:NOW) is an information technology company with an advanced cloud platform that helps corporations increase their productivity and sales. The equity has comfortably outperformed the market with 1-year and 5-year gains of 77% and 248%, respectively.

The company currently trades at a 61x forward P/E ratio, meaning you'll need a long-term outlook to justify the valuation. ServiceNow certainly delivers on the financial front, increasing revenue by 26% YoY in Q4 2023. ServiceNow also reported $295 million in GAAP net income, a 97% YoY improvement. The company generated $150 million in GAAP net income during the same period last year.

Revenue is going up, and profit margins are accelerating. These are two promising signs for a company that boasts a 99% renewal rate for its core product. The company's subscription revenue continues to grow at a fast clip and generates predictable annual recurring revenue.

On this date of publication, Marc Guberti held a long position in NOW. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Marc Guberti is a finance freelance writer at InvestorPlace.com who hosts the Breakthrough Success Podcast. He has contributed to several publications, including the U.S. News & World Report, Benzinga, and Joy Wallet.

Read the original:

The 3 Best Cloud Computing Stocks to Buy in February 2024 - InvestorPlace

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 – Cointelegraph

The metaverse is reshaping the digital world and entertainment landscape. Ozone's platform empowers businesses to create, launch and profit from various 3D projects, ranging from simple galleries or meetup spaces to AAA games and complex 3D simulations, transforming how we engage with immersive content in the spatial computing era.

Apple's Vision OS launch is catalyzing mainstream adoption of interactive spatial content, opening new horizons for businesses. 95% of business leaders anticipate a positive impact from the metaverse within the next five to ten years, potentially establishing a $5 trillion market by 2030.

Ozone cloud platform has the potential to become the leading spatial computing cloud. Source: Ozone

The future of 3D technology seamlessly blends the virtual and physical realms using spatial computing technology. But, spatial computing can be challenging, especially when the tools are limited and the methods for creating 3D experiences are outdated.

A well-known venture capital firm, a16z, recently pointed out that it's time to change how game engines are used for spatial computing, describing the future of 3D engines as a "cloud-based 3D creation engine," and this is exactly what the Ozone platform is.

The Ozone platform is a robust cloud computing platform for 3D applications. Source: Ozone

The platforms OZONE token is an innovative implementation of crypto at a software-as-a-service (SaaS) platform level. You can think of the OZONE token as the core platform token that will unlock higher levels of spatial and AI computing over time, fully deployed and interoperating throughout worlds powered by our cloud.

"Ozone is fully multichain and cross-chain, meaning it supports all wallets, blockchains, NFT collections and cryptocurrencies, and has already integrated several in the web studio builder with full interoperability across spatial experiences," said Jay Essadki, executive director for Ozone.

Ozone Studios already integrated and validated spatial computing cross-chain interoperability. Source: Ozone Studio

He added, "You can think of the Ozone composable spatial computing cloud as an operating system, or as a development environment. It continuously evolves by integrating new technologies and services."

The OZONE token, positioned as the currency of choice, offers not just discounts and commercial benefits but also, through the integration with platform oracles and cross-chain listings, enables the first comprehensive horizontally and vertically integrated Web3 ecosystem for the metaverse and spatial computing era.

Ozone eliminates technical restrictions and makes spatial computing, Web3 and AI strategies accessible to organizations looking to explore the potential of the metaverse with almost no technical overhead or debt.

Ozone is coming out of stealth with a cloud infrastructure supported by AI and Web3 microservices and is expanding its executive, engineering and advisory teams as it raises more capital, with a view to replacing legacy game engines such as Unreal or Unity.

At the same time, Ozone provides full support for assets created in those engines to be deployed on the Ozone platform across Web2 and Web3 alike.

Ozone is also engaged in a string of enterprise and government discussions and has been establishing and closing enterprise and government customer relationships ahead of initial cloud infrastructure deployments.

Ozone welcomes new advisors as the platform comes out of stealth.

Ozone's new 2024 advisors to make the open metaverse happen:

Ozone will finalize a full game engine based on fully integrated micro-templates that will make the build and deployment of all games and 3D spatial computing as simple as clicking a few buttons, and it is already working.

The upcoming features on the Ozone 3D Web Studio. Source: Ozone

Ozone is announcing a new suite of templatized games. With multi-AI integration, three completed games (Quest, Hide and Seek and RPG, coming in 2024) and more are underway.

It opens up the way to building interactive 3D experiences in a new way.

Ozone helps companies to build and share 3D experiences. Source: Ozone

At the heart of Ozone is the innovative Studio 3D development platform, complemented by a marketplace infrastructure to support e-commerce and the economy.

Ozone's SaaS platform empowers businesses to create, deploy and monetize spatial computing experiences at scale for Web3 or traditional e-commerce applications. The platform's features, including social infrastructure, AI integration and gamification elements, enhance the interactive aspect of 3D experiences, digital twins and spatial data automation, while providing full interoperability and portability of content and data across experiences and across devices.

Ozone's vision of becoming the industry standard for interactive 3D development, with compatibility across devices and accessibility from any device, positions it as a catalyst for innovation in media and entertainment. Ozone is set to play a key role in shaping the future of immersive spatial web experiences.

Ozone has secured investments from prominent Web3 VC funds and is opening its first-ever VC equity financing round.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim at providing you with all important information that we could obtain in this sponsored article, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions, nor can this article be considered as investment advice.

View post:

Ex VR/AR lead at Unity joins new spatial computing cloud platform to enable the open metaverse at scale, AI, Web3 - Cointelegraph

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now – InvestorPlace

Cloud computing companies are an essential part of our day-to-day lives: they keep us interconnected and streamline our operations, allowing us to be more efficient and effective. They also make many tasks much easier to perform through their technological solutions, which can be applied everywhere from finance to human resources.

If you want to take advantage of the great boom in and strong demand for these companies, here are three cloud computing stocks to buy quickly that you can consider adding to your portfolio.


Behind pharmaceutical and biotech companies there is a big player responsible for providing them with cloud-based software solutions to streamline their operations: Veeva Systems Inc (NYSE:VEEV).

Financially, VEEV is stable and always on the move. Its revenues speak for themselves, as they are on the rise, and its net income is growing consistently, which is reflected in its market performance.

One of the particularities that distinguishes this company is its capacity for innovation.

For example, their most recent release, the Veeva Compass Suite, is a comprehensive set of tools that gives healthcare companies a much deeper understanding of existing patient populations and a picture of healthcare provider behaviors.

It's practically like giving you a complete and specific picture of the entire healthcare network landscape.

On top of that, they make a real impact on the lives of patients, as their training solutions are helping many companies modernize their employee qualification processes.


Next on the list of companies involved in the cloud computing sector is Workday Inc (NASDAQ:WDAY), which specializes in providing companies with cloud-based enterprise applications for financial management and human resources.

They provide practical software-based solutions that allow companies to streamline their processes in managing their financial operations and human talent.

One of the things that makes this company attractive is its strong financial performance: in its last financial quarter, revenue increased by 16.7% compared to the same period of the previous year, reaching $1.87 billion.

Among its most important metrics are subscription revenues, which grew even faster than overall revenue, up 18.1% to approximately $1.69 billion.

In addition to these numbers, the company is forming important strategic alliances, such as its partnership with McLaren Racing to provide the team with innovative solutions.

This partnership demonstrates Workday's versatility: it not only provides business solutions in traditional sectors but also participates in highly competitive industries.


Closing the list of companies that have become essential to our day-to-day is the giant Oracle Corporation (NYSE:ORCL), a technology company recognized worldwide.

The company specializes in data management solutions and, of course, cloud computing. One of its main commitments is helping organizations improve their efficiency and optimize their operations through innovative technological solutions.

Financially, the company is in a phase of solid growth, specifically in its total revenue and its cloud division.

One of the stars of this company is its cloud application suite, which has gained a strong foothold in the healthcare sector.

Large and important institutions such as Baptist Health Care and the University of Chicago Medicine are adopting the company's solutions to improve the employee experience and, of course, the care of their patients.

In addition, Oracle is expanding its global presence with the opening of a new cloud region in Nairobi, Kenya. This expansion makes clear its commitment to economic and technological development across the African continent.

Oracle Cloud Infrastructure's (OCI) unique architecture gives it the opportunity to offer governments and businesses the means to drive innovation and growth in the region.

As of this writing, Gabriel Osorio-Mazzilli did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Gabriel Osorio is a former Goldman Sachs and Citigroup employee. He possesses discipline in bottom-up value investing and volatility-based long/short equities trading.

Read more here:

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now - InvestorPlace

Leveraging Cloud Computing and Data Analytics for Businesses – Analytics Insight

In today's dynamic business landscape, organizations are constantly seeking innovative ways to drive efficiency, agility, and value. Among the transformative technologies reshaping business operations, cloud computing and data analytics stand out as powerful tools that, when leveraged effectively, can yield significant business value. By integrating these technologies strategically, businesses can unlock new opportunities for growth, streamline operations, and gain a competitive edge in the market.

Cloud computing offers organizations the flexibility to access computing resources on-demand, without the need for substantial investments in hardware and software infrastructure. This agility enables businesses to scale their operations rapidly in response to changing market demands, without the constraints of traditional IT environments. By migrating workloads to the cloud, organizations can streamline their operations, reduce downtime, and optimize resource utilization, leading to improved efficiency across the board.

In today's data-driven world, businesses are sitting on a goldmine of valuable information. Data analytics empowers organizations to extract actionable insights from vast volumes of data, enabling informed decision-making and driving business value. By leveraging advanced analytics techniques, such as machine learning and predictive modeling, businesses can identify trends, anticipate customer needs, and optimize processes for maximum efficiency. Furthermore, effective data governance and quality assurance practices ensure that insights derived from data analytics are accurate, reliable, and actionable.

Cloud FinOps, a practice focused on optimizing cloud spending and maximizing business value, plays a crucial role in ensuring that cloud investments deliver tangible returns. By tracking key performance indicators (KPIs) and measuring the business impact of cloud transformations, organizations can quantify the value derived from their cloud investments. Cloud FinOps goes beyond cost savings to encompass broader metrics such as improved resiliency, innovation, and operational efficiency, providing a comprehensive view of the business value generated by cloud initiatives.

Cloud computing infrastructure provides organizations with the foundation they need to harness the power of data analytics at scale. By leveraging cloud-based platforms for big data processing and analytics, organizations can access virtually unlimited computing resources, enabling them to analyze large datasets quickly and efficiently. Additionally, cloud infrastructure offers built-in features for data protection, disaster recovery, and security, ensuring that sensitive information remains safe and secure at all times. Furthermore, the pay-as-you-go pricing model of cloud services allows organizations to optimize costs and maximize ROI on their infrastructure investments.

Cloud computing accelerates the pace of software development by providing developers with access to scalable resources and flexible development environments. By leveraging cloud-based tools and platforms, organizations can streamline the software development lifecycle, reduce time-to-market, and improve collaboration among development teams. Furthermore, cloud-based development environments enable developers to experiment with new ideas and technologies without the constraints of traditional IT infrastructure, fostering innovation and driving business growth.

In conclusion, cloud computing and data analytics represent powerful tools for driving business value in today's digital economy. By embracing these technologies and implementing sound strategies for their deployment, organizations can unlock new opportunities for growth, enhance operational efficiency, and gain a competitive edge in the market. With the right approach, cloud computing and data analytics can serve as catalysts for innovation and transformation, enabling businesses to thrive in an increasingly data-driven world.


Go here to read the rest:

Leveraging Cloud Computing and Data Analytics for Businesses - Analytics Insight

Cloud-Computing in the Post-Serverless Era: Current Trends and Beyond – InfoQ.com

Key Takeaways

[Note: The opinions and predictions in this article are those of the author and not of InfoQ.]

As AWS Lambda approaches its 10th anniversary this year, serverless computing expands beyond just Function as a Service (FaaS). Today, serverless describes cloud services that require no manual provisioning, offer on-demand auto-scaling, and use consumption-based pricing. This shift is part of a broader evolution in cloud computing, with serverless technology continuously transforming. This article focuses on the future beyond serverless, exploring how the cloud landscape will evolve beyond current hyperscaler models and its impact on developers and operations teams. I will examine the top three trends shaping this evolution.

In software development, a "module" or "component" typically refers to a self-contained unit of software that performs a cohesive set of actions. This concept corresponds elegantly to the microservice architecture that typically runs on long-running compute services such as Virtual Machines (VMs) or a container service. AWS EC2, one of the first widely accessible cloud computing services, offered scalable VMs. Introducing such scalable, accessible cloud resources provided the infrastructure necessary for microservices architecture to become practical and widespread. This shift led to decomposing monolithic applications into independently deployable microservice units.

Let's continue with this analogy of software units. A function is a block of code that encapsulates a sequence of statements performing a single task with defined input and output. This unit of code nicely corresponds to the FaaS execution model. The concept of FaaS executing code in response to events without the need to manage infrastructure existed before AWS Lambda but lacked broad implementation and recognition.

The concept of FaaS, which involves executing code in response to events without the need for managing infrastructure, was already suggested by services like Google App Engine, Azure WebJobs, IronWorker, and AWS Elastic Beanstalk before AWS Lambda brought it into the mainstream. Lambda, emerging as the first major commercial implementation of FaaS, acted as a catalyst for its popularity by easing the deployment process for developers. This advancement led to the transformation of microservices into smaller, individually scalable, event-driven operations.

In the evolution toward smaller software units offered as a service, one might wonder if we will see basic programming elements like expressions or statements as a service (such as int x = a + b;). The progression, however, steers away from this path. Instead, we are witnessing the minimization and eventual replacement of functions by configurable cloud constructs. Constructs in software development, encompassing elements like conditionals (if-else, switch statements), loops (for, while), exception handling (try-catch-finally), or user-defined data structures, are instrumental in controlling program flow or managing complex data types. In cloud services, constructs align with capabilities that enable the composition of distributed applications, interlinking software modules such as microservices and functions, and managing data flow between them.

Cloud construct replacing functions, replacing microservices, replacing monolithic applications

While you might have previously used a function to filter, route, batch, split events, or call another cloud service or function, now these operations and more can be done with less code in your functions, or in many cases with no function code at all. They can be replaced by configurable cloud constructs that are part of the cloud services. Let's look at a few concrete examples from AWS that demonstrate this transition from Lambda function code to cloud constructs.

These are just a few examples of application code constructs becoming serverless cloud constructs. Rather than validating input values in a function with if-else logic, you can validate the inputs through configuration. Rather than routing events with a case or switch statement to invoke other code from within a function, you can define routing logic declaratively outside the function. Events can be triggered from data sources on data change, batched, or split without a repetition construct, such as a for or while loop.

Events can be validated, transformed, batched, routed, filtered, and enriched without a function. Failures can be handled and directed to DLQs and back without try-catch code, and successful completions can be directed on to other functions and service endpoints. Moving these constructs from application code into construct configuration reduces or removes application code, eliminating the need for security patching and much of the maintenance.
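As a small hedged illustration of filtering-as-configuration, here is an EventBridge-style event pattern expressed as plain data; the event source, fields, and thresholds are invented for the example, and in practice such a pattern would be attached to a rule via the console, CLI, or an IaC tool.

```python
import json

# EventBridge-style event pattern expressed as data rather than function code.
# The source name, fields, and thresholds are made-up examples.
order_events_pattern = {
    "source": ["acme.orders"],                     # hypothetical event source
    "detail-type": ["OrderPlaced"],
    "detail": {
        "currency": ["USD", "EUR"],                # simple value matching
        "total": [{"numeric": [">=", 100]}],       # numeric comparison matching
        "customer": {"tier": [{"exists": True}]},  # only events carrying a tier
    },
}

print(json.dumps(order_events_pattern, indent=2))
```

A rule carrying a pattern like this routes matching events directly to a queue, state machine, or API destination, and retries and a dead-letter queue are configured on the target rather than written as try-catch logic in a function.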

A primitive and a construct in programming have distinct meanings and roles. A primitive is a basic data type inherently part of a programming language. It embodies a basic value, such as an integer, float, boolean, or character, and does not comprise other types. Mirroring this concept, the cloud - just like a giant programming runtime - is evolving from infrastructure primitives like network load balancers, virtual machines, file storage, and databases to more refined and configurable cloud constructs.

Like programming constructs, these cloud constructs orchestrate distributed application interactions and manage complex data flows. However, these constructs are not isolated cloud services; there isn't a standalone "filtering as a service" or "event emitter as service." There are no "Constructs as a Service," but they are increasingly essential features of core cloud primitives such as gateways, data stores, message brokers, and function runtimes.

This evolution reduces application code complexity and, in many cases, eliminates the need for custom functions. This shift from FaaS to NoFaaS (no fuss, implying simplicity) is just beginning, with insightful talks and code examples on GitHub. Next, I will explore the emergence of construct-rich cloud services within vertical multi-cloud services.

In the post-serverless cloud era, it's no longer enough to offer highly scalable cloud primitives like compute for containers and functions, or storage services such as key/value stores, event stores, relational databases, or networking primitives like load balancers. Post-serverless cloud services must be rich in developer constructs and offload much of the application plumbing. This goes beyond hyperscaling a generic cloud service for a broad user base; it involves deep specialization and exposing advanced constructs to more demanding users.

Hyperscalers like AWS, Azure, GCP, and others, with their vast range of services and extensive user bases, are well-positioned to identify new user needs and constructs. However, providing these more granular developer constructs results in increased complexity. Each new construct in every service requires a deep learning curve with its specifics for effective utilization. Thus, in the post-serverless era, we will observe the rise of vertical multi-cloud services that excel in one area. This shift represents a move toward hyperspecialization of cloud services.

Consider Confluent Cloud as an example. While all major hyperscalers (AWS, Azure, GCP, etc.) offer Kafka services, none match the developer experience and constructs provided by Confluent Cloud. With its Kafka brokers, numerous Kafka connectors, integrated schema registry, Flink processing, data governance, tracing, and message browser, Confluent Cloud delivers the most construct-rich and specialized Kafka service, surpassing what hyperscalers offer.

This trend is not isolated; numerous examples include MongoDB Atlas versus DocumentDB, GitLab versus CodeCommit, DataBricks versus EMR, RedisLabs versus ElasticCache, etc. Beyond established cloud companies, a new wave of startups is emerging, focusing on a single multi-cloud primitive (like specialized compute, storage, networking, build-pipeline, monitoring, etc.) and enriching it with developer constructs to offer a unique value proposition. Here are some cloud services hyperspecializing in a single open-source technology, aiming to provide a construct-rich experience and attract users away from hyperscalers:

This list represents a fraction of a growing ecosystem of hyperspecialized vertical multi-cloud services built atop core cloud primitives offered by hyperscalers. They compete by providing a comprehensive set of programmable constructs and an enhanced developer experience.

Serverless cloud services hyperspecializing in one thing with rich developer constructs

Once this transition is completed, bare-bones cloud services without rich constructs, even serverless ones, will seem like outdated on-premise software. A storage service must stream changes like DynamoDB; a message broker should include EventBridge-like constructs for event-driven routing, filtering, and endpoint invocation with retries and DLQs; a pub/sub system should offer message batching, splitting, filtering, transforming, and enriching.

Ultimately, while hyperscalers expand horizontally with an increasing array of services, hyperspecializers grow vertically, offering a single, best-in-class service enriched with constructs, forming an ecosystem of vertical multi-cloud services. The future of cloud service competition will pivot from infrastructure primitives to a duo of core cloud primitives and developer-centric constructs.

Cloud constructs increasingly blur the boundaries between application and infrastructure responsibilities. The next evolution is the "shift left" of cloud automation, integrating application and automation code in terms of tools and responsibilities. Let's examine how this transition is unfolding.

The first generation of cloud infrastructure management was defined by Infrastructure as Code (IaC), a pattern that emerged to simplify the provisioning and management of infrastructure. This approach is built on the trends set by the commoditization of virtualization in cloud computing.

The initial IaC tools introduced new domain-specific languages (DSLs) dedicated to creating, configuring, and managing cloud resources in a repeatable manner. Tools like Chef, Ansible, Puppet, and Terraform led this phase. These tools, leveraging declarative languages, allowed operations teams to define the infrastructure's desired state in code, abstracting away underlying complexities.

However, as the cloud landscape transitions from low-level, coarse-grained infrastructure to more developer-centric, finer-grained programmable constructs, a trend toward using existing general-purpose programming languages to define these constructs is emerging. New entrants like Pulumi and the AWS Cloud Development Kit (CDK) are at the forefront of this wave, supporting languages such as TypeScript, Python, C#, Go, and Java.

The shift to general-purpose languages is driven by the need to overcome the limitations of declarative languages, which lack the expressiveness and flexibility to define cloud constructs programmatically, and by the shift-left of responsibility for configuring cloud constructs from operations to developers. Unlike the static nature of declarative languages, which suits low-level, static infrastructure, general-purpose languages enable developers to define dynamic, logic-driven cloud constructs, achieving a closer alignment with application code.
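
As a small illustration of that difference, the following Pulumi TypeScript sketch derives cloud resources from ordinary program data and logic (a loop over environments), something a purely declarative DSL expresses less naturally. The environment names and bucket settings are assumptions made for the example, not a prescribed layout.

```typescript
import * as aws from '@pulumi/aws';

// Per-environment settings come from ordinary program data and logic.
const environments = [
  { name: 'staging', versioned: false },
  { name: 'prod', versioned: true },
];

// One bucket per environment, with settings computed in code.
export const bucketNames = environments.map((env) => {
  const bucket = new aws.s3.Bucket(`artifacts-${env.name}`, {
    versioning: { enabled: env.versioned },
    tags: { environment: env.name },
  });
  return bucket.id;
});
```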

Shifting-left of application composition from infrastructure to developer teams

Post-serverless cloud developers need to implement business logic by creating functions and microservices, but also to compose them using programmable cloud constructs. This broadens developer responsibilities to both developing and composing cloud applications. For example, business logic in a Lambda function also needs routing, filtering, and request-transformation configuration in API Gateway.

Another Lambda function may need a DynamoDB streaming configuration to capture specific data changes, along with EventBridge routing, filtering, and enrichment configurations.

A third application may have most of its orchestration logic expressed as a Step Functions state machine, where the Lambda code is only a small task. A developer, not a platform engineer or Ops member, can compose these units of code together. Tools such as Pulumi and the AWS CDK, which let a developer implement a function in the language of their choice and use the same language to compose its interaction with the cloud environment, are best suited for this era.
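
A hedged sketch of this kind of composition, assuming an AWS CDK TypeScript application with handler code compiled into a local dist/ directory: the same developer, in the same language, defines the table, the functions, the stream wiring, and the HTTP front door. All names and paths are illustrative.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { DynamoEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

class OrdersServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Data store with change streaming enabled.
    const table = new dynamodb.Table(this, 'Orders', {
      partitionKey: { name: 'orderId', type: dynamodb.AttributeType.STRING },
      stream: dynamodb.StreamViewType.NEW_IMAGE,
    });

    // Business logic lives in the handlers (assumed to be built into dist/).
    const api = new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'api.handler',
      code: lambda.Code.fromAsset('dist'),
    });
    const projector = new lambda.Function(this, 'StreamProjector', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'projector.handler',
      code: lambda.Code.fromAsset('dist'),
    });

    // The stream wiring is composed in the same language as the application.
    projector.addEventSource(new DynamoEventSource(table, {
      startingPosition: lambda.StartingPosition.TRIM_HORIZON,
      batchSize: 10,
    }));

    // HTTP routing in front of the API handler.
    new apigateway.LambdaRestApi(this, 'OrdersApi', { handler: api });
  }
}
```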

Platform teams can still use declarative languages, such as Terraform, to govern, secure, monitor, and enable teams in cloud environments, but developer-focused constructs, combined with developer-focused cloud automation languages, will shift cloud constructs left and make developer self-service in the cloud a reality.

The transition from DSL to general-purpose languages marks a significant milestone in the evolution of IaC. It acknowledges the transition of application code into cloud constructs, which often require a deeper developer control of the resources for application needs. This shift represents a maturation of IaC tools, which now need to cater to a broader spectrum of infrastructure orchestration needs, paving the way for more sophisticated, higher-level abstractions and tools.

The journey of infrastructure management will see a shift from static configurations to a more dynamic, code-driven approach. This evolution hasn't stopped at Infrastructure as Code; it is transcending into a more nuanced realm known as Composition as Code. This paradigm further blurs the lines between application code and infrastructure, leading to more streamlined, efficient, and developer-friendly practices.

In summarizing the trends and their reinforcing effects, we're observing an increasing integration of programming constructs into cloud services. Every compute service will integrate CI/CD pipelines; databases will provide HTTP access from the edge and emit change events; message brokers will enhance capabilities with filtering, routing, idempotency, transformations, DLQs, etc.

Infrastructure services are evolving into serverless APIs, infrastructure inferred from code (IfC), framework-defined infrastructure, or infrastructure explicitly composed by developers (CaC). This evolution leads to smaller functions and sometimes to the NoFaaS pattern, paving the way for hyperspecialized, developer-first vertical multi-cloud services. These services will offer infrastructure as programmable APIs, enabling developers to seamlessly merge their applications using their preferred programming language.

The shift-left of application composition using cloud services will increasingly blend with application programming, transforming microservices from an architectural style into an organizational one. A microservice will no longer be just a single deployment unit or process boundary but a composition of functions, containers, and cloud constructs, all implemented and glued together in a single language chosen by the developer. The future is shaping up to be hyperspecialized and focused on the developer-first cloud.

Follow this link:

Cloud-Computing in the Post-Serverless Era: Current Trends and Beyond - InfoQ.com

Cloud Computing Security Start with a ‘North Star’ – ITPro Today

Cloud computing has followed a similar journey to other introductions of popular technology: Adopt first, secure later. Cloud transformation has largely been enabled by IT functions at the request of the business, with security functions often taking a backseat. In some organizations, this has been due to politics and blind faith in the cloud service providers (CSPs), e.g., AWS, Microsoft, and GCP.

In others, it has been because security functions only knew and understood on-premises deployments and simply didn't have the knowledge and capability to securely adapt to cloud or hybrid architectures and translate policies and processes to the cloud. For lucky organizations, this has only led to stalled migrations while the security and IT organizations played catch up. For unlucky organizations, this has led to breaches, business disruption, and loss of data.

Related: What Is Cloud Security?

Cloud security can be complex. However, more often than not, it is ridiculously simple; the misconfigured S3 bucket is a prime example. It reached a point where malefactors could simply look for misconfigured S3 buckets to steal data; no need to launch an actual attack.

It's time for organizations to take a step back and improve cloud security, and the best way to do this is to put security at the core of cloud transformations rather than adopting the technology first and asking security questions later. Here are four steps to course correct and implement a security-centric cloud strategy:

Related: Cloud Computing Predictions 2024: What to Expect From FinOps, AI

For multi-cloud users, there is one other aspect of cloud security to consider. Most CSPs are separate businesses, and their services don't work with other CSPs. So, rather than functioning like internet service providers (ISPs), where one provider lets you access the entire internet and not just the sites that the ISP owns, CSPs operate in silos, with limited interoperability with their counterparts (e.g., AWS can't manage Azure workloads, security, and services, and vice versa). This is problematic for customers because, once more than one cloud provider is added to the infrastructure, the efficacy of managing cloud operations and cloud security starts to diminish rapidly. Each time another CSP is added to an organization's environment, the attack surface grows exponentially unless it is secured appropriately.

It's up to each company to take steps to become more secure in multi-cloud environments. In addition to developing and executing a strong security strategy, they also must consider using third-party applications and platforms such as cloud-native application protection platforms (CNAPPs), cloud security posture management (CSPM), infrastructure as code (IaC), and secrets management to provide the connective tissue between CSPs in hybrid or multi-cloud environments. Taking this vital step will increase security visibility, posture management, and operational efficiency to ensure the security and business results outlined at the start of the cloud security journey.

It should be noted that a cloud security strategy, like any other form of security, needs to be a "living" plan. The threat landscape and business needs change so fast that what is helpful today may not be helpful tomorrow. To stay in step with your organization's desired state of security, periodically revisit cloud security strategies to understand whether they are delivering the desired benefits, and make adjustments when they are not.

Cloud computing has transformed organizations of all types. Adopting a strategy for securing this new environment will not only allow security to catch up to technology adoption, it will also dramatically improve the ROI of cloud computing.

Ed Lewis is Secure Cloud Transformation Leader at Optiv.

Read this article:

Cloud Computing Security Start with a 'North Star' - ITPro Today

The Future of Cloud Computing in Business Operations – Data Science Central

The digital era has witnessed the remarkable evolution of cloud computing, transforming it into a cornerstone of modern business operations. This technology, which began as a simple concept of centralized data storage, has now evolved into a complex and dynamic ecosystem, enabling businesses to operate more efficiently and effectively than ever before. The Future of Cloud Computing holds unparalleled potential, promising to revolutionize the way companies operate, innovate, and compete in the global market.

Cloud computing refers to the delivery of various services over the Internet, including data storage, servers, databases, networking, and software. Rather than owning their computing infrastructure or data centers, companies can rent access to anything from applications to storage from a cloud service provider.

Cloud computing has revolutionized the way businesses operate, offering a plethora of advantages that enhance efficiency, flexibility, and scalability. In this discussion, we'll delve into the key benefits of cloud computing, explaining each in simple terms and underlining their significance in today's business landscape.

Cloud computing significantly cuts down on the capital cost associated with purchasing hardware and software, especially in sectors like healthcare. It's an economical alternative to owning and maintaining extensive IT infrastructure, allowing businesses, including those in the healthcare sector, to save on setup and maintenance costs. This is particularly beneficial for cloud computing in the healthcare industry, where resources can instead be allocated toward patient care and medical research.

The ability to scale resources elastically with cloud computing is akin to having a flexible and adaptable IT infrastructure. Businesses can efficiently scale up or down their IT resources based on current demand, ensuring optimal utilization and avoiding wastage.

Cloud services are hosted on a network of secure, high-performance data centers globally, offering superior performance over traditional single corporate data centers. This global network ensures reduced latency, better application performance, and economies of scale.

Cloud computing facilitates a swift and agile business environment. Companies can quickly roll out new applications or resources, empowering them to respond swiftly to market changes and opportunities.

The efficiency and speed offered by cloud computing translate into enhanced productivity. Reduced network latency ensures applications and services run smoothly, enabling teams to achieve more in less time.

Cloud computing enhances collaboration by enabling team members to share and work on data and files simultaneously from any location. This virtual collaboration space is crucial for businesses with remote teams and global operations.

Here, we explore the transformative role of cloud computing in business, focusing on 7 key points that forecast its future impact and potential in streamlining and innovating operational landscapes.

In the Future of Cloud Computing, handling enormous amounts of data will become more critical than ever. Businesses of all sizes generate data at unprecedented rates. From customer interactions to transaction records, every piece of data is a potential goldmine of insights. Cloud computing steps in as the ideal solution to manage this surge efficiently.

Cloud storage provides a scalable and flexible way to store and access vast datasets. As we move forward, cloud providers will likely offer more tailored storage solutions, catering to different business needs. Whether it's for high-frequency access or long-term archiving, cloud storage can adapt to various requirements.

Another significant aspect of data management in the Future of Cloud Computing is real-time data processing. Businesses will rely on cloud computing not just for storage, but also for the immediate processing and analysis of data. This capability allows for quicker decision-making, a crucial factor in maintaining a competitive edge.

One of the most transformative impacts of cloud computing is its ability to transcend geographical boundaries. In the Future of Cloud Computing, remote and global teams can collaborate as if they were in the same room. Cloud-based tools and platforms allow team members from different parts of the world to work on projects simultaneously, share files instantaneously, and communicate in real-time.

In the Future of Cloud Computing, we can expect a rise in virtual workspaces. These digital environments simulate physical offices, providing a space where remote workers can feel connected and engaged. They offer features like virtual meeting rooms, shared digital whiteboards, and social areas, replicating the office experience in a digital realm.

Cloud computing does more than just streamline operations; it also opens doors to innovation. With cloud resources, businesses can experiment with new ideas without significant upfront investment in infrastructure. This flexibility encourages creativity and risk-taking, which are essential for innovation.

Cloud computing accelerates the product development cycle. Teams can quickly set up and dismantle test environments, prototype more efficiently, and bring products to market faster. This agility gives businesses a significant advantage in rapidly evolving markets.

The landscape of cloud computing is rapidly evolving, with new trends constantly emerging to redefine how businesses leverage this technology. In the context of the future of cloud computing, 3 key trends stand out for their potential to significantly shape the industry. Understanding these trends is crucial for businesses looking to stay competitive and innovative.

Artificial Intelligence (AI) and Machine Learning (ML) are becoming increasingly integral to cloud computing. This integration is revolutionizing how cloud services are delivered and utilized. AI algorithms are enhancing the efficiency of cloud platforms, offering smarter data analytics, automating routine tasks, and providing more personalized user experiences. For instance, cloud-based AI services can analyze vast amounts of data to predict market trends, customer behavior, or potential system failures, offering invaluable insights for businesses.

This integration not only boosts the performance and scalability of cloud solutions but also opens up new avenues for innovation across various sectors.

As cloud computing becomes more prevalent, the focus on security and compliance is intensifying. The increasing frequency and sophistication of cyber threats make robust cloud security a top priority for businesses. In response, cloud service providers are investing heavily in advanced security measures, such as enhanced encryption techniques, identity and access management (IAM), and AI-powered threat detection systems.

Furthermore, with regulations like GDPR and CCPA in place, compliance has become a critical aspect of cloud services. The future of cloud computing will likely witness a surge in cloud solutions that are not only secure but also compliant with various global and industry-specific regulations. This trend ensures that businesses can confidently and safely leverage the cloud while adhering to legal and ethical standards.

Sustainability is a growing concern in the tech world, and cloud computing is no exception. There is an increasing trend towards green cloud computing, focusing on reducing the environmental impact of cloud services. This involves optimizing data centers for energy efficiency, using renewable energy sources, and implementing more sustainable operational practices.

The future of cloud computing will likely see a stronger emphasis on sustainability as businesses and consumers become more environmentally conscious. Cloud providers who prioritize and implement eco-friendly practices will not only contribute to a healthier planet but also appeal to a growing segment of environmentally aware customers.

The future of cloud computing is bright and offers a plethora of opportunities for businesses to grow and evolve. By staying informed and adapting to these changes, companies can leverage cloud computing to gain a competitive edge in the market.

Remember, the future of cloud computing isn't just about technology; it's about how businesses can harness this technology to drive innovation, efficiency, and growth.

For businesses aiming to thrive in the ever-changing digital world, embracing the advancements in cloud computing is not just a choice but a necessity. Staying updated and adaptable will be key to harnessing the power of cloud computing for business success in the years to come.

Originally posted here:

The Future of Cloud Computing in Business Operations - Data Science Central

Global $83.7 Bn Cloud Computing Management and Optimization Market to 2030 with IT and Telecommunications … – PR Newswire

DUBLIN, Jan. 23, 2024 /PRNewswire/ -- The "Global Cloud Computing Management and Optimization Market 2023 - 2030 by Types, Applications - Partner & Customer Ecosystem Competitive Index & Regional Footprints" report has been added to ResearchAndMarkets.com's offering.

The Cloud Computing Management and Optimization Market size is estimated to grow from USD 17.6 Billion in 2022 to reach USD 83.7 Billion by 2030, growing at a CAGR of 21.7% during the forecast period from 2023 to 2030.

The Adoption of Cloud-Based Solutions Is Driving Cloud Computing Management and Optimization Market Growth

Businesses are migrating their operations to cloud-based ecosystems because they offer a number of benefits, such as scalability, flexibility, and cost savings. A growing number of companies, including SMEs and large-scale enterprises, are adopting cloud computing, which will lead to an increase in demand for cloud computing management and optimization solutions.

Cloud computing environments are becoming increasingly complex as businesses adopt a variety of cloud services from different providers. This complexity can make it difficult for businesses to manage their cloud costs and performance. Cloud computing management and optimization solutions can help businesses simplify their cloud environments and optimize their costs and performance. Cloud computing can be a cost-effective way for businesses to provision IT resources.

However, businesses can still incur significant costs if they do not manage their cloud usage effectively. Cloud computing management and optimization solutions can help businesses to track their cloud usage and identify opportunities to optimize their costs. The cloud computing industry is constantly evolving, with the emergence of new technologies, such as artificial intelligence and machine learning. These new technologies can be used to improve the efficiency and effectiveness of cloud computing management and optimization solutions.

The IT and Telecommunications industries hold the highest market share in the Cloud Computing Management and Optimization Market

The IT and Telecommunications industries hold the highest market share in the Cloud Computing Management and Optimization Market in 2022, due to their intrinsic reliance on advanced technology solutions and their pivotal role in driving digital transformation across various sectors. In the IT industry, cloud computing has become a cornerstone for delivering software, platforms, and infrastructure services, enabling organizations to enhance agility, scalability, and operational efficiency.

As IT companies transition their operations to the cloud, the need for effective management and optimization of cloud resources becomes paramount to ensure optimal performance, cost control, and resource allocation. Cloud management and optimization solutions enable IT enterprises to streamline provisioning, monitor workloads, automate processes, and maintain stringent security protocols.

Furthermore, the Telecommunications sector has embraced cloud computing to modernize and expand its network infrastructure, offer innovative communication services, and adapt to the demands of an interconnected world. Cloud-based solutions empower telecom companies to efficiently manage network resources, deliver seamless customer experiences, and explore new revenue streams.

In this context, cloud computing management and optimization are essential for maintaining network reliability, ensuring data privacy, and dynamically scaling resources to meet fluctuating demand. The complex and dynamic nature of both IT and Telecommunications operations necessitates sophisticated tools and strategies for cloud resource management, making these industries prime contributors to the Cloud Computing Management and Optimization Market.

Regional Insight: North America dominated the Cloud Computing Management and Optimization Market during the forecast period.

North America dominated the Cloud Computing Management and Optimization Market during the forecast period. The United States and Canada, which are at the forefront of technological development, have continuously adopted cloud computing, strengthening North America's remarkable position as market leader. The strong presence of major companies like Adobe, Salesforce, Oracle, AWS, Google, and IBM across the region's wide geography provides a foundation for this rise. With their cutting-edge solutions, these major players make a significant impact on adoption and innovation.

The region's commitment to technical advancement also serves as another indication of its dominance. Continuous improvements in a number of technologies are transforming the cloud computing industry, and North America is recognized as a hub for important developments.

As a result, organizations and enterprises in North America are pushed to the forefront of cloud optimization and administration, utilizing the full range of technologies and expertise provided by both local and international industry experts. Strong vendor presence, widespread adoption, and constant technological innovation position North America to capture the highest market share during the forecast period.

Major Classifications are as follows:

Cloud Computing Management and Optimization Market, Type of Solutions

Cloud Computing Management and Optimization Market, By Deployment Models

Cloud Computing Management and Optimization Market, By Organization Size

Cloud Computing Management and Optimization Market, By Cloud Service Models

Cloud Computing Management and Optimization Market, By Technologies

Cloud Computing Management and Optimization Market, By Industries

Cloud Computing Management and Optimization Market, By Geography

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/bx3846

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]
For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716

Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg

SOURCE Research and Markets

Read this article:

Global $83.7 Bn Cloud Computing Management and Optimization Market to 2030 with IT and Telecommunications ... - PR Newswire

AWS to invest $15bn in cloud computing in Japan – DatacenterDynamics

Amazon Web Services (AWS) is planning to invest 2.26 trillion yen ($15.24 billion) in expanding its cloud computing infrastructure in Japan by 2027.

As part of this investment, the company will seek to expand its data center facilities in Tokyo and Osaka.

The cloud giant previously invested 1.51 trillion yen (~$10.2bn) between 2011 and 2022 in the country, which works out at just under $1bn per year. The new announcement will see this increase to more than $5bn a year for the next three years.

"The adoption of digital technology has become a source of a countrys competitiveness, said Takuya Hirai, former digital minister and current chair of headquarters for the promotion of a digital society in Japans Liberal Democratic Party.

"The development of digital infrastructure in Japan is key to strengthening the country's industrial competitiveness, and data centers play an important role to this end. It promotes the use of important technologies such as AI [artificial intelligence] and improves the capabilities of research and development in Japan."

The digital infrastructure in the country is also the backbone of AWS' artificial intelligence solutions. AWS provides generative AI services to Japanese customers including Asahi Group, Marubeni, and Nomura Holdings.

AWS first entered Japan in 2009. The company launched its first cloud region in the country in 2011 in Tokyo, and another in Osaka in 2021.

Amazon's Bedrock AI offering was made available in Tokyo in October 2023. The company also invested $100m in a generative AI innovation center in June 2023.

It is currently estimated that the latest investment will contribute 5.57 trillion yen (~$37.6bn) to Japan's GDP and support an average of 30,500 full-time jobs in Japanese businesses each year.

Japan's government is seeking to catch up in AI development. Prime Minister Fumio Kishida has met with the heads of OpenAI and Nvidia in the past year to discuss AI regulation and infrastructure.

In December 2023, Minister Ken Saito announced the government would double down on its pledge to support the domestic chip manufacturing industry.

Follow this link:

AWS to invest $15bn in cloud computing in Japan - DatacenterDynamics

Amazon’s AWS to invest $15 billion to expand cloud computing in Japan – Yahoo! Voices

TOKYO (Reuters) - Amazon Web Services (AWS) said on Friday it plans to invest 2.26 trillion yen ($15.24 billion) in Japan by 2027 to expand cloud computing infrastructure that serves as a backbone for artificial intelligence (AI) services.

The Amazon.com unit is spending to expand facilities in the metropolises of Tokyo and Osaka to meet growing customer demand, it said in a statement.

That comes on top of 1.51 trillion yen spent from 2011 to 2022 to build up cloud capacity in Japan, AWS said. The company offers generative AI services to Japanese corporate customers including Asahi Group, Marubeni and Nomura Holdings, it said.

The investment comes as Japan's government and corporate sector race to catch up in AI development. Prime Minister Fumio Kishida met with the heads of ChatGPT creator OpenAI and advanced chipmaker Nvidia in the past year to discuss AI regulation and infrastructure.

($1 = 148.2700 yen)

(This story has been refiled to add dropped words 'creator OpenAI' after 'ChatGPT', in paragraph 4)

(Reporting by Rocky Swift; Editing by Muralikumar Anantharaman and Christopher Cushing)

Read the original post:

Amazon's AWS to invest $15 billion to expand cloud computing in Japan - Yahoo! Voices

Beyond Cloud Nine: 3 Cutting-Edge Tech Stocks Shaping the Future of Computing – InvestorPlace

Source: Peshkova / Shutterstock

Cloud computing has helped millions of companies save time and money. Businesses don't have to worry about hardware costs and can access data quickly. Also, cloud computing companies offer cybersecurity resources to keep data safe from hackers.

Many stocks in the sector have outperformed the market over several years and can generate more gains in the years ahead. Therefore, these cutting-edge tech stocks look poised to expand and shape the future of cloud computing.

Source: Sundry Photography / Shutterstock.com

ServiceNow (NYSE:NOW) boasts a high retention rate for its software and continues to attract customers with deep pockets. The company has over 7,700 customers, and almost 2,000 of them have annual contract values that exceed $1 million.

Further, NOW's remaining performance obligations are more than triple the company's Q3 revenue. The platform allows businesses to run more efficient help desks and streamline repetitive tasks with built-in chatbots. Also, ServiceNow offers high-level security to protect sensitive data.

Additionally, the company has been a reliable pick for investors who want to outperform the market. Shares are up by 74% over the past year and have gained 284% over the past five years. The stock is trading at a forward P/E ratio of 58. The company's net income growth can lead to a better valuation in the future. ServiceNow more than tripled its profits year over year (YOY) in the third quarter, and revenue grew at a healthy 25% clip YOY.

Source: IgorGolovniov / Shutterstock.com

Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) makes most of its revenue from advertising and cloud computing. Google Cloud has become a popular resource for business owners, boasting over 500,000 customers. Also, Alphabet stands at the forefront of AI, enhancing the tech giant's future product offerings.

Notably, the company's cloud segment remains a leading growth driver. Revenue for Google Cloud increased by 22.5% YOY in the third quarter. And Alphabet's entire business achieved 11% YOY revenue growth, an acceleration from the previous period.

Also, Google Cloud reported a profitable quarter, swinging from a $440 million net loss in Q3 2022 to $266 million in net income in Q3 2023. Alphabet investors' positive response to the news helped the stock rally by 57% over the past year. The stock has gained 163% over the past five years.

Alphabet currently trades at a forward P/E ratio of 22 and has a $1.8 trillion market cap. Finally, the company's vast advertising network gives it plenty of capital to reinvest in Google Cloud and its smaller business segments.

Source: Karol Ciesluk / Shutterstock.com

Datadog (NASDAQ:DDOG) helps companies improve their cybersecurity across multiple cloud computing solutions. Cloud spending is still in its early innings and is expected to reach $1 trillion in annual spending in 2026. The company is projected to have a $62 billion total addressable market (TAM) in that year.

Specifically, Datadog removes silos and friction associated with keeping cloud applications safe from hackers. Over 26,000 customers use Datadog's software, including approximately 3,130 customers with annual contract values exceeding $100,000. The company's revenue growth over the trailing twelve months is currently 31%. Further, operating margins have improved significantly, helping the company secure a net profit in the third quarter.

In fact, DDOG has a good relationship with many cloud computing giants, including Alphabet. The two corporations expanded their partnership to close out 2023.

Investors have been rushing to accumulate Datadog stock in recent years. Shares have gained 68% over the past year and are up by 240% over the past five years. DDOG is still more than 35% removed from its all-time high. However, continued revenue growth and profit margin expansion can help the stock reclaim its all-time high.

On the date of publication, Marc Guberti held a long position in NOW. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Marc Guberti is a finance freelance writer at InvestorPlace.com who hosts the Breakthrough Success Podcast. He has contributed to several publications, including the U.S. News & World Report, Benzinga, and Joy Wallet.

Read the original post:

Beyond Cloud Nine: 3 Cutting-Edge Tech Stocks Shaping the Future of Computing - InvestorPlace

What is cloud computing? Everything you need to know now

Cloud computing is an abstraction of compute, storage, and network infrastructure assembled as a platform on which applications and systems can be deployed quickly and scaled on the fly. Crucial to cloud computing is self-service: Users can simply fill in a web form and get up and running.

The vast majority of cloud customers consume public cloud computing services over the internet, which are hosted in large, remote data centers maintained by cloud providers. The most common type of cloud computing, SaaS (software as a service), delivers prebuilt applications to the browsers of customers who pay per seat or by usage, exemplified by such popular apps as Salesforce, Google Docs, and Microsoft Teams. Next in line is IaaS (infrastructure as a service), which offers vast, virtualized compute, storage, and network infrastructure upon which customers build their own applications, often with the aid of providers' API-accessible services.

When people casually say "the cloud," they most often mean the big IaaS providers: AWS (Amazon Web Services), Google Cloud, or Microsoft Azure. All three have become gargantuan ecosystems of services that go way beyond infrastructure: developer tools, serverless computing, machine learning services and APIs, data warehouses, and thousands of other services. With both SaaS and IaaS, a key benefit is agility. Customers gain new capabilities almost instantly without capital investment in hardware or software, and they can instantly scale the cloud resources they consume up or down as needed.

Way back in 2011, NIST posted a PDF that divided cloud computing into three service models: SaaS, IaaS, and PaaS (platform as a service), the latter a controlled environment within which customers develop and run applications. These three categories have largely stood the test of time, although most PaaS solutions now make themselves available as services within IaaS ecosystems rather than presenting themselves as their own clouds.

Two evolutionary trends stand out since NIST's threefold definition. One is the long and growing list of subcategories within SaaS, IaaS, and PaaS, some of which blur the lines between categories. The other is the explosion of API-accessible services available in the cloud, particularly within IaaS ecosystems. The cloud has become a crucible of innovation where many emerging technologies appear first as services, a big attraction for business customers who understand the potential competitive advantages of early adoption.

This type of cloud computing delivers applications over the internet, typically with a browser-based user interface. Today, the vast majority of software companies offer their wares via SaaS, if not exclusively, then at least as an option.

The most popular SaaS applications for business can be found in Google's G Suite and Microsoft's Office 365; most enterprise applications, including giant ERP suites from Oracle and SAP, come in both SaaS and on-prem versions. SaaS applications typically offer extensive configuration options as well as development environments that enable customers to code their own modifications and additions. They also enable data integration with on-prem applications.

At a basic level, IaaS cloud providers offer virtualized compute, storage, and networking over the internet on a pay-per-use basis. Think of it as a data center maintained by someone else, remotely, but with a software layer that virtualizes all those resources and automates customers' ability to allocate them with little trouble.

But that's just the basics. The full array of services offered by the major public IaaS providers is staggering: highly scalable databases, virtual private networks, big data analytics, developer tools, machine learning, application monitoring, and so on. Amazon Web Services was the first IaaS provider and remains the leader, followed by Microsoft Azure, Google Cloud Platform, Alibaba Cloud, and IBM Cloud.

PaaS provides sets of services and workflows that specifically target developers, who can use shared tools, processes, and APIs to accelerate the development, testing, and deployment of applications. Salesforce's Heroku and Salesforce Platform (formerly Force.com) are popular public cloud PaaS offerings; Cloud Foundry and Red Hat's OpenShift can be deployed on premises or accessed through the major public clouds. For enterprises, PaaS can ensure that developers have ready access to resources, follow certain processes, and use only a specific array of services, while operators maintain the underlying infrastructure.

FaaS, the cloud version of serverless computing, adds another layer of abstraction to PaaS, so that developers are completely insulated from everything in the stack below their code. Instead of futzing with virtual servers, containers, and application runtimes, developers upload narrowly functional blocks of code and set them to be triggered by a certain event (such as a form submission or an uploaded file). All the major clouds offer FaaS on top of IaaS: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM Cloud Functions. A special benefit of FaaS applications is that they consume no IaaS resources until an event occurs, reducing pay-per-use fees.
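
For a sense of scale, a FaaS unit of deployment can be as small as the following TypeScript Lambda handler, a minimal sketch that assumes the @types/aws-lambda typings and an S3 upload as the triggering event; it consumes no compute until that event fires.

```typescript
// Minimal AWS Lambda handler (FaaS): runs only when the triggering event
// arrives, for example an object uploaded to an S3 bucket.
import type { S3Event } from 'aws-lambda';

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`New object uploaded: s3://${bucket}/${key}`);
    // Application logic (resize an image, index a document, ...) would go here.
  }
};
```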

A private cloud downsizes the technologies used to run IaaS public clouds into software that can be deployed and operated in a customer's data center. As with a public cloud, internal customers can provision their own virtual resources to build, test, and run applications, with metering to charge back departments for resource consumption. For administrators, the private cloud amounts to the ultimate in data center automation, minimizing manual provisioning and management. VMware provides the most popular commercial private cloud software, while OpenStack is the open source leader.

Note, however, that the private cloud does not fully conform to the definition of cloud computing. Cloud computing is a service. A private cloud demands that an organization build and maintain its own underlying cloud infrastructure; only internal users of a private cloud experience it as a cloud computing service.

A hybrid cloud is the integration of a private cloud with a public cloud. At its most developed, the hybrid cloud involves creating parallel environments in which applications can move easily between private and public clouds. In other instances, databases may stay in the customer data center and integrate with public cloud applications, or virtualized data center workloads may be replicated to the cloud during times of peak demand. The types of integrations between private and public cloud vary widely, but they must be extensive to earn a hybrid cloud designation.

Just as SaaS delivers applications to users over the internet, public APIs offer developers application functionality that can be accessed programmatically. For example, in building web applications, developers often tap into the Google Maps API to provide driving directions; to integrate with social media, developers may call upon APIs maintained by Twitter, Facebook, or LinkedIn. Twilio has built a successful business delivering telephony and messaging services via public APIs. Ultimately, any business can provision its own public APIs to enable customers to consume data or access application functionality.
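
A minimal sketch of consuming such a public API from TypeScript follows; the endpoint, query parameters, and MAPS_API_KEY environment variable are placeholders for illustration, not any real provider's actual interface (a real maps, messaging, or social API documents its own paths and authentication).

```typescript
// Calling a public HTTP API programmatically (Node 18+ provides global fetch).
async function getDrivingDirections(origin: string, destination: string) {
  const url = new URL('https://api.example.com/v1/directions'); // hypothetical endpoint
  url.searchParams.set('origin', origin);
  url.searchParams.set('destination', destination);

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.MAPS_API_KEY ?? ''}` },
  });
  if (!response.ok) {
    throw new Error(`Directions request failed: ${response.status}`);
  }
  return response.json();
}

getDrivingDirections('Berlin', 'Hamburg').then(console.log).catch(console.error);
```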

Data integration is a key issue for any sizeable company, but particularly for those that adopt SaaS at scale. iPaaS providers typically offer prebuilt connectors for sharing data among popular SaaS applications and on-premises enterprise applications, though providers may focus more or less on business-to-business and e-commerce integrations, cloud integrations, or traditional SOA-style integrations. iPaaS offerings in the cloud from such providers as Dell Boomi, Informatica, MuleSoft, and SnapLogic also let users implement data mapping, transformations, and workflows as part of the integration-building process.

The most difficult security issue related to cloud computing is the management of user identity and its associated rights and permissions across private data centers and public cloud sites. IDaaS providers maintain cloud-based user profiles that authenticate users and enable access to resources or applications based on security policies, user groups, and individual privileges. The ability to integrate with various directory services (Active Directory, LDAP, etc.) and provide single sign-on across business-oriented SaaS applications is essential. Okta is the clear leader in cloud-based IDaaS; CA, Centrify, IBM, Microsoft, Oracle, and Ping provide both on-premises and cloud solutions.

Collaboration solutions such as Slack and Microsoft Teams have become vital messaging platforms that enable groups to communicate and work together effectively. Basically, these solutions are relatively simple SaaS applications that support chat-style messaging along with file sharing and audio or video communication. Most offer APIs to facilitate integrations with other systems and enable third-party developers to create and share add-ins that augment functionality.

Key providers in such industries as financial services, health care, retail, life sciences, and manufacturing provide PaaS clouds to enable customers to build vertical applications that tap into industry-specific, API-accessible services. Vertical clouds can dramatically reduce the time to market for vertical applications and accelerate domain-specific B-to-B integrations. Most vertical clouds are built with the intent of nurturing partner ecosystems.

The most widely accepted definition of cloud computing means that you run your workloads on someone else's servers, but this is not the same as outsourcing. Virtual cloud resources and even SaaS applications must be configured and maintained by the customer. Consider these factors when planning a cloud initiative.

Objections to the public cloud generally begin with cloud security, although the major public clouds have proven themselves much less susceptible to attack than the average enterprise data center.

Of greater concern is the integration of security policy and identity management between customers and public cloud providers. In addition, government regulation may forbid customers from allowing sensitive data off premises. Other concerns include the risk of outages and the long-term operational costs of public cloud services.

The bar to qualify as a multicloud adopter is low: A customer just needs to use more than one public cloud service. However, depending on the number and variety of cloud services involved, managing multiple clouds can become quite complex from both a cost optimization and technology perspective.

In some cases, customers subscribe to multiple cloud services simply to avoid dependence on a single provider. A more sophisticated approach is to select public clouds based on the unique services they offer and, in some cases, integrate them. For example, developers might want to use Google's TensorFlow machine learning service on Google Cloud Platform to build AI-driven applications, but prefer Jenkins hosted on the CloudBees platform for continuous integration.

To control costs and reduce management overhead, some customers opt for cloud management platforms (CMPs) and/or cloud service brokers (CSBs), which let you manage multiple clouds as if they were one cloud. The problem is that these solutions tend to limit customers to such common-denominator services as storage and compute, ignoring the panoply of services that make each cloud unique.

You often see edge computing described as an alternative to cloud computing. But it is not. Edge computing is about moving compute to local devices in a highly distributed system, typically as a layer around a cloud computing core. There is typically a cloud involved to orchestrate all the devices and take in their data, then analyze it or otherwise act on it.

The cloud's main appeal is to reduce the time to market of applications that need to scale dynamically. Increasingly, however, developers are drawn to the cloud by the abundance of advanced new services that can be incorporated into applications, from machine learning to internet of things (IoT) connectivity.

Go here to read the rest:

What is cloud computing? Everything you need to know now

What is Private Cloud? | IBM

Private cloud is a cloud computing environment dedicated to a single customer. It combines many of the benefits of cloud computing with the security and control of on-premises IT infrastructure.

Private cloud (also known as an internal cloud or corporate cloud) is a cloud computing environment in which all hardware and software resources are dedicated exclusively to, and accessible only by, a single customer. Private cloud combines many of the benefits of cloud computing (including elasticity, scalability, and ease of service delivery) with the access control, security, and resource customization of on-premises infrastructure.

Many companies choose private cloud over public cloud (cloud computing services delivered over infrastructure shared by multiple customers) because private cloud is an easier way (or the only way) to meet their regulatory compliance requirements. Others choose private cloud because their workloads deal with confidential documents, intellectual property, personally identifiable information (PII), medical records, financial data, or other sensitive data.

By building private cloud architecture according to cloud native principles, an organization gives itself the flexibility to easily move workloads to public cloud or run them within a hybrid cloud (mixed public and private cloud) environment whenever they're ready.

More here:

What is Private Cloud? | IBM

What is a Private Cloud – Definition | Microsoft Azure

The private cloud is defined as computing services offered either over the Internet or a private internal network and only to select users instead of the general public. Also called an internal or corporate cloud, private cloud computing gives businesses many of the benefits of a public cloud (including self-service, scalability, and elasticity) with the additional control and customization available from dedicated resources over a computing infrastructure hosted on-premises. In addition, private clouds deliver a higher level of security and privacy through both company firewalls and internal hosting to ensure operations and sensitive data are not accessible to third-party providers. One drawback is that the company's IT department is held responsible for the cost and accountability of managing the private cloud, so private clouds require the same staffing, management, and maintenance expenses as traditional datacenter ownership.

Two models for cloud services can be delivered in a private cloud. The first is infrastructure as a service (IaaS), which allows a company to use infrastructure resources such as compute, network, and storage as a service. The second is platform as a service (PaaS), which lets a company deliver everything from simple cloud-based applications to sophisticated, cloud-enabled enterprise applications. Private clouds can also be combined with public clouds to create a hybrid cloud, allowing the business to take advantage of cloud bursting to free up more space and scale computing services to the public cloud when computing demand increases.

See the original post:

What is a Private Cloud - Definition | Microsoft Azure