
Category Archives: Cloud Computing

What is Cloud Computing? | Oracle

Posted: November 28, 2021 at 9:49 pm

There are three types of clouds: public, private, and hybrid. Each type requires a different level of management from the customer and provides a different level of security.

In a public cloud, the entire computing infrastructure is located on the premises of the cloud provider, and the provider delivers services to the customer over the internet. Customers do not have to maintain their own IT and can quickly add more users or computing power as needed. In this model, multiple tenants share the cloud provider's IT infrastructure.

A private cloud is used exclusively by one organization. It could be hosted at the organization's location or at the cloud provider's data center. A private cloud provides the highest level of security and control.

As the name suggests, a hybrid cloud is a combination of both public and private clouds. Generally, hybrid cloud customers host their business-critical applications on their own servers for more security and control, and store their secondary applications at the cloud provider's location.

The main difference is that a hybrid cloud combines public and private clouds, while a multicloud architecture uses multiple cloud computing and storage services, often from different providers, within a single architecture.

There are three main types of cloud services: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). There's no one-size-fits-all approach to cloud; it's more about finding the right solution to support your business requirements.

SaaS is a software delivery model in which the cloud provider hosts the customer's applications at the cloud provider's location. The customer accesses those applications over the internet. Rather than paying for and maintaining their own computing infrastructure, SaaS customers subscribe to the service on a pay-as-you-go basis.

Many businesses find SaaS to be the ideal solution because it enables them to get up and running quickly with the most innovative technology available. Automatic updates reduce the burden on in-house resources. Customers can scale services to support fluctuating workloads, adding more services or features as they grow. A modern cloud suite provides complete software for every business need, including customer experience, customer relationship management, customer service, enterprise resource planning, procurement, financial management, human capital management, talent management, payroll, supply chain management, enterprise planning, and more.

PaaS gives customers the advantage of accessing the developer tools they need to build and manage mobile and web applications without investing in, or maintaining, the underlying infrastructure. The provider hosts the infrastructure and middleware components, and the customer accesses those services via a web browser.

To aid productivity, PaaS solutions need to have ready-to-use programming components that allow developers to build new capabilities into their applications, including innovative technologies such as artificial intelligence (AI), chatbots, blockchain, and the Internet of Things (IoT). The right PaaS offering also should include solutions for analysts, end users, and professional IT administrators, including big data analytics, content management, database management, systems management, and security.

IaaS enables customers to access infrastructure services on an on-demand basis via the internet. The key advantage is that the cloud provider hosts the infrastructure components that provide compute, storage, and network capacity so that subscribers can run their workloads in the cloud. The cloud subscriber is usually responsible for installing, configuring, securing, and maintaining any software that runs on that infrastructure, such as databases, middleware, and application software.
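To make that division of responsibility concrete, here is a minimal sketch, assuming Python with the boto3 AWS SDK: the provider allocates the virtual machine, while the subscriber's own bootstrap script installs and configures the software. The AMI ID, region, and packages are placeholders, not recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The subscriber, not the provider, installs and configures the database.
# This hypothetical user-data script runs on first boot.
user_data = """#!/bin/bash
yum install -y postgresql-server
postgresql-setup --initdb
systemctl enable --now postgresql
"""

# The provider supplies the compute, storage, and network underneath.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(response["Instances"][0]["InstanceId"])
```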


NIST Cloud Computing Program – NCCP | NIST

Posted: at 9:49 pm

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service); three service models (Cloud Software as a Service (SaaS), Cloud Platform as a Service (PaaS), Cloud Infrastructure as a Service (IaaS)); and four deployment models (private cloud, community cloud, public cloud, hybrid cloud). Key enabling technologies include: (1) fast wide-area networks, (2) powerful, inexpensive server computers, and (3) high-performance virtualization for commodity hardware.

The Cloud Computing model offers the promise of massive cost savings combined with increased IT agility. It is considered critical that government and industry begin adoption of this technology in response to difficult economic constraints. However, cloud computing technology challenges many traditional approaches to datacenter and enterprise application design and management. Cloud computing is currently being used; however, security, interoperability, and portability are cited as major barriers to broader adoption.

The long-term goal is to provide thought leadership and guidance around the cloud computing paradigm to catalyze its use within industry and government. NIST aims to shorten the adoption cycle, which will enable near-term cost savings and increased ability to quickly create and deploy enterprise applications. NIST aims to foster cloud computing systems and practices that support interoperability, portability, and security requirements that are appropriate and achievable for important usage scenarios.


Cloud computing has won. But we still don’t know what that means – ZDNet

Posted: at 9:49 pm

There's little doubt that cloud computing is now the absolutely dominant force across enterprise computing. Most companies have switched from buying their own hardware and software to renting both from vendors who host their services in vast anonymous data centers around the globe.

Tech analysts are predicting that the vast majority of new computing workloads will go straight into the cloud, and most companies will switch to a cloud-first policy in the next couple of years: total cloud spending will soon hit $500 billion.


There are plenty of good reasons for this. The cloud companies, whether that's software-as-a-service or infrastructure-as-a-service or any other as-a-service, are experts at what they do, and can harness the economies of scale that come with delivering the same service to a vast number of customers. Most companies don't need to be experts in running email servers or invoicing systems when it doesn't bring them any real competitive advantage, so it makes sense to hand over these commodity technologies to cloud providers.


Still, that doesn't mean every consequence of the move to the cloud is resolved.

Renting is often more expensive than buying, so keeping a lid on cloud costs remains a challenge for many businesses. And hybrid cloud, where enterprises pick and choose the best services for their needs and then try to connect them up, is increasingly in vogue. Few companies want to trust their entire infrastructure to one provider; services do go down and everyone needs a backup option. The risk of vendor lock-in in the cloud is something companies increasingly want to avoid.

The impact of cloud computing on skills is more complex. Certainly the shift has seen some tech jobs disappear as companies no longer need to manage basic services themselves. Tech staff will need to shift from maintaining systems to developing new ones, most likely by tying cloud services together. That's going to be important for companies that want to create new services out of the cloud, but it's a significant skills shift for many staff to go from admin to developer and not everyone will want to.

Also, as those administrator jobs vanish, the career path in IT will shift, too: skills around project management, innovation and teamwork will become more important for tech workers that want to move up.

There's no obvious cloud-computing backlash ahead right now. Even a few major outages have done little to shake confidence in the idea that for most applications and most organisations the cloud makes business sense. However, the implications of that decision may take a few years to play out yet.


Increasing Importance Of Cloud Computing In Businesses – GISuser.com

Posted: at 9:49 pm

Although cloud computing was first envisioned in the late 1960s by J.C.R. Licklider, the modern cloud computing platform dates back to 2006, when usage of these services became prominent thanks to advances in modern technologies and growing use of the internet. Cloud computing has come a long way, and there is still further to go given the dynamic environment of emerging technologies.

So, what exactly is cloud computing, why are businesses rushing toward this platform, and how can it be a game-changer for the current business market?

Cloud computing refers to the delivery of on-demand computing services over the internet on a pay-as-you-go basis. In simple terms, users can store all their data over the internet using cloud storage services, rather than on traditional media like hard disks and pen drives.

Before the era of cloud services, businesses had to maintain on-premises data servers to store and manipulate data, an approach with significant drawbacks that cloud services now address. So how do on-premises and cloud services differ from each other?

Cloud computing services provide easy and effective solutions that businesses can rely on to expand their services and platforms. They can maintain a competitive edge over others, and individuals can use cloud services over the internet, too.

There are two types of cloud computing models, namely the deployment model and the service model.

In the deployment model, there are three types of models, known as public, private, and hybrid clouds.

Here, cloud infrastructure is available to the public and is owned by a cloud provider.

Examples include Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, Sun Cloud, and IBM's Blue Cloud.

Here the cloud infrastructure is maintained by a single organization; it can be managed by the company itself or a third party, and it can be on-premises or off-premises.

Examples include AWS and VMware.

Here, this cloud has the characteristics of both a public and a private cloud.

Government agencies are typical examples.

In the service model, again there are three types of models, known as IaaS, PaaS, and SaaS.

If an organization needs a virtual machine, then IaaS can be opted for. Here, most of the users are IT administrators. Examples: AWS, Microsoft Azure, Google.

If a company needs a platform to build software products, then PaaS can be opted for. Here, most of the users are software developers.

If a company requires the final product or doesn't want to maintain any IT equipment, then SaaS can be opted for. Here, most of the users are end customers. Examples: Microsoft Office 365, Google apps.

There are abundant benefits of cloud computing, and businesses are rushing toward these services in order to simplify their approach to complex tasks. The cloud does have disadvantages, such as security breaches, hijacking, and external sharing of data, but its advantages definitely outweigh them, and cloud services can deliver more than expected.

The ongoing pandemic has taken a serious toll on the economy since 2020. However, things are slowly coming back to normal, and work from home (WFH) is still practiced by many organizations' employees.

During these times, cloud usage by both organizations and individuals has increased rapidly, since most employees use cloud platforms to perform their various duties.

Since more than half of the world is on the cloud platform, start-ups and various new emerging companies are rushing toward cloud computing engineering to expand their base and reach the corners of the world. Sales of personal computers and laptops spiked during the pandemic, which increased computer chip makers' expenditures by 20 to 30%. On the bright side, the availability of cloud computing means less pollution, limiting the negative impact on the climate.

As we already discussed above, there are IaaS, PaaS, and SaaS. The future could be more than 100 million times faster due to the availability of Quantum Computing as a Service (QaaS). This is already on the market in its initial stages, and the companies that provide it include IBM Q, AWS, and Google. Quantum computers are 100 million times faster than current classic computers and can solve mysteries by using qubits, unlike the bits used by current computers.

It is estimated that $1 trillion will be spent on cloud computing over the coming decade, and the new concept of containerization is supported by platforms like Kubernetes. Containerization can help avoid vendor lock-in, and it can be completely serverless.

Nevertheless, given the dynamic emerging technologies in the market, cloud computing can be expected to grow to unexpected heights and will be a booming career opportunity.

Cloud computing as a career opportunity can be the next best thing one could pursue, and through cloud computing training one can learn these skills. There are many platforms, like Great Learning, where one can master courses on IT and emerging technologies and gain abundant skills.

Cloud computing has come a long way, and there is still much more this technology can add in the future. There could be advanced serverless quantum computing hubs where the mysteries of the universe are decoded and space highways are calculated accurately for travel to other planets like Mars and Venus.


Why your cloud computing costs are so high – and what you can do about them – SiliconANGLE News

Posted: at 9:49 pm

Small mistakes in the cloud can have big consequences.

John Purcell, chief product officer at custom developer DoiT International Ltd., tells of one customer who made a keystroke error that caused the company to spin up an Amazon Web Services Inc. instance much larger than what was needed. A job that was supposed to finish on Friday was never turned off and ran all weekend, resulting in $300,000 in unnecessary charges. "There is a small single-digit percentage of companies that manage cloud costs well," he said.

Fifteen years after Amazon.com Inc. launched the first modern cloud infrastructure service, customers are still coming to grips with how to plan for and manage in an environment with dozens of variables that don't exist in the data center, including a nearly limitless capacity to waste money.


Not that this is slowing cloud adoption. The recently released 2022 State of IT Report from Spiceworks Inc. and Ziff Davis Inc. reported that 50% of business workloads are expected to run in the cloud by 2023, up from 40% in 2021. But information technology executives express frustration at the difficulty of getting the visibility they need to plan accurately for cloud infrastructure costs.

A recent survey of 350 IT and cloud decision-makers by cloud observability platform maker Virtana Inc. found that 82% said they had incurred unnecessary cloud costs, 56% lack tools to manage their spending programmatically and 86% can't easily get a global view of all their costs when they need it. Gartner Inc. predicts that 60% of infrastructure and operations leaders will encounter public cloud cost overruns. And Flexera Software LLC's 2020 State of the Cloud Report estimated that 30% of enterprise spending on cloud infrastructure is wasted.

"Nearly 50% of cloud infrastructure spend is unaccounted for," estimated Asim Razzaq, chief executive of Yotascale Inc., which makes dynamic cost management software targeted at engineers.

The issue leapt into public view earlier this year in a post titled "The Cost of Cloud, a Trillion Dollar Paradox." Martin Casado and Sarah Wang at the venture capital firm Andreessen Horowitz concluded that for software companies operating at large scale, the cost of cloud could double a firm's infrastructure bill, resulting in a collective loss of $100 billion in market value based on the impact of cloud costs on margins.

Although not everyone agreed with the analysis, it's clear that cloud costs can rise quickly and unexpectedly, and it's something even staunch cloud advocates say needs to be addressed head-on. The topic is likely to be discussed this coming week in the exhibit halls at the AWS re:Invent conference in Las Vegas, since AWS remains far and away the largest cloud services provider.

No one is laying the blame for the situation squarely at the door of infrastructure-as-a-service companies. "Every single cloud service provider wants good revenue," said Eugene Khvostov, vice president of product and engineering at Apptio Inc., a maker of IT cost management products. "They don't want to make money on resources that aren't used."

But the sheer complexity of options users have for deploying workloads, compounded by multiple discount plans, weak governance policies and epic bills, can frustrate anyone trying to get a coordinated picture of how much they're spending. "You want granularity, but the costs are coming in every day, every hour, every second," Khvostov said.

Cloud bills can be 50 pages long, agreed Randy Randhawa, Virtana's senior vice president of research and development and engineering. "Figuring out where to optimize is difficult."

Much of the reason for cloud cost overruns comes down to organizations failing to understand and accommodate the fundamental differences between the data center capital-expense cost model and the operating-expense nature of the cloud. Simply stated, the costs of running a data center are front-end loaded into equipment procurement, but the marginal cost of operating that equipment once it's up and running is relatively trivial.


In the cloud, there are no capital expenses. Rather, costs accrue over time based on the size, duration and other characteristics of the workload. That means budgeting for and managing cloud resources is a constant ongoing process that requires unique tools, oversight and governance.

To get a sense of where economies can be achieved, SiliconANGLE contacted numerous experts who specialize in cloud economics. Their approaches to helping clients rein in costs range from automated tools that look for cost-saving opportunities to consulting services centered on budgeting and organizational discipline. They identified roughly three major areas where money is most often wasted (provisioning, storage and foregone discounts) as well as an assortment of opportunities for what Yotascale's Razzaq called "micro-wastage," or small money drips that add up over time.

Provisioning infrastructure in the cloud is pretty much the same as it is in the data center. The application owner or developer specifies what hardware and software resources are needed and a dedicated virtual server or instance is allocated that matches those requirements.

If needs change, though, the time and cost dynamics of the data center and the cloud diverge. Getting access to additional on-premises memory or storage can take hours or days in the case of a virtual machine and weeks if new hardware must be procured.

In contrast, cloud providers allow additional resources and machines to be quickly allocated either on a temporary or a permanent basis. "There is no longer a need to provision a workload with all the capacity it will need over its lifespan," said Karl Adriaenssens, chief technology officer at GCSIT Inc., an infrastructure engineering firm.

Old habits die hard, though. To accommodate the relative inflexibility of data center infrastructure, developers and application owners tend to overestimate the resources that will be needed for a given workload. "Developers are concerned about making sure their apps perform well and they tend to overprovision to be on the safe side," said Harish Doddala, senior director of product management at cloud cost management firm Harness Inc.

Incentives reward this approach. "You don't get into trouble if you overspend a little but you do get into trouble if the application doesn't perform," said Razzaq.

All cloud platform providers offer autoscaling capabilities that make allocating additional capacity automatic, with the only charges being for the additional capacity used. However, users often don't think to deploy them.

As a result, 40% of cloud-based instances are at least one size too big, estimates DoiT's Purcell. "You'd be surprised how often workloads run at 5% to 10% utilization."
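To see how such underutilization might be spotted in practice, here is a minimal sketch, assuming Python with the boto3 AWS SDK and CloudWatch's standard CPUUtilization metric; the 10% threshold and two-week window are illustrative assumptions, not figures from the article.

```python
import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.now(datetime.timezone.utc)

# Flag running instances averaging under 10% CPU over the past two weeks.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=now - datetime.timedelta(days=14),
            EndTime=now,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points:
            avg = sum(p["Average"] for p in points) / len(points)
            if avg < 10.0:
                print(f"{inst['InstanceId']}: avg CPU {avg:.1f}%, candidate for downsizing")
```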


As attractive as autoscaling sounds, experts advise using it with caution. On-demand instances, which are the most commonly used but also the most expensive type of cloud VM, can run up large bills if capacity expands too much. Autoscaling can become addictive, prompting users to create multiple database copies and log files that eat up storage and money.

"Autoscaling will allow you to meet just about any workload requirement, but running almost unlimited scaling with on-demand instances can get out of hand," said Adriaenssens.
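One common guardrail is to pair a target-tracking policy with explicit size limits so scaling cannot run away. A hedged boto3 sketch follows; the group name, target value, and caps are assumptions for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hard cap on group size so on-demand spend has a ceiling.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # assumed existing group
    MinSize=2,
    MaxSize=10,
)

# Target tracking keeps average CPU near 50% by adding or removing
# instances, within the MinSize/MaxSize bounds set above.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```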

The tactic also doesn't always work as easily in reverse. If database sizes grow during a period of high activity, they may exceed a threshold that makes it difficult to scale the instance back down again. "If you scale down [Amazon's] RDS, it's painful. It may change your license type," said Travis Rehl, vice president of product at CloudCheckr Inc., a maker of cloud visibility and management software that's being acquired by NetApp Inc. "It's possible but the effort can be very high."

The second type of overprovisioning occurs when cloud instances are left running after they're no longer needed. In an on-premises environment, this isn't a big problem, but the clock is always running in the cloud.

Usage policies that give users too much latitude to control their own instances are a common culprit. Someone may spin up an instance for a short-term project and then forget to shut it down. It may be months before anyone notices, if it's noticed at all.

"Companies may have a policy of creating new accounts for new workloads, and after hundreds have been created, it becomes a bear to manage," said Razzaq. IT administrators may fear shutting down instances because they don't know what's running in them and the person who could tell them has left the company, he said.


Developers, who are more motivated by creating software than managing costs, are often culprits, particularly when working on tight deadlines. "Typically, the budget is managed by finance, but the ones who actually cause the overruns are the developers themselves," said Harness' Doddala.

When Cognizant Technology Solutions Corp. was called in to help one financial services customer rein in its costs in the Microsoft Corp. Azure cloud, it found numerous unnecessary copies of databases, some of which exceeded a terabyte in size. Virtual machines were running round-the-clock whether needed or not.

"The company was prioritizing deadlines over efficiency," said Ryan Lockard, Cognizant's global chief technology officer. Cognizant cut its cloud costs by half mainly by imposing operational discipline.

A wide variety of automated tools from the cloud providers and their marketplace partners can help tame runaway instances, but customers often don't have time to learn how to use them. Simple tactics can yield big savings, though, such as tagging instances so that administrators can view and manage them as a group. "You can specify policies for apps by tags and within those policy constructs define what you want to track and take actions," said Virtana's Randhawa.
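As a minimal sketch of that tag-driven housekeeping, assuming Python with the boto3 AWS SDK, the following flags running instances that carry no "owner" tag and have been up for more than 30 days; the tag name and threshold are illustrative assumptions.

```python
import datetime
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)

# Untagged, long-running instances fit the "spun up and forgotten"
# pattern described above; report them for a human to review.
for res in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for inst in res["Instances"]:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if "owner" not in tags and inst["LaunchTime"] < cutoff:
            print(f"Orphan candidate: {inst['InstanceId']} "
                  f"launched {inst['LaunchTime']:%Y-%m-%d}")
```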

All cloud providers offer automated tools to manage instances in bulk. For example, Amazon's Systems Manager Automation can start and shut down instances on a pre-defined schedule, and the company's CloudWatch observability platform has drawn high praise for its ability to spot and stop overages. Microsoft's Azure Cost Management and Billing does the same on the Azure platform, and Google LLC's Active Assist uses machine learning to automate a wide variety of administrative functions, including sizing instances appropriately and identifying discount opportunities.

Numerous well-funded startups are also active in this market, including NetApp Inc.'s Spot for optimizing the use of Spot Instances, ParkMyCloud for resource optimization and CloudZero for cost visibility. IBM Corp., VMware Inc., Nutanix Inc. and HashiCorp all have footholds in the market. Zesty Tech Ltd. just this week announced a $35 million Series A funding round for an approach that uses artificial intelligence to automatically adjust instances that allocate storage.

It's cheap to move data into the cloud but expensive to take it out. That means data volumes and costs tend to grow over time, with charges accruing month by month.


This so-called "data gravity" is core to keeping customers in the fold, said Corey Quinn, chief cloud economist at The Duckbill Group. The more data the customer commits to a provider, the more applications tend to follow, and the greater the risk of abandoned instances because no one wants to delete data, he said. As a result, cloud providers will continue to grow even without new customers.

The costs are attractive (AWS charges a little over a penny per gigabyte for infrequently accessed data), but that creates a temptation to shortcut discipline.

"Studies show that up to 46% of data is just trash," said Gary Lyng, chief marketing officer at Aparavi Software Corp., a distributed data management software provider. "Get rid of that first before you back it up or move it to the cloud."

Time-based pricing can also be insidious in the long term. The two cents per month per gigabyte that AWS charges for S3 storage becomes a dollar over a four-year period, making it far more expensive than local disk storage.
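The arithmetic behind that claim is easy to check. A back-of-the-envelope calculation in Python, using the article's illustrative rate of two cents per gigabyte per month:

```python
# Time-based pricing compounds: the same gigabyte billed monthly
# approaches a dollar over four years at the article's quoted rate.
rate_per_gb_month = 0.02
months = 48
print(f"Cost per GB over {months} months: ${rate_per_gb_month * months:.2f}")  # $0.96

# For a 10 TB dataset, that's a recurring cost a one-time disk purchase avoids.
gb = 10 * 1024
print(f"10 TB for 4 years: ${rate_per_gb_month * gb * months:,.2f}")  # ~$9,830
```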

And getting it out adds to that cost. A customer that downloads 10 terabytes of data per month can expect to pay about $90 for the privilege. Extracting 150 terabytes costs $7,500. "If you want to leave, it can be massively expensive," said David Friend, CEO of cloud storage service provider Wasabi Technologies Inc.

Cloud infrastructure customers may know how much storage they have but not how often they use it, Friend said. That can lead to overpaying for high-availability access to data that is rarely touched. "And the more data they have, the more expensive it is for you to leave," he said.


Data and compute instances are functionally separate in cloud infrastructure, meaning that shutting down a virtual machine doesn't affect the data. "You pay for everything, whether you use it or not," Randhawa said.

Apptio has found tens of thousands of storage instances in the Azure cloud that are orphaned, "not because operations have bad intentions but because they forget to hit the switch to terminate them or move them to cold storage," Khvostov said.

Cloud providers also bundle high-performance package offerings based on input/output operations per second to defined database sizes, meaning that buyers seeking the fastest speed can inadvertently pay for too much storage. "Overprovisioning can get very expensive on the storage side very fast," said GCSIT's Adriaenssens.

As in the case of infrastructure, automated tools can move little-used storage to archive automatically, but customers need to know of their existence and take the time to configure them. In the meantime, cloud providers have little incentive to make it easy for customers to take data out, since it makes switching to other platforms that much more difficult.
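As one concrete example of configuring such a tool, here is a hedged boto3 sketch of an S3 lifecycle rule that tiers aging objects to cheaper storage classes and eventually expires them; the bucket name, prefix, and day thresholds are placeholders, not recommendations.

```python
import boto3

s3 = boto3.client("s3")

# Move objects to infrequent access after 30 days, archive after 90,
# and delete after a year, so little-used data stops accruing full-price charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```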

Cloud infrastructure providers can deliver bills at nearly any level of granularity a customer desires, but the tradeoff for specificity is nearly impossible complexity. "Cloud providers make all that data available to you but you have to be looking for it," said DoiT's Purcell.

Numerous discount plans are available, but it's generally up to the customer to ask for them.

"The vendors are happy to teach people how to use the cloud as opposed to understanding the different modalities of working in the cloud," said Aran Khanna, CEO of Archera.ai Inc., a maker of cloud cost management software. Cloud providers say they're more than happy to help customers look for cost savings, and they provide calculators that weigh various options.

Amazon Spot and Reserved Instances (Microsoft calls them Spot VM and Reserved VM instances) offer customers deep discounts for committing to using capacity over an extended period of time in the case of Reserved Instances, or for buying surplus time temporarily as available in the case of Spot Instances. There are also discount plans for customers that are willing to exercise some flexibility in shifting workloads across regions.


However, DoiT's Purcell estimates that fewer than 25% of customers take advantage of cost-savings plans such as reserved instances and spot instances. "It's like going to the grocery store; I have a pocket full of coupons, but I have to make sure they're the right ones," he said.

They also tend to be reluctant to accept terms that limit their flexibility. "Where customers leave money on the table is where they buy the least risky thing and don't negotiate," said Archera's Khanna. "It's easier to buy the most expensive option."

Fear of overcommitting can deter users from seeking the most aggressive long-term discounts, but the savings from three-year reserved instance plans, for example, can more than compensate for underutilization, experts say.

A prepaid three-year reserved instance on AWS provides for a discount of more than 60%, while the one-year version saves a little over 40%. A customer that is confident in needing an instance for two years would be better off buying the three-year option and letting one year go unused than opting for the smaller discount. AWS provides a marketplace for buying and selling instances and Microsoft will buy back unused time up to a limit.
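A quick worked example in Python, using the article's rough discount figures and a notional $1.00-per-hour on-demand rate, shows why the longer commitment can win even with a year unused:

```python
HOURS_PER_YEAR = 8760
on_demand = 1.00  # notional $/hour; actual rates vary by instance type

# Pay for 3 years at a 60% discount, but only use 2 of them.
three_year_ri = 3 * HOURS_PER_YEAR * on_demand * (1 - 0.60)
# Pay for 2 consecutive 1-year terms at a 40% discount.
one_year_ri_x2 = 2 * HOURS_PER_YEAR * on_demand * (1 - 0.40)

print(f"3-year RI, year 3 unused:  ${three_year_ri:,.0f}")   # $10,512
print(f"Two 1-year RIs, 2 years:   ${one_year_ri_x2:,.0f}")  # $10,512
# At exactly 60% vs. 40% the options break even; because the three-year
# discount is "more than 60%," it comes out cheaper despite the unused year.
```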

"Having a negotiated global rate discount plan yields the first base of a strong discounted pricing portfolio," said Cognizant's Lockard. "Combining that with pay-as-you-go-style Reserved Instances allows for credits to be applied for planned future consumption."

GCSIT's Adriaenssens advises users to budget for a balance of reserved, on-demand and even spot instances so that the most economical options are available for a given workload. He also recommends creating a Cloud Center of Excellence team that's responsible for measuring, planning and tuning deployment parameters so that workloads align with a cloud provider's savings plans.

If you're willing to pay someone else to get your cloud costs in order, there are plenty of businesses ready to take your money. Many say they typically save their customers 30% or more, making their fees easy to justify.


However, many of the savings can be achieved by simply applying more organizational discipline to the task. That starts with making informed decisions about which applications to put in the cloud in the first place. "The perception that cloud platforms are cheaper is baloney," said CloudCheckr's Rehl. "Cloud is more expensive but you are intentionally buying speed."

That means leaving legacy applications where they are is often a better strategy than moving them to the cloud, experts advise. Workloads built for a data center environment, one in which resources, once deployed, are always available, waste money by design in an opex spending model.

"Legacy applications running in lifted-and-shifted virtual machines are the most expensive way to operate in the cloud," said Cognizant's Lockard. "You are paying for the services and storage 24 hours a day, seven days a week, whether you use them or not."

Legacy applications can also be opaque, with little documentation and no one around who built them in the first place. As a result, said Rehl, "We have seen customers who lift and shift bring over all sorts of things they don't need." They may import data sets they think were necessary even if they haven't been touched in a very long time.

Everyone agrees that the best way to optimize costs is to use applications built for the cloud. These take advantage of dynamic scaling, ephemeral servers, storage tiering and other cloud-native features by design. "Cloud management needs to be automated using the many tools the cloud providers have to offer," said Chetan Mathur, CEO of cloud migration specialist Next Pathway Inc.

FinOps is a relatively new discipline that addresses "the new reality that a lot of things that finance and procurement would have taken care of is now the domain of engineers," said Archera's Khanna. FinOps brings together engineers and financial professionals to understand each other better and to set guidelines that enable more cost-efficient decision-making. A recent survey by the FinOps Foundation found that the discipline is now becoming mainstream across large enterprises in particular and that FinOps team sizes grew 75% in the last 12 months.

Major platform shifts always bring disruption, and the move to the cloud is the biggest one most IT managers will see in their careers. Despite the adjustments they're making to a new operating model, most are willing to accept the tradeoffs for business agility and speed to market, mindful of what FinOps Foundation Executive Director J.R. Storment recently told ZDNet: "The dirty little secret of cloud spend is that the bill never really goes down."


Joint Cloud Computing: How Can Organizations Benefit From This New Trend? – Toolbox

Posted: at 9:49 pm

When the demands of a cloud-dependent company exceed the capacity of a single cloud, several cloud providers may be required. The arrival of Cloud 1.0 introduced lower IT costs and on-demand service availability. However, it is fair to say that the globalisation of cloud services has not been without its fair share of difficulties. To lessen the challenges and reduce cost, Joint Cloud can be the way forward. Let's see what exactly it is and how organizations can absorb the benefits offered by this technology.

Cloud technology behemoths have begun to collaborate in order to expedite the go-to-market cycle and capitalize on each other's unique selling points. It's a partnership between cloud service providers that will help joint clients with their migration capabilities and application operations across various cloud platforms. Competitors Oracle and Microsoft recently formed a partnership that combines their strengths and provides the best of both worlds. Similarly, tech company Avaya recently announced a collaboration with Microsoft to develop a joint cloud communication solution.

Joint Cloud is a modern computing platform that encourages developers to design cross-cloud services through software-defined interaction and cooperation across different cloud service organizations. Furthermore, container platform automation capabilities handle multi-cloud access, providing enterprises with a compelling solution to work with various cloud providers, infrastructure, and cloud types.

Aron Brand, the CTO of CTERA, explained this new-age phenomenon as a new generation of computing model that facilitates providing cross-cloud services through integration and cooperation among different cloud providers. "While this term is currently used mostly by academia, some of the required components of joint cloud computing already exist in the commercial sector," said Brand.

"Take for example the concept of a global file system, which creates a single-namespace, globally accessible file system, overlayed on multiple object storage providers which can be located in different clouds and regions and operated by different service entities. A global file system eliminates vendor lock-in by allowing transparent data movement across cloud providers; it enables boundless storage capacity, while providing comprehensive control over and visibility into this globally distributed data. Using the global file system as a foundation, service organizations can develop federated applications that span heterogeneous clouds and data centers, including edge devices," he added.

"From our point of view, Joint Cloud Computing is an extension of what we would call a multi-cloud strategy," commented Sathya Sankaran, COO of Catalogic Software. "The current discussion around joint cloud computing/multi-cloud is all about building infrastructure that makes it easier to communicate between applications running on different clouds, migrate loads (data and applications) between various clouds, and manage loads in various clouds," he said.


Even though it's a decade-old phenomenon, the year 2021 has further accelerated the growth of Joint Cloud computing, helping it become one of the most trending cloud computing technologies of the year. Let's look at some of the recent developments in this space.

Thales and Google Cloud have partnered to establish a Joint Cloud offering in France

Thales and Google Cloud collaborated in October to co-develop a French hyperscale cloud product. With this new service, French businesses and government agencies will have access to all of the capacity, security, agility, and autonomy that the two entities' respective technologies have to offer.

Google Cloud and Genesys expand their Joint Cloud Contact Center

The two companies recently announced their partnership on new AI, deep learning, and data analytics applications. They have a number of objectives in mind. Automating customer service, providing predicted customer satisfaction, and AI-driven verification are just a few examples. Besides, the plans include creating new conversational routes that use Google Search, Maps, and other tools.

Woori Financial modified its Joint Cloud platform to speed up its digital transformation

The joint cloud infrastructure has supposedly assisted in advancing the company's digital innovation since its deployment in February. It optimally manages the group's IT assets and cloud space, enhancing the divisions' synergy in digital-based companies.

At times, cloud-dependent organizations require more than just one cloud provider as the needs surpass the capacity of a single cloud. We see events like Black Friday that demand tens of thousands of times more resources than normal days, straining a solitary cloud vendor, which is either unable to provide the requested resources or must provide IT resources based on access demand. This might result in higher costs and lower IT resource use, which contradicts cloud computing's primary purpose of increasing IT resource utilization.

The advent of Cloud 1.0 has enabled reduced IT costs and on-demand availability of services. But it won't be wrong to state that there have also been certain challenges with the globalization of cloud services. In research conducted by IEEE on JointCloud, a few challenges associated with Cloud 1.0 were identified.


Both academia and industry have begun to examine partnerships amongst independent public cloud providers to overcome these difficulties. Cloud 2.0's core element is cooperative cloud computing, which removes the barrier between numerous clouds. Let's look at some of the benefits that make Joint Cloud a perfect fit for organizations.

With the increasing expansion of data buffers and the heterogeneity of customer tastes, one cloud provider can scarcely meet all of their needs. Joint Cloud is an effective way to coordinate autonomous cloud peers to deliver a high level of storage service. However, the storage services must strike global balances between accuracy and consistency under various conditions and needs by exploiting resources that are scattered across diverse cloud peers.

Paul Repice, VP of sales at Datadobi, said, "Gone are the days where enterprises rely on one single storage vendor for their data. Today, 92% of organizations either have a multi-cloud strategy in place or are moving in that direction, and over 80% of large enterprises have already adopted a hybrid cloud infrastructure. These trends make sense because the pandemic encouraged global enterprise companies to adopt effective, proven cloud technologies offered by market-leading brands due to the lack of need for sudden infrastructure spending. The availability of cloud-based file storage offered a cost-effective, quick fix and an apparent win-win for businesses under pressure to adapt on the fly."

To make the change back to the office smoother, organizations must work with vendor-neutral solutions that can handle the scale and complexity of large storage environments in 2022. When evaluating a particular vendor, IT teams need to check the compatibility with hyperscalers, preserve data integrity throughout any data management projects, and make sure that the vendors offer access to a comprehensive support team. With these building blocks in place, organizations can make the best use of cloud and on-premises storage in the long term.

Joint Cloud computing is a new research project spearheaded by Chinese institutes to address the computing challenges associated with various clouds. Customers' diversified and changing cloud resource needs are fulfilled by Joint Cloud through the delivery of virtual cloud resources (VC). Joint Cloud users may use an internet browser to write, debug, and execute activities in their work environment without worrying about low-level issues like framework deployment, setup, and parameterization. This working environment is based on a customized VC that offers the most appropriate resources from the underlying clouds. This architecture can smoothly traverse different clouds, allowing apps to elastically scale out and utilize fresh resources from third parties to deal with load demand issues.

Over the last several years, there has been a tremendous surge in corporate investment and expanding research interest in the Internet of Things. The synergy between traditional cloud and edge cloud is already underway to give quality service to varied consumers, spurred by the application demand from the Internet of Everything in the future. However, because of the high marginal operating and maintenance expenses, traditional clouds must continue to collaborate. JointCloud computing is a new computing paradigm that supports collaboration within heterogeneous clouds. JointCloud intends to make it easier for many cloud vendors to work together to deliver effective cross-cloud services. Also, it is cost-effective because it focuses on vertical cloud resource integration and horizontal cloud vendor collaboration.

The first edition of cloud computing (Cloud 1.0) is gradually being phased out in favor of the second (Cloud 2.0). The novel JointCloud effort, lately backed financially by China's Ministry of Science and Technology as part of the National Key Program for Cloud Computing and Big Data, is the latest generation of computing paradigm for Cloud 2.0, one that enables businesses to customize cross-cloud services.

JointCloud aspires to bring together key cloud providers in China and those from across the world to create a joint cloud network. Unlike previous Cloud 2.0 projects such as SuperCloud and InterCloud, JointCloud is the initial stride toward creating an expanding community where all cloud providers may utilize the specified service infrastructure to deliver deep cooperation, and customers can design services above the virtualized cloud.

"There are plenty of opportunities for vendors to help multi-cloud customers, but we don't see the big managed cloud providers doing much to help customers, other than enabling their management consoles to manage infrastructure in other clouds (thus putting their management tools on top)," says Sankaran from Catalogic Software.

"We do note that it's in their interest to keep customers locked in rather than have their services totally commoditized. We don't see enthusiasm for a push on standards and tools for joint cloud computing. Although that's not to say that some industry standards couldn't eventually help here. We think Kubernetes and containerized workloads will also help to some extent, because it provides somewhat of a standard platform layer."



Forecasting cloud-powered progress in 2022 – MedCity News

Posted: at 9:49 pm

As we enter 2022, the pandemic continues to exact a terrible toll in both human and economic terms. But one side effect of this crisis helped healthcare move forward dramatically: We experienced a decade's worth of technology adaptation and adoption in the space of mere months.

Tools and technologies that had been lingering (or languishing) on the fringes of the industry moved from the periphery to the core of healthcare IT practically overnight. High-speed research and epidemiology infrastructure, remote work and telemedicine systems, models for rapid vaccine development and deployment: all suddenly became pressing public-health necessities. And all were enabled by harnessing the cloud to support integrated operations, data science, and massive on-demand computational needs.

If there's any silver lining to this historically bad event, it's courtesy of the cloud. The past two years really opened our eyes to what is possible in healthcare when you invest in innovation and unleash it with agile technologies. I forecast more cloud-powered healthcare progress on the horizon in three key areas:

Vaccine R&D

The game has changed when it comes to vaccines. Before the pandemic, mRNA vaccines had been in development for roughly three decades but were consistently stymied by hesitancy and preference for the status quo. Covid response forced that technology over the finish line, and its success is going to impact more than coronavirus propagation. It is already encouraging emboldened approaches to other huge threats to human health.

Vaccine development and deployment is going to go from a years-long activity to a months-long activity with mRNA technology. And we're going to need new regulatory frameworks to handle that speed. In the U.S., the only reason we were able to get Covid vaccine shots in our arms so quickly was because the FDA issued emergency authorizations; both research-and-development timeframes and the standard process for clinical trials are typically much longer and more arduous. But sizable scientific groundwork had already been laid for messenger RNA technology, which supported and enabled accelerated action. More importantly, that action proved the speed and efficacy with which we can safely deploy new vaccines.

Assuming we can fashion a regulatory framework update that addresses this change in capability, I think we're going to start to see more rapid development of vaccines against known pathogens; trials are already in the works against certain cancers, against malaria, and against HIV. We will also be able to more quickly respond to future pandemics.

Covid-19 has already killed more Americans than the infamous Spanish Flu of 1918. It won't be the last pandemic. This is going to happen again, but we're starting to grasp the power of our new tools for facing future threats. To wield them, we require a significant upgrade in the agility of our development and IT models, because we are going to need the ability to manage more clinical trials much more quickly than we did in the past. The cloud is proving crucial in that process. It's time to flex and adjust. And I'm excited about how cloud technology can help speed both regulatory innovation and clinical trial innovation in support of this mission.

Telehealth unleashed

As the pandemic unfolded, telehealth usage skyrocketed, fueled by physical distancing needs and responsive regulatory/reimbursement modifications. According to McKinsey, "In April 2020, overall telehealth utilization for office visits and outpatient care was 78 times higher than in February 2020." Once again, existing but underutilized technology was quickly scaled with the aid of cloud power to meet unprecedented demand. Though usage stabilized by summer, telehealth claim volumes consistently maintained nearly 40 times their pre-pandemic numbers. The convenience factor has staying power. The ability to not have to go to your doctor to go to your doctor is not going to go away.

Further enhancing and extending telehealth and telemedicine applications is one of many cloud-based IT opportunities stirring the health sector to explore new directions. For example, virtual care is great because it enables timely access to health care professionals from afar, but it doesn't let them touch you. They can't look in your ears, swab your nose, or take your blood pressure. So secondary and tertiary services will start to advance in parallel as telemedicine matures.

Home-based health monitoring devices will quickly find new and more medically integrated applications, for example. And a redefined role for tech-savvy home health workers may also emerge, with nurses and aides becoming the eyes and ears of the doctor at the other end of the telemedicine session. Picture a distributed set of health care professionals who meet patients at home while they jointly communicate with the physician back in the office, or visiting nurses utilizing a whole set of connected equipment for monitoring patients at home and feeding results back to their doctor remotely.

Such services would be convenient for everyone, but they'd really shine in populations that do not have any easy access to care. Consider the plight of rural Americans who have to drive for hours to get to their nearest medical facility. Advancing technology can help. In addition to near-limitless cloud capacity, 5G is upgrading the prime limiting factor in addressing rural populations with telehealth: bandwidth. Coupled with Starlink, Boeing, and Kuiper, among others, racing to establish satellite broadband, ubiquitous connectivity for virtual care is imminent. This is great news for the American farmer, but also for the indigenous Brazilian facing a 14-day walk to the nearest medical facility. Truly global accessibility to virtualized healthcare with that combination of technologies is about to begin, and we'll start to see real applications as soon as next year.

Making data make a difference

Another meaningful inroad we can expect in the coming year is the blending of historically disparate healthcare-related datasets for much more insightful decision making. The dream is precision medicine, and we're getting closer to realizing it. Typically, we have clinical data in a repository, we may have genomic data in a repository, and we might also have behavioral health data in a repository. But the ties between them all often remain elusive. Historically, those data sets are held separately. Melding them together creates a more holistic viewpoint, and cloud-based healthcare IT enables that amalgamation. Assuming good care coordination, the treatment team gets a much better ability to address all of the things that are happening with the patient instead of distinct elements.

Consider breast cancer treatment, where certain genetic sequences are starting to inform care. As the medical community explores cloud-based molecular modeling, combined with genetic insight and clinical insight into a particular patient's condition, it can forego the sledgehammer approach and start to build a customized treatment program for a particular person based on all those combinations of factors. In the past, housing the enormous datasets required for such capabilities was very expensive, and the amount of computing power necessary to actually do that work was also very expensive. But that's no longer the case. Cloud power is eminently affordable, which mitigates the cost barrier and makes it all feasible. Working on unthinkably large datasets, integrating them with massive amounts of computing power, and unleashing millions of trained machine-learning models to inform clinical decision making in real time, factoring in all of the elements that make up a person, will enable significantly better clinical outcomes. We're going to be able to attack disease progression as it's happening in an individual patient, as well as how it happens statistically across a diverse population.

Technologists in the healthcare sector tend to spend their days focusing on the "Hows" to keep the health system up and running and moving forward. As we anticipate a new year, it's good to reflect on some of the amazing "Whats" that have already been accomplished, and prepare for those coming into view. Most importantly, we should never lose sight of the "Whys" that give our work purpose. Now and into the future, amazing advancements in human health are made possible with cloud power directed by amazing information technology professionals.



How to transition to the cloud: 7 best practices – TechTarget

Posted: at 9:49 pm

It's easy to identify the reasons an organization would transition to the cloud. The concepts and practices necessary to accomplish a cloud migration, however, can be difficult to grasp.

Every organization's experience with the cloud will be unique depending on exactly which types of cloud resources it uses and what it deploys on them. Nevertheless, the seven practices discussed here help to establish a foundation to plan an efficient, low-risk migration to the cloud.

Before planning a cloud transition, it's important for businesses to identify which benefits they seek to gain from the move.

The importance of each potential benefit will vary from one business to another. For example, a retailer whose applications see significant seasonal fluctuations in usage may value scalability more than a company that uses the cloud to host internal, line-of-business applications with relatively steady usage.

At the same time, a transition to the cloud is often difficult, for reasons ranging from organizational buy-in to unfamiliar security and cost models.

These benefits can be maximized -- and challenges simplified -- by adhering to practices that make successful cloud transitions easier to achieve.

A transition to the cloud becomes much easier when all stakeholders are on board. They include the technical practitioners, who will set up and manage cloud environments, as well as management, who should support the migration to the cloud and the temporary and permanent expenses that come with it.

Other employees, too, should understand why the business is moving to the cloud. As users, they should learn how cloud computing will benefit them, in what ways applications will become easier to use, and where to expect a learning curve. Business leaders should prepare clear answers to questions like these before embarking on a cloud transition.

Given the vast number of cloud services available -- from VMs and containers, to object and block storage, to IoT device management and beyond -- businesses should identify upfront which cloud services they plan to deploy. Otherwise, they may end up running more types of services than they can effectively manage at once. They may also fail to determine in a systematic way which cloud services are the best fit for their workloads.

The right services will vary from one workload and business to the next. In general, businesses should consider factors such as how much each type of cloud service costs, how hard or easy it is to deploy workloads on the service, how the service can be monitored and managed, and how a particular service might create security risks.
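Where a team wants to make that comparison systematic, a simple weighted scoring matrix can help. The sketch below is a minimal illustration of the idea, not a prescribed method; the service names, factors, weights, and ratings are all hypothetical placeholders.

```python
# Minimal sketch: weighted scoring of candidate cloud services.
# All service names, factors, weights, and ratings are hypothetical.

FACTORS = {"cost": 0.4, "ease_of_deployment": 0.2,
           "manageability": 0.2, "security_risk": 0.2}

candidates = {
    "managed-kubernetes": {"cost": 6, "ease_of_deployment": 7,
                           "manageability": 8, "security_risk": 7},
    "plain-vms":          {"cost": 8, "ease_of_deployment": 5,
                           "manageability": 5, "security_risk": 6},
}

def score(ratings: dict) -> float:
    """Combine 1-10 ratings into a single weighted score."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

# Rank candidates from best to worst weighted score.
for name, ratings in sorted(candidates.items(),
                            key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings):.1f}")
```

The point isn't the specific numbers; it's that writing the factors down forces a workload-by-workload decision instead of a gut call.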

Organizations should know that certain workloads may be better left out of a cloud environment. Some applications, for instance, depend on local networking configurations that could be difficult to replicate in the cloud. Other apps may need direct access to bare-metal hardware, which is harder to find -- and more costly -- in the cloud.

Early in a cloud transition is the best time for a business to identify which applications won't work well off premises. Plan steps to modify those applications to suit a cloud environment or, alternatively, commit to keeping them out of the cloud.

The cloud presents specific security challenges. Because cloud environments are connected to the internet by default, it is easier for attackers to locate and exploit cloud resources. Cloud environments can be complex, and even small misconfigurations, such as accidentally allowing public access to a sensitive storage bucket, can have large security implications.

Businesses should assess how they will mitigate these security risks as part of their cloud transition plan.
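As one concrete illustration of closing the storage-bucket misconfiguration mentioned above, the sketch below uses boto3 to enable all four of Amazon S3's public access blocks on a bucket. The bucket name is a hypothetical placeholder, and equivalent controls exist on other providers.

```python
# Minimal sketch: block all public access to an S3 bucket with boto3.
# "example-sensitive-data" is a hypothetical bucket name.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-sensitive-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to AWS principals
    },
)
```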

Cost models, too, can change dramatically in the cloud. Cloud computing enables an organization to pay as it goes, which simplifies cost management in one respect. That said, a business needs to consider costs related to a transition to the cloud. For example, a provider will assess egress fees when a customer moves data out of a cloud environment, and customers may face fees for using a provider's monitoring and security tools.

For these reasons, it's important to perform a detailed assessment to learn how much each type of cloud service and resource will cost and then to seek ways to control those costs.
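Even a rough model helps surface line items such as egress before they become surprises. The sketch below is back-of-the-envelope only; the per-hour, per-GB, and flat rates are assumptions for illustration, not any provider's published prices.

```python
# Back-of-the-envelope monthly cost estimate. All rates are assumed
# illustrative figures, not actual provider pricing.
COMPUTE_RATE = 0.10   # assumed $/hour per instance
EGRESS_RATE = 0.09    # assumed $/GB transferred out
MONITORING = 50.0     # assumed flat $/month for provider tooling

def monthly_cost(instances: int, hours: float, egress_gb: float) -> float:
    compute = instances * hours * COMPUTE_RATE
    egress = egress_gb * EGRESS_RATE
    return compute + egress + MONITORING

# e.g., 10 instances running all month (~730 hours), 2 TB outbound
print(f"${monthly_cost(10, 730, 2048):.2f}")
```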

A business should specifically identify who within their organization will be in charge of the cloud environment. Who can launch new cloud resources? Will the entire organization share one cloud environment, or will each business unit or team have its own account? Do changes to the cloud environment need to be documented in a certain way?

Answering questions like these before a formal transition to the cloud begins should help to ensure that the business has a consistent plan to manage its cloud environment responsibly.
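Many of those governance answers can be encoded as policy rather than left in a document. As a hedged sketch of one such control, the snippet below uses boto3 to create an IAM policy that denies launching EC2 instances unless the request tags the new instance with an owning team; the policy name and tag key are hypothetical.

```python
# Minimal sketch: deny ec2:RunInstances unless the new instance is
# tagged with an owning team. Policy name and tag key are hypothetical.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        # "Null": "true" matches when the tag is absent from the request
        "Condition": {"Null": {"aws:RequestTag/owner-team": "true"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="require-owner-team-tag",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```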

Cloud environments almost always change over time. Businesses may migrate applications from one type of cloud service, such as VMs, to another, such as Kubernetes. They may move more workloads from on premises to the cloud. They may expand from single-cloud architectures to hybrid or multi-cloud configurations.

It's impossible to anticipate every change ahead of time, of course. But organizations can at least create a roadmap that identifies -- in general terms -- how they expect their cloud strategy to evolve. For instance, the roadmap might specify that the business plans to launch on a single cloud at first and begin evaluating multi-cloud options two years later.

Every cloud transition is unique, but an organization can get ahead of many of the challenges that complicate cloud migrations. The secret is to systematically evaluate key factors, such as which cloud services to use, how to assign cloud resources to different parts of the business, and how to grow a cloud environment over time. Organizations should also keep abreast of the broader trends driving cloud migrations, which might influence future cloud decisions.

See more here:

How to transition to the cloud: 7 best practices - TechTarget

Posted in Cloud Computing | Comments Off on How to transition to the cloud: 7 best practices – TechTarget

3 Reasons to Buy Ankr – Motley Fool

Posted: at 9:49 pm

For cryptocurrency investors thinking long term, Ankr (CRYPTO:ANKR) may be one of the best crypto plays in the market right now. An emerging force in decentralized finance (DeFi), Ankr has been making some serious gains lately. In fact, ANKR is up about 40% during the past month.

Ankr has a great deal to offer investors. This blockchain network allows cloud-computing providers to offer underutilized resources to users who need cloud infrastructure, and those providers are compensated in ANKR tokens.

Users receive other great benefits on the Ankr network as well. Let's dive into why this is a top cryptocurrency on my watch list right now.


The cloud-computing world is relatively well-defined, and just a handful of large players dominate the market. With such an oligopolistic structure, pricing power resides mainly with cloud-computing providers. This is great for someone who owns Amazon stock, but not so great for companies or users that need cloud infrastructure.

Ankr seeks to change all that. This network takes existing underutilized hardware from cloud-computing providers and rents it out. In exchange for ANKR tokens, cloud-computing companies can maximize the use of their computing power. Wastage is a big deal in every sector, and Ankr helps minimize this issue to a great extent.

The idea of maximizing underutilized assets happens to be a very eco-friendly endeavor. Of course, not all blockchains are environmentally progressive. Much has been made about how much energy Bitcoin consumes every year. (Hint: almost as much power as the entire country of Thailand.)

Ankr has found a way to create utility for end users. This blockchain network aims to do so by using what already exists, rather than adding to the energy-consumption problems plaguing this sector.

Back to the decentralized piece of the equation. Decentralization is a buzzword in crypto for a reason. By cutting into the centralized market power of the few companies controlling any one sector, blockchain projects like Ankr aim to democratize pockets of the economy (and maybe the whole economy, one day).

At a high level, these goals sound idealistic and unattainable. However, the implications of Ankr's cloud-computing potential are immense.

Most centralized cloud-computing services have a single point of failure, or just a few, if central locations lose power. For decentralized cloud-computing players like Ankr, this risk is minimized. By using a decentralized network of providers, Ankr can offer network stability and relatively low-cost cloud-computing services to companies looking for decentralized options.

As demand for decentralized solutions increases, Ankr could see increased adoption drive the value of its network higher. Therefore, those banking on the value of ANKR tokens as representative of the value its ecosystem creates may consider ANKR an intriguing growth option.

After all, this is a network that's looking to find novel solutions to modern problems. There's a lot investors should like about that.

Besides the cloud-computing angle (which I think is really something), Ankr also provides unique value in how investors stake tokens. Staking refers to putting up one's tokens, locking them into a given blockchain protocol to help validate transactions. People who stake their tokens typically receive interest in the form of additional tokens. Accordingly, staking is a passive income opportunity many crypto investors are looking to get into.

However, Ankr provides an intriguing way for investors to stake tokens while putting up much smaller capital investments to do so. How?

Ankr's StakeFi product lets investors put up as little as 0.5 Ether to earn staking rewards. Currently, 32 ETH are required to stake on Ethereum's (CRYPTO:ETH) beacon chain. This would require the equivalent of more than $125,000, at present.

The platform does this by utilizing synthetic derivatives to limit the amount of initial capital investors need to put up, an approach loosely analogous to options in the stock market. It positions Ankr as a revolutionary force in this growing area of decentralized finance.
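For a sense of scale, here is a quick back-of-the-envelope comparison of the capital required under each route; the ETH price used is an assumption for illustration only, not a live quote.

```python
# Back-of-the-envelope capital comparison for staking. The ETH price
# is an assumed illustrative figure, not a live quote.
ETH_PRICE_USD = 4_000        # assumed price for illustration
NATIVE_MINIMUM_ETH = 32      # Ethereum beacon chain validator minimum
STAKEFI_MINIMUM_ETH = 0.5    # minimum cited for Ankr's StakeFi

print(f"Native staking: ${NATIVE_MINIMUM_ETH * ETH_PRICE_USD:,}")   # $128,000
print(f"StakeFi entry:  ${STAKEFI_MINIMUM_ETH * ETH_PRICE_USD:,}")  # $2,000.0
```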

Sure, Ethereum is moving toward Ethereum 2.0, which is likely to streamline its staking process substantially. However, delays in the move to Ethereum 2.0 have persisted. For now, Ankr has an opportunity to expand its market share in this emerging DeFi category.

Cryptocurrency investing is inherently risky, and Ankr is no exception. This crypto network faces the same systemic risks and competitive environment as its peers.

However, Ankr is creating some real-world value with its network. The fact that companies can utilize Ankr's protocol to maximize their return on assets while providing decentralized cloud-computing services to users is impressive. Additionally, I think there's a lot to like about Ankr's DeFi potential.

The ANKR token represents a blockchain with tremendous (and growing) value right now. Accordingly, I'm watching this token closely.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

See the rest here:

3 Reasons to Buy Ankr - Motley Fool

Posted in Cloud Computing | Comments Off on 3 Reasons to Buy Ankr – Motley Fool

One of the world’s largest supercomputers lived for only 10 minutes – TechRadar

Posted: at 9:49 pm

There was a time when supercomputers were available only to a handful of organizations, mostly governments, public research facilities and scientific bodies. The rise of cloud computing and the widespread availability of sophisticated cloud workload management (CWM) tools have reduced the barrier of entry considerably.

Only last week, YellowDog, a CWM specialist based in Bristol, United Kingdom, assembled a virtual supercomputer using its proprietary platform - and at its peak, which lasted about 10 minutes, the system had mustered an army of more than 3.2 million vCPUs.

While it was nowhere near as powerful as Fugaku, that was enough to propel it into the top 10 of the world's fastest supercomputers, at least for a few minutes.


The provisioning, which was done on behalf of a pharmaceutical company, helped run a popular drug-discovery application as a single cluster. A back-of-the-envelope calculation puts the raw cost of the project at about $65,000.

That figure accounts for 33,333 of AWS's 96-core c5.24xlarge instances, one of several instance types used during the run (roughly comparable to bare-metal or dedicated servers), at $1.6013 per hour each. That works out to $53,376 per hour, or $57,824 for the full 65-minute session.
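Those figures check out arithmetically. Here is a minimal sketch that reproduces them using only the numbers quoted above:

```python
# Reproduce the quoted back-of-the-envelope figures from the article.
instances = 33_333
vcpus_per_instance = 96
price_per_hour = 1.6013      # c5.24xlarge on-demand rate quoted above
session_minutes = 65

print(instances * vcpus_per_instance)        # 3,199,968: ~3.2 million vCPUs
hourly = instances * price_per_hour
print(round(hourly))                         # ~$53,376 per hour
print(round(hourly * session_minutes / 60))  # ~$57,824 for the session
```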

"With access to this on-demand supercomputer, the researchers were able to analyze and screen 337 million compounds in 7 hours. To replicate that using their on-premises systems would have taken two months," said Colin Bridger from AWS.

What's extraordinary is that this sort of firepower is available to anybody who can afford it. And it is based on the sort of hardware that runs our cloud-computing world: web hosting, website builders, cloud storage, and email services, among others.

CWM platforms have evolved over the years to develop algorithms and machine learning capabilities to choose the best source of compute, regardless of its origin or type.

For example, one cloud provider may have the cheapest spot compute, but the algorithm won't select it if that compute is unavailable in the territory set by the customer, or if the provider can't actually supply enough servers of the required instance type. In that case, another source of compute is chosen. Clever indeed!
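In code, that selection logic amounts to filtering candidate compute sources by hard constraints and then choosing the cheapest survivor. The sketch below is a simplified illustration of the idea with entirely hypothetical provider data; it is not YellowDog's actual algorithm.

```python
# Simplified sketch of constraint-filtered compute selection. The
# provider records are hypothetical; this is not YellowDog's algorithm.
from dataclasses import dataclass

@dataclass
class ComputeSource:
    provider: str
    region: str
    instance_type: str
    available: int      # instances currently obtainable
    spot_price: float   # $/hour

def pick_source(sources, region, instance_type, needed):
    """Filter by territory, type, and capacity, then take the cheapest."""
    eligible = [s for s in sources
                if s.region == region
                and s.instance_type == instance_type
                and s.available >= needed]
    return min(eligible, key=lambda s: s.spot_price, default=None)

sources = [
    ComputeSource("cloud-a", "us-east", "96-core", 10_000, 0.90),  # wrong region
    ComputeSource("cloud-a", "eu-west", "96-core", 2_000, 0.95),   # too few servers
    ComputeSource("cloud-b", "eu-west", "96-core", 40_000, 1.10),
]
print(pick_source(sources, "eu-west", "96-core", 33_333))  # falls to cloud-b
```

Note that the cheapest provider loses here because it can't meet the capacity constraint in the required territory, which is exactly the behavior described above.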

View original post here:

One of the world's largest supercomputers lived for only 10 minutes - TechRadar

Posted in Cloud Computing | Comments Off on One of the world’s largest supercomputers lived for only 10 minutes – TechRadar
