Daily Archives: December 17, 2021

5 Biggest Cloud Computing Trends to look out for in 2022 – Analytics Insight

Posted: December 17, 2021 at 11:42 am

This article features the top 5 trends, outlined below, that are defining the future of cloud computing.

Cloud computing is becoming more popular than ever as businesses adopt data-driven business models, remote and hybrid work environments, and global supply networks. New capabilities and deployment patterns continue to develop, giving organizations of all sizes and sectors more options for consuming and benefiting from their cloud investments. Cloud computing boomed in 2020 as the workforce turned virtual and businesses reacted to the worldwide pandemic by focusing on the supply of digital services. Gartner predicts that global spending on public cloud services will reach $1 trillion by 2024.

Cloud computing is increasingly seen as a critical component for firms seeking to work smarter and accomplish projects more quickly. With access to on-demand processing capacity, highly scalable platforms, and a more flexible approach to IT expenditure, the cloud has progressed from cutting-edge technology to an essential IT resource. Cloud computing trends show how new technology is altering the way firms function and spend their IT budgets.

Cloud services are offered in different ways. The delivery model a firm adopts depends on its functional requirements, the maturity of its IT, and its data governance requirements. As businesses look for more flexibility and choice in IT solutions, hybrid cloud and serverless cloud are trending.

a) Hybrid Cloud: Many businesses choose a hybrid cloud approach, which combines public cloud services with the placement of a private cloud devoted to a specific organization. This is especially true for companies that collect sensitive information or work in highly regulated areas like insurance, where data privacy is critical. A hybrid strategy is popular because it gives enterprises the control they need while also adapting and evolving as they roll out new services for their customers.

b) Serverless cloud: Serverless computing is a type of cloud computing that allows businesses to access IT infrastructure on-demand without having to invest in infrastructure or manage it. Serverless models are gaining popularity among large and small businesses that want to create new applications fast but lack the time, resources, and/or funding to deal with infrastructure. This allows developing firms to make use of higher computing power at a lower cost, while large corporations may launch new digital services without adding to the workload of their already overburdened IT personnel.
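To make the serverless model concrete, here is a minimal sketch of a function-as-a-service handler in the style of AWS Lambda; the function name and the event field used below are illustrative assumptions, not part of the original article.

```python
# Minimal sketch of a serverless (function-as-a-service) handler in the
# style of AWS Lambda. The cloud provider provisions, scales, and bills the
# underlying infrastructure; the developer supplies only this function.
# The "name" field in the event payload is a hypothetical example.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```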

The cloud has evolved into more than just a facility for storage and computing power. Organizations are keen to extract insights from the available data through machine learning and artificial intelligence, and to boost efficiency with automation best practices.

Machine Learning and Artificial Intelligence- Cloud-based artificial intelligence (AI) technologies, such as machine learning, are assisting organizations in extracting more value from the ever-increasing amounts of data they gather. AI algorithms enable organizations to discover new insights from their data and enhance the way they work. Companies that don't have the means or talent to construct their own AI infrastructure (and many don't) can nevertheless benefit from it by using cloud service providers' systems.

Automation- Automation is a crucial driver of cloud adoption, particularly when it comes to boosting the efficiency of corporate operations. Companies can automate many internal procedures if their data and systems are centralized on the cloud. In addition, many businesses are striving to tighten connections between various pieces of software to manage their expanding cloud footprints better and ensure that solutions from diverse suppliers operate seamlessly together.

Delegation of IT operations- As more manufacturers provide solutions that can be hosted on external servers, some organizations prefer to outsource parts of their IT operations to third parties. Companies can reduce operational expenses by focusing on the core product or service rather than engaging specialist teams to create, operate, and maintain their systems. However, they must keep sensitive data and technology in mind when determining which functions to outsource to avoid jeopardizing their governance or compliance policies.

Businesses and customers are concerned about IT security and data compliance, and today's cloud solutions are developed to resolve these concerns. This has created a huge demand for Secure Access Service Edge and cloud-based disaster recovery practices.

a) Secure Access Service Edge (SASE)- Businesses are reconsidering their approach to security and risk management as employees access more services and data from personal devices outside of their organizations' IT networks. SASE is a strong approach to IT security that allows organizations to swiftly launch new cloud services and ensure that their systems are secure.

b) Cloud-based disaster recovery- Cloud-based disaster recovery backs up a company's data on an external cloud server. It is less expensive and more time-efficient, with the added benefit of being handled by an outside source. Businesses frequently use cloud-based disaster recovery for critical servers and applications like huge databases and ERP systems.
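As a rough illustration of the idea, copying a critical database dump to external cloud object storage might look like the sketch below; it assumes the boto3 SDK and Amazon S3, and the bucket and file names are placeholders.

```python
# Sketch: copy a local database dump to cloud object storage as an off-site
# backup. Assumes the boto3 SDK and an existing S3 bucket; the bucket and
# file paths are placeholders.
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

s3.upload_file(
    Filename="/backups/erp_db.dump",        # local dump file (placeholder)
    Bucket="example-dr-backups",             # hypothetical backup bucket
    Key=f"erp/erp_db-{timestamp}.dump",      # timestamped object key
)
```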

Cloud-based platforms are rapidly expanding to serve companies development needs as they seek to differentiate themselves by fast launching new goods and services. Cloud computing has opened new opportunities in application development, from purpose-built coding environments to decentralized data storage. This has given impetus to technologies like Containers and Kubernetes, Edge Computing, and Cloud-Native application development.

a) Containers and Kubernetes- Containers provide enterprises with a specialized cloud-based environment to develop, test and deploy new applications. As a result, developers can concentrate on the intricacies of their applications, while IT teams can focus on delivering and managing solutions, making the entire process faster and more efficient. Kubernetes is an open-source container orchestration technology that makes deploying and managing containerized applications easier. The software scales apps based on client demand and monitors the performance of new services so firms can address concerns before they become problems.
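As a small example of what orchestration looks like in practice, the sketch below changes the replica count of a deployment through the Kubernetes API using the official Python client; the client library, deployment name, and namespace are assumptions for illustration.

```python
# Sketch: scale a Kubernetes Deployment through the Kubernetes API.
# Assumes the official 'kubernetes' Python client and a reachable cluster;
# the deployment name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()                 # use local kubeconfig credentials
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web-frontend",                  # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 5}},       # desired replica count
)
```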

b) Edge computing- This type of cloud computing puts data collection, processing, storage, and analysis closer to the sources of the data. This lowers latency while also enabling the usage of edge devices. By 2025, Gartner expects that 75% of data generated by businesses will be created and handled outside of a centralized cloud.

c) Cloud-Native- Cloud-native apps allow enterprises to design and deploy new software to their consumers more quickly than traditional cloud applications. Cloud-native apps are constructed as a network of distributed containers and microservices. As a result, various teams may work on new features simultaneously, speeding up the innovation process.

We are going to witness a huge explosion in cloud gaming in the coming years. Platforms such as Google's Stadia and Amazon Luna are going to define the direction the cloud gaming realm takes in 2022. The arrival of cloud virtual reality and augmented reality (VR/AR) has made headsets more affordable and is fostering the growth of cloud gaming across various sections of society.

Cloud computing applications appear to be limitless, with 25% of organizations planning to move all of their software to the cloud in the next year. Increased cloud computing adoption and the discovery of new methods to leverage cloud-based systems to produce insights and efficiency are the upcoming trends to be seen in 2022. As more organizations embrace the increase in processing power, scalability, and flexibility that cloud-based systems provide, cloud adoption is expected to continue to expand. The road to adoption and the timeframe for doing so may vary for each company, but one thing is certain: there will be no going back to the old ways.

Author

Bhavesh Goswami, Founder & CEO, CloudThat


Cloud Computing Vs. Edge Computing: Who Wins the Race? – Analytics Insight

Posted: at 11:42 am


The cloud's primary notion of providing a centralized data source that can be accessed from anywhere in the globe appears to be the polar opposite of edge computing's local data handling concept. In many respects, though, edge computing was created by the cloud. The big data movement would never have grown to such proportions without centralized data storage. Many internet payment providers, for example, would not exist, and companies like Microsoft and Amazon would be very different from what they are now. We've spent some time attempting to sift out the benefits of edge and cloud computing. Which is the most effective? The solution isn't as simple as one might believe.

Cloud computing refers to the storage, processing, computing, and analysis of large amounts of data on remote servers or data centers. It also refers to the supply of many Internet-based services, such as data storage, servers, databases, networking, and software. Because data centers are frequently located in faraway locations, there is a time lag between data gathering and processing, which is usually undetectable in most use cases. In time-sensitive programs, however, this time latency, despite being measured in milliseconds, becomes critical. Consider real-time data collecting for a self-driving automobile, where delays might have disastrous implications.

Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) are the three basic categories of cloud computing. High infrastructure availability, self-service provisioning, elasticity, mobility, workload resilience, migration flexibility, broad network access, disaster recovery, and pay-per-use are just a few of the advantages of cloud computing in the form of IaaS.

The back-and-forth movement of data from the point where it is created to the central server for processing and subsequently to the end-user requires a lot of bandwidth, which slows down data processing and transfer. Because emerging technologies and IoT devices require sub-second reaction times, the tendency is to locate data processing and analytics as near to the data source as feasible.

Edge computing, as opposed to cloud computing, brings computation, storage, and networking closer to the data source, lowering travel time and latency dramatically. The procedures take place near the device or at the network's edge, allowing for speedier reaction times. Edge applications limit the amount of data that has to be moved, as well as the traffic generated by those transfers and the distance that data has to travel.
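A rough way to see the difference is to time a round trip to a nearby edge endpoint versus a distant cloud region, as in the sketch below; both hostnames are hypothetical and the numbers depend entirely on your network.

```python
# Sketch: compare connection round-trip time to a nearby edge endpoint and a
# distant cloud region. Both hostnames are placeholders; actual results
# depend entirely on network topology and distance.
import socket
import time

def round_trip_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Average time to open a TCP connection, in milliseconds."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += (time.perf_counter() - start) * 1000
    return total / attempts

print("edge gateway :", round_trip_ms("edge-gateway.example.net"), "ms")
print("cloud region :", round_trip_ms("far-region.example.com"), "ms")
```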

The exponential rise of IoT devices necessitates a shift in how we collect and analyze data. Consider how many smart home gadgets you possess, and then consider how many are used in healthcare, transportation, and manufacturing. The amount of data these devices send to servers regularly is enormous, and it frequently surpasses network bandwidth. Traditional centralized cloud architectures, no matter how strong or performant, can't keep up with these devices' real-time requirements.

While organizations employ content delivery networks (CDNs) to decentralize data and service provisioning by copying data closer to the user, edge computing uses smart devices, mobile phones, or network gateways to conduct tasks on behalf of the cloud, bringing computing power closer to the user. Edge applications enable lower latency and cheaper transmission costs by lowering data quantities and associated traffic. Edge computing's content caching, storage, and service delivery leads to faster response times and transfer rates.

Edge computing, according to some observers, may eventually supplant cloud computing because computing will become decentralized and the necessity for a centralized cloud will diminish. However, because their duties are distinct, this will not be the case. Edge computing devices are built to swiftly capture and process data on-site, as well as analyze data in real time; edge computing isn't concerned with data storage. Cloud computing, on the other hand, is built on infrastructure and can be quickly expanded to meet a variety of requirements. As a result, edge computing is appropriate for applications where every millisecond matters, whereas cloud computing is best for non-time-sensitive applications. Edge computing will most likely complement cloud computing rather than replace it.

The benefits of cloud computing are obvious. However, for some applications, relocating activities from a central place to the edge and bringing bandwidth-intensive data and latency-sensitive apps closer to the end-user is critical. Because the process of setting up an edge computing infrastructure requires in-depth professional skills, though, it will be some time before mainstream adoption occurs.


AWS: Here’s what went wrong in our big cloud-computing outage – ZDNet

Posted: at 11:42 am

Amazon Web Services (AWS) rarely goes down unexpectedly, but you can expect a detailed explainer when a major outage does happen.

12/15 update: AWS misfires once more, just days after a massive failure

The latest of AWS's major outages occurred at 7:30AM PST on Tuesday, December 7, lasted five hours and affected customers using certain application interfaces in the US-EAST-1 Region. In a public cloud of AWS's scale, a five-hour outage is a major incident.


According to AWS's explanation of what went wrong, the source of the outage was a glitch in its internal network that hosts "foundational services" such as application/service monitoring, the AWS internal Domain Name Service (DNS), authorization, and parts of the Elastic Compute Cloud (EC2) network control plane. DNS was important in this case as it's the system used to translate human-readable domain names to numeric internet (IP) addresses.
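For context, the translation DNS performs is the simple name-to-address lookup sketched below; when an internal DNS service degrades, every dependent call like this one slows down or fails. The endpoint shown is just an example of a public AWS hostname to resolve.

```python
# Sketch: the name-to-address translation that DNS performs, seen from a
# client. The hostname is a public AWS API endpoint, used here only as an
# example of something to resolve.
import socket

hostname = "ec2.us-east-1.amazonaws.com"
addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
print(f"{hostname} resolves to: {sorted(addresses)}")
```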


AWS's internal network underpins parts of the main AWS network that most customers connect with in order to deliver their content services. Normally, when the main network scales up to meet a surge in resource demand, the internal network should scale up proportionally via networking devices that handle network address translation (NAT) between the two networks.

However, on Tuesday last week, the cross-network scaling didn't go smoothly, with AWS NAT devices on the internal network becoming "overwhelmed", blocking translation messages between the networks with severe knock-on effects for several customer-facing services that, technically, were not directly impacted.

"At 7:30 AM PST, an automated activity to scale capacity of one of the AWS services hosted in the main AWS network triggered an unexpected behavior from a large number of clients inside the internal network," AWS says in its postmortem.

"This resulted in a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network, resulting in delays for communication between these networks."

The delays spurred latency and errors for foundational services talking between the networks, triggering even more failing connection attempts that ultimately led to "persistent congestion and performance issues" on the internal network devices.

With the connection between the two networks blocked up, the AWS internal operating team quickly lost visibility into its real-time monitoring services and was forced to rely on past-event logs to figure out the cause of the congestion. After identifying a spike in internal DNS errors, the teams diverted internal DNS traffic away from blocked paths. This work was completed at 9:28AM PST, roughly two hours after the initial outage began.

This alleviated impact on customer-facing services but didn't fully fix affected AWS services or unblock NAT device congestion. Moreover, the AWS internal ops team still lacked real-time monitoring data, subsequently slowing recovery and restoration.

Besides lacking real-time visibility, AWS internal deployment systems were hampered, again slowing remediation. The third major cause of its non-optimal response was concern that a fix for internal-to-main network communications would disrupt other customer-facing AWS services that weren't affected.

"Because many AWS services on the main AWS network and AWS customer applications were still operating normally, we wanted to be extremely deliberate while making changes to avoid impacting functioning workloads," AWS said.

First, the main AWS network was not affected, so AWS customer workloads were "not directly impacted", AWS says. Rather, customers were affected by AWS services that rely on its internal network.

However, the knock-on effects from the internal network glitch were far and wide for customer-facing AWS services, affecting everything from compute, container and content distribution services to databases, desktop virtualization and network optimization tools.

AWS control planes are used to create and manage AWS resources. These control planes were affected as they are hosted on the internal network. So, while EC2 instances were not affected, the EC2 APIs customers use to launch new EC2 instances were. Higher latency and error rates were the first impacts customers saw at 7:30AM PST.
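The distinction is easiest to see in code: already-running instances (the data plane) kept serving traffic, while control-plane calls like the launch request sketched below, using boto3 as an assumed example, returned elevated errors and latency. The AMI ID is a placeholder.

```python
# Sketch: launching an EC2 instance is a control-plane API call, the kind of
# request that saw elevated errors and latency during the outage. Assumes
# boto3 and valid credentials; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```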


With this capability gone, customers had trouble with Amazon RDS (relational database services) and the Amazon EMR big data platform, while customers of Amazon WorkSpaces, the managed desktop virtualization service, couldn't create new resources.

Similarly, AWS's Elastic Load Balancers (ELB) were not directly affected but, since ELB APIs were, customers couldn't add new instances to existing ELBs as quickly as usual.

Route 53 (DNS) APIs were also impaired for five hours, preventing customers from changing DNS entries. There were also login failures to the AWS Console, latency affecting the Amazon Secure Token Service used by third-party identity services, delays to CloudWatch, impaired access to Amazon S3 buckets and DynamoDB tables via VPC Endpoints, and problems invoking serverless Lambda functions.
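"Changing DNS entries" here means record-update requests like the one sketched below against the Route 53 API; boto3 is assumed, and the hosted zone ID, record name, and address are placeholders.

```python
# Sketch: updating a DNS record through the Route 53 API, the kind of call
# that was impaired during the outage window. Assumes boto3; the hosted zone
# ID, record name, and IP address are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```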

The December 7 incident shared at least one trait with a major outage that occurred this time last year: it stopped AWS from communicating swiftly with customers about the incident via the AWS Service Health Dashboard.

"The impairment to our monitoring systems delayed our understanding of this event, and the networking congestion impaired our Service Health Dashboard tooling from appropriately failing over to our standby region," AWS explained.

Additionally, the AWS support contact center relies on the AWS internal network, so staff couldn't create new cases at normal speed during the five-hour disruption.

AWS says it will release a new version of its Service Health Dashboard in early 2022, which will run across multiple regions to "ensure we do not have delays in communicating with customers."

Cloud outages do happen. Google Cloud has had its fair share, and Microsoft in October had to explain its eight-hour outage. While rare, the outages are a reminder that public cloud might be more reliable than conventional data centers, but things do go wrong, sometimes catastrophically, and can impact a large number of critical services.

"Finally, we want to apologize for the impact this event caused for our customers," said AWS. "While we are proud of our track record of availability, we know how critical our services are to our customers, their applications and end users, and their businesses. We know this event impacted many customers in significant ways. We will do everything we can to learn from this event and use it to improve our availability even further."


Why the healthcare cloud may demand zero trust architecture – Healthcare IT News

Posted: at 11:42 am

One of the most pressing issues in healthcare information technology today is the challenge of securing organizations that operate in the cloud.

Healthcare provider organizations increasingly are turning to the cloud to store sensitive data and back up confidential assets, as doing so enables them to save money on IT infrastructure and operations.

In fact, research shows that the healthcare cloud computing market is projected to grow by $33.49 billion between 2021 and 2025, registering a compound annual growth rate of 23.18%.

To many in healthcare, the shift to cloud computing seems inevitable. But it also brings unique security risks in the age of ransomware. Indeed, moving to the cloud does not insulate organizations from risk.

More than a third of healthcare organizations were hit by a ransomware attack in 2020, and the healthcare sector remains a top target for cybercriminals due to the wealth of sensitive information it stores.

Healthcare IT News sat down with P.J. Kirner, chief technology officer at Illumio, a cybersecurity company, to discuss securing a cloud environment in healthcare, and how the zero trust security model may be key.

Q. Healthcare provider organizations increasingly are turning to the cloud. That is clear. What are the security challenges that the cloud poses to healthcare provider organizations?

A. While healthcare cloud growth comes with certain advantages (for example, more information sharing, lower costs and faster innovation), the proliferation of multi-cloud and hybrid-cloud environments has also complicated cloud security for healthcare providers in myriad ways. And things will likely stay complicated.

Unlike companies that can move to the cloud entirely, healthcare organizations with physical addresses and physical equipment (for example, hospital beds and medical devices) will permanently remain hybrid.

Though going hybrid might seem like a transient state for some organizations, most healthcare organizations will find that they need to continuously adapt to a permanent hybrid state and all the evolving security risks that come with it.

In a cloud environment, it's often difficult to see and detect security risks before they become problems. Hybrid-multi-cloud environments contain blind spots between infrastructure types that allow vulnerabilities to creep in, potentially exposing an organization to outside threats.

Healthcare providers that share sensitive data with third-party organizations over the cloud, for example, may also be impacted if their partner experiences a breach. Additionally, these heterogeneous environments also involve more stakeholders who can influence how a company operates in the cloud.

Because those stakeholders might be in different silos depending on their specialties and organizational needs (for example, the expertise needed for Azure is not the same as the expertise needed for AWS), this makes the infrastructure even more challenging to protect.

If you're a healthcare provider, you handle sensitive information, such as personally identifiable information and health records, on a daily basis, which all represent prime real estate for bad actors hoping to make a profit.

These high-value assets often live in data center or cloud environments, which an attacker can access once they breach the perimeter of an environment. Because of this, as more healthcare organizations move to the cloud, we're also going to see more attackers take advantage of the inherent flaws and vulnerabilities in this complex environment to gain access to sensitive data.

Q. When it comes to securing healthcare organizations in the cloud, you contend that adopting a zero trust architecture (an approach that assumes breach and verifies every connection) is vital. Why?

A. We're living in an age where cyberattacks are a given, not a hypothetical inconvenience. To adopt zero trust, security teams need to first change how they think about cybersecurity; it's no longer about just keeping attackers out, but also knowing what to do once they are in your system. Once security teams embrace an "assume breach" mindset, they can begin their zero trust journey in a meaningful way.

Zero trust strategies apply least privilege access controls, providing only the necessary information and access to a user. This makes it substantially more difficult for an attacker to reach their intended target in any attempted breach.

In practice, this means that ransomware cannot spread once it enters a system, because, by default, it doesn't have the access it needs to move far beyond the initial point of entry.
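As a simple illustration of least privilege, an access policy can grant one action on one resource instead of broad wildcards. The sketch below uses the AWS IAM policy format purely as an example; the bucket name is hypothetical and the same principle applies in any cloud.

```python
# Sketch: a least-privilege access policy expressed in AWS IAM's JSON format,
# shown here as a Python dict. The workload gets exactly one action on one
# resource; the bucket name is a placeholder.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                            # one action
        "Resource": "arn:aws:s3:::patient-exports-example/*",  # one resource
    }],
}

# An overly permissive policy, by contrast, would use wildcards such as
# "Action": "*" and "Resource": "*", which is what zero trust tries to avoid.
```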

Another crucial component in a zero trust architecture is visibility. As I mentioned, it's difficult to see everything in a cloud environment and detect risks before they occur. The weak spots in an organization's security posture often appear in the gaps between infrastructure types, such as between the cloud and the data center, or between one cloud service provider and another.

With enhanced visibility (for example, visibility that spans your hybrid, multi-cloud and data center environments), however, organizations are able to identify niche risks at the boundaries of environments where different applications and workloads interact, which gives them a more holistic view of all activity.

This information is vital for cyber resiliency and for a zero trust strategy to succeed; only with improved insights can we better manage and mitigate risk.

In a year where more than 40 million patient records have already been compromised by attacks, it's more imperative than ever for healthcare organizations to make accurate assessments in regard to the integrity of their security posture.

We'll see more healthcare organizations leverage zero trust architecture as we head into the new year and reflect on the ways the cybersecurity landscape has changed in 2021.

Q. Zero trust strategies have gained traction in the past year, especially in tandem with the Biden Administration's federal stamp of approval. From your perspective, what do you think it will take for more healthcare CISOs and CIOs to go zero trust?

A. While the awareness of and the importance placed on zero trust strategies have grown in the last year, organizations still have a long way to go in implementing their strategies. In 2020, only 19% of organizations had fully implemented a least-privilege model, although nearly half of IT leaders surveyed believed zero trust to be critical to their organizational security model.

Unfortunately, a ransomware attack is often the wake-up call that ultimately prompts CISOs and CIOs to rethink their security model and adopt zero trust architecture. We've seen an upsurge in cyberattacks on hospitals over the course of the pandemic, threatening patient data.

By leveraging zero trust solutions for breach containment, healthcare organizations can mitigate the impact of a breach, so that an attacker cannot access patient data even if they manage to initially breach the system.

Healthcare teams are starting to understand that proactive cybersecurity is essential for avoiding outcomes that may be even worse than compromised data: If a hospital system is impacted by a ransomware attack and needs to shut down, they're forced to turn patients away, neglecting urgent healthcare needs.

Healthcare CISOs and CIOs are beginning to realize that the traditional security measures they've had in place (detection and protecting only the perimeter) aren't enough to make them resilient to a cyberattack.

Even if you haven't been breached yet, you're seeing attacks seriously impact other hospital systems and realizing that could happen to you, too.

Healthcare CISOs and CIOs who recognize the limitations of a legacy security model against today's ransomware threats will understand the need to adopt a strategy that assumes breach and can isolate attacks, which is what the zero trust philosophy is all about.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.


Top 4 cloud misconfigurations and best practices to avoid them – TechTarget

Posted: at 11:42 am

As organizations use more cloud services and resources, they become responsible for a staggering variety of administrative consoles, assets, services and interfaces. Cloud computing is a large and often interconnected ecosystem of software-defined infrastructure and applications. As a result, the cloud control plane -- as well as assets created in cloud environments -- can become a mishmash of configuration options. Unfortunately, it's all too easy to misconfigure elements of cloud environments, potentially exposing the infrastructure and cloud services to malicious activity.

Let's take a look at the four most common cloud misconfigurations and how to solve them.

Among the catalog of cloud misconfigurations, the first one that trips up cloud tenants is overly permissive identity and access management (IAM) policies. Cloud environments usually include identities that are human, such as cloud engineers and DevOps professionals, and nonhuman -- for example, service roles that enable cloud services and assets to interact within the infrastructure. In many cases, there can be many nonhuman identities in place. These can frequently have overly broad permissions that may allow unfettered access to more assets than needed.

To combat this issue, audit identity permissions regularly and scope both human and nonhuman roles down to the minimum access they actually need.
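For example, a periodic audit can flag customer-managed policies that allow wildcard actions. The sketch below uses boto3 against AWS IAM as an assumed example; other providers' SDKs support equivalent checks.

```python
# Sketch: flag customer-managed IAM policies whose default version allows
# wildcard actions. Assumes boto3 and read-only IAM permissions; this is an
# illustrative check, not an exhaustive policy review.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=policy["Arn"],
            VersionId=policy["DefaultVersionId"],
        )["PolicyVersion"]["Document"]
        statements = document.get("Statement", [])
        if isinstance(statements, dict):
            statements = [statements]
        for statement in statements:
            actions = statement.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            if statement.get("Effect") == "Allow" and "*" in actions:
                print("Overly broad policy:", policy["PolicyName"])
```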

Another typical misconfiguration revolves around exposed and/or poorly secured cloud storage nodes. Organizations may inadvertently expose storage assets to the internet or other cloud services, as well as reveal assets internally. In addition, they often also fail to properly implement encryption and access logging where appropriate.

To ensure cloud storage is not exposed or compromised, security teams should restrict public access to storage assets and enable encryption and access logging where appropriate.
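As one concrete starting point, the sketch below checks each bucket for a public-access block and default encryption, using boto3 against Amazon S3 as an assumed example.

```python
# Sketch: check S3 buckets for a public-access block and default encryption.
# Assumes boto3; a ClientError here means the corresponding setting is
# missing (or permission to read it is denied).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError:
        print(f"{name}: no public-access block configured")
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"{name}: no default encryption configured")
```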

Overly permissive cloud network access controls are another area ripe for cloud misconfigurations. These access control lists are defined as policies that can be applied to cloud subscriptions or individual workloads.

To mitigate this issue, security and operations teams should review all security groups and cloud firewall rule sets to ensure only the network ports, protocols and addresses needed are permitted to communicate. Rule sets should never allow access from anywhere to administrative services running on ports 22 (Secure Shell) or 3389 (Remote Desktop Protocol).
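A quick audit along those lines, sketched below with boto3 as an assumed example, lists security group rules that leave SSH or RDP open to the whole internet.

```python
# Sketch: find security group rules that expose SSH (22) or RDP (3389) to
# 0.0.0.0/0. Assumes boto3; equivalent checks exist for other clouds.
import boto3

ec2 = boto3.client("ec2")
RISKY_PORTS = {22, 3389}

for page in ec2.get_paginator("describe_security_groups").paginate():
    for group in page["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                for r in rule.get("IpRanges", []))
            if open_to_world and rule.get("FromPort") in RISKY_PORTS:
                print(f"{group['GroupId']}: port {rule['FromPort']} "
                      "is open to the internet")
```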


Vulnerable and misconfigured workloads and images also plague cloud tenants. In some cases, organizations have connected workloads to the internet accidentally or without realizing what services are exposed. This exposure enables would-be attackers to assess these systems for vulnerabilities. Outdated software packages or missing patches are another common issue. Exposing cloud provider APIs via orchestration tools and platforms, such as Kubernetes, meanwhile, can let workloads be hijacked or modified illicitly.

To address these common configuration issues, cloud and security engineering teams should regularly review which workloads and services are exposed to the internet, keep software packages and images patched and up to date, and lock down access to orchestration APIs.

Guardrail tools can help companies avoid cloud misconfigurations. All major cloud infrastructure providers offer a variety of background security services, among them logging and behavioral monitoring, to further protect an organization's data.

In some cases, configuring these services is as easy as turning them on. Amazon GuardDuty, for example, can begin monitoring cloud accounts within a short time after being enabled.
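In the GuardDuty case, "turning it on" really is close to a single API call, as in the sketch below (boto3 assumed).

```python
# Sketch: enable Amazon GuardDuty for the current account and region.
# Assumes boto3 and permissions to manage GuardDuty.
import boto3

guardduty = boto3.client("guardduty")
detector = guardduty.create_detector(Enable=True)
print("GuardDuty detector created:", detector["DetectorId"])
```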

While cloud environments may remain safe without using services like these, the more tools an organization puts in place to safeguard its operations, the better chance it has to know if an asset or service is misconfigured.


What is the future of VPN and cloud computing? – TechCentral.ie

Posted: at 11:42 am

Virtual private networks are gaining importance for home working, but they come with their own risks


In association with CyberHive

The significance of VPNs has changed and grown over the years, particularly with the massive digital transformation that businesses have been forced to implement post-pandemic.

Virtual private networks (VPNs) are widely used by many businesses for accessing critical infrastructure and securing connections between sites. They are also progressively important for the increasing number of employees who work from home, but who still need to retain access to key systems as if they were in the office. Prioritising data security for these remote workers is a key cyber resilience factor for any company.

A VPN works by creating a virtual point-to-point connection, either through the use of dedicated circuits or with tunnelling protocols over existing networks. This can be done over a wide area network (WAN) between geographically separate sites, or by the same methods to transmit data securely over the Internet.

Unfortunately, this very flexibility can create security challenges for some organisations, with 55% of organisations reporting challenges with their VPN infrastructure during the pandemic.

A simple misconfiguration, or the loss of a single password or security credential, can result in a major data breach. Furthermore, many VPNs, particularly those used as border security for cloud infrastructure, run on virtual machines which are just as susceptible to zero-day vulnerabilities or advanced hacking techniques as any other server.

Cyber criminals will often use VPNs as the first rung in an attack, enabling them to get a good position in a network. Several significant data breaches in the recent past have resulted from security vulnerabilities in VPNs. Even hardware-based firewalls fundamentally run software that needs to be patched and maintained to provide adequate security.

Should a breach happen via VPN, an organisation will need to have a rapid response plan to reset accounts and appliances, so valid users can still use the network whilst an investigation can take place.

With the adoption of public cloud platforms or a hybrid mix of cloud services and on-premise infrastructure, data security is even more critical, with potentially sensitive data being sent over the public Internet. Even cloud providers like AWS, Azure, and Google Cloud offer secure VPN connectivity between remote offices, client devices and their own networks, based on IPsec.

However, again there are disadvantages, which range from data loss/leakage and insecure interfaces to account hijacking. Also, if the cloud does experience outages or other technical problems, there needs to be a process in place to keep business operations running. Nevertheless, cloud computing may not be a realistic option for all companies. There are many businesses that have some older non-cloud based programmes or have files that are primarily stored in private data centres. Employees that need to access those files will still require secure remote connectivity.

Deploying and managing VPN can be complex and resource intensive, with high risks for misconfigurations and a potentially large blast radius for network-level access. As such, organisations are considering a move to alternative remote access solutions and prioritising the adoption of a zero-trust network access (ZTNA) model. ZTNA models can highlight gaps in traditional network security architecture, but they also introduce a new layer of complexity in implementation and deployment, as this framework cannot leave any gaps open, and maintenance and access permissions must be kept up to date regularly.

VPNs and ZTNA are at opposing ends of the security spectrum, but it is possible to reap the benefits of both from a security and usability perspective.

CyberHive has recently developed a Mesh VPN platform called Connect. This novel approach implements a low-latency P2P topology, suitable for traditional enterprise applications. But it is also equally efficient on low-power embedded devices to add connection security to IoT devices, or high-cost equipment running lightweight hardware and operating systems all whilst adding the principles of zero-trust and future proofing encryption by employing post-quantum resistant cryptographic algorithms. This is a solution that is designed for ease of deployment and central management, so even if your long-term vision is to deploy the latest security technology buzzword, you can protect your users and critical devices easily today with no network disruption.

For more info on CyberHive Connect, and how it could support your business, contact info@cyberhive.com


The best cloud and IT Ops conferences of 2022 – TechBeacon

Posted: at 11:42 am

After two years of mainly virtual events, the majority of cloud and IT Ops conferences in 2022 will be in-person events, although some organizers have decided to hold a combination of in-person and virtual events.

These conferences offer IT operations, cloud, and IT management professionals the chance to come together to consult with experts, collaborate with other professionals, demonstrate the latest tools, and hear the most up-to-date information about cloud management and IT operations.

Here's TechBeacon's shortlist of the best cloud and IT Ops conferences in 2022.

Twitter: @TechForge_Media | Web: techforge.pub/events/hybrid-cloud-congress-2/ | Date: January 18 | Location: Virtual | Cost: Free

This conference revolves around the business benefits that can arise from combining and unifying public and private cloud services to create a single, flexible, agile, and cost-optimal IT infrastructure. Attendees will learn how establishing a strategic hybrid cloud can align IT resources with business and application needs to accelerate optimal business outcomes and achieve excellence in the cloud.

Who should attend: Cloud specialists, program managers, heads of innovation, CIOs, CTOs, CISOs, infrastructure architects, chief engineers, consultants, and digital transformation executives

Twitter: @CloudExpoEurope | Web: cloudexpoeurope.com | Date: March 23 | Location: London, UK | Cost: TBD

Cloud Expo Europe focuses on the latest trends and developments in cloud technology and digital transformation. Attendees will see cloud-based solutions and services while hearing other information and expert advice. Speakers and exhibitors aim to "inspire attendees," according to organizers, with the newest technology for cloud strategy, optimizing costs, and sustainability.

Who should attend: Technologists, business leaders, senior business managers, IT architects, data center managers, developers, and network and infrastructure professionals

Twitter: @cloudfest, #cloudfest | Web: cloudfest.com | Date: March 22-24 | Location: Europa-Park, Germany | Cost: Standard pass, €399 plus VAT; VIP pass, €999 plus VAT; discount codes available

Organizers say attendees should "get ready for new partnerships, deep knowledge sharing, and the best parties the industry has ever seen." This year's event will revolve around three themes: the Intelligent Edge, Our Digital Future, and the Sustainable Cloud.

Who should attend: People in the cloud service provider and Internet infrastructure industries, and web professionals

Twitter: @datacenterworld, #datacenterworld | Web: datacenterworld.com | Date: March 28-31 | Location: Austin, Texas, USA | Cost: Regular prices range from $1,999 to $3,299; time-sensitive and AFCOM discounts are available, with prices as low as $1,399

Data Center World delivers strategy and insight about the technologies and concepts attendees need to know to plan, manage, and optimize their data centers. Educational conference programming focuses on rapidly advancing data center technologies, such as edge computing, colocation, hyperscale, and predictive analytics.

Who should attend: Infrastructure managers, facilities managers, cloud architects, engineers, architects, consultants, operations professionals, network security, storage professionals, and C-level executives

Twitter: @RedHatSummit, #RHSummit | Web: redhat.com/en/summit | Dates (2021): virtual April 27-28 and June 15-16, and a series of in-person events starting in October | Locations (2021): TBD | Cost (2021): Virtual, free

At the April event, attendees will hear the latest Red Hat news and announcements and have the opportunity to ask experts their technology questions. The June event will include breakout sessions and technical content geared toward the topics most relevant to the participants. Attendees will also be able to interact live with Red Hat professionals. Finally, attendees can explore labs, demos, trainings, and networking opportunities at in-person events that will be held in several cities.

Who should attend: System admins, IT engineers, software architects, vice presidents of IT, and CxOs

Twitter: @DellTech, #DellTechWorld | Web: delltechnologiesworld.com/index.htm | Date: May 2-5 | Location: Las Vegas, Nevada, USA | Cost: $2,295 until February 28; $2,495 from March 1 to May 5

Attendees can learn about what Dell sees on the horizon, as well as develop new skills and strategies to advance their careers and refine their road maps for the future. They'll also get hands-on time with up-and-coming technologies and be able to meet experts who work on those technologies.

Who should attend: IT pros, business managers, Dell customers, and partners

Twitter: @KubeCon_, @CloudNativeFdn, #CloudNativeCon | Web: events.linuxfoundation.org/kubecon-cloudnativecon-europe | Date: May 16-20 | Location: Valencia, Spain | Cost: TBD

KubeCon and CloudNativeCon are a single conference sponsored by the Linux Foundation and the Cloud Native Computing Foundation (CNCF). The conference brings together leading contributors in cloud-native applications, containers, microservices, and orchestration.

Who should attend: Application developers, IT operations staff, technical managers, executive leadership, end users, product managers, product marketing executives, service providers, CNCF contributors, and people looking to learn more about cloud-native

Twitter: @DockerCon, #DockerCon | Web: docker.com/dockercon | Date: May 10 | Location: Virtual | Cost: Free

DockerCon is a free, immersive online experience complete with product demos; breakout sessions; deep technical sessions from Docker and its partners, experts, community members, and luminaries from across the industry; and much more. Attendees can connect with colleagues from around the world at one of the largest developer conferences of the year.

Who should attend: Developers, DevOps engineers, CxOs, and managers

Twitter: @CiscoLive, #CLUS | Web: ciscolive.com/us/ | Date: June 12-16 | Location: Las Vegas, Nevada, USA, and virtual | Cost: In-person event, $795 to $2,795, with early-bird pricing ($725 to $2,595) available through May 16; virtual event, free

Cisco's annual user conference is designed to inform attendees about the company's latest products and technology strategies for networking, communications, security, and collaboration.

Who should attend: Cisco customers from IT and business areas

Twitter: @Monitorama, #monitorama | Web: monitorama.com | Date: June 27-29 | Location: Portland, Oregon, USA | Cost: $700

Monitorama has become popular thanks to its commitment to purely technical content without a lot of vendor fluff. The conference brings together the biggest names from the open-source development and operations communities, who teach attendees about the tools and techniques that are used in some of the largest web architectures in the world.

Its focus is strictly on monitoring and observability in software systems, which the organizers feel is an area in much need of attention. The goal of the organizers is to continue to push the boundaries of monitoring software, while having a great time in a casual setting.

Who should attend: Developers and DevOps engineers, operations staff, performance testers, and site reliability engineers

Twitter: @VMworld, #VMworld | Web: vmworld.com/en/us/index.html | Date: August 29-September 1 | Locations: San Francisco, California, USA; a sister conference will be held in Barcelona, Spain, November 7-10 | Cost: TBD

This conference offers sessions on the trends relevant to business and IT. It also includes breakout sessions, group discussions, hands-on labs, VMware certification opportunities, expert panels, and one-on-one appointments with leading subject-matter experts. Attendees will learn how to deliver modern apps and secure them, manage clouds in any environment, seamlessly support an "anywhere workspace," and accelerate business innovation from all their apps in a multi-cloud world.

Who should attend: System admins, IT engineers, software architects, vice presidents of IT, and CxOs

Twitter: @Spiceworks | Web: spiceworks.com/spiceworld | Date: September 28-30 | Location: Austin, Texas, USA, and virtual | Cost: TBD

Spiceworld brings together thousands of IT pros, dozens of sponsoring vendors, and hundreds of tech marketers for three days of practical how-to sessions, tech conversations with key vendors, in-the-trenches stories from IT pros, networking, and "tons of fun," according to the organizers.

Who should attend: IT managers, operations engineers, help desk staff, and system admins

Twitter: @googlecloud | Web: cloud.withgoogle.com/next/sf/ | Date (2021): October 12-14 | Location (2021): Virtual | Cost: Free

Google Cloud Next focuses on Google's cloud services (infrastructure-as-a-service and platform-as-a-service) for businesses. Tracks include infrastructure and operations, app development, and data and analytics.

Who should attend: IT Ops pros and developers using Google Cloud Platform services

Twitter: @BigDataAITO, #BigDataTO | Web: bigdata-toronto.com | Date (2021): October 13-14 | Location: Virtual | Cost (2021): $299, with time-sensitive discounts available

A conference and trade show, Big Data Toronto, which is colocated with AI Toronto, brings together a diverse group of data analysts, data managers, and decision makers to explore and discuss insights, showcase the latest projects, and connect with their peers. The event features more than 150 speakers and over 20 exhibitors.

Who should attend: Data scientists, data analysts, and business analysts

Twitter: #GartnerSYM | Web: gartner.com/en/conferences/na/symposium-us | Date: October 17-20 | Location: Orlando, Florida, USA | Cost: Standard price, $6,675; public-sector price, $4,975

Gartner Symposium/ITxpo is aimed specifically at CIOs and technology executives in general, addressing topics from an enterprise IT perspective. These include mobility, cybersecurity, cloud computing, application architecture, application development, the Internet of Things, and digital business.

Who should attend: CIOs and senior IT execs

Twitter: @451Research | Web: spglobal.com/marketintelligence/en/events/webinars/451-nexus | Date (2021): October 19-20 | Location (2021): Virtual | Cost (2021): Free

Formerly known as the Hosting & Cloud Transformation Summit, 451Nexus is a forum for executives in the business of enterprise IT technology. The agenda is set by 451 Research analysts to provide insight into the competitive dynamics of innovation and to offer practical guidance on designing and implementing effective IT strategies.

Who should attend: Technology vendors and managed service providers, IT end users, financial professionals, and investors

Twitter: @MS_Ignite, #MSIgnite | Web: microsoft.com/en-us/ignite | Date (2021): November 2-4 | Location (2021): Virtual | Cost (2021): Free with registration

Microsoft Ignite allows attendees to explore the latest tools, receive deep technical training, and have questions answered by Microsoft experts. Ignite covers architecture, deployment, implementation and migration, development, operations and management, security, access management and compliance, and usage and adoption.

Who should attend: IT pros, decision makers, implementers, architects, developers, and data professionals

Twitter: #SMWorld | Web: smworld.com | Date: November 12-16 | Location: Orlando, Florida, USA | Cost: TBD

This event is staged by HDI, an events and services organization for the technical support and services industry. The event includes an expo hall, training sessions, learning tracks, and keynote speeches.

Who should attend: Service and technical support professionals

Twitter: @AWSreInvent, #reInvent | Web: reinvent.awsevents.com | Date (2021): November 29-December 3 | Location (2021): Las Vegas, Nevada, USA (virtual, but live keynotes and leadership sessions; breakout sessions on demand) | Cost (2021): In-person, $1,799; virtual, free

AWS re:Invent is the Amazon Web Services annual user conference, which brings customers together to network, engage, and learn more about AWS. The virtual event features breakout sessions, keynotes, and live content.

Who should attend: AWS customers, developers and engineers, system administrators, and systems architects

Twitter: @salesforce, @Dreamforce, #DF20 | Web: salesforce.com/form/dreamforce | Date (2021): December 9 | Location (2021): Virtual | Cost (2021): Free

Sponsored by Salesforce, Dreamforce to You is "a completely reimagined Dreamforce experience for the work-from-anywhere world," organizers said. At the event, attendees will hear about Salesforce's customer successes. They'll also have some fun and learn from one another. This event will highlight relevant conversations and showcase innovations geared for this new, all-digital world.

Who should attend: Salesforce customers

Twitter: #gartnerio | Web: gartner.com/en/conferences/emea/infrastructure-operations-cloud-uk, gartner.com/en/conferences/na/infrastructure-operations-cloud-us | Date (2021): December 22-23 | Location (2021): Europe, Africa, and Middle East, and virtual | Cost (2021): Standard price, 1,275; public-sector price, 850

This conference primarily focuses on scaling DevOps, but also addresses cloud computing and operations automation. Attendees come to learn about the biggest IT infrastructure and operations challenges, priorities, and trends.

Who should attend: Infrastructure and operations executives and strategists, IT operations managers, data center and infrastructure managers, infrastructure and operations architects, and project leaders

***

Review the options and make your choices soon: Prices may vary based on how early you register. Also, remember that hotel and travel costs are generally separate from the conference pricing.

We've listed them all, although not all dates, locations, and pricing were available at publication time, especially for those events taking place later in the year. In those cases, we have provided historical information aboutthe event to give you an idea of what to expect and what you'll get out of attending.


Amazon Web Services to further tap cloud biz in Chinese market – Chinadaily USA

Posted: at 11:42 am

Attendees at Amazon.com Inc's annual cloud computing conference walk past the Amazon Web Services logo in Las Vegas, Nevada, US, on Nov 30, 2017. [Photo/Agencies]

Amazon Web Services, the cloud service platform of US technology giant Amazon, is banking on the burgeoning cloud computing market in China and ramping up efforts to offer more cloud services to help Chinese enterprises in digital transformation.

China is, and will continue to be, one of Amazon Web Services' most strategically important markets, said Elaine Chang, corporate vice-president and managing director of AWS China.

AWS has been increasing its investment in the Chinese market to build an innovation engine for bolstering the digital transformation in various industries and fueling the rapid development of China's digital economy, Chang said.

The number of new features and services launched in the AWS China Regions grew by 50 percent year-on-year in the first half of the year, the company said.

"The digital wave has swept through all industries, both in China and globally, and cloud computing is a key element of digital transformation. We help Chinese enterprises accelerate innovation, reinvent businesses and build smart industries by introducing leading global technology and practical experience," Chang said.

With its global infrastructure, industry-leading security expertise and compliance practice, AWS helps Chinese companies gain access to best-in-class technologies and services in overseas markets to enhance their competitiveness and accelerate globalization.

AWS came to China in 2013, and has since been investing and expanding its infrastructure and business. It launched AWS China (Beijing) Region, operated by Beijing Sinnet Technology Co Ltd, in 2016, and AWS China (Ningxia) Region, operated by Ningxia Western Cloud Data Technology Co Ltd, in 2017.

The company has increased its investment in China this year, such as expanding its Ningxia Region by adding 130 percent more computing capacity compared to the first phase, and adding a third availability zone in the Beijing Region.

China's overall cloud computing market increased 56.6 percent to 209.1 billion yuan ($32.9 billion) last year, according to the China Academy of Information and Communications Technology. The market is expected to grow rapidly in the next three years and reach nearly 400 billion yuan by 2023.

In addition, AWS has upgraded its strategic collaboration with auditing firm Deloitte in China. The two companies plan to carry out close collaboration in four vertical industries, including auto, healthcare and life science, retail and financial services.

"As one of the leaders in the global public cloud market, the acceleration of AWS in cloud services in China will effectively provide more competitive options for enterprises in China and worldwide to modernize applications and drive digital transformation," said Charlie Dai, a principal analyst at Forrester, a business strategy and economic consultancy.

At present, the scale of the cloud computing industry is growing rapidly, and competition in the domestic market is becoming more intense.

Cloud infrastructure services expenditure in China grew 43 percent year-on-year in the third quarter to $7.2 billion, said a report from Canalys, a global technology market analysis company.

Alibaba Cloud remained the market leader with a 38.3 percent share of total cloud infrastructure spending in China, while Huawei Cloud was the second largest provider, with a 17 percent market share. Tencent Cloud and Baidu AI Cloud ranked third and fourth, respectively.

The report noted that AWS and Microsoft Azure have both announced their intention to expand their presence in China through existing partnerships with local companies.

Chen Jiachun, an official from the information and communications development department at the Ministry of Industry and Information Technology, said cloud computing services have expanded from e-commerce, government affairs and finance to manufacturing, healthcare, agriculture and other fields.

"Cloud computing is promoting more enterprises to step up digital transformation. It has gradually become an important engine driving the transformation and upgrading of traditional industries and empowering China's digital economy," Chen said.

Li Wei, deputy director of the Cloud Computing and Big Data Research Institute under CAICT, said the COVID-19 pandemic has accelerated the development of cloud services and cloud computing applications, which has played a vital role in bolstering the development of the digital economy.

Read the rest here:

Amazon Web Services to further tap cloud biz in Chinese market - Chinadaily USA

Posted in Cloud Computing | Comments Off on Amazon Web Services to further tap cloud biz in Chinese market – Chinadaily USA

DeepBrain Chain Computing Power Mainnet Launches Online, Meaning All GPU Servers Can Now Freely Connect to the DBC Network, All Information Available…

Posted: at 11:42 am


Singapore, Singapore--(Newsfile Corp. - December 17, 2021) - With the advent of a digital era represented by the metaverse and AI, high-performance computing power will become the most important basic resource. As the most important computing infrastructure in the Web3 world, DeepBrain Chain can help solve the problems faced in the field of computing power and empower the digital era.


DeepBrain Chain - Distributed high-performance GPU computing network

DeepBrain Chain was founded in 2017 with the vision of building an infinitely scalable, distributed high-performance computing network based on blockchain technology, aiming to become the most important computing infrastructure in the era of 5G, AI and the metaverse. DeepBrain Chain is an open-source GPU computing power pool and GPU cloud platform, which means anyone can become a contributor or user of computing power on the network. Whether they hold idle GPU devices that meet the network's requirements or operate as professional GPU computing providers, suppliers can join the DeepBrain Chain system without restriction and earn incentives for contributing computing power. Computing power users, in turn, can obtain high-quality, cost-friendly computing services paid for in DeepBrain Chain's native token, DBC, creating a decentralized ecosystem of computing power supply and demand.

DeepBrain Chain consists of three main parts: the high-performance computing network, the blockchain mainnet and the GPU computing mainnet. The high-performance computing network officially launched at the end of 2018, and the blockchain mainnet followed on May 20, 2021. After nearly four months of public testing, the GPU computing mainnet officially launched on November 20.

DeepBrain Chain's main chain is built on Polkadot's Substrate framework, making it part of the Polkadot ecosystem. The distributed computing network is the computing power supply center of DeepBrain Chain and works in tandem with the blockchain network, while computing power users access services through the DeepBrain Chain cloud platform, which can be regarded as the client end. The overall system architecture is relatively complex, and the computing network it builds has two main advantages: global service capability and strong computing resources.

The launch of the DeepBrain Chain GPU computing mainnet means that anyone in the world can freely join the network with GPU resources that meet its requirements, and anyone can freely rent GPU resources on the network to support their business. All transactions are traceable on the chain, realizing complete decentralization.

The Ability to Serve the Globe

Traditional centralized computing platforms may only be able to serve users in certain regions because of trust factors such as data security, making it difficult to expand globally. Such large centralized providers also tend to concentrate their data centers in remote areas with fewer natural disasters, which makes it hard to meet the proximity computing requirements of different territories, particularly for demanding application scenarios such as autonomous driving.

DeepBrain Chain's computing power is distributed by design, and the introduction of blockchain technology largely resolves the trust issue. By moving computing power on chain and distributing the configuration of terminals, DeepBrain Chain as a platform does not control any individual machine. Computing resources are allocated through smart contracts, and all economic behavior (token pledges, resource contributions) is recorded on the chain. In general, DeepBrain Chain is trustworthy and not affected by geopolitical factors.

As a distributed cloud computing network, DeepBrain Chain draws its computing power supply from all over the world. Supply nodes everywhere can automatically be turned into metropolitan nodes and edge nodes to meet nearby computing demand, a single node failure does not interrupt the GPU computing power supply, and the system as a whole becomes more fault-tolerant because of this decentralization.

Powerful And Inexpensive High-Performance Computing Resources As Support

At present, mainstream cloud computing providers usually concentrate their computing power in relatively closed fashion across multiple data centers consisting of hundreds of thousands of CPU-centric servers, so as to continuously provide computing services to the global network. As market demand surges, such cloud providers further expand their hardware, but the overall price of computing power remains very high.

AI, for example, requires a huge supply of computing power to run. GPU computing hardware can cost anywhere from hundreds of thousands to millions of dollars, and some AI projects, such as AlphaGo, which once beat Go master Lee Sedol, spent hundreds of thousands of dollars training a single model. The cost of computing is one of the factors holding back the development of AI.

DeepBrain Chain allows GPU computing servers all over the world to become its nodes, giving it theoretically unlimited scalability: any computing power provider that meets the conditions can become a supply node and earn revenue. Professional GPU providers host their servers in IDC server rooms rated T3 or higher to ensure stability, then install DBC software on the servers to access the DBC computing power network. Idle computing power can also be connected to DeepBrain Chain's mining pool to improve GPU utilization and earn extra income. A large amount of distributed computing power is therefore gathered in DeepBrain Chain, and its cost is much lower than on centralized platforms, greatly reducing the cost of acquiring GPU computing power.

Although the DeepBrain Chain model and the current mainstream cloud computing platforms may appear to compete, mainstream platforms such as Alibaba Cloud and Amazon Web Services can themselves join the DBC network as computing nodes and earn revenue, so DeepBrain Chain and these computing suppliers are in a relationship that is both competitive and cooperative.

In a nutshell, computing power and energy sustainability are both core constraints on, and investment opportunities in, the metaverse. The opportunities spawned by the metaverse will not be limited to GPUs, 3D graphics engines, cloud computing and IDC, high-speed wireless communication, internet and gaming platforms, digital twin cities, or sustainable energy such as solar power for the industrial metaverse. In particular, the decentralized ecosystem of DeepBrain Chain, with its focus on high-performance GPU computing power, provides high-performance computing resources for science and technology and is positioned in a huge blue-ocean market. With the launch of the DeepBrain Chain mainnet, everyone will be able to participate and enjoy the dividends of the metaverse era.

Empowering the Metaverse and AI

DFINITY, which has a higher profile, also focuses on the decentralized computing power market, but it mainly targets CPU computing power, while DeepBrain Chain focuses on GPU computing power, an important difference between the two.

Both CPUs and GPUs produce computing power, but CPUs are mainly used for complex logic calculations, while GPUs, as special processors with hundreds or thousands of cores, can perform massively parallel calculations and are better suited to visual rendering and deep learning workloads. For these workloads, GPUs provide faster and cheaper computing power than CPUs, often at as little as one-tenth of the cost.
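As a rough illustration of that difference, the short Python sketch below times the same large matrix multiplication on the CPU and then on a GPU, where the work is spread across thousands of cores. It is not DeepBrain Chain code; it assumes PyTorch is installed and a CUDA-capable GPU is available, and the matrix size is an arbitrary choice for demonstration.

import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    # Build two random n-by-n matrices directly on the chosen device.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # ensure setup has finished before timing
    start = time.perf_counter()
    c = a @ b  # on the GPU, this product is computed by many cores in parallel
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")

On typical hardware the GPU timing is an order of magnitude or more below the CPU timing for this kind of dense, parallel arithmetic, which is the property the article is pointing to.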

GPU computing power is now deeply embedded in artificial intelligence, cloud gaming, autonomous driving, weather forecasting, cosmic observation and other scenarios that need high-end computing supply. Demand for GPU power in these industries is surging, and future market demand for GPU computing power will be much higher than for CPU computing power.

DFINITY is therefore mainly dedicated to putting popular network applications on the blockchain, such as decentralized information websites and chat software, while DeepBrain Chain is better suited to serving high-performance computing needs such as artificial intelligence, cloud gaming and deep learning.

The founder of DeepBrain Chain, a veteran AI entrepreneur, has said that DeepBrain Chain was built from the outset to combine AI with blockchain in order to reduce the cost of the massive computation AI requires. The total global market for AI-powered hardware and software infrastructure is expected to grow to $115.4 billion by 2025.

The artificial intelligence space spans a wide range of fields, and AI-driven infrastructure accounts for 70% of the total. Popular technology areas such as autonomous driving, robotics and the high-end Internet of Things are interwoven with AI, which means that by empowering the AI segment DeepBrain Chain can help drive the development of the whole technology field. The computing power required for AI currently doubles every two months, so the supply of the new computing infrastructure carrying AI will directly affect the pace of AI innovation and the industrial application of AI. High-performance computing and the AI industry driven by GPU power are set to grow exponentially over the next few years.

Some AI research fields already favor the services provided by the DeepBrain Chain system. Since 2019, DeepBrain Chain's AI developer users have come from more than 500 universities in China and abroad; many universities offering AI majors have teachers or students using DeepBrain Chain's GPU computing network, with application scenarios covering cloud gaming, artificial intelligence, autonomous driving, blockchain and visual rendering. AI developer users on DeepBrain Chain now exceed 20,000, more than 50 GPU cloud platforms, including Congtu.cloud, 1024lab and Deepshare.net, have been built on the network, and its enterprise customers number in the hundreds.

A metaverse is a very complex virtual ecosystem that needs a great deal of computing power to support it. Building large numbers of 3D scenes requires large-scale rendering, and multi-user interaction in a shared space requires further algorithmic support, such as distance-aware voice interaction in crowded scenes and the motion capture and real-time rendering of many users' actions, all of which creates heavy rendering workloads and strict low-latency requirements. In addition, an open metaverse ecosystem and the user-generated content (UGC) created by large numbers of users demand massive computation and many AI-driven scenes.

Large artificial intelligence models will serve as the brains of the metaverse's ecosystem. AI training uses advanced data, tensor and pipeline parallelism techniques that allow the training of large language models to be distributed efficiently across thousands of GPUs, so the construction of the metaverse clearly depends heavily on the development of AI technology.
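As a small, hedged sketch of the simplest of those techniques, data parallelism, the Python example below uses PyTorch's nn.DataParallel to replicate a toy model across whatever GPUs are visible and split each batch between them. The model, batch size and hyperparameters are arbitrary assumptions for illustration and are not drawn from the article or from any particular training system.

import torch
import torch.nn as nn

# A toy classifier standing in for a much larger model.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))

if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU; each replica gets a slice of the batch.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step: a batch of 256 random samples is split across the GPUs.
x = torch.randn(256, 512, device=device)
y = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # gradients from the per-GPU replicas are accumulated on the original model
optimizer.step()
print(f"loss: {loss.item():.4f}")

Real large-model training combines this kind of data parallelism with tensor and pipeline parallelism across many machines, but the batch-splitting idea shown here is the basic building block.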

With the convergence of 5G and AIoT and the arrival of the metaverse era, the global computing industry is entering an era of high-performance computing plus edge intelligence, and the massive, real-time, distributed and inexpensive high-performance GPU computing power provided by the DeepBrain Chain network has become, in the company's view, the most important computing infrastructure of the AI-plus-metaverse era.

In short, the distributed GPU computing power ecosystem built by DeepBrain Chain will help break through the bottlenecks currently facing the computing field, accelerate the arrival of the digital era, and become one of the most important infrastructures of the Web3.0 world.

Media contact
Contact: May
Company Name: DEEPBRAIN CHAIN FOUNDATION LTD.
Website: http://www.deepbrainchain.org
Email: may@deepbrainchain.org

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/107943

See the article here:

DeepBrain Chain Computing Power Mainnet Launches Online, Meaning All GPU Servers Can Now Freely Connect to the DBC Network, All Information Available...

Posted in Cloud Computing | Comments Off on DeepBrain Chain Computing Power Mainnet Launches Online, Meaning All GPU Servers Can Now Freely Connect to the DBC Network, All Information Available…

Asseco Poland S A : Cloud will have its headquarters in Szczecin – marketscreener.com

Posted: at 11:42 am

Asseco Cloud, an Asseco Poland company that supports businesses and institutions in designing, implementing and operating cloud solutions, will have its headquarters in Szczecin. It intends to support the city in developing modern IT services and will create new jobs. The company, established in September this year, is currently building its structures and will recruit IT specialists from Szczecin and the West Pomeranian Voivodeship.

Asseco Cloud is the Asseco Group's entity that focuses on strategic resources and competencies in the area of cloud computing. It uses its own resources, data centers and IT infrastructure in order to provide customers with optimum cloud services. It offers its proprietary solutions as well as those of leading cloud providers. It ensures full support from design to implementation, and delivers expert knowledge.

"Asseco has long been associated with Szczecin. It is here that we have located our important business division, Certum, a part of Asseco Data Systems responsible for electronic signatures and SSL certificates and a leader in trust services in Poland. One of our data centers is also located in this city. By locating the new Asseco Cloud in Szczecin, we wish to develop broad cooperation with the City, support the region and local business in developing modern IT services, and contribute to improving the attractiveness of Szczecin and Western Pomerania for employees, investors, entrepreneurs and students," says Andrzej Dopierała, Vice President of the Management Board of Asseco Poland and Vice Chairman of the Supervisory Board of Asseco Cloud.

"I am glad that a giant of the IT industry, which Asseco undoubtedly is, is opening its next company in Szczecin. This is a good place to live, work and fulfil one's potential. I am sure this will also be another chapter of cooperation between Asseco and the City. I am looking forward to many fruitful projects, further development, and to seeing you in Szczecin," says Piotr Krzystek, Mayor of Szczecin.

The value of the global cloud market is expected to grow to $937 billion by 2027, and cloud computing's share of global IT spending will also increase, with public cloud spending growing by around 18% to about $304.9 billion.

"Asseco Cloud is our response to the enormous economic demand for cloud services. Already today, companies allocate a third of their IT investments to cloud computing. With our own data centers, IT infrastructure and high-end competencies, we want to serve clients from the Polish market and, in the longer term, also from the European market. To do so, we will need high-class IT specialists, whom we want to recruit locally. Currently, Asseco Cloud employs more than 40 people in Szczecin; ultimately, we plan to grow the team to 100 people," says Lech Szczuka, President of the Management Board of Asseco Cloud.

For more information about Asseco Cloud, please see https://www.asseco.cloud/.

****

Asseco is the largest IT company in Poland and Central and Eastern Europe. For 30 years it has been creating technologically advanced software for companies in key sectors of the economy. The company is present in 60 countries worldwide and employs over 29 thousand people. It has been expanding both organically and through acquisitions. Asseco companies are listed on the Warsaw Stock Exchange (WSE), NASDAQ and the Tel Aviv Stock Exchange.

Asseco Cloud is an IT company of the Asseco Group, specializing in the design, supply, implementation and maintenance of cloud solutions. It executes implementations based on its proprietary solutions and those of leading cloud providers, while offering full support from design to implementation and providing expert knowledge. The company's offer includes services based on a private cloud, preferred by customers from the public or regulated sectors, and solutions in the multi-cloud model, based on the public cloud of global providers.

View original post here:

Asseco Poland S A : Cloud will have its headquarters in Szczecin - marketscreener.com

Posted in Cloud Computing | Comments Off on Asseco Poland S A : Cloud will have its headquarters in Szczecin – marketscreener.com