The Prometheus League
Breaking News and Updates
Category Archives: Cloud Computing
SAP to offer cloud computing – The News International
Posted: March 11, 2022 at 12:04 pm
KARACHI: Global tech giant SAP has announced plans to leverage cloud computing technology in Pakistan as part of its plans for 2022, The News learnt on Friday.
The facility would help organisations reduce complexity, optimise costs, enable real-time decision making, and support new digital business innovation, SAP informed.
The decision has been taken to support and accelerate digital transformation in the country, announced Saquib Ahmad, managing director of SAP Pakistan, during a media session organised at a local hotel in Karachi.
Cloud computing, which involves delivering hosted services over the internet, is recognised as a core technological building block for digital innovation, and businesses prefer adopting a cloud-first strategy in their IT operations.
Talking about SAP's existing portfolio for the promotion of cloud strategy across the public and private sectors, Ahmad explained how SAP's cloud offerings enable companies to optimise their end-to-end processes and innovate with new capabilities in the cloud, while reducing operating costs, improving productivity, and unlocking new opportunities for growth.
He further underscored SAP's plans to increase its efforts to provide better online security for its clients, highlighting security as one of the persistent issues organisations face in an online world.
Will HPC Be Eaten By Hyperscalers And Clouds? – The Next Platform
Posted: at 12:04 pm
Some of the most important luminaries in the HPC sector have spoken from on high, and their conclusions about the future of the HPC market are probably going to shock a lot of people.
In a paper called "Reinventing High Performance Computing: Challenges and Opportunities," written by Jack Dongarra at the University of Tennessee and Oak Ridge National Laboratory (Jack, didn't you retire?), Dan Reed at the University of Utah, and Dennis Gannon, formerly of Indiana University and Microsoft, we get a fascinating historical view of HPC systems and then some straight talk about how the HPC industry needs to collaborate more tightly with the hyperscalers and cloud builders, for a lot of technical and economic reasons.
Many in the HPC market have no doubt been thinking along the same lines. It is in the zeitgeist, being transported on the cosmic Ethernet. (And sometimes InfiniBand, where low latency matters.) And by the way, we are not happy about any of this, as we imagine you are not either. We like the diversity of architectures, techniques, and technologies that the HPC market has developed over the years. But we also have to admit that the technology trickle-down effect (where advanced designs eventually make their way down into large enterprises and then everywhere else) did not happen at the speed, or to the extent, that we had hoped it might over the decades we have been watching this portion of the IT space.
As usual, the details of the scenario painted in this paper and the conclusions that the authors draw are many and insightful, and we agree wholeheartedly that there are tectonic forces at play in the upper echelons of computing. Frankly, we founded The Next Platform with this idea in mind and used the same language, and in recent years have also wondered how long the market for systems that are tuned specifically for HPC simulation and modeling would hold out against the scale of compute and investment by the hyperscalers and cloud builders of the world.
The paper's authors have a much better metaphor for contrasting large-scale HPC system development, and that is to look at it like a chemical reaction. HPC investments, especially for capability-class machines, are endothermic, meaning they require infusions of capital from governments and academia to cover the engineering costs of designing and producing advanced systems. But investments in large-scale machinery at the hyperscalers and cloud builders are exothermic, meaning they generate cash; among the Magnificent Seven of Amazon, Microsoft, Google, Facebook, Alibaba, Baidu, and Tencent, it is enormous amounts of money. We would go so far as to say that the reaction is volcanic among the hyperscalers and cloud builders, which is exothermic with extreme attitude. Enough to melt rock and build mountains.
The geography of the IT sector has been utterly transformed by these seven continents of compute, and we all know it, and importantly, so does the HPC community that is trying to get to exascale and contemplating 10 exascale and even zettascale.
"Economies of scale first fueled commodity HPC clusters and attracted the interest of vendors as large-scale demonstrations of leading edge technology," the authors write in the paper. "Today, the even larger economies of scale of cloud computing vendors has diminished the influence of high-performance computing on future chip and system designs. No longer do chip vendors look to HPC deployments of large clusters as flagship technology demonstrations that will drive larger market uptake."
The list of truisms that Dongarra, Reed, and Gannon outline as they survey the landscape is unequivocal.
There is no question that the HPC and hyperscaler/cloud camps have been somewhat allergic to each other over the past decade or two, although there has been some cross-pollination in recent years, with both people and technologies from the HPC sector being employed by the hyperscalers and clouds, mostly to attract HPC simulation and modeling workloads, but also because of the inherent benefits of technologies such as MPI or InfiniBand when it comes to driving the key machine learning workloads that have made the hyperscalers and cloud builders standard bearers for the New HPC. They didn't invent the ideas behind machine learning (the phone company did), but they did have the big data and the massive compute scale to perfect it, and they are also going to be the ones building the metaverse, or metaverses, that are really just humungous simulations, driven by the basic principles of physics, done in real time.
What it comes down to is that standalone HPC in the national and academic labs takes money and has to constantly justify the architectures and funding for the machines that run their codes, and that the traditional HPC vendors (so many of them are gone now) could not generate enough revenue, much less profit, to stay in the game. HPC vendors were more of a public-private partnership than Wall Street ever wanted to think about or the vendors ever wanted to admit. And when they made any profits, it was never sustainable, just like being a server OEM is getting close to not being sustainable due to the enormous buying power of the hyperscalers and cloud builders.
We will bet a Cray-1 supercomputer assembled from parts acquired on eBay that the hyperscalers and cloud builders will figure out how to make money on HPC, and they will do it by offering applications as a service, not just infrastructure. National and academic labs will partner there and get their share of the cloud budget pool, and in cases where data sovereignty and security concerns are particularly high, the clouds will offer HPC outposts or whole dedicated datacenters, shared among the labs and securely away from other enterprise workloads. And moreover, the cloud makers will snap up the successful AI hardware vendors or design their own AI accelerator chips (as AWS and Google do), and the HPC community will learn to port their routines to these devices as well as CPUs, GPUs, and FPGAs. In the longest of runs, people will recode HPC algorithms to run on a Google TPU or an AWS Trainium. No, this will not be easy. But HPC will have to ride the coattails of AI, because otherwise it will diverge from the same hardware path and not be an affordable endeavor.
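To make that porting idea concrete, here is a minimal sketch (ours, not the paper's) of what accelerator-first HPC code looks like in JAX, which compiles the same Python for CPUs, GPUs, or Google TPUs. The one-dimensional heat-equation stencil is a generic illustration, not a kernel from any lab's codebase:

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whatever backend is attached: CPU, GPU, or TPU
def heat_step(u, alpha=0.1):
    # One explicit finite-difference step of the 1-D heat equation
    interior = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u.at[1:-1].set(interior)

u = jnp.zeros(1024).at[512].set(1.0)  # a point source of heat in the middle
for _ in range(100):
    u = heat_step(u)
print(float(u.sum()))  # total heat is conserved away from the boundaries, ~1.0
```

The recoding burden is real, but it is the same compiled kernel whether the backend is a CPU in a lab or a TPU pod in a hyperscale datacenter, which is precisely the coattail effect described above.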
As they prognosticate about the future of HPC, Dongarra, Reed, and Gannon outline the following six maxims that should be used to guide its evolution:
Maxim One: Semiconductor constraints dictate new approaches. There are constraints from Moore's Law slowing and Dennard scaling stopping, but it is more than that. We have foundry capacity issues and geopolitical problems arising from chip manufacturing, as well as the high cost of building chip factories, and there will need to be standards for interconnecting chiplets to allow easy integration of diverse components.
Maxim Two: End-to-end hardware/software co-design is essential. This is a given for HPC, and chiplet interconnect standards will help here. But we would counter that the hyperscalers and cloud builders limit the diversity of their server designs to drive up volumes. So just like AI learned to run on HPC iron back in the late 2000s, HPC will have to learn to run on the AI iron of the 2020s. And that AI iron will be located in the clouds.
Maxim Three: Prototyping at scale is required to test new ideas. We are not as optimistic as Dongarra, Reed, and Gannon that HPC-specific systems will be created (much less prototyped at scale) unless one of the clouds corners the market on specific HPC applications. Hyperscalers bend their software to fit cheaper iron, and they only create unique iron with homegrown compute engines when they feel they have no choice. They will adopt mainstream HPC/AI technologies every time, and HPC researchers are going to have to make do. In fact, that will be largely what the HPC jobs of the future will be: making legacy codes run on new clouds.
Maxim Four: The space of leading edge HPC applications is far broader now than in the past. And, as they point out, it is broader because of the injection of AI software and sometimes hardware technologies.
Maxim Five: Cloud economics have changed the supply chain ecosystem. Agreed, wholeheartedly, and this changes everything. Even if cloud capacity costs 5X to 10X as much as running it on premises, the cloud builders have so much capacity that every national and academic lab could be pulling in the same direction as they modernize codes for cloud infrastructure, and where that infrastructure is sitting doesn't matter. What matters is changing from CapEx to OpEx.
Maxim Six: The societal implications of technical issues really matter. This has always been a hard sell to the public, even if scientists get it, and the politicians of all the districts where supercomputing labs exist certainly don't want HPC centers to be Borged into the clouds. But they will get to brag about clouds and foundries, so they will adapt.
Investing in the future is never easy, but it is critical if we are to continue to develop and deploy new generations of high-performance computing systems, ones that leverage economic shifts, commercial practice, and emerging technologies. Let us be clear: the price of innovation keeps rising, the talent is following the money, and many of the traditional players (companies and countries) are struggling to keep up.
Welcome to the New HPC.
Author's Note: We read a good number of technical papers here at The Next Platform, and one of the games we like to play when we come across an interesting paper is to guess what it will conclude before we even read it.
This is an old and perhaps bad habit learned from a physics professor many decades ago, who admonished us to figure out the nature of the problem and estimate an answer, in our heads and out loud before the class, before we actually wrote down the first line to solve the problem. This was a form of error detection and correction, which is why we were taught to do it. And it kept you on your toes, too, because the class was at 8 am and you didn't know you were going to have to solve a problem until the professor threw a piece of chalk to you. (Not at you, but to you.)
So when we came across the paper outlined above, we immediately went into speculative execution mode and this popped out:
Just like we can't really have a publicly funded mail service that works right or a publicly funded space program that works right, in the long run we will not be able to justify the cost and hassle of bespoke HPC systems. The hyperscalers and cloud builders can now support HPC simulation and modeling, and have the tools, thanks to the metaverse, not only to calculate a digital twin of the physical world (and alternate universes with different laws of physics, if we want to go down those myriad roads) but to allow us to immerse ourselves in it to explore it. In short, HPC centers are going to be priced out of their own market, and that is because of the fundamental economics of the contrast between hyperscaler and HPC center. The HPC centers of the world drive the highest performance possible for specific applications at the highest practical budget, whereas the hyperscalers always drive performance and thermals for a wider set of applications at the lowest cost possible. The good news is that in the future HPC sector, scientists will be focusing on driving collections of algorithms and libraries, not on trying to architect iron and fund it.
We were pretty close.
Global PC and Server Industry Report 2022: PC and Desktop PC Market Volumes are Expected to Decline Slightly to 240 Million Units and Nearly 80…
Posted: at 12:04 pm
Dublin, March 11, 2022 (GLOBE NEWSWIRE) -- The "2022 Global PC and Server Industry Development Trends and Key Issues" report has been added to ResearchAndMarkets.com's offering.
Enterprises' return-to-office plans have stimulated demand for commercial PCs but have reduced demand for consumer and education PCs.
The global notebook PC and desktop PC market volumes are expected to decline slightly to 240 million units and nearly 80 million units, respectively, in 2022. Since motherboard market development is always highly correlated with desktop PCs, global motherboard market volume is estimated at between 85 million and 95 million units in the next five years.
Meanwhile, the global server market continues to grow, and increasing cloud computing applications based on 5G, AI, and edge computing technologies will become a major growth enabler for the server market in the next five years.
This report provides an overview of the global notebook PC, desktop PC, motherboard, and server markets with market volume forecasts for the period 2022-2025; looks into the latest industry and market trends; and examines the major strategies of key players.
List of Topics
Development of the global economy, touching on global economic growth rate predictions, pandemic development, and global inflation
Development of the global PC market, touching on key determinant factors for the market development in 2022
Global notebook PC and desktop PC market volume forecast for the period 2022-2025 with shipment share breakdown by brand and by ODM
Global motherboard market volume forecast for the period 2022-2025
Development of key PC products, touching on Chromebooks, gaming notebook PCs, gaming monitors, AIO (All In One) PC, etc.
Development of key PC brands, touching on Apple's in-house chip M1 and the deployment of Arm-based processors in macOS, Chrome OS, Windows OS ecosystems, and includes global notebook PC market share by CPU supplier
Development of the global server market and server market volume forecast for the period 2022-2025, identifying key components in short supply such as PMIC, BMC, and more
Global server shipment share by brand for the period 2020-2022, with shipment share breakdown by overall processor and by ODM Direct processor
Development strategy of major brands such as Dell, HP, and Inspur
M&A and in-house chip development strategies of major cloud service providers, including AWS, Microsoft Azure, and Google Cloud
Development of key server products and applications, touching on HPC (High Performance Computing), net zero carbon emissions, information security compliance, and more
Outlook for the 2022 IT industry, touching on product trends, market trends, and brand strategies of server brands and PC brands
Key Topics Covered:
Global Economic Development
Global PC Market Development
Development of Key PC Products and Applications
Global Server Market Development
Development of Key Server Products and Applications
Conclusion
Companies Mentioned
Acer
Actifio
Adobe
ADRM
Affirmed Networks
Alibaba
Alooma
AMD
Anvato
Apigee
Apple
AppSheet
Arm
ASUS
Avere Systems
Bebo
Bitium
BludData
BlueTalon
Bonsai
Cask Data
Citus Data
Cloud 9
Cloud Knox
CloudSimple
Cloudyn
Compal
CyberX
Cycle Computing
DataSense
Dell
E8 Storage
ECS
Elastifile
Fitbit
Foxconn
GameSparks
Genee
Github
Harvest.ai
Hexadite
HP
Huawei
IBM
Intel
Inventec
JClarity
Kaggle
Lenovo
Lobe
Looker
Maluuba
MediaTek
Metaswitch
Microsoft
Movere
Movial
MSI
Nice
North
Nuance
Nvidia
Oracle
Orbitera
Orions
Pegatron
Pointy
QCI
Qualcomm
Quanta
Qwiklabs
Want Monster Returns? 2 Unstoppable Tech Stocks to Buy and Hold for the Next Decade – The Motley Fool
Posted: at 12:04 pm
Technology is constantly evolving, and enterprises must keep pace with the latest innovations if they hope to remain competitive. Cloud computing is a perfect example. Today, businesses can provision cloud services through the internet, and that technology allows them to scale more quickly and operate more efficiently because they don't have to make sizable upfront investments in infrastructure or pay to maintain costly hardware.
As a result, research company Gartner believes enterprises will spend over $540 billion on cloud services this year, and that figure will surpass $915 billion by 2025. Companies like DigitalOcean (DOCN) and Arista Networks (ANET) are well-positioned to benefit from that unstoppable trend, and both stocks could produce monster returns over the next decade.
Here's what you should know about these two unstoppable stocks.
DigitalOcean is a cloud-services provider. Its portfolio includes infrastructure services like compute, storage, and networking, and a growing number of platform services, like a selection of fully managed databases, including MongoDB. More importantly, those products are designed with a click-and-go interface that simplifies cloud computing for small- and medium-sized businesses (SMBs). Thanks to DigitalOcean, clients can quickly build, deploy, and scale applications without specialized training, even if they don't have an IT department.
Doubling down on that niche, the company also provides 24/7 customer service and tech support to all clients, as well as an extensive collection of learning materials, including thousands of tutorials and community-generated questions and answers. In short, its focus on simplicity and support makes its platform perfect for SMBs, and that differentiates DigitalOcean from rivals like Amazon and Microsoft. While both cloud titans undoubtedly have a more robust lineup of services, their products are designed for large enterprises backed by big IT departments.
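As a rough illustration of that click-and-go simplicity, here is what provisioning a basic DigitalOcean virtual machine (a Droplet) looks like through the company's public v2 API. This is a sketch, not official sample code: the token is a placeholder, and the region, size, and image slugs are common published values.

```python
import requests

API_TOKEN = "YOUR_DO_TOKEN"  # placeholder; generated in the DigitalOcean control panel

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "example-web-01",
        "region": "nyc3",             # data-center slug
        "size": "s-1vcpu-1gb",        # smallest standard plan
        "image": "ubuntu-22-04-x64",  # base OS image
    },
    timeout=30,
)
resp.raise_for_status()
print("created droplet id:", resp.json()["droplet"]["id"])
```

The point is less the API itself than how little of it there is: one authenticated POST stands in for the rack-and-stack work an SMB without an IT department could never do itself.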
So far, DigitalOcean's approach is working. It now has 609,000 customers, and the average customer spent 16% more over the past year. During that time, revenue rose 35% to $428.6 million, and the company generated free cash flow (FCF) of $24 million, marking its first full year of positive FCF. Even so, DigitalOcean has hardly scratched the surface of its true potential.
The company puts its market opportunity at $145 billion by 2025, a figure that accounts for the 100 million SMBs globally. And with the stock trading at 11.1 times sales -- below its historical average of 15.1 times sales -- now looks like a good time to buy. Over the next decade, I wouldn't be surprised to see DigitalOcean grow tenfold in valuation, from $5.5 billion today to $55 billion by 2032.
Cloud computing has fundamentally changed the world, allowing businesses and consumers to access software and services through the internet. But it requires powerful IT infrastructure. Data centers owned by companies like Microsoft or Meta Platforms see traffic in the millions (and even billions) of users each day. Therefore, the underlying network must be fast and flexible. Arista Networks provides that technology.
Arista's core innovation is the Extensible Operating System (EOS), the software that powers its lineup of switching and routing platforms, allowing clients to deploy a seamless network across public clouds and private data centers. That approach differs from legacy vendors like Cisco Systems, which use multiple operating systems, making network management more costly and complex for clients. Additionally, Arista relies exclusively on merchant silicon, sourcing chips from suppliers like Broadcom and Intel. That means Arista can equip its devices with the latest silicon without spending money to develop those solutions in-house.
As a result, while Cisco still leads the broader data-center switching industry, Arista is the leader in the high-speed category, with ethernet switches that offer throughput of 100 gigabits per second (Gbps) and above. And that dominance has made Arista a financial machine. In 2021, revenue rose 27% to $2.9 billion, and free cash flow jumped 32% to $951.1 million.
More importantly, Arista's edge in the high-speed category bodes well for the future because that's where the industry is headed. As cloud computing becomes more common and applications become more data-intensive, data centers will need faster networking solutions. With that in mind, Arista puts its market opportunity at $35 billion by 2025, leaving plenty of room for growth. I wouldn't be surprised to see this $36 billion business grow fourfold over the next decade. That's why this unstoppable tech stock could supercharge your portfolio.
This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis (even one of our own) helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.
Combining the best of blockchain and cloud computing – MSUToday
Posted: February 17, 2022 at 8:12 am
If you ask Nick Ivanov about his research, he might tell you that he is creating an anti-corruption machine in response to the inequity and unfairness he has seen in the world around him. Ivanov, a doctoral candidate in the Department of Computer Science and Engineering, believes that decentralized computer systems can address these issues. Blockchain is a decentralized system: it is secure, but the security comes at the cost of performance. The cloud, by contrast, is a centralized system: it offers high performance and significant data storage, but one shortcoming is the potential for trust abuse.
As a fellow in the 2020 Cloud Computing Fellowship, Ivanov saw an opportunity to bring a long-term vision to life. He set out to create a system that combines the best features of blockchain and cloud computing, which he calls Blockumulus.
Blockumulus, named by combining the words blockchain and cumulus, is a groundbreaking system that combines the security of blockchain with the high performance of the cloud.
"Blockumulus allows us to preserve decentralization but make it much faster," explains Ivanov.
Existing research tries to address the limitations of blockchain by fixing the blockchain itself. The resulting improvements are still insufficient to meet large-scale demand. For example, Ivanov said, "none of the existing blockchains are capable of replacing credit card transactions."
"In our research, we use a completely different approach," he said. "Instead of fixing blockchain, we try to run smart contracts on the cloud and use blockchain to secure this process."
Ivanov has demonstrated that Blockumulus can process tens of thousands of simultaneous transactions, which is comparable to the worldwide throughput of credit card transactions.
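The article does not spell out the actual Blockumulus protocol, so treat the following as a toy hash-commitment sketch of the general division of labor it describes (fast off-chain execution in the cloud, with a tamper-evident digest anchored on-chain), not as Blockumulus itself:

```python
import hashlib
import json

def cloud_execute(state: dict, tx: dict) -> dict:
    # Fast, centralized step: the cloud applies the transaction to the
    # smart-contract state without waiting for blockchain consensus.
    return {**state, tx["key"]: tx["value"]}

def anchor_on_chain(state: dict) -> str:
    # Trust step: only a compact digest of the resulting state is committed
    # to the (slow but decentralized) blockchain, so later tampering by the
    # cloud would be detectable by anyone re-hashing the published state.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

state = cloud_execute({}, {"key": "balance/alice", "value": 100})
print(anchor_on_chain(state))  # the value a verifier would check on-chain
```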
Ivanov credits the Cloud Computing Fellowship, run by the Institute for Cyber-Enabled Research and the IT Services Analytics and Data Solutions group, for providing the resources needed for him to execute the research he had imagined for so long.
"I am very thankful for the opportunity to implement the project that would otherwise be impossible," Ivanov said.
Follow communications from ICER for more information about the Cloud Computing Fellowship and how you can get involved.
Ampere Goes Quantum: Get Your Qubits in the Cloud – AnandTech
Posted: at 8:12 am
When we talk about quantum computing, there is always a focus on what the quantum part of the solution is. Alongside those qubits is often a set of control circuitry, plus classical computing power to help make sense of what the quantum bit does. In this instance, classical computing is our typical day-to-day x86 or Arm (or others) with ones and zeros, rather than the wave functions of quantum computing. Of course, the drive for working quantum computers has been a tough slog, and to be honest, I'm not 100% convinced it's going to happen, but that doesn't mean that companies in the industry aren't working together on a solution. In this instance, we recently spoke with a quantum computing company called Rigetti, who are working with Ampere Computing, who make Arm-based cloud processors called Altra, and who are planning to introduce a hybrid quantum/classical solution for the cloud in 2023.
The striking thing about quantum computing has always been the extravagant hardware required: a golden steampunk chandelier of tubes and cables, all required to bring the temperature of the hardware down to within hundredths of a degree above absolute zero. This minimizes thermal effects on the elements of a quantum computer, known as the qubits. Depending on the type of qubit involved, those cables can carry microwave signals, and how the chandelier is constructed often determines how many qubits are involved.
Qubits are the quantum computational power, and the more you have, the more computing power there is on tap (exponentially more, in theory). However, because quantum computing doesn't deal in absolutes, sometimes those qubits are used for resiliency, which is needed in such extreme environments. You'll find that quantum computers list an effective number of qubits equivalent to the computational power, rather than the actual physical number present. Beyond that, there are different types of qubits.
Transmon qubits rely on superconducting electron pairs being controlled inside a three-dimensional cavity. A spin qubit controls individual electron spins with magnetic fields. Most companies use transmon qubits (Google, IBM, Rigetti), whereas Intel dropped its transmon development in favour of spin qubits. Exactly how many qubits a system needs to do useful work is a hot topic in the literature, although Google claims it has performed computation impossible on classical computers with only 53 physical transmon qubits (again, another hot topic for debate).
The ultimate goal of quantum computing is to enable computing resources that can solve classical problems whose compute requirements are impossible within reasonable time frames. The typical example is Shor's Algorithm, which finds the prime factors of a number in seconds (essentially breaking the underlying basis of cryptography, a computation that should take millions of years). Another example is solving a typically quantum-like system, such as chemistry and biochemical interactions. There is also optimization, going beyond the typical traveling salesman problem into machine learning; the idea is that quantum computing can assist training or inference by checking all possible answers simultaneously.
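For the Shor's Algorithm example, it helps to see how little work is left once the quantum part is done. The quantum computer's job is to find the period r of f(x) = a^x mod N; the classical post-processing below (standard number theory, shown here for N = 15) then recovers the factors with two gcd calls:

```python
from math import gcd

def shor_postprocess(N: int, a: int, r: int):
    """Turn the period r of f(x) = a**x % N (the quantum step's output)
    into non-trivial factors of N, when r is even."""
    if r % 2:
        return None  # odd period: retry with a different a
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

# For N = 15 and a = 7, f cycles 7, 4, 13, 1, ... so the period is r = 4
print(shor_postprocess(15, 7, 4))  # (3, 5)
```

Finding that period classically is what blows up to millions of years for cryptographically sized N; everything else is cheap.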
Quantum computing has always been seen as a future horizon for where high performance should go. However, it is one of those things that always seems 10-20 years away: in the early 2000s it was seen as 10-20 years away, and the same is true today. There are now, though, more startups and funded ventures willing to put in the research to get these systems up and running. One of those is Rigetti, and today brings an announcement of a collaboration with Ampere Computing.
For the last few years, there has been a focus on putting high-performance computational resources within reach of everyone. The offering of cloud computing, web services, and thousands of processors at your fingertips has never been more real, or easier to use. With enough money in your bucket, the cloud providers make it easy to spin up resources for storage, networking, services, or compute. Cloud computing like this is designed to scale as and when you need it. Rigetti wants to do the same with quantum computing.
Rigetti Computing, founded in 2013, is a Series C-funded quantum computing startup with a public $200m investment to date. Late last year, it announced the start of its new scalable quantum computing infrastructure with a chip containing 40 transmon-style qubits; multiple chips can be embedded onto a single package for a single quantum computing chandelier. The goal of these designs is to accelerate machine learning, both for quantum compute and classical compute, and as a result, they're partnering with Ampere Computing, which makes the Altra Max Arm-based CPUs.
The goal of the partnership is to provide a cloud-native solution combining both classical and quantum computing. Spinning up an instance would include some qubits and some cores, allowing customers to use standard machine learning APIs that would be naturally split across the two types of hardware. In this heterogeneous combination, the goal is to take advantage of the quantum system to do what it does best, and then leverage the traditional compute resources with the Altra Max CPUs for machine learning scale out.
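Rigetti's published Python SDK is pyQuil, and a hybrid workload already has roughly this shape today: a small quantum program is compiled and run on a QPU (or simulator), with everything before and after it being ordinary classical Python. The sketch below uses the pyQuil 3 API against the bundled two-qubit simulator; how the eventual cloud-hosted, Ampere-backed instances will expose this is our assumption, not an announced API.

```python
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

qc = get_qc("2q-qvm")  # local simulator; a Rigetti QPU lattice name goes here instead

p = Program()
ro = p.declare("ro", "BIT", 2)
p += H(0)               # put qubit 0 into superposition
p += CNOT(0, 1)         # entangle qubits 0 and 1 into a Bell pair
p += MEASURE(0, ro[0])
p += MEASURE(1, ro[1])
p.wrap_in_numshots_loop(1000)

result = qc.run(qc.compile(p))               # the quantum part
bits = result.readout_data.get("ro")         # classical post-processing from here on
print(sum(int(b[0] == b[1]) for b in bits))  # correlated shots: ~1000 of 1000
```

In the partnership's model, the classical post-processing at the bottom is exactly the part that would fan out across Altra Max cores.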
Rigetti says that its solution will scale to hundreds of qubits, while Ampere resources can scale as naturally as most compute can. Rigetti chose Ampere as a partner in this instance because of what the company can provide: Ampere always states that its processors are cloud-native, or built for the cloud, and that its 128-core chip can provide 1,024 cores in a traditional 2U server with Arm Neoverse N1 performance.
At this point in the partnership, Rigetti and Ampere are at work getting a combined system up and running. Right now, the Ampere CPUs are to be part of the coupled performance resource, although Rigetti says there could come a time when Ampere's hardware might replace the FPGAs in the control units of the quantum system itself. The partnership aims to start with a proof of concept, creating a local-to-Rigetti example of a cloud-native hybrid quantum/classical infrastructure, and creating a software stack optimized for machine learning. Rigetti says that it is already working with customers interested in the co-design, to give itself targets for software optimizations.
The timeline for the rollout is still early, with a proof of concept planned over the next few months, then deployment with tier 1 cloud partners through 2023. The idea is to initially work with key customers to help optimize their workflows for the combined hardware. Then it is simply a case of scaling out: more qubits for quantum, more CPUs for classical. Ampere is set to launch Siryn this year, its own custom Arm core built on a next-generation process node, and we were told that the scope is to bring in future Ampere generations as they are developed.
Rigetti says that it has made strides in making transmon qubits viable at scale. Intel dropped its transmon qubit program because it didn't think that approach could scale, but also because it could create spin qubits fairly easily (however, control is a different part of that story). Rigetti plans to scale to hundreds of qubits, allowing cloud customers to take a chunk of however many qubits they need at the time. One issue I brought up with them is synchronicity, and it sounds like they have a system that, in the traditional sense, can scale asynchronously. Rigetti believes there are elements of machine learning, both training and inference, that will scale with qubit count in this way.
Is quantum computing still a distant hope? The promise here is a hybrid product, with quantum and classical resources, for cloud customers in 2023. I fully expect that to be a viable use case. However, as always with quantum computing, the question remains: what problem is it solving, and is it better than classical?
Global Cloud Computing in Higher Education Market to 2027 – by Institute Type, Application, Ownership, Deployment and Region – PRNewswire
Posted: at 8:12 am
DUBLIN, Feb. 14, 2022 /PRNewswire/ -- The "Global Cloud Computing in Higher Education Market" report has been added to ResearchAndMarkets.com's offering.
The global Cloud Computing in Higher Education market held a market value of USD 2,182.4 million in 2020 and is forecast to reach USD 8,779.1 million by 2027. The market is anticipated to register a CAGR of 22% over the projected period.
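Those three figures are internally consistent; a quick compound-growth check over the seven years from 2020 to 2027:

```python
base_2020, forecast_2027, years = 2182.4, 8779.1, 7  # USD million
cagr = (forecast_2027 / base_2020) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 22.0%, matching the report's stated CAGR
```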
Cloud computing in higher education assists teachers, administrators, and students in their education-related activities. It helps teachers upload learning materials, students access their homework, and administrators collaborate easily with each other and save money on data storage. Increasing adoption of SaaS-based cloud platforms in higher education and the growing adoption of e-learning are anticipated to boost market growth. Furthermore, rising IT spending on cloud infrastructure in education, coupled with the increasing application of quantum computing in the education sector, is also expected to fuel market growth.
Despite the driving factors, cybersecurity and data protection risks are estimated to restrain market growth. Lack of compliance with SLAs and legal and jurisdictional issues are also estimated to hamper growth. Furthermore, the rigid design of cloud-based systems is expected to hinder market growth during the forecast period.
Growth Influencers:
Increasing adoption of SaaS-based cloud platforms in higher education
Adoption of SaaS-based cloud platforms has increased in many industries. In the higher education sector, adoption of these platforms accelerated during the COVID-19 pandemic, because the transition to virtual learning has driven many institutions to reevaluate the longevity of their technology stack. Furthermore, the various advantages associated with SaaS-based platforms are expected to boost market growth. These benefits include fewer IT demands and constraints on capacity, greater flexibility to meet needs, enhanced collaboration, less downtime, data recovery, enhanced security, and predictable monthly expenses. Therefore, increasing adoption of SaaS-based cloud platforms in higher education is estimated to fuel market growth.
Regional Overview:
Based on region, the global Cloud Computing in Higher Education market is divided into Europe, North America, Asia, Middle East, Africa, and South America.
The North America region is expected to hold the largest market share, around 29%, owing to the rising adoption of technologically advanced products in the U.S. and Canada. The Asia Pacific region is anticipated to witness the fastest growth rate, around 26.6%, owing to growing awareness of cloud computing technologies in the region.
Competitive Landscape:
Key players operating in the global Cloud Computing in Higher Education market include Adobe Systems, Inc., Alibaba Group, Cisco Systems, Inc., International Business Machines (IBM) Corporation, Netapp, Oracle Corporation, NEC Corporation, Microsoft Corporation, VMware, Inc., Amazon Web Services, Inc., Ellucian Company L.P., Dell EMC, Salesforce.com, SAP, and Blackboard, among others.
The approximate market share of the top 4 players is about 61%. These market players are engaged in mergers & acquisitions, collaborations, and new product launches to strengthen their market presence. For instance, in August 2021, Oracle was appointed by the Ministry of Electronics and Information Technology, Government of India (MeitY) to provide empanelled cloud infrastructure solutions.
The global Cloud Computing in Higher Education market report also provides insights on key market pointers and answers common questions about the market.
Key Topics Covered:
Chapter 1. Research Framework
Chapter 2. Research Methodology
Chapter 3. Executive Summary: Global Cloud Computing in Higher Education Market
Chapter 4. Global Cloud Computing in Higher Education Market Overview
4.1. Industry Value Chain Analysis
4.1.1. Software Developers
4.1.2. Technology Integrators
4.1.3. Service Providers/Owners
4.1.4. End-User
4.2. Technology Lifecycle
4.2.1. Early Adopters & Pioneers Use Cases
4.2.2. Digital Transformation Trend and Impact of Market Growth
4.3. Porter's Five Forces Analysis
4.3.1. Bargaining Power of Suppliers
4.3.2. Bargaining Power of Buyers
4.3.3. Threat of Substitutes
4.3.4. Threat of New Entrants
4.3.5. Degree of Competition
4.4. PEST Analysis
4.5. Market Dynamics and Trends
4.5.1. Growth Drivers
4.5.2. Restraints
4.5.3. Challenges
4.5.4. Key Trends
4.6. Competition Dashboard
4.6.1. Market Concentration Rate
4.6.2. Company Market Share Analysis (%), 2020
4.6.3. Competitor Mapping
4.7. Pricing Analysis
Chapter 5. Cloud Computing in Higher Education Market Analysis, By Institute Type
5.1. Key Insights
5.2. Market Size and Forecast, 2017 - 2027 (US$ Mn)
5.2.1. Universities
5.2.2. Technical Schools
5.2.3. Ivy League Schools (Universities)
5.2.4. Community Colleges & Others
Chapter 6. Cloud Computing in Higher Education Market Analysis, By Ownership
6.1. Key Insights
6.2. Market Size and Forecast, 2017 - 2027 (US$ Mn)
6.2.1. Public Institutes
6.2.2. Private Institutes
6.2.3. Corporate Learning
Chapter 7. Cloud Computing in Higher Education Market Analysis, By Application
7.1. Key Insights
7.2. Market Size and Forecast, 2017 - 2027 (US$ Mn)
7.2.1. Administration
7.2.1.1. Payments
7.2.1.2. Calendar (Scheduling & Planning)
7.2.1.3. Identity and Access Management
7.2.2. Content/Document Storage & Management
7.2.3. Unified Communication (Email, Video Conferencing/Seminars)
7.2.4. Others
Chapter 8. Cloud Computing in Higher Education Market Analysis, By Deployment
8.1. Key Insights
8.2. Market Size and Forecast, 2017 - 2027 (US$ Mn)
8.2.1. Private Cloud
8.2.2. Public Cloud
8.2.3. Hybrid Cloud
8.2.4. Community Cloud
Chapter 9. Global Cloud Computing in Higher Education Market Analysis, By Geography
Chapter 10. North America Cloud Computing in Higher Education Market Analysis
10.1. Key Insights
10.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
10.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
10.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
10.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
10.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 11. Europe Cloud Computing in Higher Education Market Analysis
11.1. Key Insights
11.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
11.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
11.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
11.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
11.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 12. Asia Pacific Cloud Computing in Higher Education Market Analysis
12.1. Key Insights
12.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
12.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
12.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
12.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
12.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 13. South America Cloud Computing in Higher Education Market Analysis
13.1. Key Insights
13.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
13.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
13.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
13.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
13.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 14. Middle East Cloud Computing in Higher Education Market Analysis
14.1. Key Insights
14.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
14.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
14.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
14.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
14.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 15. Africa Cloud Computing in Higher Education Market Analysis
15.1. Key Insights
15.2. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Institute Type
15.3. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Application
15.4. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Ownership
15.5. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Deployment
15.6. Market Size and Forecast, 2017 - 2027 (US$ Mn), By Country
Chapter 16. Company Profile (Company Overview, Financial Matrix, Key Product Landscape, Key Personnel, Key Competitors, Contact Address, and Business Strategy Outlook)
16.1. Adobe Systems, Inc.
16.2. Alibaba Group
16.3. Cisco Systems, Inc.
16.4. International Business Machines (IBM) Corporation
16.5. Netapp
16.6. Oracle Corporation
16.7. NEC Corporation
16.8. Microsoft Corporation
16.9. VMware, Inc.
16.10. Amazon Web Services, Inc.
16.11. Ellucian Company L.P.
16.12. Dell EMC
16.13. Salesforce.com
16.14. SAP
16.15. Blackboard
For more information about this report visit https://www.researchandmarkets.com/r/lygcly
Media Contact:
Research and Markets Laura Wood, Senior Manager [emailprotected]
For E.S.T Office Hours Call +1-917-300-0470 For U.S./CAN Toll Free Call +1-800-526-8630 For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1904 Fax (outside U.S.): +353-1-481-1716
SOURCE Research and Markets
TechM, Amazon to offer free training in cloud computing to unemployed youth – The Hindu
Posted: at 8:12 am
Bengaluru
Tech Mahindra Foundation, in collaboration with Amazon Internet Services, will offer training to unemployed and underemployed youth in technologies around cloud computing.
The 12-week, in-person training programme would be offered for free through Mahindra SMART Academies of Digital Technologies centres in Hyderabad, Mohali, Visakhapatnam, Bengaluru, Delhi, Mumbai, and Pune, Tech Mahindra said in a release.
Through scenario-based exercises, hands-on labs, and coursework, students would learn to build programming (Linux and Python), networking, security, and relational database skills. The objective is to prepare candidates for an entry-level cloud role in areas such as operations, site reliability, and infrastructure support, the company further said.
The training programme would also cover fundamental AWS Cloud skills as well as practical career skills, such as interviewing and resume writing, to help prepare individuals for an entry-level cloud position.
Rakesh Soni, CEO, Tech Mahindra Foundation, said, "Cloud computing is a 21st-century technological innovation that is enabling digital transformation. This programme will help learners build valuable and in-demand cloud computing skills and get them AWS Certifications, setting them up for a start."
Build better cloud systems with this Azure certification bundle – BleepingComputer
Posted: at 8:12 am
By BleepingComputer Deals
Microsoft Azure is one of the more useful cloud computing tools available, and getting certified in administration and architecture can open new doors for your IT career. The 2022 Microsoft Azure Architect & Administrator Exam Certification Prep Bundle helps you get up to speed on Microsoft Azure and how it works.
The bundle's instructors include Microsoft certified trainer and cloud architect Scott Duffy, twenty-year e-learning veteran Ben Stretcha, and IT professional Vijay Saini. All of them have worked with Azure and have years of training experience, helping to bridge the gap between theory and practice in their courses.
The courses are broken out by the level of certifications. Those completely new to Azure should start with the AZ-900 certification, which reviews the fundamentals of both cloud computing in general, and Azure in particular. There's also an exam prep course for AZ-900, so you can get more practice.
If you're familiar with the basics of Azure, the bundle includes a module of project-based hands-on training to hone your skills, and a detailed look at storage in Azure, covering real-world scenarios and solutions. System administrators curious about how Azure can help them should start with the in-depth look at how Azure interacts with PowerShell and its task automation tools.
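The bundle teaches that automation through PowerShell; for readers who live in Python instead, the same idea looks like this with Azure's SDK. Treat it as a hedged sketch: it assumes the azure-identity and azure-mgmt-resource packages and an already-authenticated environment (Azure CLI login or a service principal), and the resource-group name, region, and subscription ID are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

# DefaultAzureCredential picks up CLI logins, managed identities, or env vars
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Idempotently create (or update) a resource group -- the starting point
# for most Azure automation tasks
rg = client.resource_groups.create_or_update("demo-rg", {"location": "eastus"})
print(rg.name, rg.location)
```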
The bundle wraps up with a look at the next few levels of certification in Azure. AZ-204 explores solution development, AZ-303 develops your Azure architecture skills, and there's a full exam prep course for the AZ-304 exam. That lays the groundwork for both a broader scope of cloud computing skills and a deeper dive into Azure.
Cloud computing is becoming a core skill for IT departments. The 2022 Microsoft Azure Architect & Administrator Exam Certification Prep Bundle offers a detailed overview for $39.99, a 97% discount off the $1800 MSRP.
Prices subject to change.
Disclosure: This is a StackCommerce deal in partnership with BleepingComputer.com. In order to participate in this deal or giveaway you are required to register an account in our StackCommerce store. To learn more about how StackCommerce handles your registration information please see the StackCommerce Privacy Policy. Furthermore, BleepingComputer.com earns a commission for every sale made through StackCommerce.
AWS Steps Up Edge Investment, Will Add 32 Local Zones Across the World – Data Center Frontier
Posted: at 8:12 am
Amazon Web Services is beginning to expand its Local Zones for edge computing. (Image: AWS)
Amazon Web Services is going global with its edge computing ambitions. In a major expansion of its Local Zones program, AWS will deploy edge infrastructure in 32 cities around the world, building upon the 16 existing zones in the United States.
The huge expansion is a sign that investment in edge computing infrastructure is beginning to accelerate. The AWS announcement comes just a day after Akamai, one of the largest players in edge computing, said it would acquire Linode for $900 million to boost the reach and features of its distributed global network.
AWS says the Local Zones will help customers deploy low-latency applications in new markets, as well as meeting data residency requirements in regulated sectors like health care, financial services, and government.
"The edge of the cloud is expanding and is now becoming available virtually everywhere," said Prasad Kalyanaraman, Vice President of Infrastructure Services at AWS. "Thousands of AWS customers using U.S.-based AWS Local Zones are able to optimize low-latency applications designed specifically for their industries and the use cases of their customers. With the success of our first 16 Local Zones, we are expanding to more locations for our customers around the world who have asked for these same capabilities to push the edge of cloud services to new places."
Today's announcement fills in the details on the edge expansion announced at AWS re:Invent last November by CTO Werner Vogels, naming the new markets where AWS will deploy zones, along with a lineup of high-profile customers who will be early adopters, including Netflix and a cluster of gaming companies.
Amazon operates a massive global network of data centers to power its cloud computing platform, with most of its capacity focused on clusters of large campuses in key network hubs like Northern Virginia. With Local Zones, AWS is creating a more distributed infrastructure to support edge computing and low-latency applications.
AWS Outposts and Local Zones are the key building blocks for Amazon's edge computing strategy. AWS Outposts are racks filled with turn-key AWS cloud infrastructure, which allow enterprises to deploy hybrid clouds in their on-premises data centers. Outposts will also drive Amazon's push into edge computing through Local Zones, which are regional facilities filled with Outposts.
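In practice, Local Zones surface through the standard EC2 APIs as additional availability zones that an account opts into. A hedged boto3 sketch follows (the Boston zone-group name is one of the US zones AWS has documented; the usual AWS credentials are assumed to be configured):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Local Zones are opt-in per zone group; enable the Boston group for this account
ec2.modify_availability_zone_group(
    GroupName="us-east-1-bos-1", OptInStatus="opted-in"
)

# Once opted in, the Local Zone is listed alongside the regular AZs, and
# subnets and instances can be placed in it like any other zone
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
    AllAvailabilityZones=True,
)
for z in zones["AvailabilityZones"]:
    print(z["ZoneName"], z["OptInStatus"])
```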
AWS currently has Local Zones available in 16 North American markets: Atlanta, Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City, Philadelphia, Phoenix, Portland, and Seattle.
Starting this year, new AWS Local Zones will launch in Amsterdam, Athens, Auckland, Bangkok, Bengaluru, Berlin, Bogotá, Brisbane, Brussels, Buenos Aires, Chennai, Copenhagen, Delhi, Hanoi, Helsinki, Johannesburg, Kolkata, Lima, Lisbon, Manila, Munich, Nairobi, Oslo, Perth, Prague, Querétaro, Rio de Janeiro, Santiago, Toronto, Vancouver, Vienna, and Warsaw. The full deployment will happen across 2022 and 2023, the company said.
One of the best-known AWS customers is Netflix, which delivers streaming entertainment to 214 million paid memberships in over 190 countries. We've previously written about Netflix's interest in using edge computing to bring new efficiencies to TV and film production, changing the way huge video files are managed and shared. Netflix is now working with AWS to virtualize key parts of its visual effects operations, using low-latency cloud access to deliver virtual desktops for rendering and animation workloads.
"(Visual) artists need specialized hardware and access to petabytes of images to create stunning visual effects and animations," said Stephen Kowalski, Director of Digital Production Infrastructure Engineering at Netflix. "Historically, artists had specialized machines built for them at their desks; now, we are working to move their workstations to AWS to take advantage of the cloud. In order to provide a good working experience for our artists, they need low latency access to their virtual workstations."
"AWS Local Zones brings cloud resources closer to our artists and have been a game changer for these applications," said Kowalski. "By taking advantage of AWS Local Zones, we have migrated a portion of our content creation process to AWS while ensuring an even better experience for artists. We are excited about the expansion of AWS Local Zones globally, which brings cloud resources closer to creators, allowing artists to get to work anywhere in the world and create without boundaries."
AWS also shared examples of customers using Local Zones in other industries.