The Prometheus League
Breaking News and Updates
Daily Archives: February 26, 2024
DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan – SiliconANGLE News
Posted: February 26, 2024 at 12:17 am
Shares of the cloud computing infrastructure firm DigitalOcean Holdings Inc. traded higher in extended trading today after it delivered earnings, revenue and guidance for the current quarter that came in above expectations.
The company reported earnings before certain costs such as stock compensation of 44 cents per share, nicely ahead of Wall Street's target of 37 cents per share. Revenue growth was a little sluggish at just 11% more than a year earlier, but the company's $181 million in reported sales still came in ahead of the analysts' consensus estimate of $178.1 million.
Investors were likely also pleased to see the company boosting its profitability. It reported net income for the quarter of $15.9 million, up from a loss of $10.3 million one year earlier.
The company also announced its full-year fiscal 2023 results, with revenue growing by 20% from the previous year, to $693 million.
The results clearly pleased investors, as DigitalOcean's stock rose more than 6% in extended trading, reversing a 3% decline that occurred in the regular trading session.
DigitalOcean stands out as a competitor to Amazon Web Services Inc. and Microsoft Corp. in the public cloud infrastructure market. Instead of competing against those giants head on, it has carved a niche for itself serving small businesses with its developer cloud that makes it easy for small teams of developers to create modern applications.
With the DigitalOcean App Platform, developers can deploy application code in production with a few clicks, in line with the company's stated aim of keeping cloud computing simple. DigitalOcean's pitch is that it takes care of the cloud infrastructure and deployment side of things, so developers can maintain a focus on their code.
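As an illustration of that simplicity, an App Platform deployment is typically described by a small app spec. The snippet below is a hedged sketch, not taken from the article: the repository, commands and sizing are placeholders, and the field names follow DigitalOcean's published spec format.

```yaml
# Hypothetical App Platform app spec: one web service built from a
# GitHub repository and run on a single small instance.
name: sample-web-app
region: nyc
services:
  - name: web
    github:
      repo: example-org/sample-web-app   # placeholder repository
      branch: main
      deploy_on_push: true
    build_command: pip install -r requirements.txt
    run_command: gunicorn app:app
    http_port: 8080
    instance_size_slug: basic-xxs
    instance_count: 1
```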
The company was formerly led by Yancey Spruill, but last month it announced that Paddy Srinivasan (pictured) was taking over as its new chief executive, concluding a leadership transition plan that was outlined last summer.
Taking part in his first earnings call as the company's new CEO, Srinivasan said he's eager to help the company invest in transformational new AI solutions. Indeed, the company has ambitious plans to play a role in the growing artificial intelligence industry. While most of the headlines around AI are focused on bigger players like Google LLC and Meta Platforms Inc., there are plenty of smaller companies hoping to tap into the power of generative AI, and it's these that DigitalOcean is hoping to cater to with its recently acquired Paperspace platform.
Last month, the company announced that it's making Nvidia Corp.'s most powerful H100 graphics processing units available to small and medium-sized businesses via Paperspace, enabling them to access the critical hardware needed to power AI workloads. It said at the time that it sees big demand for the offering, because large cloud providers such as AWS and Microsoft Azure have largely optimized their GPU offerings to serve bigger enterprises.
The company's efforts in AI appear to have been well-received so far, as DigitalOcean reported 11% growth in its annual revenue run rate, which ended the quarter at $730 million. It also saw its average revenue per customer increase to $92.63, up 6% from a year earlier, while its count of "Builders and Scalers," customers that spend at least $50 per month on its offerings, increased 8% from a year earlier.
Holger Mueller of Constellation Research Inc. said DigitalOcean did well to swing back to profitability, showing good growth over the full year. However, he said investors may be concerned to see that the company's growth slowed in the final quarter. "One of the most urgent jobs for the new CEO Paddy Srinivasan will be to find a way to rekindle its growth, and it will be interesting to see how he does that," the analyst said. "From his comments, it seems AI will likely be a key strategy, and we'll hopefully learn more about its plans and prospects in the coming quarters."
Looking to the coming quarter, DigitalOcean said it sees earnings of between 37 and 39 cents per share, ahead of the Street's forecast of 37 cents. It also expects revenue of between $182 million and $183 million, ahead of Wall Street's target of $181.4 million.
For fiscal 2024, the company is targeting earnings of between $1.60 and $1.67 per share, versus the consensus estimate of $1.62. For revenue, it's looking at a range of $755 million to $775 million, versus the analysts' $765.9 million target.
Read the original:
DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan - SiliconANGLE News
Securing Kubernetes in a Cloud Native World – The New Stack
Posted: at 12:17 am
Kubernetes has revolutionized the way cloud native applications are deployed and managed, but how can you mitigate those weak links in cloud environments?
Simply put, cloud native means building, deploying and managing your applications in cloud computing environments. Applications that are born to live in the cloud tend to be resilient, portable, easily scalable to meet the ups and downs of demand, and easy to update as needs change. Indeed, being cloud native means apps can be changed and updated quickly and frequently, with no impact on service delivery. Apps can be developed and optimized quickly, and then undergo continuous improvement based on user feedback, all at the speed of business.
As the adoption of cloud native applications increases, Kubernetes has emerged as the go-to container orchestrator for many organizations. It automates the deployment, scaling and management of containerized applications, making it an essential part of modern DevOps environments. However, as powerful and prevalent as Kubernetes is, ensuring its security is a non-trivial task. With built-in security features and a growing market of third-party tools, creating a secure Kubernetes deployment requires careful planning, diligent implementation and ongoing management.
Securing your Kubernetes deployments requires a holistic and integrated approach from the earliest stages in the development process. Begin by hardening your infrastructure and host operating system to minimize potential attack vectors. Container images should always be vetted and secure before they are deployed.
Kubernetes includes an array of native security features, including role-based access control (RBAC), network policies and secrets management. RBAC is a fundamental tool that allows administrators to define roles and bind them to users or groups of users, allowing granular control over who can access and modify resources within the cluster. Network policies offer another layer of protection, providing control over how pods communicate with each other and other network endpoints. Secrets management helps in securely storing and managing sensitive information like passwords, tokens and API keys, and allows secrets to be stored and managed centrally within Kubernetes.
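As a concrete illustration of the RBAC mechanism described above, a namespaced Role and its RoleBinding might look like the following (the namespace, role and user names are illustrative, not from the article):

```yaml
# Grant read-only access to pods in the "dev" namespace to one user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane               # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is scoped to the `dev` namespace, the bound user can list and watch pods there but cannot modify them or touch any other namespace.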
Regular and continuous scanning of container images for vulnerabilities is critical to preemptive threat management. To maintain the integrity of containerized applications, signing and verification processes before deployment are also essential.
As the methods of malicious actors evolve, real-time threat detection systems can act as the last line of defense. These systems let you continuously monitor your Kubernetes environment to instantly identify and respond to threats, ensuring that your containerized landscape stays secure.
Successfully navigating Kubernetes security isn't just about setting up your security program correctly; it's an ongoing commitment. The path is riddled with challenges, such as properly configuring Kubernetes, securing container images, managing secrets and ensuring runtime monitoring. Perhaps the most demanding aspect is the need for continuous visibility over the full life cycle of Kubernetes deployments to detect misconfigurations and vulnerabilities promptly.
To achieve this, runtime container security requires agentless scanning across the full stack, including the container, cloud and workloads. Image scanning of running containers and container image registries is vital in this process.
Ensuring long-term security for Kubernetes deployments underscores the need for robust strategies. Regular updates, correct configuration, vulnerability scanning and strict adherence to best security practices are the cornerstones of a secure Kubernetes environment. Likewise, understanding and monitoring industry and regulatory rules is vital for Kubernetes security, ensuring compliance and avoiding data privacy issues.
Changing security regulatory standards make it vital for organizations to keep their Kubernetes deployments compliant. Doing so mitigates various risks, including security vulnerabilities, noncompliance penalties and system inefficiencies.
Despite its importance, maintaining compliance is not without challenges. First, the dynamic nature of Kubernetes deployments makes it difficult to track and manage all resources effectively. Second, a lack of visibility into configurations can result in noncompliant setups. Third, manual compliance checks are tedious, error-prone and dont scale well with the increase in Kubernetes clusters.
To meet these challenges head-on, there are several strategies. Automating compliance checks saves time and reduces errors, while introducing uniform policy enforcement across all deployments ensures better control and traceability.
Integrating compliance into the CI/CD pipeline allows for early detection of noncompliance issues, and thus easier remediation. Using these strategies ensures compliance and helps optimize the overall performance of your deployments.
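An automated check of this kind can be sketched in a few lines of Python. The rules, field names and manifest shape below are illustrative assumptions, not taken from any specific tool, but they show the sort of manifest linting that could run as a CI/CD pipeline step:

```python
# Illustrative sketch: flag Kubernetes container specs that miss
# common hardening settings, so noncompliance is caught before deploy.
def check_container(spec: dict) -> list[str]:
    """Return a list of compliance findings for one container spec."""
    findings = []
    sc = spec.get("securityContext", {})
    if sc.get("runAsNonRoot") is not True:
        findings.append("container may run as root")
    if sc.get("allowPrivilegeEscalation") is not False:
        findings.append("privilege escalation not disabled")
    if "resources" not in spec:
        findings.append("no resource limits/requests set")
    return findings

container = {"name": "web", "image": "nginx:1.25"}
print(check_container(container))  # flags all three rules for this bare spec
```

In a pipeline, a nonempty findings list would fail the build, giving developers immediate feedback rather than a post-deployment audit surprise.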
Your organization must watch over your containerized applications, which are vulnerable to all kinds of exploits and threats. Identity and access management are your responsibility, along with all the various configurations, encryption, network traffic protection, segmentation and other details. Adopting industry-grade security best practices can significantly enhance your Kubernetes security profile. The following 10 best practices should guide your Kubernetes security program:
Kubernetes security is a complex but manageable challenge. Organizations can navigate the cloud native world securely by starting with a strong foundation, correctly implementing isolation and multitenancy, securing containers throughout their life cycle and fostering a culture of security.
Continuous monitoring and using the right tools further ensure that the Kubernetes environment remains resilient against evolving threats. As cloud native technologies continue to advance, staying informed and adaptable is key to maintaining a secure Kubernetes ecosystem.
To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe, in Paris, on March 19-22.
How to Build a Chat Interface using Gradio & Vultr Cloud GPU – SitePoint
Posted: at 12:17 am
This article was created in partnership with Vultr. Thank you for supporting the partners who make SitePoint possible.
Gradio is a Python library that simplifies the process of deploying and sharing machine learning models by providing a user-friendly interface that requires minimal code. You can use it to create customizable interfaces and share them conveniently using a public link for other users.
In this guide, you'll create a web interface where you can interact with the Mistral 7B large language model through an input field and see model outputs displayed in real time on the interface.
On the deployed instance, you need to install some packages for creating a Gradio application. However, you don't need to install packages like the NVIDIA CUDA Toolkit, cuDNN, and PyTorch, as they come pre-installed on the Vultr GPU Stack instances.
Follow the next steps for populating this file.
The above code snippet imports all the required modules in the namespace for inferring the Mistral 7B large language model and launching a Gradio chat interface.
The above code snippet initializes the model and tokenizer and enables CUDA processing.
The above code snippet defines a new class named StopOnTokens that inherits from the StoppingCriteria class.
The above code snippet defines variables for the StopOnTokens() object and for storing the conversation history. It formats the history by pairing each message with its response and adding tags to indicate whether it came from a human or a bot.
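The pairing logic can be sketched in plain Python. The tag strings below are placeholders for illustration, not Mistral's actual prompt format:

```python
# Illustrative sketch of the history formatting described above:
# each (message, response) pair is wrapped in tags marking the speaker.
def format_history(history: list[tuple[str, str]]) -> str:
    parts = []
    for user_msg, bot_msg in history:
        parts.append(f"<human>: {user_msg}")  # assumed human tag
        parts.append(f"<bot>: {bot_msg}")     # assumed bot tag
    return "\n".join(parts)

history = [("Hello", "Hi there!"), ("What is Gradio?", "A Python UI library.")]
print(format_history(history))
```

The resulting string becomes the prompt prefix, so the model sees the whole conversation each time it generates a reply.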
The code snippet in the next step is to be pasted inside the predict() function as well.
The streamer requests new tokens from the model and receives them one by one, ensuring a continuous flow of text output.
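This streaming pattern can be sketched without the model itself: a background thread stands in for generation and pushes tokens into a queue while the main thread yields the growing text (the transformers library's TextIteratorStreamer works along these lines; the token source here is a stand-in, not the real model):

```python
import queue
import threading

def fake_generate(tokens, out_q):
    # Stand-in for model.generate(): push each "new token" as produced.
    for tok in tokens:
        out_q.put(tok)
    out_q.put(None)  # sentinel: generation finished

def stream(tokens):
    out_q = queue.Queue()
    threading.Thread(target=fake_generate, args=(tokens, out_q)).start()
    text = ""
    while (tok := out_q.get()) is not None:
        text += tok
        yield text   # the UI re-renders the partial message each time

print(list(stream(["Hel", "lo", "!"])))  # → ['Hel', 'Hello', 'Hello!']
```

Running generation in a separate thread is what keeps the interface responsive: the consumer yields partial text as soon as each token arrives instead of waiting for the full completion.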
You can adjust model parameters such as max_new_tokens, top_p, top_k, and temperature to shape the model's response. To learn more about these parameters, refer to How to Use TII Falcon Large Language Model on Vultr Cloud GPU.
Gradio uses port 7860 by default.
Executing the application for the first time can take additional time, as it downloads the checkpoints for the Mistral 7B large language model and loads them onto the GPU. This may take anywhere from 5 to 10 minutes, depending on your hardware, internet connectivity and so on.
Once it executes, you can access the Gradio chat interface via your web browser by navigating to:
The expected output is shown below.
In this guide, you used Gradio to build a chat interface and infer the Mistral 7B model by Mistral AI using Vultr GPU Stack.
This is a sponsored article by Vultr. Vultr is the world's largest privately held cloud computing platform. A favorite with developers, Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Learn more about Vultr.
See the article here:
How to Build a Chat Interface using Gradio & Vultr Cloud GPU - SitePoint
Microsoft to invest $2.1bn in cloud and AI infrastructure in Spain – DatacenterDynamics
Posted: at 12:17 am
Microsoft has committed to investing $2.1 billion in cloud computing and artificial intelligence (AI) infrastructure in Spain over the next two years.
The planned investment was shared by the vice chair and president of Microsoft Brad Smith via a post on X following a meeting with the Prime Minister of Spain, Pedro Sanchez.
"Im thrilled to announce that we will expand our AI and cloud infrastructure in Spain by $2.1bn in the next two years," Smith said. "Our investment is beyond just building data centers, its a testament to our 37-year commitment to Spain, its security, and development and digital transformation of its government, businesses, and people."
Prime Minister Sanchez added: "In addition, we have analyzed cooperation opportunities to strengthen cybersecurity and promote artificial intelligence in Public Administration. Public-private collaboration is essential to successfully face the challenges of digital transformation."
Microsoft has previously invested in AI in Spain, including its 2021 R&D hub in Barcelona, which is dedicated to artificial intelligence, machine learning, and deep learning.
The company first announced plans for a Spanish Azure cloud region in early 2020, and will be the last of the major providers to set up there. The company is currently developing data centers in Madrid and the Aragon region.
The former will be constructed by Ferrovial as confirmed in January 2023. Microsoft ultimately plans to operate three data centers in Madrid.
The Aragon data center development, announced in October 2023, is reported to comprise three sites around the city of Zaragoza: a 63-acre site in the Recycling Technology Park (PTR), as well as land in La Puebla de Alfindén to the east of Zaragoza and La Muela to the west. Build-out is reportedly expected over seven years.
Google was the first to launch a Spanish region, opening one in Madrid in May 2022. Amazon launched an AWS cloud region in Aragon in November 2022. Oracle opened a Madrid region in September with plans for a second in the works.
The latest investment commitment comes shortly after Microsoft announced that it was committing $3.44bn to doubling Germany's AI infrastructure and cloud computing capacity.
As with the Spanish investment, this will be delivered over two years, with some of the money dedicated to training 1.2m people in Germany with AI skills.
Read more here:
Microsoft to invest $2.1bn in cloud and AI infrastructure in Spain - DatacenterDynamics
Stannah looks to enterprise cloud software to lift IT systems – ComputerWeekly.com
Posted: at 12:16 am
Stairlift and lift manufacturer Stannah Group said it has made a significant investment in a cloud-based enterprise software roll-out to modernise its IT systems and prepare for the company's future growth.
Stannah is one of the UK's leading engineering companies, and remains an independent, family-owned and run business that has sold over 700,000 stairlifts worldwide. It now operates in more than 40 countries, and has subsidiaries in 12 countries, including France, Italy and the US.
The company will be rolling out IFS Cloud for enterprise resource planning (ERP), field service management (FSM), planning and scheduling optimisation (PSO), and enterprise asset management (EAM).
The implementation of IFS Cloud will enable Stannah to replace an existing legacy system and 67 edge systems with the new enterprise offering, which will provide real-time visibility across company departments.
Stannah said that, for the first time, all of its employees across 13 countries will be aligned on a modern and cohesive solution that consolidates all of Stannah's operational functions, from manufacturing to field service management. When fully rolled out, the new system will support over 2,500 users.
Nick Stannah, joint CEO of Stannah Home Accessibility, said this is a major project for Stannah, and a keystone of a new IT strategy to modernise the Stannah Group's whole IT estate.
"Our legacy systems have served us well, but we have grown considerably over the past decade and plan to build on that," he said. "We have a lot of technical debt in many legacy systems, and infrastructure that is not joined up and no longer fit for purpose," he told Computer Weekly.
Stannah said the group also has a lot of international businesses running separate systems and needs to move to a globally integrated enterprise-wide IT platform that can better support its business growth plans.
"Our new IT strategy aims to ensure that IT is an enabler for the Group to achieve its future business objectives," he said. "Moving to cloud services, and modernising our whole infrastructure, enables us to achieve this, providing scalability, flexibility and security for our growing business."
Stannah said the company had selected IFS as its cloud ERP provider after a very thorough selection process during 2023. "IFS's evergreen cloud solution means we can keep modernising our platform as technology evolves," he said, adding that IFS is a strong fit for Stannah because it can provide the range of functionality the company requires, with very capable solutions for its manufacturing and field service operations.
The company is at the very start of this project, and is currently at the planning stage. "This will be a considerable undertaking," he said.
It has created a team of dedicated people from across the business to see the project through, and will be working through audits of current systems to assess business needs and processes before working on training every member of staff on the system.
Phase one of the roll-out is likely to go live around January 2026, when Stannah will deploy it across its manufacturing plants as well as all of its UK and France trading businesses. Phase two will see the roll-out of the system across all of the company's other international businesses over the following two years.
Among the broader business benefits Stannah aims to get will be standardised business processes that will deliver significant efficiency improvements across the business. The project will also lead to improved user experience for employees using the business systems, and improved service experience for customers.
It should also offer real-time business reporting and integrated data to improve business decisions, and improved planning and scheduling operations across Stannahs manufacturing and field services.
As the project only kicked off in January, Stannah said it's still early days. "A key focus is not only developing the solution, but also communicating our strategy and plans to everyone in the business to prepare our people for the changes the system will bring," he said. "This will be a huge change for our people, and we need to ensure we provide the necessary change management, training and support for everyone to succeed."
The move to the cloud-based system will drive efficiencies and productivity for Stannah by enabling it to run a wide range of operational functions on a single platform with a single data model, the companies said. Initially, these will include customer relationship management, sales, planning, supply chain, manufacturing, human capital management, finance, asset and field service management.
By streamlining business processes, the new system will allow for more responsiveness in the service offered to customers, thereby driving enhanced productivity and efficiencies including first-time fix rates, route planning and more, said Stannah.
IFS said its AI-driven PSO engine will also streamline Stannah's field service operations, helping to implement cost-saving route planning and solve complex scheduling issues. Using EAM and PSO in tandem will ensure Stannah can optimise asset uptime and maintenance time, IFS said. The firm will also use IFS Success Services, which supports customers from adoption and engagement to software support. Stannah has also invested in IFS Implementation Services, offered by IFS to its clients to assist them in implementing its software.
Cloud adoption by enterprises continues to grow at a rapid rate. According to tech analyst Gartner, cloud computing is in the process of shifting from being a technology disruptor to a necessity. The analyst recently predicted that by 2028, modernisation efforts will culminate in 70% of workloads running in a cloud environment, up from 25% in 2023. It said that this year, worldwide user spending on public cloud services is expected to reach $679bn and is likely to exceed $1tn in 2027.
Read the original:
Stannah looks to enterprise cloud software to lift IT systems - ComputerWeekly.com
AI vendor finds opportunity amid AI computing problem – TechTarget
Posted: at 12:16 am
With the growth of generative AI, a big problem enterprises and vendors are concerned with is computing power.
Generative AI systems such as ChatGPT suck up large amounts of compute to train and run, making them costly.
One AI vendor trying to address the massive need for compute is Lambda.
The GPU cloud vendor, which provides cloud services including GPU compute as well as hardware systems, revealed it had achieved a valuation of more than $1.5 billion after raising $320 million in a Series C funding round.
The vendor was founded in 2012 and has focused on building AI infrastructure at scale.
As a provider of cloud services based on H100 Tensor Core GPUs from its partner Nvidia, Lambda gives AI developers access to architectures for training, fine-tuning, and inferencing generative AI and large language models (LLMs).
One of the early investors and a participant in the latest funding round is Gradient Ventures.
Gradient Ventures first invested in the AI vendor in 2018 and then did so again in 2022.
The investment fund became interested in Lambda at a time when the vendor faced the challenge of trying to build AI models without the workstations and infrastructure it needed. This led Lambda to start building AI hardware that researchers can use.
"That's why we were excited is that we saw this sort of challenge to the development," said Zachary Bratun-Glennon, general partner at Gradient Ventures. "Since then, we've been excited as the product has developed."
Lambda grew from building workstations to hosting servers for customers that had bigger compute needs and budgets and then to offering a cloud service with which users can point and click on their own desktop without needing to buy a specialized workstation.
"Our excitement is just seeing them meet the developer and the researcher where they are with what they need," Bratun-Glennon said.
Lambda's current fundraising success comes as the vendor continues to take advantage of the demand for computing in the age of generative AI, Futurum Group research director Mark Beccue said.
"I really think the fundraise ... has got to do with that opportunistic idea that AI compute is in high demand, and they're going to jump on it," he said.
As a vendor with experience building on-premises GPU hardware for data centers, Lambda appeals to investors because of the options it brings to enterprises, he added.
Lambda also enables enterprises to get up and running quickly with their generative AI projects, Constellation Research founder R "Ray" Wang said.
"GenAI on demand is the best way to look at it," Wang said. "Lambda labs basically says, 'Hey, we've got the fastest, the best, and not necessarily the cheapest but a reasonably priced ability to actually get LLMs on demand.'"
"What people are rushing to deliver is the ability to give you your compute power when you need it," he continued.
However, as generative AI evolves, the compute problem could ease somewhat.
Over the past year and a half, generative AI systems have evolved from large models that run on up to 40 billion parameters to smaller models that run on as few as 2 billion parameters, Beccue said.
"The smaller the language models are, the less compute you have to use," he said.
Moreover, while Nvidia is known for providing powerful AI accelerators like GPUs, competitors including Intel and AMD have also released similar offerings in the last few months, Beccue added.
For example, Intel's Gaudi2 is a deep-learning processor comparable to Nvidia's H100.
In December, AMD introduced MI300X Accelerators. The chips are designed for generative AI workloads and rival Nvidia H100s in performance.
"The models are getting better, and the chips are getting better and we're getting more of them," Beccue said. "It's a short-term issue."
For Lambda, the challenge will be how to extend beyond solving the current AI computing challenge.
"They're not necessarily going to be competing head-to-head with the cloud compute people," Beccue said. He noted that the major cloud computing vendors -- the tech giants -- are deep-pocketed and have vast financial resources. "I'm sure what they're thinking about is, 'Okay, right now, there's kind of a capacity issue that we can fill. How do we extend over time?'"
As an investor in AI companies, Bratun-Glennon said he thinks generative AI will produce thousands of language models, requiring different amounts of compute.
"Even if there are models that have lower compute requirements, the more use cases people will find to apply them to, the lower the cost that creates so the more ubiquitous that becomes," he said. "Even as models get more efficient, and more companies can use them that expands the amount of compute that is required."
AI compute is also a big market, helping Lambda serve developers -- a different audience than what other cloud providers target, he added. Hyper-scale cloud providers focus on selling to large enterprises and getting large workloads.
"Lambda is the AI training and inference cloud," Bratun-Glennon said. "The thing that has carried through the six years I've been working with them is the AI developer mindset."
Lambda is not the only vendor working to meet the demand of AI compute.
On February 20, AI inference vendor Recogni revealed it raised $102 million in series C funding co-led by Celesta Capital and GreatPoint Ventures. Recogni develops AI inference systems to address AI compute.
The latest Lambda round was led by Thomas Tull's U.S. Innovative Technology fund, with participation from SK Telecom, Crescent Cove and Bloomberg Beta, in addition to Gradient Ventures.
Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
See the original post here:
AI vendor finds opportunity amid AI computing problem - TechTarget
Nvidia Worth More Than Alphabet, Amazon – 24/7 Wall St.
Posted: at 12:16 am
Published: February 23, 2024 6:45 am
After blockbuster earnings, Nvidia Corp.'s (NASDAQ: NVDA) market cap is larger than that of Alphabet Inc. (NASDAQ: GOOGL), the owner of Google, and Amazon.com Inc. (NASDAQ: AMZN), the owner of cloud computing giant AWS. The question this raises is whether artificial intelligence (AI) has a better economic future than search or cloud computing.
Nvidia's market cap is $1.96 trillion. Alphabet's is $1.80 trillion, and Amazon's is $1.81 trillion. In the past year, Nvidia's shares are up 278% to $801. Amazon's are up 82% to $174, while Alphabet's are up 58% to $145. The Nasdaq is 45% higher than a year ago.
Most of Nvidia's revenue comes from AI chips. Of its $22.1 billion in revenue in the most recent quarter, only a small part ($2.9 billion) came from gaming; AI accounted for virtually all of the balance. Overall revenue was up 265% year over year.
Amazon's revenue rose 14% to $170 billion in the most recent quarter. AWS's cloud business had a revenue increase of 13% to $24.2 billion. Some investors think AWS is worth more than Amazon's e-commerce business because cloud computing has been a major driver of tech revenue and stock valuations for several years. Based on Amazon's stock price compared to Nvidia's, AI may have replaced cloud computing as the core of tech growth across the industry.
In the most recent quarter, Alphabet's revenue rose 13% to $86.3 billion. Its core ad business, Google and YouTube combined, had revenue of $76.3 billion, up 13%. Search has been a critical technology for over a decade, and Google has a market share of over 90% in many countries.
A look at the revenue growth of the three companies answers the question of comparison. For now, AI revenue is growing much faster than search or cloud computing revenue. Many analysts believe that may remain true for the long term. Based on this alone, the distance between Nvidia's market cap and those of Amazon and Alphabet will continue to grow in Nvidia's favor.
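The growth comparison above is easy to sanity-check. The short Python sketch below uses only the figures cited in this article; the implied year-ago revenues are derived, not reported.

```python
# Quarterly revenue (in $ billions) and year-over-year growth, as cited
# in the article above. The year-ago revenue is backed out from each pair.
quarterly = {
    "Nvidia": (22.1, 2.65),    # up 265% YoY
    "Amazon": (170.0, 0.14),   # up 14% YoY
    "Alphabet": (86.3, 0.13),  # up 13% YoY
}

for company, (revenue, growth) in quarterly.items():
    year_ago = revenue / (1 + growth)
    print(f"{company}: ${revenue}B now, ~${year_ago:.1f}B a year ago")
```

Nvidia's implied jump from roughly $6 billion to $22.1 billion in a single year is what separates it from the low-teens growth at Amazon and Alphabet.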
VIB spearheads banking innovation with deployment of Temenos Banking Platform on AWS cloud – VnExpress International
VIB will implement the latest Core Banking version R23 from Temenos on the AWS cloud and VIB's private cloud platforms, making it the first bank in Vietnam to deploy the Temenos Core Banking system on a cloud computing platform. The project is a collaboration with ITSS, a technology solutions and IT services company from Switzerland.
VIB is the first bank in Vietnam to implement the Temenos Core Banking solution on the cloud, in collaboration with AWS and ITSS. Photo courtesy of VIB
According to Tran Nhat Minh, Deputy CEO and CIO of VIB, with its strategic positioning to become a leading retail bank in Vietnam, VIB is at the forefront of investing in technology initiatives, digitalization, and building an advanced and robust technology platform.
The banking industry in Vietnam is undergoing a significant transformation, driven by an increase in customer demand for digital experiences and the constant advancement of technology.
The migration of core banking systems to the cloud has emerged as a game-changing factor, promising flexibility, innovation, and enhanced customer experiences.
This core banking modernization project on the cloud reaffirms VIB's commitment to continuously improve service quality and operational efficiency while creating a strong foundation to serve customers in the digital era.
In this project, VIB will deploy the latest version of Temenos Core Banking on the AWS cloud and the VIB private cloud. The deployment ensures compliance, safety, and continuous system operation, marking an important step towards the comprehensive restructuring and innovation of VIB's digital platform, aligned with its three drivers: "mobile first, cloud first, and AI first."
With "cloud first," VIB pioneers the deployment of Temenos Core Banking solutions on the cloud, using cloud-native services. This ensures automatic, rapid on-demand infrastructure scaling, seamless service delivery to users, reduced impact from service disruptions, and optimized development and operational costs.
With "mobile first," mobile banking is one of the key factors, and deploying Temenos Core Banking on the cloud provides robust APIs, high operational performance, and the flexibility for VIB to maintain 24/7 services, deliver a better customer experience, and ensure seamless access to banking services.
With "AI first," VIB has been focusing on leveraging artificial intelligence (AI) and machine learning to provide predictive analytics and suitable financial solutions.
The deployment of Temenos Core Banking aims to ensure the enhancement of input data quality through tight automated system controls, which is a crucial input supporting the "AI first" key factor.
VIB and partner representatives at the signing ceremony on Feb. 24, 2024. Photo courtesy of VIB
Ramki Ramakrishnan, Managing Director for APAC at Temenos, said Temenos is delighted to welcome VIB as the first bank in Vietnam to embrace Temenos core banking on the cloud, further solidifying the company's market-leading position in this vital market.
Temenos serves as the trusted banking platform for large banks globally, including leading Vietnamese banks, spanning the retail, corporate, wealth, and private banking sectors.
Vietnam's banking sector has experienced significant growth over the last decade and is home to some of the largest and most innovative banks in Southeast Asia, and VIB is leading the way.
"With VIB's forefront position in cloud-powered banking, we are excited to be working with them as their strategic partner to support their future growth," said Ramakrishnan.
In its strategic transformation phase (2017-2026), VIB has witnessed impressive growth, positioning the bank as one of the leading retail banks in Vietnam.
To date, VIB is one of the most retail-oriented banks in Vietnam, with a retail loan proportion of over 85% of its loan portfolio.
Over the past seven years, it has achieved compounded growth in its credit portfolio of 32% due to strategic business changes and successful penetration and expansion in the individual customer segment.
With the aim of becoming the most innovative bank in Vietnam, VIB is implementing long-term digital strategies, with digital banking witnessing significant growth alongside high penetration and digital conversion rates.
From 2017 to 2023, digital transaction CAGR was 100%, with over 94% of retail transactions conducted through digital channels, serving millions of active customers every month.
The implementation of Temenos core banking solution on AWS Cloud marks VIB's latest advancement in its digital transformation journey, contributing to the establishment of a robust infrastructure that serves as the foundation for the adoption and development of cutting-edge technology products.
It also optimizes the customer experience, aligns with growth plans, and accommodates future increases in customer numbers, products, and transactions through digital banking channels.
Eric Yeo, Vietnam Country Manager, AWS, said that the cloud revolution provides capabilities to optimize operations, improve security, reduce operation costs for technology infrastructure, and minimize downtime during peak traffic.
"We are proud to collaborate with VIB and Temenos on this journey to help shape the banking industry in Vietnam. This technology platform will help banks meet and exceed rapid business growth in the future," he stated.
With a diverse product ecosystem and the integration of cutting-edge technologies such as AI and augmented reality (AR), as well as superior convenience and speed, VIB has asserted its position as one of the leading banks in delivering innovative digital products with a customer-centric approach.
The digital strategy also earns VIB recognition from both domestic and international organizations for its high-tech and trend-leading products.
The utilization of Temenos Core Banking solution with its cloud-native and cloud-agnostic features in this project allows VIB the freedom to operate across public clouds such as AWS and VIB private clouds; provides the agility, scalability, and resilience necessary to meet the dynamic needs of modern customers; and fosters an environment of continuous innovation and improvement for VIB.
The new system will also expedite VIB's ability to upgrade, expand, and introduce new digital products and services to the market quickly, significantly saving time.
Additionally, with high performance and flexibility, VIB can maintain 24/7 services and provide a better customer experience. Accordingly, customers will benefit from maximum security when using VIB's services, as the AWS Cloud holds over 140 security compliance certifications.
"We are pleased to be putting our more than two decades of digital core banking transformation expertise to work, allowing VIB to innovate faster using cloud capabilities to grow their business while meeting sustainability commitments. What sets ITSS apart are our accelerators, hands-on experience within global markets, and how we keep the end-user experience in mind while driving digital transformations. ITSS's Temenos services and cloud technology expertise pave the way for VIB to deliver mission-driven products and services that delight their customers," Manju HC, Director of ITSS Global, said.
VIB has been a pioneer in cloud computing since 2021, developing a cloud-native mobile banking application, MyVIB.
Temenos is the leading open platform for composable banking, serving 3,000 banks globally, helping them build new services and enhance customer experiences, and achieve return on equity three times and cost-to-income ratios half the industry average. For more information, visit here.
ITSS, a global banking software integrator, has supported over 350 Temenos clients since 2001, offering banking technology services and solutions for various banking products, including Transact, FCM, Islamic banking, TAP, Multifonds, Inclusive Banking, and Infinity.
Wall Street Favorites: 3 Quantum Computing Stocks with Strong Buy Ratings for February 2024 – InvestorPlace
These three quantum computing stocks are worth buying in February 2024
Once people are done fawning over generative AI, investors might ask: what will be the next big thing? The field of quantum computing may be just that. Quantum computing has the potential to solve complex problems that generally slow down classical computers, such as optimization, cryptography, machine learning, and simulation.
While quantum computing technology may still be in its infancy, investors desiring to invest in the up-and-coming technology should consider one of the following three quantum computing stocks with Strong Buy ratings from Wall Street analysts.
D-Wave Quantum (NYSE: QBTS) is a well-established quantum computing company. In particular, D-Wave specializes in quantum annealing, a computing technique used to find the optimal solution for a given problem. The quantum computing firm has successfully built several quantum annealers with more than 5,000 qubits, which allows greater potential for commercial applications.
D-Wave Quantum offers its quantum annealers and software tools through its cloud platform, Leap. QBTS also offers a suite of developer tools called Ocean, which helps users design, develop, and deploy quantum applications. The quantum computing company has a diverse customer base, including government agencies and corporations. Most recently, D-Wave released its 1,200+ qubit Advantage2 quantum computing machine prototype. Those already subscribed to the D-Wave Leap platform can access the prototype and test out its capabilities.
Wall Street analysts expect D-Wave to generate more than $10.5 million in revenue for 2023, representing a 47% YoY increase over the prior year. The market seems excited about D-Wave Quantum's prospects. Shares have risen 117% over the past 12 months, and the company has a Strong Buy rating from Wall Street analysts.
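For readers unfamiliar with quantum annealing: an annealer searches for the bitstring that minimizes a quadratic energy function, a formulation known as a QUBO. The sketch below is a plain-Python illustration with a made-up three-variable instance solved by brute force; it does not use D-Wave's actual Ocean API, and real annealers target problems with thousands of variables where enumeration is hopeless.

```python
from itertools import product

# A quantum annealer searches for the bitstring x minimizing a QUBO energy:
#   E(x) = sum over (i, j) of Q[i, j] * x_i * x_j
# Diagonal entries Q[i, i] act as linear terms since x_i * x_i == x_i.
Q = {  # (i, j): coefficient -- a made-up toy instance
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.5,
    (0, 1): 2.0, (1, 2): 0.5,
}

def energy(x):
    """QUBO energy of a bitstring x (tuple of 0s and 1s)."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2^3 bitstrings -- feasible only for toy sizes.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # -> (1, 0, 1) -2.5
```

The x0*x1 coupling of +2.0 penalizes turning both of those bits on, so the minimizer keeps x1 off despite its negative linear term; encoding such trade-offs is what QUBO modeling is about.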
Advanced Micro Devices (NASDAQ: AMD) is a fabless chipmaker that initially made a name for itself by dethroning Intel (NASDAQ: INTC) in the CPU market. AMD is now poised to challenge and siphon market share away from Nvidia in the AI space as the chipmaker prepares to enter the AI computing market in 2024. The chipmaker expects to sell $2 billion in AI chips in 2024.
On top of tackling the artificial intelligence space, AMD has also made strides in quantum computing. The company's Zynq SoCs have been leveraged to create operating systems for quantum computers. Though AMD's quantum offerings are not a main line of business, as quantum computing becomes commercial, AMD will likely benefit from already having dipped its toes into the space.
Wall Street currently rates AMD as a Strong Buy, and the company's shares are likely to do well this year as its AI chips come to market.
Rigetti Computing (NASDAQ: RGTI) is a pure-play quantum computing business that is vertically integrated. This means the company is involved in both designing and manufacturing its multi-chip quantum processors. Rigetti uses superconducting circuits as qubits, fabricated on silicon chips and operated at temperatures near absolute zero. To deliver its quantum computing capabilities to clients, Rigetti leverages cloud service networks while also providing quantum software development tools as well as quantum hardware design and manufacturing.
In January, Rigetti Computing announced the availability of its 84-qubit Ankaa-2 quantum computing system, which will be accessible through Rigetti's cloud service. RGTI's shares have risen 53% over the past twelve months. As the company continues to make advancements in its product, shares could rise even more.
Wall Street analysts have given the stock a resounding Strong Buy rating.
On the date of publication, Tyrik Torres did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Tyrik Torres has been studying and participating in financial markets since he was in college, and he has a particular passion for helping people understand complex systems. His areas of expertise are semiconductor and enterprise software equities. He has work experience in both investing (public and private markets) and investment banking.
Never-Repeating Tiles Can Safeguard Quantum Information – Quanta Magazine
This extreme fragility might make quantum computing sound hopeless. But in 1995, the applied mathematician Peter Shor discovered a clever way to store quantum information. His encoding had two key properties. First, it could tolerate errors that only affected individual qubits. Second, it came with a procedure for correcting errors as they occurred, preventing them from piling up and derailing a computation. Shor's discovery was the first example of a quantum error-correcting code, and its two key properties are the defining features of all such codes.
The first property stems from a simple principle: Secret information is less vulnerable when it's divided up. Spy networks employ a similar strategy. Each spy knows very little about the network as a whole, so the organization remains safe even if any individual is captured. But quantum error-correcting codes take this logic to the extreme. In a quantum spy network, no single spy would know anything at all, yet together they'd know a lot.
Each quantum error-correcting code is a specific recipe for distributing quantum information across many qubits in a collective superposition state. This procedure effectively transforms a cluster of physical qubits into a single virtual qubit. Repeat the process many times with a large array of qubits, and you'll get many virtual qubits that you can use to perform computations.
The physical qubits that make up each virtual qubit are like those oblivious quantum spies. Measure any one of them, and you'll learn nothing about the state of the virtual qubit it's a part of, a property called local indistinguishability. Since each physical qubit encodes no information, errors in single qubits won't ruin a computation. The information that matters is somehow everywhere, yet nowhere in particular.
"You can't pin it down to any individual qubit," Cubitt said.
All quantum error-correcting codes can absorb at least one error without any effect on the encoded information, but they will all eventually succumb as errors accumulate. That's where the second property of quantum error-correcting codes kicks in: the actual error correction. This is closely related to local indistinguishability: because errors in individual qubits don't destroy any information, it's always possible to reverse any error using established procedures specific to each code.
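Both properties can be seen in miniature in the three-qubit bit-flip code, a much simpler relative of Shor's nine-qubit code (the sketch below is an illustrative toy, not Shor's construction): one logical qubit is spread across three physical qubits, and parity measurements reveal where an error struck without revealing the encoded amplitudes.

```python
# Three-qubit bit-flip code: encode a|0> + b|1> as a|000> + b|111>.
# A state is an 8-entry real statevector indexed by bitstrings 000..111.
a, b = 0.6, 0.8                      # arbitrary normalized amplitudes
state = [0.0] * 8
state[0b000] = a                     # a|000>
state[0b111] = b                     # b|111>

def flip(vec, qubit):
    """Apply a bit-flip (X) error to one physical qubit."""
    out = [0.0] * 8
    for idx, amp in enumerate(vec):
        out[idx ^ (1 << qubit)] = amp
    return out

def syndrome(vec):
    """Parities of qubit pairs (0,1) and (1,2). These are measurable
    without learning a or b -- local indistinguishability in action."""
    for idx, amp in enumerate(vec):
        if amp != 0:
            b0, b1, b2 = idx & 1, (idx >> 1) & 1, (idx >> 2) & 1
            return (b0 ^ b1, b1 ^ b2)

# An error hits qubit 1; the syndrome identifies which qubit to undo.
corrupted = flip(state, 1)
which = {(1, 1): 1, (1, 0): 0, (0, 1): 2}[syndrome(corrupted)]
recovered = flip(corrupted, which)
print(recovered == state)  # -> True
```

Note that the syndrome takes the same value on every branch of the superposition, so measuring it collapses nothing about the logical state; that is exactly the loophole error correction exploits.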
Zhi Li, a postdoc at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, was well versed in the theory of quantum error correction. But the subject was far from his mind when he struck up a conversation with his colleague Latham Boyle. It was the fall of 2022, and the two physicists were on an evening shuttle from Waterloo to Toronto. Boyle, an expert in aperiodic tilings who lived in Toronto at the time and is now at the University of Edinburgh, was a familiar face on those shuttle rides, which often got stuck in heavy traffic.
"Normally they could be very miserable," Boyle said. "This was like the greatest one of all time."
Before that fateful evening, Li and Boyle knew of each other's work, but their research areas didn't directly overlap, and they'd never had a one-on-one conversation. But like countless researchers in unrelated fields, Li was curious about aperiodic tilings. "It's very hard to be not interested," he said.
Interest turned into fascination when Boyle mentioned a special property of aperiodic tilings: local indistinguishability. In that context, the term means something different. The same set of tiles can form infinitely many tilings that look completely different overall, but it's impossible to tell any two tilings apart by examining any local area. That's because every finite patch of any tiling, no matter how large, will show up somewhere in every other tiling.
"If I plop you down in one tiling or the other and give you the rest of your life to explore, you'll never be able to figure out whether I put you down in your tiling or my tiling," Boyle said.
To Li, this seemed tantalizingly similar to the definition of local indistinguishability in quantum error correction. He mentioned the connection to Boyle, who was instantly transfixed. The underlying mathematics in the two cases was quite different, but the resemblance was too intriguing to dismiss.
Li and Boyle wondered whether they could draw a more precise connection between the two definitions of local indistinguishability by building a quantum error-correcting code based on a class of aperiodic tilings. They continued talking through the entire two-hour shuttle ride, and by the time they arrived in Toronto they were sure that such a code was possible; it was just a matter of constructing a formal proof.
Li and Boyle decided to start with Penrose tilings, which were simple and familiar. To transform them into a quantum error-correcting code, they'd have to first define what quantum states and errors would look like in this unusual system. That part was easy. An infinite two-dimensional plane covered with Penrose tiles, like a grid of qubits, can be described using the mathematical framework of quantum physics: The quantum states are specific tilings instead of 0s and 1s. An error simply deletes a single patch of the tiling pattern, the way certain errors in qubit arrays wipe out the state of every qubit in a small cluster.
The next step was to identify tiling configurations that wouldn't be affected by localized errors, like the virtual qubit states in ordinary quantum error-correcting codes. The solution, as in an ordinary code, was to use superpositions. A carefully chosen superposition of Penrose tilings is akin to a bathroom tile arrangement proposed by the world's most indecisive interior decorator. Even if a piece of that jumbled blueprint is missing, it won't betray any information about the overall floor plan.