Daily Archives: October 11, 2022

Cougar Pride Robotics has eyes on the problems of the future – White Mountain Independent

Posted: October 11, 2022 at 12:20 am

Read the rest here:

Cougar Pride Robotics has eyes on the problems of the future - White Mountain Independent

Posted in Robotics

igus acquires majority stake in Commonplace Robotics – Robotics and Automation News

Posted: at 12:20 am

Motion plastics specialist igus is investing in growing its low-cost automation activities and has acquired a majority stake in robot integrator Commonplace Robotics, based in Bissendorf near Osnabrück.

Commonplace Robotics specialises in intuitive control and software as well as power electronics for robotics, in the industrial and educational sectors.

Both companies have worked together closely for six years and have jointly developed the iRC igus Robot Control, which complements igus's low-cost kinematics parts made of high-performance plastics.

Commonplace Robotics was founded 11 years ago by Dr. Christian Meyer, who until then had worked at the Fraunhofer Institute for Manufacturing Engineering and Automation.

The name says it all: making the integration and operation of robots so low-cost and accessible that they become commonplace, i.e. usable anywhere.

In 2016, Dr. Meyer approached igus because he found that igus's robot kinematics matched his vision of "commonplace": low-cost, simple, and suitable for industry.

Since then, the two companies have jointly developed products such as the iRC igus Robot Control and the ReBeL Cobot as well as an actuator.

The manufacture of Commonplace Robotics components, from firmware and software to control cabinet construction and board assembly, is vertically integrated, so new developments can be implemented quickly.

Frank Blase, CEO of igus, says: "Many customers are surprised that they can realise simple automation tasks in just 30 minutes without any programming knowledge.

"We are very pleased that after the intensive cooperation of the last six years, an even more focused approach to low-cost automation is now possible."

With the acquisition, Commonplace Robotics and igus say they are consolidating their innovative strength.

Dr Meyer says: "We are looking forward to several exciting projects with igus, especially via the RBTX platform for low-cost robots, where new requirements from customers from all areas of industry come to our laboratories every day. Much of this can be implemented quickly, especially as we will grow with the new investment."

The latest product of this cooperation is the ReBeL Cobot, costing just €4,970 including the control. The actuator, also available as a single component, combines the plastics know-how of igus in the gearbox with the power electronics and software from Commonplace Robotics.

With six degrees of freedom (DoF), the ReBeL can handle a maximum payload of 2 kg at a reach of 664 mm, with an operating dead weight of only 8.2 kg.

Enquiries and orders come both from classical applications, such as quality control and pick-and-place in mechanical engineering, and from new application areas, such as restaurant automation and automated farming.

Link:

igus acquires majority stake in Commonplace Robotics - Robotics and Automation News

Posted in Robotics

Matt Butcher on Web Assembly as the 3rd Wave of Cloud Computing – InfoQ.com

Posted: at 12:19 am

Wesley Reisz: Cloud computing can be thought of as two, or, as today's guest will discuss, three different waves.

The first wave of cloud computing can be described as virtualization. Along came the VM, and we were no longer running on our physical compute; we moved our apps onto virtual machines. We improved density, resiliency, and operations. The second wave came along with containers, and we built orchestrators like Kubernetes to help manage them. Startup times decreased. We improved isolation between teams, we improved flow and velocity, we embraced DevOps. We also really introduced the network into how our applications operated, and we've had to adapt and think about that as we've been building apps. Many have described serverless (or functions as a service) as a third wave of cloud compute.

Today's guest, the CEO of Fermyon Technologies, is working on functions as a service delivered via Wasm (Web Assembly), and that will be the topic of today's podcast.

Hi, my name is Wes Reisz. I'm a technical principal with ThoughtWorks and cohost of the InfoQ podcast. In addition, I chair a software conference called QCon San Francisco. QCon is a community of senior software engineers focused on sharing practical, marketing-free solutions to real-world engineering problems. If you've searched the web for deeply technical topics and run across videos on InfoQ, odds are you've seen some of the talks I'm referring to from QCon. If you're interested in being a part of QCon and contributing to that conversation, the next one is happening at the end of October in the Bay Area. Check us out at qconsf.com.

As I mentioned, today our guest is Matt Butcher. Matt is a founding member of dozens of open-source projects, including Helm, Cloud Native Application Bundles, Krustlet, Brigade, the Open Application Model, Glide, the PHP HTML5 parser, and QueryPath. He's contributed to over 200 open-source projects spanning dozens of programming languages. Today on the podcast we're talking about distributed systems and how Web Assembly can be used to implement functions as a service. Matt, welcome to the podcast.

Matt Butcher: Thanks for having me, Wes.

Wesley Reisz: In that intro, I talked about two waves of cloud compute. You talk about a third; what is the third wave of cloud compute?

Matt Butcher: Yes, and actually, spending a little time on the first two autobiographically helps articulate why I think there's a third. I got into cloud services back when OpenStack got started. I had joined HP, and joined the HP Cloud group right when they committed a lot of resources to developing OpenStack, which had a full virtual machine layer and object storage and networking and all of that. I came into it as a Drupal developer, of all things. I was doing content management systems and having a great time, running the developer CMS system for HP, and as soon as I got my first taste of the virtual machine world, I was totally hooked, because it felt magical.

Up until that time, we really thought about the relationship between a piece of hardware and the operating system as being one-to-one: my hardware, at any given time, can only run one operating system. And I'm one of those people who's been dual-booting with Linux since the nineties, and suddenly the game changed. And not only that, but I didn't have to stand up a server anymore. I could essentially rent space on somebody else's server and pay their electricity bill to run my application, right?

Wesley Reisz: Yes, it was magic.

Matt Butcher: Yes, magic is exactly the word for how it felt at that time, and I was just hooked. I got really into that world and had a great time working on OpenStack. Then along came containers, things changed up for me job-wise, and I ended up in a different job working on containers. At the time I was trying to wrestle through this inner conflict: are containers going to defeat virtual machines, or are virtual machines going to defeat containers? I was, at the time, really myopically looking at these as competing technologies, where one would come out the victor and the other would fall by the wayside of the history of computing, as we've seen happen so many other times with different technologies.

It took me a while, really all through my Deis days, up until Microsoft acquired Deis and I got a view of what it looked like inside the sausage factory, to realize that, no, we weren't seeing two competing technologies. We were really seeing two waves of computing happen. The first one was us learning how to virtualize workloads using a VM style, and then containers offered an alternative way, with some different pros and some different cons. But when you looked at the Venn diagram of features and benefits and even the patterns that we used, there was actually very little overlap between the two, surprisingly little.

I started reconceptualizing the cloud compute world as having this wavy kind of structure. So here we are at Microsoft, the team that used to be Deis. We joined Microsoft, gained new developers from other parts of Microsoft, and started to interact with the functions-as-a-service team, the IoT team, the AKS team, and all of these different groups inside of Azure, and got a real look, a very eye-opening look, at what all of this stuff looks like under the hood and what the real struggles are to run a cloud at scale. I hate using the term "at scale," but that's really what it is there. But we were also doing open source and engaged with startups and medium-sized companies and large companies, all of whom were trying to build technologies using this stuff: containers, virtual machines, object storage, and so on.

We started seeing where both the megacorps and the startups were having a hard time trying to solve problems using containers and virtual machines. At some point we realized, "Hey, there are problems we can't solve with either of these technologies." We can only push the startup time of containers down to a few hundred milliseconds, and that's if you're really packing stuff in and really careful about it. Virtual machine images are always going to be large, because you've always got to package the kernel. We started this checklist of things, and at some point it became the checklist for the next wave of cloud computing.

That's where we got into Web Assembly. We started looking around and saying, "Okay, what technology candidates are there that might fill a new compute niche, where we can pack something together, distribute it onto a cloud platform, and have the cloud platform execute it?" Serverless at the time was getting popular (and we should come back to serverless later, because it's an enticing topic on its own) but wasn't necessarily solving that problem, and we wanted to address it more at an infrastructure layer and ask, "Is there a third kind of cloud compute?"

And after looking around at a couple of different technologies, we landed on Web Assembly, of all things: a browser technology. But what made it good for the browser, that security isolation model, small binary sizes, fast startup times, those are core things you have to have in a web browser. People aren't going to wait for the application to start, and they're not going to tolerate a website being able to root their system through the browser, so all of these security and performance characteristics, and the multi-language, multi-architecture characteristics, were important for the browser. That list was starting to match up very closely with the list of things we were looking for in this third wave of cloud computing.

This became our Covid project. We spent our Fridays asking: what would it mean to try and write a cloud compute layer with Web Assembly? And that became Krustlet, which is essentially a Web Assembly runtime for Kubernetes. We were happy with that, but we started saying, "Happy, yes, but is this the right, complete solution? Probably not." And that was about the time we thought, "Okay, it's time to do the startup thing. Based on all the knowledge we've accrued about how Web Assembly works, we're going to start without the presupposition that we need to run inside of a container ecosystem like Kubernetes; we just need to start fresh." And that was really what got us kicking with Fermyon, what got us excited, and what got us to create a company around this idea that we can create the right kind of platform to illustrate what we mean by this third wave of cloud computing.

Wesley Reisz: We're talking about Web Assembly to be able to run server side code. Are we talking about a project specifically, like Krustlet's a project, or are we talking about an idea? What is the focus?

Matt Butcher: Oh, that's a great question, because as a startup founder my initial answer is, "Well, we're talking about a project," but actually I think we're really talking more about an ecosystem. There are several ecosystems we could choose from as illustrations of this, the Java ecosystem or the .NET ecosystem, but the Docker ecosystem is such a great example of an ecosystem evolving, and one that's recent enough that we all remember it. There were some core technologies: Docker, of course, and early schedulers including Mesos and Swarm and Fleet, and key-value storage systems like etcd and Consul. So there were a whole bunch of technologies that co-evolved in order to create an ecosystem, but the core of the ecosystem was the container.

And that's what I think we're in probably the first year or two of seeing develop inside of Web Assembly. A number of different companies and individual developers and scholars in academia have all sort of said, "Hey, the Web Assembly binary looks like it might be the right foundation for this. What are the technologies we need to build around it, and what's the community structure we need to build around it?" Because standardization is still the gotcha for almost all of our big efforts. We want things standardized enough that we can run reliably and understand how things are going to execute, while we all still want to keep enough space open that we can do our own thing and pioneer a little bit.

I think the answer to your question is that the ecosystem is the first thing for this third wave of cloud compute. We need groups like the Bytecode Alliance, where the focus is on working together to create specifications like the WebAssembly System Interface (WASI), which determines how you interface with a system clock, how you load environment variables, and how you read and write files. We need that, and a community around it, as a foundational piece.

There are conferences like the Web Assembly Summit and Wasm Day at KubeCon, and we need those as areas where we can collaborate. And then we need lots and lots of developers, often working for different companies, all trying to solve a set of problems that define the boundaries of the ecosystem. I think we are in about year one and a half to year two of really seeing that flourish. The Bytecode Alliance has been around a little longer, but was only formalized about a year and a half ago. You're seeing a whole bunch of startups like Fermyon and Suborbital and Cosmonic and Profian bubbling up, but you're also seeing Fastly and Cloudflare buying into this, and Microsoft, Amazon, and Google buying into this. So we're really seeing a replay of the same ecosystem formation that we saw in the Docker ecosystem, when it was Red Hat and Google.

Wesley Reisz: I know of Fastly doing things at the Edge, being able to compile things at the Edge and run Web Assembly (Wasm) there. I can write Wasm applications myself and deploy them, but the cloud part: how do I deploy Wasm in a Cloud Native way? How does that work today?

Matt Butcher: In this case, Cloud Native and Edge are similar. Maybe the Edge is a little more constrained in some of the things it can do and a little faster to deliver on others. But at the core of it, we need to be able to push a number of artifacts somewhere and understand how they're going to be executed. We know, for example, we've got the binary, a Web Assembly binary file, and then we need some supporting files. A good example of this is fermyon.com, which is powered by a CMS that we wrote called Bartholomew. For Bartholomew, we need the Web Assembly binaries that serve out the different parts of the site, and it's created with a microservice architecture; at this point I think it's got five different binary files that serve fermyon.com.

Then we need all of the blog posts and all the files and all the images and all the CSS, some of which are dynamic and some of which are static. And somehow we have to bundle all of these up. This is a great example of why the Bytecode Alliance is a great entity to have in a burgeoning ecosystem: we need a standard way of pushing these bundles up to a cloud. And Fastly's Compute@Edge is very similar; we need a way to push artifacts up to Compute@Edge with Fastly, or any of these.

There's a working group called SIG Registries that convenes under the Bytecode Alliance and is working on defining a package format and how we're going to push and pull packages: essentially the equivalent, in the Docker world, of pushing and pulling from registries, packaging things up with a Dockerfile, and creating an image. The same kind of thinking is happening in the Bytecode Alliance, specific to Web Assembly. SIG Registries is a great place to get involved if that's the kind of thing people are interested in; you can find out about it at bytecodealliance.org. That's one of the pieces of community building and ecosystem building that we've got to be engaged in.

Wesley Reisz: You started a company, Fermyon. What's the mission of Fermyon? Is it to take those artifacts and deploy them onto a cloud footprint? What is Fermyon doing?

Matt Butcher: For us, we're really excited about the idea that we can create a cloud runtime that can run in AWS, in Azure, in Google, in DigitalOcean, that can execute these Web Assembly modules, and that we can streamline that experience to make it frictionless. It's really a two-part thing: we want to make it easy for developers to build these kinds of applications, and then make it easy for them to deploy and manage those applications over the long term.

When you think about the development cycle, oftentimes as we build these new kinds of systems we introduce a lot of fairly heavy tooling. Virtual machines are still hard to build, even now, a decade and some into the ecosystem. Technologies like Packer have made it easier, but it's still kind of hard. The number one thing that Docker did amazingly well was create a format that made it easy for people to take their existing applications and package them up, using a Dockerfile, into an image. We looked at that and said, "Could we make it simpler? Could we make the developer story easier than that?"

And the cool thing about Web Assembly is that all these languages are adding support into their compilers. So with Rust, you just add --target wasm32-wasi and it compiles the binary for you. We've really opted for that lightweight tooling.
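As an illustration of how lightweight that path is, a plain Rust function needs nothing Wasm-specific in its source to compile to a Wasm module (the function name and commands below are illustrative, not taken from Fermyon's documentation):

```rust
// A minimal Rust function that can be compiled to WebAssembly.
// Illustrative build steps (the wasm32-wasi target must be installed):
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
// The same source also compiles natively; only the target flag changes.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

The `#[no_mangle]` attribute simply keeps the exported symbol name stable so the host can find it in the compiled module.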

Spin is our developer tool, and the Spin project is basically designed to assist in what we call the inner loop of development. This is a big Microsoft-y term, I think: inner and outer loop of development.

Wesley Reisz: Fast compile times.

Matt Butcher: What we really mean is, when you as the individual developer are focused on your development cycle, and you've blocked out the world, and you're just wholly engaged in your code, you're in your inner loop, you're in flow. So we wanted to build some tools that would help developers in that mode very quickly and rapidly build Web Assembly based applications, without having to think about deployment so much and without having to use a lot of external tools. Spin is really the one tool that we think is useful there, and we've written a VS Code extension to streamline that.

And then on the cloud side, you've got to run it somewhere, and we built the tool we call Fermyon, or the Fermyon platform, to execute there. That's a conglomeration of a number of open-source projects with a nice dashboard on top that you can install into DigitalOcean or AWS or Azure or whatever you want and get it running there.

Wesley Reisz: And that runs a full Wasm binary? Earlier I talked about functions as a service; does it run functions, or does it run full Wasm binaries?

Matt Butcher: And this gets us back into the serverless topic we were talking about earlier. Serverless, I think, has always been a great idea. The core of it is: can we make it possible for the developer to not even have to think about what a server is?

Wesley Reisz: Exactly. The plumbing.

Matt Butcher: And functions as a service to me is just about the purest form of serverless that you can get where not only do you not have to think about the hardware or the operating system, but you don't even have to think about the web framework that you're running in, right? You're merely saying, "When a request comes into this endpoint, I'm going to handle it this way and I'm going to serve back this data." Within moments of starting your code, you're deep into the business logic and you're not worried about, "Okay, I'm going to stand up an HTTP server, it's got to listen on this port, here's the SSL configuration."

Wesley Reisz: No DaemonSets, it's all part of the platform.

Matt Butcher: Yes. And as a developer, that to me is like, "Oh, that's what I want. No thousand lines of YAML config." Serverless and functions as a service were looking like very promising models to us. So as we built out Spin, we decided that, at least as the first primary model, that's the model we wanted to use. Spin, for example, functions more like an event listener, where you say, "Okay, on an HTTP request, here's the request object; do your thing and send back a response object." Or, "As a Redis listener, when a message comes in on this channel, here's the message; do your thing and then optionally send something back." And that model really is much closer to Azure Functions and Lambda and technologies like that. We picked it because developers say they really enjoy that model. We think it's a great complement for Web Assembly. It really gets you thinking about writing microservices in terms of very, very small chunks of code, and not in terms of HTTP servers that happen to have microservice infrastructure built in.
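The request-in, response-out model Butcher describes can be sketched in plain Rust. The types and function below are hypothetical stand-ins, not the actual Spin SDK, but the shape is the same: a bare function, with no HTTP server in the component's code:

```rust
// Hypothetical sketch of the event-listener model: the platform owns the
// HTTP server, the port, and the TLS configuration; the component is just
// a function from request to response.

pub struct Request {
    pub path: String,
}

pub struct Response {
    pub status: u16,
    pub body: Vec<u8>,
}

// "On an HTTP request, here's the request object; do your thing and
// send back a response object."
pub fn handle(req: Request) -> Response {
    match req.path.as_str() {
        "/hello" => Response { status: 200, body: b"Hello".to_vec() },
        _ => Response { status: 404, body: Vec::new() },
    }
}
```

In the real SDK the handler carries its own request and response types and is compiled to a Wasm component; the point is that the developer writes only the mapping from request to response.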

Wesley Reisz: Spin lets you write in this inner-loop, fast-flow, event-driven model, where you respond to events like in the serverless model, and then you're able to package that into Wasm that can be deployed with Fermyon Cloud? Is that the idea?

Matt Butcher: Yes. When you think about writing a typical HTTP application, even going back to, say, Rails: Rails and Django, I think, really defined how we think about HTTP applications, and you have this concept of the routing table. In the routing table you say, "When somebody hits /foo, that executes my myFoo module. If I hit /bar, that executes my myBar module." That's really the direction we went with the programming model: when you hit fermyon.com/index, it executes the Web Assembly module that generates the index file and serves that out. When you hit /static/file.jpeg, it loads the file server and serves it back. I think that model resonates with pretty much all modern web application and microservice developers, but all you're writing in the back end is just a function. I really like that model, because it feels like you're getting right to the meat of what you actually care about within a moment of starting your application, instead of a half hour or an hour later, when you've written out all the scaffolding for it.
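That routing table shows up directly in Spin's manifest. A sketch of what a spin.toml along these lines could look like (the field names, routes, and paths here are illustrative, based on the Spin of late 2022, and may not match the current manifest format exactly):

```toml
spin_version = "1"
name = "example-site"
version = "0.1.0"
trigger = { type = "http", base = "/" }

# When a request hits /index, execute the index component's Wasm module.
[[component]]
id = "index"
source = "target/wasm32-wasi/release/index.wasm"
[component.trigger]
route = "/index"

# Requests under /static/... are handled by a file-server component.
[[component]]
id = "assets"
source = "fileserver.wasm"
[component.trigger]
route = "/static/..."
```

Each route maps to a component, so the "back end" for any given path really is just one function in one Wasm module.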

Wesley Reisz: What about state? You mentioned Redis before, having Redis listeners; how do you manage state when you're working with Spin or with Fermyon Cloud? How does that come into play?

Matt Butcher: That's a great architectural discussion for microservices as a whole. Coming from Deis and Microsoft and then on into Fermyon (or, for some of the other engineers who work at Fermyon, from Google into Fermyon), we've seen the microservice pattern be successful repeatedly. And statelessness has been a big virtue of the microservice model, as far as the binary keeping state internally goes, but you've got to put stateful information somewhere.

Wesley Reisz: At some point.

Matt Butcher: The easy one is, "Well, you can put it in files," and WASI and Web Assembly introduced file support two years ago, and that was good, but that's not really where you want to stop. With Spin, we began experimenting with adding some additional options, like Redis support and generic key-value storage, which is coming out very soon. Database support is coming really soon, and those kinds of things. Spin, by the way, is open source, so you can actually go see all these PRs in flight as we work on PostgreSQL support and stuff like that.

It's coming along, and the strategy we want to use is the same one used in Docker containers and other stateless microservice architectures, where state gets persisted in the right kind of data storage for whatever you're working on, be that a caching service, a relational database, or a NoSQL database. We are hoping that as the Web Assembly component model and other similar standards solidify, this kind of stuff won't be a Spin-specific feature but just the way Web Assembly as a whole works, and different people using different architectures will be able to pull in the same kinds of components and get the same kind of feature set.

Wesley Reisz: Yes, very cool. When we were talking just before we started recording, you mentioned that you wanted to talk a little bit about the performance of Web Assembly and how it's changed. I remember, a year ago, maybe two years ago, I did a podcast with Lin Clark. We were talking about Fastly and running Web Assembly at the Edge, like we were talking about before, and if I remember right (I may be wrong), the overhead for the inline request compile time was about 3 ms, which I thought was impressive. But you said you're way lower than that now. What is the request-level inline performance time of Web Assembly these days?

Matt Butcher: We're lower now. Fastly's lower now. As an ecosystem, we've learned a lot in the last couple of years about how to optimize and how to pre-initialize and cache things ahead of time. 3 ms even a year and a half ago would've been a very good startup time. Then we pushed down toward a millisecond, and now we are sub one millisecond.

And so again, let's characterize this in terms of these three waves of cloud computing. A virtual machine is a powerhouse: you start with the kernel, and you've got the file system and the whole process table and everything starting up and initializing, and then opening sockets and everything; that takes minutes to do. Then you get to containers, and containers on average take a dozen seconds to start up. You can push down into the low-seconds range, and if you get really aggressive and you're really not doing very much, you might be able to get into the hundred-milliseconds or several-hundred-milliseconds range.

One of the core features that we think this third wave of cloud compute needed, and one of our criteria coming in, was that it's got to be in the tens of milliseconds. That was a design goal coming out of the gate for us, and the fact that we're now seeing that push down below the millisecond marker, getting from a cold state to something executing, to that first instruction, in under a millisecond, is just phenomenal.

In many ways we've learned lessons from the JVM and the CLR and lots and lots of other research that's been done in this area. And in other ways, some of it just comes about because both we and Fastly and other cloud providers, distinctly from the browser scenario, can preload code, compile it ahead of time to native code, and then have it cached there and ready to go, because we know everything we need to know about what the architecture and the system are going to look like when that first invocation hits. That's why we can really start to drive times way, way down.

Occasionally you'll see a blog post of somebody saying, "Well, Web Assembly wasn't terribly fast when I ran it in the browser." And then those of us on the cloud side are saying, "Well, we can just make it blazingly fast." A lot of that difference is because the things that the runtime has to learn about the system at execution time in the browser, we know way ahead of time in the cloud, and so we can optimize for that. I wouldn't be surprised to see Fastly, Fermyon, and other companies pushing even lower until it really does start to appear to be at native or faster-than-native speeds.

Wesley Reisz: That's awesome. Again, I haven't really tracked Web Assembly in the last year and a half or so, but some of the other challenges were types and, I think, a component approach where you could share things. How has that advanced over the last year and a half? What's the state of that today?

Matt Butcher: Specifications often move in fits and starts, right? And the W3C, by the way, the same standards body that does CSS, HTML and HTTP, is the standards body that works on Web Assembly. Types was one of the initial questions: "How do we share type information?" And that morphed in and out of several other models, and ultimately what's emerged, borrowing heavily from existing academic work on components, is that Web Assembly is now gaining a component model. What that means in practice is that when I compile a Web Assembly module, I can also build a file that says, "These are my exported functions, this is what they do, and these are the types that they use." And types here aren't just ints, floats, and strings. We can build up very elaborate struct-like types where we say, "This is a shopping cart, and a shopping cart has a count of items, and an item looks like this."
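The struct-like types sketched above would, in the component model, be declared in an interface file shipped alongside the compiled module. Here is a rough Python equivalent of the shopping-cart shape described, with field names invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """An item 'looks like this': a few typed fields, not just primitives."""
    sku: str
    name: str
    price_cents: int

@dataclass
class ShoppingCart:
    """A cart has a count of items, each with the Item shape above."""
    items: list = field(default_factory=list)

    @property
    def count(self):
        return len(self.items)

    def total_cents(self):
        return sum(item.price_cents for item in self.items)
```

In the real component model this description lives outside any one language, so a Rust producer and a JavaScript consumer can agree on the same shape.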

And the component model for Web Assembly can articulate what those look like, but it can also do a couple of other really cool things. This is where I think we're going to see Web Assembly really break out. Developers will be able to do things in Web Assembly that they have not yet been able to do using other popular architectures, other popular paradigms. And this is that Web Assembly can articulate, "Okay, when this module starts up, it needs to have something that looks like a key-value storage. Here's the interface that defines it. I need to be able to put a string key with a string value, and I need to be able to get a string key and get back a string object, or I need a cache where it lives for X amount of time or else I get a cache miss." But it has no real strong feelings about, it doesn't have any feelings at all. It's binary, it has no real strong...

Wesley Reisz: Not yet. Give it time.

Matt Butcher: Anthropomorphizing code.

And then at startup time we can articulate, Fastly can say, "Well, we've got a cache-like thing and it'll handle these requests." And Fermyon can say, "Well, we don't, but we can load a Docker container that has those cache-like characteristics and expose a driver through that." And suddenly applications can be built up based on what's available in the environment. Now, because Web Assembly is multi-language, what this means is that, for the most part, we've been writing the same tools over and over again in JavaScript and Ruby and Python and Java. If we can compile all of them to the same binary format, and we can expose the imports and exports for each thing, then suddenly language doesn't make so much of a difference. And so whereas in the past we've had to say, "Okay, here's what you can do in JavaScript and here's what you can do in Python," now we can say, "Well, here's what you can do."
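The linking step described here, a module declaring "I need something cache-like" and the host wiring in whatever it has at startup, can be sketched like this. The class and function names are invented for the example; real runtimes resolve imports from the component's interface description, not from a Python dict.

```python
class NativeCache:
    """A host that has a built-in cache-like thing satisfies the import directly."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

class ContainerCache:
    """A host without one could back the same interface with, say, a Redis
    container; a dict stands in for that external service here."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

def link_imports(available):
    """At startup, satisfy the module's declared 'cache' import with
    whatever implementation this particular host happens to have."""
    return available.get("cache") or ContainerCache()
```

The module's code is identical on both hosts; only the implementation bound at startup differs, which is what makes applications assemble themselves from what the environment offers.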

Wesley Reisz: Reuse components.

Matt Butcher: And whether the key value store is written in Rust or C or Erlang or whatever, as long as it's compiled to Web Assembly, my JavaScript application can use it and my Python app can use it. And that's where I think we should see a big difference in the way we can start constructing applications by aggregating binaries instead of fetching a library and building it into our application.

Wesley Reisz: Yes, it's cool. Speaking of, language support was another thing that you wanted to talk about. There's been a lot of change and momentum happening with the languages themselves and their support for Web Assembly, like Swift; there's things with Node; we talked about Blazor for a minute. What's happening in the language space when it comes to Web Assembly?

Matt Butcher: To us, Web Assembly will not be a really viable technology until there is really good language support. On fermyon.com we actually track the status of the top 20 languages as determined by RedMonk, and we watch very closely and continually update our matrix of the status of Web Assembly in these languages. Rewind again back only a year or two, and all the checkboxes that are checked are basically C and Rust, right? Both great languages, both well-respected languages, both not usually the first languages a developer says, "Yes, this is my go-to language." Rust is gaining popularity of course, and we love Rust, but JavaScript wasn't on there. Python wasn't on there, Ruby wasn't on there. Java and C# certainly weren't on there. What we've seen over only a year, year and a half, is just language after language first announcing support and then rapidly delivering on it.

Earlier this year, I was ecstatic when I saw, in just the space of two weeks, Ruby and Python both announce that the CRuby and CPython runtimes were compilable to Web Assembly with WASI, which effectively meant that all of a sudden Spin, whose applications were kind of limited to Rust and C at the time, could suddenly run Python and Ruby applications. Go, the core project, is a little bit behind on Web Assembly support, but the community picked up the slack, and TinyGo can compile Go programs into Web Assembly plus WASI. Go came along right around, actually a little bit earlier than, Python and Ruby. But now what we're seeing, now being in the last couple of weeks, is the beginning of movement from the big enterprise languages. Microsoft has been putting a lot of work into Web Assembly in the browser with the Blazor framework, which essentially ran by compiling the CLR, the runtime for C# and those languages, into Web Assembly and then interpreting the DLLs.

But what they've been saying is that was just the first step, right? The better way to do it is to compile C#, F#, all the CLR-supported languages directly into Web Assembly and be able to run them directly inside of a Web Assembly runtime, which means a big performance boost, much smaller binary sizes, and all of a sudden it's easy to start adding support for newly emerging specifications because it doesn't have to get routed through multiple layers of indirection.

Steve Sanderson, who's one of the lead, I think he's the lead PM for the .NET framework, has shown off a couple of times since KubeCon in Valencia, now in I think four or five different places, where they are in supporting .NET to Web Assembly with WASI, and it's astounding. So often we've thought of languages like C# as being sort of reactive, looking around at what's happening elsewhere and reacting, but they're not. They are very forward-thinking engineers, and David Fowler's brilliant, and the stuff they're doing is awesome. Now they've earmarked Web Assembly as the future, as one of the things they really want to focus on. And I'm really excited; my understanding is the next version of .NET will have full support for compiling to native Web Assembly, and the working drafts of that are out now.

Wesley Reisz: Yes, that's awesome. You mentioned that there's work happening with Java as well, so Java, the CLR, that's amazing.

Matt Butcher: Yep. Kotlin is also working on a native implementation. I think we'll see Java, Kotlin, and the .NET languages all coming by the end of the year. I'm optimistic. I have to be, because I'm a startup founder, and if you're not optimistic, you won't survive. But I think they'll be coming by the end of the year. Of the top 20 languages, I think we'll see probably 15-plus of them support Web Assembly by the end of the year.

Wesley Reisz: That's awesome. Let's come back for a second to Fermyon. We're going to wrap up here, but I wanted you to walk through an app that you talk about, Wagi, which is in one of your blog posts, and how you might go about using Spin and Fermyon cloud. Could you walk through what it looks like to bootstrap an app? What would it look like for me if I wanted to go use Fermyon cloud?

Matt Butcher: Spin's the tool you'd use there; Wagi is really just a description of how to write an application. You download Spin from our GitHub repository and you type spin new, and then the type of application you want to write and the name. Say I want to create Hello World in Rust: it's spin new rust hello-world. And that command scaffolds everything out; it runs the cargo commands in the background and creates your whole application environment. When you open it from there, it's going to look like your regular old Rust application. The only thing that's really happening behind the scenes is wiring up all the pieces for the component model and for the compiler so that you don't have to think about that.

With spin new, you've got your Hello World app created instantly. You can edit it however you'd normally edit; I use VS Code. From there, you type spin build and it'll build your binary for you. Again, largely it's invoking the Rust compiler in Rust's case, or the TinyGo compiler in Go's case, or whatever. And then spin deploy will push it out to Fermyon. So assuming you've got a Fermyon instance running somewhere, you can spin deploy and have it pushed out there. If you're doing local development, then instead of typing spin deploy, you can type spin up, and it'll create a local web server for you and run your application inside it, so the local development story is super easy. In total, we say you should be able to get your first Spin application up and running in two minutes or less.

Wesley Reisz: How do you target different endpoints when you deploy out to the cloud? Or do you not worry about it? Is that what you pay Fermyon for, for example?

Matt Butcher: Yes, you're building your routing table as you build the application. There's a toml file in there called spin.toml where you say, "Okay, if they hit slash, then they load this module. If they hit /foo, they hit that module," and it supports all the normal things that routing tables support. From there, when you push it out to the Fermyon platform, the platform will provision your SSL certificate and set up a domain name for you. The Fermyon dashboard that comes as part of that platform will allow you to set up environment variables and things like that. So as the developer, you're really just thinking in terms of how you build your binary and what you want to do. And then once you deploy it, you can log into the Fermyon dashboard and start tweaking and doing the DevOps side, what we would call the outer loop of development.
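The routing table described here maps route patterns to modules. A rough model of the longest-prefix matching such tables typically do (the module names are made up, and the real manifest is TOML, not Python):

```python
# Invented routes standing in for a spin.toml-style routing table:
ROUTES = [
    ("/", "root_module"),
    ("/foo", "foo_module"),
]

def resolve(path):
    """Pick the module whose pattern is the longest matching prefix."""
    best = None
    for pattern, module in ROUTES:
        matches = path == pattern or path.startswith(pattern.rstrip("/") + "/")
        if matches and (best is None or len(pattern) > len(best[0])):
            best = (pattern, module)
    return best[1] if best else None
```

Requests under /foo go to that module while everything else falls through to the root handler, the usual behavior for HTTP routing tables.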

Wesley Reisz: What's next for Fermyon?

Matt Butcher: We are working on our software as a service, because again, our goal is to make it possible for anybody to run Spin applications and get them up and running in two minutes or less, even when that means deploying them out somewhere where they've got a public address. So while right now, if you want to run Fermyon, you've got to go install it in your AWS cluster, your Google Cloud cluster, whatever, as we roll out this service later this year, it should make it possible for you to get started just by typing spin deploy and have your application up and running inside of Fermyon.

Wesley Reisz: Well, very cool. Matt, thanks for taking the time to catch up and help us further understand what's happening in the Wasm community, and for telling us about Fermyon and Fermyon cloud.

Matt Butcher: Thanks so much for having me.


Here is the original post:

Matt Butcher on Web Assembly as the 3rd Wave of Cloud Computing - InfoQ.com


Beeks bets on further growth as financial firms flock to cloud computing – Yahoo News UK

Posted: at 12:19 am


Cloud computing and connectivity provider Beeks Financial has signed two significant multi-year contracts that the company says will underpin revenue growth going forward.

The contracts, which are covered by non-disclosure agreements, are with global asset management firms and are expected to be worth a total of £1.8 million over three years. The announcement came as Glasgow-based Beeks posted an increase in turnover and underlying earnings for the year to June 30.

AIM-listed Beeks supplies technology that speeds up online trading in financial products, and also operates an international network of data centres. Its Proximity Cloud is a single-use platform that banks and brokers can use without being hosted at a Beeks data centre, while Exchange Cloud is designed specifically for global financial exchanges and electronic communication networks.

"The majority of financial services organisations around the world are exploring how to utilise the power of the cloud to support their ambitions," chief executive Gordon McArthur said. "This presents us with a considerable opportunity and, through our Private Cloud, Proximity Cloud and Exchange Cloud, we have the offering to address it."


He added that the company will continue to invest in expansion following a £15m fundraising in April, of which more than £10m gross remains.

"Going for a higher amount allowed us to frontload capacity, so you'll see we've got a couple of million pounds' worth of stock sitting on the balance sheet, which will allow us to deliver this year's number. And then, as some of these [new] deals land, we will keep growing the investment both in product and infrastructure," Mr McArthur added.

The stockpile includes servers, networking gear and other IT equipment that has minimised the impact of supply chain disruptions on Beeks customers. From its new headquarters in Braehead, the company has also increased staffing levels to approximately 100 employees.


Beeks posted a 57 per cent increase in revenues, which rose to £18.3m during the year to June. Underlying earnings were 52 per cent higher at £6.3m.

After taking account of approximately £2m in deferred earn-out payments for the April 2020 acquisition of network monitoring specialist Velocimetrics, pre-tax profits fell to £660,000 from £1.25m the previous year.

Shares in Beeks closed 9.5p lower yesterday at 145.5p.

Visit link:

Beeks bets on further growth as financial firms flock to cloud computing - Yahoo News UK


The future of automotive computing: Cloud and edge – McKinsey

Posted: at 12:19 am

As the connected-car ecosystem evolves, it will affect multiple value chains, including those for automotive, telecommunications, software, and semiconductors. In this report, we explore some of the most important changes transforming the sector, especially the opportunities that may arise from the growth of 5G and edge computing. We also examine the value that semiconductor companies might capture in the years ahead if they are willing to take a new look at their products, their organizational and operational capabilities, and their go-to-market approaches.

Four well-known technology trends have emerged as key drivers of innovation in the automotive industry: autonomous driving, connectivity, electrification, and shared mobility, such as car-sharing services (Exhibit 1). Collectively, these are referred to as the ACES trends, and they will have a significant impact on computing and mobile-network requirements. Autonomous driving may have the greatest effect, since it necessitates higher onboard-computing power to analyze massive amounts of sensor data in real time. Other autonomous technologies, over-the-air (OTA) updates, and integration of third-party services will also require high-performance and intelligent connectivity within and outside of the car. Similarly, increasingly stringent vehicle safety requirements call for faster, more reliable mobile networks with very low latencies.

Exhibit 1

With ACES functions, industry players now have three main choices for workload location: onboard the vehicle, cloud, and edge (Exhibit 2).

Exhibit 2

To ensure that use cases meet the thresholds for technical feasibility, companies must decide where and how to balance workloads across the available computing resources (Exhibit 3). This could allow use cases to meet increasingly strict safety requirements and deliver a better user experience. Multiple factors may need to be considered for balancing workloads across onboard, edge, and cloud computing, but four may be particularly important. The first is safety, since workloads essential for passenger safety require extremely fast reaction times. Other considerations include latency, computing complexity, and requirements for data transfer, which depend on the type, volume, and heterogeneity of data.
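As a concrete, deliberately simplified illustration of weighing those four factors, the placement choice can be sketched as a simple rule. The thresholds below are invented for the example, not taken from the report.

```python
def place_workload(safety_critical, max_latency_ms, data_volume_mb):
    """Toy heuristic for choosing among onboard, edge, and cloud placement."""
    if safety_critical or max_latency_ms < 10:
        return "onboard"   # reaction time rules out any network hop
    if max_latency_ms < 100 and data_volume_mb < 50:
        return "edge"      # nearby compute with modest data transfer
    return "cloud"         # latency-tolerant or data-heavy workloads
```

Under this sketch, an emergency-braking workload lands onboard, while latency-tolerant workloads such as infotainment buffering land in the cloud, matching the use cases discussed below.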

Exhibit 3

Connected-car use cases today typically rely on either onboard computing or the cloud to process their workloads. For example, navigation systems can tolerate relatively high latency and may function better in the cloud. OTA updates are typically delivered via a cloud data center and downloaded via Wi-Fi when it is least disruptive, and infotainment content originates in the cloud and is buffered onboard to give users a better experience. By contrast, accident prevention workloads such as autonomous emergency-braking systems (AEBS) require very low latency and high levels of computing capability, which, today, may mean that they are best processed onboard the vehicle.


Advances in computing and connectivity are expected to enable many new and advanced use cases (Exhibit 4). These developments could alter where workloads are located. Of particular significance, the rollout of 5G mobile networks could allow more edge processing. Given the importance of these interrelated technologies, we explored their characteristics in detail, focusing on automotive applications.

Exhibit 4

5G technology is expected to provide the bandwidth, low latency, reliability, and distributed capabilities that better address the needs of connected-car use cases. Its benefits to automotive applications fall into three main buckets:

These benefits could contribute to greater use of edge applications within the automotive sector. Workloads that are not safety-criticalinfotainment and smart traffic management, for examplecould start to shift to the edge from onboard or in the cloud. Eventually, 5G connectivity could reduce latency to the point that certain safety-critical functions could begin to be augmented by the edge infrastructure, rather than relying solely on onboard systems.

Most automotive applications today tend to rely exclusively on one workload location. In the future, they may use some combination of edge computing with onboard or cloud processing that delivers higher performance. For instance, smart traffic management systems may improve onboard decision making by augmenting the vehicle's sensor data with external data (for example, other vehicles' telemetry data, real-time traffic monitoring, maps, and camera images). Data could be stored in multiple locations and then fused by the traffic management software. The final safety-related decision will be made onboard the vehicle. Ultimately, large amounts of real-time and non-real-time data may need to be managed across vehicles, the edge infrastructure, and the cloud to enable advanced use cases. In consequence, data exchanges between the edge and the cloud must be seamless.

The evolving automotive value chain will open many new opportunities for those within the industry and for external technology players. The total value created by connected-car use cases could reach more than $550 billion by 2030, up from about $64 billion in 2020 (Exhibit 5).

Exhibit 5

Increased connectivity opens up opportunities for players across the automotive value chain to improve their operations and customer services. Take predictive maintenance in cars as an example. Aftermarket maintenance and repair now predominantly follow a fixed-interval maintenance schedule or reactive maintenance and repair. There is little visibility into the volume of vehicles that need to be serviced in a particular period, leading to inefficiencies in service scheduling, replacement-parts ordering, and inventory, among other areas. Predictive maintenance using remote car diagnostics could improve the process by giving OEMs and dealers an opportunity to initiate and manage the maintenance process.

The pace of rollout of advanced connected-car use cases is highly contingent on the availability of 5G and edge computing. A variety of factors are converging to accelerate this. Demand is rising for these critical enablers, fueled by a proliferation of consumer and industry use cases. In the short term, value may be generated through enhancements to services already available with 4G, including navigation and routing, smart parking, centralized and adaptive traffic control, and monitoring of drivers, passengers, or packages.

We expect that greater 5G and edge availability may expand the list of viable use cases (technically and financially), boosting edge value exponentially. Looking to 2030, about 30 percent of our value estimate may be enabled by 5G and edge (from 5 percent in 2020), largely consistent with our cross-sectoral report on advanced connectivity.

Value creation could be accelerated by traditional players moving into adjacencies and by new entrants from industries not traditionally in the automotive value chain, such as communication system providers (CSPs), hyperscalers, and software developers. Players such as Intel, Nvidia, and the Taiwan Semiconductor Manufacturing Company are adding automotive-softwarecapabilities, leading to greater synergies and vertical-integration benefits. In addition to accelerating value creation, new entrants may compete for a greater share of the total value.

Automotive-hardware value chains are expected to diverge based on the type of OEM. Traditional auto manufacturers, along with their value chains, are expected to see a continuation of well-established hardware development roles based on existing capabilities. Automobiles, components, devices, and chips for applications ranging from cars to the cloud may continue to be primarily manufactured by the companies that specialize in them. Nontraditional or up-and-coming automotive players could codevelop vehicle platforms with the established car OEMs and use OEMs' services or contract manufacturers such as Magna Steyr for the traditional portions of the value chain.

Established players may seek to increase their share by expanding their core businesses, moving up the technology stack, or by growing their value chain footprints. For instance, it is within the core business of semiconductor players to create advanced chipsets for automotive OEMs, but they could also capture additional value by providing onboard and edge software systems or by offering software-centric solutions to automotive OEMs. Similarly, to capture additional value, hyperscalers could create end-user services, such as infotainment apps for automotive OEMs or software platforms for contract manufacturers.

As players make strategic moves to improve their position in the market, we can expect two types of player ecosystems to form. In a closed ecosystem, membership is restricted and proprietary standards may be defined by a single player, as is the case with Volkswagen, or by a group of OEMs. Open ecosystems, which any company can join, generally espouse a democratized set of global standards and an evolution toward a common technology stack. In extreme examples, where common interfaces and a truly open standard exist, each player may stay in its lane and focus on its core competencies.

Hybrid ecosystems will also exist. Players following this model are expected to use a mix of open and closed elements on a system-by-system basis. For example, this might be applied to systems in which OEMs and suppliers of a value chain have particular expertise or core competency.

Exhibit 6 describes the advantages and disadvantages of each ecosystem model.

Exhibit 6

Companies in the emerging connected-car value chain develop offerings for five domains: roads and physical infrastructure, vehicles, network, edge, and cloud. For each domain, companies can provide software services, software platforms, or hardware (Exhibit 7).

Exhibit 7

As automotive connectivity advances, we expect a decoupling of hardware and software. This means that hardware and software can develop independently, and each has its own timeline and life cycle. This trend may encourage OEMs and suppliers to define technology standards jointly and could hasten innovation cycles and time to market. Large multinational semiconductor companies have shown that development time can be reduced by up to 40 percent through decoupling and parallelization of hardware and software development. Furthermore, the target architecture that supports this decoupling features a strong middleware layer, providing another opportunity for value creation in the semiconductor sector. This middleware layer will likely be composed of at least two interlinked domain operating systems that handle the decoupling for their respective domains. Decoupling hardware and software, which is a key aspect of innovation in automotive, tilts the ability to differentiate offerings heavily in favor of software.

New opportunities. In the software layer, companies could obtain value in several different ways. With open ecosystems, participants will have broadly adopted interoperability standards with relatively common interfaces. In such cases, companies may remain within their traditional domains. For instance, semiconductor players may focus on producing chipsets for specific customers across the domains and stack layers, OEMs concentrate on car systems, and CSPs specialize in the connectivity layer and perhaps edge infrastructure. Similarly, hyperscalers may capture value in cloud/edge services.

In closed ecosystems, by contrast, companies may define proprietary standards and interfaces to ensure high levels of interoperability with the technologies of their members. For example, OEMs in a closed ecosystem may develop analytics, visualization capabilities, and edge or cloud applications exclusively for their own use, in addition to creating software services and platforms for vehicles. Sources of differentiation for vehicles could include infotainment features with plug-and-play capabilities, autonomous capabilities such as sensor fusion algorithms, and safety features.

While software is a key enabler for innovation, it introduces vulnerabilities that can have costly implications for OEMs, making cybersecurity a priority (see sidebar, The importance of cybersecurity, for more information). Combined, the 5G and edge infrastructure could potentially offer increased flexibility to manage security events related to prevention and response.

Hardware players could leverage their expertise to offer advanced software platforms and services. Nvidia, for instance, has entered the market for advanced driver-assistance systems (ADAS) and is complementing its system-on-a-chip AI design capabilities with a vast range of software offerings that cover the whole automated-driving stack, from OS and middleware to perception and trajectory planning.

Some companies are also moving into different stack layers. Take Huawei, which has traditionally been a network equipment provider, a producer of consumer-grade electrical and electronic (E&E) equipment, and a manufacturer of infrastructure for the edge and cloud. Currently, the company is targeting various vehicle stack layers, including base vehicle operating systems, E&E hardware, automotive-specific E&E, and software and EV platforms. In the future, Huawei may develop vehicles, monitoring sensors, human-machine interfaces, application layers, and software services and platforms for the edge and cloud domains.

Greater automotive connectivity will present semiconductor players and other companies along the automotive value chain with numerous opportunities. In all segments, they may benefit from becoming solution providers, rather than keeping a narrower focus on software, hardware, or other components. As they move ahead and attempt to capture value, companies may benefit from reexamining elements of their core strategy, including their capabilities and product portfolio.

The automotive semiconductor market is one of the most promising subsegments of the global semiconductor industry, along with the Internet of Things and data centers. Semiconductor companies that transform themselves from hardware players to solution providers may find it easier to differentiate their business from the competition's. For instance, they might win customers by developing application software optimized for their system architecture. Semiconductor companies could also find emerging opportunities in the orchestration layer, which may allow them to balance workloads between onboard, cloud, and edge computing.

As semiconductor companies review their current product offerings, they may find that they can expand their software presence and produce more purpose-specific chipssuch as microcontrollers for advanced driver-assistance, smart cockpit, and power-control systemsat scale by leveraging their experience in the automotive industry and in edge and cloud computing. Beyond software, semiconductor companies might find multiple opportunities, including those related to more advanced nodes with higher computing power and chipsets with higher efficiency.

Semiconductor companies can capitalize on their edge and cloud capabilities by building strategic partnerships with hyperscalers and edge players that have a strong focus on automotive use cases.

To improve their capabilities related to purpose-specific chips, semiconductor players would benefit from a better understanding of the needs of OEMs and consumers, as well as the new requirements for specialized silicon.

Tier 1 suppliers could consider concentrating on capabilities that may allow them to become tier 0.5 system integrators with higher stack control points. In another big shift, they could leverage existing capabilities and assets to develop operating systems, ADAS, autonomous driving, and human-machine-interface software for new cars.

To produce the emerging offerings in the automotive-computing ecosystem, tier 1 players might consider recruiting full-stack employees who see the bigger picture and can design products better tuned to end-user expectations. They might also want to think about focusing on low-cost countries and high-volume growth markets with price-differentiated, customized, or lower-specification offerings that have already been tested in high-cost economies.

OEMs could take advantage of 5G and edge disruption by orienting business and partnership models toward as-a-service solutions. They could also leverage their existing assets and capabilities to build closed- or open-ecosystem applications, or focus on high-quality contract manufacturing. Key OEM high-growth offerings could include as-a-service models pertaining to mobility, shared mobility, and batteries. When seeking partnerships with other new and existing value chain players, OEMs need to keep two things in mind: filling talent and capability gaps (for instance, in chip development) and effectively managing diverse portfolios.

CSPs must keep network investments in lockstep with developments in the automotive value chain to ensure sufficient 5G/edge service availability. To this end, they may need to form partnerships with automotive OEMs or hyperscalers that are entering the space. For best results, CSPs should ensure that their core connectivity assets can meet vehicle-to-everything (V2X) use case requirements and create a road map to support highly autonomous driving. Connectivity alone represents a small part of the overall value to CSPs, however, and companies will benefit from expanding their product portfolios to include edge-based infrastructure-as-a-service and platform-as-a-service. Evolving beyond the traditional connectivity core may necessitate organizational structures and operating models that support more agile working environments.

Hyperscalers could gain ground by moving quickly to partner with various value chain players to test and verify priority use cases across domains. They could also form partnerships with industry players to drive automotive-specific standards in their core cloud and emerging edge segment. To determine their full range of potential opportunities, as well as the most attractive ones, hyperscalers should first analyze their existing assets and capabilities, such as their existing cloud infrastructure and services. They would also benefit from aligning their cloud and edge product portfolios or by extending cloud-availability zones to cover leading locations for V2X use case rollouts and real-world testing. If hyperscalers want to increase the footprint of their cloud and edge offerings within the automotive value chain, they could consider a range of partnerships, such as those with OEMs to test and verify use cases.

The benefits of 5G and edge computing are real and fast approaching, but no single player can go it alone. There are opportunities already at scale today that are not clearly addressed in the technological road map of many automotive companies, and not everybody is capturing them.

Building partnerships and ecosystems for bringing a connected car to market and capturing value are crucial, and some semiconductor companies are already forging strong relationships with OEMs and others along the value chain. The ACES trends in the automotive industry are moving fast; semiconductor companies must move quickly to identify opportunities and refine their existing strategies. These efforts will not only help their bottom lines but could also allow tier 1s and OEMs to shorten the time-to-market for their products and services, which would accelerate the adoption of smart vehicles, and that benefits everyone.

See the rest here:

The future of automotive computing: Cloud and edge - McKinsey

Posted in Cloud Computing | Comments Off on The future of automotive computing: Cloud and edge – McKinsey

Cloud Computing Market In Government Sector to grow by USD 25.41 Bn in 2022, Alphabet Inc. and Amazon.com Inc. emerge as Key Contributors to growth -…

Posted: at 12:19 am

NEW YORK, Oct. 6, 2022 /PRNewswire/ -- According to the latest market research report titled "Cloud Computing Market in Government Sector by Product (Hardware, Software, and Services) and Geography (North America, Europe, APAC, South America, and the Middle East and Africa)" from Technavio, the market is expected to increase by USD 25.41 bn. The growth can be mainly attributed to the surging growth of the wellness industry.

Frequently Asked Questions:

The market is fragmented, and the degree of fragmentation will accelerate during the forecast period. Alphabet Inc., Amazon.com Inc., AT&T Inc., Capgemini Service SAS, CGI Inc., Cisco Systems Inc., Citrix Systems Inc., Dell Technologies Inc., Equinix Inc., Fujitsu Ltd., Hewlett Packard Enterprise Co., Informatica LLC, International Business Machines Corp., Lumen Technologies Inc., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and VMware Inc. are some of the major market participants.

The increased cross-functional service, growing demand for cloud computing to decrease IT expenditure, and rising demand for the OPEX model will offer immense growth opportunities. However, increasing operating expenses are likely to pose a challenge for the market vendors.

In a bid to help players strengthen their market foothold, this cloud computing market in the government sector forecast report provides a detailed analysis of the leading market vendors. The report also empowers industry leaders with information on the competitive landscape and insights into the different product offerings of various companies.

Cloud Computing Market in Government Sector Segmentation

Cloud Computing Market in Government Sector Scope

Technavio presents a detailed picture of the market by way of the study, synthesis, and summation of data from multiple sources. The cloud computing market in the government sector report covers the following areas:

This study identifies the rising demand for cloud-based security as one of the prime reasons driving the cloud computing market in the government sector's growth during the next few years.

Cloud Computing Market in Government Sector Key Highlights

Related Reports:

Identity and Access Management Market by End-user, Deployment, and Geography - Forecast and Analysis 2022-2026

Simulation and Analysis Software Market by Deployment, End-user, and Geography - Forecast and Analysis 2022-2026

Cloud Computing Market In Government Sector Scope

Report coverage details:

Page number: 120
Base year: 2021
Forecast period: 2022-2026
Growth momentum & CAGR: Accelerate at a CAGR of 14.04%
Market growth 2022-2026: $25.41 billion
Market structure: Fragmented
YoY growth (%): 13.48
Regional analysis: North America, Europe, APAC, South America, and Middle East and Africa
Performing market contribution: North America at 44%
Key consumer countries: US, China, Japan, UK, and Germany
Competitive landscape: Leading companies, competitive strategies, consumer engagement scope
Companies profiled: Alphabet Inc., Amazon.com Inc., AT&T Inc., Capgemini Service SAS, CGI Inc., Cisco Systems Inc., Citrix Systems Inc., Dell Technologies Inc., Equinix Inc., Fujitsu Ltd., Hewlett Packard Enterprise Co., Informatica LLC, International Business Machines Corp., Lumen Technologies Inc., Microsoft Corp., NEC Corp., NetApp Inc., NTT DATA Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and VMware Inc.
Market dynamics: Parent market analysis, market growth inducers and obstacles, fast-growing and slow-growing segment analysis, COVID-19 impact and future consumer dynamics, and market condition analysis for the forecast period
Customization purview: If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized
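For readers who want to sanity-check compound-growth figures like the CAGR quoted above, the general relation is end = start × (1 + CAGR)^years. A small illustrative helper; the sample numbers are hypothetical, not the report's base values:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

def grow(start: float, rate: float, years: int) -> float:
    """Value after compounding `rate` for `years` periods."""
    return start * (1 + rate) ** years

# Hypothetical base of 100 growing at the report's 14.04% CAGR over the
# four annual steps spanning 2022-2026:
v = grow(100.0, 0.1404, 4)
print(round(v, 1))                    # -> 169.1
print(round(cagr(100.0, v, 4), 4))    # recovers 0.1404
```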

Table of Contents

1 Executive Summary

2 Market Landscape

3 Market Sizing

4 Five Forces Analysis

5 Market Segmentation by Product

6 Customer Landscape

7 Geographic Landscape

8 Drivers, Challenges, and Trends

9 Vendor Landscape

10 Vendor Analysis

11 Appendix

About Us

Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

Contact: Technavio Research, Jesse Maida, Media & Marketing Executive. US: +1 844 364 1100, UK: +44 203 893 3200. Email: [emailprotected]. Website: http://www.technavio.com/

SOURCE Technavio

Read more here:

Cloud Computing Market In Government Sector to grow by USD 25.41 Bn in 2022, Alphabet Inc. and Amazon.com Inc. emerge as Key Contributors to growth -...

Posted in Cloud Computing | Comments Off on Cloud Computing Market In Government Sector to grow by USD 25.41 Bn in 2022, Alphabet Inc. and Amazon.com Inc. emerge as Key Contributors to growth -…

Gitex Global 22: Cloud computing and Metaverse is where UAE's tech spending will be – Gulf News

Posted: at 12:19 am

Dubai: Businesses in the UAE and the Gulf are still spending heavily on their technology needs this year, but you would need to look elsewhere to confirm it. In other words, check out what these businesses had going in the cloud.

That's right, cloud is where the tech spending was most visible, with storing and managing organisational data becoming the most pressing need for organisations. And keeping that data safe, too.

This is also the reason why it seemed that IT spending by corporates was not happening at the same levels as in 2020-21, when the need for remote working forced businesses of all sizes to go in for immediate upgrades to support the transition.

Spending by corporates is on the rise, moving more to cloud-based opex (operating expenditure) rather than traditional capex (capital expenditure) models, said Victoria Mendes, Research Manager for Data & Analytics (META) at the consultancy IDC.


So, it means less of the heavy spending on servers and costly upgrades to their IT systems - unless they are absolutely necessary. The other category that keeps getting the sign-offs from businesses is cyber security.

And tech industry sources say 2023 has all the makings of another solid year in spending. "In the UAE, government organisations are far out in front in making these investments and upgrades necessary for the smart everything," said the general manager of an IT services company in Dubai. "The same can be said only for the biggest private companies here - there's so much that needs to be done by others."

Unless there's a major worldwide recession and its effects are felt by Gulf economies too, the spending on IT will not stop.

This is the backdrop, then, for the opening of the latest Gitex, now branded as Gitex Global, which will delve deep into the fortunes of startups, fintechs, the Metaverse and, of course, the cloud among the key verticals. The 2022 edition starts Monday (October 10) and runs until Friday at Dubai World Trade Centre.

Cloud action

Global heavyweights such as Huawei, Amazon Web Services, Microsoft and Oracle have carved out enough space in the UAE's transition towards the cloud. They have gone operational with cutting-edge data centres, while Khazna, owned by G42 and e& (formerly Etisalat), is the local powerhouse in this space.

Check out these numbers on cloud spend in the UAE, and they give even more evidence of why IDC's Victoria Mendes thinks this is where the IT spending spree will continue.

According to IDC research, about 70 per cent of UAE enterprises are moving from the discovery and piloting phase to significant implementation of business apps in the cloud. Public cloud spending in the UAE is expected to grow 31.6 per cent this year to $1.3 billion, while spending on private clouds continues to rise and hybrid multi-clouds are increasingly the norm for organisations. (Public cloud is where computing resources are owned and operated by the provider and web-shared with multiple tenants.)

Digital transformations - and a chance for VCs

On a parallel track, digital switchover projects will keep tech companies busy. "There will be a lot of focus on online payment services across categories, opening up more chances for fintechs to secure key contracts, funding," said an industry source.

That would be the cue for more action from tech startups in the region. Walid Hanna is the founder and CEO of Middle East Venture Partners (MEVP), the Dubai-based VC firm. Hanna reckons these will be the key trends in the tech startup arena:

* Market penetration (of e-commerce, online payment, etc.) is still quite low compared to the Western markets (at 4x lower).

* The online banking infrastructure is under-developed, especially in the Levant and North Africa, which also gives room for further digitization down the line.

* Presence of a high under-banked population with low options for credits.

* There is still room for clearer regulation, especially in the fintech space. The UAE, Saudi Arabia and Egypt are the pioneers in the region.

* Exit options are increasing, either from an M&A perspective or via listing on stock markets, which was unimaginable three years ago.

- Walid Hanna, MEVP

"We closed a partial exit from Fresha, an online marketplace and SaaS (software-as-a-service) management platform for spas and salons, yielding a 52x cash-on-cash return. The company started in the UAE, serving primarily the US markets. It subsequently expanded globally and moved its HQ to the UK."

And then theres Metaverse

Dubai's digital transformation blueprint has placed heavy emphasis on the Metaverse. The first set of Dubai entities, such as Dubai Airport Free Zone, have already come out with what their presence will be in the parallel universe built around AR.

And Dubai entities' presence on the Metaverse will be based on real-world problem solving and more.

This is how Sultan Ahmed Bin Sulayem, Group Chairman and CEO of DP World, put it just the other day: "We are exploring the usage of the Metaverse across our services, including simulations of warehousing and terminal operations, container and vessel repair inspections, safety training, and other commercial uses. Our customers will now be able to see and understand the whole supply chain from end-to-end with full visibility and take corrective actions in case of logistics bottlenecks."


The cloud and the Metaverse - in IT, that's where the spending will be...

The rest is here:

Gitex Global 22: Cloud computing and Metaverse is where UAE's tech spending will be - Gulf News

Posted in Cloud Computing | Comments Off on Gitex Global 22: Cloud computing and Metaverse is where UAE's tech spending will be – Gulf News

Cloud Automation is Speeding Up Innovation in Pharma and Life Sciences – PR Web

Posted: at 12:19 am


GREENWOOD VILLAGE, Colo. (PRWEB) October 10, 2022

In recent years, the life sciences and pharma industries have experienced an increase in regulatory oversight while also facing pressure to respond to global healthcare crises. This tension between regulation and innovation will only increase as security and data privacy concerns limit the ability of many industries to take advantage of public cloud services.

Looking forward, as technology expands, these industries will continue to see a steady escalation in regulations. By optimizing data sharing and advancing collaboration and innovation, cloud computing can facilitate world-changing discoveries. Herein lies a unique difficulty for those operating within the life sciences and pharma industries: organizations must innovate at the fastest pace possible to compete while still adhering to critical regulatory requirements. As the reliance on technology increases, the pace of R&D in pharma and life sciences is becoming bottlenecked by internal IT processes that are historically unable to compete with the speed of service delivery provided by public clouds.

"In highly regulated sectors, there has been a lot of frustration on the part of teams that feel handcuffed by legacy approaches to IT," said Brad Parks, Chief Product and Marketing Officer at Morpheus Data. "Researchers and scientists need on-demand access to application services, but compliance concerns mean they are not able to simply go to the public cloud like those in other industries."

This is where Morpheus Data provides innovative hybrid cloud management technology that makes it possible for researchers and teams to move faster while still meeting strict regulatory guidelines and process controls. Through a unified orchestration and automation platform, teams get instant on-demand access to the applications they need to advance their research agenda without having to wait on manual IT processes. Individual users can request organizationally approved application services from a simple self-service portal or API, which means they can constantly analyze data sets, iterate, and test again without delay. At the same time, IT and security teams are able to set strict policies around where applications are deployed and how data is accessed. By retaining critical control of organization-wide governance, IT can protect the organization from risk.
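The self-service-with-guardrails workflow described above can be sketched in a few lines. This is purely illustrative logic, not the Morpheus Data API; the catalog entries, zone names, and policy rules are all invented for the example:

```python
# Illustrative sketch of policy-guarded self-service provisioning, in the
# spirit of the workflow described above. Hypothetical names throughout;
# this is NOT the Morpheus Data API.

APPROVED_APPS = {"jupyter", "postgres", "genomics-pipeline"}   # IT-approved catalog
ALLOWED_ZONES = {"on-prem-a", "on-prem-b"}                     # compliance: no public regions

def request_service(app: str, zone: str) -> dict:
    """Validate a researcher's request against IT policy before provisioning."""
    if app not in APPROVED_APPS:
        raise PermissionError(f"{app!r} is not in the approved catalog")
    if zone not in ALLOWED_ZONES:
        raise PermissionError(f"zone {zone!r} violates data-residency policy")
    return {"app": app, "zone": zone, "status": "provisioning"}

print(request_service("jupyter", "on-prem-a")["status"])  # provisioning
```

The key design point mirrors the article: the user gets instant self-service, while IT retains control because every request passes through centrally defined policy checks.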

With the Morpheus hybrid cloud management platform, organizations can get the speed and efficiency of the public cloud within the safety and security of their on-premises datacenter. The platform integrates with the datacenter tools and hypervisors that organizations already have, reducing the number of manual tasks that must be performed each time an application service is provisioned. This helps those operating in highly regulated environments access the data sets and applications they need to perform testing in minutes rather than waiting days, weeks, or months for IT to provide assistance.

One Morpheus Data client, AstraZeneca, cites that a service delivery process that previously took 80 hours per server end-to-end can now be accomplished in 27 minutes, start to finish. AstraZeneca also found they saved over six million dollars in IT overhead by automating their hybrid-cloud estate with Morpheus.
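Taken at face value, the quoted before-and-after figures imply roughly a 178x reduction in end-to-end delivery time:

```python
# Quick arithmetic check on the cited AstraZeneca figures.
before_min = 80 * 60   # 80 hours per server, in minutes
after_min = 27         # new end-to-end time, in minutes

speedup = before_min / after_min
print(round(speedup))  # -> 178
```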

In the future, pharma and life sciences organizations that are highly automated will be poised to be able to innovate the fastest. These teams will be able to process patents, receive approvals, and move through the FDA at speed. Automating and orchestrating private cloud environments will allow these teams to operate efficiently while still adhering to strict security and regulation requirements.

To learn more about how Morpheus can help speed up innovation while reducing risk, download the "Market Insight Report" whitepaper.

About Morpheus Data:Morpheus Data is the leader in hybrid cloud application orchestration, helping hundreds of organizations in life science, pharmaceutical, financial services, and other industries unleash productivity and address IT operations skills gaps through their unified software platform. The Morpheus platform enables self-service provisioning of VMs, Containers, Clusters, and Application Stacks into any private or public cloud while staying within policy guardrails. For more information and to request a personalized platform demo, visit http://www.morpheusdata.com/demo.



Read the original:

Cloud Automation is Speeding Up Innovation in Pharma and Life Sciences - PR Web

Posted in Cloud Computing | Comments Off on Cloud Automation is Speeding Up Innovation in Pharma and Life Sciences – PR Web

Does DigitalOcean Stand a Chance Against the Biggest Cloud Providers? – The Motley Fool

Posted: at 12:19 am

Cloud computing marks the next stage in business analytics, computing resources, and information storage. It will be one of the most significant business innovations over the next decade, which is why many market research companies think the cloud computing market could grow by more than 17% annually to $1.6 trillion by 2030.

With prospects like that, investors may be considering how they can get in on this market shift. However, the top three cloud infrastructure companies, Amazon (Amazon Web Services (AWS)), Microsoft (Azure), and Alphabet (Google Cloud), control about 65% of the total market right now. If you know anything about these companies, it's that cloud computing isn't the largest segment of their operations. As a result, investors may be looking for a dedicated cloud computing company to best take advantage of this trend.

If you fit the description, DigitalOcean Holdings (DOCN -3.79%) may be your stock. It's solely focused on the cloud, which also means it competes with the big players.

Can DigitalOcean survive in this cutthroat environment?

What can users do with cloud computing? The premise is straightforward: a cloud computing provider (be it AWS or DigitalOcean) has data centers around the world with computational power. Customers can sign up to use these computing resources to support an app, store data, or host a website.

Companies of all sizes can find this useful, but the problem is the big three don't want to be spending their time on mom-and-pop start-up businesses. They're after big customers that bring sizable contracts. That's not to say AWS, Azure, or Google Cloud can't be used for this purpose; they're just not optimized for it and are likely more expensive. Providing cloud infrastructure to these smaller businesses is where DigitalOcean's niche is.

DigitalOcean's targets are small businesses and developers, and it emphasizes its simplicity, customer support, and open-source platform, making this solution ideal for its customer base. To back up its affordability claim, here's how it compares to the big three:

Data source: DigitalOcean.

That's some extreme value compared to the other three, which is why DigitalOcean has 105,000 customers paying at least $50 per month.

However, DigitalOcean points out that there will be 43 million developers by 2025, and more than 100 million small businesses exist globally, with 14 million more being added each year. Not every small business needs to use cloud computing. But many could benefit by using DigitalOcean's product, and relatively few have utilized its services so far.

DigitalOcean has identified a large niche and is doing a great job of catering to its core customer base. But how are its financials?

Despite general economic uncertainty, cloud computing is an area where businesses continue to spend. This trend was reflected in Amazon's and Alphabet's most recent quarterly results, with AWS sales growing 33% year over year to $19.7 billion and Google Cloud rising 36% year over year to $6.3 billion. (Microsoft doesn't individually break out Azure sales, so it was excluded from this comparison). DigitalOcean's entire business is cloud computing, so investors don't need to look at individual segments. Its revenue was up 29% year over year to $134 million, with annual run rate revenue up 28% from the prior-year period to $544 million.

DigitalOcean may never reach its competitors' run rates, but remember, the larger companies cater to massive enterprise customers, while DigitalOcean focuses on small businesses.

For the full year, DigitalOcean's revenue is expected to grow 32% to $566 million, with free cash flow coming in at $54 million -- a 9.5% margin. Next year, analysts expect DigitalOcean to maintain its growth, and they project 31.7% sales growth on average.

With all that in mind, I think DigitalOcean can survive in its niche. As long as the company stays true to its mission, investors shouldn't have anything to worry about. Moreover, with the company trading at a mere 8.7 times sales, I think it's also a solid buy.
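The valuation figures quoted above can be cross-checked with simple arithmetic. The implied market capitalization here is a derived illustration, not a number from the article:

```python
# Sanity-check the quoted DigitalOcean figures.
revenue_m = 566.0   # expected full-year revenue, $M (quoted)
fcf_m = 54.0        # expected free cash flow, $M (quoted)
ps_ratio = 8.7      # quoted price-to-sales multiple

fcf_margin = fcf_m / revenue_m          # should match the quoted 9.5%
implied_mcap_m = ps_ratio * revenue_m   # illustrative: what 8.7x sales implies, $M

print(f"{fcf_margin:.1%}")    # -> 9.5%
print(round(implied_mcap_m))  # -> 4924
```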

Cloud computing has massive benefits and doesn't need to be solely reserved for the largest businesses. DigitalOcean is there to ensure customers of all sizes are taken care of. As one of few pure-play cloud computing investments, it could generate massive shareholder returns as this space matures over the next decade.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Keithen Drury has positions in Alphabet (C shares) and Amazon. The Motley Fool has positions in and recommends Alphabet (A shares), Alphabet (C shares), Amazon, DigitalOcean Holdings, Inc., and Microsoft. The Motley Fool has a disclosure policy.

Originally posted here:

Does DigitalOcean Stand a Chance Against the Biggest Cloud Providers? - The Motley Fool

Posted in Cloud Computing | Comments Off on Does DigitalOcean Stand a Chance Against the Biggest Cloud Providers? – The Motley Fool

What is cloud storage? A comprehensive evaluation of data-as-a-service – TechRadar

Posted: at 12:19 am

Cloud storage remains one of the more confusing tech terms around; nearly a quarter of a million queries on "what is cloud storage" were carried out globally over the past 12 months according to Google, a 21% rise over the same period the year before.

At its simplest, cloud storage is disk space, usually in a data center, which you can access to save or retrieve files. That space is usually owned and operated by what is commonly called a hyperscaler (e.g. Google, Facebook, Apple, Microsoft, Amazon, Tencent or Alibaba).

That is an important point: the space is owned by them, not you; you are only renting it (as you would a condominium or a flat). In other words, you are leasing a hard drive, SSD or tape (or a portion of it) in the cloud. For the purpose of this article, we will narrow our focus to user-friendly cloud storage where one can store digital data.

If you are of a certain age, you may remember buying audio CDs off the shelves of supermarkets; you would own the CD and you'd be able to do whatever you wanted with it. Then came Spotify and everything changed, including the fact that you'd be paying a monthly fee and essentially get access to the biggest music library that ever existed. No need to buy CDs or carry your collection around.

Cloud storage is essentially the same, except that you usually pay based on usage (Backblaze being one notable exception). You can store as much data as you want, access it anytime, and not have to worry about the things that come with owning a storage device: theft, aftersales, power consumption, incidents/accidents and the subsequent data recovery, and so on.
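The rent-versus-own trade-off described above is easy to quantify as a break-even point. All prices in this sketch are hypothetical, chosen only to show the shape of the calculation:

```python
# Toy break-even: renting cloud storage vs buying a drive outright.
# All figures are hypothetical assumptions, not real vendor pricing.
drive_cost = 120.0   # hypothetical one-off cost of an external drive
cloud_rate = 0.005   # hypothetical usage-based rate, $ per GB-month
data_gb = 500        # how much you actually store

monthly_cloud = data_gb * cloud_rate
breakeven_months = drive_cost / monthly_cloud

print(monthly_cloud)            # -> 2.5 (dollars per month)
print(round(breakeven_months))  # -> 48 (months before owning is cheaper)
```

Of course, as the article notes, the raw price ignores what ownership really costs you: failure risk, power, and recovery after an accident.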

You can look at your own phone or computer as a comparison. At its simplest, a computer does three basic things to data: transmit it, store it or compute it; cloud storage is therefore a subset of cloud computing, only remotely. Cloud computing is used generically to describe any sort of on-demand computer-related service that can be done by a service provider, usually as a subscription. Check out our more comprehensive write up on Cloud Computing.

Amazon was the first to launch cloud computing at scale with a cloud storage service, almost 17 years ago. S3 (Simple Storage Service) is an object-based storage service, which differs from a typical business or consumer cloud storage service; it spawned a dozen or so other storage-based services on AWS. AWS has grown into a massive, sprawling ecosystem with 25 categories as of October 2022 and more than 200 services.

Amazon's definition of cloud storage talks to a business audience but applies to consumers as well: "It is a cloud computing model that stores data on the Internet through a cloud computing provider who manages and operates data storage as a service. It's delivered on demand with just-in-time capacity and costs, and eliminates buying and managing your own data storage infrastructure. This gives you agility, global scale and durability, with anytime, anywhere data access."

As mentioned earlier your data is most likely to be located in a data center, either owned by a hyperscaler or by a data center operator. This is where data is stored in the cloud. A service provider like iDrive will rent space and amenities (lights, electricity, cooling) and deploy its own cloud storage services, in a way similar to dedicated server hosting, colocation providers or bare metal hosting.

The best cloud storage providers will save multiple copies of your data across multiple data centers to mitigate the risk of your data being lost should there be a catastrophic event (e.g. tsunami, typhoon, earthquake, war or even fire). That does happen, even to the biggest players, and can have a long-lasting negative impact on businesses.
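The durability benefit of keeping multiple copies follows from basic probability: if each copy is lost with probability p in a given period, all n copies are lost with probability p^n (assuming the copies fail independently, which providers approximate by spreading them across distant data centers). The numbers below are illustrative only:

```python
# If each of n independent copies has probability p of being lost in a
# given period, losing ALL of them has probability p**n.
# The failure probabilities are illustrative assumptions.

def loss_probability(p_single: float, copies: int) -> float:
    return p_single ** copies

print(loss_probability(0.01, 1))  # 0.01 -> a 1-in-100 chance with one copy
print(loss_probability(0.01, 3))  # ~1e-06 -> a million-to-one with three copies
```

This is why a service that mirrors your data across thousands of devices, as the decentralized providers mentioned below do, can make total loss virtually impossible.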

Others go even further by storing your data on thousands of devices, creating a large number of mirrors that make losing your entire data virtually impossible. Cubbit and Storj are two of the most well-known proponents of this radically different approach to decentralized cloud storage, one which embraces peer-to-peer technology, the same philosophy behind torrenting and bittorrent.

Not only do hyperscalers have their own cloud storage services (Google Drive, Amazon Drive, Apple iCloud, OneDrive, Terabox), they also host other cloud storage services on behalf of other firms. For example, Dropbox is one of Amazon Web Services' biggest customers. Setting up your own cloud storage provider is so easy we've got a tutorial for that.

As such, there are dozens of services that offer some form of cloud storage: you might see them described as online backup, cloud backup, online drives, file hosting and more, but essentially they're still cloud storage with custom apps or web consoles to add some extra features. As of October 2022, we've reviewed around 50 of them, which is probably a quarter of the total number of providers in the market.

You won't have to look far to find your nearest cloud storage service, though, because there's a very good chance you have access to one already. Facebook and Twitter provide free cloud storage when they allow users to store photos and videos on their servers, for instance, while even the most basic free Google account gets you 15GB of cloud storage space via the Google Drive app.

Saving your data to the cloud protects you from all kinds of data disasters. Whether it's a dead hard drive, a lost laptop or a ransomware attack, having your files out of harm's way means you'll avoid a whole lot of pain.

Sharing files via the cloud is safer and easier than many alternatives. Send something by email or copy it to a USB key, and your data doesn't have much protection beyond wishful thinking ('no one else has access to that email account, right?'). Cloud storage providers usually encrypt files from the moment they leave your device, then give you a range of secure ways to share them with others.
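The encrypt-before-upload flow mentioned above can be illustrated with a short sketch. The keystream cipher below is a teaching toy only: a real client would use a vetted library and an authenticated mode such as AES-GCM, and you should never roll your own cryptography in production.

```python
# Toy illustration of client-side encryption before upload.
# NOT real crypto -- demonstration of the flow only.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (toy KDF)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR with the same keystream is its own inverse

key = secrets.token_bytes(32)          # stays on your device
ciphertext = encrypt(key, b"quarterly-report.xlsx contents")
# Only `ciphertext` is uploaded; the provider never sees the key.
assert decrypt(key, ciphertext) == b"quarterly-report.xlsx contents"
```

The point is the key stays with you: what travels to, and rests on, the provider's servers is unreadable without it.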

Many services allow you to access files directly from your storage, without downloading them first. You might be able to stream a huge video from the cloud, for instance. You can often collaborate on files with others, perhaps with two people editing a document at the same time.

Storing files in the cloud gives them real protection from damage, too. Accidentally deleted something? You'll usually find it in the Recycle Bin. Made a big mistake in the last few edits? You can often restore any previous version of the document from the last 30 days, and sometimes more - a real life-saver.
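The version-history safety net described above boils down to keeping every save as a timestamped snapshot. Below is a minimal sketch under that assumption; the file name and in-memory history store are illustrative, not any provider's actual API.

```python
# Toy sketch of file versioning: every save appends a snapshot,
# and "restore" re-saves an earlier one as the newest version.
import time

history = {}  # filename -> list of (timestamp, contents)

def save(name: str, contents: bytes):
    history.setdefault(name, []).append((time.time(), contents))

def restore(name: str, version_index: int) -> bytes:
    """Roll back by re-saving an older snapshot as the latest."""
    _, contents = history[name][version_index]
    save(name, contents)
    return contents

save("report.txt", b"draft 1")
save("report.txt", b"draft 2 with a big mistake")
restore("report.txt", 0)               # undo: back to draft 1
print(history["report.txt"][-1][1])    # latest version is draft 1 again
```

Real services add a retention window (the 30 days mentioned above) by pruning snapshots older than the cutoff, but the restore logic is the same.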

It's important to know a cloud storage service can be trusted with your files, so most providers go to a lot of trouble to make sure they're safe. They'll upload and download files via a secure encrypted connection, for instance. Maximum-security data centers ensure no unauthorized person gets access to their servers, and even if someone did break in, leading-edge encryption prevents an attacker from viewing your data.

While cost is an obvious factor to consider when choosing a cloud storage provider, it is secondary to trust. After all, what's the point of saving a few dollars a month if you can't be 100% sure that your data is safe, untampered with and 100% private? The best secure cloud storage provider is likely to be one of the bigger players in the cloud storage market.

Fran Villalba Segarra, CEO of cloud storage company Internxt, wrote extensively on how cloud storage works and on whether cloud storage is safe, secure and private, and we found out that the term 'cloud storage' actually dates back to 1896, when it was used to describe just that: how to store clouds. Andrew Martin, UK MD for Egnyte, a business cloud storage provider, investigated the pros and cons of on-premise vs cloud storage setups. Oh, and before we forget, make sure you check out our comparison, Cloud storage vs Cloud backup vs Cloud sync, written by Jay El-Anis from UK cloud storage provider Zoolz.

See the original post:

What is cloud storage? A comprehensive evaluation of data-as-a-service - TechRadar