Arm puts virtual hardware in the cloud so you won’t have to wait for the actual chips – The Register

Arm is putting virtual models of its chip designs in the cloud so developers can write and test applications before the physical hardware gets into their hands.

The Arm Virtual Hardware offering is part of a new product portfolio called "Arm Total Solutions for IoT." Cringe-worthy marketing jargon aside, Arm wants to give developers a head start in coding for Internet of Things applications, like cars, robots and refrigerators.

Here's how it works.

Arm licenses chip designs and intellectual property for chips used in everything from battery-powered gadgets to cars and servers. Once Arm releases the building blocks for chips to silicon partners, it will also make a virtual representation of the chip stack available to developers in the cloud.

Developers can then write, test, and debug applications on the simulated hardware. Historically, everything happened in sequence: Arm released chip design IP to silicon providers, and development of apps could not begin until roughly three years later.

Now, chip design and software development can happen almost in parallel, Mohamed Awad, vice president of IoT and Embedded at Arm, told The Register.

"It represents a new way for software developers to innovate and develop for all those diverse devices, but they can do so in the cloud without hardware," Awad said.

This is the first time Arm is offering virtual hardware, and it'll initially be for IoT, Awad said.

The Virtual Hardware will initially be available for the Corstone-300 subsystem from Arm SoC partners, incorporating the Arm Cortex-M55 AI processor and Arm Ethos-U55 microNPU.

Awad declined to say whether something similar would be offered for mobile chip designs, but he explained why IoT was the natural place to start.

The overwhelming number and diversity of IoT chips make it costly and challenging to test and deploy software, so virtual hardware gives developers a more practical target to program against. Mobile phones, by contrast, replicate a handful of chip designs across many devices.

Testing software on virtual hardware isn't new; flight simulators and virtual wind tunnels are long-standing engineering examples.

Arm is leaning on DevOps, the iterative development methodology, so that developers can track performance improvements and code quality, and build confidence in their code across a range of devices, all while the chip itself is still being designed. The iterative, collaborative DevOps approach is used by Amazon, Facebook and Google to quickly deploy code and test new features in their products.

"Arm Virtual Hardware allows them to do that in the cloud ... as opposed to what they had to do before which was just have a massive hardware farm and run flash on those devices every time they had to make the code change," Awad said.

Amazon has used Arm Virtual Hardware to test Alexa features on a myriad of devices, Awad said. The company gave its wake-word recognition software to multiple vendors for use in devices like fridges and thermostats, and used the virtual hardware to test the code and its performance without deploying hundreds of physical units.

The company also announced Project Centauri as part of Arm Total Solutions for IoT, an effort to establish a common language through which devices, chips and cloud services can interface and talk.

Microsoft makes its VS Code tool available directly in the browser – ZDNet

Microsoft made available on October 20 a preview version of its Visual Studio Code (VS Code) tool for the Web. VS Code for the Web enables developers to use a lighter-weight version of VS Code directly in the browser without having to install it on their PCs.

By going to https://vscode.dev, users can get a version of VS Code that works in their browsers. Officials are calling it a "zero-installation local development tool."

Microsoft officials suggested several scenarios where people might want VS Code for the Web. Among them: local file viewing and editing for taking notes quickly and previewing Markdown; building client-side HTML, JavaScript and CSS applications in conjunction with the browser's tools for debugging; editing code on machines where it's not easy to install VS Code, such as Chromebooks; and even developing on iPads.

The File System Access API is supported in Microsoft Edge and Google Chrome so far, letting the editor work with files directly on disk. In browsers that don't yet support it, users can still open individual files by uploading and downloading them via the browser.
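The feature detection a web app needs here is straightforward. Below is a rough sketch of the check-then-fallback pattern: showOpenFilePicker is the real browser entry point, while the wrapper function is our own illustration, not code from vscode.dev.

```typescript
// Use the File System Access API where available (Edge, Chrome), and fall
// back to a one-shot <input type="file"> upload everywhere else.
async function openLocalFile(): Promise<File> {
  const picker = (window as any).showOpenFilePicker;
  if (typeof picker === "function") {
    // Supported: the handle can also be used to write changes back to disk.
    const [handle] = await picker.call(window);
    return handle.getFile();
  }
  // Not supported (e.g. Firefox, Safari): ask the user to upload the file.
  return new Promise<File>((resolve, reject) => {
    const input = document.createElement("input");
    input.type = "file";
    input.onchange = () => {
      const file = input.files?.[0];
      if (file) {
        resolve(file);
      } else {
        reject(new Error("No file selected"));
      }
    };
    input.click();
  });
}
```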

"Bringing VS Code to the browser is the realization of the original vision for the product. It is also the start of a completely new one. An ephemeral editor that is available to anyone with a browser and an internet connection is the foundation for a future where we can truly edit anything from anywhere," Microsoft's blog post concluded.

After more than a decade of development, South Korea has a near miss with Nuri rocket test – The Register

South Korea today came close to joining the small club of nations that can build and launch their own orbital-class rockets, with its maiden attempt blasting off successfully then failing to deploy its payload.

At 5pm local time (UTC+9), the rocket, named Nuri, or KSLV-II, left its launchpad at Naro Space Center, destined for low-Earth orbit with a 1.5-ton dummy payload. But while all three stages of the Korea Space Launch Vehicle II worked and the initial payload separation was fine, the dummy satellite was not placed into orbit as planned.

It wasn't immediately clear what went wrong, although South Korean President Moon Jae-in, speaking from the Naro spaceport, said the payload did not stabilize in orbit after separation. It appears the rocket's third-stage engine stopped running after 475 seconds, about 50 seconds earlier than planned, leading to the failed deployment.

According to Reuters, Moon said of the partly failed test: "It's not long before we'll be able to launch it exactly into the target trajectory."

The mission was scheduled for 4pm local time but was delayed an hour for a valve check and wind assessment. Korea Aerospace Research Institute (KARI) had pegged the mission's chances of success at 30 per cent.

South Korea would have been the seventh nation to launch its own rocket carrying over a tonne of payload into space, following in the footsteps of Russia, the US, France, China, Japan, and India.

The launch represents 11 years of work by the likes of KARI and approximately 300 private companies. In 2013, South Korea launched its first space rocket, Naro (KSLV-I), with some help from Russian technology, though the program endured several delays and two failed launches before eventually succeeding.

South Korea has lagged in its space endeavors, partly due to a 1979 Cold War-era agreement with the US that limited the country's ability to develop and test ballistic missiles of significant range. Those restrictions were relaxed in 2020, freeing South Korea to use solid rocket motors and enabling a fuller space program.

Meanwhile, countries like China and Japan have developed their own space programs, leaving South Korea with some catching up to do in both military and civilian capacities.

A successful program could help South Korea get a foothold in 6G and keep tabs on North Korea, which has a nuclear weapons program. South Korea does not, although politicians and officials have pushed for one and even implied in the past that Nuri could be a nuclear weapon precursor.

The three-stage rocket consists of about 370,000 parts, is just over 47 metres long, and has six liquid-fueled engines.

The first stage uses four clustered 75-ton-thrust engines and separates at an altitude of 50km. The second stage uses a single 75-ton-thrust engine and separates at 240km. The third stage uses a seven-ton-thrust engine to carry the payload to its final destination, an orbit between 600 and 800km.

KARI plans to follow up by sending a 200kg satellite into low-Earth orbit by May 2022. A lunar orbiter is slated for August 2022, with hopes of sending a spacecraft to the Moon by 2030.

Unvaccinated and working at Apple? Prepare for COVID-19 testing ‘every time’ you step in the office – The Register

Apple will require unvaccinated workers to get tested for COVID-19 every time they come into the office for work, starting from November 1.

Employees have been told to declare whether they've been vaccinated or not by October 24, Bloomberg reported this week. Staff who choose not to disclose their vaccination status will be subjected to COVID-19 testing whenever they enter the office, it's said.

The iGiant has again and again pushed back the date it wants its staff to return to their desks as the coronavirus continues romping around the planet. Although it hoped workers could go back to their campuses this autumn, now the plan is to get them working at least three days a week at their office desks from some time in January 2022.

So far this concerns office workers. The rules are a little different for people manning the Apple Stores. Many have already returned to the physical retail shops, and those who remain unvaccinated will be required to be tested for COVID-19 twice a week.

Rapid do-it-yourself test kits will be made available for employees in Apple offices and retail stores, and, of course, there's an app for that, on which staff will self-report their status.

Apple's latest safety protocols to tackle the bio-nasty stop short of a vaccine mandate for all employees. Other tech companies have been more stringent. Google, Facebook, and IBM, for example, have made it clear staff must be fully vaccinated before they can go back to their work campuses.

Big Blue is more hardcore, and has said it will suspend unvaccinated employees without pay from December 9, regardless of whether they're continuing to work from home or not. IBM said it has to follow rules set by President Joe Biden, who signed an executive order stating that federal contractors and subcontractors must be vaccinated.

The Register has asked Apple for comment.

Boeing’s Starliner capsule corroded due to high humidity levels, NASA explains, and the spaceship won’t fly this year – The Register

Boeing's CST-100 Starliner capsule, designed to carry astronauts to and from the International Space Station, will not fly until the first half of next year at the earliest, as the manufacturing giant continues to tackle an issue with the spacecraft's valves.

Things have not gone smoothly for Boeing. Its Starliner program has suffered numerous setbacks and delays. As recently as August, a second uncrewed test flight was scrapped after 13 of 24 valves in the spacecraft's propulsion system jammed. In a briefing this week, Michelle Parker, chief engineer of space and launch at Boeing, shed more light on the errant components.

Boeing believes the valves malfunctioned due to weather issues, we were told. Florida, home to NASA's Kennedy Space Center where the Starliner is being assembled and tested, is known for hot, humid summers. Parker explained that chemicals from the spacecraft's oxidizer reacted with water condensation inside the valves to form nitric acid. The acid corroded the valves, causing them to stick.

Engineers managed to free nine of the 13 faulty valves, but four remained stuck. The capsule was returned to the factory, where two valves have been removed and handed to NASA for further analysis, with a third on the way. Boeing said it will not resume flight tests of its CST-100 Starliner module until the first half of next year.

NASA astronauts Nicole Mann and Josh Cassada, who were expected to fly aboard Boeing's first official crewed flight for its Starliner-1 mission, will now hitch a ride to the ISS as part of Crew-5, a SpaceX mission in the second half of 2022.

"NASA decided it was important to make these reassignments to allow Boeing time to complete the development of Starliner," the US agency previously said, "while continuing plans for astronauts to gain spaceflight experience for the future needs of the agency's missions."

Veteran astronauts Butch Wilmore and Mike Fincke remain assigned to fly on Starliner. An official launch date hasn't been set; it will depend on whether Boeing has fixed its valve issue and successfully pulled off an uncrewed orbital test flight, among other NASA-mandated checks.

No swearing or off-brand comments: AWS touts auto-moderation messaging API – The Register

AWS has introduced channel flows to its Chime messaging and videoconferencing API, the idea being to enable automatic moderation of profanity or content that "does not fit" the corporate brand.

Although Amazon Chime has a relatively small share of the crowded videoconferencing market, the Chime SDK is convenient for developers building applications that include videoconferencing or messaging, competing with SDKs and services from the likes of Twilio or Microsoft's Azure Communication Services. In other words, this is aimed mainly at corporate developers building applications or websites that include real-time messaging, audio, or videoconferencing.

The new feature is for real-time text chat rather than video and is called messaging channel flows. It enables developers to create code that intercepts and processes messages before they are delivered. The assumption is that this processing code will run on AWS Lambda, its serverless platform.

A post by Chime software engineer Manasi Surve explains the thinking behind the feature in more detail. It is all about moderation, and Surve describes how to "configure a channel flow that removes profanity and certain personally identifiable information (PII) such as a social security number."
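In practice, a channel flow is a Lambda function that receives each pending message and calls back into the Chime messaging SDK with the, possibly rewritten, content. Here's a minimal sketch that masks anything shaped like a US social security number; the event field names reflect our reading of the Chime SDK and should be treated as approximate.

```typescript
import {
  ChimeSDKMessagingClient,
  ChannelFlowCallbackCommand,
} from "@aws-sdk/client-chime-sdk-messaging";

const chime = new ChimeSDKMessagingClient({});
const SSN = /\b\d{3}-\d{2}-\d{4}\b/g; // naive SSN matcher, for illustration only

// Invoked by the channel flow for each message before it is delivered.
export const handler = async (event: any): Promise<void> => {
  const msg = event.ChannelMessage;
  const cleaned = (msg.Content ?? "").replace(SSN, "***-**-****");

  // Hand the (possibly rewritten) message back so delivery can proceed.
  // Setting DeleteResource to true would drop the message entirely instead.
  await chime.send(
    new ChannelFlowCallbackCommand({
      CallbackId: event.CallbackId,
      ChannelArn: msg.ChannelArn,
      DeleteResource: false,
      ChannelMessage: { MessageId: msg.MessageId, Content: cleaned },
    }),
  );
};
```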

She further explains that corporations need to prevent accidental sharing of sensitive information, and that social applications need to "enforce community guidelines" as well as avoid "content shared by users that does not fit their brand." A previous approach to the same problem worked only after the message had been posted, which is too late in many scenarios.

It is telling that Surve observes that "human moderation requires significant human effort and does not scale."

"Automate everything" is a defining characteristic of today's cloud giants, even though automated moderation has not always been successful.

Surve said: "Amazon Comprehend helps remove many of the challenges." Comprehend is AWS's natural language processing service, able, when suitably trained, to detect "key phrases, entities and sentiment" and trigger further actions.

The simple example presented by Surve does not use Comprehend for profanity but "simply a banned word list," though she adds that "you can also use Comprehend for profanity, but you will need to train your own model." Comprehend is used for detecting a social security number.
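The Comprehend side of that example boils down to a single API call. A sketch, with the Chime plumbing above omitted and the masking helper our own invention:

```typescript
import {
  ComprehendClient,
  DetectPiiEntitiesCommand,
} from "@aws-sdk/client-comprehend";

const comprehend = new ComprehendClient({});

// Mask any spans Comprehend classifies as a US social security number.
async function maskSocialSecurityNumbers(text: string): Promise<string> {
  const { Entities = [] } = await comprehend.send(
    new DetectPiiEntitiesCommand({ Text: text, LanguageCode: "en" }),
  );
  // Work backwards through the matches so earlier offsets stay valid.
  const ssns = Entities.filter((e) => e.Type === "SSN").sort(
    (a, b) => (b.BeginOffset ?? 0) - (a.BeginOffset ?? 0),
  );
  let masked = text;
  for (const e of ssns) {
    masked = masked.slice(0, e.BeginOffset) + "[redacted]" + masked.slice(e.EndOffset);
  }
  return masked;
}
```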

Users are skilled in getting around automated filters and we suspect that training Comprehend to sanitise every kind of profanity or off-brand message a user could devise will be challenging.

There are other possible use cases for channel flows: looking up a support article automatically in order to show the user a link, sending an alert, or analysing sentiment. In these cases, though, it may not matter so much whether the processing takes place before or after a message is sent to others in the same channel.

Microsoft emits more Win 11 fixes for AMD speed issues and death by PowerShell bug – The Register

Microsoft has released a build of Windows 11 that it claims addresses performance problems the new OS imposed on some systems.

Redmond's announcement of OS Build 22000.282 lists over 60 "improvements and fixes" on top of a lucky 13 "highlights".

One of those highlights is described as fixing "an issue that causes some applications to run slower than usual after you upgrade to Windows 11 (original release)".

Another addresses an issue that could cause Bluetooth mice and keyboards "to respond slower than expected". A third "improves the time estimate for how long you might wait to use your device after it restarts".

Some of the improvements and fixes offer meatier fare among them addressing "an L3 caching issue that might affect performance in some applications on devices that have AMD Ryzen processors after upgrading to Windows 11 (original release)".

AMD users have, quite reasonably, been rather miffed at being singled out, and more miffed still that their concerns weren't addressed in the first bundle of Win 11 fixes issued last week.

Another fix prevents PowerShell from eating a PC alive by creating an infinite number of child directories. "This issue occurs when you use the PowerShell Move-Item command to move a directory to one of its children. As a result, the volume fills up and the system stops responding," Microsoft explained.

If Server Manager has disappeared while you use Windows 11, Microsoft has found the cause for its absence: silly you, for installing Server Manager using the Remote Server Administration Tools and then using it to remove some features from Hyper-V.

Distorted fonts for Asian alphabets have been clarified, Microsoft Office has been restored to operability after Windows Defender Exploit Protection prevented it from running "on machines that have certain processors," and an issue that could prevent successful printer installation with Internet Printing Protocol has been erased.

Microsoft's Windows teams appear to be rather busy. On the same day as the new Windows 11 fixes were delivered, the IT giant also announced the all-but-final cut of Windows 10 it will use for the Windows 10 November 2021 update.

"We believe that Build 19044.1288 is the final build for the November 2021 Update," wrote Brandon LeBlanc, a senior manager on the Windows Insider Program.

Insiders can get their hands on the November update in the Release Preview Channel on Windows 10 via Microsoft's "seeker" experience in Windows Update.

"This means Insiders currently on Windows 10, version 21H1 (or lower) in the Release Preview Channel will need to go to Settings > Update & Security > Windows Update and choose to download and install Windows 10, version 21H2," LeBlanc explained.

Microsoft previously teased a modest set of additions to Windows 10 in this update, headlined by Wi-Fi security improvements and GPU compute support in the Windows Subsystem for Linux (WSL) and Azure IoT Edge for Linux on Windows (EFLOW) environments.

Another major feature the 'softies previously promised would appear in the update, a Windows Hello for Business deployment method called "cloud trust", has dropped out of the release.

LeBlanc described it as "still under development" and now due to appear "in a future monthly update to the November 2021 Update".

We will provide more information as this feature gets closer to availability. Information on exactly when the 21H2 update will make its mainstream debut is also in the "coming-real-soon-now-we-promise" bucket.

New Relic CodeStream plugs observability directly into developer workflows in the IDE – Diginomica

(New Relic CodeStream screenshot)

New Relic yesterday brought observability directly into developer workflows with the acquisition of CodeStream, a collaboration tool that sits inside popular Integrated Development Environments (IDEs) and also integrates with a range of other tools, from GitHub to Jira and Slack. A new version of CodeStream that integrates with the New Relic One observability platform can surface application telemetry data directly within the IDE, at the relevant point in the code, so that developers can instantly get to work on resolving issues. We spoke to Peter Pezaris, CodeStream CEO and Founder, and Buddy Brewer, GVP and GM, New Relic, to find out more about the news.

The company believes the new product, along with a new pricing tier, will broaden New Relic's appeal among developers whose primary role is coding, rather than the traditional users of monitoring and observability tools, who focus on operational uptime and reliability. Customers will also get extra value from their investment in collecting and analyzing telemetry data, as Brewer explains:

Especially with the shift to cloud, and decomposing monoliths into the containerization of software, there is so much telemetry out there, that it's a massive investment that companies are making. The value of that investment can extend so much further than just utilizing that data when the software is on fire ...

By bringing that telemetry data into the IDE (I think there's like 14 IDEs that are supported by CodeStream) you can incorporate this data into your IDE. It allows those developers to access that data when they're planning and they're building software, not just when they're running it. We think that'll help them get a lot more value out of the investment in telemetry and also help them build better software.

One of the most compelling examples of how the newly integrated product blurs the line between building code and running it in production is the integration with New Relic Errors Inbox. This recently introduced and already popular capability provides a single location for viewing and dealing with errors from anywhere across the application stack, with the ability to drill down into the stack trace. But as Pezaris points out, for all its convenience, what do you do then? There's a button to copy the stack trace to a clipboard, but it's up to you to work out where to go next. With the CodeStream integration, all of those next steps are done for you automatically. He explains:

Now you'll be able to click on that stack trace, and you'll open it up right in your IDE. We'll check you out to the right repo. We'll open up the build SHA that was used to build that production artefact. And now that stack trace becomes clickable. So you can click through to all the stack frames, go right to the parts of the code where the problem's happened.

Because CodeStream's good at collaboration, every time you do that, we'll tell you who wrote that code. You can easily bring them into the conversation, swarm the issue between production and non-production engineers, and get to the root cause of the problem faster. We'll also keep track of who's been assigned which errors. So now every time you open up your IDE, you get to see all the errors that are assigned to you, so you can investigate and solve those problems.

Another feature uses the integration with Pixie, the technology acquired by New Relic last year, which automatically collects telemetry data from deep inside application code running in Kubernetes clusters. With dynamic logging, a developer who wants to instrument a particular function in their source code to see how it's running in production can invoke Pixie directly from their code editor to insert a probe and start logging straight away. Pezaris explains:

You can right-click on it and say, 'Start instrumenting that thing.' And then you will get feedback in real time in your editor, without having to do a deployment. You don't have to commit code, you don't have to change code, you don't have to push code. Just immediately, you'll start to get logging back for every call to that function.

As part of the announcement, New Relic highlighted its partnership with Microsoft around CodeStream, which works with the VS Code and Visual Studio IDEs, and also integrates with GitHub and Teams. Commenting in a joint press release, Scott Guthrie, Executive Vice President, Cloud + AI, Microsoft, welcomed the news of the acquisition, stating:

Tighter collaboration between development projects and improved connections between existing applications are just some of the benefits New Relic CodeStream will provide to the developer community.

To help encourage adoption, there will be a new Core user pricing option for New Relic One, starting at $49 per user per month, plus usage charges based on the volume of telemetry data. Developers can sign up for a free account for a preview period ending in January, after which some features will require a paid license.

Joining forces with CodeStream (effectively a Disrupt for coders, but with a lot of the integration chops of a Slack built in too) is a huge step up for New Relic in furthering its mission of data-driven software, which was a big theme of its FutureStack conference earlier this year. It helps to close the loop on ensuring that the people building software have as much access to telemetry about how it runs in production as those charged with running it. That can only improve quality and performance.

It's also a great illustration of why collaboration inside the applications where people spend much of their working day is just as important and valuable as collaboration around them in applications such as Teams and Slack. It highlights the value of integration that automates the futzing around that otherwise wastes people's time as they move information from one place to another and put it in the right context before getting to work on it. When I write about Frictionless Enterprise, this is exactly the kind of thing I'm talking about.

Huawei appears to have quenched its thirst for power in favour of more efficient 5G – The Register

MBB Forum 2021 The "G" in 5G stands for Green, if the hours of keynotes at the Mobile Broadband Forum in Dubai are to be believed.

Run by Huawei, the forum mixed an in-person event with talking heads over occasionally grainy video. It kicked off with an admission by Ken Hu, rotating chairman of the Shenzhen-based electronics giant, that adoption of 5G, with its promise of faster speeds, higher bandwidth and lower latency, was still quite low for some applications.

Despite the dream five years ago, that the tech would link up everything, "we have not connected all things," Hu said.

It was a refreshingly frank assessment, sandwiched between the usual cheerleading for 5G. A distinct change of tack could be detected from the normal trumpeting of raw performance to an acknowledgement that power consumption would need to be reduced amid concerns about efficiency.

On that note, we'll draw a veil over the fact that the event's host, Dubai, features an indoor ski slope on the edge of a desert.

Hu ticked off a shopping list of things that hadn't quite happened in the 5G world just yet: there are now 10,000 "5GtoB" projects in the world, but more than half are in China, and industry has yet to see its promised redefinition. 5GtoB is Huawei's B2B 5G services punt, which includes a network, a NaaS offering, a 5G app engine, and a marketplace.

There had been great hopes for virtual reality and 360 broadcasting, but neither had taken off. And so it went on.

That said, Hu also noted faster-than-expected growth in some areas, claiming over 1.5 million base stations and 176 commercial 5G networks were up and running along with more than half a billion 5G users (smartphone users and industry modules).

Hu also reckoned plenty of opportunities lay ahead. The pandemic had accelerated digital transformation by approximately seven years, he said, and consumers had hopped online and were voraciously consuming services such as video streaming. As well as the increasing trend toward cloud applications, there was a demand for decent wireless home broadband.

Fertile ground for 5G and 5.5G, for sure.

Getting latency down to 10ms and upping the bandwidth are key, said Hu, as he wheeled out the industry buzzword of the moment: the metaverse. After all, if AR and VR haven't taken off as hoped, there is always extended reality, or XR.

And then there is a growing awareness among the population that perhaps shoving yet more power-hungry gizmos into data centres might not be the best approach. But hey, Huawei has just the 5G (and 5.5G) and networking tech for that, assuming you live in a country that hasn't banned its tech.

"We can't do anything about that," a spokesperson told The Register with a hint of a smile.

Huawei's kit is famously being pulled out of UK networks amid mistrust of the Chinese government, although it continues to install its telecommunications technology elsewhere. As well as telco representatives from the Middle East, the likes of Vodafone turned up via video link to extol the virtues of 5G.

Konstantinos Masselos, President of Greece's Hellenic Telecommunications & Post Commission, spoke in person about spectrum strategy, even as the backdrop behind him strobed like a ZX Spectrum loading screen.

Naturally, Huawei was keen to show off its other toys. The AR and XR department was taken care of by a display showing a customer garbed in virtual traditional attire, thanks to an Azure Kinect DK camera and a big screen. An electric car was also on show, intended as a showcase for Huawei's dream of a connected automobile world, but sadly lacking its battery thanks to problems getting the units shipped into the UAE. There's perhaps a metaphor in there somewhere.

5G technology is critical for Huawei as the company faces sanctions around the world. The banhammer was dropped in the UK last year, prohibiting telcos from purchasing its kit and requiring them to remove what had already been installed by 2027. US sanctions have played a role in a decline in the company's revenues as components have become difficult to source for products such as smartphones. That said, back in August, rotating chairman Eric Xu remained bullish about the company's enterprise and carrier business (excluding the likes of the UK, of course).

While some countries might regard Huawei with some suspicion, others appear more than happy to fit out data centres with its tech, poor firmware engineering processes or not.

Overall, the theme of the 2021 Mobile Broadband Forum was a recognition that the world had changed in the last two years. Raw performance seemed to take a back seat to the potential for power savings and efficiency improvements as old kit gradually gets replaced with new over the coming years.

While XR might seem a contender for next year's hypewagon, a renewed emphasis on industry applications and standards for 5G seems a good deal more realistic.

The Register attended MBBF 2021 as a guest of Huawei.

AWS admits cloud ain’t always the answer, intros on-prem vid-analysing box – The Register

Amazon Web Services, the outfit famous for pioneering pay-as-you-go cloud computing, has produced a bit of on-prem hardware that it will sell for a one-off fee.

The device is called the "AWS Panorama Appliance" and the cloud colossus describes it as a "computer vision (CV) appliance designed to be deployed on your network to analyze images provided by your on-premises cameras".

"AWS customers agree the cloud is the most convenient place to train computer vision models thanks to its virtually infinite access to storage and compute resources," states the AWS promo for the new box. But the post also admits that, for some, the cloud ain't the right place to do the job.

"There are a number of reasons for that: sometimes the facilities where the images are captured do not have enough bandwidth to send video feeds to the cloud, some use cases require very low latency," AWS's post states. Some users, it adds, "just want to keep their images on premises and not send them for analysis outside of their network".

Hence the introduction of the Panorama appliance, which is designed to ingest video from existing cameras and run machine learning models to do the classification, detection, and tracking of whatever your cameras capture.

AWS imagines those ML models could well have been created in its cloud with SageMaker, and will charge you for cloud storage of the models if that's the case. The devices can otherwise run without touching the AWS cloud, although there is a charge of $8.33 per month per camera stream.

The appliance itself costs $4,000 up front.
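Putting the two figures together, a quick back-of-the-envelope model; the prices are AWS's published numbers, while the fleet size and time horizon are hypothetical:

```typescript
// Rough total cost of ownership for a Panorama deployment. The $4,000
// appliance fee and $8.33 monthly per-stream charge come from AWS; the
// ten-camera, three-year scenario below is made up for illustration.
const APPLIANCE_USD = 4000;
const PER_STREAM_PER_MONTH_USD = 8.33;

function totalCostUsd(cameraStreams: number, months: number): number {
  return APPLIANCE_USD + cameraStreams * PER_STREAM_PER_MONTH_USD * months;
}

console.log(totalCostUsd(10, 36)); // 4000 + 10 * 8.33 * 36 = 6998.8
```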

Charging for hardware is not AWS's usual modus operandi. Its Outposts on-prem clouds are priced on a consumption model, and the Snow range of on-prem storage and compute appliances is also rented rather than sold.

The Panorama appliance's specs page states that it contains Nvidia's Jetson AGX Xavier AI edge module with 32GB of RAM. The spec doesn't mention local storage, but lists a pair of gigabit Ethernet ports, two HDMI 2.0 ports, and two USB ports.

AWS announced the appliance at its re:Invent gabfest in December 2020, when The Register opined that the cloudy concern might be taking a rare step into on-prem hardware, and by doing so would be eating the lunches of server-makers and video hardware specialists alike. Panorama turns out not to have quite the same power to drive cloud services consumption as other Amazonian efforts, since the ML models it requires could come from SageMaker or other sources. That fact, and the very pre-cloud pricing scheme, mean the device could be something of a watershed for AWS.
