Sonatype and Cloud Native Computing Foundation Partner to – GlobeNewswire

Fulton, Md., Oct. 06, 2022 (GLOBE NEWSWIRE) -- Sonatype, the pioneer of software supply chain management, in partnership with the Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, has announced an inaugural virtual Security Slam event to help improve CNCF projects' security posture while raising up to $50,000, donated by Google, for the CNCF Diversity Scholarship Fund.

Security Slam is a virtual event aimed at improving the security posture of all CNCF open source projects. The new event will use CNCF's automated CLOMonitor, which measures project security, enabling maintainers and contributors to work together to improve participating projects' overall security. Every CNCF project that reaches 100% Security status will win prizes for its top participating maintainers and contributors, including free Linux Foundation courses, gift cards to the CNCF online store, and more.

"From our ongoing stewardship of Maven Central to the creation of our free developer solutions like OSS Index, Sonatype has a long history of supporting the open source community," says Brian Fox, co-founder and CTO of Sonatype. "We are ecstatic to partner with CNCF and Google on this event to improve CNCF projects' security, while raising funds that can help expand our community to include more individuals from historically underrepresented groups."

Additionally, the top overall contributor will win free airfare and hotel to the next KubeCon + CloudNativeCon, courtesy of the Open Source Travel Fund by Community Classroom. Plus, for every project that achieves 100% Security, Google will donate $2,500 to CNCF's Diversity Scholarship Fund, which helps underrepresented individuals become valuable members of the CNCF community. The event will culminate at KubeCon + CloudNativeCon North America 2022 in Detroit, held October 24-26, where winners will be announced.

"We're thrilled to be putting on this event that will help our projects become even more secure, while garnering the largest donation we've ever received for the CNCF Diversity Scholarship Fund and giving prizes to our valued contributors and maintainers," said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation.

To learn more about the Security Slam, visit community.cncf.io/cloud-native-security-slam/.

Open source maintainers can sign their project up for participation here, and open source contributors can sign up to participate here.

About Sonatype

Sonatype is the software supply chain management company. We empower developers and security professionals with intelligent tools to innovate more securely at scale. Our platform addresses every element of an organization's entire software development life cycle, including third-party open source code, first-party source code and containerized code. Sonatype identifies critical security vulnerabilities and code quality issues and reports results directly to developers when they can most effectively fix them. This helps organizations develop consistently high-quality, secure software that fully meets their business needs and those of their end-customers and partners. More than 2,000 organizations, including 70% of the Fortune 100, and 15 million software developers already rely on our tools and guidance to help them deliver and maintain exceptional and secure software.

See the rest here:

Sonatype and Cloud Native Computing Foundation Partner to - GlobeNewswire

[Interview] Next Generation Connected Experiences: Experts Share the Story Behind Tizen’s 10 Year Development – Samsung

On October 12, Samsung Electronics will host the Samsung Developer Conference 2022 (SDC 2022) in the U.S. Through this year's SDC, Samsung will showcase its latest updates that seek to create even smarter user experiences by intuitively and organically connecting various devices. As Samsung provides upgraded, next-generation connected experiences, the role of the operating system (OS) has become even more important.

Samsung recognized the importance of the OS early on and subsequently began research and development. In April of 2012, Samsung unveiled the first version of Tizen, a Linux-based open-source platform. 10 years later, at this year's SDC, Samsung will unveil its new vision for Tizen 7.0.

The team behind the research and development of Tizen OS at Samsung Research: VP Jinmin Chung, Head of the Platform Team (center), and researchers Seonah Moon of the MDE Lab (left) and Jin Yoon of the Tizen Platform Lab (right)

Since the first version of Tizen was released, much time has passed and Tizen has evolved in a variety of ways. To learn the details behind the development of Tizen, Samsung Newsroom met with Samsung Research's Vice President Jinmin Chung and researchers Jin Yoon and Seonah Moon, who have been working on Tizen since the beginning.

Tizen is a Linux-based open-source platform led by Samsung that supports all types of smart devices. Designed to be used in various types of Samsung products to support smooth product operation, Tizen had been installed on about 330 million smart devices as of the end of 2021.

"We needed Tizen to differentiate Samsung's devices from others and to provide a different service and user experience," VP Chung said. "It's already been 10 years since Tizen was first developed. We experienced challenges in the initial development stage, but we felt supported by the people who believed in the possibility and usability of Tizen and rooted for us. We focused on the research proudly knowing that we were leading Samsung's own independent OS development project," he added.

In 2014, for the first time, Tizen was used in Samsung's Gear 2, a wearable device, proving its viability through commercialization. A year later, Tizen was used in the 2015 Samsung Smart TV product lineup, which set a new bar for smart TVs.

Tizen has many advantages that enable it to offer the highest performance across Samsung's devices. First, Tizen's flexibility allows it to be easily applied to a variety of smart devices. In order to make this possible, Tizen went through multiple platform improvement processes. Several profiles were established based on the different types of products. Then, Tizen Common, which is the common module for all products, and the Specialized Module, which is needed for certain products only, were created. The structure is designed in a way that allows the platform to be quickly modified and applied to new products as well. This enables Tizen to be utilized in a wide range of products, including smart TVs, refrigerators and air conditioners.

Additionally, Samsung utilized its advanced know-how and experience in commercializing embedded system software when developing Tizen. The Tizen platform is optimized to perform well while using minimal memory and low power. It's an open-source platform that can be used by anyone, and it supports optimized performance for immediate commercialization.

Tizen is also convenient for new product development because it is Samsung's own independent OS. The platform can be modified as desired to add new features and services to products in a timely manner.

"Across the world, only a handful of companies own their own independent OS," Chung said. "The fact that Samsung has its own OS called Tizen means that Samsung has become a company proficient in developing not only hardware but software as well," he emphasized.

Many developers put much effort into the development and evolution of Tizen.

Researcher Jin Yoon has been participating in the Tizen project since its early stages, meaning he has witnessed the growth of Tizen firsthand. "Starting with smart TVs, the applications for Tizen are gradually increasing, and the system is evolving and advancing further," Yoon said. "In addition, the implementation of a new development language, framework and infrastructure makes development more convenient and increases the productivity of developers. Now, we're working hard to secure usability that is appropriate for each product group that uses Tizen," he continued.

"The code sources of Tizen are very stable because they've gone through actual commercialization. On top of that, they come with performance-specific details and security as well. This means third-party developers can trust and find these sources," Yoon said.

Expanding the application of Tizen to more devices and creating an ecosystem for Tizen is important for improving the usability of a product, but active participation in open-source communities is also crucial. This is because open-source communities enable community members to share problems and come up with solutions together, directly contributing to the improvement of software. To manage this, Seonah Moon from the MDE Lab, who's been developing Tizen for eight years, is responsible for tasks involving open-source maintenance. Countless open-source components were also used in Tizen's development. Moon monitors each one, analyzes its errors and shares her opinion on them to help outside developers access Tizen more easily.

Tizen is more than just an OS for Samsung's developers and researchers.

Since the platform's development requires constant maintenance, developers must continue to hone their skills. Tizen motivated each member of the development team to continue learning and improving their software skills, and the developers of Tizen have grown into experts specializing in different areas. Since the team's start, its members have accumulated a large body of platform code over the last 10 years. They also constantly learned by voluntarily participating in study group meetings.

"Tizen is like a bridge that connects all of Samsung's products together," Moon said. "Cooperation among business divisions is a must for utilizing the OS in different products. Through this cooperation, we can share our development knowledge with one another and also create a new service based on our OS," Moon continued.

Tizen has continued to evolve to allow all devices around us, including wearables, TVs, refrigerators and even robot vacuum cleaners, to provide new user experiences. When asked about what the future of Tizen will look like, the developers explained their ambitions to continue connecting devices using Tizen.

"We're now living in an era where everything is connected to one another through the Internet of Things (IoT)," said Yoon. "By increasing the productivity of Tizen app development, I'd like to provide an innovative user experience where all Samsung products are connected to one another, creating an interconnected product ecosystem," he explained.

"I'd like to lead the efforts in expanding Tizen's use in a variety of ways by discovering new scenarios and utilizing even more advanced technologies," said Moon when explaining her ambitions. "By growing together with Tizen, I'd like to become the maintainer or a key contributor to the open-source project," she continued.

"I dream of a future in which Tizen is equipped in all the devices that people use in their daily lives, enabling various devices to operate organically as if they're one and providing intelligent services, like the metaverse," Chung said. "To enable this, Samsung Research is developing various technologies with many teams in order to manifest a future in which various Tizen devices are all connected through the OS, providing a Multi-Device Experience (MDE), modular AI and more."

The infinite possibilities of Tizen will be showcased at SDC 2022 this year. At the conference, Samsung Research will share how easy it is to make new devices based on the flexibility of Tizen 7.0, and how the new version of Tizen can strengthen intelligent services. As it continues to evolve in line with the era of hyper-connectivity and intelligence, the future of Tizen is bright and its applications are limitless.

View original post here:

[Interview] Next Generation Connected Experiences: Experts Share the Story Behind Tizen's 10 Year Development - Samsung

Red Hat CEO on OpenShift roadmap, competitive play – ComputerWeekly.com

Red Hat, the open source juggernaut known for its enterprise-grade Linux distribution and OpenShift container application platform in more recent years, undertook a leadership change in July 2022, when it appointed Matt Hicks as president and CEO.

Hicks, who previously served as Red Hat's executive vice-president of products and technologies, took over the top job from Paul Cormier, who will serve as chairman of the company.

In a wide-ranging interview with Computer Weekly in Asia-Pacific (APAC), the newly minted CEO said he hopes to continue building on Red Hat's core open source model and tap new opportunities in edge computing with OpenShift as the underlying technology platform.

Having taken over as Red Hat CEO recently, could you tell us more about how you'd like to take the company forward?

Hicks: I've been at Red Hat for a long time, and what drew me to Red Hat was its core open source model, which is very unique and empowering. I distil it down to two fundamental things. One, we genuinely want to innovate and evolve on the shoulders of giants because there are thousands of creative minds across the world who are building and contributing to the products that we refine.

The second piece is that customers also have access to the code, and they understand what we're doing. They can see our roadmaps, and our ability to innovate and co-create with them is unique. Those two things go back a long time and make us special. For me, that's the core mentality we want to hold on to at Red Hat because that's what differentiates us in the industry.

In terms of where we want to go with that open source model, we've talked about the open hybrid cloud for quite a while because we think customers are going to get the best in terms of being able to run what they have today, as well as where they want to be tomorrow. We want to help customers be productive in cloud and on-premise, and use the best that those environments offer, whether it's from regional providers, hyperscalers, as well as specialised hardware. We see hybrid cloud as a huge, trillion-dollar opportunity, with just 25% of workloads having moved to the cloud today.

Potentially, there are more exciting opportunities with the extension to edge. We're seeing this accelerate with technologies such as 5G, where you still need to have computing reach and move workloads closer to users while pushing technologies like AI [artificial intelligence] at the point of interaction with users.

So, it's going from the on-premise excellence we have today, extending that reach into public cloud and eventually into edge use cases. That's Red Hat's three- to five-year challenge, and an opportunity we are addressing with the same strategy of open source-based innovation that we've had in the past.

"We're involved in practically every SBOM effort at this point, but when we make that final choice, we want to make sure it's the most applicable choice at the time" - Matt Hicks, Red Hat

Against the backdrop of what you've just described, what is your outlook for APAC, given that the region is very diverse with varying maturities in adopting cloud and open-source technologies?

Hicks: If we look at APAC as a market, I think the core fundamentals of using software to drive digital transformation and innovation are key, and that could be for a lot of reasons. It could be controlling costs due to inflation. It could be tighter labour markets, where we need to drive automation. It could be adjusting to the Covid-19 situation where you might not be able to access workers. And I think for all of these reasons, we've seen the drive to software innovation in APAC, similar to the other markets.

DBS Bank is a good example in Singapore. They pride themselves in driving innovation and by using OpenShift and adopting open source and cloud technologies, they were able to cut operating costs by about 80%. But they are not just trying to cut costs, they also want to push innovation and I think that's very similar to other customers we have across the globe.

Kasikorn Business Technology Group in Thailand has a very similar approach, where they're using technologies such as OpenShift to cut development times from a month to two weeks while increasing scale. Another example is Tsingtao Alana, which is using Ansible to drive network automation and improve efficiencies.

Like other regions, the core theme of using software innovation and getting more comfortable with open source and leveraging cloud technologies is similar in APAC. But one area where we might see an acceleration in APAC more so than in the US is the push to edge technologies driven by the innovation from telcos.

You spoke a lot about OpenShift, which has been a priority for Red Hat for a number of years. Moving forward, what's the balance in priorities between OpenShift and Red Hat Enterprise Linux (RHEL), which Red Hat is best known for among many companies in APAC?

Hicks: It's a great question, and here's how I tend to explain that to customers that are new to the balance between OpenShift and RHEL.

The core innovation capability that RHEL provides on a single server is still the foundation that we build on. It's done really well for decades, for being able to provide that link to open source innovation in the operating system space. I call it the Rosetta Stone between development and hardware, and being able to get the most out of that is what we aspire to do with RHEL.

That said, if you look at what modern applications need - and I've been in this space for more than 20 years - they far exceed the resources of a single computer today. And in many cases, they far exceed the resources of a dozen, 100 or 1,000 computers. OpenShift is like going from a single bee to a swarm of bees, which gives you all the innovation in RHEL and lets you operate hundreds or thousands of those machines as a single unit so you can build a new class of applications.

RHEL is part and parcel of OpenShift, but it's not a single-server model anymore. It's that distributed computing model. For me, that's exciting because I started my open source journey with Linux and then with RHEL when I was in consulting. Since then, the power of RHEL has expanded across datacentres and helps you drive some incredible innovation. That's why the pull to OpenShift doesn't really change our investment footprint, as RHEL offers a great model to leverage all of those servers more efficiently.

Could you dive deeper into the product roadmap for OpenShift? Over the years, OpenShift has been building up more capabilities, including software as a service (SaaS) for data science, for example. Are we expecting more SaaS applications in the future?

Hicks: When we think about OpenShift, or platforms in general, we try to focus on the types of workloads that customers are using with them and how we can help make that work easier.

One of the popular trends is AI-based workloads, and that comes down to the training aspects of it, which require capabilities like GPU rather than CPU acceleration. Being able to take trained models and incorporate them into traditional development is something that companies struggle with. So, the way to get your Nvidia GPUs to work with your stack, and then get your data scientists and developers working together, is our goal with OpenShift Data Science.

We know hardware enablement, we have a great platform to leverage both training and deployment, and we know developers and data scientists, so that MLOps space is a very natural fit. What you will see more from us in the portfolio is what we call the operating model, where for decades, the prevalent model in the industry was having customers run their own software supplied and supported by us.

The public cloud has changed some of the expectations around that. While there's still going to be a ton of software run by customers, they are also increasingly leveraging managed platforms and cloud services. Once we know the workloads that we need to get to, we will try to offer that in multiple models where customers can run the software themselves if they have a unique use case.

But at the same time, we want to improve our ability to run that software for them. One area where you'll see a lot of innovation is managed services, in addition to the software and edge components.

"We at Red Hat, along with IBM, have put our bet on containers. VMware, I think, has tried or was sort of a late entrant to that party around Tanzu. For us, our core is innovation in Linux, which is an extension to containers" - Matt Hicks, Red Hat

If you look at telcos, for example, they run big datacentres with lots of layers in between where the technology stack gets smaller and smaller. They also have embedded devices, which may have RHEL on them even if they are running containers. In the middle, we're seeing a pull for OpenShift to get smaller and smaller. You can think of it as the telephone pole use case for 5G, or maybe it's closer to the metropolitan base station that runs MicroShift, a flavour of OpenShift optimised for the device edge.

That ability to run OpenShift on lightweight hardware is key, as edge devices don't have the same power and compute capabilities of a datacentre. So, those areas, coupled with specific use cases like AI or distributed networking-based applications, are where you'll see a lot of the innovation around OpenShift.

Red Hat has done some security work in OpenShift to support DevSecOps processes. I understand that currently there aren't any software bill of materials (SBOM) capabilities embedded in OpenShift. What are your thoughts around that?

Hicks: If we picked one of the most important security trends that we try to cater to, it is understanding your supply chain and being confident in the security of it. Arguably, this is what we do: we take open source, where you might not have that understanding of its provenance or the expertise to understand it, and add a layer of provenance so you know where it's coming from.

I would argue that for the past 20 years, whether it was the driving decision or not, you are subscribing to security in your supply chain if you are a Red Hat customer. And we're excited about efforts around how you build that bill of materials when you're not only running Red Hat software but also combining Red Hat software with other things.

There are a few different approaches, and this is always Red Hat's challenge: when we make a bet, we have to stick with it for a while. We're involved in practically every SBOM effort at this point, but when we make that final choice, we want to make sure it's the most applicable choice at the time.

So, while we haven't pulled the trigger on a single approach or said what we will support, the core foundation behind SBOM is absolutely critical and we invest a lot there. We're excited about this, and honestly, before the SolarWinds incident, this was an area that was overlooked as a risk to consuming software that you don't understand.

With open source continuing to drive innovation, I think it's critical for customers to understand where they're getting that open source code from, whether it's tied to suppliers or whether they're responsible for understanding it themselves. But we haven't made that final call on the SBOM format to support right now. I fully expect, in the next year or so, that we start to converge as an industry on a couple of approaches.

What are your thoughts on the competitive landscape, particularly around VMware with its Tanzu Application Platform?

Hicks: It's really about choosing the right technology architecture to get the most out of hybrid cloud. About a year ago, most customers were drawn to a single public cloud and that trend was certainly strong, at least in the US and Europe, for a variety of reasons.

I think enterprises have realised that they might still have that desire, but it's not practical for them. They're going to end up in multiple public clouds, maybe through acquisition or geopolitical challenges. And your on-premise environments, whether it's mainframe technology or others, are not going away quickly. The need for hybrid has therefore become much more recognised today than it was even a year or two ago.

The second piece on that is, what is the technology platform that enterprises are going to leverage to build and structure their application footprint for hybrid? VMware certainly has their traditional investment in virtualisation and the topology around that.

We at Red Hat, along with IBM, have put our bet on containers. VMware, I think, has tried or was sort of a late entrant to that party around Tanzu. For us, our core is innovation in Linux, which is an extension to containers. We're pretty comfortable with that and we see a lot of traction because all the hyperscalers have adopted that model.

Personally, I think we have a great position on a technology that lets customers leverage public clouds natively and get the most out of their on-premise environments. I don't know if virtualisation will have that same reach and flexibility of being able to run on the International Space Station, as well as power DBS Bank's financial transactions, as containers do.

VMware, I think, will be more drawn to their core strength in virtualisation, but we still have 75% of workloads remaining that have yet to move, so we'll see how that really shakes out. But I'm pretty comfortable with the containers and OpenShift bet on our side.

Red Hat has a strategic partnership with Nutanix to deliver open hybrid cloud offerings. In light of the uncertainty around Broadcom's acquisition of VMware, are you seeing more interest from VMware customers?

Hicks: Acquisitions are tricky and it's hard to predict the outcome of an acquisition like that. What I would say is that we partner pretty deeply with VMware today, as virtualisation still provides a good operating model for containers. I would expect us to partner with VMware as part of Broadcom.

That said, there's a bit of uncertainty in an area like this, and it does create a decision point around architecture. We're neutral to that because for us, if customers choose to stay on that core vSphere base, we will continue to serve them, even if containers are their technology going forward.

We also partner closely with companies like Nutanix, which will compete at that core layer. For us, we really run on the infrastructure tier, and we want to let customers run applications whether they are on Nutanix, vSphere or Amazon EC2.

We don't really care too much where that substrate lies. We want to make sure we serve customers at that decision point, and I think we have a lot of options to deliver to customers regardless of how the acquisition ends or how the landscape changes with other partners.

Continued here:

Red Hat CEO on OpenShift roadmap, competitive play - ComputerWeekly.com

How Citrix dropped the ball on Xen … according to Citrix – The Register

Open Source Summit What's the difference between the Citrix Hypervisor and Xen? Well, one has quite a big crowd of upset current and former community members.

One of the more interesting talks at the Open Source Summit was from Jonathan Headland, software development manager at Citrix, with the unusual title "How to Disengage the Open-Source Community: The Citrix Hypervisor Experience." Given all the usual fist-pumping so many companies' marketing teams like to engage in, especially at an event like the Open Source Summit, The Reg FOSS desk was intrigued.

Among other things, these days Citrix offers the Citrix Hypervisor, the product formerly known as XenServer, which it has owned since it acquired XenSource in 2007. The focus of Headland's talk [PDF] was how Citrix mismanaged the relationships between its commercial version of XenServer, the free open source version, and both its upstream and its user community. His opening line was:

He went on to carefully itemize the mistakes the company made, and the four lessons he suggests for others to avoid doing the same.

Citrix originally offered XenServer under the "freemium" model: one product was free, the other commercial for enterprise users. Only paying customers received maintenance. The model was successful, and the revenue funded a full-time team of eight engineers and a community manager who worked on the upstream project.

According to Headland, Citrix's first significant misstep was in 2011, when it decided to open source the full feature set of the enterprise product, with revenue to be made from support. The goal, he said, was to get more community buy-in, but the company learned some tough lessons: "We had a very poor understanding of our customers and what they'd actually pay for. Customers happily took the new features, but it turned out that they weren't so keen to pay for the maintenance."

The result was crashes: in revenue, in the reputations of the people who made the decisions, and in that of open source itself within the company.

Another problem was that when Citrix gave away the source code to the enterprise product, it didn't provide the accompanying tooling. "Even if we had remembered, or thought, to make all these tools available, we'd still have needed to teach people how to use them."

The result was disappointing Xen enthusiasts, and "rather than increasing contributions, it inhibited them."

In 2017, in "an atmosphere of mistrust of open source projects, and with the feeling that many of its customers were free-riding and not making any contributions," the company announced a change of direction. Product management reintroduced limits, cutting or reducing the features available for free, some to lower than the free product had back in 2010. For example, the number of hosts in a server cluster went from 64 down to just three.

However, this hit one particular sub-community of free users that one engineer Headland interviewed called the "weird systems users": hobbyists who offer virtualization to non-profit and charity users, using old, out-of-maintenance hardware that had been inherited or passed on to them. These were enthusiast users with no funds to buy licenses, but who had been among the most important finders and fixers of bugs. Unable to use the free version any more, they were forced to move to other products or create their own.

The result was a whole new product: XCP-ng. "We did regain revenue from our paying customers, the big ones, the enterprises who were never going to make contributions but we lost the ones who were keeping the project alive."

Headland's talk ended with some confessions, and the four lessons he felt were the most important things for any company selling products based on open source.

He said that Citrix misunderstood and underestimated the breadth and richness of the community, and the number and types of stakeholders in it. That it hadn't identified the most important members, and didn't do enough to support even the ones it did know about. It also never thought about working with the other "commercial peer contributors to Xen: Red Hat, Oracle, SUSE, and Amazon, to name a few."

His takeaways?

It is both fascinating and wonderful to see such openness and honesty from any commercial entity. Even before Citrix changed its name, Xen wasn't the most well-known commercial hypervisor; that's always been VMware, the company that pretty much created the industry. But as the references to Amazon and EC2 hint, Xen has some very big users and is a more important competitor in the space than you might think.

Whether Citrix's candor is going to win it more trust is uncertain, but it's an astonishingly big olive branch, and we applaud it.

Here is the original post:

How Citrix dropped the ball on Xen ... according to Citrix - The Register

First Hand analysis: a good open source demo for hand-based interactions – The Ghost Howls

I finally managed (with some delay) to find the time to try First Hand, Meta's open-source demo of the Interaction SDK, which shows how to properly develop hand-tracked applications. I've tested it for you, and I want to tell you my opinions about it, both as a user and as a developer.

First Hand is a small application that Meta has developed and released on App Lab. It is not a commercial app, but an experience developed as a showcase of the Interaction SDK inside the Presence Platform of Meta Quest 2.

It is called First Hand because it has been roughly inspired by Oculus First Contact, the showcase demo for the Oculus Touch controllers released for the Rift CV1. Actually, it is just a very vague inspiration, because the experiences are totally different, and the only similarity is the presence of a cute robot.

The experience is open source, so developers wanting to implement hand-based interactions in their applications can literally copy-paste the source code from this sample.

Here follows my opinion on this application.

As a user, I've found First Hand a cute and relaxing experience. There is no plot, no challenge, nothing to worry about. I just had to use my bare hands and interact with objects in a natural and satisfying way. The only problem for me is that it was really short: in 10 minutes, it was already finished.

I expected a clone of the Oculus First Contact experience that made me discover the wonders of Oculus Touch controllers, but actually, it was a totally different experience. The only similarities are the presence of a cute little robot and of some targets to shoot. This makes sense considering that controllers shine with different kinds of interactions than the bare hands, so the applications must be different. Anyway, the purpose and the positive mood were similar, and this is enough to justify the similar name.

As I've said, in First Hand there is not a real plot. You find yourself in a clock tower, and a package gets delivered to you. Inside there are some gloves (which are a mix between Alyx's and Thanos's ones) which you can assemble and then use to play a minigame with a robot. Over.

What is important here is not the plot, but the interactions that you perform in the application in a natural way using just your bare hands. And some of them are really cool, like for instance:

There were many other interactions, with some of them being simple but very natural and well made. What is incredible about this demo is that all the interactions are very polished and designed to be ideal for the user.

For instance, pressing buttons just requires tapping on them with the index finger, but the detection is very reliable, and the button never gets trespassed by the fingertips, which makes it feel very well made and realistic.

Shooting with the palm would be very unreliable because you can't take good aim with your palm, so the palm shoots a laser for a few seconds, giving you time to adjust its aim until you hit the object you want to destroy.

And force-grabbing is smart enough to work on a cone of vision centered on your palm. When you stretch your palm, the system detects the closest object in the area you are aiming at with your hand, and automatically shows you a curved line that goes from your palm to it. This is smart for two reasons:

All the interactions are polished to cope with the unreliability of hand tracking. I loved how things were designed, and I'm sure that Meta's UX designers ran many experiments before arriving at this final version. I think the power of this experience as a showcase is given by this level of polish, so that developers and designers can take inspiration from it to create their own experiences.
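To make the cone-based force grab described above a bit more concrete, here is a minimal, hypothetical sketch of how such a target selector could work. This is not Meta's Interaction SDK code (the SDK is a Unity/C# framework and far more elaborate); the function name, types and 25-degree threshold are illustrative assumptions only, just to show the geometry of picking the closest object inside a cone centred on the palm.

```cpp
// Hypothetical sketch: select the closest object inside a cone of vision
// centred on the palm, similar in spirit to the force-grab behaviour
// described above. Not actual Interaction SDK code.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 v)            { return std::sqrt(dot(v, v)); }
Vec3  normalize(Vec3 v)         { float l = length(v); return {v.x / l, v.y / l, v.z / l}; }

// Returns the index of the closest object whose direction from the palm lies
// within coneHalfAngleDeg of the palm's forward vector, if any.
std::optional<std::size_t> pickForceGrabTarget(Vec3 palmPos, Vec3 palmForward,
                                               const std::vector<Vec3>& objects,
                                               float coneHalfAngleDeg = 25.0f) {
    const float cosLimit = std::cos(coneHalfAngleDeg * 3.14159265f / 180.0f);
    const Vec3 fwd = normalize(palmForward);
    std::optional<std::size_t> best;
    float bestDist = 1e30f;
    for (std::size_t i = 0; i < objects.size(); ++i) {
        const Vec3 to = objects[i] - palmPos;
        const float dist = length(to);
        if (dist < 1e-4f) continue;                        // already in the hand
        if (dot(normalize(to), fwd) < cosLimit) continue;  // outside the cone
        if (dist < bestDist) { bestDist = dist; best = i; }
    }
    return best;
}

int main() {
    std::vector<Vec3> objects = {{0, 0, 5}, {3, 0, 4}, {0, 5, 0}};
    auto target = pickForceGrabTarget({0, 0, 0}, {0, 0, 1}, objects);
    if (target) std::cout << "grab object " << *target << "\n";  // expect object 0
}
```

A real engine implementation would use the engine's own vector types and run a check like this every frame against grabbable objects only, but the "closest object inside the palm cone" rule is the same.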

As I've told you, as a user I found the experience cute and relaxing. But at the same time, I was quite frustrated by hand tracking. I used the experience both with artificial and natural light, and in both scenarios, I had issues, with my hands losing tracking a lot of times. It's very strange, because I found tracking more unreliable than in other experiences like Hand Physics Lab. This made my experience a bit frustrating, because for instance, when I went to grab the first object to take the remote and activate the elevator, lots of times my virtual hand froze when I was close to it, and it took me many tries before being able to actually grab it.

I also noticed more how hand tracking can be imprecise: Meta did a great job in masking its problems by creating a smart UX, but it is true that the smart tricks described above were necessary because hand tracking is still unreliable. Furthermore, not having triggers to press, you have to invent other kinds of interactions to activate objects, which may be more complex. Controllers are much more precise than hands, so for instance shooting would have been much easier with controllers, exactly like force grab.

This showed me that hand tracking is not very reliable and can't be the primary controlling method for most VR experiences, yet.

But at the same time, when things worked, performing actions with my hands felt much better. I don't know how to describe it: when using controllers, my brain knows that I have a tool in my hands, while just using my bare hands, it feels more like it's truly me making that virtual action. After I assembled the Thanos gloves, I watched my hands with them on, and I had this weird sensation that those gloves were on my real hands. That never happened to me with controllers. Other interactions were satisfying too, like scrolling the Minority Report-style screen, or force grabbing. Force grabbing is really well implemented here, and it felt very fun to do. All of this showed me that hand tracking may be unreliable, but when it works, it gives you a totally different level of immersion than controllers. For sure, in the future hand tracking will be very important for XR experiences, especially the ones that don't require you to have tools in your hands (e.g. a gun in FPS games).

Since the experience is just 10 minutes long, I suggest everyone give it a try on App Lab, so you can better understand the sensations I am talking about.

The Unity project of First Hand is available inside this GitHub repo. If you are an Unreal Engine guy, well, I'm sorry for you.

The project is very well organized into folders, and the scene also has a very tidy tree of GameObjects. This is clearly a project made with production quality, something that doesn't surprise me because this is a common characteristic of all the samples released by Meta.

Since the Interaction SDK can be quite tricky, my suggestion is to read its documentation first and only afterwards check out the sample, or to have a look at the sample while reading the documentation, because there are some parts that are not easily understandable just by looking at the sample.

Since the demo is full of many different interactions, the sample gives you the source code for all of them, which is very precious if you, as a developer, want to create a hand-tracked application. You just copy a few prefabs, and you can use them in your experience. Kudos to Meta for making this material available to the community.

I've not used the Interaction SDK in one of my projects yet, so this sample was a good occasion for me to see how easy it is to implement it.

My impression is that the Interaction SDK is very modular and very powerful, but also not that easy to employ. For instance, if you want to generate a cube every time the user does a thumbs-up with either hand, you have to:

As you can see, the structure is very tidy and modular, but it is also quite heavy. Creating a UI button that generates the cube when you point-and-click with the controllers is much easier. And talking about buttons, to have those fancy buttons you can poke with your fingertip, you need various scripts and various child GameObjects, too. The structure is very flexible and customizable, but it also seems quite complicated to master.

For this reason, it is very good that there is a sample that gives developers prefabs that are ready out of the box and can be copied and pasted, because I guess that developing all of this from scratch can be quite frustrating.

The Interaction SDK, which is part of the Presence Platform of the Meta Quest, is interesting. But sometimes I wonder: what is the point of having such a detailed Oculus/Meta SDK if it works on only one platform? What if someone wants to build a cross-platform experience?

I mean, if I base all my experience on these Oculus classes, and this fantastic SDK with fantastic samples, how can I port all of this to Pico? Most likely, I cannot. So I have to rewrite the whole application, maybe managing the two versions in two different branches of a Git repository, with all the management hell that comes out of it.

This is why I am usually not that excited about updates to the Oculus SDK: I want my applications to be cross-platform and work on the Quest, Pico, and Vive Focus, and committing to only one SDK is a problem. For this reason, I'm a big fan of Unity.XR and the Unity XR Interaction Toolkit, even if these tools are much rougher and less mature than the Oculus SDK or even the old Steam Unity plugin.

Probably OpenXR may save us, and let us develop something with the Oculus SDK and make it run on the Pico 4, too. But for now, the OpenXR implementation in Unity is not very polished yet, so developers still have to choose between going for only one platform with a very powerful SDK, or going cross-platform with rougher tools or third-party plugins, like for instance AutoHand, which works well in giving physics-based interactions in Unity. I hope the situation will improve in the future.

I hope you liked this deep dive into First Hand, and if that is the case, please subscribe to my newsletter and share this post on your social media channels!

(Header image by Meta)

Read the original here:

First Hand analysis: a good open source demo for hand-based interactions - The Ghost Howls

How the blockchain helps a whisky and rum producer protect his brand – Fortune

Standing in a vineyard in the Alsace region of France, after the three-day-long Whisky Live Paris conference, longtime master distiller Mark Reynier wanted to discuss something else: the blockchain.

Although combining the age-old craft of distilling with such comparatively nascent technology may seem odd to some, Reynier, the CEO and founder of Waterford Whisky Distillery and of Renegade Rum Distillery, is no stranger to unconventional ideas.

Well known in the industry for helping revive the shuttered Bruichladdich Distillery, located on the isle of Islay in Scotland, Reynier helped pioneer applying the concept of terroir (or "teireoir," the term the company trademarked, which incorporates the Irish Gaelic word for Ireland) to whisky.

Terroir has been used in wine production for millennia, with winemakers obsessing over how environmental factors like microclimate, soil, and topography interact to create a wine's flavor profile. But the practice typically hadn't been applied to whiskey or rum, which are mostly mass produced by large corporations like Paris-based Pernod Ricard, which controls 80% of the global Irish whiskey market.

Using terroir to produce alcohol means rejecting the homogenization of industrial distillation or industrial manufacture, and extolling the virtues of going au naturel, Reynier told Fortune.

Through a proprietary blockchain-enabled system called ProTrace that validates their record-keeping system for manufacturing, Waterford and Renegade Rum are proving the effectiveness of terroir for spirits and presenting the details in digital form, tracking and compiling every step of the growing and distilling process. Cian Dirrane, the group head of technology for both distilleries, said he worked with his team to create ProTrace as a custom blockchain after researching open source code on GitHub, and it was implemented in 2019.

On the back of every bottle of whisky or rum distilled by Reynier's companies is a nine-digit code that customers can enter online to reveal myriad details. Although the company could have used another technology or system to accomplish the same goal, Dirrane said using the blockchain ensures the recorded data can't be changed.

"It's not just marketing bullshit," Reynier added. "It's a validation, as well as a proof of concept."

For one such bottle of whisky from Waterford Distillery, part of a bottling of 21,000, the company reported the names of the growers and when they harvested the grain, when the product was distilled and bottled, and how long the bottle's contents had matured, down to the day.

To make the data more visual, along with the processing information collected by Waterford's blockchain system, each bottle's unique report includes a map with the location of the farm where the barley was grown, a video of the field and the farmers, and ambient sounds.

"It's really a counter to the nonsense that's spouted around the world by different sales guys and brand owners or whatever," Reynier said. "Our process is so specific. And because we're small guys in a world of multinational companies, I have to be able to verify what I say."

The blockchain verification involves more than 800 validation points spanning from when the grain is received by the distillery to the end of distillation when the spirit is put in casks, according to Dirrane. These validation points include the amount of malt brought to the distillery by trucks and the temperatures reached when the fermented liquid is heated into vapor and condensed back into a liquid.

Every data point is validated and logged on the digital ledger, which can't be tampered with, Dirrane said.

"The whole production, from the raw product intake to the distillation process to the casking and aging, and then the finished product is all on the blockchain," Dirrane said. "If anybody wants to validate externally, they can see all the processes that happened."
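ProTrace itself is proprietary and its code has not yet been published, so here is only a rough illustrative sketch (assuming nothing about the real system) of why an append-only, hash-chained log makes silent edits detectable: every entry's hash depends on the previous entry's hash, so altering any logged validation point invalidates everything recorded after it. A real ledger would use a cryptographic hash such as SHA-256; std::hash is used here purely to keep the example self-contained.

```cpp
// Illustrative only: a minimal append-only, hash-chained log of production
// "validation points". Not ProTrace. std::hash stands in for a real
// cryptographic hash function.
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Entry {
    std::string data;       // e.g. "distillation run 7: condenser temp 78.3 C"
    std::size_t prev_hash;  // hash of the previous entry
    std::size_t hash;       // hash of (data + prev_hash)
};

std::size_t chainHash(const std::string& data, std::size_t prev) {
    return std::hash<std::string>{}(data + "|" + std::to_string(prev));
}

class Ledger {
    std::vector<Entry> entries_;
public:
    void append(const std::string& data) {
        const std::size_t prev = entries_.empty() ? 0 : entries_.back().hash;
        entries_.push_back({data, prev, chainHash(data, prev)});
    }
    // Anyone holding a copy of the ledger can re-derive every hash and
    // detect tampering with any earlier entry.
    bool verify() const {
        std::size_t prev = 0;
        for (const auto& e : entries_) {
            if (e.prev_hash != prev || e.hash != chainHash(e.data, prev)) return false;
            prev = e.hash;
        }
        return true;
    }
};

int main() {
    Ledger ledger;
    ledger.append("barley intake: farm #12, 28 t");
    ledger.append("fermentation complete: washback 3");
    ledger.append("distillation run 7: condenser temp 78.3 C");
    std::cout << std::boolalpha << ledger.verify() << "\n";  // true
}
```

Editing any earlier entry in a copy of this log would make verify() return false for everyone else holding the chain, which is the property the distilleries rely on when they invite external validation.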

Those processes do add to production costs. A typical bottle of whisky from Waterford Distillery may cost $80 to $120, whereas a bottle of Jameson, a well-known Irish whisky brand produced by Pernod Ricard, may retail for just $25. Renegade Rum's first mature bottling will be released by the end of the month, and one of the first bottles will likely cost about $55, compared with a $20 price tag on a bottle of Captain Morgan from London-based Diageo.

But Waterford Distillery and Renegade Rum could soon have some company. Dirrane and his team, instead of keeping the tech to themselves, have written a white paper and are planning to make the code and ledger open source and publish it online next year. Dirrane said that in addition to Reynier's commitment to transparency, as a software engineer, he's eager to see the program reviewed by peers.

Public or not, the blockchain has been essential for Reynier in an industry sometimes known for obfuscation.

"This is taking on a completely Wild West drink sector," Reynier said, "and trying to establish and verify my way of doing it so everybody can see the traceability and the transparency."

More here:

How the blockchain helps a whisky and rum producer protect his brand - Fortune

Google Chrome is the most vulnerable browser in 2022 – General Discussion Discussions on AppleInsider Forums – AppleInsider

New data reveals that Google Chrome users need to be careful when browsing the web, but Safari users don't get off scot-free.

According to a report by Atlas VPN on Wednesday, Google Chrome is the most vulnerable browser on the market. So far in 2022, the browser has had 303 vulnerabilities, bringing its cumulative total to 3,159.

These figures are based on data from the VulDB vulnerability database, covering January 1, 2022 to October 5, 2022.

Google Chrome is the only browser with new vulnerabilities in the first five days of October. Recent ones include CVE-2022-3318, CVE-2022-3314, CVE-2022-3311, CVE-2022-3309, and CVE-2022-3307.

The CVE program tracks security flaws and vulnerabilities across multiple platforms. The database doesn't list details for these flaws yet, but Atlas VPN says they can lead to memory corruption on a computer.

Users can fix these by updating to Google Chrome version 106.0.5249.61.

Mozilla's Firefox browser is in second place for vulnerabilities, with 117 of them. Microsoft Edge had 103 vulnerabilities as of October 5, 61% more than the entire year of 2021. Overall, it has had 806 vulnerabilities since its release.

Next is Safari, which has some of the lowest levels of vulnerabilities. For example, in the first three quarters of 2022, it had 26 vulnerabilities, and its cumulative total is 1,139 vulnerabilities since its release.

Meanwhile, the Opera browser has had no documented vulnerabilities so far in 2022 and only 344 total vulnerabilities.

Google Chrome, Microsoft Edge, and Opera all share the Chromium browser engine. Vulnerabilities in Chromium may affect all three browsers.

The Chromium open-source project generates the source code used by all Chromium-based browsers. Not all flaws will affect all of these browsers because each company creates their browsers in different ways.

As of May 2022, Safari reached over a billion users, and Apple has been working hard to make sure its browser is secure and safe to use.

To stay safe on the web, people should keep their browsers updated to the latest version. Be careful when downloading plug-ins and extensions, especially from lesser-known sources or developers.

Read on AppleInsider

Continued here:

Google Chrome is the most vulnerable browser in 2022 - General Discussion Discussions on AppleInsider Forums - AppleInsider

Intel CTO wants developers to build once, then run on any GPU – VentureBeat

Over two decades ago, the Java programming language, originally developed by Sun Microsystems, offered developers the promise of being able to build an application once and then have it run on any operating system.

Greg Lavender, CTO of Intel, remembers the original promise of Java better than most, as he spent over a decade working at Sun. Instead of needing to build applications for different hardware and operating systems, the promise of Java was more uniform and streamlined development.

The ability to build once and run anywhere, however, is not uniform across the computing landscape in 2022. It's a situation that Intel is looking to help change, at least when it comes to accelerated computing and the use of GPUs.

"Today in the accelerated computing and GPU world, you can use CUDA and then you can only run on an Nvidia GPU, or you can go use AMD's CUDA equivalent running on an AMD GPU," Lavender told VentureBeat. "You can't use CUDA to program an Intel GPU, so what do you use?"

That's where Intel is contributing heavily to the open-source SYCL specification (SYCL is pronounced like "sickle"), which aims to do for GPU and accelerated computing what Java did decades ago for application development. Intel's investment in SYCL is not entirely selfless and isn't just about supporting an open-source effort; it's also about helping to steer more development toward its recently released consumer and data center GPUs.

SYCL is an approach for data parallel programming in the C++ language and, according to Lavender, it looks a lot like CUDA.

To date, SYCL development has been managed by the Khronos Group, which is a multi-stakeholder organization that is helping to build out standards for parallel computing, virtual reality and 3D graphics. On June 1, Intel acquired Scottish development firm Codeplay Software, which is one of the leading contributors to the SYCL specification.

"We should have an open programming language with extensions to C++ that are being standardized, that can run on Intel, AMD and Nvidia GPUs without changing your code," Lavender said.

Lavender is also a realist, and he knows that there is a lot of code already written specifically for CUDA. That's why Intel developers built an open-source tool called SYCLomatic, which aims to migrate CUDA code into SYCL. Lavender claimed that SYCLomatic today has coverage for approximately 95% of all the functionality that is present in CUDA. He noted that the 5% SYCLomatic doesn't cover are capabilities that are specific to Nvidia hardware.

With SYCL, Lavender said that there are code libraries that developers can use that are device independent. The way that works is that code is written by a developer once, and then SYCL can compile the code to work with whatever architecture is needed, be it for an Nvidia, AMD or Intel GPU.
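For readers unfamiliar with what SYCL code looks like, here is a minimal vector-add written against the SYCL 2020 API. It is not taken from Intel's materials, just a textbook-style sketch: the same C++ source can be compiled (for instance with Intel's DPC++ compiler, "icpx -fsycl vadd.cpp") and dispatched to whichever device the runtime selects, whereas an equivalent CUDA kernel would be tied to Nvidia GPUs.

```cpp
// Minimal SYCL 2020 vector addition: the same source runs on any device
// (Intel, AMD or Nvidia GPU, or the host CPU) supported by the installed
// SYCL implementation.
#include <sycl/sycl.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // the runtime picks a default device
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // buffers manage host<->device data movement
        sycl::buffer<float> A(a.data(), sycl::range<1>(N));
        sycl::buffer<float> B(b.data(), sycl::range<1>(N));
        sycl::buffer<float> C(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
            // The kernel body is plain C++, comparable to a CUDA __global__
            // function but not tied to one vendor's GPU.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];
            });
        });
    }   // buffer destructors wait for the kernel and copy results back

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```

Migration tools such as SYCLomatic aim to produce code in roughly this shape from existing CUDA sources, which is why most of the porting effort Lavender describes can be automated.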

Looking forward, Lavender said that he's hopeful that SYCL can become a Linux Foundation project, to further enable participation and growth of the open-source effort. Intel and Nvidia are both members of the Linux Foundation supporting multiple efforts. Among the projects where Intel and Nvidia are both members today is the Open Programmable Infrastructure (OPI) project, which is all about providing an open standard for infrastructure processing units (IPUs) and data processing units (DPUs).

"We should have write once, run everywhere for accelerated computing, and then let the market decide which GPU they want to use, and level the playing field," Lavender said.


View original post here:
Intel CTO wants developers to build once, then run on any GPU - VentureBeat

The Week in Security: CISA alerts on open source tool, SBOMs are just the first step – Security Boulevard

Welcome to the latest edition of The Week in Security, which brings you the newest headlines from both the world and our team across the full stack of security: application security, cybersecurity, and beyond. This week: APT groups targeted a defense industrial base sector organization, why SBOMs are a great first step, and more.

A new U.S. Cybersecurity and Infrastructure Security Agency (CISA) Alert (AA22-277A) shares that advanced persistent threat (APT) activity was found on the enterprise network of a U.S. Defense Industrial Base (DIB) sector organization. The known activity took place from November 2021 to January 2022, and was tracked by CISA with the help of a trusted third-party organization. CISA asserts that multiple APT groups gained access to this network, some over a long period of time. The Alert also reports that these actors used an open-source toolkit called Impacket to expand their foothold in the network and compromise it.

The effort on behalf of CISA and the trusted third party was an incident response engagement. The responders found that certain APT groups gained access to the organization's Microsoft Exchange Server in early 2021. However, they have not yet determined how these groups gained access to the network. Once granted access, the APT groups used a compromised administrator account, allowing them to access the network's EWS Application Programming Interface (API) twice, while connected to a VPN.

After accessing the EWS API, the threat actors used the Windows Command Shell over a three-day period, allowing them to interact with the organization's network, including collecting sensitive data. It was in this same period that the APT groups used Impacket to move laterally across systems. The Alert describes Impacket as a Python toolkit for programmatically constructing and manipulating network protocols on another system.

The response team believes that the APT groups were able to maintain access to the network until January 2022 through the use of legitimate login credentials.

CISA's Alert lists tactics, techniques, and procedures (TTPs) as well as indicators of compromise (IoCs) related to this incident. CISA, along with the FBI and NSA, advises that any DIB sector or critical infrastructure organization take the necessary precautions listed in the Alert in order to manage this cyber threat.

Here are the stories we're paying attention to this week:

Having a Bill of Materials is nothing new in the traditional Supply Chain Management (SCM) process, so it shouldn't be any surprise that applying this same concept to software makes perfect sense.

The Egypt Financial Cybersecurity Framework brings the most common and well-respected frameworks together into one unified source. Rather than attempting to cross-reference all the frameworks to each other, the CBE chooses the best practices from each, creating a new document for use in the financial sector.

The Federal Bureau of Investigation (FBI) and CISA have published a joint public service announcement. It assesses that malicious cyber activity aiming to compromise election infrastructure is unlikely to result in large-scale disruptions or prevent voting.

Researchers have disclosed details about a now-patched high-severity security flaw in Packagist, a PHP software package repository, that could have been exploited to mount software supply chain attacks.

SaaS security provider Legit Security today announced the launch of Legitify, a new open-source security tool designed to help enterprises secure their GitHub implementations. The solution will enable security and DevOps teams to scan GitHub configurations at scale and ensure the integrity of open-source software.

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Carolynn van Arsdale. Read the original post at: https://blog.reversinglabs.com/blog/the-week-in-security-cisa-alerts-on-open-source-tool-sboms-are-just-the-first-step

Link:
The Week in Security: CISA alerts on open source tool, SBOMs are just the first step - Security Boulevard

What we can learn from the top DevOps articles of 2022 – TechTarget

Since its inception in 2007, DevOps has been shaking up the way IT teams handle operations -- and 2022 has been no different.

DevOps combines development and operations to promote collaboration and communication. In turn, it can streamline processes and enable companies to keep up with market and customer demands. These benefits keep DevOps relevant and on the rise. Around 77% of companies use a DevOps model to streamline software deployment, Google reported. The market is expected to grow from $6,079.38 million to $14,554.23 million by 2027, according to 360iResearch's DevOps market forecast.

The majority of DevOps articles TechTarget published in 2022 focus on the knowledge and skills needed to be a successful DevOps engineer and have a competitive advantage in the job market. IT professionals also need strategies and tools to promote individual and organizational growth -- whether their organizations are just beginning the adoption journey or refining an established DevOps environment.

As we move into fall 2022, let's review the top 10 DevOps articles from the last six months that dive into everything from DevOps runbooks and preferred programming languages to adoption and desirable skills.

Every organization expects different skills and education levels from its DevOps engineers, but some qualifications are universal. In this article, former associate site editor Alyssa Fallon interviewed Matthew Grasberger, a DevOps engineer at Imperfect Foods and TechTarget contributor, and Mirco Hering, a global transformation lead at Accenture, about the top skills DevOps admins should have and their key responsibilities. Both found that DevOps engineers need cloud experience and should be acquainted with cloud-native platforms, such as Microsoft Azure or Google Cloud. In addition to tool knowledge, they must be adaptable and able to balance regular responsibilities with unexpected tasks.

In a DevOps engineering role, interpersonal skills -- such as listening, curiosity and communication -- are just as important as technical skills. Collaboration and teamwork drive DevOps projects forward because they promote creativity and problem-solving. In an interview conducted for the article, Kyle Fossum, senior DevOps engineer at The Predictive Index, said, "I've heard DevOps defined as people, processes and technology -- in that order, so people come first." Review this article from March to determine what soft skills look like for DevOps engineers and how to translate them onto a resume.

DevOps runbooks focus on a single workflow process and address issues IT teams encounter. With a proper runbook, IT admins can create repeatable processes that help eliminate avoidable issues. In his article on runbook development, analyst Kurt Marko explained why you must record each task step and place it in a runbook before deploying automation. Explore the who, what, when, where and why of runbooks and how to automate them.
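As a loose illustration of that record-then-automate approach (the service name and health check below are hypothetical, not drawn from Marko's article), each documented step can be captured as a small function and executed in order, so the runbook and the automation describe the same procedure:

    # Hypothetical sketch: recorded runbook steps as functions, run in order with logging.
    import logging
    import subprocess

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("runbook")

    def restart_service():
        """Step 1: restart the (placeholder) application service."""
        subprocess.run(["systemctl", "restart", "example-app"], check=True)

    def verify_health():
        """Step 2: confirm the (placeholder) health endpoint responds."""
        subprocess.run(["curl", "-fsS", "http://localhost:8080/health"], check=True)

    for step in (restart_service, verify_health):
        log.info("Running: %s", step.__doc__)
        step()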

Every complex procedure should have a runbook that describes detailed steps for continual consistency and accuracy across an organization. Creating a template strengthens existing runbooks and eliminates confusion by explaining what is going on and why. DevOps runbooks provide teams with clear DevOps process descriptions and detail what each process accomplishes. Tom Nolle, president of CIMI Corporation, covered what organizations should include in a runbook template, rules for the structure and how to test it before implementation.
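A minimal sketch of the kinds of fields such a template might capture follows; the field names and the sample procedure are generic assumptions, not Nolle's actual template.

    # Hypothetical runbook template fields, expressed as a simple dataclass.
    from dataclasses import dataclass, field

    @dataclass
    class RunbookTemplate:
        title: str
        purpose: str                       # what the procedure accomplishes and why
        trigger: str                       # the alert or event that starts it
        owner: str                         # team or on-call role responsible
        steps: list[str] = field(default_factory=list)         # ordered, detailed actions
        verification: list[str] = field(default_factory=list)  # checks that confirm success
        rollback: list[str] = field(default_factory=list)      # how to undo a failed change

    example = RunbookTemplate(
        title="Restart example-app after a failed health check",
        purpose="Restore service availability",
        trigger="Health-check alert from monitoring",
        owner="Platform on-call",
        steps=["Restart the service", "Confirm the health endpoint responds"],
        verification=["Health endpoint returns 200", "Error rate returns to baseline"],
        rollback=["Roll back to the previous release if restarts keep failing"],
    )

Keeping verification and rollback as first-class fields is the design choice in this sketch: a runbook that only lists actions, with no way to confirm success or back out, tends to create exactly the confusion a template is meant to eliminate.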

To keep on top of the variety of tasks DevOps requires, DevOps engineers must have some coding knowledge -- for example, using programming skills to implement CI/CD and infrastructure as code. Drawing on his experience as a DevOps engineer, Grasberger unpacked which languages are the most useful and shared how best to improve your skills -- through practice, practice and more practice.

The programming language Go, commonly referred to as Golang, can be a good fit when speed, concurrency and developer experience are top priorities. This strongly typed language makes concurrent code easier to write and read than it is in a language like JavaScript. Go's features also include readable code, extensive documentation and a command-line tool. This tutorial by Grasberger unpacks Go's benefits and teaches readers how to get started.

DevOps architect and engineer roles seem similar at first glance, but they differ greatly. In the simplest terms, DevOps architects create the framework and engineers work to fill it in. An organization needs a DevOps architect if it already has software or enterprise architects. If there is a DevOps team of any kind, it needs engineers. In this article, Nolle explained where the architect and engineer roles diverge and come together in areas such as cloud knowledge and experience levels.

Many factors come into play when choosing an orchestration tool, but organizations can turn to open source options to reduce costs. Orchestration tools coordinate all the automated tasks necessary for a deployment; once DevOps teams implement automation, they can integrate more DevOps processes. When choosing the right tool, keep an organization's size and the extent of its DevOps capabilities in mind. In this comparison piece, analyst Kerry Doyle dove into detail about open source orchestration options, such as Rancher, HashiCorp Nomad, Jenkins and GitLab CI.

Implementing a DevOps framework does not guarantee success. Factors such as unclear definitions, deeply rooted silos, legacy commitments and missing actionable metrics can all contribute to a lackluster and inconsistent DevOps adoption process. For DevOps success, TechTarget senior technology editor Stephen Bigelow recommended ways to set reachable DevOps goals and six steps for a smooth adoption.

Self-service portals standardize build tools, technology, configurations, infrastructure and design patterns across an organization through a centralized dashboard. In this article, Doyle laid out the primary benefits of working with a DevOps self-service portal, its key elements and how best to prepare for adoption.

Read the rest here:
What we can learn from the top DevOps articles of 2022 - TechTarget