Open Compute Project Foundation Turns in Strong 2019 with 40% YoY Growth – Yahoo Finance

Study shows 2019 non-board member spend topped $3.6 billion, projected to hit $11.88 billion by 2023

AUSTIN, Texas, April 22, 2020 (GLOBE NEWSWIRE) -- The Open Compute Project Foundation (OCP), a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure, announces today the results of an updated independent assessment of the Open Compute Project market impact by Omdia, a global technology research powerhouse. Omdia was established following the merger of the research division of Informa Tech (Ovum, Heavy Reading and Tractica) and the acquired IHS Markit technology research portfolio. The study, commissioned by OCP, analyzes the global adoption and impact of OCP-certified gear in the technology industry. With the uptick in network usage and data demands associated with COVID-19, this report reveals critical information to serve as a basis for the data center sector and beyond to adjust their compass amid uncertain economic times.

At the close of 2019, Omdia interviewed OCP-certified equipment vendors and end users to supplement its ongoing discussions within the data center ecosystem, from server, power, network and rack vendors to silicon suppliers and system integrators. Data obtained from these discussions were fed into Omdia's proprietary bottom-up models for estimating markets to determine the growth of 2019 non-board member adoption over the previous year. Participants shared how the market has changed over the last 12 months regarding technology demands, shifting compute to the edge, a circular economy, distribution models and more.

Among the preliminary findings: non-board member spend on OCP-certified gear topped $3.6 billion in 2019, up 40% year over year, and is projected to reach $11.88 billion by 2023.

"We are witnessing a paradigm shift in the data center industry as adoption acceleration continues," comments Rocky Bullock, CEO for the Open Compute Project Foundation. "We value the insights of the study to fuel our community to innovate, collaborate and improve the global ecosystem to support future growth."

"These results confirm the increased emphasis on solutions that we have undertaken," adds Bill Carter, CTO for the Open Compute Project Foundation. "Collaboration with open source software organizations and OCP solution providers is making open hardware easier to adopt and consume. The results also indicate that earlier enabling of a robust product offering and supply chain for edge products is broadening the reach of OCP products into new verticals."

"Once again, the market has shown healthy maturation with significant revenue growth," states Cliff Grossner, Ph.D., Executive Director Research & Technology Fellow at Omdia. "We have seen the early embers of an emerging supply chain with a circular economy, innovations driven by OCP certification and hyperscale operators showing greater importance over the past year."

Please join us at the 2020 OCP Virtual Summit on May 12-15 to learn the complete details of the findings, including how the market has matured and shifted from 2018 and the projected track of dynamic growth expected through 2023. Dr. Grossner and Mr. Vladimir Galabov will present the detailed findings at 1:30 p.m. PST on Wednesday, May 13. Dr. Grossner will also present "Delivering the Open Edge: New Opportunities for Collaboration" on Friday, May 15, during the OCP Future Technologies Symposium. The Summit and Symposium are free to attend.

About Open Compute Project Foundation (OCP)

The Open Compute Project Foundation (OCP) was initiated in 2011 with a mission to apply the benefits of open source and open collaboration to hardware and rapidly increase the pace of innovation in, near and around the data center's networking equipment, general purpose and GPU servers, storage devices and appliances, and scalable rack designs. OCP's collaboration model is being applied beyond the data center, helping to advance the telecom industry & EDGE infrastructure. http://www.opencompute.org

About Omdia

Omdia is a global technology research powerhouse, established following the merger of the research division of Informa Tech (Ovum, Heavy Reading and Tractica) and the acquired IHS Markit technology research portfolio. We combine the expertise of more than 400 analysts across the entire technology spectrum, covering 150 markets, and publish over 3,000 research reports annually, reaching over 14,000 subscribers.


Continue reading here:
Open Compute Project Foundation Turns in Strong 2019 with 40% YoY Growth - Yahoo Finance

COVID-19 contact tracing: The tricky balance between privacy and relief efforts – TechRepublic

As more governments consider the use of contact tracing apps to prevent the spread of coronavirus, researchers say privacy will have to be at the forefront of efforts in order for civilians to use it.


Governments around the world have begun exploring the use of contact tracing apps as a key means for tracking and reducing the spread of coronavirus. Contact tracing is a method for warning people when they have been exposed to someone who has contracted a serious illness like COVID-19. This method has previously been used to better understand and limit the spread of HIV, meningitis, and other diseases.

It also allows governments or health authorities to identify individuals who may have been in close contact with a confirmed COVID-19 case.

SEE: Coronavirus: Critical IT policies and tools every business needs (TechRepublic Premium)

"Traditionally, contact tracing required a very labour-intensive process," Gnana Bharathy, a systems and modelling lecturer at the University of Technology Sydney (UTS), told TechRepublic. "But with technology such as smartphones, contact tracing has become more efficient and easier to perform."

"Now, it is possible to do [contact tracing] through mobile phone data -- that is assuming people seldom part with their phone. Therefore, you could take the signals and then see which signals are coming into contact with devices owned by people who have contracted COVID-19," Bharathy explained.

"It's essentially the identification of the signals that actually come into contact with any non-risk signal. This particular view is quite helpful from a health perspective because this is one of those illnesses that can even finish asymptomatically."

SEE: Coronavirus having major effect on tech industry beyond supply chain delays (free PDF) (TechRepublic)

Rachael Falk, CEO of the Cyber Security Cooperative Research Centre (CSCRC), said that in a serious public health crisis like the COVID-19 pandemic, digital contact tracing is helpful because positive cases need to be identified quickly, particularly if the patients involved are unable to communicate with those they came into contact with.

Singapore's TraceTogether application

Image: Ministry of Health Singapore

"Having an asset record a timestamp and a period of time is going to be better than our memory I think that's a good thing and particularly for highly infectious public health issues," Falk said.

In Singapore, the government has already released a contact tracing app, called TraceTogether, which traces the location and movements of individuals via mobile phones. Australia is currently working on a similar app that is based on the Singaporean app's source code.

Australian Prime Minister Scott Morrison said last week that using location information may be necessary to save lives and livelihoods.

"If that tool is going to help them do that, then this may be one of the sacrifices we have to make," Morrison said last week.

On the industry side, Google and Apple have also been working on contact tracing -- with their aim being to provide governments and researchers with as much information as possible to understand the spread patterns of coronavirus.

Apple and Google's iteration of contact tracing entails creating application programming interfaces (APIs) that would help public health authorities design apps with contact tracing capabilities. These apps would then be available in both Apple's App Store and the Google Play store.

Belal Alsinglawi, an AI modeller for COVID-19 and data science lecturer at Western Sydney University, said Google and Apple's contact tracing efforts would save time and resources in developing applications to track the virus' spread, as well as give a bigger data picture regarding the pandemic's impact.

SEE: COVID-19: How artificial intelligence can help companies plan for the future (TechRepublic)

He explained that algorithmic forecasts of the COVID-19 pandemic -- especially AI-powered ones -- could be much more accurate, but there has been a lack of data available. Due to this, most models used by governments for tracking and forecasting have not relied on AI.

"[AI predicative methods] involve finding trends in past data, and using these insights to forecast future events and there's currently too few Australian cases to generate such a forecast for the country, with this being the same for many other countries too," Alsinglawi said.

"Once we have enough data inputs about COVID-19, it will facilitate the mission for AI researchers to develop a predictive framework for the future epidemic events, and as a result, more lives being saved."

The sensitive nature of medical information collected through contact tracing means that ensuring its privacy is particularly important. Plans to roll out contact tracing apps have therefore not been without their fair share of critics, who have expressed concern that these apps do not have enough substance on the privacy and data protection fronts.

Addressing these concerns, Australia's Minister for Government Services Stuart Robert said on Monday that as long as contact tracing apps do not have geolocation, it would not be known what was happening when a block of 15 minutes or more has been logged between two people.

"All we care about is who the person was next to because there's no geolocation, no one knows where the young person was or what they were doing, there's no surveillance at all. It's simply who they were near from a health point of view," Stuart said.

A day later, Australian Prime Minister Scott Morrison said Australia's upcoming contact tracing app would put data into an encrypted national store that is only accessible by the states and territories' "health detectives".

"The Commonwealth can't access the data. No government agency at the Commonwealth level, not the tax office, not government services, not Centrelink, not Home Affairs, not Department of Education, not childcare -- the Commonwealth will have no access to that data," Morrison said.

SEE: COVID-19: How cell phones are helping to track future cases (TechRepublic)

Falk, who is currently reviewing Australia's COVID-19 tracing app alongside the Digital Transformation Agency (DTA), backed the government's comments that Australia's iteration of contact tracing would be reasonably secure and private.

"The moment when you upload the app and you enter that data, that data is actually sitting in your handset, so nothing is going anywhere at that stage at all," she said.

"That data is sitting on your hands, along with the data of other people you might come into contact with for the period of time to satisfy a digital note being named in that app without a contact. But at that stage, that data is not going anywhere," she added.

When asked about the privacy concerns surrounding data temporarily stored inside the encrypted national store, however, Falk declined to comment, as testing of the app was still in progress.

Falk did acknowledge though, that any participation in using contact tracing apps should be voluntary, with Australians being allowed to take a conservative approach to protecting their online security and privacy if they wished to do so.

"From my perspective, I'll be [using TraceTogether], but it's opt-in and I recognise it is up to the individual's approach. I think it's right that Australians take a conservative approach to protecting their online security and privacy," Falk said.

The Director of the University of Western Australia Centre for Software and Security Practice, David Glance, took a different perspective, telling TechRepublic that while it was reassuring to see the CSCRC and DTA perform security tests on Australia's contact tracing app, there will always be privacy concerns due to the role health authorities have in collecting and using the information gathered.

"You've got to trust the implementation of infrastructure performed by the health authorities, which I don't. They're notoriously bad at running infrastructure, especially for something like this. And so, I think that's the fundamental problem," Glance said.

Previous data projects created by Australia's health authorities, like My Health Record, have been lambasted for lacking security measures.

Glance added that Singapore's version of TraceTogether, which Australia's contact tracing will be based upon, uses keys that are generated by its health authority and "potentially gives access to the data on your phones".

"They have the IDs of people who they are tracing so that's great because that enables your authorities to quickly work out who's been exposed and how to get in contact with the people who have been affected.

"But that means, essentially, you're relying on the government producing a version of the application that doesn't do other things we have legislation in Australia that allows people to do that so what's to say that the government doesn't use this app, once it's on your phone, to target you for keyboard login or to infiltrate your messaging or anything else?"

With governments, such as Australia, already set on implementing contact tracing, Dr Mahmoud Elkhodr at Central Queensland University noted that while there will never be a "perfect solution" for privacy, there are various mechanisms that could be applied to improve the balance between privacy concerns and the public benefit.

"The right to be forgotten also known as the 'right to erasure' is an EU rule which gives EU citizens the right to request deletion of their personal data from Internet companies such as Google. In Australia, this law is non-existent. So perhaps the first measure the government should take to preserve the privacy of Australians when using the TraceTogether app is to draft legislations and implement features that allow them to simply delete their location information from the app," Elkhodr said.

Elkhodr also proposed that governments should make their contact tracing applications open source if they are serious about addressing the privacy concerns.

"This enables transparency and provides the public with much needed assurances. The government should also ensure that the application will not be used outside its intended scope such as penalising non-essential travel," he said.

SEE: COVID-19 demonstrates the need for disaster recovery and business continuity plans (TechRepublic Premium)

Currently, Singapore's TraceTogether app is not open source software and is not subject to audit or oversight; however, the island nation committed last month to open-sourcing it.

Regardless of what security measures are put in place however, Elkhodr said the effectiveness of contact tracing apps would simply come down to whether people trust putting their data in the hands of government.

Glance, meanwhile, said that contact tracing is not a "silver bullet" even if it is successful.

"Testing is obviously the key to controlling disease and it has to be widespread testing.

"The fundamental issue is really, again, going back to what do you do next? Apps are not going to help us with that -- they're not going to help us if we decide to open the border. There is going to be a certain amount of infection. The key to that is testing," he said.

At the time of writing, the World Health Organization reported that there have been over 2.4 million confirmed cases, with over 163,000 fatalities as a result of the virus.



See the rest here:
COVID-19 contact tracing: The tricky balance between privacy and relief efforts - TechRepublic

Typosquatting RubyGems laced with Bitcoin-nabbing malware have been downloaded thousands of times – The Register

Malware in software packages means even trusted repositories are not always safe

A researcher has uncovered malicious packages in the RubyGems repository, one of which was downloaded more than 2,000 times.

RubyGems, the standard package manager for Ruby, was studied by threat analyst Tomislav Maljic at ReversingLabs, who highlighted research based on analysing packages submitted to the repository that have similar names to existing popular gems: possible cases of "typosquatting," where perpetrators name a package using a common misspelling, or substitute a character, to mislead developers into installing it by mistake.
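The name-similarity screening this research describes can be approximated with nothing more than the Python standard library. In this sketch the gem lists and the 0.9 similarity threshold are made up for illustration:

    import difflib

    popular_gems = ["atlas_client", "rest-client", "nokogiri"]
    newly_submitted = ["atlas-client", "rest-cIient", "nokogiri-utils"]

    def likely_typosquats(candidates, legitimate, threshold=0.9):
        """Flag names suspiciously close to, but not identical to, known gems."""
        flagged = []
        for name in candidates:
            for gem in legitimate:
                ratio = difflib.SequenceMatcher(None, name, gem).ratio()
                if name != gem and ratio >= threshold:
                    flagged.append((name, gem, round(ratio, 2)))
        return flagged

    # "atlas-client" and "rest-cIient" are flagged; "nokogiri-utils" is not.
    print(likely_typosquats(newly_submitted, popular_gems))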

The research found over 400 suspect gems, including "atlas-client", which was downloaded 2,100 times by developers likely looking for the legitimate gem named atlas_client. The rogue gems contained Windows executables renamed with a .png extension, along with a Ruby script that renamed and ran the file. The malware then created a new VBScript file, along with an autorun registry key to run it on startup: old-school malware, and nothing too technical.

"It starts an infinite loop where it captures the user's clipboard data... the script then checks if the clipboard data matches the format of a cryptocurrency wallet address," Maljic reported. "If it does, it replaces the address with an attacker-controlled one."

In truth, the malware is not very advanced. It is looking for a Ruby developer on Windows whose system is also used for Bitcoin transactions. "A rare breed indeed," remarked Maljic. "At the time of writing this blog, seemingly no transactions were made for this wallet."

He added that "the RubyGems security team has been contacted, and all packages from reported users have been removed from the repository".

The bigger concern is how easy it is to get malware into one of the most widely used package managers. Modern software development is reliant on packages downloaded from repositories, not only RubyGems but also NPM (JavaScript libraries), NuGet (.NET packages), Maven (Java), Cargo (Rust), PEAR for PHP, PyPI (Python) and many others. Last year the same researcher reported on an NPM package that steals passwords. In 2018, malicious code was found in the NPM package event-stream, which had been downloaded nearly 8 million times, according to open-source security specialist Snyk.

In February, the Linux Foundation published a white paper [PDF] on the security of the open-source software supply chain, concluding: "Software repositories, package managers, and vulnerability databases are all necessary components of the software supply chain, as are the developers and end users who leverage them. Unless and until the weaknesses inherent within their current designs and procedures are addressed, however, they will continue to expose the companies and developers who rely upon them to significant risk."

This includes not only malware, but also programming errors that introduce vulnerabilities.

The foundation undertook to convene "a meeting of global technology leaders working across application and product security groups in order to design collective solutions to address these problems."

Tools exist to counter threats, including commercial software projects like OWASP Dependency Track, and the efforts of repositories to improve security. "We'll integrate GitHub and npm to improve the security of the open source software supply chain," GitHub CEO Nat Friedman said last week about the acquisition of NPM.
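At its simplest, the kind of check a dependency-audit tool performs is a lookup of installed package versions against an advisory feed. A toy Python sketch with a made-up advisory list (real tools such as Dependency Track use curated vulnerability databases):

    # Hypothetical advisory feed: package name -> known-bad versions.
    advisories = {
        "event-stream": ["3.3.6"],
        "rest-client": ["1.6.13"],
    }

    installed = {"event-stream": "3.3.6", "lodash": "4.17.15"}

    for pkg, version in installed.items():
        if version in advisories.get(pkg, []):
            print(f"WARNING: {pkg} {version} has a published advisory")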

It is a tricky problem, and it is not only when writing code that developers should be careful what they type.


More:
Typosquatting RubyGems laced with Bitcoin-nabbing malware have been downloaded thousands of times - The Register

How DBS is reaping the dividends of digital transformation – ComputerWeekly.com

When some DBS Bank employees tested positive for Covid-19 early this year, the bank was able to use data from office access cards and Microsoft Office 365 calendars to conduct contact tracing within days. And within weeks, it rolled out an application that enables business customers to submit forms without visiting a branch.

The speed and agility with which DBS, Southeast Asia's biggest lender, responded would not have been possible without the investments it made a decade ago to modernise its IT infrastructure, applications and business processes to spearhead digital transformation.

In an interview with Computer Weekly, DBS's group CIO Jimmy Ng talks up the bank's technology initiatives amid the coronavirus outbreak and the next steps in its digital transformation journey, while VMware's vice-president and managing director for Southeast Asia and Korea, Sanjay Deshmukh, explains how the software company is supporting DBS in that journey.

Can you give us a sense of how DBS is approaching technology to sharpen its competitive edge, especially with the entry of new digital banking players in markets across the region?

Jimmy Ng: I think we need to take a step back during times of crisis like the current coronavirus outbreak, because that will have great bearing on what's going to happen after we emerge from the crisis. I sense that over the next two years, digital banking is going to be the main channel through which people are going to transact. Covid-19 has been the main catalyst for people to adopt digital banking and that's going to accelerate digital adoption.

Right now, the crisis may seem like the worst of times because IT folks are working hard to keep the lights on, but it's also the best of times because it has brought forth the value of technology and the investments we've made over the past decade to modernise our technology stack. But the transformation we've undertaken is not just in the way we've architected our infrastructure.

One of the biggest things we realised was the change in the mindset of our people. In the past five years, we've adopted agile and continuous integration and continuous delivery (CI/CD), enabling us to roll out releases more than 10 times faster than before, and it played out nicely during this crisis.

For example, when you have a crisis, customers do not want to come to the office to hand you trade documents because of social distancing rules. So, within a couple of weeks, we implemented an application to enable customers to submit their documents digitally. We've been able to move very quickly and nimbly because of our processes and the infrastructure we've put in place.

Another area where we've put in a lot of effort is the use of data to make decisions. During the outbreak, when some of our staff were infected, we were able to use data from office access cards and Office 365 calendars to do contact tracing within days.

We also used video analytics and internet of things (IoT) to understand how we can space out our people so they can maintain a social distance. Our agility and modern infrastructure enabled us to ingest the data very quickly and have paid off in a rapidly changing environment. Amid all of that, we were still able to deliver on existing initiatives.

As for the new digital banking players, they will be formidable competitors, but we think we are in a very good position. In fact, we're a digital bank already. We're going to double up on our efforts and continue to provide digital offerings. I believe that this is going to be the battlefield, especially after Covid-19.

As we all know, digital transformation is an ongoing effort. What's at the top of your mind at this point in time? How are you thinking two or three steps forward from a CIO's perspective?

Ng: As we are battling Covid-19, we've been looking into technologies that we should focus on moving forward. For me, 5G is going to be a main infrastructure that will make it easier for us to work from home and extend our services, for example, mobile ATMs in more locations, amid a pandemic or emergency.


Another area is IoT and video analytics, which will allow us to monitor crowds at ATMs and branches. And there will be more use cases that combine 5G, IoT and blockchain to enable our customers to manage their business.

Finally, let's not forget artificial intelligence (AI) and machine learning (ML), because if we have 5G and IoT coupled with AI, ML and data, we'll get a formidable suite of tools to enable our customers' businesses and support our internal operations.

At the heart of the initiatives you talked about is the application and infrastructure stack. What changes or tweaks do you need to make in your technology stack to support those initiatives?

Ng: Our modern applications and stack form the basis of our technology foundation. Over the past five years, we have moved very aggressively into a virtual private cloud (VPC) environment. Very few banks have gone down that route, which has brought us a lot of benefits and enabled our staff to become very conversant in the technology.

The next step for us is containerisation and moving into a hybrid cloud environment to achieve productivity gains and scale. That includes the greater use of public cloud services. There will be cases where public cloud providers have better capabilities and scale from an infrastructure perspective, so the ability for us to adopt those capabilities, including native cloud services, will be key.

I understand that DBS is still running some legacy applications. How are you managing their transition to modern apps?

Ng: We have a dual-pronged strategy to tackle that. First, we are containerising legacy applications, which will give us the ability to adopt public cloud at scale. Second, we are chipping away at our core banking system, so that it becomes a very small part of the entire infrastructure.

And this includes your mainframe applications? Are they going to be eliminated at some point?

Ng: I think mainframe technology has changed a lot and it's getting modernised. Our strategy is still to chip it away, with only a small part of it remaining through containerisation. That said, I don't think mainframe technology is going to come to a standstill.

The fact that IBM has bought Red Hat shows that there's going to be some progress in how they're going to provide a migration path or enable the use of mainframe hardware in more productive ways. So, it remains to be seen, and we're keeping our options open depending on how things pan out.

You rightly mentioned that this space is still evolving. How do you then hedge against the risks that you might be taking on with the use of multiple technologies and platforms, such as VMware and Red Hat OpenShift in the Kubernetes space?

Ng: We have adopted a multi-pronged approach even as we use both Cloud Foundry and OpenShift as our application platforms. We think the universe is big enough for a couple of players to co-exist. A multi-supplier strategy has always been part of our approach because technology moves so fast.

There's no such thing as winners or losers, because at some point a particular technology will become dominant, and the rest will catch up eventually. So, using a diverse set of technologies will give us the best innovation from leading suppliers, and as others improve, we can still reap the benefits of their improvements. Even for cloud, we're not going to go with just one cloud provider.

How are you dealing with the potential complexities that could arise?

Ng: It'll be a more complex environment and probably not as efficient, because we have to maintain two or more technologies. However, striving for efficiency needs to be balanced against technology obsolescence. It's not an efficiency play; rather, it's a resilience play so that we can move forward as technology evolves.

We are constantly looking at new technologies and players we can place bets on. For example, we adopted MariaDB in the initial days and have scaled up its use, but we are also looking at new databases that are coming to market. As we scan the horizon, we will make some good choices and some bad choices, but hopefully more good ones than bad.

Sanjay Deshmukh: As customers like DBS go multi-cloud, they can choose AWS, Azure or Google. To solve the complexity of investing in multiple platforms, were giving them a common infrastructure fabric across multiple clouds, alleviating the need to train their employees on multiple technologies.

On the management side, Jimmy talked about chipping away old applications and putting them into containers. Let's say, for discussion's sake, they run some of these applications in different cloud environments and infrastructure. Our management capability offers a consistent operations framework, which means if I'm an administrator, I will have one screen that gives me the ability to manage applications that are running in my datacentre on VMware infrastructure or a public cloud. This will help to shield organisations from the complexity of taking a multi-cloud approach.


I'd also share a couple of things in terms of our relationship with DBS. Jimmy talked about their initiative that enables their customers to submit trade forms without visiting the branch. If you break that down at the technology level, there are two things needed to respond to that situation.

First is the agility in building the application to respond to the business need. That's one area where we've been partnering with DBS very closely, with our Pivotal technology that provides the agility to build software in days, not months.

The second aspect is when developers are trying to respond to these market needs, they need infrastructure. In a traditional bank, it takes months or more to make the infrastructure available. With our partnership with DBS, their developers are able to self-provision the infrastructure that they need within the same day and serve their customers quickly.

Can you elaborate on any cultural challenges that you are grappling with when it comes to getting everyone up to speed on what the bank needs to do to take things forward from a technology perspective?

Ng: We have been on this journey for the past 10 years and we are getting very comfortable with how we work, whether it is operating in squads or pulling together people from different departments to work in small teams and respond quickly to changing environments.

Second, our culture of using data to drive insights has been fully entrenched. I think what we need to do more is to get people to understand the application of AI and the use of data to help the business in even more productive ways.

The third area that weve been looking at over the past two years is what we call a platform construct that brings business and technology teams together as co-owners of a particular platform. They own the budget and make decisions on what they want to build or operate. Everyone has the same set of key performance indicators (KPIs), and the biggest win is the true alignment of business and technology.

How does the open source culture play a part in the culture you've just described?

Ng: We are looking at becoming an engineering company and we run very much like a technology company. If you look at the characteristics of the big technology giants, embracing open source is always one of them. We have built a library of digital assets internally, and we've been debating whether some of those assets should be made open source, because it's part and parcel of being an engineering company.

So, don't be surprised if one day some of the toolkits that we have built are put out as open source software. We've benefited a lot from open source, so it's also our responsibility to contribute to the open source community.

See the rest here:
How DBS is reaping the dividends of digital transformation - ComputerWeekly.com

The coronavirus contact tracing app won’t log your location, but it will reveal who you hang out with – The Conversation AU

The federal government has announced plans to introduce a contact tracing mobile app to help curb COVID-19's spread in Australia.

Read more: Explainer: what is contact tracing and how does it help limit the coronavirus spread?

However, rather than collecting location data directly from mobile operators, the proposed TraceTogether app will use Bluetooth technology to sense whether users who have voluntarily opted-in have come within nine metres of one another.

Contact tracing apps generally store 14-21 days of interaction data between participating devices to help monitor the spread of a disease. The tracking is usually done by government agencies. This form of health surveillance could help the Australian government respond to the coronavirus crisis by proactively placing confirmed and suspected cases in quarantine.

The TraceTogether app has been available in Singapore since March 20, and its reception there may help shed light on how the new tech will fare in Australia.

Read more: Privacy vs pandemic: government tracking of mobile phones could be a potent weapon against COVID-19

Internationally, contact tracing is being explored as a key means of containing the spread of COVID-19. The World Health Organization (WHO) identifies three basic steps to any form of contact tracing: contact identification, contact listing, and follow-up.

Contact identification records the mobile phone number and a random anonymised user ID. Contact listing includes a record of users who have come into close contact with a confirmed case, and notifies them of next steps such as self-isolation. Finally, follow-up entails frequent communication with contacts to monitor the emergence of any symptoms and test accordingly to confirm.

The TraceTogether app has been presented as a tool to protect individuals, families and society at large through a community data-driven approach. Details on proximity and contact duration are shared between devices that have the app installed. An estimated 17% of Singapore's population has done this.

In an effort to preserve privacy, the app's developers claim it retains proximity and duration details for 21 days, after which the oldest day's record is deleted and the latest day's data is added.
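That rolling window is easy to picture as a fixed-length queue that discards its oldest entry as each new day arrives. A minimal sketch in Python; the one-bucket-per-day layout is an assumption for illustration:

    from collections import deque

    RETENTION_DAYS = 21

    # Each element holds one day's proximity records; maxlen makes the
    # deque silently drop the oldest day whenever a new day is appended.
    history = deque(maxlen=RETENTION_DAYS)

    for day in range(25):
        history.append([f"encounter-log-day-{day}"])

    print(len(history))   # 21 -- days 0 through 3 have been discarded
    print(history[0])     # ['encounter-log-day-4']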

Read more: Tracking your location and targeted texts: how sharing your data could help in New Zealand's level 4 lockdown

TraceTogether supposedly doesn't collect users' location data, thereby mitigating concerns about location privacy usually linked to such apps. But proximity and duration information can reveal a great deal about a user's relative distance, time and duration of contact. A Bluetooth-based app may not know where you are on Earth's surface, but it can accurately infer your location when a variety of data is brought together.

The introduction of a contact tracing app in Australia will allow health authorities to alert community members who have been in contact with a confirmed case of COVID-19.

However, as downloading the app is voluntary, its effectiveness relies on an uptake from a certain percentage of Australians - specifically 40%, according to an ABC report.

But this proposed model overlooks several factors. First, it doesn't account for accessibility by vulnerable individuals who may not own or be able to operate a smartphone, potentially including the elderly or those living with cognitive impairment. Also, it's presently unclear whether privacy and security issues have been or will be integrated into the functional design of the system when used in Australia.

This contact tracing model is also not open source software, and as such is not subject to audit or oversight. As it has currently been deployed in Singapore, it also places a government authority in control of the transfer of valuable contact and connection details. The question is now how these systems will stack up against corporate implementations like that being proposed by Google and Apple.

Also, those who criticise contact tracing point out that the technology is after the fact, when it is too late, rather than preventive in nature, although it might act to lower transmission rates. Some research has proposed a more preemptive approach: location intelligence, implemented by responsible artificial intelligence, to predict (and respond to) how an outbreak might play out.

Others argue that if we're all self-isolating, there should be no need for unproven technology, and that attention may instead be focused on digital immunity certificates, allowing some people to roam while others do not.

And with apps created to respond to particular situations, there's always the question: who owns the data? A pandemic-tracing app would need to have a limited lifetime, even if the user forgets to uninstall the COVID-19 app after victory has been declared over the pandemic. It must not become the de facto operational scenario; that would have major societal ramifications.

In the end, it may simply come down to trust. Do Australians trust their data in the hands of the government? The answer might well be no, but do we have any other choice?

Or for that matter what about data in the hands of corporations? Time and time again, government and corporates have failed to conduct adequate impact assessments, have been in breach of their own laws, regulations, policies and principles, have systems at scale that have suffered from scope and function creep, and have used data retrospectively in ways that were never intended. But is this the time for technology in the public interest to proliferate through the adoption of emerging technologies?

No one fears tech for good. But we must not relax fundamental requirements of privacy, strategies for maintaining anonymity, the encryption of data, and preventing our information from landing in the wrong hands. We need to ask ourselves, can we do better and what provisions are in place to maintain our civil liberties while at the same time remaining secure and safe?

Read more:
The coronavirus contact tracing app won't log your location, but it will reveal who you hang out with - The Conversation AU

10 developer skills on the rise and five on the decline – TechCentral.ie


Here's how to ensure your programming chops stay sharp



Technology is constantly evolving and so, too, are the developer skills employers look for to make the most of what is emerging and what is solidifying its place in the enterprise.

As companies dive deeper into digital transformations and pivot to data-driven cultures, tech disciplines such as AI, machine learning, internet of things (IoT) and IT automation are driving organisations' technology strategies and boosting demand for skills with tools, such as Docker, Ansible and Azure, that will help companies innovate and stay competitive in rapidly changing markets.

"What we're seeing is companies developing internal skill maps within their developer organisations so they can see what skills they have, and where they need to grow," says Vivek Ravisankar, CEO and co-founder of HackerRank. "They're building these competency frameworks to find their skill gaps and then put in place training and education to close those."

Understanding which disciplines and skills are up-and-coming and which are fading can help both companies and developers ensure they have the right skills and knowledge to succeed. And what better way to find that out than to mine developer job postings.

Indeed.com analysed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favour. Such research has helped identify cutting-edge skills over the past five years, with some previous years' risers now well established, thanks to explosive growth.

Docker, for one, has risen more than 4,000% in the past five years and was listed in more than 5% of all US tech jobs in 2019. IoT has shot up nearly 2,000% in the past half-decade, with Ansible (an IT automation, configuration management and deployment tool) and Kafka (a tool for building real-time data pipelines and streaming apps) showing similarly strong growth. And, of course, the rise of data science has also since cemented high demand for a range of skills, including artificial intelligence, machine learning, and data analysis.

Developers looking to add new skills to their repertoire should pay close attention to the most recent upticks in skills demand that Indeed identified from September 2018 to September 2019 and those falling out of favour as outlined below. Each skill is accompanied by average annual salary information for developers who possess these skills, according to PayScale.com.

PyTorch is an open-source machine-learning library written in Python, C++ and CUDA. It is used for applications such as computer vision and natural language processing. While primarily developed by Facebook's AI Research Lab, it is offered free under the modified BSD license.

Rate of growth, 2018-2019: +138%

Average salary: $118,000
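For readers who have not used the library, a minimal, self-contained example of what PyTorch code looks like (a toy forward pass, not tied to any particular application):

    import torch
    import torch.nn as nn

    # A tiny feed-forward network: 4 input features, 2 output classes.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    x = torch.randn(1, 4)                 # one random input sample
    logits = model(x)                     # forward pass
    probs = torch.softmax(logits, dim=1)  # two probabilities summing to 1
    print(probs)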

GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries in existing data sets. GraphQL was originally developed for internal use by Facebook but was released for public use in 2015 under the GraphQL Foundation, hosted by the non-profit Linux Foundation. GraphQL supports reading, writing and subscribing to changes in data, and servers are available for multiple languages, including Haskell, JavaScript, Perl, Python, Ruby, Java, C#, Scala, Go, Elixir, Erlang, PHP, R and Clojure.

Rate of growth, 2018-2019: +80%

Average salary: $97,000
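Because a GraphQL query is just a string sent over HTTP, trying one from Python takes a few lines. The endpoint and schema below are hypothetical, for illustration only:

    import requests

    url = "https://api.example.com/graphql"  # hypothetical endpoint
    query = """
    {
      repository(name: "example") {
        name
        stargazerCount
      }
    }
    """

    response = requests.post(url, json={"query": query})
    print(response.json()["data"])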

Kotlin is a cross-platform, statically typed, general-purpose programming language that is designed to interoperate with Java. The Java Virtual Machine (JVM) version of its standard library, in fact, depends on the Java Class Library, though Kotlin's syntax is more concise than that of Java. In May of 2019, Google announced that the Kotlin language is now its preferred language for Android developers; it has been included as an alternative to the standard Java compiler since the release of Android Studio 3.0 in 2017.

Rate of growth, 2018-2019: +76%

Average salary: $99,000

Vue is a progressive, incrementally adoptable JavaScript framework for building user interfaces on the Web. It allows users to extend HTML with attributes (called directives) that offer increased functionality to HTML applications through either built-in or user-defined directives.

Rate of growth, 2018-2019: +72%

Average salary: $116,000

.NET Core is a free, open-source, managed software framework for Windows, Linux and macOS. It is a cross-platform successor to Microsoft's proprietary .NET Framework and is released for use under the MIT License. It is primarily used in the development of desktop application software, AI/machine learning and IoT applications.

Rate of growth, 2018-2019: +71%

Average salary: $87,000

Formerly Looker Data Sciences, Looker is a data exploration and discovery business intelligence platform that was acquired by Google Cloud Platform in 2019. Looker's modelling language, LookML, enables data teams to define relationships in their database so business users can explore, save and download data without needing to know SQL. Looker was the first commercially available BI platform built for and aimed at scalable, massively parallel relational database management systems (MPRDBM) such as Amazon's Redshift, Google BigQuery, HP Vertica, Netezza and Teradata.

Rate of growth, 2018-2019: +68%

Average salary: $68,000

HashiCorp's Terraform is open-source infrastructure-as-code software that allows users to define and provision a data centre using the proprietary, high-level HashiCorp Configuration Language (HCL) or JSON. Terraform supports a number of cloud infrastructure providers, including Amazon AWS, IBM Cloud, Google Cloud Platform, DigitalOcean, Microsoft Azure, and more.

Rate of growth, 2018-2019: +66%

Average salary: $104,000

Google's suite of cloud computing services runs on the same infrastructure used for Google's end-user products, and includes a set of management tools and modular cloud services such as computing, data storage, data analytics and machine learning. The platform provides infrastructure as a service, platform as a service and serverless computing environments to customers, as well as Google's App Engine, which allows for developing and hosting web applications in Google-managed data centres.

Rate of growth, 2018-2019: +62%

Average salary: $191,000

Originally designed by Google, Kubernetes (sometimes abbreviated as K8s) is an open-source container orchestration system for automating application deployment, scaling and management. Kubernetes provides a platform for application container automation, deployment, scaling and operation across clusters of hosts.

Rate of growth, 2018-2019: +61%

Average salary: $115,000
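To give a flavour of the automation Kubernetes enables, the official Python client can enumerate workloads across a cluster. This sketch assumes a reachable cluster and a local kubeconfig:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (e.g. ~/.kube/config).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)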

Spring Boot is an open-source, Java-based integration framework used to create microservices and to build stand-alone and production-ready Spring applications. Spring Boot is built on the Spring framework and gives developers a platform on which to jumpstart development of Spring applications. Spring Boot uses pre-configured, injectable dependencies to speed up development and save developers time.

Rate of growth, 2018-2019: +58%

Average salary: $78,000

As fast as some tech skills rise, others fall. Five skills that dropped off significantly in the year between 2018 and 2019 are:

The free, open-source Web browser from the Mozilla Foundation has seen its popularity wane in recent years; developers with these skills may also find they are not in demand.

Rate of growth, 2018-2019: -47%

Open source software from HashiCorp for building and maintaining portable virtual software development environments, Vagrant tries to simplify configuration management of virtual environments.

Rate of growth, 2018-2019: -41%

Skills related to Googles web browser have also decreased in popularity between 2018 and 2019.

Rate of growth, 2018-2019: -33%

Production of optics systems has seen a steep decline of late.

Rate of growth, 2018-2019: -33%

The Global System for Mobiles is an older telecommunications standard for mobile phones, which could explain why it has decreased in popularity.

Rate of growth, 2018-2019: -26%

IDG News Service


View post:
10 developer skills on the rise and five on the decline - TechCentral.ie

What the open source community can teach the suddenly remote workforce – Security Boulevard

Productive remote teamwork is possible. Just ask the open source community, who has been doing it for years. Here are some top tips for working remotely.

By now we are all familiar with the, uh, challenges (that's the printable word) of uprooting millions of workers from their offices so they can work more safely from home.

Remote work, all of a sudden and with no time to plan for it, is disruptive. It's unfamiliar. It's stressful. It's distracting, especially for those with school-aged children who are not at school. As one frazzled parent put it, "I now have two more full-time jobs. I'm a principal and a teacher."

And in the tech sector, software developers who have been working together, collaborating in open office environments, are suddenly isolated. Sure, there are virtual connections, but they are not the same as being in the same room.

That doesn't mean development has to crash and burn, though. There is a template available for overcoming that challenge. The open source software sector has been working remotely since... well, since open source became a thing.

In most cases, participants have never met. They don't know each other. They might never see or speak with one another. They are likely in different parts of the country, or different countries, many on different sides of the world. Frequently they don't even speak the same language.

Yet they work together, in many cases with astonishing efficiency, and they produce products of superb quality. The Linux operating system, for example, started as an open source project and remains open source to this day. Open source software is part of virtually every application, network, and system in operation today. It often represents the majority, sometimes more than 90%, of the code in a codebase.

So yes, productive remote teamwork is possible.

That's not to say that open source development is an exact parallel to the corporate world. Open source is a community, not a company. Those who participate are essentially volunteers, not paid employees. There is a hierarchy, but the supervisor is generally more along the lines of community leader than boss.

But conventional development teams and their managers in need of new ideas for working remotely can still learn plenty from the remote operation of the open source world. There are even books about it; one of the most popular is The Art of Community by Jono Bacon, former community manager for Ubuntu.

Tim Mackey, principal strategist at the Synopsys Cybersecurity Research Center (CyRC), knows about the remote operation of open source communities as well. While he works for a company, he has been a community leader, and still is a community member, for open source projects. He has worked remotely and managed remote teams for the bulk of his career.

So he knows from experience that remote doesn't have to mean disconnected. It just takes some awareness, effort, and cooperation. He described some of the ways open source communities mitigate the absence of physical human contact, starting with communication, communication, communication.

It sounds like the tech version of the real estate mantra "Location, location, location," which describes the most important factor in buying a house.

But that is because communication is the foundation for everything else. Of course, working remotely can't be exactly like the physical office environment where, as Mackey puts it, "If someone is working on something that relates to what you are working on, you will know, because up will pop a head when you say, 'I really don't understand why this is doing this.'"

But it can come close, as long as teams aren't too large: ideally fewer than 10 people.

"You could actually have everyone put their phone on a Skype call. The phone is just sitting in the corner, and it doesn't have any other purpose than to serve as the proxy for the office," he said. "There are many ways to solve the problem. You just need to find the pieces that are missing."

Resiliency flows from communication: what Mackey calls completely and transparently communicating all of the issues regarding a project.

As is the case with open source projects, resiliency is the result when there is "nobody who is magically special who needs to know extra stuff," he said. "Anyone at any point in time can know everything. That level of egalitarianism really starts to increase the engagement."

It also means there is no single point of failure, which is a mandatory element of resiliency. If somebody gets sick, goes on vacation, or gets a different job, it doesn't hamstring the rest of the team, because one person isn't carrying all the institutional knowledge. Everybody is.

"You don't have to worry about one person having all the magic knowledge and then you are massively disrupted when that one person has to deal with some personal issue or, for that matter, wins a Powerball ticket," Mackey said.

"It gives you flexibility. Everybody is going to have some aspect of their life that is going to be variable. Some people want to ski, some people want to surf, some people don't like the cold, some people love the cold."

Emotionally intelligent feedback, which also flows from good communication, can be much trickier among a team working remotely, since emails and texts usually lack tone. Facial expressions, speech volume, and other physical cues present in face-to-face communication can bring a lot of helpful nuance to comments that might seem harsh or even accusatory in writing.

Not to mention that cultural and language barriers can be easier to overcome face-to-face than in writing. If a recipient from another country who speaks another language gets a note and puts it into something like Google Translate, the results can be unpredictable.

"If you know multiple languages and have tried that, then you know Google Translate is sometimes really good and sometimes it is absolutely atrocious," Mackey said. He noted that in the open source world, or any remote situation, he makes a point of using very precise language when he writes comments.

"If you have ever been in a situation where somebody has complained about the tone of your writing, that is exactly the type of scenario that successful open source teams figure out pretty early how to overcome," he said. "In some countries, tone is such a key component of their written language that they might miss what you meant more often than you would prefer."

A breakdown in the emotional element of feedback can be a huge kick in morale, Mackey said.

Every team and every project needs a process to govern how things get done. But remember that the whole point of the process is to help things get done. As Bacon says in his book, processes are only useful when they are "a means to an end."

Or as Mackey puts it, "It really boils down to making certain that everyone knows what it is, why it is, and to a certain extent knows that they can raise their hand and say, 'But you do realize you're not doing this, right?'"

And if the shift from the office to working remotely means certain things can't get done, then the process needs a revision.

A perfect example, Mackey said, is a scenario where IT and legal have imposed a security policy that says: "To protect our source code, you can only commit code on this special network that will never be accessible outside of the company."

While that might have made perfect sense before, it doesn't work when nobody is at the office. "Process needs to be a living entity. You can't just fall back on, 'But this is the way we have always done it,'" he said.

One reason is that someone on the team might come up with a workaround just to get their job done, but such a workaround amounts to shadow IT, since it is outside security policy.

"Does that mean I've created a situation where, in trying to do the work I've been assigned, I have now circumvented every process in place because it wasn't designed for the reality of everybody working from home?" he asked.

It is clearly much better to figure out a way to maintain the security of source code without making it impossible for a remote development team to do its work.

All of which, once again, comes down to communication, communication, communication.

Of course, it will take some getting used to. There will likely be some bumps in the road. But if the open source community can do it, organizations can too.

See more here:
What the open source community can teach the suddenly remote workforce - Security Boulevard

Open source made the cloud in its image – ITworld

"The cloud was built for running open source," Matt Wilson once told me, "which is why open source [has] worked so well in the cloud."

While true, there's something more fundamental that open source offers the cloud. As one observer put it, "The whole intellectual foundation of open interfaces and combinatorial single-purpose tools is pretty well ingrained in cloud." That approach is distinctly open source, which in turn owes much to the Unix mentality that early projects like Linux embraced.

Hence, the next time you pull together different components to build an application on Microsoft Azure, Google Cloud, AWS, or another cloud, realize that you can do so because the open source ethos permeates the cloud.

Open source has become so commonplace today that we are apt to forget its origins. While it would be an overstatement to suggest that Unix is wholly responsible for what open source became, many of the open source pioneers came from a Unix background, and it shows.

Here's a summary of the Unix philosophy by Doug McIlroy, the creator of Unix pipes: write programs that do one thing and do it well; write programs to work together; and write programs to handle text streams, because that is a universal interface.

Sound familiar? From this ideological parentage it's not hard to see where open source gets its preference for modularity, transparency, composability, etc. It's also not much of a stretch to see where the open source-centric clouds are picking up their approach to microservices.
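To make that composability concrete, here is a minimal sketch in Python (purely illustrative; the file name and function names are invented for this example) of single-purpose steps that chain together the way Unix pipes chain programs:

```python
# Minimal sketch of "combinatorial single-purpose tools": each function
# does one thing, consumes a stream of lines, and yields a stream of
# lines, so the steps compose like Unix pipes.

def read_lines(path):
    """Source: stream lines from a file."""
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

def grep(pattern, lines):
    """Filter: keep only lines containing the pattern (like grep)."""
    return (line for line in lines if pattern in line)

def count(lines):
    """Sink: count the lines that flow through (like wc -l)."""
    return sum(1 for _ in lines)

# Equivalent in spirit to: cat access.log | grep ERROR | wc -l
# ("access.log" is a hypothetical input file.)
if __name__ == "__main__":
    print(count(grep("ERROR", read_lines("access.log"))))
```

Each stage knows nothing about the others; the text-stream interface is the whole contract, which is the same property the cloud's composable services inherit.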

In turn, the different clouds have all converged on similar design principles. As Wilson notes, the composable pieces ethos of open source "is a property of open systems, and a general Unix philosophy that [is] carried forward in the foundational building blocks of cloud as we know it."

Cloud is impossible without the economics of free and open source software, but cloud is arguably even more impossible, at least in the way we experience it today, without the freedoms and design principles offered by open source. Erica Brescia makes this point perfectly.

Importantly, we're now in a hyper-growth development phase for the cloud, with different companies with different agendas combining to open source incredibly complex, powerful, cloud-native software to tackle everything from machine learning to network management. As Jono Bacon notes:

Open source created the model for collaborative technology development in a competitive landscape.

The cloud manifested as the most pressing need to unite this competitive landscape together.

This led to a rich tapestry of communities sharing best practices and approaches.

This rich tapestry of communities sharing owes its existence to open source. Clouds may provide the platforms where open source increasingly lives and grows, but the animating force behind the clouds is open source. Given the pressing problems all around us, we're going to need both cloud and communities, each driven by open source, to help tackle them.

This story, "Open source made the cloud in its image" was originally published by InfoWorld.

Read this article:
Open source made the cloud in its image - ITworld

How Edge Is Different From Cloud And Not – The Next Platform

As the dominant supplier of commercial-grade open source infrastructure software, Red Hat sets the pace, which is why IBM was willing to shell out an incredible $34 billion to acquire the company. It is no surprise, then, that Red Hat has its eyes on the edge, that amorphous and potentially substantial collection of distributed computing systems that everyone is figuring out how to chase.

To get a sense of what Red Hat thinks about the edge, we sat down with Joe Fernandes, vice president and general manager of core cloud platforms at what amounts to the future of IBM's software business. Fernandes has been running Red Hat's cloud business for nearly a decade, starting with CloudForms and moving through the evolution of OpenShift from a proprietary (but open source) platform to one that has become the main distribution of the Kubernetes cloud controller for enterprises, meaning those who can't or won't roll their own open source software products.

Timothy Prickett Morgan: Is the edge different, or is it just a variation on the cloud theme?

Joe Fernandes: For Red Hat, the edge is really an extension of our core strategy, which is open hybrid cloud: providing a consistent operating environment for applications that extends from the datacenter across multiple public clouds and now out to the edge. Linux is definitely the foundation of that, and Linux for us is of course Red Hat Enterprise Linux, which we see running in all footprints.

It is not just about trying to get into the core datacenter. It's about trying to address the growing opportunity at the edge, and I think it's not just important for Red Hat. Look at what Amazon is doing with Outposts, what Microsoft is doing with Azure Stack, and what Google is doing with Anthos, all trying to put out cloud appliances for on-premises use. Hybrid cloud is as strategic for any of them as it is for us.

TPM: What is your projection for how much compute is on the edge and how much is in the datacenter? If you added up all of the clock cycles, how is it going to balance out?

Joe Fernandes: It is very workload driven. Generally, the advice we always give to clients is that you should centralize what you can, because the core is where you have the most capacity in terms of infrastructure, and the most capacity in terms of your SREs and your ops teams, and so forth. As you start distributing out to the edge, you are in constrained environments and you are not going to have humans out there managing things. So centralize what you can and distribute what you must, right?

That being said, specific workloads do need to be distributed. They need to be closer to the sources of data that they operate upon. We see alignment between the trends around AI and machine learning and the trends around edge, and that's where we see some of the biggest demand. That makes sense because people want to process data close to where it is being generated, and they can't incur either the cost or the latency of sending that data back to their datacenter or even the public cloud regions.

And it is not specific to one vertical. It's certainly important for service providers and 5G deployments, but it's also important for auto companies doing autonomous vehicles, where those vehicles are essentially data-generating machines on wheels that need to make quick decisions in real time.

TPM: As far as I can tell, cars are just portable entertainment units. The only profit anybody gets from a car is all the extra entertainment stuff we add. The rest of the price covers commissions for dealers and the bill of materials for the parts in the car.

Joe Fernandes: At last year's Red Hat Summit, we had both BMW and Volkswagen talking about their autonomous vehicle programs, and this year we received an award from Ford Motor Company, which also has major initiatives around autonomous driving as well as electrification. They'll be speaking at this year's Red Hat Summit. Another edge vertical is retail, allowing companies to make decisions in stores, to the extent that they still have physical locations.

TPM: I didn't give much thought to the Amazon store, which has something ridiculous like 1,700 cameras: you walk in, you grab stuff, you walk out, and it watches everything you do and takes your money electronically. I thought it was kind of bizarre two months ago, since it is not shopping as I know and enjoy it, but my guess is it is looking pretty attractive this week. And I know we're not going to have a pandemic for the rest of our lives, but this could be the way we do things in the future. My guess is that people are going to be less inclined to do all kinds of things that seemed very normal only one or two months ago.

Joe Fernandes: Exactly. The other interesting vertical for edge is financial services, which has branch offices and remote offices. The oil and gas industry is interested in edge deployments close to where they are doing exploration and drilling, and the US Department of Defense is also thinking about remote battlefield and control of ships and planes and tanks.

The thing those environments have in common is Linux. People aren't running these edge platforms on Windows Server, and they are not using mainframes or Unix systems. It is all Linux, and it puts a premium on performance and security, areas where Red Hat has obviously made its mark with RHEL. People are interested in driving on open systems anyway, and in moving to containers and Kubernetes, and Linux is the foundation of all this.

TPM: Are containers a given for edge at this point? I think they are, except where bare metal is required.

Joe Fernandes: I don't think containers are a prerequisite. But certainly, just like the rest of the Linux deployments, the edge is going in the direction of containers. The reason is portability: having the same environment to package, deploy, and manage at the edge as you do in the datacenter and in the cloud. Containers can run on bare metal, directly on Linux; you don't need a virtualization layer in between.

TPM: Well, when I say bare metal, I mean not even a container. It's Linux. That's it.

Joe Fernandes: I think the distinction between bare metal Linux and bare metal Linux containers is really about how workloads are packaged, whether as container images or as something like RPMs or Debian packages, and whether you need orchestrated containers. And again, that's very workload specific. We certainly see folks asking us about environments that are really small, where you might not do orchestration because you're not running more than a single container or a small number of containers. In that case, it's just Linux on metal.

TPM: OK, but you didn't answer my question yet, and that is really my fault, not yours. So, to circle back: How much compute is at the edge and how much is on premises or in the cloud? Do you think it will be 50/50? What's your guess?

Joe Fernandes: I don't think it'll be 50/50 for some time. Something in the range of 10 percent to 20 percent in the next couple of years is possible, and I would put it at 10 percent or less, because there is just a ton of applications running in core datacenters and a ton running out in the public cloud. People are still making that shift to cloud.

But again, it'll be very industry specific. I think the adoption of edge compute using analytics and AI/ML is just now taking off. For the auto makers doing autonomous vehicles, there is no other choice. It is a datacenter on wheels that needs to make life-and-death decisions about where to turn and when to brake, and in that market the aggregate edge compute will be the majority at these companies pretty darn quick. You will see edge compute adoption go to 50 percent or more in some very specific areas, but if you took the entire population of IT, it's probably still going to be in the single digits.

TPM: Does edge require a different implementation of Linux, say a cut-down version? Do you need a JEOS-type thing like we used to have in the early days of server virtualization? Do you need a special, easier, more distributed version of OpenShift for Kubernetes? What's different?

Joe Fernandes: With Linux, the valuable thing is the hardware compatibility that RHEL provides. But we certainly see demand for Linux on different footprints. So, for example, RHEL on Arm devices or RHEL with GPU enablement.

When it comes to OpenShift, obviously Kubernetes is a distributed system, where the cluster is the computer, while Linux is focused on individual servers. What we are seeing is demand for smaller clusters, so we have enabled OpenShift on three-node clusters, which is sort of the minimum for a highly available control plane because etcd, which is core to Kubernetes, requires three nodes for quorum. But in that situation, we may put the control plane and the applications on the same three machines, whereas in a larger setup you would have a three-node OpenShift control plane and then at least two separate machines running your actual containers so that you have HA for the apps. Obviously those application clusters can grow to tens or even hundreds of nodes. But at the edge, the premium is on size and power, so three nodes might be as much space as you're going to get in the rack out at the edge.
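The three-node minimum Fernandes mentions falls out of simple majority-quorum arithmetic. A quick illustration in Python (nothing from OpenShift itself, just the math):

```python
# etcd (and Raft-based systems generally) need a strict majority of
# members to agree before committing a write.
# quorum = floor(n/2) + 1, so fault tolerance = n - quorum.

def quorum(members: int) -> int:
    return members // 2 + 1

for n in (1, 2, 3, 5):
    print(f"{n} node(s): quorum={quorum(n)}, tolerates {n - quorum(n)} failure(s)")

# Output:
# 1 node(s): quorum=1, tolerates 0 failure(s)
# 2 node(s): quorum=2, tolerates 0 failure(s)   <- two nodes are no better than one
# 3 node(s): quorum=2, tolerates 1 failure(s)   <- the smallest HA control plane
# 5 node(s): quorum=3, tolerates 2 failure(s)
```

Three nodes is the first size that survives a single failure, which is why it shows up as the floor for an HA control plane at the edge.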

TPM: Either that, or you might end up putting your control plane on a bunch of embedded microcontroller-type systems and compacting that part down.

Joe Fernandes: Actually, we see a kind of progression. There are standard clusters made as small as you can get them, maybe a control plane plus one or two nodes. The next step, which we have moved into, is putting the control plane and the app nodes on the same three machines. And then you get into what I'd call distributed nodes, where you might have a control plane shared across five or ten or twenty edge locations that are running applications and talking back to that shared control plane. There, you have to worry about connectivity to the control plane.

TPM: If you lose the control plane or your connectivity to it, all it should mean is that you can't change the configuration of the compute cluster at the edge.

Joe Fernandes: Not exactly, because Kubernetes is a declarative system: if a node drops off, it thinks it needs to start those containers up on another node, or start a new node. In a case where you might have intermittent connectivity, we need to make it more tolerant, so it doesn't actually start that process unless the node fails to reconnect for some amount of time. And then the next step beyond that is clusters that have two nodes or a single node, and at that point the control plane, if it exists, is not HA, so you have to get high availability some other way.
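What that tolerance might look like in practice: a reconciliation loop that only reschedules workloads once a disconnect has outlasted a grace period. A toy sketch, not OpenShift code; the names and the five-minute window are invented for illustration:

```python
import time

# Toy reconciliation loop: a declarative system sees a node as "gone"
# and wants to reschedule its containers elsewhere. For edge nodes on
# flaky links, we only act once the silence outlasts a grace period.

GRACE_SECONDS = 300  # hypothetical tolerance window for intermittent links

last_seen: dict[str, float] = {}  # node name -> last heartbeat timestamp

def heartbeat(node: str) -> None:
    last_seen[node] = time.monotonic()

def reconcile(nodes: list[str]) -> None:
    now = time.monotonic()
    for node in nodes:
        silent_for = now - last_seen.get(node, now)
        if silent_for > GRACE_SECONDS:
            # Only now do we treat the node as failed and reschedule.
            print(f"{node}: silent {silent_for:.0f}s > grace, rescheduling pods")
        else:
            print(f"{node}: within grace period, leaving workloads in place")

heartbeat("edge-site-1")
reconcile(["edge-site-1", "edge-site-2"])
```

The design point is simply that the reconciler distinguishes "briefly unreachable" from "failed," rather than reacting to every dropped heartbeat.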

TPM: You can do virtual machines on a slightly beefier server and get software resilience, but then you have a potential hardware resilience issue.

Joe Fernandes: Maybe the resiliency is between edge locations.

TPM: What happens with OpenStack at this point, if anything? AT&T obviously has been widely deploying OpenStack at the edge, with tens of thousands of baby datacenters planned, all linked and controlled by OpenStack. Is this going to be something like "use OpenShift where you can, use OpenStack where you must"?

Joe Fernandes: We certainly see Red Hat OpenStack deployed at the edge. There's an architecture we put out called the distributed compute node architecture, which customers are adopting. It is relevant for customers that have virtualized application workloads and also want an open solution, and so I think you will continue to see Red Hat OpenStack at the edge, and you will continue to see vSphere at the edge, too.

For example, in telco, OpenStack has a big footprint where companies have been creating virtualized network functions, or VNFs, for a number of years. That has driven a lot of our OpenStack business in telco, because many of the companies we work with, like Verizon and others, wanted an open platform on which to deploy VNFs.

TPM: These telcos are not going to suddenly just decide, to hell with it, and containerize all this and get rid of VMs and server virtualization?

Joe Fernandes: It's not going to be an either/or, but we now see a new wave of containerized network functions, or CNFs, particularly around 5G deployments. So the telcos are coming around to containers, but like every other vertical, they don't all switch overnight. Just because Kubernetes has been out for five years doesn't mean the VMs are gone.

TPM: Is the overhead for containers a lot less than VMs? It must be, and that must be a huge motivator.

Joe Fernandes: Remember that the overhead of a VM includes the operating system that runs inside the guest. With a container, you are not virtualizing the hardware, you are virtualizing just the process, so you can make a container as small as the process it runs. A VM can only be made as small as its guest operating system.

TPM: We wouldn't have done all this VM stuff if we could have just figured out containers to start with.

Joe Fernandes: You know, Red Hat Summit is coming up in a few weeks and we will be providing an update on KubeVirt, which allows Kubernetes to manage standard virtual machines along with containers. For the past year or more, we have been talking about it strictly in terms of what we are doing in the community to enable it, but it has not been something that we can sell and support. This is the year it's ready for primetime, and that presents an opportunity for a converged management plane. You could have Kubernetes directly on bare metal, managing both container workloads and VM workloads, and also managing the transition as more of those workloads move from VMs to containers. You won't have to switch environments or have that additional layer and so forth.

TPM: And I fully expect people to do that. I've got nothing against OpenStack. Five years ago, when we started The Next Platform, it was not obvious whether the future control plane and management and compute metaphor would be Mesos or OpenStack or Kubernetes. And for a while there, Mesos looked certainly better than OpenStack because of some of its mixed-workload capabilities and the fact that it could run Kubernetes better than OpenStack could. But if you can get KubeVirt to work, and it gives Kubernetes essentially the same VM-management functionality you get from OpenStack, then I think we're done. It is emotional for me to just put a nail in the coffin like that.

Joe Fernandes: The question is: Is it going to put a nail not just in OpenStack, but in VMware, too?

TPM: VMware is an impressive legacy environment in the enterprise, and it generates more than $8 billion in sales for Dell. There is a lot of inertia with legacy environments; I mean, there are still System z mainframes out there doing a lot of useful work and providing value to IT organizations and their businesses. I have seen many legacy environments in my life, but this may be the last big one I see this decade.

Joe Fernandes: You have covered vSphere 7.0 and Project Pacific; look at the contrast in strategy. We're taking Kubernetes and trying to apply it to standard VM workloads as a cloud-native environment. What VMware has done is take Kubernetes and wrap it back around the vSphere stack to keep people on the old environment they've been on for the last decade.

Read the original post:
How Edge Is Different From Cloud And Not - The Next Platform

No New Normal: Building the Commons – Resilience

Author and archdruid John Michael Greer talks about catabolic collapse, not as the guns & ammo, post-apocalyptic-yet-still-powered-by-capitalism scenario favored in the media, but as an ongoing process of societal disintegration. Looking at our mainstream institutions, economics and beliefs, it's clear that we've been collapsing for a while. A pandemic punctuates the catabolic curve with an eye-popping shock set against systemic processes bedrocked as background, never foreground.

The etymology of apocalypse points to an unveiling, dropping illusion and finding revelation. As our global production systems and social institutions (e.g. healthcare, education) are suddenly overwhelmed, their basic unsuitability is exposed. Just weeks ago so mighty, economies now sputter when faced with this latest adversity, and this sudden spike in the process of collapse portends a larger undertaking in ecological and social entropy. As Covid-19 takes its human toll worldwide, we've begun to see the best and worst of humanity in its choice of loyalties, whether to human life or to economic systems, and the power struggles in finding the right balance (if such a thing exists). It's another opportunity to consider: what is inherent in us as people, and what is the product of our systems? Growing up in systems preaching that greed is good, that the only social responsibility of business is to increase profits, or that there is no alternative, it's no surprise that the worst reactions to the crisis are marked by individualism, paranoia and accumulation.

Image by Sam Wallman and Miroslav Sandev

Natural systems are rebounding because pollution and emissions are down, but it's impossible to fist-pump about this while people are suffering, dying, or working beyond capacity to save lives. In fact, it's a good time to question the very validity of work: which services are essential, and how should we use our free time? What solutions can the market offer to the health crisis, to overcrowded hospitals, to breaks in supply lines of essential goods and services? To those unable to meet their rent, mortgage or future expenses? Some claim our global, industrialized model is to blame for the virus; others cry that the cure is worse than the disease, that the economic effects of quarantining will create more destruction than the virus itself.

These predictions are not endemic to economic science, but to a history of accumulatory, command-and-control dynamics which, via longstanding institutions including patriarchy and colonialism, have found their apex in capitalist realism: the widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it. Short a few weeks of predatory feeding, the growth-based model shows its weakness against the apocalypse. Another veil is lifting.

What else can we see? What will the world look like whenever this is over (and how will we know when it is)?

Could this be the herald of another political economy based on abundance, not scarcity and greed? We can help nature restore itself, cut down emissions and our consumption of mass-manufactured, designed-to-break-down crap. We can radically curtail speculative ventures and fictitious commodities. Slash inequality from the bottom up, spend our time away from bullshit jobs to reimagine the world. Use this free time to reconnect, cherish our aliveness, break out of containment, care for each other, grieve what we've lost and celebrate what we still have.

We do have the frameworks; we have been creating this capacity for quite a while. Here I refer to the Commons. Simply put, the Commons are living systems to meet shared needs. As old as humanity itself and as new as the latest trends in decentralized technology, the Commons are best understood as a verb, not a noun; more action than static thing. A commons needs three elements: a shared resource, a community, and the rules or protocols that community develops to care for the resource.

Examples include cooperatively managed forests, water distribution and irrigation systems, social currencies, Free/Libre and Open-Source Software, self-organized urban spaces, distributed manufacturing networks and so much more. As George Monbiot describes, the most inspiring and effective reactions to the Covid-19 crisis are not coming from markets or states, but from the Commons. Often invisibilized, the practices of the commons offer fairer economic and human frameworks to meet our needs, especially in challenging times.

From localized yet globally connected systems of production that can rapidly respond to urgent needs without depending on massive global chains, to ways to organize the workforce into restorative and purpose-oriented clusters of people who take care of each other. This new economy will need a new politics and a more emancipated relation to the state: we have tried it and succeeded. What new worlds (many worlds are possible) can we glimpse under this lifted veil?

"We Must Reimagine Everything" was originally published in Spanish by Miguel Brieva (Clismón). It was translated to English by Guerrilla Translation.

Here's a question: did you already know about these potentials? Are we still having this conversation among ourselves, or have these terrible circumstances gifted us with an opportunity for (apocalyptic) clarity? The normal is collapsing, while our weirdness looks saner than ever before.

Timothy Leary famously called for us to find the others. I think that the others are all of us, and this may be the moment when more of us can recognise that. A few years ago, we created an accessible, easy-to-use platform to share the potential of the Commons with everyone. Today it's more relevant than ever. The projects we work on (Commons Transition and DisCO) are based on two simple precepts:

This is why we strive to create accessible and relatable frameworks for people to find the commoner within themselves. But we need to grow out of our bubbles, algorithmically predetermined or not; we need to rewild our message beyond the people who already know. Movements like Degrowth, Open Source software and hardware, anti-austerity, Social Solidarity Economy, Ecofeminism, Buen Vivir: we are all learning from each other. We must continue to humbly and patiently pass the knowledge on, listen to more voices and experiences, and keep widening the circle to include everyone, until there are no others.

Please share this article with anyone who may benefit from these crazy ideas that suddenly don't look so crazy anymore. Start a conversation with people who, aghast at the rapid collapse and lack of reliable systemic support, are eager for new ideas, solutions, and hope. The greatest enclosure of the commons is that of the mind: our capacity to imagine better worlds, to be kinder to each other and to the Earth. This will not be an easy or straightforward process. We need to hold each other through the loss and pain. We need to keep finding the others among all of us, until there are no more.

Teaser photo credit: Photograph by Antonio Marín Segovia, CC BY 2.0

Continued here:
No New Normal: Building the Commons - Resilience