Quantum Cryptography Solutions Market: Industry Trends, Emerging Technologies and Developments (2020-2027) – Owned

Global Quantum Cryptography Solutions Market 2020, published by Stratagem Market Insights, opens with a market description, executive summary, segmentation, and classification. The report offers a comprehensive analysis of the market so that readers can be guided toward future opportunities and high-profit areas of the industry. It provides a detailed analysis of the market structure, considering the current market landscape, leading industry shares, upcoming market trends, leading market players, product types, applications, and regions.

The study was carried out worldwide and presents current and historical growth analysis, competition analysis, and the growth prospects of the central regions. With industry-standard accuracy and high data integrity, the report makes an excellent attempt to highlight the key opportunities available in the global Quantum Cryptography Solutions market to help players build strong market positions. Buyers of the report can access verified and reliable market forecasts, including those for the overall size of the global Quantum Cryptography Solutions market in terms of sales and volume.

Get a FREE sample copy of this report: https://www.stratagemmarketinsights.com/sample/21553

Development policies and plans are discussed, and manufacturing processes and industry chain structures are analyzed. The report also gives import/export, supply, and consumption figures, as well as manufacturing costs, global revenues, and gross margin by region. Numerical data is backed up with analytical tools such as SWOT analysis, the BCG matrix, SCOT analysis, and PESTLE analysis. Statistics are presented in graphical form to provide a clear understanding of the facts and figures.

The major manufacturers covered in this report:

ID Quantique, MagiQ Technologies, Quantum XC, Qubitekk, QuintessenceLabs

Market segmentation:

The Quantum Cryptography Solutions market is divided into various essential segments, including application, type, and region. Each market segment is studied intensively in the report, taking into account market acceptance, value, demand, and growth prospects. Segmentation analysis allows clients to tailor their marketing approach to each segment and identify the most promising customer base.

Regional insights of Quantum Cryptography Solutions Market

In terms of geography, this research report covers almost all major regions of the world: North America, Europe, South America, the Middle East and Africa, and Asia Pacific. The markets in Europe and North America are expected to grow over the next few years, while the Quantum Cryptography Solutions market in Asia Pacific is expected to grow significantly during the forecast period. The latest technologies and innovations are the most important characteristics of North America and the main reason the United States dominates the world market. The South American Quantum Cryptography Solutions market is also expected to grow in the near future.

Report covers the impact of COVID-19 on the market.

The ongoing pandemic has overhauled various facets of the market. This research report assesses the financial impact on, and disruption of, the Quantum Cryptography Solutions market. It also includes analysis of potentially lucrative opportunities and challenges in the foreseeable future. The publisher has interviewed various industry delegates and engaged in primary and secondary research to provide clients with information and strategies to fight market challenges during and after the COVID-19 pandemic.

The main questions answered in the report:

Global Quantum Cryptography Solutions Market Industry Analysis assists clients with customized and syndicated reports of significant importance to experts involved in data and market analysis. The report also delivers market-driven results that can inform feasibility studies for customer needs. SMI guarantees validated and verifiable market data operating in real-time scenarios, and analytical studies are conducted to confirm customer needs with a thorough understanding of market capabilities.

The conclusion of this report provides an overview of the potential for new projects to succeed in the market in the near future and covers the investment potential across the various sectors of the global Quantum Cryptography Solutions market.

Furthermore, the years considered for the study are as follows:

Historical years: 2014 to 2018

Base year: 2019

Forecast period: 2020 to 2027

Need a discount?

Note: *The discount is offered on the Standard Price of the report.

Request a discount for this report @ https://www.stratagemmarketinsights.com/discount/21553

More here:
Quantum Cryptography Solutions Market: Industry Trends, Emerging Technologies and Developments (2020-2027) - Owned

Ethereum Foundation announces $3.8M in new grants – Digital Market News

The Ethereum Foundation has announced that over $3.8 million in grants will be awarded to teams working on the Ethereum blockchain.

In a Sept. 8 post published on the Ethereum Blog, the Ethereum Foundation, or EF, announced that it had given grants to teams as part of its ecosystem support program throughout Q2 2020. The categories included teams focusing on community and education; cryptography and zero-knowledge proofs, or ZKPs; developer experience; Ethereum 2.0; and Layer 2.


The $3,884,000 in funds will go to 28 companies and researchers, including blockchain advisory firm Akomba Labs for community and education, and Beacon Fuzz for finding crash-causing and consensus bugs in Ethereum 2.0.

"This list represents non-recurring funding from across the EF, including grants via our public inquiry process, delegated domain allocations and third-party funding," the EF blog stated. "As always, it's a privilege to work with these amazing projects and so many more."

The announcement comes a month after the EF stated it would be creating a dedicated security team for Ethereum 2.0 to review potential cybersecurity and crypto-economic problems in the next generation of the Ethereum network.

Go here to see the original:
Ethereum Foundation announces $3.8M in new grants - Digital Market News

Underwater Connector Market Statistics, Facts and Figures, Growth Overview, Size, SWOT Analysis and Forecast to 2026 by ID Quantique, Infineon…

Premium Market Insights recently published a report titled Underwater Connector Market Size and Forecast to 2026. The quantum cryptography market is at a nascent stage with massive potential to break into the cybersecurity industry. Quantum cryptography market players are constantly advancing their offerings, intending to provide highly secure solutions to their clients. The rise of quantum computing has led to a surge in the exposure of confidential data across industries. Owing to this, several end users of encryption solutions are investing significant amounts in procuring advanced data security solutions and services such as quantum cryptography. The global Underwater Connector market has grown at a fast pace, with substantial growth rates over the last few years, and is estimated to grow significantly in the forecast period, i.e., 2020 to 2026.

Request a Sample Copy of this Report @

https://www.premiummarketinsights.com/sample/TIP00018997

This report includes the following companies; other companies can be added on request:

ID Quantique, Infineon Technologies, Magiq Technologies, IBM, NuCrypt, Anhui Qasky Quantum Technology Co. Ltd., Qubitekk, Quintessence Labs, Qutools GmbH

Underwater Connector Market: A Competitive Perspective

The report also provides an in-depth analysis of the competitive landscape and the behavior of market participants. In this way, market participants can familiarize themselves with the current and future competitive scenario of the global Underwater Connector market and take strategic initiatives to gain a competitive advantage. The market analysts have carried out extensive studies using research methods such as PESTLE and Porter's Five Forces analysis. Overall, this report can prove to be a useful tool for market participants to gain deep insight into the global Underwater Connector market and to understand the main perspectives and ways to increase their profit margins.

Underwater Connector Market: Drivers and Limitations

This section explains the various drivers and restraints that have shaped the global market. The detailed analysis of market drivers enables readers to get a clear overview of the market, including the market environment, government policy, product innovation, development, and market risks.

The research report also identifies the opportunities, restraints, and challenges of the Underwater Connector market. This information helps readers identify potential and plan strategies to exploit it, and the discussion of obstacles and challenges helps readers understand how companies can avoid them.

Underwater Connector Market: Segment Analysis

This section contains segmentations such as application, product type, and end user. These segments help determine which parts of the market will improve faster than others, and the analysis explains why certain categories are developing better than others, helping readers understand where to make solid investments. The Underwater Connector market is segmented according to product type, application, and end user.

Inquiry before buying @ https://www.premiummarketinsights.com/inquiry/TIP00018997

Underwater Connector Market: Regional Analysis

This section of the report contains detailed information on the market in different regions. Each region offers a different market size, because each country has different government policies and other factors. The regions included in the report are North America, Europe, Asia Pacific, and the Middle East and Africa. Information about the different regions helps the reader to better understand the global market.

Table of Contents

1 Introduction of Underwater Connector Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Underwater Connector

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Underwater Connector Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Forces Model
4.4 Value Chain Analysis

5 Underwater Connector Market, By Deployment Model

5.1 Overview

About Premium Market Insights:

Premiummarketinsights.com is a one-stop shop for market research reports and solutions for companies across the globe. We support our clients' decision-making by helping them choose the most relevant and cost-effective research reports and solutions from various publishers. We provide best-in-class customer service, and our customer support team is always available to help with your research queries.

Sameer Joshi | Call: US: +1-646-491-9876, APAC: +91-20-6727-4191 | Email: [emailprotected]

Visit link:
Underwater Connector Market Statistics, Facts and Figures, Growth Overview, Size, SWOT Analysis and Forecast to 2026 by ID Quantique, Infineon...

Why Cloud-Based Architectures and Open Source Don’t Always Mix – ITPro Today

By some measures, open source has been wildly successful in the cloud. Open source solutions like Kubernetes have eaten closed-source alternatives for lunch. Yet, in other respects, open source within the cloud has been a complete failure. Cloud-based architectures continue to pose fundamental problems for achieving open source's founding goals of protecting user freedom. For many organizations, using the cloud means surrendering control to proprietary solutions providers and facing stiff lock-in risks.

These observations beg the question: Why hasn't open source been more influential in the cloud, and what could be done to make cloud computing more friendly toward open source?

From the early days of the cloud era, there has been a tension between open source and the cloud.

When free and open source software first emerged in the 1980s under the auspices of Richard Stallman and the GNU project, the main goal (as Stallman put it at the time) was to make software source code available to anyone who wanted it so that users could use computers without dishonor and operate in solidarity with one another.

If you run software on a local device, having access to the source code achieves these goals. It ensures that you can study how the program works, share modifications with others and fix bugs yourself. As long as source code is available and you run software on your own device, software vendors cannot divide the users and conquer them.

But this calculus changes fundamentally when software moves to cloud-based architectures. In the cloud, the software that you access as an end user runs on a device that is controlled by someone else. Even if the source code of the software is available (which it usually is not in the case of SaaS platforms, although it theoretically could be), someone else, specifically whoever owns the server on which the software runs, gets to control your data, decide how the software is configured, decide when the software will be updated, and so on. There is no solidarity among end users, and no equity between end users and software providers.

Stallman and other free software advocates realized this early on. By 2010, Stallman was lamenting the control that users surrendered when they used cloud-based software, and coining terms like "Service as a Software Substitute" to mock SaaS architectures. Free software advocates also introduced the Affero General Public License, which aims to extend the protections of the GNU General Public License (the mainstay free software license) to applications that are hosted over the network.

The fruits of these efforts were mediocre at best. Stallman's pleas to users not to use SaaS platforms have done little to stem the explosive growth of the cloud since the mid-2000s. Today, it's hard to think of a major software platform that isn't available via a SaaS architecture, or to find an end user who shies away from SaaS over software freedom concerns.

And although the Affero license has gained traction, its ability to advance the cause of free and open source software in the cloud is limited. The Affero license's main purpose is to ensure that software vendors can't claim that cloud-based software is not "distributed" to users, and therefore not subject to the provisions of traditional open source licenses, like the GPL. That's better than nothing, but it does little to address the issues related to control over data, software modifications, and the like that users face when they use cloud-based services.

Thus, cloud-based architectures continue to pose fundamental challenges to the foundational goals of free and open source software. It's hard to envision a way to resolve these challenges, and even harder to imagine them disappearing in a world where cloud adoption remains stronger than ever.

You can tell the story of open source in the cloud in another, more positive way. Viewed from the perspective of certain niches, like private cloud and cloud-native infrastructure technologies, open source has enjoyed massive success.

I'm thinking here about projects like Kubernetes, an open source application orchestration platform that has become so dominant that it doesn't even really have competition anymore. When even VMware, whose virtual machine orchestration tools compete with Kubernetes, is now running its own Kubernetes distribution, you know Kubernetes has won the orchestrator wars.

OpenStack, a platform for building private clouds, has been a similar success story for open source on cloud-based architectures. Perhaps it hasn't wiped the floor with the competition as thoroughly as Kubernetes did, but OpenStack nonetheless remains a highly successful, widely used solution for companies seeking to build private clouds.

You can draw similar conclusions about Docker, an open source containerization platform that has become the go-to solution for companies that want a more agile and resource-efficient alternative to proprietary virtual machines.

And even in cases where companies do want to build their clouds with plain-old virtual machines, KVM, the open source hypervisor built into Linux, now holds its own against competing VM platforms from vendors like VMware and Microsoft.

When it comes to building private (or, to a lesser extent, hybrid) cloud-based infrastructures, then, open source has done very well during the past decade. Ten years ago, you would have had to rely on proprietary tools to fill the niches in which platforms like Kubernetes, OpenStack, Docker, and KVM have now become de facto solutions.

Open source appears less successful, however, when you look at the public cloud. Although the major public clouds offer SaaS solutions for platforms like Kubernetes and Docker, they tend to wrap them up in proprietary extensions that make these platforms feel less open source than they actually are.

Meanwhile, most of the core IaaS and SaaS services in the public clouds are powered by closed-source software. If you want to store data in Amazon S3, or run serverless functions in Azure Functions, or spin up a continuous delivery pipeline in Google Cloud, you're going to be using proprietary solutions whose source code you will never see. That's despite the fact that open source equivalents for many of these services exist (such as Qinling, a serverless function service, or Jenkins, for CI/CD).

The consumer side of the cloud market is dominated by closed-source solutions, too. Although open source alternatives to platforms like Zoom and Webex exist, they have received very little attention, even in the midst of panic over privacy and security shortcomings in proprietary collaboration platforms.

One obvious objection to running more open source software in the cloud is that cloud services cost money to host, which makes it harder for vendors to offer open source solutions free of charge. It's easy enough to give away Firefox for people to install on their own computers, because users provide their own infrastructure. But it would be much more expensive to host an open source equivalent to Zoom, which requires an extensive and expensive infrastructure.

I'd argue, however, that this perspective reflects a lack of imagination. There are alternatives to traditional, centralized cloud infrastructure. Distributed, peer-to-peer networks could be used to host open source cloud services at a much lower cost to the service provider than a conventional IaaS infrastructure.

I'd point out, too, that many proprietary cloud services are free of cost. In that sense, the argument that SaaS providers need to recoup their infrastructure expenses, and therefore can't offer free and open source solutions, doesn't make a lot of sense. If Zoom can be free of cost for basic usage, there is no reason it can't also be open source.

Admittedly, making more cloud services open source would not solve the fundamental issue discussed above regarding the control that users surrender when they run code on a server owned by someone else. But it would at least provide users with some ability to understand how the SaaS applications or public cloud IaaS services they use work, as well as greater opportunity to extend and improve them.

Imagine a world in which the source code for Facebook or Gmail were open, for example. I suspect there would be much less concern about privacy issues, and much greater opportunity for third parties to build great solutions that integrate with those platforms, if anyone could see the code.

But, for now, these visions seem unrealistic. There is little sign that open source within the cloud will break out beyond the private cloud and application deployment niches where it already dominates. And that's a shame for anyone who agrees with Linus Torvalds that software, among other things, is better when it's free.

Read more:

Why Cloud-Based Architectures and Open Source Don't Always Mix - ITPro Today

The Government releases the source code of its Radar COVID tracking app and publishes it on GitHub – Explica

On September 1, SEDIA (the Secretary of State for Digitalization and Artificial Intelligence) announced the imminent code release of its controversial Radar COVID contact-tracing app, with which it intends to trace the contacts of coronavirus patients and thus detect other infected people in good time.

It then scheduled that release for today, September 9, justifying the move on grounds of transparency and its intention that the community help improve the app, although many are now wondering why it waited until such an advanced stage of the rollout if it really wanted the developer community's help.

The delay in publishing the code, given that it had already been said at the beginning of August that the intention was for the final version to be open source, is due to the fact that the government wanted to wait for all the autonomous communities (CCAA) that requested the app to have it integrated into their systems.

Transparency is a fundamental aspect of this code release, as many voices had been raised criticizing the possible malicious uses the government could make of information as sensitive as the complete list of the app's users that each of us would have encountered in the last week.

Now, having the code enables programming experts to take a look under the hood of the application, in order to confirm whether the actual handling of personal data observable in the app's code matches what SEDIA previously explained, as well as to rule out the existence of hidden functionality.

The code has been available for a few minutes on GitHub, divided into five repositories that correspond to the pieces of software that make up the tracking system:

The applications for users: both versions of the app (iOS and Android) are developed entirely in Kotlin.

The DP-3T server: This software, developed in Java, is a fork of the original DP-3T.

The verification service server: This software, developed in Java, allows the autonomous communities (CCAA) to request verification codes to provide to COVID-19 patients.

The configuration service server: This software, developed in Java, allows user applications to obtain information about the Autonomous Communities and the available languages.

As a negative point, it is striking that the application documentation (with instructions that allow, for example, compiling the apps on our own computers) is entirely in English.

Radar Covid bases its operation on an API developed jointly by Google and Apple, based in turn on a European protocol developed at the Swiss Federal Institute of Technology by a team led by the Spanish engineer Carmela Troncoso.

Said protocol, called DP-3T after the acronym of its full name in English, Decentralized Privacy-Preserving Proximity Tracing, has been explained in detail, along with the operation of the API, by our colleagues at Xataka.
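For readers curious what "decentralized" means in practice, here is a minimal Python sketch of the general DP-3T idea, under simplifying assumptions: a phone keeps a daily secret key, rotates it by hashing, and derives short-lived ephemeral IDs to broadcast over Bluetooth. The function names and the hash-counter stream are illustrative stand-ins (the real protocol specifies AES in counter mode as the PRG); this is not the audited reference implementation.

```python
# Illustrative sketch of the DP-3T idea -- NOT the reference implementation.
import hashlib
import hmac

def next_day_key(sk: bytes) -> bytes:
    """Rotate the daily secret: SK_{t+1} = H(SK_t)."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk: bytes, n: int = 96) -> list:
    """Derive n 16-byte broadcast IDs for the day from SK_t."""
    seed = hmac.new(sk, b"broadcast key", hashlib.sha256).digest()
    stream, counter = b"", 0
    while len(stream) < n * 16:  # hash-counter stream stands in for a PRG
        stream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return [stream[i * 16:(i + 1) * 16] for i in range(n)]

# A diagnosed patient uploads only their daily keys for the contagious window;
# every other phone recomputes the IDs locally and compares them against the
# IDs it overheard, so no central server ever learns who met whom.
sk = hashlib.sha256(b"demo seed; use os.urandom(32) in practice").digest()
print(ephemeral_ids(sk, 3)[0].hex())                # today's first ID
print(ephemeral_ids(next_day_key(sk), 3)[0].hex())  # unlinkable tomorrow
```

The design choice worth noticing is that matching happens on the phone, which is what the manifesto's signatories want to verify against the published code.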

DP-3T is subject to an open source license, the Mozilla Public License 2.0, which allows reusing the code in applications under other licenses, both proprietary ones and those with a clear commitment to free software (such as the GNU license used, for example, by Linux).

And that license, the MPL 2.0, is also the one chosen by SEDIA to release the Radar COVID code, despite the fact that another license (the EUPL, or European Union Public License) was created precisely to make it easier for EU public administrations to release the code of their technological developments.


Visit link:

The Government releases the source code of its Radar COVID tracking app and publishes it on GitHub - Explica

Microsoft makes its Fluid Framework open source, the TypeScript library for creating web applications with real-time collaboration – Explica

After promising last May that the Fluid Framework would become open source, Microsoft has finally released the code and posted it on GitHub. The library was very well received at Build 2019, Microsoft's developer conference.

The idea behind Fluid Framework is to offer developers a platform to create collaborative low-latency experiences around documents, in the same way that Microsoft itself is using it within Office applications.

The idea is to offer applications that allow a user to make changes in the browser, such as adding comments, editing text, or pressing a button, which the rest of the collaborating users can see almost instantly.

It is something like offering a framework for developers to create applications in the style of Google Docs with collaboration in near real time, but with even more features.

Additionally, this Microsoft technology enables developers to leverage a client-centric application model with persistent data that does not require writing custom server-side code.

All documentation is available at fluidframework.com, although due to the huge traffic the site is experiencing, it has been working intermittently. In addition, there are some demos available at fluidframework.com/playground, among them a small puzzle game in which thousands of people made changes to the puzzle in real time, with each user able to see the thousands of edits and updates that the others made.


View original post here:

Microsoft makes its Fluid Framework open source, the TypeScript library for creating web applications with real-time collaboration - Explica

Remote Work Doesn’t Have to Mean All-Day Video Calls – Harvard Business Review

The Covid-19 crisis has distanced people from the workplace, and employers have generally, if sometimes reluctantly, accepted that people can work effectively from home. As if to compensate for this distancing and keep the workplace alive in a virtual sense, employers have also encouraged people to stick closely to the conventional workday. The message is that working from home is fine and can even be very efficient as long as people join video calls along with everyone else all through the day.

But employees often struggle with the workday when working from home, because many have to deal with the competing requests coming from their family, also housebound. So how effective really is working from home if everyone is still working to the clock? Is it possible to ditch the clock?

The answer seems to be that it is. Since before the pandemic, we've been studying the remote work practices of the tech company GitLab to explore what it might look like if companies broke their employees' chronological chains as well as their ties to the physical workplace.

From its foundation in 2014, GitLab has maintained an all-remote staff that now comprises more than 1,300 employees spread across over 65 countries. The "git way" of working uses tools that let employees work on ongoing projects wherever they are in the world and at their preferred time. The idea is that because it's always 9 to 5 somewhere on the planet, work can continue around the clock, increasing aggregate productivity. That sounds good, but a workforce staggered in both time and space presents unique coordination challenges with wide-ranging organizational implications.

The most natural way to distribute work across locations is to make it modular and independent, so that there is little need for direct coordination: workers can be effective without knowing how their colleagues are progressing. This is why distributed work can be so effective for call centers and in patent evaluation. But this approach has its limits in development and innovation related activities, where the interdependencies between components of work are not always easy to see ahead of time.

For this kind of complex work, co-location with ongoing communication is often a better approach because it offers two virtues: synchronicity and media richness. The time lag in the interaction between two or more individuals is almost zero when they are co-located, and, although the content of the conversation may be the same in both face-to-face and virtual environments, the technology may not be fully able to convey soft social and background contextual cues (how easy is it to sense other people's reactions in a group Zoom meeting?).

All this implies that simply attempting to replicate online (through video or voice chat) what happened naturally in co-located settings is unlikely to be a winning or complete strategy. Yet this approach of "seeing the face" is the one that people seem to default to when forced to work remotely, as our survey of remote working practices in the immediate aftermath of lockdowns around the world has revealed.

There is a way through this dilemma. Our earlier research on offshoring of software development showed that drawing on tacit coordination mechanisms, such as a shared understanding of work norms and context, allows for coordination without direct communication.

Coordination in this case happens through the observation of the action of other employees and being able to predict what they will do and need based on shared norms. It can occur either synchronously (where, for instance, two people might work on the same Google doc during the same time period), or asynchronously (when people make clear hand-offs of the document, and do not work on it when the other is).

Software development organizations often opt for this solution and tend to rely extensively on shared repositories and document authoring tools, with systems for coordinating contributions (e.g., continuous integration and version control tools). But GitLab is quite unique in the for-profit sector in how extensively it relies on this third path not only for its coding but for how the organization itself functions. It leans particularly on asynchronous working because its employees are distributed across multiple time zones. As a result, although the company does use videoconferencing, almost no employee ever faces a day full of video meetings.

At the heart of the engineering work that drives GitLab's product development is the git workflow process invented by Linux creator Linus Torvalds. In this process, a programmer making a contribution forks (copies) the code, so that it is not blocked for other users, works on it, and then makes a merge request to have the edited version replace the original; this new version then becomes available for further contributions.

The process combines the possibility of distributed asynchronous work with a structure that checks for potential coordination failures and ensures clarity on decision rights. Completely electronic (which makes remote work feasible) and fully documented, it has become an important framework for distributed software development in both for-profit and open source contexts.

GitLab has taken the git a step further, applying it also to managerial work that involves ambiguity and uncertainty. For instance, GitLab's chief marketer recently outlined a vision for integrating video into the company's year-ahead strategy. He requested asynchronous feedback from across the company within a fixed time window, and then scheduled a single synchronous meeting to agree on a final version of the vision. The vision triggered asynchronous input: changes from multiple contributors to the company's handbook pages relating to marketing objectives and key results, which were merged on completion.

GitLab's high degree of reliance on asynchronous working is made possible by respecting the following three rules, right down to the task level:

1. Separate responsibility for doing the task from the responsibility for declaring it done.

In co-located settings, where employees are in the same office, easy communication and social cues allow them to efficiently resolve ambiguities and manage conflict around work responsibilities and remits. In remote settings, however, this can be difficult. In GitLab, therefore, every task is expected to have a Directly Responsible Individual (DRI), who is responsible for the completion of the task and has freedom in how it should be performed.

The DRI, however, does not get to decide whether the task has been completed. That function is the responsibility of a Maintainer, who has the authority to accept or reject the DRI's merge requests. Clarity on these roles for every task helps reduce confusion and delays, and enables multiple DRIs to work in parallel in any way they want on different parts of a code base by making local copies (forking). It is the Maintainer's role to avoid unnecessary changes and maintain consistency in the working version of the document or code.

In a non-software context, say in developing the GitLab handbook page on expenses policies, individual DRIs, who could be anyone in the company, would write specific policies in any way they choose, and their contributions would be accepted or rejected by the CFO acting in the capacity of Maintainer, who could also offer feedback (but not direction) to the DRIs. Once live, the merged page serves as the single source of truth on expenses policies unless or until someone else makes a new proposal. Once more, the Maintainer would approve, reject, or offer feedback on the new proposal. In contexts like this, we would expect people in traditional management positions to serve as Maintainers, a division of rights illustrated in the sketch below.
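As a toy illustration of this separation between doing work and declaring it done (the class and variable names are hypothetical; GitLab's real mechanism is the merge request flow described above), consider this Python sketch:

```python
# Toy model of the DRI/Maintainer split -- illustrative names only.
from dataclasses import dataclass

@dataclass
class Document:
    content: str  # the single source of truth

@dataclass
class MergeRequest:
    author: str    # the DRI who did the work
    proposed: str  # the DRI's edited fork of the document

class Maintainer:
    """Holds the sole authority to declare a task 'done'."""
    def __init__(self, doc: Document):
        self.doc = doc

    def review(self, mr: MergeRequest, accept: bool) -> str:
        if accept:
            self.doc.content = mr.proposed  # merge: fork replaces original
            return f"merged {mr.author}'s change"
        return f"rejected {mr.author}'s change with feedback"

# Two DRIs fork the document and work in parallel; neither blocks the other.
doc = Document("expenses policy v1")
mr_a = MergeRequest("alice", doc.content + " + travel rules")
mr_b = MergeRequest("bob", doc.content + " + per-diem rules")

cfo = Maintainer(doc)  # e.g., the CFO acting as Maintainer for policy pages
print(cfo.review(mr_a, accept=True))   # alice's fork becomes the new truth
print(cfo.review(mr_b, accept=False))  # bob gets feedback, not direction
```

The point of the design is that the Maintainer never edits a fork directly; they only merge, reject, or comment, which keeps decision rights unambiguous while DRIs work in parallel.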

2. Respect the minimum viable change principle.

When coordination is asynchronous, there is a risk that coordination failures may go undetected for too long; for instance, two individuals may be working in parallel on the same problem, making one of their efforts redundant, or one person may be making changes that are incompatible with the efforts of another. To minimize this risk, employees are urged to submit the minimum viable change: an early-stage, imperfect version of their suggested changes to code or documents. This makes it more likely that people will pick up on whether work is incompatible or being duplicated. Obviously, a policy of minimum viable changes should come with a "no shame" policy on delivering a temporarily imperfect output. In remote settings, the value of knowing what the other is doing as soon as possible is greater than getting the perfect product.

3. Always communicate publicly.

As GitLab team members are prone to say, "we do not send internal email here." Instead, employees post all questions and share all information on their teams' Slack channels, and later the team leaders decide what information needs to be permanently visible to others. If so, it gets stored in a place available to everyone in the company: in an issue document or on a page in the company's online handbook, which is accessible to anyone, in or outside the company. This rule means that people don't run the risk of duplicating, or even inadvertently destroying, the work of their colleagues. Managers devote a lot of time to curating the information generated through the work of the employees they supervise, and are expected to know better than others what information may be broadly needed by a future team or useful to people outside the company.

However well implemented, asynchronous remote working of this kind cannot supply much in the way of social interaction. That's a major failing, because social interaction is not only a source of pleasure and motivation for most; it is also where the random encounters, the serendipitous exchanges by the coffee machines and lift lobbies, create opportunities for ideas and information to flow and recombine.

To minimize this limitation, GitLab provides occasions for non-task related interaction. Each day, team members may attend one of three optional social calls staggered to be inclusive of time zones. The calls consist of groups of 8-10 people in a video chatroom, where they are free to discuss whatever they want (GitLab provides a daily starting question as an icebreaker in case it's needed, such as: "What did you do over the weekend?" or "Where is the coolest place you ever traveled and why?").

In addition, GitLab has social Slack groups: thematic chat rooms that employees with similar interests can participate in (such as #cat, #dogs, #cooking, #mental_health_aware, #daily_gratitude, and #gaming) and a #donut_be_strangers channel that allows strangers with a mutual interest to get together for a coffee chat.

Of course, GitLab managers are under no illusion that these groups substitute perfectly for the kinds of rich social interactions outside work that people find rewarding. But they do help to keep employees connected, and, at a time when many employees have been working under confinement rules, this has proved very helpful in sustaining morale.

***

Working from home in an effective way goes beyond just giving employees a laptop and a Zoom account. It encompasses practices intended to compensate for or avoid the core limitations of working remotely, as well as to fully leverage the flexibility that remote work can offer: working not only from anywhere but at any desired time. We have focused on GitLab not only because it has extensive experience in remote working but also because it pursues an unusual mode of solving the intrinsic challenges of remote work. While some of GitLab's core processes (like its long, remote onboarding process for new hires) and advantages (like the possibility of hiring across the world) cannot be fully reproduced in the short run in companies that will be just temporarily remote, there are others that any company can easily implement.

Go here to see the original:

Remote Work Doesn't Have to Mean All-Day Video Calls - Harvard Business Review

Bitcoin (BTC) has come to an end, sell everything. – IdahoReporter.com

Eleven years after Mr. Nakamoto created Bitcoin, we are witnessing history. Crypto as we know it is dying, slowly but surely. No, the BTC price will not recuperate from here. These are probably the last weeks above $10K.

Back in January 2009, Satoshi Nakamoto released the open-source code that marks the start of Bitcoin's life. Nakamoto mined the starting block of the chain, known as the genesis block.
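For readers unfamiliar with what "mining" a block involves, here is a hedged Python sketch of Bitcoin-style proof-of-work: search for a nonce whose double-SHA256 digest of the block header falls below a difficulty target. The header and difficulty here are toys; real Bitcoin headers are 80-byte binary structures and the network's target is astronomically harder.

```python
# Toy proof-of-work in the style of Bitcoin mining -- illustrative only.
import hashlib

def mine(header: bytes, difficulty_bits: int = 20) -> int:
    """Find a nonce whose double-SHA256 hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"genesis block demo")
print(f"found nonce {nonce} after roughly 2^20 expected attempts")
```

Raising `difficulty_bits` doubles the expected work per extra bit, which is the knob the real network turns to keep block times near ten minutes.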

But you know the rest of the story: ups and downs and more ups and downs. Millions were invested in cryptocurrencies, the price of one Bitcoin peaked at $20K in 2017, and now we are here. BTC investors thought that things were heating up again and that we would see another $20K price tag this year, but everything went backwards.

BTC shed $1,000 of its value in less than a month. Even though the BTC chart shows that this is actually just a small correction, I am pretty sure that BTC has hit a brick wall. In my opinion, the main reason for this is the huge rally we saw on Nasdaq and the NYSE.

On a year-to-date basis, BTC gained a measly 30% reward for its long-term holders. If you are now laughing at me and asking yourself how 30% could be low, I will explain. Preached as an alternative to fiat currencies, BTC failed to prove that during the COVID-19 outbreak in March. Dragged down by fear, BTC went below the $5K mark and gradually recovered from there, but it did so because all markets recovered.

Nasdaq, the NYSE, and gold all recovered from the huge sell-off in March. Gold even went to an all-time high on us. And pretty much all markets offered similar rewards to investors: from the March bottom to the August peaks, all numbers went up almost 100%. So, why invest in crypto when you can invest in real companies with tangible products and SaaS companies with huge profits? This is what is pushing investors away from crypto.

And that's not to mention the wave of SPACs (even crypto big boys are now using this scheme to enter Nasdaq), where investors saw some massive (over 300%) gains in a matter of days or weeks. Nor the new and emerging markets such as online betting apps (LCA, DKNG), the hydrogen future (NKLA, SHLL), and finally the Apple of China (Xiaomi) or the Teslas of China (Nio and Xpev). These companies still offer a tremendous growth opportunity. While these are hand-picked stocks, there are still hundreds of other companies with 100%+ upside from here.

Blockchain tech will stay with us, but the golden days of crypto coins are finished. The sooner you realize that, the better for you.

I ask you now. Why invest in shit(alt)-coins when you can get better ROI on technologies of the future? Seems like many investors are asking the same question.

Excerpt from:

Bitcoin (BTC) has come to an end, sell everything. - IdahoReporter.com

A hundred academics demand more transparency from the Government with the Radar Covid app | Technology – Explica

More than 110 renowned Spanish academics, most of them technology experts, published a manifesto this Saturday calling for more transparency from the Government in the development of software as sensitive as Radar Covid, the public exposure notification app. In the text, they ask that the promised publication of the app's code be exhaustive, well documented, and cover all stages of the app's development, from its inception to future changes. Throughout its almost three pages, the signatories applaud the innovative milestone for Spanish public health that the tool represents, but regret that the Secretary of State for Digitalization and Artificial Intelligence, responsible for the application, has to date published no documentation on the design of Radar Covid, on its implementation, or on the integration process of the Autonomous Communities.

The Secretary of State, after constant criticism for not fulfilling the commitment to bring the source code to light, has promised to publish it next Wednesday. But it is not yet clear how deep and sustained the Government's gesture will be: "The opening of the code must be accompanied by complete documentation and information, so that the scientific community and civil society have the necessary scrutiny capacity to identify points to improve and contribute to developing and deploying Radar Covid according to the highest standards," the manifesto indicates.

Among the signatories of the manifesto are Daniel Innerarity, Professor of Political and Social Philosophy; Carme Torras, professor at the Robotics Institute of the CSIC and member of the Government's National Council of Artificial Intelligence; Itziar de Lecuona, UNESCO Professor of Bioethics at the University of Barcelona and member of the multidisciplinary working group of the Ministry of Science; Carmela Troncoso, promoter of the DP-3T protocol used by the Radar Covid app and recently named by Fortune magazine as one of its most promising figures under 40; Ricardo Baeza-Yates, professor of Data Sciences and member of the Government's National Council of Artificial Intelligence; Miguel Luengo-Oroz, head of data for United Nations Global Pulse; Maribel González Vasco, professor of Applied Mathematics at Rey Juan Carlos University; Lorenzo Cotino, Professor of Constitutional Law at the University of Valencia; Josep Domingo-Ferrer, UNESCO Chair in Data Privacy; Juan Tapiador, professor of Computer Science at Carlos III University; and José Molina Molina, president of the Transparency Council of the Region of Murcia.

To questions from EL PAÍS, sources from the Secretary of State insist on their commitment to publish the code on September 9: "We will comply with the commitment to publish on the day, and faster than expected. It is something unprecedented in the Spanish public administration and an exercise in transparency," they say, adding: "Let's hope that when the code is released, people look at it, fiddle around, and help to verify and improve the tool."

The manifesto praises the achievement of the Spanish Administration in launching an app like Radar Covid, but a tool with such penetration (more than 3.4 million downloads), so sensitive, and so dependent on trust needs an exemplary and flawless process that can serve as a precedent for future software developments: "There is no technology without flaws, and therefore multidisciplinary scrutiny is necessary to achieve the best result," they say in the text. Only open, joint work, they continue, can efficiently identify potential biases and errors in the conceptualization and implementation of the application that may lead to undesired effects in terms of discrimination and violation of rights. Nothing in the text implies that there are errors or problems with the app, but the only way to know is through public scrutiny. To make that possible, and after waiting for weeks for the insides of the application to be made known, they establish a series of essential elements that the Government must publish.

One of the most relevant points is access to the code that allows analyzing all the elements of the tracking system, including the servers, governance, and the app itself, which has already been downloaded by more than 3.4 million Spaniards. Where are the servers, who manages them, and what security measures have been adopted, both for the deployment at the national level and relative to the autonomous communities, ask the academics, who also request the evolution of the code since the beginning of the initiative. "The revision of previous versions is necessary because not all users periodically update their mobiles," they add.

The transparency required of Radar Covid does not only concern technical aspects. Building the application in a certain way depends on another series of decisions, such as the adoption of the decentralized communication protocol in order to preserve the anonymity of users. For this reason, they understand that it is vital to have the system design report, detailing the analyses that led to the choice of configuration parameters and the use of the Google and Apple exposure notification API, the implemented mechanisms, and the libraries and services used to evaluate the security and privacy of the data, as well as the evaluation of the inclusiveness and accessibility of the design.

Privacy has aroused a certain suspicion in society. The Government and numerous experts have maintained that Radar Covid respects it at all times. The use of Bluetooth and built-in protocols, such as the generation of the random alphanumeric codes that phones exchange with each other, prevents individual identification. To verify this, the signatories demand a detailed report containing the application's monitoring mechanisms and the associated mechanisms to ensure privacy and compliance with data protection regulations, referring to the data collected both during the pilot and in the production phase.

With the intention of settling any doubts and democratizing a process as novel in Spain as the construction of a useful app against a pandemic, they also require an impact assessment on data protection, based on the design report and the risk analysis associated with the application, as well as identification of the responsibilities and roles played in the project by private entities.

Until the Secretary of State releases the code, the manifesto recalls that Radar Covid is simply a complementary measure: it does not replace manual contact tracers or remove the need to maintain a safe distance or wear masks. "In order to guarantee the impact of the application, it is necessary to adopt legal and budgetary measures of social support that allow users to follow the recommendations of the app without suffering economic, labor or social harm," the academics say.

Under the idea of tackling the health emergency on all fronts, the signatories go beyond the technological issue. In their opinion, all the effort made must be accompanied by supervision that identifies potential discriminatory abuses in areas such as housing, the labor market, and education. "Only a joint interdisciplinary effort, together with civil society, can efficiently identify potential biases and errors in the conceptualization and implementation of the application that can lead to undesired effects," they reason.


View original post here:

A hundred academics demand more transparency from the Government with the Radar Covid app | Technology - Explica

What’s the point: Red Hat Marketplace, JDK version control, and Visual Studio Codespaces DEVCLASS – DevClass

After keeping its business under its hat for a couple of months, the IBM acquisition and venerable open source software purveyor has now made its open cloud marketplace, Red Hat Marketplace, generally available. The service is operated by parent company Big Blue and is supposed to offer one curated repository of tools and services for hybrid cloud computing.

In Red Hat's case, the latter is another way of saying OpenShift, since the service really provides a variety of paid software certified to run on the company's container application platform. At the time of writing, the Marketplace contains 62 products from categories ranging from security and monitoring to logging, tracing, and machine learning.

Organisations interested in a more bespoke offer can choose to set up a private marketplace with Red Hat Marketplace Select. Those can then be made to include only pre-approved services, giving admins a way of creating a sort of self-service portal for development teams, with options to track usage and spending across cloud environments.

Developers looking to help move OpenJDK forward no longer need to learn the version control system Mercurial to participate in the project. The transition of the Java implementation's jdk/jdk and jdk/sandbox repositories to Git, GitHub, and Skara was completed last weekend, with a getting-started guide available for those who need help getting going again.

Users working with the JDK Updates project need to be aware that the associated repositories still use Mercurial, so a quick glance at the wiki might be helpful. To keep things from getting too complicated, the Skara CLI tooling is promised to be backward compatible with Mercurial, and help is meant to be available via the skara-dev mailing list or IRC.

Microsoft is ending its Visual Studio Codespaces experiment and looks to consolidate the in-browser IDE formerly known as Visual Studio Online with GitHub Codespaces. VS Codespaces will be retired in February 2021, though current users still can create new plans and codespaces until 20 November.

Self-hosting, which some organisations saw as a major selling point of VSC, isn't in the cards for GitHub Codespaces, and neither is a way to migrate codespaces set up with the VS flavour to GitHub, meaning that they have to be recreated from scratch. However, GitHub Codespaces is still in limited public beta, which means VSC users might have to wait a while until they are added to the club and are actually able to access the alternative offering anyway.

The move to axe the Visual Studio product is the result of confusion amongst users, who found the distinct experiences tricky to handle. Since GitHub also belongs to the Microsoft family, the merger will help save resources that potentially could be used to address customer woes quicker.

Visual Studio Code has gotten a new extension: in-memory data store Azure Cache for Redis is now available as a preview. The addition can be found in the Visual Studio Code Marketplace or via the extension tab and is useful for viewing, testing, and debugging caches.
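For those who would rather poke at a cache from a script than from the editor, below is a minimal sketch using the redis-py client; the defaults target a local Redis server, and the commented-out arguments show the shape of an Azure Cache for Redis connection (the host name and access key are placeholders).

```python
# Quick smoke test of a Redis cache with the redis-py client
# (pip install redis). Azure Cache for Redis uses TLS on port 6380,
# hence the ssl=True in the commented placeholder arguments.
import redis

r = redis.Redis(
    host="localhost", port=6379,
    # host="<name>.redis.cache.windows.net", port=6380,
    # password="<access-key>", ssl=True,
)

r.set("greeting", "hello from the cache", ex=60)  # expire after 60 seconds
print(r.get("greeting"))                          # b'hello from the cache'
```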

More:

What's the point: Red Hat Marketplace, JDK version control, and Visual Studio Codespaces DEVCLASS - DevClass