Synopsys expert on proactive application security strategies for uncertain times – Intelligent CIO ME

As cybercriminals take advantage of the fear and uncertainty surrounding the pandemic, it's crucial that organisations ensure the software they build and operate is secure despite reduced resources. Adam Brown, Associate Managing Security Consultant, Synopsys, talks us through the steps organisations can take to improve their application security programmes to protect organisational data and that of their customers.

In 2020, organisations have been faced with the prospect of months of staffing and Business Continuity challenges. Concurrently, cyberattacks by opportunistic hackers and cybercrime groups looking to profit or further disrupt society are on the rise. Organisations must ensure the software they build and operate is secure against these increasing attacks, even as their available security resources may be decreasing.

And a remote workforce is only one of the challenges organisations face in terms of securing their digital properties and sensitive data. While many companies want to invest in security, they may not know where to start. After all, it's a challenging endeavour to identify where and how to secure your most valuable or vulnerable projects.

It's a daunting task. However, by tactically addressing their security testing capacity, staff skills and software supply chain risks today, organisations can respond to resource challenges now while fundamentally improving the effectiveness of their AppSec programme going forward. Here's how.

Establish a benchmark and mature your strategy

Get started by gathering a full understanding of what your organisation's security activities involve. The Building Security In Maturity Model (BSIMM) is not a how-to guide, nor is it a one-size-fits-all prescription. A BSIMM assessment reflects the software security activities currently in place within your organisation, giving you an objective benchmark from which to begin building or maturing your software security strategy.

The BSIMM, now in its 11th iteration, is a measuring stick and can be used to inform a roadmap for organisations seeking to create or improve their SSIs, not by prescribing a set way to do things but by showing what others are already doing.

Previous years' reports have documented that organisations have been successfully replacing manual governance activities with automated solutions. One reason for this is the need for speed, otherwise known as feature velocity. Organisations are doing away with the high-friction security activities conducted by the software security group (SSG) out-of-band and at gates. In their place is software-defined lifecycle governance.

Another reason is a people shortage: the skills gap has been a factor in the industry for years and continues to grow. Assigning repetitive analysis and procedural tasks to bots, sensors and other automated tools makes practical sense and is increasingly the way organisations are addressing both that shortage and time management problems.

But while the shift to automation has increased velocity and fluidity across verticals, the BSIMM11 finds that it hasn't put the control of security standards and oversight out of the reach of humans.

Apply a well-rounded risk mitigation strategy

In fact, the roles of today's security professionals and software developers have become multi-dimensional. With their increasing responsibilities, they must do more in less time while keeping applications secure. As development workflows continue to evolve to keep up with organisational agility goals, they must account for a variety of requirements, including:

This is the reality around which organisations build and/or consume software. Over the years we've witnessed the use and expansion of automation in the integration of tools such as GitLab for version control, Jenkins for continuous integration (CI), Jira for defect tracking and Docker for container integration within toolchains. These tools work together to create a cohesive automated environment that is designed to allow organisations to focus on delivering higher quality innovation faster to the market.

Through BSIMM iterations we've seen that organisations have realised there's merit in applying and sharing the value of automation by incorporating security principles at appropriate security touchpoints in the software development life cycle (SDLC), shifting the security effort left. This creates shorter feedback loops and decreases friction, which allows engineers to detect and fix security and compliance issues faster and more naturally as part of software development workflows.

More recently, a 'shift everywhere' movement has been observed through the BSIMM as a graduation from 'shift left', meaning firms are not just testing early in development but conducting security activity as soon as possible and with the highest fidelity that is practical. As development speeds and deployment frequencies intensify, security testing must complement these multifaceted, dynamic workflows. If organisations want to avoid compromising security or delaying time to market, directly integrating security testing is essential.

Since organisations' time to innovate continues to accelerate, firms must not abdicate their security and risk mitigation responsibilities. Managed security testing delivers the key people, process and technology considerations that help firms maintain the desired pace of innovation, securely.

In fact, the right managed security testing solutions will provide the ability to invert the relationship between automation and humans, where the humans powering the managed service act out-of-band to deliver high-quality input in an otherwise machine-driven process, rather than the legacy view in which automation augments and/or complements human process.

It also affords organisations the application security testing flexibility required while driving fiscal responsibility. Organisations gain access to the brightest minds in the cybersecurity field when they need them, without paying for them when they don't; they simply draw on that expertise as needed to address current testing resource constraints. This results in unrivalled transparency, flexibility and quality at a predictable cost, and provides the data required to remediate risks efficiently and effectively.

Enact an open source management strategy

And we must not neglect the use of open source software (OSS), a substantial building block of most, if not all, modern software. Its use is persistently growing, and it provides would-be attackers with a relatively low-cost vector to launch attacks on a broad range of entities that comprise the global technology supply chain.

Open source code provides the foundation of nearly every software application in use today across almost every industry. As a result, the need to identify, track and manage open source components and libraries has increased exponentially. License identification, processes to patch known vulnerabilities and policies to address outdated and unsupported open source packages are all necessary for responsible open source use. The use of open source isn't the issue, especially since reuse is a software engineering best practice; it's the use of unpatched OSS that puts organisations at risk.

The 2020 Open Source Security and Risk Analysis (OSSRA) report contains some concerning statistics. Unfortunately, the time it takes organisations to mitigate known vulnerabilities is still unacceptably high. For example, six years after initial public disclosure, 2020 was the first year the Heartbleed vulnerability was not found in any of the audited commercial software that forms the basis of the OSSRA report.

Notably, 91% of the codebases examined contained components that were more than four years out of date or had no development activity in the last two years, exposing those components to a higher risk of vulnerabilities and exploits. Furthermore, the average age of vulnerabilities found in the audited codebases was a little less than 4 years. The percentage of vulnerabilities older than 10 years was 19% and the oldest vulnerability was 22 years old. It is clear that we (as open source users) are doing a less than optimal job in defending ourselves against open source enabled cyberattacks.

To put this in a bit more context, 99% of the codebases analysed for the report contained open source software; of those, 75% contained at least one vulnerability and 49% contained high-risk vulnerabilities.

If you're going to mitigate security risk in your open source codebase, you first have to know what software you're using and what exploits could impact its vulnerabilities. One increasingly popular way to get such visibility is to obtain a comprehensive bill of materials from your suppliers (sometimes referred to as a build list, or a software bill of materials, SBOM). The SBOM should contain not only all open source components but also the versions used, the download locations for each project and all dependencies, the libraries the code calls and the libraries to which those dependencies link.
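To make the idea concrete, here is a minimal sketch of the kind of record an SBOM entry might capture, written as a plain Python structure rather than a formal SPDX or CycloneDX document; the component names, versions and URLs are hypothetical examples, not real projects.

```python
# Illustrative only: a toy, in-memory view of SBOM entries. The fields mirror the
# items listed above (component, version, download location, dependencies); the
# example component is made up.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SbomEntry:
    name: str                 # open source component
    version: str              # exact version in use
    download_location: str    # where the project was obtained
    dependencies: List[str] = field(default_factory=list)  # libraries it pulls in

sbom = [
    SbomEntry(
        name="example-http-lib",
        version="2.4.1",
        download_location="https://example.org/example-http-lib",
        dependencies=["example-tls-lib==1.0.3"],
    ),
]

# One inventory question an SBOM answers immediately: do we ship this component?
print(any(entry.name == "example-http-lib" for entry in sbom))
```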

Modern applications consistently contain a wealth of open source components with possible security, licensing and code quality issues. At some point, as that open source component ages and decays (with newly discovered vulnerabilities in the code base), it's almost certainly going to break or otherwise open a codebase to exploit. Without policies in place to address the risks that legacy open source can create, organisations open themselves up to the possibility of issues in their cyber assets that are 100% dependent on software.

Organisations need clearly communicated processes and policies to manage open source components and libraries; to evaluate and mitigate their open source quality, security and license risks; and to continuously monitor for vulnerabilities, upgrades and the overall health of the open source codebase. Clear policies covering introduction and documentation of new open source components can help to ensure control over what enters the codebase and that it complies with company policies.

There's no finish line when it comes to securing the software and applications that power your business. But it is critically important to manage and monitor your assets as well as to have a clear view into your software supply chain. No matter the size of your organisation, the industry in which you conduct business, the maturity of your security programme or budget at hand, there are strategies you can enact today to progress your programme and protect your organisational data and that of your customers.


Read this article:

Synopsys expert on proactive application security strategies for uncertain times - Intelligent CIO ME

The Future of Software Supply Chain Security: A focus on open source management – Global Banking And Finance Review

By Pete Bulley, Director of Product, Aire

The last six months have brought the precarious financial situation of many millions across the world into sharper focus than ever before. But while the figures may be unprecedented, the underlying problem is not a new one and it requires serious attention as well as action from lenders to solve it.

Research commissioned by Aire in February found that eight out of ten adults in the UK would be unable to cover essential monthly spending should their income drop by 20%. Since then, Covid-19 has increased the number of people without employment by 730,000 between March and July, and saw 9.6 million furloughed as part of the job retention scheme.

The figures change daily but here are a few of the most significant: one in six mortgage holders had opted to take a payment holiday by June. Lenders had granted almost a million credit card payment deferrals, provided 686,500 payment holidays on personal loans, and offered 27 million interest-free overdrafts.

The pressure is growing for lenders and with no clear return to normal in sight, we are unfortunately likely to see levels of financial distress increase exponentially as we head into winter. Recent changes to the job retention scheme are signalling the start of the withdrawal of government support.

The challenge for lenders

Lenders have been embracing digital channels for years. However, we see it usually prioritised at acquisition, with customer management neglected in favour of getting new customers through the door. Once inside, even the most established of lenders are likely to fall back on manual processes when it comes to managing existing customers.

It's different for fintechs. Unburdened by legacy systems, they've been able to begin with digital to offer a new generation of consumers better, more intuitive service. Most often this is digitised, mobile and seamless, and it's spreading across sectors. While established banks and service providers are catching up, offering mobile payments and on-the-go access to accounts, this part of their service is still lagging. Nowhere is this felt harder than in customer management.

Time for a digital solution in customer management

With digital moving higher up the agenda for lenders as a result of the pandemic, many still haven't got their customer support properly in place to meet demand. Manual outreach is still relied upon, which is heavy on both resource and time.

Lenders are also grappling with regulation. While many recognise the moral responsibility they have for their customers, they are still blind to the new tools available to help them act effectively and at scale.

In 2015, the FCA released its Fair Treatment of Customers regulations requiring that consumers are provided with clear information and are kept appropriately informed before, during and after the point of sale.

But when the individual financial situation of customers is changing daily, never has this sentiment been more important (or more difficult) for lenders to adhere to. The problem is simple: the traditional credit scoring methods relied upon by lenders are no longer dynamic enough to spot sudden financial change.

The answer lies in better, and more scalable, personalised support. But to do this, lenders need rich, real-time insight so that they can act effectively, as the regulator demands. It needs to be done at scale and it needs to be done with the consumer experience in mind, with convenience and trust high on the agenda.

Placing the consumer at the heart of the response

To better understand a customer, inviting them into a branch or arranging a phone call may seem the most obvious solution. However, health concerns mean few people want to see their providers face-to-face, and fewer staff are in branches, not to mention the cost and time outlay this would require from lenders.

Call centres are not the answer either. Lack of trained capacity, cost and the perceived intrusiveness of calls are all barriers. We know from our own consumer research at Aire that customers are less likely to engage directly with their lenders on the phone when they feel payment demands will be made of them.

If lenders want reliable, actionable insight that serves both their own needs and their customers', they need to look to digital.

Asking the person who knows best: the borrower

So if the opportunity lies in gathering information directly from the consumer, the solution rests with first-party data. The reasons we pioneer this approach at Aire are clear: firstly, it provides a truly holistic view of each customer to the lender, a richer picture that covers areas that traditional credit scoring often misses, including employment status and savings levels. Secondly, it offers consumers the opportunity to engage directly in the process, finally shifting the balance in credit scoring into the hands of the individual.

With the right product behind it, this can be achieved seamlessly and at scale by lenders. Pulse from Aire provides a link delivered by SMS or email to customers, encouraging them to engage with Aire's Interactive Virtual Interview (IVI). The information gathered from the consumer is then validated by Aire to provide the genuinely holistic view of a consumer that lenders require, delivering insights that include risk of financial difficulty, validated disposable income and a measure of engagement.

No lengthy or intrusive phone calls. No manual outreach or large call centre requirements. And best of all, lenders can get started in just days and they save up to £60 a customer.

Too good to be true?

This still leaves questions. How can you trust data provided directly from consumers? What about AI bias: are the results fair? And can lenders and customers alike trust it?

To look at first-party misbehaviour or gaming, sophisticated machine-learning algorithms are used to validate responses for accuracy. Essentially, they measure responses against existing contextual data and check their plausibility.

Aire also looks at how the IVI process is completed. By looking at how people complete the interview, not just what they say, we can spot with a high degree of accuracy if people are trying to game the system.

AI bias, the system creating unfair outcomes, is tackled through governance and culture. In working towards our vision of a world where finance is truly free from bias or prejudice, we invest heavily in constructing the best model governance systems we can at Aire to ensure our models are analysed systematically before being put into use.

This process has undergone rigorous improvements to ensure our outputs are compliant with regulatory standards and also align with our own company principles on data and ethics.

That leaves the issue of encouraging consumers to be confident when speaking to financial institutions online. Part of the solution is developing a better customer experience. If the purpose of this digital engagement is to gather more information on a particular borrower, the route the borrower takes should be personal and reactive to the information they submit. The outcome and potential gain should be clear.

The right technology at the right time?

What is clear is that in Covid-19, and the resulting financial shockwaves, lenders face an unprecedented challenge in customer management. In innovative new data, in the form of first-party data harnessed ethically, they may just have an unprecedented solution.

Read the original here:

The Future of Software Supply Chain Security: A focus on open source management - Global Banking And Finance Review

Africa leads the way in open access research, says expert – University World News


Neylon questioned how capacity and infrastructure would be maintained, sustained and grown to support future leadership to advance African scholarship, during a webinar as part of Open Access Week organised by the Academy of Science of South Africa (ASSAf). Open Access Week started in 2008 and has been observed globally this year from 19 to 25 October, with the theme 'Open with Purpose: Taking action to build structural equity and inclusion'.

According to Neylon, an overview of the progress made towards open access scholarship shows that there has been massive progress over the last decade.

Kenyatta, Venda among top 100 in the world

Speaking to the theme 'Open Access to Scholarly Literature: Progress and evidence of African leadership', Neylon said several African institutions, including Kenyatta University in Kenya and the University of Venda in South Africa, were among the top 100 universities in the world using open access research.

He said the data showed that there was a wide range of European institutions, as well as Latin American and Asian ones, but also a significant number of African institutions, that were performing better in terms of delivering open access.

'African countries are consistently showing very high levels of open access, and again, showing it right from 2010, from early in this process [of developing open access] through to the current play on sound leadership that has existed over a very long period of time,' said Neylon, citing Ethiopia, Kenya, Nigeria and South Africa as examples of countries with success in advancing open access scholarship.

According to Neylon, one of the key reasons for the African continent's success in open access, which has seen many outperform European and North American institutions, may be due to philanthropic funders such as the National Institutes of Health and the Bill and Melinda Gates Foundation, while in South Africa the National Research Foundation has been playing a pivotal role.

Volume of open access publishing increasing

Using several graphs presented on screen, Neylon said the statistics showed the development of open access among global universities over the past eight or nine years, particularly on the African continent, adding: '... we are seeing increases in the volume of open access publishing, we're seeing increases in the number of open access repositories.'

'At the moment, and we have seen a shift over the last 10 to 15 years from levels of open access around 10%, to global levels of open access of around about 50%, 60% and 70%,' he said.

Neylon said that this was an astounding increase in the volume of research content that is accessible for free, at least for those that have access to high-speed internet. 'There are African institutions that are performing in terms of open access at a level which is equal with the best in the world,' he said.

Code of conduct for researchers

But while several African nations were depicted as models of success in open access, South Africa's national science body ASSAf warned of uncertainty and the need for further guidance on the application of privacy data laws concerning research. It highlighted the importance of developing a code of conduct for research in terms of the Protection of Personal Information Act (POPIA), colloquially called the POPI Act, to ensure certainty, transparency and clarity in the use of personal information for research.

ASSAf believes there should be a code to guide the use of personal information for research in all sectors (including health, social science, genomics, etc) and has begun working on a process to facilitate the development of a Code of Conduct for Research, by engaging stakeholders, including researchers, ethicists and legally trained people.

Stringent penalties, with fines of up to ZAR10 million (US$615,000), apply as part of the country's privacy laws governing data. POPIA became effective on 1 July 2020, with enforcement set to begin on 1 July 2021. It strives to balance the right to privacy with other rights and interests, including the free flow of information within the country and across its borders.

ASSAf will appoint a steering group to guide the development of the code next month, with a writing team due to start by December and a draft expected for comment and discussions by March 2021. Pending further discussions and comments by May, it is envisaged that the code would be submitted by July to Information Regulator Pansy Tlakula for approval.

See the original post:

Africa leads the way in open access research, says expert - University World News

A comprehensive list of reasons why pair programming sucks – The Next Web

This article was originally published on .cult by Mynah Marie. .cult is a Berlin-based community platform for developers. We write about all things career-related, make original documentaries and share heaps of other untold developer stories from around the world.

I fell in love with programming because of the feeling of losing myself in ideas and concepts while being completely alone for hours on end. There's just something about it, you know?

When I decided to enroll in a coding Bootcamp, I thought it would give me the opportunity to meet other people just like me. Little did I know, I was about to meet my nemesis: pair programming.

There are a lot of things I like about Agile development. I even do, now, believe in the power of pair programming. But just because I can see the benefits of this technique doesn't mean I necessarily like it. In fact, I deeply hate it. Not because I think it's not effective, but because, in my case, it took all the fun out of programming.


Here are some benefits of pair programming that I personally experienced:

After a few days of Bootcamp, I had my first traumatizing pair programming experience.

We were solving basic JS challenges. I was the navigator and he was the driver. Even though I hated the fact of not being able to type the code myself, I tried to make the most out of the exercise by asking a lot of questions:

At some point, without any warning, my partner got up and left the room leaving me to my puzzlement. Turns out, someone asking loads of questions every two minutes is pretty annoying to most people.

And there started my long descent to hell.

Goodbye, the good old days when I'd program for 18 hours straight from the comfort of my bed.

Goodbye, the peaceful moments with myself when I'd spend days, sometimes weeks before thinking of talking to another human being.

Goodbye, the joys of working on ideas of my own.

One day, while I was at an emotional all-time low, I confessed to one of the instructors and told him that, literally, I hate pair programming.

His answer couldn't have surprised me more: 'Oh! Yeah, pair programming is horrible.'

Finally, my aversion was acknowledged!

I'm not against pair programming. In fact, I really do believe it's great for some people. I even think it could've been great for me if I'd been paired with more experienced pair programmers. But since we were all learning, most students made horrible partners (me included).

I know there are other people like me out there, who suffered at the hand of this technique and never dared to speak up because, in some cases, it can close doors to potential jobs.

But I'm not looking for a job anymore, so I don't care.

So for your entertainment, here's a comprehensive list of the reasons why I hate pair programming:

Agile, I love you. You taught me the value of working in teams and learning from one another. The experience was horrendous but meaningful nonetheless.

I'm now a freelancer. Back to peace, working for hours on end from the comfort of my home, with minimum human contact. The reality which became a dream is now my reality once more, with the added benefit of financial rewards.

I think I found my path.


Read more:

A comprehensive list of reasons why pair programming sucks - The Next Web

The whys and hows of keeping your cloud secrets – ITProPortal

Putting personal or business secrets and credentials up in the cloud is something most users of web-enabled devices are already doing unwittingly. For example, many are using password managers and form apps or browser extensions to conveniently access login details across devices. It is not a good idea to do this without the right security measures, though.

Storing sensitive information in the cloud requires more than just standard security solutions. The handling of passwords, login details, API tokens, SSH keys, private encryption keys, private certificates, and system-to-system passwords can potentially create vulnerabilities frequently targeted by social engineering and advanced cybersecurity attacks. Organizations need to find a way to make the most of the cloud in storing secrets without compromising security.

A Ponemon Research survey reveals that 90 percent of organizations have been hacked at least once. More than half of those surveyed said that they had little confidence in addressing further attacks. Passwords or credentials are the most common target of hacking. As revealed by the 2019 Verizon Data Breach Investigations Report, around 8 out of 10 breaches exploit compromised credentials.

Why is it important to raise the need to secure cloud secrets? It is because many processes that involve passwords and other secrets are handled without many organizations taking security seriously. A 2019 study by North Carolina State University researchers, for example, exposes the vulnerabilities of GitHub repositories. The study found that over a hundred thousand repositories contain app secrets in source codes.

The study revealed that authentication secrets such as API and cryptographic keys appear to be unprotected in a wide variety of projects. This issue does not only affect open source projects. Even private source code repositories are also prone to unauthorized access to secrets.

The cloud is a highly convenient environment for storing various data. However, it is still relatively new for many organizations. As such, only a few thoroughly understand how it works, let alone how to ensure security in it.

Interestingly, CompTIA found that an overwhelming majority of organizations that use cloud services trust the security afforded by their cloud providers. 'Despite concerns, most cloud users report being confident or very confident (net 85 percent) in their cloud service provider's security,' the study writes. However, the same organizations also said they are reluctant in storing certain types of data in the cloud.

'Even with high confidence in cloud security, many firms are still unwilling to store certain types of data there,' the CompTIA study notes. Firms of all sizes hesitate to put onto the cloud their confidential company financial data, credit card information, employee HR files, confidential IP and trade secrets, customer contacts, and data covered by regulations.

The findings are understandably somewhat contradicting in light of the alarmingly high levels of cyber attacks businesses are exposed to. Organizations, however, can use secrets management procedures that come with the platforms or apps they are using. Also, they can turn to third-party secrets management tools like Akeyless to address the dilemma.

These tools provide a secrets management solution that ensures secrets are safe through distributed fragments cryptography and ephemeral secrets delivery.

To secure company secrets on the cloud, it is necessary to limit visibility and prevent unauthorized access. This entails encryption without creating cumbersome procedures and tedious processes that may only end up creating vulnerabilities, possibly because employees miss a step or are tempted to take shortcuts.

Different platforms and applications come with different methods of securing secrets. Kubernetes, for example, has a feature aptly named Secrets, which makes it possible to save and manage passwords and other sensitive information. The Kubernetes website provides comprehensive details on how to use this feature, which is good, but imagine having to learn how to manage secrets with different platforms and applications.

Employees may have issues with this idea when working with multiple platforms and apps. It is not only tiresome; it can also create vulnerabilities in a cybersecurity system.

This is where secrets management solutions come in handy. Akeyless, for example, provides a unified interface and set of methods to secure secrets regardless of the types of secrets and apps and platforms used. It's basically vault-as-a-service, with plugin capability for popular cloud platforms, including Kubernetes, Terraform, Ansible, Docker, Jenkins, CircleCI, Puppet, Chef, Slack, and many others. This simplifies and enhances the security of secrets management with these platforms.

This results in a seamless way to handle secrets across systems and environments. In general, these tools are designed to automate the security procedures vital in protecting secrets. In cases where there is no encryption implemented, they enforce high-level encryption. They then automatically encrypt and decrypt data as needed by users.
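As a rough sketch of that encrypt-before-storing idea, and assuming the widely used Python cryptography package rather than any particular vendor's product, client-side encryption can look like this:

```python
# Minimal sketch: encrypt a secret locally before it is written to a remote store,
# and decrypt it on retrieval. The secret value is a placeholder; in practice the
# key itself would live in a KMS/HSM, never alongside the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"db-password-example")  # what the cloud store ever sees
plaintext = fernet.decrypt(ciphertext)               # what an authorised caller gets back

assert plaintext == b"db-password-example"
```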

Secrets management platforms are particularly useful to DevOps teams. Privileged access management (PAM) expert Tyler Reese of DevOps.com acknowledges the tendency of many teams to overlook essential security practices. 'What's more, in an environment that relies heavily on code, we've seen time and time again careless developers leaking confidential information through APIs or cryptographic keys on sites such as GitHub,' Reese says.

This post may sound like a recommendation to use third-party secrets management tools, but it is not the main point. The goal here is to emphasize how important it is to secure organization secrets being stored in or transmitted to the cloud.

Generally, there is nothing wrong in learning and using the specific procedures in protecting secrets for particular platforms or applications. However, some simply do not have adequate security measures in place. The Kubernetes Secrets feature briefly discussed earlier, for instance, does not perform encryption. With it, secrets are stored in etcd in base64, which only undertakes encoding, not encryption. As such, anyone who is designated as an admin for the Kubernetes cluster can read the secrets saved in the cluster, a potential security loophole.
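The encoding-versus-encryption point is easy to demonstrate: a base64-encoded value can be read back by anyone who can see it, no key required. A small sketch (the secret value is made up):

```python
# base64 is reversible encoding, not encryption: decoding needs no key at all.
import base64

encoded = base64.b64encode(b"s3cr3t-password")  # roughly how a Secret value sits at rest
decoded = base64.b64decode(encoded)             # anyone with read access can do this
print(encoded, decoded)
```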

So why do you need to secure your secrets on the cloud? Its because cybersecurity attacks abound and they frequently target secrets stored in the cloud. Also, some cloud platforms and applications do not provide adequate protection for secrets. How do you protect secrets? By learning and using the specific secrets management processes associated with certain platforms or applications. If this is too cumbersome and inefficient, the logical option is to use a unified secrets management solution.

Oren Rofman, senior technology writer

Link:

The whys and hows of keeping your cloud secrets - ITProPortal

Is It Time to Leave Open Source Behind? – Built In

Once there was a group of people who got tired of the status quo, so those people started their own community based on a commitment to freedom and equality. That community got bigger and bigger, until what was once revolutionary became widely accepted.

But there were problems. Big business had too much control, some people warned. Others thought the community's founding documents had become outdated: they served the needs of the people who wrote them, not the community in its current form.

This story could be about the United States, but it isn't. It's about open-source software.

What started as a small, homogeneous online community fed up with proprietary software has exploded into a mainstream framework that powers the tech giants in your stock portfolio and the mobile phone in your hand. Now, the open-source community is much bigger and (slightly) more diverse, but its inner workings remain largely the same.

And a lot of people think that's okay. The philosophical bedrock of free and open-source software (no hidden source code, no limitations on use) could be as legitimate today as it was when it was written.

Many others disagree. Big companies profit from the work of underpaid and overtaxed project maintainers, they argue. Some organizations take open-source tools and use them for unethical ends, and developers can't stop them. Real freedom, #EthicalSource activists like Coraline Ehmke claim, requires limitations.

So, if a large faction of open-source participants aren't happy with the state of things, why don't they just leave?


* * *

This story is the fourth in a series on cultural battles facing the open-source community. You can read the first article, on ethics and licensure, here; the second article, on governance, here; and the third article, on the rights of end users, here.

'It's really, really hard to leave,' Don Goodman-Wilson told me.

Goodman-Wilson is an engineer, open-source advocate, philosopher and former academic. His disenchantment with open source didn't happen overnight. Rather, it was a long process of noticing and questioning some assumptions he'd taken for granted.

'It's something that had been a long time coming for me,' he said. 'I was, very slowly, attending talks and feeling doubts rise within me over the years.'

Now, he's joined with Ehmke and other #EthicalSource proponents to call for changes. Could they abandon traditional definitions of open source and make common-pool software their own way? Sure. But people have tried that, and it didn't go great.

Take King Games, which in May decided to list its game development engine Defold on GitHub for community collaboration.

'We are immensely proud to announce that @king_games has released the Defold game engine as open source on GitHub and transferred Defold to the Defold Foundation,' Defold Engine tweeted.

Gamers rejoiced; then the fallout came.


'Can we discuss the license choice? I had missed this initially and thought it was [the open-source license] Apache 2.0, but I see now that it's custom,' one user replied. 'It means that it's not open source as per the [Open Source Initiative's] open-source definition.'

That's because King, which presumably didn't want other gaming studios to take and profit from its code, released Defold under a modified license that prevented commercial reuse. That violates the definitions of free and open-source software as per the Free Software Foundation's four freedoms and the Open Source Initiative's (OSI's) open-source definition.

So, Defold Engine tweeted again five hours later: 'We are humbled by the positive reactions to the news we shared earlier today but also sorry for misrepresenting the license under which we make the source code available. Defold is a free and open game engine with a permissive license, and we invite the community to contribute.'

But that didn't do the trick.

'The use of the words open and free, and the derived from [open-source license] Apache makes me upset,' one user replied. 'It is a blatant attempt to use someone else's good name.'

So, Defold Engine tweeted again. And again. And again.

'Some thoughts on the open source discussions yesterday,' the first in a nine-tweet thread read. 'There was no ill-intent on our part when [we] said that Defold is open source. The source code is available on GitHub for anyone to play around with and hopefully contribute to. This is what we meant, nothing else.'

Comments on that thread appear to have been disabled.

It's a familiar scene, Goodman-Wilson said. A person or organization fiddles with an open-source license and is met with righteous anger. That's what happened when Ehmke introduced the Hippocratic License, which prohibits the use of software for human rights abuses, although plenty voiced their support as well.

Open source's strength lies in its community. Without community buy-in, options are limited for people looking to expand or reimagine what open source means.

Reputation is another barrier to exit for open-source participants, Goodman-Wilson said.

Today, open source is often touted as a resume-builder, or a stepping stone to high-paying jobs with tech companies. For developers, that means creating a high-profile project or even contributing to one might mean the difference between writing your own ticket and languishing in software obscurity.

What makes a project high-profile is, invariably, adoption rates. The more people use your software, the more successful it's considered.

'You want that [adoption rate] number to go up and to the right, because you've been told over and over again that is the metric for success. And if you can't show that metric, then your project is not successful,' Goodman-Wilson said.

That creates what Goodman-Wilson views as a problematic incentive: To boost adoption rates, developers must take care to appeal to corporate interests.

Corporations are notoriously risk-averse. OSI worked hard to bring them into the open-source fold, and their involvement has largely been limited to projects with standard, approved licenses. If developers built some software and slapped on a modified license with caveats for ethics or commercial use, corporations would balk. By sticking with OSI-approved licenses, developers greatly improve their chances of getting their software into corporate tech stacks.

That means higher adoption, more repute and, potentially, more money. Split with OSI, and those benefits of open-source involvement all but disappear.

What happens when an open-source developer creates a successful project with a relatively high adoption rate? They might end up with a job offer. Or, they might get stuck maintaining that codebase for little or no pay.

When Goodman-Wilson was working on GitHub's developer relations team, the company organized a series of meetings for open-source project maintainers to discuss their experiences and make recommendations for improvements. The last one was held in 2019 in Berlin.

Those conversations were eye-opening. 'Holy crap. A lot of the complaints were around like, I feel taken advantage of. I feel like my time is being given freely to people who do not value it, typically large corporations,' Goodman-Wilson said. 'Based on those conversations, it felt like [open source] had come full circle and was now a system that, although initially intended to overturn power hierarchies in the tech world, actually ended up reinforcing them.'


The accompanying report named frequent and widespread burnout as a cause for concern, as maintainers cited unmanageable volumes of work and problems with competing interests.

Maintainer burnout is one issue that arises when corporations can dip into the open-source pool with few limitations. But companies can also toss things into the pool.

Often, those contributions are extremely helpful. Tech entrepreneurs rely on open source to spin up new and innovative offerings. Google's release of Kubernetes as open source, for example, changed the game for cloud-native projects, and TensorFlow laid the foundation for accessible neural network technology.

Other times, the effects are mixed. React, for instance, is a Facebook-maintained open-source library that's served as a powerful recruiting tool: as React grew in popularity, Facebook engineering grew in esteem. But React has also been accused of harboring toxic community members and attitudes, leading to the departure of several prominent contributors.

Despite some systemic flaws and personal risk, the desire for industry success and peer repute drives developers to stick with open source. It also drives them to build software that will get them noticed.

Like Avatarify, a program by developer Ali Aliev that uses artificial intelligence to superimpose one face onto another during video capture. Avatarify grabbed attention because it is the first software to create semi-convincing real-time deepfakes. Check out this demo, in which Elon Musk bombs a Zoom meeting.

'It's really cool, in some very sad sense of the word cool,' Goodman-Wilson said.

The implications of technology like this are complicated. On one hand, it is really cool. Combined with a convincing audio deepfake to mask the impostors voice, perhaps a person really could convince their friends that a celebrity had joined their Zoom call. Or they could make and release a video of a real politician saying fake things. They could spread false information. Or incite violence.

It's fair to say that, in the wrong hands, a tool like Avatarify goes from fun to scary. And, because Aliev released it under a traditional open-source license, anyone could take and use its technology.

'[Aliev] gained reputation from doing it, so he was incentivized to work on this release in open source,' Goodman-Wilson said. 'On the other hand, now we've got state actors that would love to have this sort of tool available to them. So, knowing that there are oppressive, unjust organizations that can dip into the pool of open source and take from it what they need is actually deeply terrifying to a lot of developers.'


What Goodman-Wilson is describing has actually happened. Developers who oppose war, for instance, have been alarmed to learn that the U.S. Air Force and Navy use Kubernetes, an open-source project, to run combat aircraft and warships. For developers outside the U.S., these connections may be particularly disturbing.

While giving a talk in Amsterdam to a group of developers who worked on JavaScript extension TypeScript, Goodman-Wilson presented a U.S. Air Force recruiting website with a TypeScript dependency. The website is a sort of drone flight simulator, and visitors fly through an abstracted city, shooting at blips of light that represent insurgents.

'A lot of people in the room were from the Netherlands and unknowingly had their code used by this Air Force recruiting site, and the horror in the room was palpable,' Goodman-Wilson said. 'The last thing that they expected was to be working on a language extension and find that it was being used to recruit drone pilots.'


'There's this huge disconnect between what we think we're doing when we're contributing to open source, which is, quote, unquote, making the world a better place, and the reality of the incentive and access structure behind open source, which is such that, who knows if what you're building is being turned into a weapon?' Goodman-Wilson said.

But is it a developer's fault if a totally separate entity uses something they helped build for unethical ends? Won't bad actors get their hands on the tools they need, Hippocratic License or no Hippocratic License?

Yes, to both, Goodman-Wilson told me. Organizations that hurt people will always get the software they need, but with formal, ethical boundaries around open-source resources, they'd have to pay for that software rather than taking it for free. From a moral perspective, that distinction matters, he argued, because open-source developers would no longer share responsibility for abuses.


If we think of ethics as a causal relationship, moral actions are ones whose outcomes we can influence, he said. If a dictator in a faraway country uses a tool we've never heard of to aid in human rights abuses, we shouldn't feel responsible. But if an organization uses a piece of software we helped build to conduct drone strikes on civilians, we might feel some sense of responsibility.

'To the extent that I want to take responsibility for my own actions and decisions, I might want to find ways to cut down that causal chain,' he said. 'Even if they'll just take that software from somewhere else, at least I have cut off one avenue of access that links back to me. Then you convince enough people to do that, and, as a movement, you begin to cut off more and more avenues.'

For Goodman-Wilson, that movement looks like #EthicalSource and Hippocratically licensed software. But cutting off access for some while maintaining the spirit that made open source special (access for all) is profoundly difficult.

It's a balance Goodman-Wilson and other open-source activists are continually trying to strike.

The story of open source feels like the story of communities.

They start small and single-minded. But as they grow, factions form and power dynamics arise. New people show up, bringing new ideas. And eventually, the community is faced with a decision: Should we evolve, or hold fast to the principles we started with?

Ehmke, Goodman-Wilson and others are asking for evolution, and they've encountered plenty of obstacles. So far, the #EthicalSource movement has been limited to a tweet here, a presentation there, and many behind-the-scenes conversations. Potential allies are afraid to put their reputations and career prospects at risk, Goodman-Wilson said, which limits the movement's scope.

'What do we need to do to create an atmosphere where people aren't afraid to speak out?' he said. 'I don't know the answer to that, but that's a question a lot of us are asking. And I would really like more people to ask.'

For now, #EthicalSource will continue to promote unapproved models and licenses and hope that open source's governing bodies come around. But its proponents might not wait forever.

'I've certainly never built a political movement before, but I think a lot of us are starting to see this as a political movement that needs to be built, instead of just throwing some good arguments out there and seeing what sticks,' Goodman-Wilson told me.

In the end, open-source participants are free to choose where they stand. Their decisions will affect each and every one of us.


Read this article:

Is It Time to Leave Open Source Behind? - Built In

Three best practices for responsible open source usage in the COVID-19 era – Help Net Security

COVID-19 has forced developer agility into overdrive, as the tech industry's quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.

During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.

Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to leverage pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has not come without its own security challenges, which developers and organizations should resolve safely.

Here are some best practices developers should follow when implementing open source code to promote security:

First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.

As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.

Attackers understand all too well how often teams fall short in this regard, and software intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage, along with vigilance around updates and vulnerabilities, will go a long way in mitigating security risk.
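As a sketch of what that record-keeping can look like in its simplest form, the snippet below checks a made-up component inventory against an equally made-up advisory list; real tooling would query a vulnerability database rather than a hard-coded dictionary:

```python
# Toy illustration of tracking open source usage against known vulnerabilities.
# Both the inventory and the advisory data are fabricated examples.
inventory = {
    "example-http-lib": "2.4.1",
    "example-logging-lib": "1.0.0",
}

advisories = {
    # component -> versions with publicly disclosed vulnerabilities
    "example-logging-lib": {"1.0.0", "1.0.1"},
}

for component, version in inventory.items():
    if version in advisories.get(component, set()):
        print(f"ACTION NEEDED: {component} {version} has a known vulnerability")
    else:
        print(f"OK: {component} {version}")
```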

Aside from tracking vulnerabilities in the code that's already in use, developers must do their research on open source components before adopting them to begin with. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors should be considered, focused on the longevity of the software being built.

Teams should carefully consider the level of support offered for a given component. It's important to get satisfactory answers to questions such as:

It's no secret that COVID-19 has altered developers' working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to do more with less has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.

While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organizations entire software suite is hard work that only increases in difficulty with the scale of an organizations software deployments. And what was inefficient in normal times has become unfeasible in the current context.

Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.

Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.
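To illustrate the triage step, here is a minimal sketch that orders scan findings by a CVSS-style severity score so the riskiest issues are addressed first; the components, CVE identifiers and scores are all fabricated:

```python
# Illustrative triage: sort fabricated scan findings by severity, highest risk first.
findings = [
    {"component": "example-http-lib", "cve": "CVE-0000-0001", "cvss": 9.8},
    {"component": "example-xml-lib", "cve": "CVE-0000-0002", "cvss": 5.3},
    {"component": "example-img-lib", "cve": "CVE-0000-0003", "cvss": 7.5},
]

for finding in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{finding['cvss']:>4}  {finding['component']}  {finding['cve']}")
```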

When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.

While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.

Read more from the original source:

Three best practices for responsible open source usage in the COVID-19 era - Help Net Security

DeepMind open-sources the FermiNet, a neural network that simulates electron behaviors – VentureBeat

In September, Alphabet's DeepMind published a paper in the journal Physical Review Research detailing the Fermionic Neural Network (FermiNet), a new neural network architecture that's well-suited to modeling the quantum state of large collections of electrons. The FermiNet, which DeepMind claims is one of the first demonstrations of AI for computing atomic energy, is now available in open source on GitHub and ostensibly remains one of the most accurate methods to date.

In quantum systems, particles like electrons don't have exact locations. Their positions are instead described by a probability cloud. Representing the state of a quantum system is challenging, because probabilities have to be assigned to possible configurations of electron positions. These are encoded in the wavefunction, which assigns a positive or negative number to every configuration of electrons; the wavefunction squared gives the probability of finding the system in that configuration.

The space of possible configurations is enormous: represented as a grid with 100 points along each dimension, the number of electron configurations for the silicon atom would be larger than the number of atoms in the universe. Researchers at DeepMind believed that AI could help in this regard. They surmised that, given neural networks have historically fit high-dimensional functions in artificial intelligence problems, they could be used to represent quantum wavefunctions as well.

Above: Simulated electrons sampled from the FermiNet move around a bicyclobutane molecule.

By way of refresher, neural networks contain neurons (mathematical functions) arranged in layers that transmit signals from input data and slowly adjust the synaptic strength, i.e., the weights, of each connection. That's how they extract features and learn to make predictions.

Because electrons are a type of particle known as fermions, which include the building blocks of most matter (e.g., protons, neutrons, quarks, and neutrinos), their wavefunction has to be antisymmetric. (If you swap the position of two electrons, the wavefunction gets multiplied by -1, meaning that if two electrons are on top of each other, the wavefunction and the probability of that configuration will be zero.) This led the DeepMind researchers to develop a new type of neural network that was antisymmetric with respect to its inputs, the FermiNet, and that has a separate stream of information for each electron. In practice, the FermiNet averages together information from across streams and passes this information to each stream at the next layer. This way, the streams have the right symmetry properties to create an antisymmetric function.
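A small sketch (not DeepMind's code) shows why determinants are the standard construction for this property: swapping two electrons swaps two rows of a matrix, which flips the sign of its determinant and hence of the toy wavefunction below. The feature map here is an arbitrary stand-in for the FermiNet's per-electron streams.

```python
# Toy demonstration of antisymmetry via a determinant; not the FermiNet itself.
import numpy as np

def features(r):
    # r: (n_electrons, 3) positions -> (n_electrons, n_electrons) feature matrix.
    # Any smooth per-electron feature map works for this demonstration.
    n = r.shape[0]
    centers = np.linspace(-1.0, 1.0, n)
    return np.exp(-np.sum((r[:, None, :] - centers[None, :, None]) ** 2, axis=-1))

def psi(r):
    # Exchanging two electrons exchanges two rows, so the determinant (and the
    # toy wavefunction) changes sign.
    return np.linalg.det(features(r))

rng = np.random.default_rng(0)
r = rng.normal(size=(4, 3))            # four electrons in 3D
r_swapped = r.copy()
r_swapped[[0, 1]] = r_swapped[[1, 0]]  # swap electrons 0 and 1

print(psi(r), psi(r_swapped))          # same magnitude, opposite sign
```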

[Image caption: The FermiNet's architecture.]

The FermiNet picks a random selection of electron configurations, evaluates the energy locally at each arrangement of electrons, and adds up the contributions from each arrangement. Since the wavefunction squared gives the probability of observing an arrangement of particles in any location, the FermiNet can generate samples from the wavefunction directly. The inputs used to train the neural network are generated by the neural network itself, in effect.
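
In code, that sample-evaluate-average loop resembles a basic variational Monte Carlo procedure. The sketch below uses a toy Gaussian wavefunction and a toy local energy purely for illustration; a real system would substitute the neural network for `psi` and the molecular Hamiltonian for `local_energy`, neither of which is shown here.

```python
import numpy as np

def psi(config):
    # Placeholder wavefunction: a Gaussian in the electron coordinates.
    return np.exp(-0.5 * np.sum(config ** 2))

def local_energy(config):
    # Placeholder local energy for a harmonic toy problem.
    return 0.5 * np.sum(config ** 2)

def sample_configs(n_samples, n_electrons, n_steps=200, step_size=0.5, seed=0):
    """Metropolis sampling of electron configurations with probability |psi|^2."""
    rng = np.random.default_rng(seed)
    configs = rng.normal(size=(n_samples, n_electrons, 3))
    for _ in range(n_steps):
        proposal = configs + step_size * rng.normal(size=configs.shape)
        p_old = np.array([psi(c) ** 2 for c in configs])
        p_new = np.array([psi(c) ** 2 for c in proposal])
        accept = rng.uniform(size=n_samples) < (p_new / p_old)
        configs[accept] = proposal[accept]
    return configs

# Average the local-energy contributions over the sampled configurations.
configs = sample_configs(n_samples=512, n_electrons=4)
energy_estimate = np.mean([local_energy(c) for c in configs])
print(f"estimated energy: {energy_estimate:.3f}")
```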

"We think the FermiNet is the start of great things to come for the fusion of deep learning and computational quantum chemistry. Most of the systems we've looked at so far are well-studied and well-understood. But just as the first good results with deep learning in other fields led to a burst of follow-up work and rapid progress, we hope that the FermiNet will inspire lots of work on scaling up and many ideas for new, even better network architectures," DeepMind wrote in a blog post. "We have just scratched the surface of computational quantum physics, and look forward to applying the FermiNet to tough problems in material science and condensed matter physics as well. Mostly, we hope that by releasing the source code used in our experiments, we can inspire other researchers to build on our work and try out new applications we haven't even dreamed of."

The release of the FermiNet code comes after DeepMind demonstrated its work on an AI system that can predict the movement of glass molecules as they transition between liquid and solid states. (Both the techniques and trained models, which were also made available in open source, could be used to predict other qualities of interest in glass, DeepMind said.) Beyond glass, the researchers asserted the work yielded insights into general substance and biological transitions, and that it could lead to advances in industries like manufacturing and medicine.

Read more here:

DeepMind open-sources the FermiNet, a neural network that simulates electron behaviors - VentureBeat

A new way to think about your favorite game's code – The Verge

It's surprisingly hard to archive a video game. Cartridges decay, eventually; discs become unreadable as their plastic degrades. Source code is lost to corporate mergers and acquisitions. But what's most dangerous to preserving game history isn't a physical or corporate consideration: it's the prevailing attitude that games are playful, evanescent, and therefore not worth archiving.

Obviously that's not true, and games deserve critical historical consideration, the kind that other, older mediums get. Frank Cifaldi and Kelsey Lewin, co-directors of the Video Game History Foundation, are two of the people leading that charge. I spoke with them a little while ago about preserving video game history, and their new program, the Video Game Source Project, which takes as its footing the idea that there's no better way to study a video game than to access its raw material.

"There's only so much you can learn from studying the final product," they say, because studying the final iteration of a creative project leaves out the hows and whys that brought it to life in the first place. And Lewin and Cifaldi have started with a classic: LucasArts' The Secret of Monkey Island, which is celebrating its 30th anniversary this month.

"First of all, [the project is] just kind of a call to arms to everyone that this stuff is really important and useful. And, at least when it comes to the older material, is rapidly dying," says Cifaldi, pointing out that most game companies don't have source code archives. "But I think, most importantly, we want to normalize the availability and study of video games' source material, because right now, video game source is just a very proprietary trade secret."

Which is true! And in modern gaming, cloning is a big deal, even leaving out the issues with source code. "But from our perspective, it's like, if you haven't been doing anything with this game for 10, 20 years, why, why the lockdown?" he says.

It's a great question, one that makes me think a lot about the traditional stories of video game archiving; rather, I should say, about one story in particular: the one about E.T. and the landfill. See, the game E.T. The Extra-Terrestrial was a 1982 adventure game movie tie-in, developed for the Atari in a blazing 36 days. When it hit store shelves that Christmas, it was an unprecedented flop; Atari buried the unsold inventory somewhere deep in the New Mexico desert, where it was dug up in 2014 by a documentary crew. The next year, the game entered the Smithsonian's collection, and in 2019 the museum produced a podcast episode to retell the legend.

It's a wonderful story, especially when you consider Atari's fortunes in the aftermath. (Spoiler: they weren't great.) But the problem is, it's basically the only one people know. Cifaldi and Lewin's real goal is to bring more fascinating stories about this kind of history to the public, and to preserve the raw materials that make them possible. (The Foundation's blog is excellent, at least when it comes to great stories.)

This isn't without controversy. The Nintendo Gigaleak, as it's been called, happened earlier this summer and exposed a rich trove of new data about classic games. It also exposed a moral dilemma: if the riches in the leak were obtained illegally, as they likely were, does that change how historians think about and use what they learn from it? That answer is an individual calculus, of course. But on the other hand: if Nintendo's secrets were less closely held, the leak wouldn't be quite as monumental.

I don't know that there's a tidy answer here. What's clear, though, is that historical research into games should be something companies expect and prepare for. The kind of work the Video Game History Foundation does is important and necessary, even if the industry doesn't quite appreciate it. They just want a more open world.

"And just to be clear, I mean, I don't think we expect a world where everyone's just like, 'Great, let's open source everything,'" says Lewin. "It's unrealistic," agrees Cifaldi.

"But what is realistic is normalizing that someone could actually study this, just for historical purposes, that they should be able to look at this and learn things from it and tell the stories that they find within it," Lewin says.

The Secret of Monkey Island is a seminal game, not least for Cifaldi, who cites it as "maybe my favorite game, depending on which day you ask me." He says it taught him what games could be like: that they could have funny, memorable worlds and characters. The Foundation received the game the way they receive a lot of them, which is to say, on the sly. (Cifaldi identifies this as another problem the Foundation is out to solve: making it safe for people to donate games and such to the archive, even if they don't own the rights.)

When they got in touch with Lucasfilm about making content around the game, however, the studio was supportive. "I mean, they're the guys that make Star Wars, right, they understand," says Cifaldi. "They understand that fans really enjoy this behind-the-scenes material, and that it directly benefits them if people are talking about it."

This month, the Video Game History Foundation plans to reveal what it's learned about The Secret of Monkey Island, to fans, historians, and everyone else who might be interested in the hidden corners of a 30-year-old video game. "We are able to reconstruct deleted scenes from the games that no one's ever seen before, because that data literally isn't on the disk that you get, because it's not compiled into the game," says Cifaldi. (The Foundation has also gotten Ron Gilbert, the creator of the game, to join them in a livestream happening on October 30th.)

Cifaldi and Lewin's perspective can be summed up pretty simply: they want to expand the kinds of stories we can tell about video games, as both fans and historians. "We've only really had crumbs of games' development through, you know, finding an unfinished version that was maybe sent to a magazine to preview, or through seeing what accidentally got compiled into the final game," says Cifaldi.

That, he says, has tainted how fans view the development process: as something perhaps linear, instead of as a gradual pileup of creative decisions. "I think a really interesting thing to come out of this conversation with Ron is being able to sort of show that when we find this unused character in the source code, it's not, like, this was a character with a fleshed-out biography," Cifaldi says. Sometimes an unused pirate is just an unused pirate.

"It kind of sometimes tends to put either false importance on something," Lewin agrees. If you only have two clues about something, you can come up with a wild variety of scenarios that those two clues fit into. If you have 20 or 30 clues, on the other hand, the realm of possibility narrows. "If you saw it in an earlier build of the game, you might be led to believe that it's something that it absolutely wasn't," Lewin says.

To take a real example from The Secret of Monkey Island: the collapsing bridge. As Cifaldi explains it, there are frames of animation for a bridge collapsing in the original artwork for the game, but there's no code that calls for it. It's just there. So they took it to Gilbert, the creator, and asked about it. "Ron's like, 'Oh, yeah, that's not anything that was ever in the game. I'm not really sure why that's there,'" says Cifaldi.

They also had access to Gilbert's sketchbook from when he was making the game, which contained the raw ideas that eventually made it into the finished product. "There is a page that just says, 'booby trap on bridge?' And I think that's, like, all it ever was," Cifaldi continues. "Like, the game wasn't designed enough, but artists need to be working on something. So it's like, I don't know, work on a booby-trapped bridge, and maybe we'll revisit it, and they never did." It's not a cut puzzle; it doesn't mean anything other than it was an idea that didn't quite make it.

That's part of the creative process, Cifaldi says. "You're collaborating, there's a lot of people involved, and you try ideas out, you rough draft them, and then they might get cut before you even try to use them." A collapsing bridge is just a collapsing bridge.

In this demystifying of the game development process, the Video Game History Foundation is something of a pioneer, one that's actively writing the rules for archiving this kind of art as it goes. Its source project is a holistic examination of how the games you love actually get made, which is as important as the games themselves. In our conversation, Cifaldi likened his work to archaeology.

"If you're able to access raw source code for a game, bare minimum, you can understand how the systems work and talk to each other and things like that," he says. But if you're a good historian, it's a dig. "When you're looking at a mummy, you don't have access to that person when they were alive or whatever, right? But you can find clues that help you understand who they were, and what their social status was, and things like that," he says. And those clues eventually become a story, one we might tell, years later, on a podcast for the Smithsonian.

"It's just kind of a new idea in the world to have source material for any kind of software in an archive. And I think it's going to be a rough road ahead," Cifaldi says. But this has to start somewhere. And it's starting now.

Link:

A new way to think about your favorite game's code - The Verge

Open-sourced Blockchain Technologies Bring Back Cross-border Tours between Chinese Mainland and Macao – PRNewswire

SHENZHEN, China, Oct. 19, 2020 /PRNewswire/ -- From September 23 onwards, the Chinese mainland resumed issuing visas for visitors to Macao. Thanks to the mutual recognition system between the Macao blockchain health code and the Guangdong health code, launched in May, mainland Chinese tourists can apply for and use the Guangdong health code to verify their health status when entering Macao. To date, more than 17 million people have cleared customs between the Chinese mainland and Macao using the blockchain system. Receiving, converting, and generating the health code for the first time takes an average of only 100 seconds, and the procedure takes less than three seconds when a traveler clears customs again.

In May, Serviços de Saúde de Macau (SSM) and the Macao Science and Technology Development Fund (FDCT) became the first to establish a blockchain-based health code system to fight the epidemic: the Macao blockchain health code. It serves as an electronic pass for residents to access public places. It is an integral part of Macao's epidemic prevention measures and was later extended to add the mutual recognition mechanism with the Chinese mainland's Guangdong health code system. Due to the epidemic, Macao suspended tourist visa applications in January 2020. The establishment of the Macao blockchain health code and its mutual recognition mechanism with the Guangdong health code greatly improves the efficiency and accuracy of information verification across borders, and has proved to be an effective solution for bringing travel between the Chinese mainland and Macao back to normal.

The Macao blockchain health code is implemented on FISCO BCOS, China's open-source blockchain platform, together with WeIdentity, for reliable information verification across organizations.

Mutual recognition of health codes across jurisdictions has a major challenge to overcome: the information security and privacy protection regulations of both the Chinese mainland and Macao. Health authorities on each side need to verify the health information submitted by users crossing the border, yet they are not supposed to exchange data directly with each other if they are to remain compliant with their respective regulations. The WeIdentity solution works as follows: IDs and personal health data are encrypted into verifiable digital credentials, signed by the issuing authorities, and recorded on the consortium blockchain network serving the participating organizations. Users transmit their data to the receivers directly via a secured communication channel, and receivers can verify the integrity of the data they receive by comparing it against the corresponding digital credentials recorded on the blockchain. Such a blockchain-based solution offers a robust data verification mechanism among trusted parties while ensuring that the generation and use of the Macao health code fully comply with the Personal Data Protection Act of Macao.
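
To make the verification flow concrete, here is a minimal sketch of the idea that a receiver checks the data presented by the traveler against a record on a shared ledger. It stands in for WeIdentity's actual credential format and the FISCO BCOS APIs (neither of which is shown here), and it uses a plain hash digest where the real system uses credentials signed by the issuing authority.

```python
import hashlib
import json

ledger = {}  # stand-in for the consortium blockchain: credential ID -> digest

def issue_credential(credential_id, health_data):
    """Issuing authority records a digest of the credential on the ledger."""
    digest = hashlib.sha256(json.dumps(health_data, sort_keys=True).encode()).hexdigest()
    ledger[credential_id] = digest
    return health_data  # the user carries the data itself off-chain

def verify_credential(credential_id, presented_data):
    """Receiver checks the presented data against the on-chain digest."""
    digest = hashlib.sha256(json.dumps(presented_data, sort_keys=True).encode()).hexdigest()
    return ledger.get(credential_id) == digest

# The traveler presents their data directly to the receiving authority, which
# only needs the on-chain digest to confirm the data has not been altered.
data = {"holder": "traveler-001", "status": "negative", "issued": "2020-09-23"}
issue_credential("cred-001", data)
print(verify_credential("cred-001", data))                      # True
print(verify_credential("cred-001", {**data, "status": "x"}))   # False: tampered
```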

In addition, the mutual recognition mechanism enables the seamless conversion of health codes without requiring users to fill in personal information repeatedly on different platforms, offering convenience and ease of use for cross-border travelers.

With FISCO BCOS and WeIdentity, a robust solution has been created to overcome the challenge faced by health authorities and border control agencies around the world, offering an answer to how cross-border travel can be enabled once again during a pandemic.

SOURCE FISCO BCOS

More:

Open-sourced Blockchain Technologies Bring Back Cross-border Tours between Chinese Mainland and Macao - PRNewswire