We now know more about Fall Guys’ anatomy, and it turns out they’re abominations – Windows Central

Source: Windows Central / Zackery Cuevas

Fall Guys: Ultimate Knockout has taken the gaming world by storm, already crossing over 2 million copies sold on Steam alone. Its intense and addictive gameplay, where players tackle a wide variety of solo and team-based mini-games to eventually declare a single victor, has led to gamers and streamers alike playing the game endlessly. However, due to popular request (apparently), the team behind Fall Guys: Ultimate Knockout has decided to reveal a little bit more about the mysterious little guys that make up the game's cast: the Fall Guys themselves.

When I said "mysterious little guys" before, I meant "not mysterious enough after this" and "oh wow they're bigger than I imagined" and "I cannot erase this from my mind." According to the "official Fall Guys lore," Fall Guys stand at 6ft tall, with a rather unique skeleton snaking its way through their amorphous bodies. The thing that stands out above everything else, however, is the way their eyes (unassuming from the outside) are precariously attached to a skull recessed far into their heads.

But hey, at least they're smiling.

If this elicited a reaction out of you, or changed your opinion on Fall Guys, be sure to let us know in the comments below! I'm certainly never going to see a swarm of these rushing for a goal in the same manner ever again.

So Fall Guys. Much scary.

Fall Guys: Ultimate Knockout is a wacky battle royale that is inspired by gameshows like Wipeout. Currently available on the PC and PS4, Fall Guys: Ultimate Knockout is sure to keep the attention of both casual and hardcore players alike with its fun and addictive gameplay, as long as you don't think about what Fall Guys are.


State of the software supply chain: Machines will make software faster – TechBeacon

How far away are we from machines making safer software, faster? We might be closer than you think.

Other than ensuring that your people are happy and engaged, digital innovation is the best source of competitiveness and value creation for almost every type of business. As a result, three things are increasingly common among corporate software engineering teams and the 20 million software developers who work for them:

They seek faster innovation.

They seek improved security.

They utilize a massive volume of open-source libraries.

The universal desire for faster innovation demands efficient reuse of code, which in turn has led to a growing dependence on open-source and third-party software libraries. These artifacts serve as reusable building blocks, which are fed into public repositories where they are freely borrowed by millions of developers in the pursuit of faster innovation.

This is the definition of the modern technology supply chain and, more specifically, a software supply chain.

Organizations that invest in securing the best parts, from the fewest and best suppliers, and keeping those components updated, are widening the gap against their competitors. The best-performing organizations are applying automation to help them manage their open-source component choices and updates.

As these practices evolve, machines will become better at guiding developers to the best-quality and most secure component versions. And in the not-too-distant future, machines may be compiling the best components into application code based on functional requirements defined upfront.

Here's why automation should be a key strategy for helping you select open-source components, and other lessons from my team's research.

The 2019 State of DevOps Report found that elite organizations are deploying 200 times more frequently than their peers, and their change failure rates are seven times lower. They're also much faster in mean time to recover from failure than other organizations.

But these kinds of metrics are really focused on how you're doing internally as a development team and don't take into account many external factors.

This reminds me of what Jeff Bezos said in his 2017 letter to shareholders: "Beware of the proxies." You can get so focused on a process, and doing that process well, that it becomes the thing that you're trying to achieve.

You might be trying to achieve faster deployments, faster mean times to recovery, or more secure code releases. They can represent your proxies for success, while not necessarily contributing to the outcome your business is attempting to achieve.

Consider adversaries who attack your code. If you can release new security updates in your codebase within two weeks, but your adversaries can find and exploit the new vulnerabilities in two days, your organization's data is at risk.

In this situation, it does not matter as much that you've already reduced your time to implement security updates fivefold if your adversaries are still faster.

Consider this real-world scenario. On Wednesday, April 29, 2020, the creators and maintainers of SaltStack, an open-source application, announced that the app had a critical vulnerability. On the very same day, they released the safe version of the application. If you had automatic updates turned on from SaltStack, you got the newer version. If you didn't, then you needed to get the newer version, update your infrastructure, and do so before the adversaries found it.

One of the researchers at F-Secure said that "the vulnerability was so critical, this was a patch-by-Friday-or-be-breached-by-Monday kind of situation."

And that's exactly what happened. By Saturday morning, May 2, some 18 people on GitHub reported that breaches were actively happening. They had lost control of their servers. SaltStack had been taken over, rogue code was executing on their systems, and their firewalls were being disabled. Throughout May, 27 breaches were recorded.

But not all of the news is bad: We know that developers are getting faster, too, because they're not writing all of their code themselves.

Figure 1: Number of download requests for Java component releases, 2012 to 2020, from the Central Repository. Source: 2020 State of the Software Supply Chain Report

We're assembling more and more code from open-source components and packages. As one example, it's amazing to look at download volumes for the npm package manager. There were 95 billion npm package downloads in July 2020. If you annualize that download volume, we would see over 1.1 trillion npm package downloads this year.
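That trillion-plus figure is just the July number extended across twelve months; a quick sanity check of the arithmetic (assuming a constant monthly download rate, which is conservative given npm's growth):

```python
monthly_downloads = 95_000_000_000   # reported npm downloads, July 2020
annualized = monthly_downloads * 12  # naive straight-line annualization
print(f"about {annualized / 1e12:.2f} trillion downloads per year")
# about 1.14 trillion downloads per year
```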

In Java, similar things are happening. In 2019, Maven Central had 226 billion download requests. In 2020, download request volumes are expected to hit 376 billion.

How do these monstrous numbers translate to your own developers and applications? After analyzing 1,500 unique applications, we can see that 90% of their code footprint is built from open-source software components.

As I started thinking about all of the above, I wanted to understand not just how these parts are being used, but where they are coming from and who the open-source software suppliers are. So, in a two-year-long collaboration, Gene Kim, Stephen Magill, and I examined software release patterns and cybersecurity hygiene practices across 30,000 commercial development teams and open-source projects.

We set out to understand what attributes we could use to identify the best open-source project performance and practices. If development teams were going to assemble applications from these building blocks, we wanted to understand who the best suppliers were.

We wanted to know who released most often, who were the most popular suppliers, who prioritized features over security or security over features, who enlisted automated build tools, which projects were consistently well staffed, and more. All of these variables played a role in identifying suppliers with the best track records, because they would be the ones to help developers build the best applications.

Additionally, the more you could teach machines to identify the attributes of the best open-source software suppliers for developers, the faster development could become.

The top-performing projects released 1.5 times more frequently than the rest of the teams we studied, were 2.5 times more popular by download count, had 1.4 times larger development teams, and managed 2.9 times fewer dependencies.

We also saw a strong correlation between open-source projects that updated dependencies more frequently and their ability to maintain more secure code. High-performing projects demonstrated a median time to update (MTTU) their dependencies that was 530 times faster than other projects. By moving to the latest dependencies, they purposely or consequently remediated known vulnerabilities discovered in older dependencies.

Figure 2: Open-source project cluster analysis of popularity and release speed. Source: 2019 State of the Software Supply Chain Report

To better understand all this, we performed a cluster analysis of these different open-source projects based on several attributes. We were able to see what development teams should focus on when choosing components.

Choosing open-source projects should be considered an important strategic decision for enterprise software development organizations. Different components demonstrate healthy or poor performance that affects the overall quality of their releases.

Therefore, MTTU should be an important metric when deciding which components to use within your software supply chains. Rapid MTTU is associated with lower security risk, and it's accessible from public sources.
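MTTU is also cheap to compute yourself from public release data. A minimal sketch, with invented dates standing in for a real dependency's release history and one project's adoption history:

```python
from datetime import date
from statistics import median

# Hypothetical history: each pair is (dependency release date,
# date this project adopted that release).
updates = [
    (date(2020, 1, 10), date(2020, 1, 24)),  # adopted after 14 days
    (date(2020, 3, 5),  date(2020, 4, 4)),   # adopted after 30 days
    (date(2020, 6, 1),  date(2020, 6, 8)),   # adopted after 7 days
]

# Median time to update, in days, across all observed updates.
mttu_days = median((adopted - released).days for released, adopted in updates)
print(f"MTTU: {mttu_days} days")  # median of 14, 30 and 7 -> 14 days
```

The same calculation, run across every project consuming a component, is what makes MTTU comparable between suppliers.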

Just as traditional manufacturing supply chains intentionally select parts from approved suppliers and rely upon formalized procurement practices, enterprise development teams should adopt similar criteria for their selection of open-source software components.

This practice ensures that the highest-quality parts are selected from the best and fewest suppliers. Implementing selection criteria and updated practices will not only improve code quality, but can accelerate mean time to repair when suppliers discover new defects or vulnerabilities.

Ideally, dependencies should be updated simply, safely, and painlessly, and as part of the routine development process. But reality shows that this ideal is rarely met.

An astonishing story of how far an organization can stray from ideal update practices comes from Eileen M. Uchitelle, staff engineer at GitHub, who said it took seven years to successfully migrate GitHub from a forked version of Rails 2 to Rails 5.2.

Even with new tools available to developers that automatically create pull requests with updated dependencies, changes in APIs and potential breakage can still hold back many developers from updating. We suspect this change-induced breakage is a primary driver of poor updating practices.

Taking a deeper dive into the vast data available to us from the Central Repository, the world's largest collection of open-source components, you can better visualize open-source project releases and their adoption by enterprise application development teams that migrate from one version to a newer one. We believe this data shows how open-source component selection can play a major role in allowing for easier and more frequent updates.

Figure 3: Migration patterns between component releases for the joda-time library. Source: 2020 State of the Software Supply Chain Report

Consider the widely used joda-time library, which shows that developers using this open-source component update fairly uniformly between all pairs of versions. This suggests that updates are easy, presenting a seemingly homogeneous set of versions to migrate to and from.

Figure 4: Migration patterns between component releases for the hibernate-validator library. Source: 2020 State of the Software Supply Chain Report

On the opposite extreme, consider the graph for the hibernate-validator library, where there are two sets of communities using it: one favoring version 5 and another preferring version 6. The two communities very rarely intersect. This suggests either that updating to version 6 from version 5 is too difficult or that the value is not worth the effort.

Figure 5: Migration patterns between component releases for the spring-core library. Source: 2020 State of the Software Supply Chain Report

Finally, we take a look at the pattern for spring-core, which suggests that updating is sufficiently difficult that the effort must be planned and some version ranges end up being avoided.

If you are a developer, don't worry; your job is secure. No machine out there will take your place. Having said that, an increased reliance on automation to help you select better, higher-quality, and more secure components can serve you and your teams well today.

You can use automation, through advanced software composition analysis and open-source-governance tools, to point to better suppliers with a better track record: for instance, they release often, patch vulnerabilities quickly, are well staffed, and are popular.

Using these tools to set policies around components can help you determine when to upgrade your dependencies, and they can quickly inform you of newly discovered vulnerabilities in need of remediation. Additionally, these tools can lead developers to the best versions of components, indicating which newer versions will introduce the fewest breaking changes or introduce troublesome dependencies.
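As a toy illustration of that kind of policy automation, a check over a component inventory might look like this (the component names, fields, and 180-day staleness threshold are all invented; real composition-analysis tools work from scanned dependency manifests and vulnerability databases):

```python
# Hypothetical component inventory; a real tool would populate this
# by scanning the project's dependency manifest and a CVE feed.
components = [
    {"name": "libfoo", "days_behind_latest": 400, "known_cves": 0},
    {"name": "libbar", "days_behind_latest": 12,  "known_cves": 1},
    {"name": "libbaz", "days_behind_latest": 30,  "known_cves": 0},
]

def violates_policy(component, max_lag_days=180):
    """Flag components with known vulnerabilities or stale versions."""
    return (component["known_cves"] > 0
            or component["days_behind_latest"] > max_lag_days)

flagged = [c["name"] for c in components if violates_policy(c)]
print(flagged)  # libfoo is stale, libbar carries a known CVE
```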

To learn more about our research into high-performance, open-source component-based development, read the 2020 State of the Software Supply Chain Report or attend my upcoming session on this topic at the DevOps World virtual conference, which runs from September 22 to 24, 2020.


Keep an Eye on This Cohort of Open Source Developer Interns – CoinDesk

When Christopher Allen received applications for the 2020 Blockchain Commons internship, he had a problem: He had more applications than he had ever received in the internship's history, and all from stellar applicants.

This was a good problem to have, of course, and Allen tackled it head-on by expanding the internship program. He typically only takes one intern under his tutelage, but this year he took on seven.

With so many extra hands, each intern had the opportunity to work on a project of his or her preference. Each of these projects went toward improving software in the Blockchain Commons repositories.

As the internship draws to a close, the interns' contributions to free and open-source software (FOSS) are nearing completion and will soon be open to the public to use.

The Blockchain Commons: a hub for open-source software

Allen founded the Blockchain Commons in 2018 in a bid to keep Bitcoin's development open and distributed.

In a past life, he helped pioneer the SSL/TLS protocol, an encryption standard for securing data transmitted over the internet. Come 2014, the Heartbleed bug compromised OpenSSL's implementation of the encryption standard, which handled 60% of the internet's traffic at the time (and with it, trillions of dollars of online commerce).

The flaw was promptly patched. But Allen took that tribulation to heart and vowed to not allow a single point of failure to threaten the security of other software projects he works on.

Cue Allen's discovery of Bitcoin and the founding of the Blockchain Commons. After a brief tenure at Blockstream, Allen founded his not-for-profit benefit organization to do his part to keep Bitcoin's development distributed.

Now, after a summer of tinkering, his newest interns have enriched the codebase and GitHub libraries of some of the Blockchain Commons' principal projects, including the addition of a project of their own design.

What these budding Bitcoin developers created

Spotbit

For their new group project, the interns began building Spotbit, software for curating Tor-supported bitcoin (BTC) price feeds.

Led by Dartmouth senior Christian Murray with assistance from Nishit Shah, the modular, self-hosted feed draws pricing data from 100 cryptocurrency exchanges across various stablecoin and fiat trading pairs. Users can choose which exchanges they want their feed to tap into, which trading pairs to support and what data they want to store. If a user doesn't want to host a Spotbit node, they can connect to others.

Lethe Kit

Besides Spotbit, each intern has an individual project to improve, working alongside Allen.

Gorazd Kovacic from Slovenia, for example, has been working on the Blockchain Commons code for the Lethe Kit. The DIY hardware wallet, named after the river of Greek myth whose waters cleansed the underworld's denizens of the memories of their past lives, is air-gapped, meaning it cannot come in direct contact with an internet-connected device.

The Lethe Kit can generate seeds and addresses to receive transactions, but it cannot send bitcoin through partially-signed Bitcoin transactions (a previous version of this article indicated otherwise).

Kovacic has been working on integrating animated QR codes and Shamir secret shares (a cryptographic technique for dividing a private key into multiple parts) into the Lethe Kit.
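The idea behind Shamir secret sharing is that the secret becomes the constant term of a random degree-(k-1) polynomial, each share is a point on that polynomial, and any k points recover the secret by interpolation. A toy sketch over a small prime field (illustrative only; hardware wallets like the Lethe Kit rely on audited implementations and standardized share formats):

```python
import random

# Toy illustration of Shamir secret sharing over a prime field.
PRIME = 2**61 - 1  # field modulus; must exceed the secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them recover it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 to get the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert recover(shares[:3]) == 123456789  # any three shares suffice
assert recover(shares[2:]) == 123456789
```

Fewer than k shares reveal nothing about the secret, which is what makes the scheme attractive for backing up wallet keys.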

Gordian Wallet and Gordian Server

Another intern, Gautham Ganesh Elango, is working on Gordian, a two-part project comprising a Bitcoin full-node implementation that runs over Tor and an iOS mobile wallet.

The Gordian Server operates similarly to Bitcoin node dashboards like My Node by offering its users a graphical user interface (GUI) for interacting with Bitcoin Core.

A GUI (the interface type we use every day when commanding our Macs and PCs with macOS or Windows, to give one example) is the user-friendly, layman's version of the command-line interface, the raw terminal that developers use to speak to their devices.

The projects other working part, Gordian Wallet, is a mobile Bitcoin wallet for iOS which can connect to the Gordian Server.

Elango, a freshman from Australia, is also building out an accounting tool which will allow Gordian users to import transaction and price data to Microsoft Excel for tax purposes.

For another project, Elango and fellow intern Javier Vargas are stepping into the role of instructor by fleshing out the Blockchain Commons documentation of RPC codes for managing a Bitcoin node from the command-line interface.

Internship takeaways

Almost all the tools the interns have been working on contribute to each other's tech stacks (Spotbit, for example, provides price data for the Gordian Wallet). Showing that there's more to open-source development than coding, cross-project collaboration is one of the internship's key instructional points.

For Murray, this was indeed one of the internship's primary lessons: that open-source development means creating sustainable tools that go beyond a solitary use case.

"This was my first introduction to open-source development, and definitely one of the big learning curves is learning to collaborate effectively and developing processes for yourself. A lot of the stuff I wrote before I got here was something I needed to work one time, but this is a lot more about something that is going to work all the time," he told CoinDesk.

Murray said that he plans to continue to work on Bitcoin open-source software after the internship, whether professionally or otherwise. This was a common thread for the soon-to-be alumni of the Blockchain Commons.

Kovacic, who is already diving into other open-source repositories like Blockstream's c-lightning, said the internship "reaffirmed my position that I want to work in the Bitcoin space."

For his part, Elango agreed, saying the internship shook off his apprehension about approaching the seemingly daunting task of maintaining open-source projects.

"It's definitely got me interested in Bitcoin open-source development. At first I was kind of intimidated by these large open-source projects. After the internship, I've become more comfortable with doing large contributions to these projects. Once I learn the basics of C++ I may start contributing to Bitcoin Core. And if not Bitcoin Core specifically, then some other open-source project," he told CoinDesk.

Looking ahead to the next cohort of interns

With this internship coming to a close, Allen is offering another one that will begin in October and end in December. He stressed that the latest internship hopes to pull in more talent from Bitcoin-adjacent fields, not just the realm of computer science. This could mean students studying law, library science or other disciplines to help improve aspects of Blockchain Commons documentation.

When Allen asked his students what they would say to incoming interns, Murray answered in the spirit of what may be considered the internship's core ethos: Ask plenty of questions and cooperate with others whenever possible.

"If I could give advice to anyone coming in it would be: don't be afraid to ask for help when you need it. We have one group chat and I wanted to be professional and not spam the chat with questions. One time, I had spent several hours trying to fix this GitHub commit and couldn't figure it out. But then Gorazd ended up giving me this one-line solution. If I had asked the question early, I would have saved a lot of time."

This article has been updated to correct a description of the Lethe Kit and to clarify how the Gordian Server and Gordian Wallet operate.


The Government Digital Service truly was once world-beating. What happened? – The Guardian

No 10 adviser Dominic Cummings and his Silicon Valley ambitions for the civil service have put digital, data and technology in the spotlight, but where does this leave the former bright light of UK tech, the Government Digital Service (GDS)?

For many years, government IT was the punchline to a joke that wasn't funny. People trying to deal with government departments picked up the phone or sent letters rather than experience the grief of going online.

But by using the tools of the open web (simple words, clear design, open-source code, agile ways of working), one team in government managed to build some public services fit for the internet era. They didn't seek to amaze citizens; just make their experience simpler, clearer and faster.

That team, the GDS, was set up nine years ago with a brief from the then Cabinet Office minister, Francis Maude, to haul the civil service into the digital age. It started well. The UK government's new website, gov.uk, was many times cheaper than its predecessors and even won design of the year in 2013. New online services for setting up a power of attorney, taxing a vehicle and booking prison visits, among others, made a mark. Entrepreneurs described GDS as the best startup in Europe. David Cameron lauded the team as one of the great unsung triumphs of the coalition government. Five years after the team started, the UK led the world in digital government, according to the UN. Other countries took note, and copied.

The trick GDS pulled off was to realise that the game wasn't about changing websites. It was about changing government. The digital team saw that parts of public services, such as sending lots of text messages or taking payments, were being developed separately by scores of public organisations, at great cost to the public purse and making systems harder to use. By 2015, the GDS team had rebuilt some of these common components to be used again and again across the public sector. The service also published patterns and code, and enforced standards, to give everyone an incentive to raise their game.

This paid dividends, in better public services and money saved: a whopping £1.7bn by 2014, according to the Cabinet Office. As a result, in the November 2015 budget and spending review, GDS was handed a £450m bounty in what then cabinet secretary Sir Jeremy Heywood described as a vote of confidence.

Even in its pomp, GDS was not universally loved; senior civil servants described the kids in jeans as an insurgency. But the real problem was the challenge it presented to the sovereign power of Whitehall departments. Changing government was not on their agenda, nor in their interests. Common components took away control. So for that £450m, there was a tacit quid pro quo: GDS would support departments, not lead them. That shift, demanded by the chief executive of the civil service, John Manzoni, and encouraged by permanent secretaries who had been embarrassed by GDS, was a tipping point.

While GDS has retained some of the country's smartest technology talent, its purpose has drifted. From once receiving grudging respect from departments for its rallying cries, it is now peripheral. A top-level post for government chief digital officer has gone unfilled for more than a year. This July, the UN announced that the UK had slipped to seventh in its world e-government rankings, falling six places in four years.

This leaves some awkward questions. Aside from the world-class platforms and patterns that were already taking shape five years ago, where did the £450m go? For better or for worse, it hasn't gone into the data foundations so desired by the present administration. Whatever Cummings is looking for, he hasn't found it in GDS yet.

Some of GDS's legacy is in plain sight. Some digital successes have been the dogs that didn't bark during the pandemic. HMRC, universal credit and parts of the NHS have delivered online services that have just about stood up to extraordinary new demands. Without GDS starting out by showing departments how to deliver, rather than telling, this would not have happened.

And GDS did something else that no other team had done before. It led everyone using public services to expect a half-decent experience of their government online. It did this by worrying more about user needs than mandarin egos. For Britain to be a leading digital government, it needs a digital team that leads.

Andrew Greenway is a co-founder of Public Digital and former staffer at the Government Digital Service.


GitHub aims to make India the largest market from the third largest – Economic Times

BENGALURU: GitHub, the code-repository service used by many developers, startups and companies the world over, aims to turn India from its third-largest market at present into its largest, said Maneesh Sharma, India head of GitHub.

Sharma, who was appointed India head in February when GitHub opened its first office in the country, is doubling down on working with Indian startups and has built a sales team to target large startups, corporates and financial institutions in the country.

"Covid has accelerated digital transformation. Startups are helping disrupt the status quo, which is getting every company to look at how they use digital. Globally, we have every large industry using digital. In India, the biggest segments using GitHub are IT-enabled services, internet commerce companies and software product companies," he said.

India's software-as-a-service companies are also a potential customer base for GitHub, he added.

GitHub, which is popular among developers, particularly those who work on open source projects, was acquired by Microsoft in 2018. Since then, the company has stepped up expanding its presence in newer markets such as India, where there are millions of developers, who work for both large and small companies in India as well as globally. GitHub is also looking at engineering students to contribute to the repository.

"We have been participating in global projects and are a great consumer of open source. We need to start thinking about how we can build software that can contribute to global communities as well," said Sharma. "We are getting offshoots from the startup ecosystem as well who are starting to open source their libraries. There is a lot more to do."

"They can start looking at how they can build software on GitHub. Think of it as credits. We will be doubling down on the startup ecosystem," said Sharma.


Winux – Windows/Linux Convergence In 2020 – iProgrammer

It is a strange time when old enemies not only bury the hatchet but start to merge into a single entity. Windows and Linux, Microsoft and Open Source seem not only to be friendly but in the case of Windows and Linux merging into an undifferentiated whole - Winux anyone?

It all started with the move to .NET Core. Well it probably did, but it is too recent for a final history to be written. The .NET system was aggressively Windows- and Microsoft-only and, apart from some heroic open-source efforts on the part of the Mono team, it only worked under Windows. Then Microsoft threw away everything it had done and started over with an open-source project to reinvent .NET as a cross-platform development system and so .NET Core was born, along with much confusion and some developer suffering.

Why was .NET widened to support non-Windows environments?

Only Microsoft really knows, but it seems reasonable that it was to serve the greater good of Azure. When Azure started out it mostly provided Windows-based virtual machines, but it didn't take long for it to become quite clear that its users wanted Linux and, if it was to be competitive with AWS, it needed to shift from being Windows-oriented to Linux-supporting - and it has.

Given that Azure is potentially the cash cow that is to replace Windows in the future, it now becomes clear that supporting Linux is a good idea. So .NET becomes cross-platform and, with .NET 5, or perhaps more fully in 6, this task is more or less complete. There is only one version of the .NET platform and it is cross-platform.

Of course, there are still problems - aren't there always?

In particular, there is no .NET cross-platform UI and .NET Core programs tended to be command line or web-based where the UI issue doesn't arise. Eventually Microsoft realized that trying to pretend that .NET Core didn't need a UI was silly and some Windows-specific modules were rolled out to allow Win32/Forms and WPF to be used to create a UI.

As this all was coming to a conclusion, Microsoft suddenly seems to have had another realization - if Azure runs Linux, why not Windows? The Windows Subsystem for Linux (WSL) was born and you could work with Linux on a machine that primarily ran Windows. Not a virtual machine, but a hosted operating system within another operating system. Future historians might well look back on this first step as the start of the fusion between Windows and Linux and indeed Microsoft software in general and open source.

For example, why would Microsoft spend money developing an HTML renderer for its own browser when there is an open-source browser engine, used by Google, just sitting around waiting to be used? The Edge browser is an example of a development strategy that I think we are going to see more of as time goes on - open source + proprietary code and services.

Now we have news that Edge is going cross-platform. And why not? Chromium is cross-platform so what is surprising? What is surprising is that Microsoft is taking another step towards Linux. Of course, it all comes with some added Microsoft flavoring:

"For developers, WebView2 will be generally available for C/C++ and .NET by the end of 2020. Once available, any Windows app will be able to embed web content with the power of Microsoft Edge and Chromium. WebView2 provides full web functionality across the spectrum of Windows apps, and it's decoupled from the OS, so you're no longer locked to a particular version of Windows.

Also, the new Microsoft Edge DevTools extension for Visual Studio Code is now generally available, enabling seamless workflow for developers as they switch contexts."

At the moment WebView2 only seems to support Windows, but Linux support in the near future would seem logical. Also notice the way that Microsoft is building a web of dependencies - Edge supports Visual Studio Code, which in turn favors Microsoft GitHub and of course Azure. It all fits together so tightly that you really wouldn't want to go to the trouble of pulling it apart.

"Starting in October, Microsoft Edge on Linux will be available to download on the Dev preview channel. When its available, Linux users can go to theMicrosoft Edge Insiders siteto download the preview channel, or they can download it from the native Linux package manager."

And while all this is going on, WSL is being expanded: support for Linux GUI apps is coming in the next few weeks. If you were determined enough, you could already get GUI apps to work, but now it's official. So I can sit down at my machine, boot Windows, and run both Windows and Linux GUI apps.

Things have come a long way. There was a time when I had to worry about which operating system I was using. I now routinely use ls in PowerShell and I've almost forgotten what the Windows dir command did. Which slash to use in pathnames isn't much of a problem any more, and I am increasingly surprised when I find that a Linux command doesn't work under Windows.
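The slash question the author mentions is a good example of how far the two worlds have converged at the tooling level. Python's pathlib, for instance, understands both separator conventions, as this small illustrative snippet shows:

```python
from pathlib import PureWindowsPath, PurePosixPath

# Windows path semantics accept forward and backward slashes interchangeably...
p = PureWindowsPath("C:/Users/me\\projects/app.py")
print(p.parts)   # ('C:\\', 'Users', 'me', 'projects', 'app.py')

# ...while POSIX semantics treat the backslash as an ordinary filename character.
q = PurePosixPath("home/me\\projects")
print(q.parts)   # ('home', 'me\\projects')
```

The backslash is still semantically meaningful only on Windows, which is why mixed-OS tools tend to standardize on forward slashes internally.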

Our current desktop hardware has enough memory and disk storage to support a mind meld of Windows and Linux - something that until relatively recently would have seemed wasteful. We are in an age of operating system bloat - get used to it and take advantage of it.

Winux here we go...



Visit link:

Winux - Windows/Linux Convergence In 2020 - iProgrammer

Matillion Partner Ecosystem Identifies Trends Driving Data Transformation Market – The Grand Junction Daily Sentinel

DENVER and MANCHESTER, England, Sept. 23, 2020 /PRNewswire/ -- Matillion, the leading provider of data transformation for cloud data warehouses (CDWs), brought together data management consulting services leaders for a Matillion partner advisory roundtable to discuss how enterprise data transformation needs are impacted by current market trends. The event, held virtually in Q3, revealed existing challenges and trends that have been accelerated by the global pandemic, along with the pressing enterprise need to access and leverage data for decision-making.

- There is increasing demand for low-code and open source solutions among different data personas. Businesses look to enable diverse roles within their organization to use data tools that help them take control of their projects, and data engineers in particular want solutions that offer both low-code and open source options. There is still a need for open source, which allows engineers to innovate with data; however, an emphasis on time to value and scalability within complex enterprise IT environments, and the need to access data across parts of a business, is driving the low-code/no-code market.

- Enterprises are balancing the need for speed with cost optimization. Before the pandemic, many businesses were looking to accelerate time to value without increasing costs. Now, enterprises need to reduce infrastructure costs in preparation for a potential recession, but they also want quick implementation of solutions that enable them to leverage their data and reduce data latency to make timely, fact-based decisions.

- Enterprises need proven tech stacks and solutions from data consultants. In an effort to help companies optimize cost and scale strategies, consultants see a need to deliver off-the-shelf solutions that will work for diverse business use cases. Data management, integration, and transformation solutions need to work well with one another to allow enterprises easier onboarding, quicker proofs of concept to demonstrate results, and faster time to value. Offering ready-made technology stacks delivers value for clients faster as data projects are scaled down to align with pressured budgets and internal competition for available resources.

- Data volumes are driving data infrastructure modernization. The mean number of data sources per organization is 400, and data volumes are growing by 63 percent per month. This has large enterprises progressing on their "cloud journey," ditching legacy systems for new approaches in data management and data integration, to avoid additional technical debt and to position themselves for economic and business recovery. Cloud-native tools are easier to use and to scale, enabling enterprises to begin work on smaller proofs of concept to get the frameworks ready for when the pace of business picks up again.

- Talent acquisition is more critical than ever. It is easier to find the right technology solutions than it is to find employees with the right skill sets. Enterprises need to attract data engineers who will implement a modern tech stack to help them derive value from the data they have spent years amassing and aggregating.

"The latest advancements in data technologies addressed enterprise needs prior to the pandemic, but there is added pressure to modernize almost overnight to cope with new and increasing challenges," said Robert Griswold, Senior Manager, Data Foundations Practice Lead at Capgemini.

"Enterprises continue to adjust to the new ways of working, and face increasing pressure to uncover data insights," said Brian Bickell, Data Practice Director at Interworks. "There is a growing need for flexible solutions that serve a remote, distributed team. Companies are doing all they can to ensure business continuity and the ability to scale to keep them moving forward during these uncertain times."

"Current market conditions present yet-unseen pressure on enterprises to mitigate costs while becoming as competitive as possible, said Matthew Scullion, CEO of Matillion. "The trends identified by global leaders in manufacturing, finance, healthcare and more underscore demand for the power of the cloud, which organically solves for modern requirements while better positioning businesses to recover from the impact of a global pandemic."

To learn more about how Matillion and its partner ecosystem support faster time to insights within the enterprise, visit: https://partners.matillion.com/. For further data transformation industry updates and perspectives, follow Matillion on Twitter @Matillion and LinkedIn at https://www.linkedin.com/company/matillion-limited/.

About Matillion

Matillion is data transformation for cloud data warehouses. Only Matillion is purpose-built for Amazon Redshift, Snowflake, Microsoft Azure Synapse, and Google BigQuery, enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Trusted by companies of all sizes to meet their data integration and transformation needs, Matillion products are highly rated across the AWS, GCP and Microsoft Azure Marketplaces. Dual-headquartered in Manchester, UK and Denver, Colorado, Matillion also has a presence in New York City and Seattle. Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation. Visit us at www.matillion.com.

Media contact

Nonfiction Agency for Matillion
Shermineh Rohanizadeh
Srohanizadeh@nonfictionagency.com
+1 949 378 6469


‘A customisable approach’ – how contact-tracing apps differ in the UK – Digital Health

The last few months have seen both Northern Ireland and Scotland work with software developer NearForm to launch their own contact-tracing apps. Digital Health News spoke to NearForm about the technology and how it works.

Northern Ireland and Scotland pipped England and Wales to the post in the contact-tracing app race, with the two countries opting to use different technology.

The NHS contact-tracing app based on Apple and Google's technology launched in England and Wales today, but Scotland and Ireland chose to go their own way, working with software developer NearForm.

"One of the key differences between what we offer and what Google offers is we offer a far more customised app solution," NearForm's technical director, Colm Harte, said.

Both apps use Bluetooth to track time and distance between devices, sending exposure notifications to devices that have spent more than 15 minutes within two metres of another user who has tested positive for Covid-19.
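The notification rule described here - more than 15 minutes within two metres of a confirmed case - can be sketched as a simple threshold check. This is purely illustrative (the names and structure below are mine, not NearForm's); real exposure-notification implementations work with Bluetooth attenuation buckets and weighted risk scores rather than literal distances:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    distance_m: float    # distance estimated from Bluetooth signal attenuation
    duration_min: float  # how long the two devices stayed at that range

def should_notify(contacts, max_distance_m=2.0, min_total_min=15.0):
    """Hypothetical sketch of the '15 minutes within two metres' rule:
    sum the time spent close enough and notify once it crosses the threshold."""
    close_time = sum(c.duration_min for c in contacts
                     if c.distance_m <= max_distance_m)
    return close_time >= min_total_min
```

For example, two close encounters of 10 and 6 minutes would trigger a notification, while an hour spent three metres away would not.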

Apple and Google offer a templated app and not a full end-to-end solution, according to Harte.

"What they are offering is the app piece, but you still have to provide the key server, the verification server and you still have to do the integration with your manual contact-tracing systems," he added.

"And that's all part of what the NearForm solution has brought to the table with different countries. We provide the back-end key server, the one-time code for the verification flows so you can trigger an upload on an app.

"If you look at either of the apps we've brought so far, they are very much tied into the language, look and feel in terms of how government is dealing with Covid-19 in their particular jurisdiction."

Harte said the customisable interface gives governments using NearForm's app more flexibility with how it plugs into their manual contact-tracing system.

This allows messaging around Covid-19 to be more consistent, he explained.

"It's using the same terminology, it's providing the same type of advice, so it's a much more customised interface for the particular population," Harte added.

"A lot of countries are using different platforms for their manual contact-tracing, so integrating the two is very important for a seamless experience as the manual contact tracing is the driving force."

The app used in the Republic of Ireland, for example, allows a user to voluntarily input their phone number, which is then provided to a member of the manual contact-tracing team should the app receive an exposure notification.

"A lot of people have opted into that feature because they want that reassurance that there is someone they can speak to if they do have a close contact alert," Harte said.

The Republic of Ireland's app, developed alongside NearForm, was launched in early July and was downloaded one million times in the first 48 hours. Ireland's Health Service Executive (HSE) later published the app's Covid Tracker code as part of an open source programme to help global public health authorities tackle the pandemic.

Northern Ireland released its app, StopCOVID NI, at the end of July followed by NHS Scotland in September. Both apps were developed alongside NearForm and were based on Irelands Covid Tracker code.

Not just technical challenges

Asked why Ireland and Scotland had seemingly been more successful at getting a digital contact-tracing solution off the ground, Harte said there are a number of challenges faced when developing apps.

"There's a lot of complexity to building these apps. There's the technical side of things but there's also a lot of organisational challenges and we've seen that with the countries we've worked with," he said.

"There's a lot of groups that have to be brought together and there's a lot of decisions that have to be made."

That process includes how you integrate the app into the overall contact-tracing process and the manual process, Harte said. Without these discussions the app is unlikely to be a success.

"Effectiveness is showing that you're breaking transmission chains, so if you're identifying people through digital contact-tracing that manual contact-tracing didn't identify, and those people turn out to be positive, you're breaking transmission chains," Harte told Digital Health News.

"That's really the purpose of what you're trying to achieve with these contact-tracing apps."

The importance of open source

Harte emphasised the role open sourcing the code for the Irish app had played in allowing other countries to further their own digital contact-tracing abilities.

NearForm's software has been used to develop contact-tracing apps in US states including Pennsylvania and Delaware; Gibraltar; and Jersey, alongside Ireland and Scotland.

"A key piece of that [NearForm's success] is when Ireland launched their app they open sourced all the code, and the Irish health authorities were very happy to share in terms of what they were doing, how they were doing it and who they were using to build their app," Harte said.

"They were also happy for other countries to leverage their source code in order to build their own app, so that obviously helped other people to accelerate what they were doing."

Ireland's HSE provided the code to the newly established Linux Foundation public health initiative, which aims to use open source software to help public health authorities around the world combat Covid-19 and future epidemics.

The code is available on GitHub.


Jitsi and Mattermost Team Up for Joint Hackathon – "Thriving in a Remote Environment" – Yahoo Finance

TipRanks

In the investing game, it's not only about what you buy; it's about when you buy it. One of the most common pieces of advice thrown around the Street, "buy low" is touted as a tried-and-true tactic.

Sure, the strategy seems simple. Stock prices naturally fluctuate on the basis of several factors like earnings results and the macro environment, amongst others, with investors trying to time the market and determine when stocks have hit a bottom. In practice, however, executing on this strategy is no easy task.

On top of this, given the volatility that has ruled the markets over the last few weeks, how are investors supposed to gauge when a name is flirting with a bottom? That's where the Wall Street pros come in.

These expert stock pickers have identified three compelling tickers whose current share prices land close to their 52-week lows. Noting that each is set to take back off on an upward trajectory, the analysts see an attractive entry point. Using TipRanks' database, we found out that the analyst consensus has rated all three a Strong Buy, with major upside potential also on tap.

Progenity (PROG)

Offering clear and actionable genetic results, Progenity specializes in providing testing services. The company started trading on Nasdaq in June and has seen its shares tumble 44% since then. With shares changing hands for $8.11, several members of the Street recommend pulling the trigger before it heats up.

Piper Sandler analyst Steven Mah points out that even against the backdrop of COVID-19, PROG managed to deliver with its Q2 2020 performance. "We are encouraged by the recovery in late Q2 2020 with 75,000 accessioned tests (~79,000 in Q1 2020), driven by noninvasive prenatal testing (NIPT) and carrier screening," the analyst noted.

Expounding on this, Mah stated, "Progenity did not provide guidance, but June test volumes of ~28,000 were strong (Q1 2020 monthly average was ~26,000), which we believe showcases the durability of its reproductive tests and the success that Progenity has in co-marketing and attaching carrier screening to the more essential NIPT. Of note, despite the pandemic disruptions, Progenity was able to maintain its leading pre-COVID test turnaround times."

Additionally, health insurer Aetna is temporarily extending coverage of average-risk NIPT until year-end as a result of the pandemic, with the American College of Obstetricians and Gynecologists (ACOG) also expected to endorse average-risk in the future given its clinical utility, in Mah's opinion.

Reflecting another positive, the fourth-generation NIPT (single-molecule counting assay) test was able to measure fetal fraction, a key milestone according to Mah, and will continue to be developed into 2021. As the technology could potentially be applied to DNA, RNA, epigenetic markers and proteins for additional clinical applications such as oncology, the analyst is looking forward to the completion of the preeclampsia verification in Q4 2020 and a possible 2H21 launch. "We believe preeclampsia (~2.3 billion serviceable market) is a major differentiator for Progenity, allowing them to cross-sell across the full continuum of reproductive testing," the analyst added.

If that wasn't enough, PROG signed its first GI Precision Medicine partnership agreement with a top-20 pharma company in August. The Oral Biotherapeutic Delivery System (OBDS), an ingestible drug and device combination designed to precisely deliver biologics systemically through a needle-free liquid jet injection into the submucosal tissues of the small intestine, is set to be utilized as part of the collaboration.

Mah commented, "We believe Progenity can sign additional Pharma deals and look forward to the newsflow coming out on this front." To sum it all up, Mah said, "We believe Progenity shares are undervalued given the robust recovery in the core testing business and multiple upcoming growth catalysts."

To this end, Mah rates PROG an Overweight (i.e. Buy) along with a $17 price target. Should his thesis play out, a twelve-month gain of 105% could potentially be in the cards. (To watch Mah's track record, click here)

Are other analysts in agreement? They are. Only Buy ratings, 4, in fact, have been issued in the last three months. Therefore, the message is clear: PROG is a Strong Buy. Given the $13.33 average price target, shares could climb 60% higher in the next year. (See PROG stock analysis on TipRanks)

Tactile Systems Technology (TCMD)

Developing at-home therapy devices, Tactile Systems Technology wants to provide new treatments for lymphedema, which occurs when the lymphatic system is impaired, disrupting normal transport of fluid within the body, and chronic venous insufficiency. Down 52% year-to-date, its $32.67 share price lands close to its $29.47 52-week low. Thus, with business trends improving, the Street is pounding the table.

Writing for Canaccord, analyst Cecilia Furlong acknowledges that the pandemic has hampered the company, with COVID-19 weighing on both volumes and sales. In the second half of March, volumes were down 50% compared to the first half of the month, and TCMD's patient volumes in April and May remained challenged. That being said, trends started to improve at the end of May.

Going forward, "given the vast majority of TCMD's clinician customers practice in outpatient or office-based settings, we remain positive on TCMD's ability to demonstrate better insulation against COVID impacts and likely experience a greater bounce-back relative to overall med-tech volume trends, with TCMD further benefitting from its expanding use of technology to remotely engage with clinicians and support patients," Furlong explained.

The analyst added, "Furthermore, recent trends among some providers to prescribe Flexitouch (an advanced intermittent pneumatic compression device to self-manage lymphedema and nonhealing venous leg ulcers) earlier along the therapy process, as a means to reduce in-person contact, could provide upside near term, as well as potentially transition to a longer-term tailwind."

On top of this, Furlong is also optimistic about new CEO Dan Reuvers and the reprioritization of the company's investment and market development efforts. TCMD will shift focus away from its acquired Airwear product line, redirecting investments toward its Flexitouch and Entre (a pneumatic compression device used to assist in the home management of chronic swelling and venous ulcers associated with lymphedema and chronic venous insufficiency) products.

"Given significant under-penetration in the lymphedema/phlebolymphedema market targeted by Flexitouch alongside the large patient population with limited treatment options today targeted by the firm's Head & Neck platform, we view the combination of education and clinical data as key to further developing and penetrating these markets... Going forward, we expect management to continue to compile a broad base of clinical data to support reimbursement and drive broad adoption," Furlong commented.

All of this prompted Furlong to keep a Buy rating and $62 price target on the stock. This target conveys her confidence in TCMD's ability to soar 90% in the next year. (To watch Furlong's track record, click here)

In general, other analysts are on the same page. With 3 Buy ratings and 1 Hold, the word on the Street is that TCMD is a Strong Buy. The $62.33 average price target brings the upside potential to 91%. (See TCMD stock analysis on TipRanks)

uniQure N.V. (QURE)

Last but not least we have uniQure, which delivers curative gene therapies that could potentially transform the lives of patients. Even though shares have fallen 44% year-to-date to $40, not much higher than the 52-week low of $36.20, multiple analysts still have high hopes.

Representing SVB Leerink, 5-star analyst Joseph Schwartz acknowledges that shares struggled after news broke of the company's collaboration and licensing agreement with CSL Behring for AMT-061, QURE's gene therapy for Hemophilia B, but he argues the shareholder-base turnover is likely now complete as investors and QURE shift focus to next-in-line AMT-130, its AAV5 gene therapy for Huntington's Disease (HD).

Schwartz further added, "With the M&A premium now out of the stock, we see QURE's current level as an attractive buying opportunity for those investors interested in the company's up-and-coming CNS gene therapies, internal manufacturing, and robust intellectual property and knowhow."

Looking more closely at the agreement with CSL Behring, QURE will be tasked with the completion of the pivotal Phase 3 HOPE-B trial as well as the manufacturing process validation and manufacturing supply of AMT-061.

According to management, 26-week Factor IX (FIX) data from all 54 patients enrolled in the trial remains on track, and topline data from the pivotal trial is still slated to read out by YE20. It should be mentioned that in a Phase 2b dose-confirmation study, QURE reported 41% FIX activity out to one year. Additionally, Schwartz points out that with HOPE-B progressing as planned, QURE has continued its manufacturing process validation work ahead of the anticipated BLA/MAA submissions in the U.S. and EU in 2021.

On top of this, as part of the deal, QURE is eligible to receive more than $2 billion, including a $450 million upfront cash payment, $1.6 billion in regulatory and commercial milestones, and double-digit royalties ranging up to the low-twenties percentage of net product sales.

"With a strengthened cash position, QURE is well funded to rapidly advance CNS assets including AMT-130 (AAV5 gene therapy for Huntington's Disease (HD)) and AMT-150 (AAV gene therapy for Spinocerebellar Ataxia Type 3/SCA3)... We continue to believe that as QURE's CNS pipeline assets mature, the company could once again be an attractive partner to larger biopharma companies that have recently acquired many publicly traded gene therapy platforms with substantial manufacturing capabilities," Schwartz noted.

Everything that QURE has going for it convinced Schwartz to reiterate an Outperform (i.e. Buy) rating. Along with the call, he attached a $67 price target, suggesting 68% upside potential from current levels. (To watch Schwartz's track record, click here)

What does the rest of the Street have to say? 9 Buys and 3 Holds have been issued in the last three months, so the consensus rating is a Strong Buy. In addition, the $69.89 average price target indicates 75% upside potential. (See QURE stock analysis on TipRanks)

To find good ideas for beaten-down stocks trading at attractive valuations, visit TipRanks' Best Stocks to Buy, a newly launched tool that unites all of TipRanks' equity insights.

Disclaimer: The opinions expressed in this article are solely those of the featured analysts. The content is intended to be used for informational purposes only. It is very important to do your own analysis before making any investment.


Quantum encryption – the devil is in the implementation – The Daily Swig

John Leyden, 23 September 2020 at 13:18 UTC. Updated: 23 September 2020 at 16:04 UTC

Implementation flaws in quantum key distribution systems can undermine claims of unhackable cryptographic security, one expert warns

Academics at the University of Bristol recently claimed to have made a breakthrough in making quantum key distribution (QKD) systems commercially viable at scale.

Using a technique known as multiplexing, the team has developed a prototype system that relies on fewer receiver boxes, potentially slashing the cost of building quantum key distribution systems currently used by only governments and large multinational banks.

However, following the recent publication of an article in The Daily Swig, Taylor Hornby, senior security engineer at Electric Coin Company, has been in touch to caution us that comparable systems have been broken in the past because of implementation problems.

"If they're claiming higher security than standard cryptography, they need evidence they're less likely to have implementation flaws," Hornby told us, before offering a lengthier explanation of his thinking (reproduced in full, with light editing) below.

It's technically correct that, when implemented correctly, quantum key distribution leverages the laws of physics to ensure that data being transmitted cannot be intercepted and hacked.

However, that "implemented correctly" is a pretty big assumption. Similar systems in the past have been broken through implementation flaws, so if the researchers are claiming higher security than standard cryptography, they need evidence they're less likely to have implementation flaws.

Everyone's almost certainly better off using normal crypto that's post-quantum secure and paying (a fraction of) the £300,000 cost to people to audit it.

A common narrative in favor of QKD is that it's more secure than conventional cryptography because it doesn't need to rely on computational difficulty assumptions (like "factoring is hard", "it's hard to find SHA256 collisions", and so forth).
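To make one of those assumptions concrete, here is a minimal hash-commitment sketch (my illustration, not something from the article): its binding property is exactly the claim that SHA-256 collisions are hard to find.

```python
import hashlib
import hmac
import os

def commit(message: bytes):
    """Commit to a message now, reveal it later. Binding rests on the
    hardness assumption in question: producing two different
    (nonce, message) pairs with the same digest is a SHA-256 collision."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + message).hexdigest()
    return digest, nonce  # publish the digest; keep (nonce, message) secret

def verify(digest: str, nonce: bytes, message: bytes) -> bool:
    candidate = hashlib.sha256(nonce + message).hexdigest()
    return hmac.compare_digest(digest, candidate)
```

If collision-finding ever became feasible for SHA-256, a committer could open the same digest to two different messages; that is the kind of computational bet QKD proponents say they avoid.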

It's true that QKD eliminates the need to rely on those computational hardness assumptions, but that comes at an additional risk of implementation flaws.

Implementations of conventional cryptography can have implementation flaws, too (e.g. Heartbleed, Zombie Poodle, and many other examples). However, they're usually just software mistakes that can be patched, and there's an industry of cryptographers and security auditors trained to find and fix them.

Over time, the flaws get found and fixed, and the implementations become more secure.

Read more of the latest encryption news

Note that it's very rare for conventional cryptography to be broken because of weaknesses in the computational hardness assumptions.

MD5 and SHA1 collisions are two examples, but consider that AES and even DES are not showing substantial signs of weakness, and even MD5 is still secure against second-preimage attacks.

Quantum systems, on the other hand, can have physical vulnerabilities that come from the fact that real single-photon detectors and other components don't behave exactly as their theoretical models predict.

In one case, researchers were able to control single-photon detectors in a QKD system by shining bright light on them (making them behave more like brightness sensors than single-photon detectors).

A defense for this attack was proposed, which was to vary the detector's efficiency randomly. The idea is that the bright light coming from an attacker will always set off the detector, but if there weren't an attack, then more photons should be lost when the efficiency is low, so the recipient can tell they're being attacked when they don't see a higher rate of lost photons.

Researchers then worked out a way around that defense: by offsetting the timing of short pulses against the timing of a gate clock in the detector, they could trigger the detector just when the efficiency was high and not when it was low, so they could simulate the expected lost photons.
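The defense and both attacks can be illustrated with a toy Monte Carlo sketch. Everything here is my own simplification (the efficiency values, function names, and detection model are illustrative, not taken from the papers), but it captures the statistical argument:

```python
import random

def channel_run(n_pulses, attack=False, attacker_times_pulses=False):
    """Toy model of the randomized-efficiency defense. The receiver secretly
    alternates the detector between a high- and a low-efficiency setting;
    honest single photons are lost more often in low-efficiency slots, so
    the loss statistics should track the efficiency."""
    detections = {"high": 0, "low": 0}
    slots = {"high": 0, "low": 0}
    for _ in range(n_pulses):
        level = random.choice(["high", "low"])
        eff = 0.9 if level == "high" else 0.1
        slots[level] += 1
        if attack and not attacker_times_pulses:
            detected = True  # blinding light always fires the detector
        elif attack and attacker_times_pulses:
            # the refined attack: pulses timed against the gate clock
            # reproduce the loss statistics the receiver expects
            detected = random.random() < eff
        else:
            detected = random.random() < eff  # honest, lossy single photons
        if detected:
            detections[level] += 1
    return {k: detections[k] / slots[k] for k in slots}

def looks_attacked(rates, expected_low=0.1, tolerance=0.05):
    # the defense flags a run whose low-efficiency slots aren't lossy enough
    return rates["low"] > expected_low + tolerance
```

Running this, naive blinding lights up every slot (a low-slot detection rate of 1.0) and is flagged, while the timed attack reproduces honest-looking loss statistics and slips past the check, which is exactly the point of the attack described above.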

These attacks are on older QKD systems, and I haven't looked into the architecture used by the researchers quoted in the article, but this shows that QKD systems can have their own kinds of physical flaws, and the risk they introduce needs to be balanced against the benefits of moving away from reliance on computational hardness assumptions.

The burden is on QKD proponents to argue that their physical devices are less likely to contain vulnerabilities than software implementations of conventional cryptography systems.

A potential way to do that is to use device-independent QKD protocols: protocols which are proven secure even when the attacker is allowed to have some control over the physical hardware.

Current designs for device-independent protocols are less efficient, however, and they still make assumptions about what the attacker is allowed to do.

Those assumptions need to be tested adversarially before we can be confident in the implementation's security.

