Create ASTC Textures Faster With the New astcenc 2.0 Open Source Compression Tool – POP TIMES UK

Adaptive Scalable Texture Compression (ASTC) is an advanced lossy texture compression format, developed by Arm and AMD and released as a royalty-free open standard by the Khronos Group. It supports a wide range of 2D and 3D color formats with a flexible choice of bitrates, enabling content creators to compress almost any texture asset using a level of compression appropriate to their quality and performance requirements.

ASTC is increasingly becoming the texture compression format of choice for mobile 3D applications using the OpenGL ES and Vulkan APIs. ASTC's high compression ratios are a perfect match for the mobile market, which values smaller download sizes and optimized memory usage to improve energy efficiency and battery life.

ASTC 2D Color Formats and Bitrates

The astcenc ASTC compression tool was first developed by Arm while ASTC was progressing through the Khronos standardization process seven years ago. astcenc has become widely used as the de facto reference encoder for ASTC, as it leverages all format features, including the full set of available block sizes and color profiles, to deliver the high-quality encoded textures that are possible when ASTC's flexible capabilities are used effectively.

Today, Arm is delighted to announce astcenc 2.0! This is a major update which provides multiple significant improvements for middleware and content creators.

The original astcenc software was released under an Arm End User License Agreement. To make it easier for developers to use, adapt, and contribute to astcenc development, including integration of the compressor into application runtimes, Arm relicensed the astcenc 1.X source code on GitHub in January 2020 under the standard Apache 2.0 open source license.

The new astcenc 2.0 source code is now also available on GitHub under Apache 2.0.

astcenc 1.X emphasized high image quality over fast compression speed. Some developers have told Arm they would love to use astcenc for its superior image quality, but that compression was too slow to use in their tooling pipelines. The importance of this was reflected in the recent ASTC developer survey organized by Khronos, where developer responses rated compression speed above image quality in the list of factors that determine texture format choices.

For version 2.0, Arm reviewed the heuristics and quality refinement passes used by the astcenc compressor, optimizing those that were adding value and removing those that simply didn't justify their added runtime cost. In addition, hand-coded vectorized code was added to the most compute-intensive sections of the codec, supporting the SSE4.2 and AVX2 SIMD instruction sets.

Overall, these optimizations have resulted in up to 3x faster compression times when using AVX2, while typically losing less than 0.1 dB PSNR in image quality. A very worthwhile tradeoff for most developers.

astcenc 2.0 Significantly Faster ASTC Encoding

The tool now supports a clearer set of compression modes that map directly to the ASTC format profiles exposed by Khronos API core specifications and extensions.

Textures compressed using the LDR compression modes (linear or sRGB) will be compatible with all hardware implementing OpenGL ES 3.2, the OpenGL ES KHR_texture_compression_astc_ldr extension, or the Vulkan ASTC optional feature.

Textures compressed using the HDR compression mode will require hardware implementing an appropriate API extension, such as KHR_texture_compression_astc_hdr.
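As a rough illustration of driving the tool from a build pipeline, here is a minimal Python sketch that shells out to the astcenc binary to compress an LDR sRGB texture. It assumes astcenc is on your PATH; the flag spellings (-cs for the LDR sRGB profile, the block footprint, and the quality preset) follow my reading of the astcenc 2.0 README, so confirm them against astcenc -help for the exact build you install.

```python
import subprocess
from pathlib import Path

def compress_ldr_srgb(src: Path, dst: Path, block_size: str = "6x6",
                      preset: str = "-medium") -> None:
    """Compress an LDR sRGB texture with the astcenc command line tool.

    Assumes the astcenc binary is on PATH; flag names may differ slightly
    between astcenc releases, so verify against `astcenc -help`.
    """
    cmd = [
        "astcenc",
        "-cs",        # compress using the LDR sRGB color profile
        str(src),     # input image, e.g. a PNG
        str(dst),     # output .astc file
        block_size,   # ASTC block footprint, e.g. 6x6 (~3.56 bits per texel)
        preset,       # speed/quality preset such as -fast, -medium, -thorough
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    compress_ldr_srgb(Path("albedo.png"), Path("albedo.astc"))
```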

In addition, astcenc 2.0 now supports a wider range of commonly requested input and output file formats.

Finally, the core codec is now separable from the command line front-end logic, enabling the astcenc compressor to be integrated directly into applications as a library.

The core codec library interface API provides a programmatic mechanism to manage codec configuration, texture compression, and texture decompression. This API enables use of the core codec library to process data stored in memory buffers, leaving file management to the application. It supports parallel processing, either compressing a single image with multiple threads or compressing multiple images in parallel.
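The library API itself is C, so a direct Python call is not shown here. As a loose stand-in for the "multiple images in parallel" use case described above, the sketch below fans a batch of textures out across several astcenc command line processes; the file paths and worker count are arbitrary, and the binary is again assumed to be on PATH.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def compress_one(src: Path) -> Path:
    """Compress a single texture by launching one astcenc process."""
    dst = src.with_suffix(".astc")
    subprocess.run(
        ["astcenc", "-cl", str(src), str(dst), "4x4", "-medium"],
        check=True,
    )
    return dst

def compress_batch(sources, workers: int = 4):
    """Compress many textures in parallel, one process per image."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_one, sources))

if __name__ == "__main__":
    outputs = compress_batch(sorted(Path("textures").glob("*.png")))
    print(f"compressed {len(outputs)} textures")
```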

You can download astcenc 2.0 on GitHub today, with full source code and pre-built binaries available for Windows, macOS, and Linux hosts.

For more information about using the tool, please refer to the project documentation on GitHub.

Arm has also published an ASTC guide, which gives an overview of the format and some of the available tools, including astcenc.

If you have any questions, feedback, or pull requests, please get in touch via the GitHub issue tracker or the Arm Mali developer community forums.



How to find anyone anywhere with online facial recognition – E&T Magazine

Is DIY facial recognition the new privacy threat? Plus back to college, and the E&T Innovation Awards go virtual

Facial recognition technology is turning up in ever more applications, from the useful, like unlocking smartphones, and the fun, like Facebook tagging, to the essential, like crime detection, or the life-saving, like prevention of terrorism.

Our faces too are photographed, filmed and sometimes clocked almost everywhere we go. We post them ourselves, on social media or elsewhere on the web. How many images of your face does your name yield in Google Images? Mine turns up a few dozen, half of them appearing at the top of this monthly column. I've hardly aged! It's not as many as the Queen or David Beckham, but then I manage to avoid the paparazzi and rarely post selfies (I made an exception when I met Giorgio Moroder at CES in January).

I don't really want my phizog everywhere online, for no particular reason really except vague, probably irrational worries about security and privacy. I would be more concerned if I lived in a more repressive regime, especially if I was looking to change it.

So how easy is it to search the internet and find anyone anywhere? Policies on facial recognition vary widely around the world; some governments employ it freely themselves while others are more cautious about citizen privacy. Some allow private companies more free rein than others. And the tech giants can themselves also be cautious about the implications of allowing anyone to find any face anywhere on the web. Upload a picture of yourself to Google Images and it will produce people with similar clothing or backgrounds but probably not you.

Yet it may only be a matter of time before the genie is really out of the bottle, because it is so very easy. Facial-recognition technology is freely available as open-source code packages. Ben Heubl tried it for E&T. It was scarily or satisfyingly efficient, depending on your viewpoint. We also tried it with a picture of Lord Lucan to see if we could solve that mystery. Most of the matches were taken before his disappearance, but it also flagged up James Coburn in A Fistful of Dynamite as a match. Who would have guessed? Yes, it works, but it's not perfect.
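For the curious, this is roughly what those open-source packages look like in practice. The sketch below uses the widely available face_recognition Python library (built on dlib); it is purely illustrative, the file names are invented, and it is not necessarily the tool E&T used.

```python
import face_recognition  # open-source package built on dlib

# A reference photo of the person we are looking for.
known_image = face_recognition.load_image_file("reference_face.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A downloaded photo we want to check against the reference.
candidate_image = face_recognition.load_image_file("candidate_photo.jpg")

for encoding in face_recognition.face_encodings(candidate_image):
    # Compare each face found in the candidate photo with the reference face.
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {match}, distance: {distance:.2f}")
```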

Also in this issue, we start a new regular feature with TV presenter Dr Shini Somara interviewing some extraordinary engineers about their careers, influences and aspirations. First we hear from Clare Elwell, who develops optical monitoring and imaging systems for medicine at UCL, about what makes her tick.

Next month we'll be revealing the shortlists for the E&T Innovation Awards. Fingers crossed if you've entered, and if you haven't, well, do it next year! It will be a virtual event this year, but there are some exciting ideas in the pipeline and we aim to make it bigger and better than the real live event. Stay tuned and, as the autumn approaches, stay safe.



Three takeaways from a visit to TikToks new transparency center – The Verge

In July, amid increasing scrutiny from the Trump administration, TikTok announced a novel effort to build trust with regulators: a physical office known as the Transparency and Accountability Center. The center would allow visitors to learn about the company's data storage and content moderation practices, and even to inspect the algorithms that power its core recommendation engine.

"We believe all companies should disclose their algorithms, moderation policies, and data flows to regulators," then-TikTok CEO Kevin Mayer said at the time. "We will not wait for regulation to come."

Regulation came a few hours later. President Trump told reporters on Air Force One that he planned to ban TikTok from operating in the United States, and a few days later he did. The president set a deadline for ByteDance to sell TikTok by September 15th, that is, this coming Tuesday, and Mayer quit after fewer than 100 days on the job. (The deadline has since been changed to November 12th, but also Trump said today that the deadline is also still Tuesday? Help?)

With so much turmoil, you might expect the company to set aside its efforts to show visitors its algorithms, at least temporarily. But the TikTok Transparency and Accountability Center is now open for (virtual) business, and on Wednesday I was part of a small group of reporters who got to take a tour over Zoom.

Much of the tour functioned as an introduction to TikTok: what it is, where it's located, and who runs it. ("It's an American app, located in America, run by Americans" was the message delivered.) We also got an overview of the app's community guidelines, its approach to child safety, and how it keeps data secure. All of it is basically in keeping with how American social platforms manage these concerns, though it's worth noting that 2-year-old TikTok built this infrastructure much faster than its predecessors did.

More interesting was the section where Richard Huang, who oversees the algorithm responsible for TikTok's addictive For You page, explained to us how it works. For You is the first thing you see when you open TikTok, and it reliably serves up a feed of personalized videos that leaves you saying "I'll just look at one more of these" for 20 minutes longer than you intended. Huang told us that when a new user opens TikTok, the algorithm fetches eight popular but diverse videos to show them. Sara Fischer at Axios has a nice recap of what happens from there:

The algorithm identifies similar videos to those that have engaged a user based on video information, which could include details like captions, hashtags or sounds. Recommendations also take into account user device and account settings, which include data like language preference, country setting, and device type.

Once TikTok collects enough data about the user, the app is able to map a user's preferences in relation to similar users and group them into clusters. Simultaneously, it also groups videos into clusters based on similar themes, like basketball or bunnies.

As you continue to use the app, TikTok shows you videos in clusters that are similar to ones you have already expressed interest in. And the next thing you know, 80 minutes have passed.
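To make the cluster-then-recommend idea concrete, here is a toy Python sketch, an illustration only and nothing like TikTok's actual system: videos are embedded from their captions and hashtags, grouped with k-means, and a user is then served unwatched videos from the clusters they have already engaged with. All titles and numbers are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy catalogue: "video information" here is just caption and hashtag text.
videos = [
    "basketball dunk highlights #nba",
    "three point shooting drill #basketball",
    "bunny eating a tiny pancake #rabbit",
    "my bunny does a flip #cute #rabbit",
    "street basketball crossover #hoops",
    "rabbit zoomies compilation #bunnies",
]

# Embed the videos from their metadata and group them into theme clusters.
embeddings = TfidfVectorizer().fit_transform(videos)
video_cluster = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Engagement signal: indices of videos this user watched to the end.
watched = [0, 4]
liked_clusters = {video_cluster[i] for i in watched}

# Serve unwatched videos from the clusters the user already engaged with.
recommendations = [
    videos[i]
    for i in range(len(videos))
    if i not in watched and video_cluster[i] in liked_clusters
]
print(recommendations)  # likely more basketball, fewer bunnies, for this toy user
```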

Eventually the transparency center will be a physical location that invited guests can visit, likely both in Los Angeles and in Washington, DC. The tour will include some novel hands-on activities, such as using the company's moderation software, called Task Crowdsourcing System, to evaluate dummy posts. Some visitors will also be able to examine the app's source code directly, TikTok says.

I think this is great. Trust in technology companies has been in decline, and allowing more people to examine these systems up close feels like a necessary step toward rebuilding it. If you work at a tech company and ever feel frustrated by the way some people discuss algorithms as if they're magic spells rather than math equations, well, this is how you start to demystify them. (Facebook has a similar effort to describe what you'll find in the News Feed here; I found it vague and overly probabilistic compared to what TikTok is offering. YouTube has a more general guide to how the service works, with fairly sparse commentary on how recommendations function.)

Three other takeaways from my day with TikTok:

TikTok is worried about filter bubbles. Facebook has long denied that it creates filter bubbles, saying that people find a variety of diverse viewpoints on the service. That's why I was interested to hear from TikTok executives that they are quite concerned about the issue, and are regularly refining their recommendation algorithm to ensure you see a mix of things. "Within a filter bubble, there's an informational barrier that limits opposing viewpoints and the introduction of diverse types of content," Huang said. "So, our focus today is to ensure that misinformation and disinformation does not become concentrated in users' For You page."

The problems are somewhat different on the two networks (Facebook is primarily talking about ideological diversity, where TikTok is more concerned with promoting different types of content), but I still found the distinction striking. Do social networks pull us into self-reinforcing echo chambers, or don't they?

TikTok is building an incident command center in Washington, DC. The idea is to be able to identify critical threats in real time and respond quickly, the company said, which feels particularly important during an election year. I don't know how big a deal this is, exactly; for the time being, it sounds like it could just be some trust and safety folks working in a shared Slack channel? But the effort does have an undeniably impressive and redundant official name: a "monitoring, response and investigative fusion response center." OK!

You can't prove a negative. TikTok felt compelled to design these guided tours amid fears that the app would be used to share data with Chinese authorities or promote Communist Party propaganda to Americans. (Ben Thompson has a great, subscribers-only interview with the New York Times' Paul Mozur that touches on these subjects today.) The problem with the tour, though, is that you can't show TikTok not doing something. And I wonder if that won't make the transparency center less successful than the company hoped.

I asked Michael Beckerman, a TikTok vice president and head of US public policy, about that challenge.

"That's why we're trying to be even more transparent: we're meeting and talking to everybody that we can," Beckerman told me. "What a lot of people are saying, people that are really well read into global threats, is that TikTok doesn't rank. So if you're spending too much time worrying about TikTok, what are you missing?"

Oh, I can think of some things.

Anyway, TikTok's transparency center is great, a truly forward-leaning effort from a young company. Assuming TikTok survives beyond November, I'd love to visit it in person sometime.

Today in news that could affect public perception of the big tech platforms.

Trending up: Google is giving more than $8.5 million to nonprofits and universities using artificial intelligence and data analytics to better understand the coronavirus crisis, and its impact on vulnerable communities. (Google)

Russian government hackers have targeted 200 organizations tied to the 2020 presidential election in recent weeks, according to Microsoft's threat intelligence team. China has also launched cyberattacks against high-profile individuals linked to Joe Biden's campaign, while Iranian actors have targeted people associated with President Trump's campaign. Dustin Volz at The Wall Street Journal has the story:

Most of the attempted intrusions haven't been successful, and those who were targeted or compromised have been directly notified of the malicious activity, Microsoft said. Russian, Chinese and Iranian officials didn't immediately respond to a request for comment.

The breadth of the attacks underscore widespread concerns among U.S. security officials and within Silicon Valley about the threat of foreign interference in the presidential election less than two months away. [...]

The Russian actor tracked by Microsoft is affiliated with a military intelligence unit and is the same group that hacked and leaked Democratic emails during the 2016 presidential contest. In addition to political consultants and state and national parties, its recent targets have included advocacy organizations and think tanks, such as the German Marshall Fund, as well as political parties in the U.K., Microsoft said.

What's the worst thing that could happen the night of the US presidential election? Experts have a few ideas. Misinformation campaigns about voter fraud, disputed results, and Russian interference are all possible scenarios. (The New York Times)

Voting machines have a bad reputation, but most of their problems are actually pretty minor and unlikely to impair a fair election. They're often the result of ancient technology, not hacking. (Adrianne Jeffries / The Markup)

Google said it will remove autocomplete predictions that seem to endorse or oppose a candidate or a political party, or that make claims about voting. The move is an attempt to improve the quality of information available on Google before the election. (Anthony Ha / TechCrunch)

Trump is considering nominating a senior adviser at the National Telecommunications and Information Administration who helped draft the administration's social media executive order to the Federal Communications Commission. Nathan Simington is known for supporting Republicans' "bias against conservatives" schtick, and helped to craft a recent executive order about social media. (Makena Kelly / The Verge)

A network of Facebook pages is spreading misinformation about the 2020 presidential election, funneling traffic through an obscure right-wing website, then amplifying it with increasingly false headlines. The artificial coordination might break Facebook's rules. (Popular Information)

Facebook is re-evaluating its approach to climate misinformation. The company is working on a climate information center, which will display information from scientific sources, although nothing has been officially announced. It will look beautiful sandwiched in between the COVID-19 information center and the voter information center. (Sarah Frier / Bloomberg)

Facebook reviews user data requests through its law enforcement portal manually, without screening the email address of people who request access. The company prefers to let anyone submit a request and then check that it's real, rather than block them with an automated system. (Lorenzo Franceschi-Bicchierai / Vice)

QAnon is attracting female supporters because the community isn't as insular as other far-right groups, this piece argues. That might be a bigger factor in its ability to convert women than the "save the children" content. (Annie Kelly / The New York Times)

China's embassy in the UK is demanding Twitter open an investigation after its ambassador's official account liked a pornographic clip on the platform earlier this week. The embassy said the tweets were liked by a possible hacker who had gained access to the ambassador's account. That's what they all say! (Makena Kelly / The Verge)

GitHub has become a repository for censored documents during the coronavirus crisis. Internet users in China are repurposing the open source software site to save news articles, medical journals, and personal accounts censored by the Chinese government. (Yi-Ling Liu / Wired)

Brazil is trying to address misinformation issues with a new bill that would violate the privacy and freedom of expression of its citizens. If it passes, it could be one of the most restrictive internet laws in the world. (Raphael Tsavkko Garcia / MIT Technology Review)

Former NSA chief Keith Alexander has joined Amazon's board of directors. Alexander served as the public face of US data collection during the Edward Snowden leaks. Here's Russell Brandom at The Verge:

Alexander is a controversial figure for many in the tech community because of his involvement in the widespread surveillance systems revealed by the Snowden leaks. Those systems included PRISM, a broad data collection program that compromised systems at Google, Microsoft, Yahoo, and Facebook, but not Amazon.

Alexander was broadly critical of reporting on the Snowden leaks, even suggesting that reporters should be legally restrained from covering the documents. "I think it's wrong that that newspaper reporters have all these documents, the 50,000-whatever they have and are selling them and giving them out as if these you know it just doesn't make sense," Alexander said in an interview in 2013. "We ought to come up with a way of stopping it. I don't know how to do that. That's more of the courts and the policymakers but, from my perspective, it's wrong to allow this to go on."

Facebook launched a new product called Campus, exclusively for college students. It's a new section of the main app where students can interact only with their peers, and it requires a .edu address to access. I say open it up to everyone. Worked last time! (Ashley Carman / The Verge)

Ninja returned to Twitch with a new exclusive, multiyear deal. Last August, he left Twitch for an exclusive deal with Mixer, which shut down at the end of June. (Bijan Stephen / The Verge)

The Social Dilemma, the new Netflix documentary about the ills of big tech platforms, seems unclear on what exactly makes social media so toxic. It also oversimplifies the impact of social media on society as a whole. (Arielle Pardes / Wired)

You can make a deepfake without any coding experience in just a few hours. One of our reporters just did! (James Vincent / The Verge)

Stuff to occupy you online during the quarantine.

Choose your own election adventure. Explore some worst-case scenarios with this, uh, fun new game from Bloomberg.

Subscribe to The Verge's new weekly newsletter about the pandemic. Mary Beth Griggs' Antivirus brings you news from the vaccine and treatment fronts, and stories that remind us that there's more to the case counts than just numbers.

Subscribe to Kara Swisher's new podcast for the New York Times. The first episode of her new interview show drops later this month.

Watch The Social Dilemma. The new social-networks-are-bad documentary is now on Netflix. People are talking about it!

Send us tips, comments, questions, and an overview of how your algorithms work: casey@theverge.com and zoe@theverge.com.


Top 15 DevOps blogs to read and follow – TechTarget

Between the culture, processes, tools and latest trends, there is a lot to know about DevOps -- and there is no shortage of content across the internet that covers it all.

Depending on your DevOps interests and perspectives, that might be a good thing. For many, it's overwhelming to parse through all the case studies on adoption, technical recommendations and tutorials, product reviews, trends and latest news.

Don't get lost on the web. Check out these top 15 DevOps blogs from experienced developers, consultants, vendors and thought leaders across the industry.

The Agile Admin blog covers topics such as DevOps, Agile, cloud computing, infrastructure automation and open source, to name a few. It is run by sysadmins and developers Ernest Mueller, James Wickett, Karthik Gaekwad and Peco Karayanev. Beginners should start with this blog's thorough introduction to DevOps before they dive into deeper discussions and more technical subjects, such as site reliability engineering (SRE), monitoring and observability.

Apiumhub is a software development company in Barcelona. Its Apiumhub blog looks at Agile web and app development, industry trends, tools and, of course, DevOps. Readers will find expertise from the company's DevOps pros as well as tips from contributors. The blog discusses best practices for technical DevOps processes and includes other resources, such as a list of DevOps experts to follow.

Atlassian is a software company that offers products for software development, project management, collaboration and code quality. It also produces a bimonthly newsletter and blog site called Work Life, which includes a section about DevOps. The DevOps blog posts cover subjects such as DevSecOps, CI/CD integrations, compliance and toolchains. Also included are surveys with DevOps professionals. Be aware that some of the content is designed to align with the various products that Atlassian offers.

Microsoft's Azure DevOps blog does not post a lot of its own tutorials or tips, but instead publishes Azure updates and a weekly top stories roundup from across the web. While aimed primarily at Microsoft users, the blog offers useful insights for most anyone. The content changes weekly, sometimes with different themes that will expand a reader's knowledge.

The Capital One Tech blog posts go beyond DevOps to cover enterprise technology across the board, but regularly looks into the company's DevOps journey. With posts from Capital One's software engineers, this blog creatively breaks down its commitment to DevOps, from its pipeline design to creation of the open source, end-to-end monitoring tool Hygieia.

A diary of sorts for the online marketplace Etsy, Code as Craft publicly discusses the tools it uses, its software projects and experience with public cloud infrastructure. The blog posts are not laser-focused on DevOps, but are generally informative and review Etsy's experiments, some that might be outside the scope of the average developer.

The DevOpsGroup offers services to help enterprises adopt and maintain DevOps and the cloud. The company's blog focuses on the people behind DevOps, and tackles subjects such as burnout and the role of a scrum master. It also offers technical tutorials, such as how to set up Puppet -- as well as broader overviews, such as how to select CI/CD tools.

DZone, a site geared toward software developers, evolved from CEO Rick Ross' Javalobby to now cover 14 topics, or "zones," such as AI, cloud and Java, and a DevOps Zone. Tools, tutorials, news, oh my! Readers can find content about everything from Docker and Kubernetes to continuous delivery/continuous deployment and testing packaged in articles, webinars, research reports, short technical walkthroughs called "Refcardz," and even comics. Go in with a specific query or be prepared to spend time browsing.

The Everything DevOps Reddit thread is not technically a blog, but it is a valuable source for anyone interested in DevOps. With numerous threads daily, from Q&As to tips from DevOps practitioners, there is something for everyone. Readers will learn about the latest trends and practices for monitoring pipelines, mastering new DevOps skills and more.

Sean Hull, a DevOps and cloud solutions architect, runs his iHeavy blog with a more personable approach to talk about and teach DevOps. His posts present step-by-step tutorials and technical recommendations, but he also doles out advice for the less mechanical aspects of DevOps. For example, check out this post on how to handle people who say they are not paying invoices during a pandemic. The iHeavy blog also has a tab for CTO/CIO topics and advice for startups.

Gene Kim, author of The Phoenix Project and The Unicorn Project, founded IT Revolution to share DevOps practices and processes with the growing DevOps community. IT Revolution's blog posts dive into the culture of DevOps, with articles on leadership, communication and Agile practices. Blog authors include Kim and software development experts Jeffrey Fredrick and Douglas Squirrel. In addition to the blog, IT Revolution publishes books, hosts events and supports research projects.

Agile coach and software engineer Mark Shead creates animated videos that explain basic DevOps and Agile concepts. These entertaining videos cover principles such as Agile transformation, methodology and user stories in short bursts -- perfect for people looking to dip their toes into DevOps or those who want a quick refresher.

This is another blog that provides an expansive look into how a large enterprise applies DevOps practices and principles to its operations. The Netflix Tech blog features posts written by Netflix's data scientists and software and site reliability engineers, giving readers a first-hand behind-the-screen view of one of the biggest streaming services. Topics span the enterprise's operations, from machine learning to data infrastructure, but there's plenty for DevOps fans to appreciate, such as Netflix's experiences with incident management, application monitoring and continuous delivery -- which includes some in-house tools, such as Spinnaker, which is now open source.

The Scott Hanselman blog is run by Scott Hanselman, a programmer, teacher, speaker and member of the web platform team at Microsoft. It averages about four posts a month -- going back as far as 2002 -- that vary from product reviews to Docker tutorials. This DevOps blog includes screenshots of what readers can expect to see when they try it for themselves as well as code examples.

Stackify is an application performance management vendor, but its DevOps blog covers information for anyone looking to learn more about DevOps with any software. It offers readers tricks, tips and resources, digging into best practices for broad topics such as adoption, implementation and security. It also provides detailed information for beginners on popular platforms, such as AWS, Kubernetes and Azure Container Service.


DeFi Unlocked: How to Earn Crypto Investment Income on Compound – Cryptonews


DeFi (decentralized finance) has emerged as the hottest crypto trend of the year, with the total amount of US dollars locked in decentralized finance protocols surpassing USD 9bn on September 1 before correcting lower.

In our new DeFi Unlocked series, we will introduce you to the different ways you can earn investment income in the booming decentralized finance market.

In part one of our DeFi series, you will discover how to earn interest on cryptoassets by placing them into the Compound (COMP) money market protocol.

Compound is an autonomous, open-source interest rate protocol that allows cryptoasset users to borrow and lend digital assets on the Ethereum (ETH) blockchain.

Compound supports BAT, DAI, ETH, USDC, USDT, WBTC, and ZRX, and currently pays interest rates from 0.19% to 8.06%. Prevailing interest rates vary from asset to asset and depend on market conditions.

The idea behind Compound is to enable anyone across the globe with an internet connection to borrow or lend funds in the form of cryptoassets. To date, however, the protocol - like effectively all DeFi apps - has mostly been used by experienced crypto users and investors hunting for yield.

Earning interest on your crypto holdings is easy on Compound, provided you are comfortable using MetaMask or a similar Ethereum client that can interact with dapps (decentralized apps).

To start earning crypto interest on Compound, you will need to take the following steps:

First, open the Compound website and click on App at the top right to access the protocol dashboard.

Then, connect to the protocol with your Ethereum wallet (MetaMask, Coinbase Wallet, or Ledger).

Once connected, you will be able to view all available assets and their borrowing and lending rates.

The stablecoins - DAI, USDC, and USDT - are favorites among DeFi investors, as they ensure that the principal of the loan retains its value while investors cash in on the interest and - in the case of Compound - also on the COMP token.

Click on the asset you want to deposit, enter the amount you wish to lend, and confirm the transaction. At the time of writing, for example, a USDC deposit earned 1.96% APY (annual percentage yield).
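The steps above use the web dashboard, but the same supply operation can be scripted against the protocol's smart contracts. The sketch below is illustrative only and assumes web3.py (v6-style calls): the RPC URL, account details and contract addresses are placeholders to be filled in from Compound's documentation and your own wallet, and the ABIs are trimmed to the two calls used.

```python
from web3 import Web3

# Placeholders: fill these in from Compound's docs and your own wallet setup.
RPC_URL = "https://mainnet.infura.io/v3/<project-id>"
USDC_ADDRESS = "0x..."   # the USDC ERC-20 token contract
CUSDC_ADDRESS = "0x..."  # Compound's cUSDC market contract
MY_ADDRESS = "0x..."
PRIVATE_KEY = "..."

# Minimal ABI fragments covering only the two calls this sketch needs.
ERC20_ABI = [{"name": "approve", "type": "function", "stateMutability": "nonpayable",
              "inputs": [{"name": "spender", "type": "address"},
                         {"name": "amount", "type": "uint256"}],
              "outputs": [{"name": "", "type": "bool"}]}]
CTOKEN_ABI = [{"name": "mint", "type": "function", "stateMutability": "nonpayable",
               "inputs": [{"name": "mintAmount", "type": "uint256"}],
               "outputs": [{"name": "", "type": "uint256"}]}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
usdc = w3.eth.contract(address=USDC_ADDRESS, abi=ERC20_ABI)
cusdc = w3.eth.contract(address=CUSDC_ADDRESS, abi=CTOKEN_ABI)

def send(tx):
    """Sign a prepared transaction, broadcast it, and wait for the receipt."""
    signed = w3.eth.account.sign_transaction(tx, PRIVATE_KEY)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    return w3.eth.wait_for_transaction_receipt(tx_hash)

amount = 100 * 10**6  # 100 USDC (USDC uses 6 decimals)
nonce = w3.eth.get_transaction_count(MY_ADDRESS)

# 1. Allow the cUSDC market to pull USDC from the wallet.
send(usdc.functions.approve(CUSDC_ADDRESS, amount).build_transaction(
    {"from": MY_ADDRESS, "nonce": nonce}))

# 2. Supply USDC to Compound; interest-bearing cUSDC is received in return.
send(cusdc.functions.mint(amount).build_transaction(
    {"from": MY_ADDRESS, "nonce": nonce + 1}))
```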

In addition to the interest earned on USDC, you also receive COMP tokens as an incentive to provide liquidity to the protocol.

Compound users receive COMP tokens for interacting with the protocol. That means regardless of whether you are borrowing or lending, you will receive COMP tokens proportional to the amount you borrow or lend and dependent on the day's COMP distributions.

As a result, some creative investors have started to lend one stablecoin and borrow in another - at a small loss in terms of interest rate differential - to earn enough COMP to generate a profit. This form of liquidity mining became popular following Compound's introduction of the COMP governance token in May 2020.

While the returns from liquidity mining COMP have compressed, the interest you can earn on the US dollar-backed stablecoins USDT and USDC is still higher than what you would receive on a savings account at your local bank.

As a result, Compound provides an alternative to existing savings accounts and money market fund solutions in the legacy financial system.

The Compound protocol has been running since 2018. However, that does not mean that code vulnerabilities couldn't be found and exploited. Of course, that is the case for all DeFi protocols and not specific to Compound. To increase its security, Compound offers bug bounties to anyone who can find a vulnerability in its open-source code.

There is also the risk of a bank run on Compound. A bank run - in traditional finance - refers to depositors withdrawing their cash when they fear it may no longer be safe in their bank. In the context of Compound, bank run risk exists because of the protocol's utilization rate, which refers to how much of depositors' funds is lent out to borrowers. So if all depositors were to withdraw their funds in one go, that would be an issue.

To address this concern, Compound adjusts its interest rates according to borrowing activity to entice more depositors to place funds into the protocol.
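A simplified sketch of that mechanism, using a generic linear rate model rather than Compound's exact on-chain parameters (the base rate, slope and reserve factor below are invented for illustration): as utilization rises, the borrow rate and, with it, the supply rate rise, which is what draws new deposits in.

```python
def utilization(cash: float, borrows: float, reserves: float = 0.0) -> float:
    """Fraction of supplied funds currently lent out to borrowers."""
    if borrows == 0:
        return 0.0
    return borrows / (cash + borrows - reserves)

def borrow_rate(util: float, base: float = 0.02, slope: float = 0.20) -> float:
    """Linear rate model: the cost of borrowing grows with utilization."""
    return base + slope * util

def supply_rate(util: float, reserve_factor: float = 0.10) -> float:
    """Suppliers earn the borrow interest scaled by utilization, minus the
    share the protocol keeps as reserves."""
    return borrow_rate(util) * util * (1 - reserve_factor)

# As more of the pool is borrowed, both rates rise and depositing becomes
# more attractive, pulling in the fresh liquidity described above.
for cash, borrows in [(900, 100), (500, 500), (100, 900)]:
    u = utilization(cash, borrows)
    print(f"utilization={u:.0%}  borrow={borrow_rate(u):.2%}  supply={supply_rate(u):.2%}")
```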

Finally, there is also some opportunity cost to holding assets in Compound that you can't trade for a potentially better-performing asset in the secondary market. Having said that, you can withdraw your deposits from Compound and convert them into other cryptoassets.

Compound is one of the most established borrowing and lending protocols in today's DeFi landscape. However, interest rates for most available lendable assets on Compound are not going to make you a crypto millionaire anytime soon. While the COMP governance token provides an additional financial incentive to deposit tokens into the protocol, Compound is arguably more of a protocol for investors than for opportunistic traders.


The Spanish app to control the pandemic releases its code – Explica

Radar COVID is the Spanish app for controlling the pandemic caused by the coronavirus, the one we have been immersed in since last March, which is no small thing. To be more exact, it is the app with which the Government will try to improve the tracing of infections and predict possible outbreaks. It is nothing new, because similar strategies have long been deployed in half the world, but it is the approach Spain has bet on.

The app, available for Android and iOS, has been in test mode for a few weeks, and it is expected to launch nationwide on September 15. However, even though officials hope the application will be used en masse for the help it can provide, and international trials back its usefulness in dealing with the spread of COVID-19, Radar COVID remains a tracking system, and one managed by the government itself.

In other words, it is the kind of invention that does not inspire confidence in terms of privacy, which is why SEDIA (the Secretary of State for Digitalization and Artificial Intelligence), the body in charge of developing Radar COVID, promised to release the source code of all the software in order to provide the transparency that only open source offers. And it has delivered: Radar COVID can now be considered free software.

The description of Radar COVID is concise and insistent on the subject of privacy:

Radar COVID notifies you anonymously of possible contact you have had in the last 14 days with a person who has been infected, using Bluetooth Low Energy technology.

Radar COVID also allows you to:

Anonymously communicate your positive diagnosis
Anonymously communicate the exposure to people with whom you have been in contact

Radar COVID guarantees security and privacy and is 100% anonymous. For this reason, we do not request your name, your telephone number, or your email.

Radar COVID also states that it does not collect any geolocation, GPS or time-related data. Therefore, the stored information is used only for warning possible risky contacts, not, in principle, to identify anyone. In the end, releasing the source code is the way to guarantee that everything claimed is as stated. And there is no reason to doubt that this is the case.


Radar COVID, in fact, is not a development from scratch, but relies on the Apple and Google exposure notification APIs for the coronavirus and other open technologies. All the components of the project are available on GitHub, including the server software and the mobile applications for Android and iOS, which can already be downloaded from Google Play and the App Store, respectively, although, as noted, it does not yet operate throughout the national territory.

As for the license, the one chosen is the Mozilla Public License 2.0, so we can call it both free software and open source software.

Whatever the controversy over the privacy implications that this release of the Radar COVID code is meant to mitigate, the truth is that the main obstacles the application faces in its attempt to reach as many homes as possible are discovery and the fact that it is optional. That is, people have to know about the application and install it. And although it is surely being promoted through every possible channel, you have to make the effort to find and install it, which is voluntary.

Not only that: the application uses Bluetooth and, as you would expect, is always active in the background, with the increase in battery consumption this entails. Then there is the handling of the application itself, since it is not just a matter of having it installed and activated: it is essential to actually use it (its operation is very simple) and, for example, if you are diagnosed positive for COVID-19, to raise the alarm voluntarily. It remains to be seen whether adoption becomes as massive as hoped.

Despite this, and also despite the fact that its deployment at the national level has not yet been completed, Radar COVID already has several million installations and a positive average rating in the two large mobile application stores.


Why Cloud-Based Architectures and Open Source Don’t Always Mix – ITPro Today

By some measures, open source has been wildly successful in the cloud. Open source solutions like Kubernetes have eaten closed-source alternatives for lunch. Yet, in other respects, open source within the cloud has been a complete failure. Cloud-based architectures continue to pose fundamental problems for achieving open source's founding goals of protecting user freedom. For many organizations, using the cloud means surrendering control to proprietary solutions providers and facing stiff lock-in risks.

These observations raise the question: Why hasn't open source been more influential in the cloud, and what could be done to make cloud computing more friendly toward open source?

From the early days of the cloud era, there has been a tension between open source and the cloud.

When free and open source software first emerged in the 1980s under the auspices of Richard Stallman and the GNU project, the main goal (as Stallman put it at the time) was to make software source code available to anyone who wanted it so that users could use computers without dishonor and operate in solidarity with one another.

If you run software on a local device, having access to the source code achieves these goals. It ensures that you can study how the program works, share modifications with others and fix bugs yourself. As long as source code is available and you run software on your own device, software vendors cannot divide the users and conquer them.

But this calculus changes fundamentally when software moves to cloud-based architectures. In the cloud, the software that you access as an end user runs on a device that is controlled by someone else. Even if the source code of the software is available (which it's usually not in the case of SaaS platforms, although it theoretically could be), someone else--specifically, whoever owns the server on which the software runs--gets to control your data, decide how the software is configured, decide when the software will be updated, and so on. There is no solidarity among end users, and no equity between end users and software providers.

Stallman and other free software advocates realized this early on. By 2010, Stallman was lamenting the control that users surrendered when they used cloud-based software, and coining terms like "Service as a Software Substitute" to mock SaaS architectures. They also introduced the Affero General Public License, which aims to extend the protections of the GNU General Public License (the mainstay free software license) to applications that are hosted over the network.

The fruits of these efforts were mediocre at best. Stallman's pleas to users not to use SaaS platforms have done little to stem the explosive growth of the cloud since the mid-2000s. Today, it's hard to think of a major software platform that isn't available via a SaaS architecture, or to find an end user who shies away from SaaS over software freedom concerns.

And although the Affero license has gained traction, its ability to advance the cause of free and open source software in the cloud is limited. The Affero license's main purpose is to ensure that software vendors can't claim that cloud-based software is not distributed to users, and therefore not subject to the provisions of traditional open source licenses, like the GPL. That's better than nothing, but it does little to address issues related to control over data, software modifications and the like that users face when they use cloud-based services.

Thus, cloud-based architectures continue to pose fundamental challenges to the foundational goals of free and open source software. It's hard to envision a way to resolve these challenges, and even harder to imagine them disappearing in a world where cloud adoption remains stronger than ever.

You can tell the story of open source in the cloud in another, more positive way. Viewed from the perspective of certain niches, like private cloud and cloud-native infrastructure technologies, open source has enjoyed massive success.

I'm thinking here about projects like Kubernetes, an open source application orchestration platform that has become so dominant that it doesn't even really have competition anymore. When even VMware, whose virtual machine orchestration tools compete with Kubernetes, is now running its own Kubernetes distribution, you know Kubernetes has won the orchestrator wars.

OpenStack, a platform for building private clouds, has been a similar success story for open source on cloud-based architectures. Perhaps it hasn't wiped the floor with the competition as thoroughly as Kubernetes did, but OpenStack nonetheless remains a highly successful, widely used solution for companies seeking to build private clouds.

You can draw similar conclusions about Docker, an open source containerization platform that has become the go-to solution for companies that want a more agile and resource-efficient solution than proprietary virtual machines.

And even in cases where companies do want to build their clouds with plain-old virtual machines, KVM, the open source hypervisor built into Linux, now holds its own against competing VM platforms from vendors like VMware and Microsoft.

When it comes to building private (or, to a lesser extent, hybrid) cloud-based infrastructures, then, open source has done very well during the past decade. Ten years ago, you would have had to rely on proprietary tools to fill the gaps in which platforms like Kubernetes, OpenStack, Docker and KVM have now become de facto solutions.

Open source appears less successful, however, when you look at the public cloud. Although the major public clouds offer SaaS solutions for platforms like Kubernetes and Docker, they tend to wrap them up in proprietary extensions that make these platforms feel less open source than they actually are.

Meanwhile, most of the core IaaS and SaaS services in the public clouds are powered by closed-source software. If you want to store data in Amazon S3, or run serverless functions in Azure Functions, or spin up a continuous delivery pipeline in Google Cloud, you're going to be using proprietary solutions whose source code you will never see. That's despite the fact that open source equivalents for many of these services exist (such as Qinling, a serverless function service, or Jenkins, for CI/CD).

The consumer side of the cloud market is dominated by closed-source solutions, too. Although open source alternatives to platforms like Zoom and Webex exist, they have received very little attention, even in the midst of panic over privacy and security shortcomings in proprietary collaboration platforms.

One obvious objection to running more open source software in the cloud is that cloud services cost money to host, which makes it harder for vendors to offer open source solutions that are free of charge. It's easy enough to give away Firefox for people to install on their own computers, because users provide their own infrastructure. But it would be much more expensive to host an open source equivalent to Zoom, which requires an extensive and expensive infrastructure.

I'd argue, however, that this perspective reflects a lack of imagination. There are alternatives to traditional, centralized cloud infrastructure. Distributed, peer-to-peer networks could be used to host open source cloud services at a much lower cost to the service provider than a conventional IaaS infrastructure.

I'd point out, too, that many proprietary cloud services are free of cost. In that sense, the argument that SaaS providers need to recoup their infrastructure expenses, and therefore can't offer free and open source solutions, doesn't make a lot of sense. If Zoom can be free of cost for basic usage, there is no reason it can't also be open source.

Admittedly, making more cloud services open source would not solve the fundamental issue discussed above regarding the control that users surrender when they run code on a server owned by someone else. But it would at least provide users with some ability to understand how the SaaS applications or public cloud IaaS services they use work, as well as greater opportunity to extend and improve them.

Imagine a world in which the source code for Facebook or Gmail were open, for example. I suspect there would be much less concern about privacy issues, and much greater opportunity for third parties to build great solutions that integrate with those platforms, if anyone could see the code.

But, for now, these visions seem unrealistic. There is little sign that open source within the cloud will break out beyond the private cloud and application deployment niches where it already dominates. And that's a shame for anyone who agrees with Linus Torvalds that software, among other things, is better when it's free.


The Government releases the source code of its Radar COVID tracking app and publishes it on GitHub – Explica

On September 1, SEDIA (the Secretary of State for Digitalization and Artificial Intelligence) announced the imminent code release of its controversial Radar COVID mobile tracking app, with which it intends to trace the contacts of coronavirus patients and thus be able to detect other infected people in good time.

They then scheduled the release for today, September 9, justifying it on grounds of transparency and their intention that the community could help improve the app, although many are now wondering why wait until such an advanced stage of its rollout if they really wanted the developer community to help out.

The delay in publishing the code, given that it had already been said at the beginning of August that the final version was intended to be open source, is due to the fact that the government wanted to wait until all the autonomous communities (CCAA) that requested it had integrated it into their systems.

But transparency is a fundamental aspect of this code release, as many voices had criticized the possible malicious uses the government could make of information as sensitive as the complete list of app users each of us had crossed paths with in the last week.

Now, having the code enables programming experts to take a look under the hood of the application, in order to confirm whether the actual handling of personal data observable in the app code matches what SEDIA previously explained, as well as to rule out the existence of hidden functionality.

The code has been available on GitHub for a few minutes, divided into five repositories, one for each piece of software that makes up the tracking system:

The applications for users: both versions of the app (iOS and Android) are developed entirely in Kotlin.

The DP-3T server: This software, developed in Java, is a fork of the original DP-3T.

The verification service server: This software, developed in Java, allows the autonomous communities (CCAA) to request verification codes to provide to COVID-19 patients.

The configuration service server: This software, developed in Java, allows user applications to obtain information about the Autonomous Communities and the available languages.

As a negative point, it is striking that the application documentation (with instructions that allow you, for example, to compile the apps on your own computer) is entirely in English.

Radar COVID bases its operation on an API developed jointly by Google and Apple, based in turn on a European protocol developed at the Swiss Federal Institute of Technology by a team led by the Spanish engineer Carmela Troncoso.

That protocol, called DP-3T after the acronym of its full name in English, Decentralized Privacy-Preserving Proximity Tracing, has had its operation, as well as that of the API, explained in detail by our colleagues at Xataka.
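As a rough, simplified sketch of the decentralized idea (not the exact DP-3T specification, whose key derivation and upload format differ): each phone derives short-lived identifiers from a daily secret key and broadcasts them over Bluetooth; if a user tests positive, only their daily keys are published, and every other phone re-derives the identifiers locally and checks them against what it overheard, so no central server learns who met whom.

```python
import hashlib
import hmac
import os

EPHIDS_PER_DAY = 96  # e.g. one rotating identifier every 15 minutes

def new_daily_key() -> bytes:
    """Each device draws a fresh random secret key for the day (simplified)."""
    return os.urandom(32)

def ephemeral_ids(daily_key: bytes) -> list:
    """Derive the day's rotating Bluetooth identifiers from the daily key.
    Real DP-3T specifies its own PRF construction; HMAC-SHA256 stands in here."""
    return [
        hmac.new(daily_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for i in range(EPHIDS_PER_DAY)
    ]

# Phone A broadcasts its identifiers; phone B records the ones it overhears.
key_a = new_daily_key()
overheard_by_b = set(ephemeral_ids(key_a)[10:14])  # a short encounter

# Later, A tests positive and publishes only its daily key(s).
published_keys = [key_a]

# B re-derives the identifiers locally and checks for a match; no central
# server ever sees B's contact list, identity or location.
exposed = any(
    eid in overheard_by_b
    for key in published_keys
    for eid in ephemeral_ids(key)
)
print("exposure detected:", exposed)
```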

DP-3T is subject to an open source license, the Mozilla Public License 2.0, which allows the code to be reused in applications under other licenses, both proprietary ones and those with a clear commitment to free software (such as the GNU license used, for example, by Linux).

And that license, MPL 2.0, is also the one chosen by SEDIA to release the Radar COVID code, despite the fact that there is already another license (the EUPL, or European Union Public License) created precisely to make it easier for EU public administrations to release the code of their technological developments.



Microsoft makes its Fluid Framework open source, the TypeScript library for creating web applications with real-time collaboration – Explica

After promising last May that the Fluid Framework would become open source, Microsoft has finally released the code and posted it on GitHub. The library was very well received at Build 2019, Microsoft's developer conference.

The idea behind the Fluid Framework is to offer developers a platform for creating low-latency collaborative experiences around documents, in the same way that Microsoft itself is using it within Office applications.

The goal is to enable applications that allow a user to make changes in the browser, such as adding comments, editing text, or pressing a button, and have the rest of the collaborating users see those changes almost instantly.

It is something like offering a framework for developers to create applications in the style of Google Docs with collaboration in near real time, but with even more features.

Additionally, this Microsoft technology enables developers to leverage a client-centric application model with persistent data that does not require writing custom server-side code.

All documentation is available at fluidframework.com, although due to the huge traffic the site is experiencing, it has been working intermittently. In addition, there are some demos available at fluidframework.com/playground, among them a small puzzle game in which thousands of people made changes to the puzzle in real time, and each user could see the thousands of edits and updates that the others made.



Remote Work Doesn’t Have to Mean All-Day Video Calls – Harvard Business Review

The Covid-19 crisis has distanced people from the workplace, and employers have generally, if sometimes reluctantly, accepted that people can work effectively from home. As if to compensate for this distancing and keep the workplace alive in a virtual sense, employers have also encouraged people to stick closely to the conventional workday. The message is that working from home is fine and can even be very efficient as long as people join video calls along with everyone else all through the day.

But employees often struggle with the workday when working from home, because many have to deal with the competing requests coming from their family, also housebound. So how effective really is working from home if everyone is still working to the clock? Is it possible to ditch the clock?

The answer seems to be that it is. Since before the pandemic, we've been studying the remote work practices of the tech company GitLab to explore what it might look like if companies were to break their employees' chronological chains as well as their ties to the physical workplace.

From its foundation in 2014, GitLab has maintained an all-remote staff that now comprises more than 1,300 employees spread across over 65 countries. The "git way" of working uses tools that let employees work on ongoing projects wherever they are in the world and at their preferred time. The idea is that because it's always 9 to 5 somewhere on the planet, work can continue around the clock, increasing aggregate productivity. That sounds good, but a workforce staggered in both time and space presents unique coordination challenges with wide-ranging organizational implications.

The most natural way to distribute work across locations is to make it modular and independent, so that there is little need for direct coordination: workers can be effective without knowing how their colleagues are progressing. This is why distributed work can be so effective for call centers and in patent evaluation. But this approach has its limits in development and innovation related activities, where the interdependencies between components of work are not always easy to see ahead of time.

For this kind of complex work, co-location with ongoing communication is often a better approach because it offers two virtues: synchronicity and media richness. The time lag in the interaction between two or more individuals is almost zero when they are co-located, and, although the content of the conversation may be the same in both face-to-face and virtual environments, the technology may not be fully able to convey soft social and background contextual cues (how easy is it to sense other people's reactions in a group Zoom meeting?).

All this implies that simply attempting to replicate online (through video or voice chat) what happened naturally in co-located settings is unlikely to be a winning or complete strategy. Yet this "seeing the face" approach is the one that people seem to default to when forced to work remotely, as our survey of remote working practices in the immediate aftermath of lockdowns around the world has revealed.

There is a way through this dilemma. Our earlier research on offshoring of software development showed that drawing on tacit coordination mechanisms, such as a shared understanding of work norms and context, allows for coordination without direct communication.

Coordination in this case happens through the observation of the action of other employees and being able to predict what they will do and need based on shared norms. It can occur either synchronously (where, for instance, two people might work on the same Google doc during the same time period), or asynchronously (when people make clear hand-offs of the document, and do not work on it when the other is).

Software development organizations often opt for this solution and tend to rely extensively on shared repositories and document authoring tools, with systems for coordinating contributions (e.g., continuous integration and version control tools). But GitLab is quite unique in the for-profit sector in how extensively it relies on this third path not only for its coding but for how the organization itself functions. It leans particularly on asynchronous working because its employees are distributed across multiple time zones. As a result, although the company does use videoconferencing, almost no employee ever faces a day full of video meetings.

At the heart of the engineering work that drives GitLab's product development is the git workflow process invented by Linux founder Linus Torvalds. In this process, a programmer making a contribution to a codebase forks (copies) the code, so that it is not blocked for other users, works on it, and then makes a merge request to have the edited version replace the original; this new version then becomes available for further contributions.

The process combines the possibility of distributed asynchronous work with a structure that checks for potential coordination failures and ensures clarity on decision rights. Completely electronic (which makes remote work feasible) and fully documented, it has become an important framework for distributed software development in both for-profit and open source contexts.
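
The fork-and-merge-request flow described above can be made concrete with a short sketch. The snippet below, written in Python purely for illustration, wraps the git commands a contributor might run; the repository URL, directory, and branch names are hypothetical examples rather than anything taken from GitLab's own setup.

```python
# A minimal sketch of the fork-and-merge-request flow, assuming git is
# installed and the (hypothetical) repository URL is reachable.
import subprocess

def git(*args):
    """Run a git command and raise an error if it fails."""
    subprocess.run(["git", *args], check=True)

# 1. Copy the code so that working on it does not block other users.
git("clone", "https://example.com/team/project.git")

# 2. Work on an isolated branch inside the copy.
git("-C", "project", "checkout", "-b", "update-docs")
# ... edit files in project/ ...
git("-C", "project", "commit", "-am", "Update the contributing guide")

# 3. Publish the branch; the merge request itself is then opened on the
#    hosting platform, where it waits for review before the edited version
#    replaces the original.
git("-C", "project", "push", "origin", "update-docs")
```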

GitLab has taken the git workflow a step further, applying it also to managerial work that involves ambiguity and uncertainty. For instance, GitLab's chief marketer recently outlined a vision for integrating video into the company's year-ahead strategy. He requested asynchronous feedback from across the company within a fixed time window, and then scheduled a single synchronous meeting to agree on a final version of the vision. The vision triggered asynchronous changes from multiple contributors to the company's handbook pages relating to marketing objectives and key results, which were merged on completion.

GitLab's high degree of reliance on asynchronous working is made possible by respecting the following three rules right down to the task level:

1. Separate responsibility for doing the task from the responsibility for declaring it done.

In co-located settings, where employees are in the same office, easy communication and social cues allow them to efficiently resolve ambiguities and manage conflict around work responsibilities and remits. In remote settings, however, this can be difficult. In GitLab, therefore, every task is expected to have a Directly Responsible Individual (DRI), who is responsible for the completion of the task and has freedom in how it should be performed.

The DRI, however, does not get to decide whether the task has been completed. That function is the responsibility of a Maintainer, who has the authority to accept or reject the DRI's merge requests. Clarity on these roles for every task helps reduce confusion and delays, and enables multiple DRIs to work in parallel, in any way they want, on different parts of the code by making local copies (forking). It is the Maintainer's role to avoid unnecessary changes and maintain consistency in the working version of the document or code.
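
In git terms, the Maintainer's accept-or-reject decision roughly corresponds to reviewing the DRI's published branch and either merging it into the shared mainline or leaving it unmerged and offering feedback. Continuing the hypothetical names from the earlier sketch, and assuming a default branch called main, the Maintainer's side might look like this:

```python
# Sketch of the Maintainer's side of the workflow, using the same
# hypothetical repository and branch names as the earlier example.
import subprocess

def git(*args):
    subprocess.run(["git", *args], check=True)

# Fetch the DRI's proposed branch and inspect the changes it introduces.
git("-C", "project", "fetch", "origin", "update-docs")
git("-C", "project", "diff", "main...origin/update-docs")

# Accept: merge the proposal so it becomes the new working version.
git("-C", "project", "checkout", "main")
git("-C", "project", "merge", "--no-ff", "origin/update-docs")
git("-C", "project", "push", "origin", "main")

# Reject: simply leave the branch unmerged and give the DRI feedback instead.
```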

In a non-software context, say in developing the GitLab handbook page on expenses policies, individual DRIs, who could be anyone in the company, would write specific policies in any way they choose, and their contributions would be accepted or rejected by the CFO acting in the capacity of Maintainer, who could also offer feedback (but not direction) to the DRIs. Once live, the merged page serves as the single source of truth on expenses policies unless or until someone else makes a new proposal. Once more, the Maintainer would approve, reject, or offer feedback on the new proposal. In contexts like this, we would expect people in traditional management positions to serve as Maintainers.

2. Respect the minimum viable change principle.

When coordination is asynchronous, there is a risk that coordination failures may go undetected for too long: for instance, two individuals may be working in parallel on the same problem, making one of their efforts redundant, or one person may be making changes that are incompatible with the efforts of another. To minimize this risk, employees are urged to submit the minimum viable change: an early-stage, imperfect version of their suggested changes to code or documents. This makes it more likely that people will pick up on whether work is incompatible or being duplicated. Obviously, a policy of minimum viable changes should come with a "no shame" policy on delivering temporarily imperfect output. In remote settings, the value of knowing what the other person is doing as soon as possible is greater than the value of getting the perfect product.

3. Always communicate publicly.

As GitLab team members are prone to say, "we do not send internal email here." Instead, employees post all questions and share all information on their teams' Slack channels, and team leaders later decide what information needs to be permanently visible to others. If so, it gets stored in a place available to everyone in the company: in an issue document or on a page in the company's online handbook, which is accessible to anyone, in or outside the company. This rule means that people don't run the risk of duplicating, or even inadvertently destroying, the work of their colleagues. Managers devote a lot of time to curating the information generated through the work of the employees they supervise, and are expected to know better than others what information may be broadly needed by a future team or useful to people outside the company.

However well implemented, asynchronous remote working of this kind cannot supply much in the way of social interaction. That's a major failing, because social interaction is not only a source of pleasure and motivation for most people; it is also where random encounters, the serendipitous exchanges by the coffee machines and lift lobbies, create opportunities for ideas and information to flow and recombine.

To minimize this limitation, GitLab provides occasions for non-task-related interaction. Each day, team members may attend one of three optional social calls, staggered to be inclusive of time zones. The calls consist of groups of 8-10 people in a video chatroom, where they are free to discuss whatever they want (GitLab provides a daily starting question as an icebreaker in case it is needed, such as "What did you do over the weekend?" or "Where is the coolest place you ever traveled and why?").

In addition, GitLab has social Slack groups: thematic chat rooms that employees with similar interests can participate in (such as #cat, #dogs, #cooking, #mental_health_aware, #daily_gratitude, and #gaming), as well as a #donut_be_strangers channel that lets employees who have not yet met get together for a coffee chat.

Of course, GitLab managers are under no illusion that these groups substitute perfectly for the kinds of rich social interactions outside work that people find rewarding. But they do help to keep employees connected, and, at a time when many employees have been working under confinement rules, this has proved very helpful in sustaining morale.

***

Working from home in an effective way goes beyond just giving employees a laptop and a Zoom account. It encompasses practices intended to compensate for, or avoid, the core limitations of working remotely, as well as to fully leverage the flexibility that remote work can offer: working not only from anywhere but at any desired time. We have focused on GitLab not only because it has extensive experience in remote working but also because it pursues an unusual mode of solving the intrinsic challenges of remote work. While some of GitLab's core processes (like its long, remote onboarding process for new hires) and advantages (like the possibility of hiring across the world) cannot be fully reproduced in the short run by companies that will be only temporarily remote, there are others that any company can easily implement.

Go here to see the original:

Remote Work Doesn't Have to Mean All-Day Video Calls - Harvard Business Review