Create ASTC Textures Faster With the New astcenc 2.0 Open Source Compression Tool – POP TIMES UK

Adaptive Scalable Texture Compression (ASTC) is an advanced lossy texture compression format, developed by Arm and AMD and released as a royalty-free open standard by the Khronos Group. It supports a wide range of 2D and 3D color formats with a flexible choice of bitrates, enabling content creators to compress almost any texture asset at a level of compression appropriate to their quality and performance requirements.

ASTC is increasingly becoming the texture compression format of choice for mobile 3D applications using the OpenGL ES and Vulkan APIs. ASTC's high compression ratios are a perfect match for the mobile market, which values smaller download sizes and optimized memory usage to improve energy efficiency and battery life.

ASTC 2D Color Formats and Bitrates

The astcenc ASTC compression tool was first developed by Arm while ASTC was progressing through the Khronos standardization process seven years ago. astcenc has since become the de facto reference encoder for ASTC, as it leverages all format features, including the full set of available block sizes and color profiles, to deliver the high-quality encoded textures that ASTC's flexible capabilities make possible.

Today, Arm is delighted to announce astcenc 2.0! This is a major update which provides multiple significant improvements for middleware and content creators.

The original astcenc software was released under an Arm End User License Agreement. To make it easier for developers to use, adapt, and contribute to astcenc development, including integration of the compressor into application runtimes, Arm relicensed the astcenc 1.X source code on GitHub in January 2020 under the standard Apache 2.0 open source license.

The new astcenc 2.0 source code is now also available on GitHub under Apache 2.0.

astcenc 1.X emphasized high image quality over fast compression speed. Some developers have told Arm they would love to use astcenc for its superior image quality, but compression was too slow to use in their tooling pipelines. The importance of this was reflected in the recent ASTC developer survey organized by Khronos where developer responses rated compression speed above image quality in the list of factors that determine texture format choices.

For version 2.0, Arm reviewed the heuristics and quality refinement passes used by the astcenc compressor, optimizing those that were adding value and removing those that simply didn't justify their added runtime cost. In addition, hand-coded vectorized code was added to the most compute-intensive sections of the codec, supporting the SSE4.2 and AVX2 SIMD instruction sets.

Overall, these optimizations have resulted in up to 3x faster compression times when using AVX2, while typically losing less than 0.1 dB PSNR in image quality. A very worthwhile tradeoff for most developers.
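To put that 0.1 dB figure in context, PSNR is computed from the mean squared error between the original and decoded pixels. A minimal Python sketch for 8-bit image data (an illustration of the metric, not part of astcenc itself):

```python
import math

def psnr(original, encoded, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, encoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no error, infinite PSNR
    return 10.0 * math.log10((max_value ** 2) / mse)

# Example: a tiny 4-pixel "image" and a slightly lossy version of it
ref = [10, 200, 30, 128]
enc = [11, 198, 30, 129]
print(round(psnr(ref, enc), 2))
```

A drop of 0.1 dB on scores typically in the 35-50 dB range is, as the article says, a small price for a 3x speedup.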

astcenc 2.0 Significantly Faster ASTC Encoding

The tool now supports a clearer set of compression modes that map directly to the ASTC format profiles exposed by Khronos API core support and API extensions.

Textures compressed using the LDR compression modes (linear or sRGB) will be compatible with all hardware implementing OpenGL ES 3.2, the OpenGL ES KHR_texture_compression_astc_ldr extension, or the Vulkan ASTC optional feature.

Textures compressed using the HDR compression mode will require hardware implementing an appropriate API extension, such as KHR_texture_compression_astc_hdr.

In addition, astcenc 2.0 now supports a number of commonly requested input and output file formats.

Finally, the core codec is now separable from the command line front-end logic, enabling the astcenc compressor to be integrated directly into applications as a library.

The core codec library interface API provides a programmatic mechanism to manage codec configuration, texture compression, and texture decompression. This API enables use of the core codec library to process data stored in memory buffers, leaving file management to the application. It supports parallel processing, either compressing a single image with multiple threads or compressing multiple images concurrently.
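The multi-image case described above amounts to fanning independent compression jobs out across a worker pool. A minimal Python sketch of that pattern, using a stand-in `compress_block` function rather than the real astcenc library calls:

```python
from concurrent.futures import ThreadPoolExecutor

def compress_block(pixels):
    # Stand-in for a real codec call: here we just "compress" each
    # 4-pixel group down to its average value.
    return [sum(pixels[i:i + 4]) // 4 for i in range(0, len(pixels), 4)]

def compress_images_parallel(images, workers=4):
    """Compress several in-memory images concurrently, one task per image."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_block, images))

images = [[0, 0, 0, 0, 255, 255, 255, 255], [10, 20, 30, 40]]
results = compress_images_parallel(images)
```

Because each image's compression is independent, this scales naturally in a build pipeline; the single-image multi-threaded mode mentioned above instead splits one image's blocks across threads.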

You can download astcenc 2.0 on GitHub today, with full source code and pre-built binaries available for Windows, macOS, and Linux hosts.

For more information about using the tool, please refer to the project documentation:

Arm has also published an ASTC guide, which gives an overview of the format and some of the available tools, including astcenc.

If you have any questions, feedback, or pull requests, please get in touch via the GitHub issue tracker or the Arm Mali developer community forums:



Microsoft Teams: Now you can use it with GitHub in this new public beta – ZDNet

Microsoft-owned GitHub has announced the public beta of a new GitHub integration with Microsoft Teams.

The public beta means developers using GitHub now have the option of adding the GitHub app to the Microsoft Teams app, just as they've been able to do with the Slack chat for several years.

GitHub and Slack teamed up in 2018 to bring GitHub to Slack to make it easier for teams to track GitHub activity in Slack channels.


The GitHub and Microsoft Teams integration, which is maintained by GitHub, offers similar functionality as the Slack integration but for Teams channels.

"The GitHub integration for Microsoft Teams gives you and your teams full visibility into your GitHub projects right in your Teams channels, where you generate ideas, triage issues and collaborate with other teams to move projects forward," GitHub explains.

GitHub users can install the GitHub preview app from the Microsoft Teams app store within the Teams app. Users need to link GitHub and Teams accounts by authenticating to GitHub using a @github sign-in command.

GitHub for Teams allows users to track and create new commits, pull requests, issues, status updates, comments and code reviews.

GitHub users can subscribe and unsubscribe to notifications for an organization's or a repository's activity to keep notifications relevant.

GitHub highlights a feature that lets users 'unfurl' GitHub links to give others in a Microsoft Teams channel more information when they share links to GitHub activities, such as pull requests.

The app groups notifications for pull requests and issues under a parent card as replies. The parent card shows the latest of these issues along with information about the title, assignees, reviewers, labels and checks.


The GitHub and Teams integration should be good news for the portion of GitHub's 30 million developer users who also rely on Teams for collaboration.

Microsoft meanwhile has been busy releasing new features for Microsoft Teams, which as of April had 75 million daily active users. The latest feature it released for Teams was the new Lists app, which offers Teams users a spreadsheet format with a focus on collaboration and completing tasks.


How to find anyone anywhere with online facial recognition – E&T Magazine

Is DIY facial recognition the new privacy threat? Plus back to college, and the E&T Innovation Awards go virtual

Facial recognition technology is turning up in ever more applications from the useful, like unlocking smartphones, and the fun, like Facebook tagging, to the essential, like crime detection, or the life-saving, like prevention of terrorism.

Our faces too are photographed, filmed and sometimes clocked almost everywhere we go. We post them ourselves, on social media or elsewhere on the web. How many images of your face does your name yield in Google Images? Mine turns up a few dozen, half of them appearing at the top of this monthly column. I've hardly aged! It's not as many as the Queen or David Beckham, but then I manage to avoid the paparazzi and rarely post selfies (I made an exception when I met Giorgio Moroder at CES in January).

I don't really want my phizog everywhere online, for no particular reason really except vague, probably irrational worries about security and privacy. I would be more concerned if I lived in a more repressive regime, especially if I was looking to change it.

So how easy is it to search the internet and find anyone anywhere? Policies on facial recognition vary widely around the world; some governments employ it freely themselves while others are more cautious about citizen privacy. Some allow private companies more free rein than others. And the tech giants can themselves also be cautious about the implications of allowing anyone to find any face anywhere on the web. Upload a picture of yourself to Google Images and it will produce people with similar clothing or backgrounds but probably not you.

Yet it may only be a matter of time before the genie is really out of the bottle, because it is so very easy. Facial-recognition technology is freely available as open-source code packages. Ben Heubl tried it for E&T. It was scarily or satisfyingly efficient, depending on your viewpoint. We also tried it with a picture of Lord Lucan to see if we could solve that mystery. Most of the matches were taken before his disappearance, but it also flagged up James Coburn in A Fistful of Dynamite as a match. Who would have guessed? Yes, it works, but it's not perfect.
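For the curious, open-source face-matching packages typically work by reducing each face image to a numeric embedding vector and declaring a match when two embeddings are close enough. A toy Python sketch of that comparison step (the vectors and threshold here are illustrative, not output from any real model):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(emb_a, emb_b, threshold=0.6):
    # Pipelines tune the threshold to trade false matches
    # against missed matches.
    return euclidean(emb_a, emb_b) < threshold

query = [0.11, 0.52, 0.33]            # embedding of the face we search for
candidate_same = [0.14, 0.50, 0.31]   # same person, different photo
candidate_other = [0.80, 0.10, 0.95]  # a different face
```

The Lord Lucan false positive above is exactly what a too-generous threshold (or an unusual photo) produces.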

Also in this issue, we start a new regular feature with TV presenter Dr Shini Somara interviewing some extraordinary engineers about their careers, influences and aspirations. First we hear from Clare Elwell, who develops optical monitoring and imaging systems for medicine at UCL, about what makes her tick.

Next month we'll be revealing the shortlists for the E&T Innovation Awards. Fingers crossed if you've entered, and if you haven't, well, do it next year! It will be a virtual event this year, but there are some exciting ideas in the pipeline and we aim to make it bigger and better than the real live event. Stay tuned and, as the autumn approaches, stay safe.



Three takeaways from a visit to TikToks new transparency center – The Verge

In July, amid increasing scrutiny from the Trump administration, TikTok announced a novel effort to build trust with regulators: a physical office known as the Transparency and Accountability Center. The center would allow visitors to learn about the company's data storage and content moderation practices, and even to inspect the algorithms that power its core recommendation engine.

"We believe all companies should disclose their algorithms, moderation policies, and data flows to regulators," then-TikTok CEO Kevin Mayer said at the time. "We will not wait for regulation to come."

Regulation came a few hours later. President Trump told reporters on Air Force One that he planned to ban TikTok from operating in the United States, and a few days later he did. The president set a deadline for ByteDance to sell TikTok by September 15th (that is, this coming Tuesday), and Mayer quit after fewer than 100 days on the job. (The deadline has since been changed to November 12th, but Trump also said today that the deadline is still Tuesday? Help?)

With so much turmoil, you might expect the company to set aside its efforts to show visitors its algorithms, at least temporarily. But the TikTok Transparency and Accountability Center is now open for (virtual) business and on Wednesday I was part of a small group of reporters who got to take a tour over Zoom.

Much of the tour functioned as an introduction to TikTok: what it is, where it's located, and who runs it. ("It's an American app, located in America, run by Americans" was the message delivered.) We also got an overview of the app's community guidelines, its approach to child safety, and how it keeps data secure. All of it is basically in keeping with how American social platforms manage these concerns, though it's worth noting that 2-year-old TikTok built this infrastructure much faster than its predecessors did.

More interesting was the section where Richard Huang, who oversees the algorithm responsible for TikTok's addictive For You page, explained to us how it works. For You is the first thing you see when you open TikTok, and it reliably serves up a feed of personalized videos that leaves you saying "I'll just look at one more of these" for 20 minutes longer than you intended. Huang told us that when a new user opens TikTok, the algorithm fetches eight popular but diverse videos to show them. Sara Fischer at Axios has a nice recap of what happens from there:

The algorithm identifies similar videos to those that have engaged a user based on video information, which could include details like captions, hashtags or sounds. Recommendations also take into account user device and account settings, which include data like language preference, country setting, and device type.

Once TikTok collects enough data about the user, the app is able to map a user's preferences in relation to similar users and group them into clusters. Simultaneously, it also groups videos into clusters based on similar themes, like basketball or bunnies.

As you continue to use the app, TikTok shows you videos in clusters that are similar to ones you have already expressed interest in. And the next thing you know, 80 minutes have passed.
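The cluster matching described in that recap can be sketched in a few lines of Python. This is an illustration of the general idea only, not TikTok's actual implementation; the taste vectors and cluster names are invented:

```python
def dot(a, b):
    """Similarity score between two taste vectors."""
    return sum(x * y for x, y in zip(a, b))

def nearest_cluster(user_vec, centroids):
    # Pick the video cluster whose centroid best matches the
    # user's accumulated engagement signals.
    return max(centroids, key=lambda cid: dot(user_vec, centroids[cid]))

def recommend(user_vec, centroids, videos_by_cluster, k=2):
    cid = nearest_cluster(user_vec, centroids)
    return videos_by_cluster[cid][:k]

centroids = {"basketball": [1.0, 0.0], "bunnies": [0.0, 1.0]}
videos = {"basketball": ["dunk.mp4", "crossover.mp4"], "bunnies": ["hop.mp4"]}
user = [0.9, 0.2]  # engagement skewed toward basketball-like videos
print(recommend(user, centroids, videos))
```

The self-reinforcing loop is visible even in this toy: every basketball video you watch pushes the user vector further toward the basketball centroid, which is why the filter-bubble discussion below matters.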

Eventually the transparency center will be a physical location that invited guests can visit, likely both in Los Angeles and in Washington, DC. The tour will include some novel hands-on activities, such as using the company's moderation software, called Task Crowdsourcing System, to evaluate dummy posts. Some visitors will also be able to examine the app's source code directly, TikTok says.

I think this is great. Trust in technology companies has been in decline, and allowing more people to examine these systems up close feels like a necessary step toward rebuilding it. If you work at a tech company and ever feel frustrated by the way some people discuss algorithms as if they're magic spells rather than math equations, well, this is how you start to demystify them. (Facebook has a similar effort to describe what you'll find in the News Feed here; I found it vague and overly probabilistic compared to what TikTok is offering. YouTube has a more general guide to how the service works, with fairly sparse commentary on how recommendations function.)

Three other takeaways from my day with TikTok:

TikTok is worried about filter bubbles. Facebook has long denied that it creates filter bubbles, saying that people find a variety of diverse viewpoints on the service. That's why I was interested to hear from TikTok executives that they are quite concerned about the issue, and are regularly refining their recommendation algorithm to ensure you see a mix of things. "Within a filter bubble, there's an informational barrier that limits opposing viewpoints and the introduction of diverse types of content," Huang said. "So, our focus today is to ensure that misinformation and disinformation does not become concentrated in users' For You page."

The problems are somewhat different on the two networks (Facebook is primarily talking about ideological diversity, where TikTok is more concerned with promoting different types of content), but I still found the distinction striking. Do social networks pull us into self-reinforcing echo chambers, or don't they?

TikTok is building an incident command center in Washington, DC. The idea is to be able to identify critical threats in real time and respond quickly, the company said, which feels particularly important during an election year. I don't know how big a deal this is, exactly; for the time being, it sounds like it could just be some trust and safety folks working in a shared Slack channel? But the effort does have an undeniably impressive and redundant official name: a monitoring, response and investigative fusion response center. OK!

You can't prove a negative. TikTok felt compelled to design these guided tours amid fears that the app would be used to share data with Chinese authorities or promote Communist Party propaganda to Americans. (Ben Thompson has a great, subscribers-only interview with the New York Times' Paul Mozur that touches on these subjects today.) The problem with the tour, though, is that you can't show TikTok not doing something. And I wonder if that won't make the transparency center less successful than the company hoped.

I asked Michael Beckerman, a TikTok vice president and head of US public policy, about that challenge.

"That's why we're trying to be even more transparent. We're meeting and talking to everybody that we can," Beckerman told me. "What a lot of people are saying, people that are really well read into global threats, is that TikTok doesn't rank. So if you're spending too much time worrying about TikTok, what are you missing?"

Oh, I can think of some things.

Anyway, TikTok's transparency center is great: a truly forward-leaning effort from a young company. Assuming TikTok survives beyond November, I'd love to visit it in person sometime.

Today in news that could affect public perception of the big tech platforms.

Trending up: Google is giving more than $8.5 million to nonprofits and universities using artificial intelligence and data analytics to better understand the coronavirus crisis, and its impact on vulnerable communities. (Google)

Russian government hackers have targeted 200 organizations tied to the 2020 presidential election in recent weeks, according to Microsoft's threat intelligence team. China has also launched cyberattacks against high-profile individuals linked to Joe Biden's campaign, while Iranian actors have targeted people associated with President Trump's campaign. Dustin Volz at The Wall Street Journal has the story:

Most of the attempted intrusions haven't been successful, and those who were targeted or compromised have been directly notified of the malicious activity, Microsoft said. Russian, Chinese and Iranian officials didn't immediately respond to a request for comment.

The breadth of the attacks underscores widespread concerns among U.S. security officials and within Silicon Valley about the threat of foreign interference in the presidential election less than two months away. [...]

The Russian actor tracked by Microsoft is affiliated with a military intelligence unit and is the same group that hacked and leaked Democratic emails during the 2016 presidential contest. In addition to political consultants and state and national parties, its recent targets have included advocacy organizations and think tanks, such as the German Marshall Fund, as well as political parties in the U.K., Microsoft said.

What's the worst thing that could happen the night of the US presidential election? Experts have a few ideas. Misinformation campaigns about voter fraud, disputed results, and Russian interference are all possible scenarios. (The New York Times)

Voting machines have a bad reputation, but most of their problems are actually pretty minor and unlikely to impair a fair election. They're often the result of ancient technology, not hacking. (Adrianne Jeffries / The Markup)

Google said it will remove autocomplete predictions that seem to endorse or oppose a candidate or a political party, or that make claims about voting. The move is an attempt to improve the quality of information available on Google before the election. (Anthony Ha / TechCrunch)

Trump is considering nominating a senior adviser at the National Telecommunications and Information Administration who helped draft the administration's social media executive order to the Federal Communications Commission. Nathan Simington is known for supporting Republicans' "bias against conservatives" schtick, and helped to craft a recent executive order about social media. (Makena Kelly / The Verge)

A network of Facebook pages is spreading misinformation about the 2020 presidential election, funneling traffic through an obscure right-wing website, then amplifying it with increasingly false headlines. The artificial coordination might break Facebooks rules. (Popular Information)

Facebook is re-evaluating its approach to climate misinformation. The company is working on a climate information center, which will display information from scientific sources, although nothing has been officially announced. It will look beautiful sandwiched in between the COVID-19 information center and the voter information center. (Sarah Frier / Bloomberg)

Facebook reviews user data requests through its law enforcement portal manually, without screening the email address of people who request access. The company prefers to let anyone submit a request and then check that it's real, rather than block them with an automated system. (Lorenzo Franceschi-Bicchierai / Vice)

QAnon is attracting female supporters because the community isn't as insular as other far-right groups, this piece argues. That might be a bigger factor in its ability to convert women than the "save the children" content. (Annie Kelly / The New York Times)

China's embassy in the UK is demanding Twitter open an investigation after its ambassador's official account liked a pornographic clip on the platform earlier this week. The embassy said the tweets were liked by a possible hacker who had gained access to the ambassador's account. That's what they all say! (Makena Kelly / The Verge)

GitHub has become a repository for censored documents during the coronavirus crisis. Internet users in China are repurposing the open source software site to save news articles, medical journals, and personal accounts censored by the Chinese government. (Yi-Ling Liu / Wired)

Brazil is trying to address misinformation issues with a new bill that would violate the privacy and freedom of expression of its citizens. If it passes, it could be one of the most restrictive internet laws in the world. (Raphael Tsavkko Garcia / MIT Technology Review)

Former NSA chief Keith Alexander has joined Amazon's board of directors. Alexander served as the public face of US data collection during the Edward Snowden leaks. Here's Russell Brandom at The Verge:

Alexander is a controversial figure for many in the tech community because of his involvement in the widespread surveillance systems revealed by the Snowden leaks. Those systems included PRISM, a broad data collection program that compromised systems at Google, Microsoft, Yahoo, and Facebook (but not Amazon).

Alexander was broadly critical of reporting on the Snowden leaks, even suggesting that reporters should be legally restrained from covering the documents. "I think it's wrong that newspaper reporters have all these documents, the 50,000-whatever they have, and are selling them and giving them out as if these, you know, it just doesn't make sense," Alexander said in an interview in 2013. "We ought to come up with a way of stopping it. I don't know how to do that. That's more of the courts and the policymakers but, from my perspective, it's wrong to allow this to go on."

Facebook launched a new product called Campus, exclusively for college students. It's a new section of the main app where students can interact only with their peers, and it requires a .edu address to access. I say open it up to everyone. Worked last time! (Ashley Carman / The Verge)

Ninja returned to Twitch with a new exclusive, multiyear deal. Last August, he left Twitch for an exclusive deal with Mixer, which shut down at the end of June. (Bijan Stephen / The Verge)

The Social Dilemma, the new Netflix documentary about the ills of big tech platforms, seems unclear on what exactly makes social media so toxic. It also oversimplifies the impact of social media on society as a whole. (Arielle Pardes / Wired)

You can make a deepfake without any coding experience in just a few hours. One of our reporters just did! (James Vincent / The Verge)

Stuff to occupy you online during the quarantine.

Choose your own election adventure. Explore some worst-case scenarios with this, uh, fun new game from Bloomberg.

Subscribe to The Verge's new weekly newsletter about the pandemic. Mary Beth Griggs' Antivirus brings you news from the vaccine and treatment fronts, and stories that remind us that there's more to the case counts than just numbers.

Subscribe to Kara Swisher's new podcast for the New York Times. The first episode of her new interview show drops later this month.

Watch The Social Dilemma. The new social-networks-are-bad documentary is now on Netflix. People are talking about it!

Send us tips, comments, questions, and an overview of how your algorithms work: casey@theverge.com and zoe@theverge.com.


Top 15 DevOps blogs to read and follow – TechTarget

Between the culture, processes, tools and latest trends, there is a lot to know about DevOps -- and there is no shortage of content across the internet that covers it all.

Depending on your DevOps interests and perspectives, that might be a good thing. For many, it's overwhelming to parse through all the case studies on adoption, technical recommendations and tutorials, product reviews, trends and latest news.

Don't get lost on the web. Check out these top 15 DevOps blogs from experienced developers, consultants, vendors and thought leaders across the industry.

The Agile Admin blog covers topics such as DevOps, Agile, cloud computing, infrastructure automation and open source, to name a few. It is run by sysadmins and developers Ernest Mueller, James Wickett, Karthik Gaekwad and Peco Karayanev. Beginners should start with this blog's thorough introduction to DevOps before they dive into deeper discussions and more technical subjects, such as site reliability engineering (SRE), monitoring and observability.

Apiumhub is a software development company in Barcelona. Its Apiumhub blog looks at Agile web and app development, industry trends, tools and, of course, DevOps. Readers will find expertise from the company's DevOps pros as well as tips from contributors. The blog discusses best practices for technical DevOps processes and includes other resources, such as a list of DevOps experts to follow.

Atlassian is a software company that offers products for software development, project management, collaboration and code quality. It also produces a bimonthly newsletter and blog site called Work Life, which includes a section about DevOps. The DevOps blog posts cover subjects such as DevSecOps, CI/CD integrations, compliance and toolchains. Also included are surveys with DevOps professionals. Be aware that some of the content is designed to align with the various products that Atlassian offers.

Microsoft's Azure DevOps blog does not post a lot of its own tutorials or tips, but instead publishes Azure updates and a weekly top stories roundup from across the web. While aimed primarily at Microsoft users, the blog offers useful insights for most anyone. The content changes weekly, sometimes with different themes that will expand a reader's knowledge.

The Capital One Tech blog posts go beyond DevOps to cover enterprise technology across the board, but regularly looks into the company's DevOps journey. With posts from Capital One's software engineers, this blog creatively breaks down its commitment to DevOps, from its pipeline design to creation of the open source, end-to-end monitoring tool Hygieia.

A diary of sorts for the online marketplace Etsy, Code as Craft publicly discusses the tools it uses, its software projects and experience with public cloud infrastructure. The blog posts are not laser-focused on DevOps, but are generally informative and review Etsy's experiments, some that might be outside the scope of the average developer.

The DevOpsGroup offers services to help enterprises adopt and maintain DevOps and the cloud. The company's blog focuses on the people behind DevOps, and tackles subjects such as burnout and the role of a scrum master. It also offers technical tutorials, such as how to set up Puppet -- as well as broader overviews, such as how to select CI/CD tools.

DZone, a site geared toward software developers, evolved from CEO Rick Ross' Javalobby to now cover 14 topics, or "zones," such as AI, cloud and Java, and a DevOps Zone. Tools, tutorials, news, oh my! Readers can find content about everything from Docker and Kubernetes to continuous delivery/continuous deployment and testing packaged in articles, webinars, research reports, short technical walkthroughs called "Refcardz," and even comics. Go in with a specific query or be prepared to spend time browsing.

The Everything DevOps Reddit thread is not technically a blog, but it is a valuable source for anyone interested in DevOps. With numerous threads daily, from Q&As to tips from DevOps practitioners, there is something for everyone. Readers will learn about the latest trends and practices for monitoring pipelines, mastering new DevOps skills and more.

Sean Hull, a DevOps and cloud solutions architect, runs his iHeavy blog with a more personable approach to talk about and teach DevOps. His posts present step-by-step tutorials and technical recommendations, but he also doles out advice for the less mechanical aspects of DevOps. For example, check out this post on how to handle people who say they are not paying invoices during a pandemic. The iHeavy blog also has a tab for CTO/CIO topics and advice for startups.

Gene Kim, author of The Phoenix Project and The Unicorn Project, founded IT Revolution to share DevOps practices and processes with the growing DevOps community. IT Revolution's blog posts dive into the culture of DevOps, with articles on leadership, communication and Agile practices. Blog authors include Kim and software development experts Jeffrey Fredrick and Douglas Squirrel. In addition to the blog, IT Revolution publishes books, hosts events and supports research projects.

Agile coach and software engineer Mark Shead creates animated videos that explain basic DevOps and Agile concepts. These entertaining videos cover principles such as Agile transformation, methodology and user stories in short bursts -- perfect for people looking to dip their toes into DevOps or those who want a quick refresher.

This is another blog that provides an expansive look into how a large enterprise applies DevOps practices and principles to its operations. The Netflix Tech blog features posts written by Netflix's data scientists and software and site reliability engineers, giving readers a first-hand behind-the-screen view of one of the biggest streaming services. Topics span the enterprise's operations, from machine learning to data infrastructure, but there's plenty for DevOps fans to appreciate, such as Netflix's experiences with incident management, application monitoring and continuous delivery -- which includes some in-house tools, such as Spinnaker, which is now open source.

The Scott Hanselman blog is run by Scott Hanselman, a programmer, teacher, speaker and member of the web platform team at Microsoft. It averages about four posts a month -- going back as far as 2002 -- that vary from product reviews to Docker tutorials. This DevOps blog includes code examples and screenshots of what readers can expect to see when they try things for themselves.

Stackify is an application performance management vendor, but its DevOps blog covers information for anyone looking to learn more about DevOps with any software. It offers readers tricks, tips and resources, digging into best practices for broad topics such as adoption, implementation and security. It also provides detailed information for beginners on popular platforms, such as AWS, Kubernetes and Azure Container Service.


Top 15 DevOps blogs to read and follow - TechTarget

DeFi Unlocked: How to Earn Crypto Investment Income on Compound – Cryptonews


DeFi (decentralized finance) has emerged as the hottest crypto trend of the year, with the total amount of US dollars locked in decentralized finance protocols surpassing USD 9bn on September 1 before correcting lower.

In our new DeFi Unlocked series, we will introduce you to the different ways you can earn investment income in the booming decentralized finance market.

In part one of our DeFi series, you will discover how to earn interest on cryptoassets by placing them into the Compound (COMP) money market protocol.

Compound is an autonomous, open-source interest rate protocol that allows cryptoasset users to borrow and lend digital assets on the Ethereum (ETH) blockchain.

Compound supports BAT, DAI, ETH, USDC, USDT, WBTC, and ZRX, and currently pays interest rates from 0.19% to 8.06%. Prevailing interest rates vary from asset to asset and depend on market conditions.

The idea behind Compound is to enable anyone across the globe with an internet connection to borrow or lend funds in the form of cryptoassets. To date, however, the protocol - like effectively all DeFi apps - has mostly been used by experienced crypto users and investors hunting for yield.

Earning interest on your crypto holdings is easy on Compound, provided you are comfortable using MetaMask or a similar Ethereum client that can interact with dapps (decentralized apps).

To start earning crypto interest on Compound, you will need to take the following steps:

Next, you click on App at the top right of the website to access the protocol dashboard.

Then, you connect to the protocol with your Ethereum wallet (MetaMask, Coinbase Wallet, or Ledger).

Once connected, you will be able to view all available assets and their borrowing and lending rates.

The stablecoins - DAI, USDC, and USDT - are favorites among DeFi investors, as they ensure that the principal of a loan retains its value while investors cash in on the interest and - in the case of Compound - also on the COMP token.

Click on the asset you want to deposit, enter the amount you wish to lend, and confirm the transaction. At the time of writing, for example, a USDC deposit would earn 1.96% APY (annual percentage yield).
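To make the APY figure concrete, here is a minimal Python sketch of the interest a deposit would accrue over a holding period. The 1.96% rate is only the illustrative figure above; actual Compound rates float continuously, so this is an approximation, not the protocol's per-block accrual.

```python
def interest_earned(principal: float, apy: float, days: int) -> float:
    """Approximate interest on a deposit held for `days` at a given APY.

    APY already accounts for compounding over a full year, so for a
    partial year we grow the principal by (1 + apy) ** (days / 365).
    """
    return principal * ((1 + apy) ** (days / 365) - 1)

# e.g. 10,000 USDC at 1.96% APY held for 90 days
print(round(interest_earned(10_000, 0.0196, 90), 2))
```

At that rate, a 10,000 USDC deposit held for 90 days earns roughly 48 USDC before counting any COMP rewards.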

In addition to the interest earned on USDC, you also receive COMP tokens as an incentive to provide liquidity to the protocol.

Compound users receive COMP tokens for interacting with the protocol. That means regardless of whether you are borrowing or lending, you will receive COMP tokens proportional to the amount you borrow or lend and dependent on the day's COMP distributions.

As a result, some creative investors have started to lend one stablecoin and borrow another - at a small loss in terms of the interest rate differential - to earn enough COMP to generate a profit. This form of liquidity mining became popular following Compound's introduction of the COMP governance token in May 2020.
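The economics of that strategy can be sketched as follows. All of the rates here are made-up placeholders for illustration, not live Compound figures:

```python
def net_yield(supply_apy: float, borrow_apr: float,
              comp_apy_supply: float, comp_apy_borrow: float) -> float:
    """Net annualized yield of supplying one stablecoin and borrowing another.

    The interest spread (supply earnings minus borrow cost) is usually a
    small loss; the strategy only pays off if the COMP rewards earned on
    both legs more than cover that loss.
    """
    interest_spread = supply_apy - borrow_apr          # typically negative
    comp_rewards = comp_apy_supply + comp_apy_borrow   # COMP paid on both legs
    return interest_spread + comp_rewards

# Illustrative numbers only: lend at 2%, borrow at 3%,
# and earn 1.5% worth of COMP on each leg.
print(net_yield(0.02, 0.03, 0.015, 0.015))
```

With those placeholder numbers the 1% interest loss is outweighed by 3% in COMP rewards, for a net 2% yield - which is exactly the trade the "creative investors" above are making.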

While the returns from liquidity mining COMP have compressed, the interest you can earn on US dollar-backed stablecoins such as USDT and USDC is still higher than what you would receive on a savings account at your local bank.

As a result, Compound provides an alternative to existing savings accounts and money market fund solutions in the legacy financial system.

The Compound protocol has been running since 2018. However, that does not mean that code vulnerabilities couldn't be found and exploited - a risk shared by all DeFi protocols, not specific to Compound. To increase its security, Compound offers bug bounties to anyone who can find a vulnerability in its open-source code.

There is also the risk of a bank run on Compound. A bank run - in traditional finance - refers to depositors withdrawing their cash when they fear it may no longer be safe in their bank. In the context of Compound, bank run risk exists because of the protocol's utilization rate, which refers to how much of depositors' funds is lent out to borrowers. If all depositors tried to withdraw their funds at once, the protocol would not be able to honor every withdrawal.

To address this concern, Compound adjusts its interest rates according to borrowing activity to entice more depositors to place funds into the protocol.
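The mechanism can be illustrated with a toy linear rate model. The base rate and slope below are invented parameters for illustration only, not Compound's actual governance-set interest rate model:

```python
def utilization(total_borrowed: float, total_supplied: float) -> float:
    """Share of depositors' funds currently lent out to borrowers."""
    if total_supplied == 0:
        return 0.0
    return total_borrowed / total_supplied

def borrow_rate(util: float, base: float = 0.02, slope: float = 0.20) -> float:
    """Toy linear rate model: the more of the pool is lent out,
    the more expensive borrowing becomes."""
    return base + slope * util

# As utilization climbs from 50% to 90%, the borrow rate rises,
# discouraging new borrowing and enticing new deposits.
for u in (0.5, 0.9):
    print(f"utilization {u:.0%} -> borrow rate {borrow_rate(u):.1%}")
```

Higher rates at high utilization pull the pool back toward a buffer of withdrawable funds, which is how the protocol leans against the bank-run scenario described above.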

Finally, there is also some opportunity cost to holding assets in Compound, since you can't trade them for a potentially better-performing asset on the secondary market. Having said that, you can withdraw your deposits from Compound and convert them into other cryptoassets.

Compound is one of the most established borrowing and lending protocols in today's DeFi landscape. However, interest rates for most available lendable assets on Compound are not going to make you a crypto millionaire anytime soon. While the COMP governance token provides an additional financial incentive to deposit tokens into the protocol, Compound is arguably more of a protocol for investors than for opportunistic traders.


the Spanish app to control the pandemic releases its code – Explica

Radar COVID is the Spanish app for controlling the pandemic caused by the coronavirus, the one we have been immersed in since last March. To be more exact, it is the app with which the Government will try to improve the tracing of infections and predict possible outbreaks. It is nothing new, as similar strategies have long been deployed in half the world, but it is the one Spain has bet on.

The app, available for Android and iOS, has been in test mode for a few weeks, and it is expected to launch throughout the country on September 15. However, even though official channels hope the application will be used en masse for the help it can provide, and international trials support its use against the spread of COVID-19, Radar COVID is still a tracking system, managed by the government itself.

In other words, it is the kind of tool that does not inspire confidence in terms of privacy, which is why SEDIA (the Secretary of State for Digitalization and Artificial Intelligence), the body in charge of developing Radar COVID, promised to release the source code of all the software in order to provide the transparency that only open source offers. And they have delivered: Radar COVID can now be considered free software.

The description of Radar COVID is concise and insistent on the subject of privacy:

Radar COVID notifies you anonymously of possible contact in the last 14 days with a person who has been infected, using Bluetooth Low Energy technology.

Radar COVID also allows you to:

Anonymously communicate your positive diagnosis
Anonymously communicate the exposure to people with whom you have been in contact

Radar COVID guarantees security and privacy and is 100% anonymous. For this reason, we do not request your name, your telephone number, or your email.

Radar COVID also states that it does not collect any geolocation, GPS, or temporal data. The stored information is therefore used only for warning possible risky contacts, not to identify anyone. In the end, releasing the source code is a way of guaranteeing that everything claimed is as stated, and there is no reason to doubt that this is the case.
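To see why such a system needs no names, phone numbers, or locations, here is a heavily simplified Python sketch of anonymous Bluetooth contact matching. This is a toy illustration only; the real Apple/Google exposure notification protocol that Radar COVID builds on uses its own key schedule and cryptography.

```python
import hashlib
import secrets

def daily_key() -> bytes:
    """Each phone generates its own random key. It never leaves the device
    unless the user voluntarily reports a positive diagnosis."""
    return secrets.token_bytes(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier to broadcast over Bluetooth LE.
    Observers can't link intervals together without knowing the key."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rolling IDs; phone B merely records what it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(10)}

# If A later reports positive, A's key is published and B re-derives the
# IDs locally to check for a match -- no names, numbers, or GPS involved.
matches = any(rolling_id(key_a, i) in heard_by_b for i in range(10))
print(matches)  # True: B was near A during those intervals
```

The matching happens entirely on B's device against locally stored identifiers, which is what lets the app warn risky contacts without ever identifying anyone.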


Radar COVID, in fact, is not a development from scratch, but relies on the Apple and Google contact tracing APIs for coronavirus and other open technologies. All components of the project are available on GitHub, including the server software and the mobile applications for Android and iOS, which can already be downloaded from Google Play and the App Store, respectively, although, as indicated, it does not yet operate throughout the national territory.

As for the license, the one chosen is the Mozilla Public License 2.0, so we can call it both free software and open source software.

Whatever one makes of the privacy controversy that this release of the Radar COVID code is meant to mitigate, the truth is that the main obstacles the application faces in reaching as many homes as possible are discovery and optionality: people have to learn about the application and install it. And although it is surely being promoted through every possible channel, users still have to make the effort to find and install it, which is voluntary.

Not only that: the application relies on Bluetooth and, as is evident, stays active in the background, with the increase in battery consumption that entails. Then there is the handling of the application itself, since it does not only consist of having it installed and activated: it must actually be used (its operation is very simple) and, for example, in the case of a positive COVID-19 diagnosis, the user must raise the alert voluntarily. It remains to be seen whether adoption becomes as massive as hoped.

Despite this, and despite the fact that its nationwide deployment has not yet been completed, Radar COVID already has several million installations and a positive average rating in the two large mobile application stores.


Key Players and Initiatives in the Quantum Technology Market 2020 – PRNewswire

DUBLIN, Sept. 10, 2020 /PRNewswire/ -- The "Quantum Computing - A New Paradigm Nears the Horizon" report has been added to ResearchAndMarkets.com's offering.

This study looks into the present perspective on quantum computing and its current state of development, as well as its future outlook.

The study proposes an accessible description of the new computing paradigm brought by quantum technology and presents the potential applications and benefits that the new approach would bring. It also focuses on the potential consequences for cybersecurity and telecommunications.

Alongside the perspective it offers on the current quantum computing ecosystem, the study outlines a vision of the current state of development of the technology. This includes an analysis of the positioning of key players (IBM, Microsoft, D-Wave) and of the investment programmes of some 12 key nations including the USA, China, parts of the EU, Russia, Japan and South Korea.

Finally, the study analyses the likely development of the technology and its foreseeable impacts.

Key Topics Covered:

1. Executive Summary

2. Quantum Technology Definitions
2.1. Quantum computing glossary
2.2. Quantum properties and principles for a quantum computer

3. Quantum Computing Technologies
3.1. Scope of the study
3.2. The two main approaches to quantum computing
3.3. Analog-quantum computing (AQC)
3.4. Gate-based quantum computing
3.5. Qubit: state of the art

4. State of Play of Quantum Technology
4.1. Quantum computing foreseen benefits
4.2. Quantum foreseen limitations
4.3. The quantum computing race
4.4. How to compare performance in quantum computing?
4.5. Milestones and limitations for quantum computing
4.6. Quantum supremacy: another milestone reached in 2019?

5. Quantum Technology Applications
5.1. Quantum computing potential applications
5.2. Most important QC potential applications
5.3. Quantum computing: potential applications
5.4. Focus: Quantum computing impact on cryptography
5.5. The solution to build a post-quantum secure system

6. Key Players and Initiatives
6.1. Private Player Profiles
6.2. Public Initiatives

7. Analysis and Perspectives
7.1. Technology Perspective: Common misconceptions on quantum computing
7.2. Ecosystems analysis
7.3. Cybersecurity perspective
7.4. Perspectives of development
7.5. The vision of future development

For more information about this report visit https://www.researchandmarkets.com/r/y1svm5

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com


2020 Research Report: Innovations in Wearables, Light-field-based VR Glasses, Antenna, Quantum Computing, Micro-LED, and MPUs – ResearchAndMarkets.com…

DUBLIN--(BUSINESS WIRE)--The "Innovations in Wearables, Light-field-based VR Glasses, Antenna, Quantum Computing, Micro-LED, and MPUs" report has been added to ResearchAndMarkets.com's offering.

Some of the innovations include virtual reality glasses, antenna with geostationary satellite architecture, wearables for Covid-19 detection, quantum computing, neural-network based chip, microLED for display, 5G modem, and advanced processors.

The Microelectronics Technology Opportunity Engine captures global electronics-related innovations and developments on a weekly basis. Developments centre on electronics characterized by low power and cost, smaller size, better viewing, display and interface facilities, wireless connectivity, higher memory capacity, flexibility and wearables.

Research focus themes include small footprint lightweight devices (CNTs, graphene), smart monitoring and control (touch and haptics), energy efficiency (LEDs, OLEDs, power and thermal management, energy harvesting), and high speed and improved conductivity devices (SiC, GaN, GaAs).

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/qvwjnz


NSF and DOE to Advance Industries of the Future | ARC Advisory – ARC Advisory Group

The US National Science Foundation (NSF), Department of Energy (DOE), and the White House announced more than $1 billion in awards for the establishment of 12 new artificial intelligence (AI) and quantum information science (QIS) research and development (R&D) institutes nationwide.

Together, NSF's AI Research Institutes and DOE's QIS Research Centers will serve as national R&D hubs for these critical industries of the future, spurring innovation, supporting regional economic growth, and training the next-generation workforce.

The NSF and additional Federal partners are awarding $140 million over five years to a total of seven NSF-led AI Research Institutes. These collaborative research and education institutes will focus on a range of AI R&D areas, such as machine learning, synthetic manufacturing, precision agriculture, and forecasting prediction. Research will take place at universities around the country, including the University of Oklahoma at Norman, the University of Texas at Austin, the University of Colorado at Boulder, the University of Illinois at Urbana-Champaign, the University of California at Davis, and the Massachusetts Institute of Technology.

NSF anticipates making additional AI Research Institute awards in the coming years, with more than $300 million in total awards, including contributions from partner agencies, expected by next summer. Overall, NSF invests more than $500 million in artificial intelligence activities annually and is the largest Federal driver of nondefense AI R&D.

To establish the QIS Research Centers, DOE is announcing up to $625 million over five years to five centers that will be led by DOE National Laboratory teams at Argonne, Brookhaven, Fermi, Oak Ridge, and Lawrence Berkeley National Laboratories. Each QIS Center will incorporate a collaborative research team spanning multiple institutions as well as scientific and engineering disciplines. The private sector and academia will be providing another $300 million in contributions for the centers. The centers will focus on a range of key QIS research topics, including quantum networking, sensing, computing, and materials manufacturing.

The establishment of these new national AI and QIS institutes will not only accelerate discovery and innovation but will also promote job creation and workforce development. NSF's AI Research Institutes and DOE's QIS Research Centers will include a strong emphasis on training, education, and outreach to help Americans of all backgrounds, ages, and skill levels participate in the 21st-century economy.
