Aiven: 91% of developers say open source is in their future – VentureBeat


Enterprise adoption of open source software has grown rapidly, and 91% of developers said in a recent survey that they expect open source to make up a part of their organization's software plans in the years to come, according to Aiven, a software company that combines open source technologies with cloud infrastructure.


The survey revealed growing positivity towards open source among cloud and database developers in the United Kingdom, with respondents listing twice as many benefits as they did disadvantages. The most popular advantage listed by developers was the transparency of open source code, which makes it easier to find and fix bugs quickly; 69% of respondents identified this as a key benefit. Other benefits included reduced vendor lock-in (53%) and the ability to build your own features (53%). The most cited obstacle to using open source was maintenance, which 52% of respondents viewed as a challenge. Other difficulties included configuring or installing the software (48%), a lack of support (45%), and hidden costs (27%).

Given these challenges, businesses are looking for solutions that make open source easier to adopt. 35% of respondents indicated they would opt for a managed open source solution in the future, allowing them to avoid the burden of installation and maintenance and spend more time focusing on business-critical tasks. As businesses look to grow post-pandemic, demand for managed solutions will likely continue to rise.

The research was conducted by Resonance on behalf of Aiven in January and February 2021. The study surveyed 200 UK developers who work in large enterprises and specialize in cloud and database technology.


The web's source code is being auctioned as an NFT by inventor Tim Berners-Lee – CNBC

Sir Tim Berners-Lee gives a speech at the Campus Party Italia 2019 on July 25, 2019 in Milan, Italy.

Rosdiana Ciaravolo | Getty Images

LONDON: British computer scientist and inventor Tim Berners-Lee is auctioning the original code for the World Wide Web as a nonfungible token.

The auction for the World Wide Web NFT titled "This Changed Everything" will be run by Sotheby's in London from June 23-30, with bidding starting at $1,000. The proceeds of the auction will benefit initiatives that Berners-Lee and his wife support, Sotheby's said.

NFTs are a type of digital asset designed to show that someone has ownership of a unique virtual item, such as online pictures and videos, or even sports trading cards.

The NFT includes original time-stamped files containing the source code written by Berners-Lee, an animated visualization of the code, a letter written by Berners-Lee on the code and its creation, and a digital "poster" of the full code. They will all be digitally signed by Berners-Lee.

It will be the first time Berners-Lee has been able to capitalize financially on what is widely viewed as one of the greatest inventions of our time.

"Three decades ago, I created something which, with the subsequent help of a huge number of collaborators across the world, has been a powerful tool for humanity," said Berners-Lee in a statement. "For me, the best bit about the web has been the spirit of collaboration. While I do not make predictions about the future, I sincerely hope its use, knowledge and potential will remain open and available to us all to continue to innovate, create and initiate the next technological transformation, that we cannot yet imagine."

He added: "NFTs, be they artworks or a digital artefact like this, are the latest playful creations in this realm, and the most appropriate means of ownership that exists. They are the ideal way to package the origins behind the web."

Cassandra Hatton, global head of science and popular culture at Sotheby's, said in a statement that the "NFT format" will allow collectors to "own the ultimate digitally-born artefact."

In March, South Carolina-based graphic designer Beeple, whose real name is Mike Winkelmann, sold an NFT for a record $69 million at a Christie's auction. Jack Dorsey, CEO of Twitter, sold his first tweet as an NFT for $2.9 million later that month.

On Thursday, a rare digital avatar known as a CryptoPunk sold at Sotheby's for over $11.7 million. Total NFT sales reached an eye-popping $2 billion in the first quarter of this year, according to data from Nonfungible, a website which tracks the market.

But there are signs that the bubble could be bursting, with sales of digital collectibles falling dramatically in recent weeks. Overall sales plunged from a seven-day peak of $176 million on May 9, to just $8.7 million on June 15, according to numbers from Nonfungible. That means volumes are now roughly back where they were at the start of 2021.

Additional reporting by CNBC's Ryan Browne.


The Internet's Original Source Code Is Coming to Auction as an NFT This Month – Yahoo Lifestyle

British computer scientist Tim Berners-Lee is auctioning the original source code for his most famous creation, the World Wide Web, as an NFT. Set to appear at a Sotheby's auction called "This Changed Everything," running from June 23 through the end of the month, the work will have a starting bid of $1,000. Sotheby's has not designated an estimate for the work, though its final sale price is likely to far exceed its starting bid.

Proceeds from the sale will benefit causes supported by the MIT professor and his wife, Rosemary Leith. Sotheby's did not specify the names of organizations to which the sale proceeds will be given.


The time-stamped files being sold contain 9,550 lines of original programming code Berners-Lee wrote. That code has since served as the foundational structure of the internet: the Hypertext Transfer Protocol (HTTP), the Hypertext Markup Language (HTML), and Uniform Resource Identifiers (URIs). Alongside the files, a Python-backed digital poster, which serves as a visualization of the source code and bears the inventor's digital signature, will be auctioned. An additional letter penned by Berners-Lee detailing his 1989 creation, which he made while working at CERN, a physics research lab in Switzerland, will also go to the winning bidder.

"Sir Tim's invention created a new world, democratizing the sharing of information," said the auction house's global head of science and popular culture, Cassandra Hatton.

Though Berners-Lee's code has been open source since 1993, two years after the first webpage supported by his code went live, the auction represents the chance to own "the ultimate digitally-born artefact," Hatton said. As the first of its kind to be offered at auction, this version of the source code is a unique NFT that is valuable as a collector's item.


"Three decades ago, I created something which, with the subsequent help of a huge number of collaborators across the world, has been a powerful tool for humanity," Berners-Lee said in a press statement. The scientist, who also serves as chief technology officer at the Boston data startup Inrupt, said that an NFT is the ideal format and "the most appropriate means of ownership that exists" for his game-changing invention.

NFTs (non-fungible tokens) are minted as unique editions using blockchain technology. In recent months, high-profile auctions of them have set records and lent digital art a new stature within the art market. Last month, Sotheby's sold 28 digital artworks in collaboration with crypto artist Pak for a collective $17.1 million. That sale followed Christie's $69 million auction of cult crypto artist Beeple's "Everydays" project in March.

Some sales of NFTs have drawn controversy over questions of authenticity. Digital art experts decried Christie's recent sale of five NFT versions of Andy Warhol's 1980s-era Amiga computer drawings, claiming they were essentially exhibition copies.


Microsoft: Try to break our first preview of 64-bit Visual Studio – go on, we dare you – The Register

Microsoft has unveiled a slew of developer tools, including a preview of the 64-bit Visual Studio 2022, ahead of that developer event set for 24 June.

Preview 1 of Visual Studio 2022 comes direct from the department of never-say-never following version after version of the toolset remaining staunchly 32-bit, even as the hardware world changed around it.

The move to 64-bit was announced earlier this year and is an ambitious one considering the ecosystem and sheer size of the Visual Studio codebase.

Far be it from us to wonder how much cruft might be lurking within a product that has its roots in the previous century.

"The 64-bit conversion effort affects every part of Visual Studio, so the scope is much bigger than our usual previews," explained Microsoft senior program manager Justin Johnson in a blog on the matter, meaning that the first release is not so much about whizzbang new features (although there are improvements to IntelliCode even if some bits of VS2019 are missing at present) but more about seeing if the old thing remains upright as programmers prod at it.

Microsoft is particularly keen that developers throw huge and complex solutions at the preview that would have caused wobbles in previous versions. The company boasted that "customers were able to run the IDE for days, even with solutions containing 700 (or more!) projects."

Perhaps this hack is a bit old-fashioned, but there is surely an argument to be made that rather than allowing developer tools to expand like a helium balloon headed for space, getting to a solution that isn't quite so bloated might be made easier by a rethink rather than by adding yet more memory.

While Visual Studio can now chomp through more system resources, its ecosystem of extensions has not fared so well: Microsoft warned that vendors will need to update their extensions before they turn up in Visual Studio 2022. This may not bode well for that one weird component that has long been abandoned but is still depended upon by a developer.

The release was joined by updates to .NET 6 and ASP.NET Core in the form of Preview 5 as well as an updated preview of the .NET Multi-platform App UI (MAUI). Microsoft also announced a developer event at 3pm ET on 24 June, hot on the heels of its "What's next for Windows" show.

After all, there is little point in having a shiny new operating system unless one can encourage developers to target code at it.


10 old software bugs that took way too long to squash – CSO Online

In 2021, a vulnerability was revealed in a system that lay at the foundation of modern computing. An attacker could force the system to execute arbitrary code. Shockingly, the vulnerable code was almost 54 years old, and there was no patch available, nor any expectation that one would be forthcoming.

Fortunately, that's because the system in question was Marvin Minsky's 1967 implementation of a Universal Turing Machine, which, despite its momentous theoretical importance for the field of computer science, had never actually been built into a real-world computer. But in the decade or so after Minsky's design, the earliest versions of Unix and DOS came into use, and their descendants are still with us today in the 21st century. Some of those systems have had bugs lurking beneath the surface for years or even decades.

Here are ten noteworthy and venerable bugs that were discovered in recent years.

Age: 7 years
Date introduced: 2010
Date fixed: 2017

Way back in 2011, security researcher Ralf-Philipp Weinmann discovered a recently introduced flaw in the baseband processor used in mobile phones that could conceivably be used in an attack: hackers could set up a fake cell tower, trick the phone into connecting to it, and then hijack its network connection. The flaw was corrected relatively quickly by cell phone manufacturers and then just as quickly forgotten about.

There was one problem: cell phones weren't the only devices that used those chips. "Essentially, the same cellular baseband chipset was in the telematics unit in the Nissan Leaf and a variety of other vehicles," says Jesse Michael, Principal Cyber Security Researcher at security firm Eclypsium. Several researchers (who would go on to join Eclypsium) discovered the vulnerability by experimenting with a car they got from a junkyard.


This Week In Security: Updates, Leaks, Hacking Old Hardware, And Making New – Hackaday

First off, Apple has issued an update for some very old devices. Well, vintage 2013, but that's a long time in cell-phone years. Fixed are a trio of vulnerabilities, two of which are reported to be exploited in the wild. CVE-2021-30761 and CVE-2021-30762 are both flaws in WebKit, allowing for arbitrary code execution upon visiting a malicious website.

The third bug fixed is a very interesting one, CVE-2021-30737, memory corruption in the ASN.1 decoder. ASN.1 is a serialization format used in a bunch of different crypto and telecom protocols, like the PKCS key exchange protocols. This bug was reported by [xerub], who showed off an attack against a locked iPhone immediately after boot. Need to break into an old iPhone? Looks like there's an exploit for that now.

Samsung ships a collection of pre-installed apps on its phones. Or, if we were feeling less charitable, we'd call them bloatware. Either way, researchers at Oversecured took a look and found some problems. First up is Samsung's Knox Core app, part of their enterprise security system. This core framework file can install other apps, triggered by a world-writable URI. So, first problem: anything that can load a file and call a URI can trigger an arbitrary app install. There is a second problem: part of that install process copies the app-to-be-installed to a world-readable location. This means that with a bit of work, any other app can abuse this to read any file this system app can read, and that's all of them.

Up next is the managed provisioning app. This too allows installing apps, but it has a built-in verification system, as it was based on Managed Provisioning from the Android Open Source Project (AOSP). Samsung added features, one of which is a flag to disable the verification. Oh, and this one installs apps as system. "Please install my rootkit, Samsung." "OK!"

And the last problem we'll look at is the TelephonyUI app. It exposes a receiver, PhotoringReceiver, which takes two arguments: the URL to download, and the file location to write it to. This function does check that the remote server reports the file to be an image or video, but this is trivial for an attacker to spoof. The result is that an attacker can send an intent, download an arbitrary file, and write it anywhere on the phone as UID 1001, one of the system users.

Volkswagen has just confirmed that someone got access to a database of their potential and actual customers. Their letter states that a vendor left electronic data unsecured. Based on previous breaches, this was probably something like an Elasticsearch instance exposed to the Internet. So there's good and bad news here. The good: if you only made it into the database as a prospective customer, only your name, physical and email addresses, and a phone number are exposed. The bad: if you were an actual customer, the exposed data could include your driver's license number, date of birth, and SSN. Watch out for targeted phishing using the information, though the more likely scenario is something like unemployment fraud committed using the information.

Though when it comes to source code, it's not really theft, just unauthorized copying. Regardless, an unnamed group claims to be in possession of 780 GB of internal data and source code from EA, and is offering access for a mere $28 million. It's unclear how the breach happened, but known bugs have been suggested, like the high-profile Microsoft Exchange bug from a few months back. In any case, the dump includes the full source to FIFA 21 and Frostbite, EA's engine. The really bad part is the collection of API keys and other secrets that were inevitably a part of the grabbed source.

Researchers from NordLocker discovered a really big database of stolen data, which appears to have been collected by a network of trojans. How did that malware wind up on real machines? Mostly through cracked software, it seems: an illegal Photoshop download, a Windows crack, and a handful of games. So think long and hard before you're tempted to fire up your favorite torrent client; you might just be inviting malware in.

The malware did quite a bit while it was active, too. It took screenshots and webcam captures, uploaded files from the user's folders, captured and sent along passwords and cookies, and more. The whole trove of data seems to be 1.2 terabytes' worth. Yikes.

If you haven't noticed, a growing collection of people, companies, and now nations are taking issue with Apple's walled-garden approach to smartphone software. The ongoing litigation from Epic over the Fortnite game and the app store has perhaps the highest profile. But the European Union, thanks to its proposed Digital Markets Act (DMA), might soon enter the fray. This legislation aims to limit the power a digital gatekeeper can exercise over a market. Tim Cook recently gave his thoughts on the idea, and they were not entirely positive. The biggest issue? The DMA would force Apple to allow app sideloading. The official response is that sideloading would destroy the security of the iPhone.

Now let's chat about that for a moment. Is it a bit iffy to install apps on your device that haven't been vetted through the official app store? Sure. If you aren't careful, you're likely to install apps with malware, without a Google or Apple working to detect and automatically remove the malicious app. On the other hand, it seems just a bit over-the-top to say that this would destroy the iPhone's security. There have been plenty of vulnerabilities found in the last couple of years that can compromise the device from a simple page visit, not to mention malicious apps that have made it into the store.

Allowing you to install any application you wanted would break Apple's stranglehold on the iOS app store. What this would mean is that Apple would miss out on a whole lot of revenue from apps like Fortnite, whose makers would be willing to build their own app store. So what do you think? Is this really the big security problem that Apple says it is, or is the company just being protective of its walled garden and the benefits thereof?

Sometimes exploits aren't notable for how serious they are, but for how educational the write-up is. Firmly in that category is this story of getting a remote shell on an ancient Linksys WRT54GL. Quick note: the L there stands for Linux, and this particular router exists because the WRT54G was the grand-daddy of custom router firmware. A request for GPL code for the original router led a few hackers to put together their own firmware images, and DD-WRT and OpenWRT were both born of the effort. Router revisions happen rapidly, and soon the WRT54G had switched to VxWorks and cut the flash in half, making support just about impossible for the custom firmwares. Enough customers complained that Linksys re-released the older version as the WRT54GL.

History aside, [Elon Gliksberg] had one of the old routers and decided to try to break in. Scanning the ports with nmap turned up nothing interesting. The web interface? There is a diagnostic page that can send pings, so it probably runs a Linux command on the backend, making it worth trying something like ping 192.168.1.1; echo hello;. That endpoint was sufficiently sanitized that it wasn't a viable attack. A bit of decompiling did lead to one call of system() that could be abused, though. That call was in the post-upgrade logic, to restore the user-interface language. Set the language to some shellcode, and you get execution. From there, it was just a matter of getting the reverse shell compiled for that specific device and using the built-in wget to fetch it.
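The class of bug being probed here, shell metacharacters smuggled through a web form into a command string, is easy to sketch in miniature. This is a hypothetical Python illustration of the vulnerable pattern and the safer argument-vector alternative, not the router's actual code:

```python
def ping_unsafe(host: str) -> str:
    # Vulnerable pattern: user input is pasted straight into a shell command
    # string, so "192.168.1.1; echo hello" smuggles in a second command.
    return f"ping -c 1 {host}"

def ping_safe(host: str) -> list:
    # Safer pattern: build an argument vector (e.g. for subprocess.run without
    # shell=True), so metacharacters stay literal text in a single argument.
    return ["ping", "-c", "1", host]

payload = "192.168.1.1; echo hello"
print(ping_unsafe(payload))  # a shell would happily run the injected 'echo hello'
print(ping_safe(payload))    # the payload is treated as one (invalid) hostname
```

The router's ping page defended against exactly this, which is why the attack only worked once a decompiler turned up an unsanitized system() call elsewhere.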

So here's the irony: this vulnerability is launched as part of uploading firmware, and this device is just about the most widely supported target for custom firmware in the world. You could install your own Linux image on it with the same access this hack requires. Irony aside, the value here is walking through the process, which is well written out and full of tips for finding your own exploit.

A couple of weeks ago, we covered a nifty new project, the WiFi Wart. Well, [Ryan] is still at it, and has an update on his progress. There's good news, like finishing the design of the first prototype boards, sourcing the components, and actually assembling a trio of the test boards. Then there was some bad news, like discovering the hard way that the low-dropout regulator (LDO) he ordered was a 3.3 V component instead of the needed 2.5 V. That's one board with dead components, and time spent waiting on replacement parts. Such is the way of things when building new hardware. We'll keep you up to date with this promising project as updates become available.


A convenient way to verify vaccinations and COVID-19 test results – UCHealth Today

Through My Health Connection, UCHealth's patient portal, patients can easily verify vaccinations or find COVID-19 test results on their phone, or print them from a desktop computer. Photo: Getty Images.

You've gotten your COVID-19 vaccinations; now what do you do with your paper vaccination certificate? Fold it up into your wallet? Stash it away with your passport and other important documents?


Through My Health Connection, UCHealth's patient portal, you can easily find your electronic COVID-19 vaccination card to view on your phone or print from your desktop computer.

"No one knows exactly what is going to be required, but we want to make sure we provide enough options for our patients to ensure they have what they need for wherever they are going," said Nicole Caputo, UCHealth's senior director of experience and innovation.

Currently, UCHealth patients can access medical information and test results, schedule appointments, and message their physicians through My Health Connection. When COVID-19 vaccines became available, My Health Connection added a "Your COVID-19 Information" button that allows patients to view their COVID-19 vaccination record and most recent test result. A vaccination record card and detailed testing information, which third parties may require for things like entry into a concert or sporting event, or for travel, have recently been added.

"We want to provide as much information as possible so patients have as much information as they may need, in one spot," Caputo said.

UCHealth's COVID-19 information page will soon be enhanced with the SMART Health Cards Framework, which provides paper or digital versions of your clinical information, including vaccination records.

UCHealth joined forces with organizations such as Microsoft and Mayo Clinic as a member of the Vaccination Credential Initiative (VCI) to harmonize standards and support the development of SMART Health Cards.

VCI, created in January 2021, is a voluntary coalition of public and private organizations. Its goal is to create a trustworthy and verifiable copy of digital or paper vaccination record forms that can serve as credentials for medical purposes and to show vaccination status if required to return to work, school and travel.

"We want patients not to have to download or use other applications to verify their test results or vaccination record," said Chad Chenoweth, director of information technology for UCHealth. "Our SMART Health Cards implementation is meant to be a convenience for the user so they can easily scan with VCI-capable third parties versus needing to carry paper (verification). Our UCHealth mobile application and My Health Connection are already secure and contain a lot of other patient information, making it easy for patients to access their information on the go. This is an added convenience."

The SMART Health Cards Framework VCI developed is open source, Caputo explained. This means it is free for anyone to build into their current programs.

"VCI is putting it out there for the greater good of public health," Caputo said. "It makes sense for all these organizations to come together to build this tool, a verifiable health information tool, to help in a public health emergency."

UCHealth has prototyped VCI's specifications within My Health Connection, providing feedback to the VCI team to help with gaps in documentation or areas needing improvement. The application is now being tested by VCI to ensure its compliance with all protected health care regulations, Chenoweth said.

From the UCHealth consumer view, the SMART Health Cards would take the form of a QR code within My Health Connection's "My COVID-19 Information" section. A QR, or quick response, code ("quick" referring to the fact that it's quickly readable by a cell phone) is a type of 2D barcode that, when scanned, can convey a wide range of information. QR codes were invented in 1994, but you might have seen them more recently during the pandemic, used at restaurants to pull up the menu on your phone when paper menus became a source of concern for spreading COVID-19.

Individuals and businesses can easily create QR codes that link to their online pages, like a menu or an event. But the QR code within My Health Connection is different. Because it contains secure data, whoever is scanning the information must have a VCI-capable app allowed to read that protected health information.

"If you or I were to scan that code with our phone, it would be jumbled information that doesn't make sense," Caputo said.
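That scrambling is by design. Under the published SMART Health Cards specification, the QR text is an shc:/ prefix followed by pairs of digits, where each pair is an ASCII character code offset by 45; the decoded string is a signed JWS that only a verifier holding the issuer's public key can validate. A minimal Python sketch of just the numeric decoding step, using a made-up payload rather than a real health card:

```python
def decode_shc_numeric(qr_text: str) -> str:
    # SMART Health Card QR payloads look like "shc:/5676295952...", where each
    # pair of digits, plus 45, is an ASCII code. The decoded result is a
    # compact JWS (header.payload.signature); signature verification against
    # the issuer's published key is a separate step not shown here.
    if not qr_text.startswith("shc:/"):
        raise ValueError("not a SMART Health Card QR payload")
    digits = qr_text[len("shc:/"):]
    return "".join(chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2))

# Round-trip demo with an invented string, not real health data:
encoded = "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in "eyJ.abc.xyz")
print(decode_shc_numeric(encoded))  # prints: eyJ.abc.xyz
```

So anyone can recover the raw bytes, but without the issuer's signature check the data proves nothing, which is why only VCI-capable verifier apps are meaningful readers.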

What about people without smartphones? Although the most convenient way to share your COVID-19 health information will be to bring the QR code up on your smartphone via the UCHealth app, people without a smartphone will still be able to print their QR code via My Health Connection on a desktop computer.

There are more than 300 members of VCI collaborating to support not only the development but also the testing and real-world use of the implementation guides needed to issue, share and validate vaccination records, according to the VCI website.

UCHealth My Health Connection users can expect to see the QR code in their COVID-19 vaccination cards, possibly by fall 2021.

"We want to make sure patients have access to test results and their vaccinations," Caputo said. "We want to give patients extreme flexibility with their information, as we don't know who will be accepting what, who will scan codes, or what information will be required. It's nice to be part of a larger coalition to solve this problem together."


Ploopy is a fully Open Source trackball that can be 3D printed and whose firmware is fully customizable – Explica

Mice and trackpads may rule the world, but trackballs are still an attractive option for certain types of users, and now those users have an interesting and totally Open Source option called Ploopy.

This trackball stands out for being totally open: its manufacturing diagrams are public, as is its firmware, which you can also customize to adjust the behavior of the buttons to your liking. And of course, you can create your own fork of Ploopy, if you think you can improve this unique development even more.

There are several remarkable trackballs on the market, such as the Logitech Ergo M575 launched a few months ago, but you may be intrigued by the possibilities of building one from scratch.

That is what the creators of Ploopy offer: on the project's GitHub website, they explain everything the process requires, from the tools (have a screwdriver and soldering iron ready) to the board (PCB) with the ADNS-5050 sensor, a fundamental part of the device's operation, and, of course, all the parts you will have to print on a 3D printer.

All the schematics for printing and assembly are available from the aforementioned project website, and the instructions are precise and clear, but there is one more important element: the firmware.

This component, vital to the operation of the Ploopy trackball, is another of its outstanding elements: unlike the proprietary firmware of other manufacturers, Ploopy's firmware code is available, and any user can modify it and adjust it to their needs.

If you lack the means or the inclination, no problem: you can buy kits that save you some of the work or components, though the price goes up. The fully assembled Ploopy model costs 100 Canadian dollars, about 68 euros.

Even so, this is undoubtedly a great idea that once again shows that great things can be done with Open Source hardware and software.

More information | Ploopy


Looking at broadband availability data over time – GCN.com


To track broadband availability, the Federal Communications Commission requires all internet service providers to submit Form 477, on which they report where they offer internet service at speeds exceeding 200 kbps in at least one direction. Fixed providers list the census blocks where they offer service to at least one location, and mobile providers file maps of their coverage areas for each broadband technology.

The accuracy of broadband availability data and mapping has long been debated because it is self-reported by ISPs and because the size and composition of census blocks varies. According to a May 19 Congressional Research Service report, the Form 477 data may be incomplete or inaccurate.

In the 10 years between census counts, people can move in and out of an area, which not only affects the accuracy of the Form 477 data but also makes it difficult for researchers or policymakers to study trends or changes in broadband availability. Without a deep understanding of the data, it is nearly impossible to draw meaningful conclusions about broadband penetration and coverage gaps.

Now, researchers at Michigan State University have developed a methodology for integrating broadband coverage data over time. They produced a dataset that puts data from Form 477 into a continuous timeline and aligns the data to the 2010 census, MSU officials said in a release.

"We developed a procedure for using the data to produce an integrated broadband time series," said John Mann, an assistant professor with MSU's Center for Economic Analysis. The team has labeled the dataset BITS, which stands for Broadband Integrated Time Series.

According to a paper on BITS, the dataset is essential because it provides the basis for longer comparative analyses of relative provision levels and the identification of locales that consistently lag behind others. It also makes it easier to compare data across Form 477 filings.

With shrinking public budgets and a need to pinpoint locations suffering from a chronic shortage of broadband, it is critical for policy-makers to efficiently allocate the human, infrastructural, and policy resources required to improve local conditions, the researchers wrote. The paper not only provides a framework for integrating and fusing Form 477 broadband data into a robust time-series database, it provides a user-friendly and harmonized version of the broadband data for use now.

BITS includes open-source code that analysts can modify for their own research and an approach for cross-walking census data, which is important for future analysis that deals with changes in census geographies. While the BITS is far from perfect, the researchers wrote, it represents an alternative to the current, shorter time series Form 477 data that are available for evaluating broadband provision and the digital divide.
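The cross-walking approach mentioned above can be sketched in miniature: values reported against one census vintage's blocks are reallocated to another vintage's blocks using interpolation weights (for example, by area). The block IDs and weights below are invented; the actual BITS procedure is more involved.

```python
def crosswalk(values, weights):
    """Reallocate per-source-block values to target blocks.

    values:  {source_block: value}
    weights: list of (source_block, target_block, weight), where each
             source block's weights sum to 1.0
    """
    out = {}
    for src, tgt, w in weights:
        out[tgt] = out.get(tgt, 0.0) + values.get(src, 0.0) * w
    return out

# A 2000-vintage block split 60/40 (by area) into two 2010-vintage blocks:
providers_2000 = {"blockA": 5}
weights = [("blockA", "block1", 0.6), ("blockA", "block2", 0.4)]
print(crosswalk(providers_2000, weights))  # {'block1': 3.0, 'block2': 2.0}
```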

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at [emailprotected] or @sjaymiller.


Shadow bans, fact-checks, info hubs: The big guide to how platforms are handling misinformation in 2021 – Nieman Journalism Lab at Harvard

This report is from the Partnership on AI, a nonprofit that explores the responsible use of artificial intelligence. Members include Amazon, Facebook, Google, DeepMind, Microsoft, IBM, and Apple, as well as nonprofits like the ACLU and First Draft, news organizations like The New York Times, and academic and research institutes like Harvard's Berkman Klein Center and Data & Society.

I like this report because misinformation coverage can often get bogged down in small, specific stories, and it's useful to zoom back out. If you spot interventions that the authors have missed, you can note them here. – LHO

Big Tech CEOs have become a regular sight on Capitol Hill, called in time and time again to testify before Congress about their misinformation practices and policies. These debates often revolve around what's been called the "false take-down/leave-up binary," where the central question is whether platforms should allow misleading (however that's defined) content on their platforms or not.

A quick scroll through platform policies, however, will reveal a variety of intervention tactics beyond simple removal, including labeling, downranking, and information panels. When this range of approaches to misinformation is considered, far more fundamental questions arise: When should each of these approaches be used, why, and who gets to decide?

To date, there has been no public resource for understanding and interrogating the landscape of options platforms might use to act on misinformation. At the Partnership on AI (PAI), we have heard from our partner community across civil society, academia, and industry that one obstacle to understanding what is and isn't working is the difficulty of comparing what platforms are doing in the first place. This blog post is presented as a resource for doing just that. Building on our previous research on labeling manipulated and AI-generated media, we now turn our attention to identifiable patterns in the variety of tactics, or interventions, used to classify and act on information credibility across platforms. This can provide a broader view of the intervention landscape and help us assess what's working.

In this post, we will look at several interventions: labeling, downranking, removal, and other external approaches. Within these interventions, we will look at patterns in what type of content they're applied to and where the information for the intervention comes from. Finally, we will turn to platform policies and transparency reports to understand what information is available about the impact of these interventions and the motivations behind them.

This post is intended as a first step, providing common language and reference points for the intervention options that have been used to address false and misleading information. Given the breadth of platforms and examples, we recognize that our references are far from comprehensive, and not all fields are complete. With that in mind, we invite readers to explore and add to our public database with additional resources to include. As a result of our collective work, platforms and policymakers can learn from these themes to design more informed and valuable interventions in the future and better debate what it means for an intervention to be valuable in the first place.

It seems like every other day a platform announces a new design to fight misinformation, so how did we decide which interventions to categorize? We started by comparing a non-comprehensive subset of several dozen interventions (and counting) on top social media and messaging platforms for an initial categorization of intervention patterns, based on usage statistics. We also included some data about other platforms with lower usage statistics, including Twitter, due to its prominent interventions and interest among Partnership on AI partners in our AI and Media Integrity Program.

We included any intervention we found that was related to improving the overall credibility of information on a platform. That means the focus of interventions is not always limited to misinformation (the inadvertent sharing of false information) but also includes disinformation (the deliberate creation and sharing of information known to be false), as well as more general approaches that aim to amplify credible information. Note that public documentation of these interventions varies widely and in some cases may be outdated or incomplete. In general, we based our intervention findings on conversations with PAI Partners, available press releases, platform product blogs, and external press coverage. If you see something to add or correct, let us know in the submission form for our intervention database.

In order to organize the patterns across interventions, we classified them according to three characteristics: 1) type of intervention, 2) element being targeted, and 3) the source of information for the intervention. These characteristics emerged as we noted the key differences between each intervention. Apart from the surface design features of the intervention, we realized it was key to address what aspect of a platform the design is applied to (for example, labeling on individual posts vs. accounts) as well as where the information was coming from, or the source.
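The three-characteristic classification above lends itself to a simple record structure. The sketch below is a hypothetical illustration of how one entry might be encoded, not PAI's actual database schema; all field values are invented.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    """One entry in an intervention database, organized along the
    three characteristics: type, targeted element, and source."""
    platform: str
    intervention_type: str  # e.g. "label", "downranking", "removal"
    target: str             # e.g. "post", "account", "feed", "external"
    source: str             # e.g. "fact-checkers", "crowd", "authoritative"

example = Intervention(
    platform="ExamplePlatform",
    intervention_type="label",
    target="post",
    source="fact-checkers",
)
print(example.intervention_type, example.target)
```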

There are many ways platforms might intervene on posts that they classify as false or misleading (more on the complexities of such classifications in part three). You might already be familiar with some tactics, such as fact-checking labels or the removal of posts and accounts. Others, like downranking posts so they appear less often in social media feeds, you might not think of, or even be aware of. We refer to these various approaches as interventions, or intervention types: the high-level categories of approaches employed by platforms.

Note that the visual and interaction design of interventions can vary widely, even for interventions of similar types (e.g. veracity labels on Facebook compared to those on Twitter feature different terminology, colors, shapes, and positions). In this post we focus on general approaches, rather than comparing specific design choices within types.

Labels are one of the more noticeable and varied types of interventions, especially as platforms like Facebook have ramped up to label millions of posts related to COVID-19 since 2020. We define labels as any kind of partial or full overlay on a piece of content that is applied by platforms to communicate information credibility to users.

However, labels are far from alike in their design: in particular, we differentiate between credibility labels and contextual labels. Credibility labels provide explicit information about credibility, including factual corrections (for example, a "false" label, also known as a veracity label in a 2021 review by Morrow and colleagues). Contextual labels, on the other hand, simply provide more related information without making any explicit claim or judgement about the credibility of the content being labeled. For example, TikTok detects videos with words related to COVID-19 vaccines and applies a generic informational banner inviting users to "Learn more about COVID-19 vaccines."

Beyond this, label designs can vary in other crucial ways, such as the extent to which they create friction for a user or cover a piece of content. Labels may be a small tag added alongside content or may make it more difficult to open the content. Each choice may have profound implications for how any given user will react to that content. For a more thorough discussion of the tradeoffs in design choices around labeling posts, you can check out our 12 Principles for Labeling Manipulated Media.

Ranking. Platforms with user-generated content, such as Facebook and TikTok, use various signals to rank what content appears to users and how. The same ranking infrastructure used to enhance user engagement has also been used to prioritize content based on credibility signals. For example, Facebook has used a news ecosystem quality (NEQ) score to uprank certain news sources over others. Conversely, downranking can reduce the number of times content appears in other users' social media feeds, often algorithmically. For example, Facebook downranks exaggerated or sensational health claims, as well as those trying to sell products or services based on health-related claims. At the extreme end of this spectrum, content may even be downranked to 0, or no ranking, meaning the content is not taken off the platform but is not algorithmically delivered to other users in their feeds. Ranking remains an opaque process across platforms, so it is hard to distinguish examples of content with a low ranking (that is, downranked) from content with no ranking.
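The ranking behavior described above can be modeled as a toy scoring function: each post's delivery score is an engagement signal scaled by a credibility factor, where a factor of 0 leaves the post hosted but excluded from algorithmic feeds ("downranked to 0"). All numbers here are invented; real ranking systems are far more complex and largely opaque.

```python
posts = [
    {"id": "a", "engagement": 100, "credibility_factor": 1.0},  # normal
    {"id": "b", "engagement": 300, "credibility_factor": 0.2},  # downranked
    {"id": "c", "engagement": 900, "credibility_factor": 0.0},  # "no ranking"
]

def feed_order(posts):
    """Return post IDs in delivery order, dropping zero-scored posts."""
    scored = [(p["engagement"] * p["credibility_factor"], p["id"])
              for p in posts]
    # Zero-scored posts stay on the platform but are never delivered
    # to other users' feeds.
    return [pid for score, pid in sorted(scored, reverse=True) if score > 0]

print(feed_order(posts))  # ['a', 'b'] -- 'c' is hosted but not distributed
```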

Removal is perhaps the most self-explanatory and often most controversial approach. We define removal as the temporary or permanent removal of any type of content on a platform. For example, Facebook, YouTube, and others removed all detected instances of Plandemic, a COVID-19 conspiracy theory video, from their platforms in May 2020.

Though labeling, downranking, and removal are the most prevalent types of approaches, platforms also employ other methods related to promoting digital literacy and reducing conflict in relationships. We'll discuss more specific examples in the next section.

While a lot of attention has been given to platform actions on individual posts, interventions act at many different levels. To understand the intervention landscape, it's worth knowing and considering what element on a platform is being targeted. In assessing interventions, we found that different approaches act on different scopes of content, including posts, accounts, entire feeds, and external efforts.

Post-level interventions are arguably the most visible and salient to users, as platforms indicate that specific posts of interest have been flagged or removed. This sometimes seems to trigger a Streisand effect in which the flagged posts receive additional attention for having been flagged. (This is especially true when the poster is a prominent public figure, such as former President Donald Trump.) In addition to credibility labels with explicit corrections, such as Facebook and Instagram false information ratings, interventions on posts can also include contextual labels that simply provide more information, such as TikTok's labels on posts tagged with vaccine information encouraging users to "Learn more about the COVID-19 vaccine."

Additionally, some post-level interventions like downranking are by definition less visible, as posts classified for downranking, for example on Facebook due to exaggerated health claims, are distributed less on social media feeds. In these cases, users may only suspect that an intervention has taken place without being able to confirm it. Finally, post-level interventions also include sharing or engagement restrictions, such as WhatsApp's limits on forwarding messages more than five times.

In many cases, these post-level interventions may be applied in tandem. For example, when Facebook adds a fact-checking label, the post is also downranked, and when Twitter labeled certain Trump tweets containing misleading information following the 2020 election, liking and sharing were also prohibited.

Account/group interventions target a specific user or group of users. When labeled, they are typically contextual in nature, offering identity verification according to platform-specific processes, or else surfacing relevant information about an account or group's origin, such as the account's country or whether it is state-sponsored. (Right: YouTube label for state-sponsored media.)

Accounts and groups are also subject to downranking and removal. Sometimes this is temporary or conditional until certain changes are made, such as the deletion of an offending post. Other times it is permanent. For example, platforms like Twitter have released guidelines detailing different account actions taken according to a five-strike system.

Instead of targeting individual posts or accounts, some interventions affect an entire platform ecosystem. Examples of feed-level interventions include the shadow banning of certain tags, keywords, or accounts across a platform, preventing them from surfacing in search. It is not always clear what feed-level actions are taking place, leading to widespread suspicion and speculation of bias, for example the debunked idea that conservative accounts and keywords are systematically downranked and banned across platforms like Facebook for ideological reasons.

There are feed-level labels as well, such as information hubs and information panels that are displayed prominently on platforms without being attached to particular posts. The banners shown on Twitter, Facebook, and Instagram ahead of the 2020 U.S. elections, which linked to election resources, are one prominent example. Other feed-level labels appear only when triggered by search. These can take the form of both credibility and contextual labels. Google, for example, highlights a fact-check if a query matches one in the ClaimReview database. And on Pinterest, merely searching for a keyword related to a misinformation-prone topic like the census triggers a banner linking to additional information (see example at right).

Finally, in some cases, platforms don't depend on labels, removal, or ranking, and instead aim to promote digital literacy education, either using embedded digital literacy educators and fact-checkers or outside of a platform environment entirely. This tactic is particularly useful in closed messaging environments where content can't be easily monitored for privacy reasons. For this reason, platforms like WhatsApp have announced funding for seven fact-checking organizations to embed themselves in groups and find other relational approaches to promote credibility. In other cases, the intervention involves direct support of partner sources identified by a platform as credible to create ads or other content to be amplified to users.

In making intervention decisions, platforms must decide what to intervene on. They currently rely on a variety of sources both to identify the need to intervene and to provide what they consider authoritative information. We refer to these actors and institutions as intervention sources, and in many ways the quality of an intervention can only be as good or trustworthy as its source, regardless of other design factors. These intervention sources include different systems, both human and algorithmic. Below we describe sources including crowds, fact-checkers, authoritative sources, and user metadata.

Very few crowd-based rating systems for misinformation currently exist publicly. In 2021, Twitter released Birdwatch in beta. The platform allows users to add notes with ratings about the credibility of posts. Others may then rate these notes, and the most helpful notes are surfaced first.

An early study from Poynter observed very low engagement with the feature, as well as evidence of politicized notes. Indeed, ensuring the quality of notes and preventing organized gaming of ratings by motivated political actors remains a challenge for any crowd-based intervention at scale, a challenge that Twitter itself is attentive to.

Similar tools using reporting features exist for moderating hate speech. For example, in 2016 Periscope released a feature that polled random users about whether reported messages were appropriate until a consensus was reached, at which point the offending user would either be allowed to post or be penalized. Users were shown the results of the poll. Though not explicitly about mis/disinformation, the feature offers an interesting model for random jury-based polling in content classification.

One of the more publicized sources of intervention information is fact-checkers. A group of fact-checkers came together following the 2016 election, offering to help Facebook check the credibility of its posts. Such organizations are now approved by the IFCN (International Fact-Checking Network). These fact-checking members are contracted by platforms such as Facebook to provide ratings on posts, either according to platform-specific classifications (as with Facebook) or broader industry schemas, such as ClaimReview, developed by schema.org and the Duke Reporters' Lab (as with Google).
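A simplified ClaimReview record (schema.org/ClaimReview) looks roughly like the following. The claim text, organization names, URL, and rating values are invented for illustration, and real markup typically includes additional fields.

```python
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/123",
    "claimReviewed": "Example claim text being checked",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example Source"},
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the human-readable verdict
    },
}

# Serialized as JSON-LD, this is the kind of structured record search
# engines match queries against when surfacing fact-checks.
print(json.dumps(claim_review, indent=2))
```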

Facebook has described how ratings are extended in concert with multi-modal automated detection tools to flag duplicate issues across posts, though these are not always applied accurately due to the difficulties of appropriately assessing a user's context and intent at scale.

In some cases where technical and specialized information is involved, such as election regulations, COVID-19, and the census, platforms have followed the recommendations of relevant expert organizations.

This approach may seem to offload the platforms' responsibility to be arbiters of truth by depending instead on credible institutions. In practice, however, the authority of these institutions has also proved contentious in the context of a politicized information ecosystem; for example, Facebook promotes CDC information even as many debate the agency's changing policies.

Additional sources included curated lists of tags/accounts, internal monitoring/content moderation, other platform curation of stories, and other metadata such as the provenance of a photo. While many platforms use automated detection, it is crucial to recognize that this detection is still based on prior classification by sources such as those listed above.

Now that we are equipped with a basic understanding of how misinformation interventions operate, how can we tell what they are meant to do and whether they're doing it? Here we face a deeper problem: a lack of standardized goals and metrics for interventions. Though such interventions appear to have societal goals related to harmful misinformation, they are, in many ways, still treated like any other platform feature, with limited public-facing explanations. And while many platforms regularly release public statistics, these rarely include information about specific interventions beyond high-level counts of actions such as posts removed.

Researchers have also asked for dynamic Transparency APIs to track and compare these and other changes in real time for reporting purposes, but many have yet to receive the kinds of data they need to conduct the most effective research. For a summary of current research approaches, see New Approaches to Platform Data Research from the NetGain Partnership. The report points out that even as platforms provide total numbers and categories of information removed, they aren't informative about the denominator of the information total, or what kinds of groups information is and isn't distributed to. Because of this, there is very little structured information about the efficacy of specific interventions compared to each other, leaving researchers to scrape details from product blogs, corporate Twitter threads, and technology reporting.

If these interventions are to have a positive societal impact, we need to be able to measure that impact. This might start with common language, but ultimately we'll need more to be able to compare interventions to each other. This begins with platforms taking responsibility for reporting these effects and taking ownership of the fact that their intervention decisions have societal effects in the first place. Our prior research surfaced widespread questioning and skepticism of platform intervention processes. In light of this, such ownership and public communication is essential to building trust. That is, platforms can't simply count on tweaking and A/B testing the color scheme and terminology of existing designs to make the deeper social impacts they appear to seek.

Going forward, we need to examine such patterns and ad hoc goals. We also need to align on what other information is needed and ongoing processes for expanding access to relevant metrics about intervention effects. This includes further analysis of how existing transparency reports are used to understand how they might be more valuable for affecting how users come into contact with content online. Platforms should embrace transparency around the specific goals, tactics, and effects of their misinformation interventions, and take responsibility for reporting on their content interventions and the impact those interventions have.

As a next step, the Partnership on AI is putting together a series of interdisciplinary workshops with our Partners, with the ultimate goal of assessing which interventions are the most effective in countering and preventing misinformation and how we define misinformation in the first place. We're complementing this work with a survey of Americans' attitudes towards misinformation interventions. In the meantime, our database serves as a resource for directly comparing and evolving interventions to help us build a healthier information ecosystem, together.

Do you have something to add that we didn't cover here? We know our list is far from comprehensive, and we want your help to make this a valuable and up-to-date resource for the public. Let us know what we're missing by emailing aimedia@partnershiponai.org or submitting an intervention to this Airtable form, and we'll get to it as soon as we can. Stay tuned for more updates on future versions of this database and related work.

Emily Saltz is a UX researcher and a past fellow at the Partnership on AI and First Draft News. Claire Leibowicz leads AI and media integrity efforts at the Partnership on AI.
