Open Invention Network Celebrates Its 15th Year Protecting Core Linux and Open Source from Patent Aggression – GlobeNewswire

DURHAM, N.C., Nov. 17, 2020 (GLOBE NEWSWIRE) -- Open Invention Network (OIN) is celebrating its 15th year protecting the Open Source Software (OSS) community from patent risk. OIN's efforts have enabled businesses and organizations to confidently invest their resources to develop, integrate and use OSS, safeguarding them from patent risk in core Linux and adjacent OSS technologies.

"Freedom to participate in open source projects and adopt Linux and other open source code has been enabled through broad-based participation in the OIN cross-license, which has become a litmus test for authenticity in the open source community," according to Keith Bergelt, CEO of Open Invention Network. "Joining the OIN community demonstrates an explicit recognition among signatories of a commitment to open source technologies and the set of norms required around the appropriate use of patents in an increasingly open source-centric world. Companies that do not sign the OIN license and refuse to participate in this rapidly growing community are explicitly or implicitly reserving the right to use their patents to litigate over core Linux and OSS functionality."

Since its founding in 2005, powered by the contributions of many individuals and organizations, OIN has grown to be the largest patent non-aggression community in history with over 3,300 participants. Over the past 15 years, OIN's community has experienced a compounded annual growth rate (CAGR) for licensees of more than 50%, and the OIN community in total now owns more than 2.6 million patents and applications. In addition, OIN provides royalty-free access to its strategic portfolio of more than 1,300 worldwide patents and applications.

The scope of patent non-aggression between OIN community members is defined by OIN's Linux System definition. It has evolved to include nearly 3,400 software packages, three times the number covered at its launch in 2005. This ensures freedom of action in global open source projects and technology platforms including Linux, Python, GNOME, SUSE, X.org, Perl, Fedora, Android, Hyperledger, OpenStack, Apache, Avro, Kafka, Spark and Hadoop; Automotive Grade Linux (AGL), Robot Operating System (ROS), KDE Frameworks, and Eclipse Paho and Mosquitto, among many others.

In addition to its historic focus of deterring operating companies antagonistic to Linux and OSS from using patents to slow or stall OSS development, OIN has worked with numerous community members to either defeat patent aggressors' lawsuits or enable members to settle disputes cost-effectively. As patent risk has evolved to include patent assertion entities (PAEs), OIN has shifted a portion of its focus and partnered with IBM, The Linux Foundation, and Microsoft to fund the first anti-PAE program, Unified Patents' Open Source Zone, designed specifically to limit the effect of PAE aggression directed at the open source community.

OIN's community explicitly supports patent non-aggression in core Linux and adjacent open source technologies by cross-licensing Linux System patents to one another on a royalty-free basis. Patents owned by OIN are similarly licensed royalty-free to any organization that agrees not to assert its patents against the Linux System. The OIN license can be signed online at http://www.j-oin.net/.

"Open Invention Network is a unique organization. For 15 years, it has protected the Linux and open source community from patent aggression, while fostering a community that understands the value of shared technological development," said Tim Kowalski, Chairman of Open Invention Network. "It has been an honor to have worked with the OIN team, and fellow OIN Board members, to plan and implement the strategic initiatives that have provided protection for open source."

"Google has been a proud member of OIN since joining the community in 2007. Linux and adjacent open source software power the cloud-based services of today and tomorrow," said Chris DiBona, Director of Open Source at Google. "Throughout, OIN has been there to ensure that open source remains safe for users, consumers, and developers alike to consume and build upon."

"Since its inception 15 years ago, OIN has provided unprecedented protection to enable the incredible growth and adoption of Linux and other related open source software around the world," said Ken King, General Manager, OpenPower at IBM. "As the OIN community has grown to over 3,000 members and the Linux definition has matured, that protection has only grown stronger. IBM and Red Hat have been two of the leading proponents and drivers of Linux and open source software for over 20 years and have been founding members of OIN since its inception. Today, IBM and Red Hat continue to share with OIN a deep and unwavering commitment to scale Linux and open source innovation, providing flexibility, choice and leadership for the industry."

"Global adoption of Linux and other open source technologies is an irreversible trend. For the last fifteen years, they have transformed almost every industry," said Hirotake Konda, Deputy General Manager of IP Management Division and Department Manager of Licensing Department of NEC, and Jackson Chen, Senior Associate General Counsel of NECAM. "By sharing innovation, Linux and open source capabilities have soared, application interoperability is unprecedented, connectivity is virtually everywhere, while business and consumer productivity are at all-time highs. By blocking patent aggression in open source, OIN has enabled safer investments in product development and helped to enable these innovations."

"Philips is proud to be a founder of the Open Invention Network. Over the years, OSS has been an important building block for many of our businesses, and is increasingly important for the medical and health tech industries. OSS enables faster development of monitoring, imaging, diagnostic, and informatics platforms, while driving down costs," said Jako Eleveld, Head of IP Licensing of Royal Philips. "With more than 3,300 community members, OIN is ensuring that OSS innovation continues its rapid pace, in a safe environment enabled by licensing."

"At the time of OIN's founding, open source was nascent, and most software was built in silos. During the past 15 years, we have been pleased to watch the OIN community grow from 6 members to more than 3,300," said Peter Toto, Senior VP, IP Counsel at Sony Corporation of America. "The OIN community's powerful cross-license has enabled businesses to safely develop and sell innovative new systems and platforms that have revolutionized the way the world conducts business."

"As a global leader in open source, SUSE enables innovation from the largest data centers in the world to cloud environments, and our technology is embedded in everyday devices like cars, points of sale, and MRIs. Open source infrastructure is the power behind the cloud and digital transformation, and our customers rely on us for mission-critical business outcomes and for innovation," said Thomas Di Giacomo, president of engineering and innovation at SUSE. "As a leader in bringing open source solutions to enterprises, including enterprise Linux, we see firsthand why Linux is the number one platform for cloud, embedded devices and high-speed containers at the edge. Open source is one of the fastest growing markets in the industry, in part due to the protection afforded by OIN, of which we have been a proud, active member for years, and its patent non-aggression community."

"Over its history, the automotive industry has undergone periods of significant innovation. We are now undergoing a fundamental shift in the way automotive platforms are used by consumers and businesses. Automotive Grade Linux and other OSS projects are helping the industry rapidly transform. By protecting them from patent risk, OIN has enabled automotive manufacturers to effectively integrate new kinds of software-based technologies into cars," said Yosuke Iida, General Manager of Intellectual Property Division at Toyota.

About Open Invention Network

Open Invention Network (OIN) is the largest patent non-aggression community in history and supports freedom of action in Linux as a key element of open source software (OSS). Patent non-aggression in core technologies is a cultural norm within OSS, so that the litmus test for authentic behavior in the OSS community includes OIN membership. Funded by Google, IBM, NEC, Philips, Sony, SUSE, and Toyota, OIN has more than 3,300 community members and owns more than 1,300 global patents and applications. The OIN patent license and member cross-licenses are available royalty-free to any party that joins the OIN community.

For more information, visit http://www.openinventionnetwork.com.

Media-Only Contact:

Ed Schauweker
AVID Public Relations for Open Invention Network
ed@avidpr.com
+1 (703) 963-5238


Open Source Software Market: Technological Advancement & Growth Analysis with Forecast to 2026 – The Daily Philadelphian

The coronavirus (COVID-19) pandemic has impacted industries across the globe, and the Open Source Software market is one of them. As the global market heads towards a major recession, In4Research has published a brand-new research report that studies the impact of the COVID-19 crisis on the Open Source Software industry and suggests possible actions to curtail it. The Open Source Software Market report covers an in-depth analysis of the Open Source Software industry including statistical, quantitative and qualitative data points, with emphasis on market dynamics including the drivers, opportunities & restraints, market size, industry status and forecast, competition landscape, and growth & revenue opportunities after the COVID-19 pandemic.

In addition, this Open Source Software market research report covers both the global and regional markets with a detailed overview of the market's complete growth forecast. This research also sheds light on the market's wide-ranging competitive environment. The study also includes a dashboard overview of top businesses in both historical and current contexts, covering their active marketing strategies, recent developments & trends, and market contribution.


Open Source Software Market Segment Analysis:

The research report includes specific segments by Type and by Application. Each type segment provides production data for the forecast period of 2019 to 2026; the application segment provides consumption data for the same period. Understanding the segments helps in identifying the importance of the different factors that aid market growth.


The report includes coverage of market dynamics at the country level in the respective regional segments. It also comprises a competitive analysis focused on the key players and participants of the Open Source Software market, covering in-depth data on the competitive landscape, positioning, company profiles, key strategies adopted, and product profiling, with a focus on market growth and potential.



Regional Analysis:

Open Source Software market breakdown data are presented at the regional level, showing sales, revenue and growth by region.

Impact of COVID-19 on Open Source Software Market

The report also covers the effect of the ongoing worldwide COVID-19 pandemic on the Open Source Software market and what the future holds for it. It offers an analysis of the pandemic's impact on the international market. The pandemic has immediately disrupted demand and supply chains. The Open Source Software Market report also assesses the economic effect on firms and financial markets. Futuristic Reports has gathered insights from several industry representatives and drawn on secondary and primary research to provide customers with strategies and data to combat industry struggles during and after the COVID-19 pandemic.




FOR ALL YOUR RESEARCH NEEDS, REACH OUT TO US AT:

Contact Name: Rohan S.
Email: [emailprotected]
Phone: +1 (407) 768-2028


GitHub archives open-source ‘greatest hits’ in four locations around the world – SiliconANGLE News

GitHub Inc., an open-source code repository and subsidiary of Microsoft Corp., today announced an expansion of the GitHub Archive Program, through which the company has sought to preserve open-source software for future generations in the GitHub Arctic Code Vault in Svalbard, Norway.

This new expansion will see the "Greatest Hits" of GitHub, a collection of the most popular code repositories, printed on piqlFilm reels and stored in aesthetically pleasing 3D-printed boxes. Alongside these greatest hits, a random sample of roughly 5,000 other repositories will also be included to give future historians a better understanding of the gamut of projects built by today's developers.
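The selection scheme described above, the most popular repositories plus a random sample of the rest, can be sketched in a few lines of Python. The function name, the `(name, stars)` pair format, and the counts here are illustrative assumptions, not GitHub's actual selection code:

```python
import random

def select_archive(repos, top_n, sample_n, seed=0):
    """Pick the most-starred repos plus a random sample of the rest.

    `repos` is a list of (name, stars) pairs; the field layout and
    thresholds are assumptions for illustration only.
    """
    ranked = sorted(repos, key=lambda r: r[1], reverse=True)
    greatest_hits = ranked[:top_n]     # the "greatest hits"
    rest = ranked[top_n:]
    random.seed(seed)                  # fixed seed: reproducible sample
    sample = random.sample(rest, min(sample_n, len(rest)))
    return greatest_hits, sample
```

A fixed seed keeps the random sample reproducible, which matters for an archival snapshot that should remain auditable later.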

According to GitHub, these 17,000 open-source repositories are the foundation of technology used today across the globe. To ensure the accessibility of these key projects, per the "Lots of Copies Keep Stuff Safe" (LOCKSS) principle, the company created four sets.

The chosen locations are the Bodleian Library at Oxford University in England, the Bibliotheca Alexandrina (a.k.a. the Library of Alexandria, Egypt), and the Stanford Libraries in California. Additionally, the fourth box will be preserved and displayed at GitHub headquarters.

"Preservation of knowledge is of enormous importance to not only us at the Bodleian Libraries but to society as a whole," said Richard Ovenden, Bodley's Librarian. "In this digital age, we must constantly seek new ways of preserving critical information, such as code. Librarians and archivists, the custodians of the past, are also the advance-guards of the future."

As one of the oldest libraries in Europe, the Bodleian Library has been amassing and protecting human knowledge for centuries, and its archives include codices thousands of years old. It is the second-largest library in Britain after the British Library, boasting over 12 million items.

"Our community has developed open approaches to software development, data practices, and scholarly communication for years; consequently, it feels like the Bodleian Libraries is a perfect partner with GitHub and the other partner libraries around the world to ensure the preservation of open source software," Ovenden said.

The Bibliotheca Alexandrina, Africa's pre-eminent library and modern-day successor to the legendary Library of Alexandria, has become the second location for the GitHub Archive. It is also a center for knowledge, as well as dialogue, learning and understanding among cultures.

"Since its launch as the New Library of Alexandria in 2002, the Bibliotheca Alexandrina has been committed to the preservation of heritage for future generations," Dr. Mostafa El Feki, director of the Bibliotheca Alexandrina, said. "To that end, the Bibliotheca Alexandrina has developed several initiatives including web archiving, book digitization and 3D artifact digitization."

The Stanford Libraries, 24 in all, at Stanford University sit squarely in the heart of Silicon Valley. "The GitHub Archive Program is an excellent partnership model among technology companies, libraries, and other institutions dedicated to the long-term preservation of open source software," said Michael Keller, vice provost and university librarian for Stanford Libraries. "Today's announcement is a creative way to encourage others to ensure their history, innovations and code are well conserved for future generations."

The reels themselves will be stored in beautifully printed boxes commissioned from Alex Maki-Jokela, an artist and engineer whose work blends traditional aesthetics with 3D printing and AI-generated art.

"I wanted to create something that was aesthetically beautiful, and that paid homage to the spirit of open-source software and to the generations of science and engineering that open-source software rests upon," said Maki-Jokela, who documented the journey on Medium.

"The software and 3D printing tools and techniques I used to create the cases are close to the edges of what's available today," Maki-Jokela added, "and a lot of the technical symbolism reflects our species' advances in engineering over the past hundred years. But underneath the artwork, I sought to express stories that are as old as time and that will still be true hundreds of years from now, even as the forms of technology change."

GitHub offers further information on the Arctic Code Vault, its Greatest Hits archive and more at its Archive Program web portal.



GitHub expands open source archive effort into three key libraries – TechCentral.ie


Archive Program aims to preserve pieces of open source software to allow future software developers to see how the community built and reviewed code


Historians and future generations of developers will be able to unearth early lines of open source Linux, Ruby, or Python code buried 250 feet under the earth's permafrost layer and, now, in three historic libraries in Oxford, Egypt, and California, thanks to GitHub's expanding Archive Program.

Announced last year at the code management company's Universe event, the GitHub Archive Program aims to preserve open source software in much the same way we do works of art, design, or literature. By printing historically relevant open source repositories onto reels of piqlFilm (digital photosensitive archival film), GitHub, which was acquired by Microsoft in 2018, hopes to preserve the open source software movement for future generations.

This program includes the storage of a code archive in the Arctic World Archive in Svalbard, Norway, just one mile away from the famous Global Seed Vault: 186 reels of piqlFilm containing 21TB of repository data were stored this summer in a decommissioned coal mine 250 meters deep in the permafrost.

Run in partnership with the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, the Arctic World Archive, and Microsoft Research, the program looks to preserve both "warm" and "cold" versions of the code to ensure multiple copies and formats of the software survive, an approach known among archivists as LOCKSS (Lots Of Copies Keeps Stuff Safe).

Now, the project is expanding by donating reels of hardened microfilm to the 400-year-old Bodleian Library at Oxford University in England, the Bibliotheca Alexandrina in Egypt, and the Stanford Libraries in California, as well as storing a copy in the library at GitHub's headquarters in San Francisco.

GitHub is preserving its most popular repositories by the number of stars given by the community, including projects like Linux and Android and programming languages like Ruby and Go. The company is also preserving 5,000 repositories picked at random.

"The idea behind that is, when you go back in history, we want to preserve the work of individual developers, students, and small, lesser-known developers and their open source projects," Thomas Dohmke, VP of strategic programs at GitHub, told InfoWorld.

By its very nature, open source software is not a static thing to be preserved; it is collaborative and always in flux. The intention is not to store copies that can be booted and run in the future, although that may be possible. Instead, the idea is to preserve a moment in time, when open source became the premier mode of software development, and to chart the cultural significance of that movement.

"A platform like GitHub can paint a picture of a broad spread of the software developer community across the globe at a moment in time," Richard Ovenden, Bodley's Librarian and president of the Digital Preservation Coalition, told InfoWorld.

"We think it is worth preserving software and how people worked together across the world to contribute and review source code. There is something culturally there which is worth preserving," GitHub's Dohmke added.

The archive is being built for two types of people, according to Dohmke: historians, and future software developers curious about how software was developed during this era.

Each donation is specially encased using a combination of 3D printing and AI-generated art by the engineer and artist Alex Maki-Jokela. You can read more about his work on Medium.

All archived code will also include technical guides to QR decoding, file formats, character encodings, and other critical metadata so that future developers can decode it. "Storage is not the same thing as preservation; you have to do other things," Ovenden said.
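Recording the character encoding matters because the same bytes decode differently under different assumptions. A minimal Python sketch (the sample string is illustrative):

```python
# The same bytes read back with the wrong codec produce mojibake.
data = "café".encode("utf-8")      # b'caf\xc3\xa9'

wrong = data.decode("latin-1")     # no recorded encoding: a bad guess
right = data.decode("utf-8")       # archived metadata names the codec

print(wrong)   # cafÃ©
print(right)   # café
```

Without the archived metadata a future reader would be left guessing which of many plausible codecs the reels used.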

IDG News Service




SaTT smart advertising is in good company taking the open source route – Crypto News Flash

Once upon a time, packaged software was all we knew. That was the late '60s and early '70s, when the concept of open source hadn't yet been birthed. The norm for programmers was to be hired to write code for corporations, or to sell code to companies or consumers.

To be sure, open source software itself already existed but the term had not yet been coined. Computers and software were a new thing and enthusiasts would meet up and write code for fun, a scenario not unlike that of the recent gatherings and meetups of blockchain technology enthusiasts.

Meanwhile, development of software based on the sharing and collaborative improvement of software source code gained more popularity over time (in large part as a protest against proprietary software giants like Microsoft) and in the late 1990s, gained mainstream recognition that led to the coining of the Open Source label.

Basically, the term Open Source denotes that a product includes permission to use its source code, design documents or content. This means any developer or programmer can contribute to the development of an open source project, which is an amazing way of involving global talents and communities to collaborate and produce high quality programs.

Last month, when the smart advertising token SaTT announced its decision to take the open source route, it was precisely to take advantage of this aspect of open source. According to SaTT:

It has always been clear to us that a blockchain project, decentralized by definition, cannot be constrained by a centralized environment [SaTT aims to be] a project that can benefit from community contributions and exceed our vision and know-how, establishing ourselves as a universal reference.

Open source does offer advantages when compared to proprietary development: the backing and involvement of global talents and communities, as mentioned; enhanced levels of creative freedom for developers and platform customization for the end user; and a better chance of finding and fixing bugs, since open source projects attract vibrant communities. These are among the reasons to choose open source development.

Indeed, SaTT is in good company taking the Open Source route. Some of the best known and most successful blockchain projects are open source.

Ethereum immediately springs to mind of course, as the open source blockchain platform on which smart contracts run. Proposed in 2013 by Vitalik Buterin who has since become one of the most influential voices in Blockchain, the distributed public blockchain network enables developers to build and deploy next-generation decentralised apps easily.

Currently, the Ethereum platform depends on open source development for a widely anticipated major upgrade to Ethereum 2.0. The Enterprise Ethereum Alliance is one of the largest open-source blockchain initiatives, representing a wide variety of business sectors including technology, healthcare, banking, energy, government, pharmaceuticals and many more.

Hyperledger Fabric, the enterprise-grade permissioned distributed ledger framework that enables performance at scale while preserving privacy; Solidity, the contract-oriented programming language; and MetaMask, one of the most downloaded blockchain browser extensions, which equips users to manage digital assets, are just a handful of the numerous open source blockchain projects that have made their mark on the industry.

Outside of blockchain, the open source success stories are considered mainstream technology successes.

Linux has come a long way since Linus Torvalds announced the creation of an OS kernel back in 1991, and a majority of web servers currently run on its platform. Used by numerous websites and services, among them Wikipedia and Facebook, MySQL is the most widely used database server in the world. Utilized by 46% of all websites in the world, Apache HTTP Server has been the most popular web server software since 1996. WordPress is one of the most dominant and popular blog platforms. Google Chrome leads the global desktop browser market in 2020 with a 77.3% share.

Indeed, history has indicated that open source developments have been successful in claiming substantial market share.

Can the SaTT project live up to its goal of being the working advertising solution that will be widely adopted by its industry?

With a utility token that enables advertisers to buy smart advertising services on its dApp platform, SaTT helps to quantify ROI using blockchain oracles that retrieve data from media platforms like YouTube, Twitter, Facebook, Instagram and others. When the actions performed by influencers and content creators successfully meet the criteria set by the advertiser, payments are triggered automatically from the advertiser's preloaded budget to the wallet of the influencer/publisher.
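The trigger logic described above can be sketched in Python. The class, field names, and view-count criterion below are illustrative assumptions for exposition, not SaTT's actual smart contract:

```python
class Campaign:
    """Illustrative sketch of an oracle-triggered ad payout.

    The field names and the minimum-views rule are assumptions for
    illustration; they are not SaTT's real contract logic.
    """

    def __init__(self, budget, min_views, reward_per_post):
        self.budget = budget          # advertiser's preloaded budget
        self.min_views = min_views    # criterion set by the advertiser
        self.reward = reward_per_post
        self.payouts = {}             # influencer wallet -> total paid

    def report(self, wallet, views):
        """Called with data an oracle fetched from a media platform."""
        if views >= self.min_views and self.budget >= self.reward:
            self.budget -= self.reward
            self.payouts[wallet] = self.payouts.get(wallet, 0) + self.reward
            return True   # payment triggered automatically
        return False      # criteria unmet or budget exhausted
```

The key property the sketch illustrates is that no manual invoicing step exists: once the oracle report satisfies the advertiser's criteria, the transfer happens, and it stops when the preloaded budget runs out.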

Offering a fast, secure, tamper-proof, and cost-efficient solution, the SaTT smart advertising system shows much promise, not to mention that it is already a working solution, now in the midst of implementing a large-scale proof of concept (POC). With a roadmap that details how it will reach open source mode in six to nine months' time, SaTT seems to be taking a step in the right direction.


Ted Cruz digs in for congressional battle over censorship on Twitter, Facebook – Houston Chronicle

WASHINGTON – U.S. Sen. Ted Cruz set conservative Twitter on fire as he tore into Jack Dorsey, the platform's CEO, during a recent Senate Judiciary Committee hearing, creating the sort of viral moment senators crave from such high-profile exchanges.

"Facebook and Twitter and Google have massive power. They have a monopoly on public discourse in the online arena," Cruz told Dorsey and Facebook CEO Mark Zuckerberg, whom the Texas Republican and other GOP members of the committee had subpoenaed to address what they view as censorship and suppression by Big Tech during the 2020 election.

"Your policies are applied in a partisan and selective manner," Cruz said, demanding that Dorsey and Zuckerberg produce data showing how often they flag or block Republican candidates and elected officials as opposed to Democrats.

"What a moment," right-wing commentator Dinesh D'Souza tweeted, sharing a clip from the hearing with his 1.9 million followers.

"This is almost TOO GOOD," tweeted Dan Bongino, another conservative commentator, urging his 2.7 million followers to "Watch Twitter CEO Jack Dorsey absolutely squirm in his chair as Ted Cruz goes full trial lawyer on him."


As social media companies cracked down on misinformation during the election, under pressure to prevent a repeat of 2016's Russian meddling, they found themselves increasingly targeted by conservatives such as Cruz, who call it censorship when Twitter flags President Donald Trump's posts that falsely claim he won re-election, or when Facebook tries to stop its users from sharing a debunked story about President-elect Joe Biden's son.

It's a sign of how an area of bipartisan agreement, the need to reform Big Tech, has become increasingly politicized, leaving experts worried that reform will be yet another effort mired in congressional bickering.

"The fundamental question is what right does a social media platform have to label something posted on it as potentially untrue," said Chris Bronk, an expert in cyber geopolitics who is an associate professor at the University of Houston.


Bronk said it has become increasingly clear that reforms are needed to counter domestic hate groups and hostile foreign governments that use social media to ply the American public with disinformation.

But when the same politicians who regulate the industry are also being flagged for making false or misleading statements, Bronk sees little room for agreement.

"I got a tweet this morning at seven whatever, the president put out there, and it just said, 'I won the election.' Is that true?" said Bronk, a former foreign service officer with the State Department. "The internet has allowed us to divorce ourselves from some sets of facts."

The debate centers on Section 230 of the Communications Decency Act, which offers legal protections to online platforms that publish and circulate content created by others.

Cruz, Trump and Biden agree those protections need to go. But the reasons they cite couldn't be further apart.

Democrats such as Biden say social media platforms aren't doing enough to combat misinformation and harmful content such as hate speech.

"I recognize the steps, they're really baby steps, that you've taken so far, and yet destructive, incendiary misinformation is still a scourge on both your platforms," U.S. Sen. Richard Blumenthal, D-Conn., told Dorsey and Zuckerberg during the committee hearing, a proceeding that Blumenthal deemed "a political sideshow, a public tarring and feathering."

Republicans, including Cruz, say Twitter and Facebook have already gone too far.

"They've had unchecked power to censor, restrict, edit, shape, hide, alter virtually any form of communication between private citizens or large public audiences," Trump said this year as he signed an executive order targeting the protections in place. Trump said fact-checking attempts by the platforms are "one of the greatest dangers (free speech) has faced in American history."

Experts say there's actually little evidence that social media platforms unfairly target those on the right, and that available data actually indicates that conservative social media tends to get more traffic online. For instance, the New York Times reported that Trump's official Facebook page got 130 million reactions, shares and comments over a 30-day stretch in the final leg of the presidential race, compared with 18 million for Biden's page.

Trump similarly eclipsed Biden on Instagram, and the gaps on both sites widened as the race came to an end, the Times reported.

"Part of the tension on Capitol Hill is the Republicans continue to push this false narrative that tech is anti-conservative," said Hany Farid, a computer science professor at the University of California, Berkeley, who has testified before the Senate and advised congressional offices on potential legislation. "There is no data to support this. The data that is there is in the other direction and says conservatives dominate social media."

Farid said some important, if small, steps are being taken. The Judiciary Committee this year passed a bill that would amend Section 230 to allow federal and state claims against platforms hosting content that sexually exploits children.

Farid said the relatively narrow bill targets a very serious problem, but it's one of "many, many really bad problems" on the internet, including hate speech and terrorism. Once those other issues are brought up, Farid said, Republicans start to push back.

"It's easy to be supportive of legislation that protects 4-year-olds from being sexually assaulted," Farid said. "When it comes to things outside of child sex abuse, the Republicans have a problem, because a lot of their folks live on the side of white supremacists. When we start talking about cracking down on hate speech, they hear Republicans."

But Farid also questioned the wisdom of scrapping Section 230 altogether, as Biden has advocated, and said regulations on the algorithms that platforms use to decide what content gets promoted to their users would be a better approach.

Part of the problem, he said, is that few lawmakers have a deep understanding of the industry, and even some of their more tech-savvy staffers don't seem to have a firm grasp on the issue.

"Unfortunately a lot of these hearings are not substantive," Farid said. "They are for show. They're like flexing muscles."

Cruz, a former Texas solicitor general, was flexing at the hearing with Dorsey and Zuckerberg.

"CRUZ STEPS INTO RING WITH TWITTER CEO, HITS HIM WITH 5 LEGALLY DEVASTATING FINISHING MOVES," read the text on a video the conservative Washington Examiner shared, with clips from Cruz's questioning of Dorsey.

In the past, Cruz has called for a criminal investigation into Twitter, accusing the social media company of violating U.S. sanctions on Iran by providing social media accounts to Iranian leaders.

He has urged the top U.S. trade official to scrap language in trade agreements that Cruz said offers near-blanket legal immunity to technology companies.

And he has accused Google of abusing its monopoly power in an effort to censor political speech with which it disagrees.

Cruz, like many Republicans, has also joined Parler, a social media network catering to conservatives.

At the hearing, Cruz vowed to put Twitter's policies to the test by tweeting out statements about voter fraud, including findings from the Commission on Federal Election Reform, a bipartisan organization founded in 2004 by former President Jimmy Carter and former Secretary of State James Baker.

After the hearing, Cruz tweeted to his 4.1 million followers:

Twitter Test #1: "Absentee ballots remain the largest source of potential voter fraud."

Twitter Test #2: "Voter fraud is particularly possible where third party organizations, candidates, and political party activists are involved in handling absentee ballots."

Twitter Test #3: "Voter fraud does exist. This is just one example," linking to a news report about a woman charged in Texas.

None of the tweets was flagged.

ben.wermund@chron.com

twitter.com/benjaminew

Here is the original post:

Ted Cruz digs in for congressional battle over censorship on Twitter, Facebook - Houston Chronicle

Even the world’s freest countries aren’t safe from internet censorship – Help Net Security

The largest collection of public internet censorship data ever compiled shows that even citizens of what are considered the world's freest countries aren't safe from internet censorship.

A team from the University of Michigan used its own Censored Planet tool, an automated censorship tracking system launched in 2018, to collect more than 21 billion measurements over 20 months in 221 countries.

"We hope that the continued publication of Censored Planet data will enable researchers to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference," said Roya Ensafi, U-M assistant professor of electrical engineering and computer science, who led the development of the tool.

Ensafi's team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world's freest by Freedom House, a nonprofit that advocates for democracy and human rights.

They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020. They also found previously undetected events in Cameroon, Ecuador and Sudan.

While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here.

"When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic," she said. "The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope."

It's already happening abroad, the researchers found.

"What we see from our study is that no country is completely free," said Ram Sundara Raman, U-M doctoral candidate in computer science and engineering and first author of the study. "We're seeing that many countries start with legislation that compels ISPs to block something that's obviously bad, like child pornography or pirated content."

"But once that blocking infrastructure is in place, governments can block any websites they choose, and it's a very opaque process. That's why censorship measurement is crucial, particularly continuous measurements that show trends over time."

Norway, for example, which is tied with Finland and Sweden as the world's freest country according to Freedom House, passed laws requiring ISPs to block some gambling and pornography content beginning in early 2018.

Censored Planet, however, uncovered that ISPs in Norway are imposing what the study calls "extremely aggressive" blocking across a broader range of content, including human rights websites like Human Rights Watch and online dating sites like Match.com.

Similar tactics show up in other countries, often in the wake of large political events, social unrest or new laws. News sites like The Washington Post and The Wall Street Journal, for example, were aggressively blocked in Japan when Osaka hosted the G20 international economic summit in June 2019.

News, human rights and government sites saw a censorship spike in Poland after protests in July 2019, and same-sex dating sites were aggressively blocked in India after the country repealed laws against gay sex in September 2018.

The researchers say the findings show the effectiveness of Censored Planet's approach, which turns public internet servers into automated sentries that can monitor and report when access to websites is being blocked.

Running continuously, it takes billions of automated measurements and then uses a series of tools and filters to analyze the data and tease out trends.
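Censored Planet's actual pipeline is far more involved, but the aggregation step described above, taking many remote measurements and filtering them for anomalously high failure rates, can be sketched in a few lines. This is a hypothetical illustration: the function name, data shape, and thresholds are invented for the example and are not taken from the project.

```python
# Hypothetical sketch of flagging likely blocking from bulk reachability
# measurements. Each measurement is a (country, domain, blocked) record;
# a (country, domain) pair is flagged when it has enough probes and its
# observed block rate exceeds a threshold. Thresholds are illustrative.
from collections import defaultdict


def flag_anomalies(measurements, min_probes=100, threshold=0.5):
    """Return [( (country, domain), block_rate ), ...] for anomalous pairs."""
    counts = defaultdict(lambda: [0, 0])  # (country, domain) -> [blocked, total]
    for country, domain, blocked in measurements:
        stats = counts[(country, domain)]
        stats[0] += int(blocked)
        stats[1] += 1
    return sorted(
        (key, blocked / total)
        for key, (blocked, total) in counts.items()
        if total >= min_probes and blocked / total >= threshold
    )


# Toy data: a Norwegian vantage point failing 80% of the time for one
# domain, while a Finnish vantage point reaches it reliably.
data = (
    [("NO", "match.com", True)] * 80 + [("NO", "match.com", False)] * 20 +
    [("FI", "match.com", False)] * 100
)
print(flag_anomalies(data))  # only the Norwegian pair crosses the threshold
```

The `min_probes` floor matters in practice: a handful of failed probes can be transient network noise, so a real observatory would also need to separate misconfiguration from deliberate interference, as the U-M clarification about single-network observations illustrates.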

The study also makes public technical details about the workings of Censored Planet that Raman says will make it easier for other researchers to draw insights from the project's data, and help activists make more informed decisions about where to focus.

"It's very important for people who work on circumvention to know exactly what's being censored on which network and what method is being used," Ensafi said. "That's data that Censored Planet can provide, and tech experts can use it to devise circumventions."

Censored Planet's constant, automated monitoring is a departure from traditional approaches that rely on volunteers to collect data manually from inside countries.

Manual monitoring can be dangerous, as volunteers may face reprisals from governments. Its limited scope also means that efforts are often focused on countries already known for censorship, enabling nations that are perceived as freer to fly under the radar.

While censorship efforts generally start small, Raman says they could have big implications in a world that is increasingly dependent on the internet for essential communication needs.

"We imagine the internet as a global medium where anyone can access any resource, and it's supposed to make communication easier, especially across international borders," he said. "We find that if this continues, that won't be true anymore. We fear this could lead to a future where every country has a completely different view of the internet."

See the original post here:

Even the world's freest countries aren't safe from internet censorship - Help Net Security

‘Extremely aggressive’ internet censorship spreads in the world’s democracies – University of Michigan News

The largest collection of public internet censorship data ever compiled shows that even citizens of the world's freest countries are not safe from internet censorship.

A University of Michigan team used Censored Planet, an automated censorship tracking system launched in 2018 by assistant professor of electrical engineering and computer science Roya Ensafi, to collect more than 21 billion measurements over 20 months in 221 countries. They will present the findings Nov. 10 at the 2020 ACM Conference on Computer and Communications Security.

"We hope that the continued publication of Censored Planet data will enable researchers to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference," Ensafi said. "While Censored Planet does not attribute censorship to a particular entity, we hope that the massive data we've collected can help political and legal scholars determine intent."

Ensafi's team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland, countries which the paper notes are rated as some of the freest in the world by advocacy group Freedom House. They were among nine countries where Censored Planet found significant, previously undetected censorship events between August of 2018 and April of 2020. Previously undetected events were also identified in Cameroon, Ecuador and Sudan.

While the study observed an increase in blocking activity in these countries, most of it was driven by organizations or internet service providers filtering content. The study did not observe any nationwide censorship policies such as those in China. While the United States saw a smaller uptick in blocking activity, Ensafi points out that the groundwork for such blocking has been put in place in the United States.

"When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for internet service providers to interfere with or block internet traffic," Ensafi said. "The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope."

It's already happening abroad, the study shows.

"What we see from our study is that no country is completely free," said Ram Sundara Raman, a PhD candidate in computer science and engineering at U-M and the first author on the paper. "Today, many countries start with legislation that compels internet service providers to block something that's obviously bad, like child sex abuse material. But once that blocking infrastructure is in place, governments can block any websites they choose, and it's usually a very opaque process. That's why censorship measurement is crucial, particularly continuous measurements that show trends over time."

Norway, for example, which is tied with Finland and Sweden as the world's freest country according to Freedom House, passed a series of laws requiring internet service providers to block some gambling and pornography content, beginning in early 2018. But in Norway, Censored Planet's measurements also identified network inconsistencies across a broader range of content, including human rights websites like Human Rights Watch and online dating sites like match.com.

Similar tactics show up in other countries, often in the wake of large political events, social unrest or new laws. While Censored Planet can detect increases in censorship, it cannot identify any direct connection to political events. It's also important to note that it's not always government-demanded network censorship that leads to websites being unreachable, though.

Some news websites were blocked in a few networks in Japan during the G20 Summit in June of 2019. News, human rights, and government websites saw a censorship spike in certain networks in Poland while a series of protests occurred in July of 2019, and social media websites were blocked in Sri Lanka after a series of bomb blasts in the country in January 2019. Some online dating websites were blocked in India after the country repealed laws against gay sex in September of 2018.

Roya Ensafi. Image credit: Joseph Xu, Michigan Engineering

The researchers say the findings show the effectiveness of Censored Planet's approach, which turns public internet servers across the globe into automated sentries that can monitor and report when access to websites is being blocked. Running continuously, it takes billions of automated measurements and then uses a series of tools and filters to analyze the data, removing noise and teasing out trends.

The paper also makes public technical details about the workings of Censored Planet that Sundara Raman says will make it easier for other researchers to draw insights from the project's data. It will also help activists make more informed decisions about where to focus their efforts.

"It's very important for people who work on circumvention to know exactly what's being censored on which network and what method is being used," Ensafi said. "That's data that Censored Planet can provide, and tech experts can use it to devise circumventions for censorship efforts."

Censored Planet's constant, automated monitoring is a departure from traditional approaches that rely on volunteers to collect data manually from inside the countries being monitored. Manual monitoring can be dangerous for volunteers, who may face reprisals from governments. The limited scope of these approaches also means that efforts are often focused on countries already known for censorship, enabling nations that are perceived as freer to fly under the radar. While censorship efforts generally start small, Sundara Raman says they could have big implications in a world that is increasingly dependent on the internet for essential communication needs.

"We imagine the internet as a global medium where anyone can access any resource, and it's supposed to make communication easier, especially across international borders," he said. "We find that if this upward trend of increasing censorship continues, that won't be true anymore. We fear this could lead to a future where every country has a completely different view of the internet."

The paper is titled "Censored Planet: An Internet-wide, Longitudinal Censorship Observatory." The research team also included former U-M computer science and engineering student Prerana Shenoy, and Katharina Kohls, an assistant professor at Radboud University in Nijmegen, Netherlands. The research was supported in part by the U.S. National Science Foundation, Award CNS-1755841.

Clarifications: This story has been updated to include additional nuance about the research. The names of the Wall Street Journal and Washington Post websites were removed from the subhead and the body of the story because the instance of blocking was only observed in one network and may be a case of misconfiguration rather than censorship.


Read more here:

'Extremely aggressive' internet censorship spreads in the world's democracies - University of Michigan News

ICANN Can Stand Against Censorship (And Avoid Another .ORG Debacle) by Keeping Content Regulation and Other Dangerous Policies Out of Its Registry…

The Internet's domain name system is not the place to police speech. ICANN, the organization that regulates that system, is legally bound not to act as the Internet's speech police, but its legal commitments are riddled with exceptions, and aspiring censors have already used those exceptions in harmful ways. This was one factor that made the failed takeover of the .ORG registry such a dangerous situation. But now, ICANN has an opportunity to curb this abuse and recommit to its narrow mission of keeping the DNS running, by placing firm limits on so-called "voluntary public interest commitments" (PICs, recently renamed Registry Voluntary Commitments, or RVCs).

For many years, ICANN and the domain name registries it oversees have given mixed messages about their commitments to free speech and to staying within their mission. ICANN's bylaws declare that ICANN "shall not regulate (i.e., impose rules and restrictions on) services that use the Internet's unique identifiers or the content that such services carry or provide." ICANN's mission, according to its bylaws, is to "ensure the stable and secure operation of the Internet's unique identifier systems." And ICANN, by its own commitment, "shall not act outside its Mission."

But, and there is always a but, the bylaws go on to say that ICANN's agreements with registries (the managing entities of each top-level domain like .com, .org, and .horse) and registrars (the companies you pay to register a domain name for your website) automatically fall within ICANN's legal authority, and are immune from challenge, if they were in place in 2016 or if they do not vary materially from the 2016 versions.

Therein lies the mischief. Since 2013, registries have been allowed to make any commitments they like and write them into their contracts with ICANN. Once they're written into the contract, they become enforceable by ICANN. These voluntary public interest commitments have included many promises made to powerful business interests that work against the rights of domain name users. For example, one registry operator puts the interests of major brands over those of its actual customers by allowing trademark holders to stop anyone else from registering domains that contain common words they claim as brands.

Further, at least one registry has granted itself "sole discretion," at any time and without limitation, "to deny, suspend, cancel, or transfer any registration or transaction, or place any domain name(s) on registry lock, hold, or similar status" for vague and undefined reasons, without notice to the registrant and without any opportunity to respond. This rule applies across potentially millions of domain names. How can anyone feel secure that the domain name they use for their website or app won't suddenly be shut down? With such arbitrary policies in place, why would anyone trust the domain name system with their valued speech, expression, education, research, and commerce?

Voluntary PICs even played a role in the failed takeover of the .ORG registry earlier this year by the private equity firm Ethos Capital, which is run by former ICANN insiders. When EFF and thousands of other organizations sounded the alarm over private investors' bid for control over the speech of nonprofit organizations, Ethos Capital proposed to write PICs that, according to them, would prevent censorship. Of course, because the clauses Ethos proposed to add to its contract were written by the firm alone, without any meaningful community input, they had more holes than Swiss cheese. If the sale had succeeded, ICANN would have been bound to enforce Ethos's weak and self-serving version of anti-censorship.

The issue of PICs is now up for review by an ICANN working group known as Subsequent Procedures. Last month, the ICANN Board wrote an open letter to that group expressing concern about PICs that might entangle ICANN in issues that fall outside of ICANN's technical mission. It bears repeating that the one thing explicitly called out in ICANN's bylaws as being outside of ICANN's mission is to regulate Internet services or "the content that such services carry or provide." The Board asked the working group [pdf] for guidance on how to utilize PICs and RVCs "without the need for ICANN to assess and pass judgment on content."

EFF supports this request, and so do many other organizations and stakeholders who don't want to see ICANN become another content moderation battleground. There's a simple, three-part solution that the Subsequent Procedures working group can propose:

In short, while registries can run their businesses as they see fit, ICANN's contracts and enforcement systems should have no role in content regulation, or any other rules and policies beyond the ones the ICANN Community has made together.

A guardrail on the PIC/RVC process will keep ICANN true to its promise not to regulate Internet services and content. It will help avoid another situation like the failed .ORG takeover by sending a message that censorship-for-profit is against ICANN's principles. It will also help registry operators resist calls for censorship by governments (for example, calls to suppress truthful information about the importation of prescription medicines). This will preserve Internet users' trust in the domain name system.

Follow this link:

ICANN Can Stand Against Censorship (And Avoid Another .ORG Debacle) by Keeping Content Regulation and Other Dangerous Policies Out of Its Registry...

Google, Facebook and Twitter threaten to leave Pakistan over censorship law – TechCrunch

Global internet companies Facebook, Google, Twitter and others have banded together and threatened to leave Pakistan after the South Asian nation granted blanket powers to local regulators to censor digital content.

Earlier this week, Pakistan Prime Minister Imran Khan granted the Pakistan Telecommunication Authority the power to remove and block digital content that poses harm, "intimidates or excites disaffection" toward the government, or in other ways hurts the "integrity, security, and defence of Pakistan."

Through a group called the Asia Internet Coalition (AIC), the tech firms said that they were alarmed by the scope of Pakistan's new law targeting internet firms. In addition to Facebook, Google and Twitter, AIC represents Apple, Amazon, LinkedIn, SAP, Expedia Group, Yahoo, Airbnb, Grab, Rakuten, Booking.com, Line and Cloudflare.

If the message sounds familiar, it's because this is not the first time these tech giants have publicly expressed their concerns over the new law, which was proposed by Khan's ministry in February this year.

After the Pakistani government made the proposal earlier this year, the group had threatened to leave, a move that made the nation retreat and promise an extensive and broad-based consultation process with civil society and tech companies.

That consultation never happened, AIC said in a statement on Thursday, reiterating that its members will be "unable to operate" in the country with this law in place.

"The draconian data localization requirements will damage the ability of people to access a free and open internet and shut Pakistan's digital economy off from the rest of the world. It's chilling to see the PTA's powers expanded, allowing them to force social media companies to violate established human rights norms on privacy and freedom of expression," the group said in a statement.

"The Rules would make it extremely difficult for AIC Members to make their services available to Pakistani users and businesses. If Pakistan wants to be an attractive destination for technology investment and realise its goal of digital transformation, we urge the Government to work with industry on practical, clear rules that protect the benefits of the internet and keep people safe from harm."

Under the new law, tech companies that fail to remove or block the unlawful content from their platforms within 24 hours of notice from Pakistani authorities also face a fine of up to $3.14 million. And like its neighboring nation India, which has also proposed a similar regulation with little to no backlash, Pakistan now also requires these companies to have local offices in the country.

The new rules come as Pakistan has cracked down in recent months on what it deems to be inappropriate content on the internet. Earlier this year, it banned the popular mobile game PUBG Mobile, and last month it temporarily blocked TikTok.

Countries like Pakistan and India contribute little to the bottom line for tech companies. But India, which has proposed several protectionist laws in recent years, has largely escaped any major protest from global tech companies because of its size. Pakistan has about 75 million internet users.

By contrast, India is the biggest market for Google and Facebook by users. "Silicon Valley companies love to come to India because it's an MAU (monthly active users) farm," Kunal Shah, a veteran entrepreneur, said at a conference in 2018.

Read more here:

Google, Facebook and Twitter threaten to leave Pakistan over censorship law - TechCrunch