The problems with Elon Musk's plan to open-source the Twitter algorithm – MIT Technology Review

For example, Melanie Dawes, chief executive of Ofcom, which regulates social media in the UK, has said that social media platforms will have to explain how their code works. And the European Union's recently passed Digital Services Act, agreed on April 23, will likewise compel platforms to offer more transparency. In the US, Democratic senators introduced proposals for an Algorithmic Accountability Act in February 2022. Their goal is to bring new transparency and oversight to the algorithms that govern our timelines and news feeds, and much else besides.

Allowing Twitter's algorithm to be visible to others, and adaptable by competitors, theoretically means someone could just copy the source code and release a rebranded version. Large parts of the internet already run on open-source software; the most famous example is OpenSSL, a security toolkit used across much of the web, which in 2014 suffered a major security flaw (the Heartbleed vulnerability).

There are already examples of open-source social networks. Mastodon, a microblogging platform set up in response to concerns about Twitter's dominant position, lets anyone inspect its code, which is posted on the software repository GitHub.

But seeing the code behind an algorithm doesn't necessarily tell you how it works, and it certainly doesn't give the average person much insight into the business structures and processes that go into its creation.

"It's a bit like trying to understand ancient creatures with genetic material alone," says Jonathan Gray, a senior lecturer in critical infrastructure studies at King's College London. "It tells us more than nothing, but it would be a stretch to say we know about how they live."

There's also not one single algorithm that controls Twitter. "Some of them will determine what people see on their timelines in terms of trends, or content, or suggested follows," says Catherine Flick, who researches computing and social responsibility at De Montfort University in the UK. The algorithms people will primarily be interested in are the ones controlling what content appears in users' timelines, but even that won't be hugely useful without the training data.
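
To see why the code alone says so little, consider a minimal, purely hypothetical sketch of a timeline ranker in Python. None of the names or numbers below come from Twitter; the point is that the ranking function could be published in full while everything users actually experience depends on weights learned from training data a company is unlikely to release.

    # Hypothetical sketch, not Twitter's actual code: a timeline ranker whose
    # published logic reveals little without the learned weights behind it.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Tweet:
        text: str
        features: List[float]  # e.g. recency, author affinity, predicted engagement

    def rank_timeline(tweets: List[Tweet], weights: List[float]) -> List[Tweet]:
        """Score each tweet as a weighted sum of its features, highest first."""
        def score(t: Tweet) -> float:
            return sum(w * f for w, f in zip(weights, t.features))
        return sorted(tweets, key=score, reverse=True)

    # The function above is the "algorithm", but what users see is determined by
    # `weights`, which in a real system come from models trained on data that
    # would not be part of an open-source release.
    weights = [0.2, 0.5, 0.3]  # placeholder values, not learned from anything
    example = [Tweet("a", [0.9, 0.1, 0.3]), Tweet("b", [0.2, 0.8, 0.5])]
    print([t.text for t in rank_timeline(example, weights)])

Publishing something like rank_timeline shows the shape of the computation; it says nothing about how the weights were produced or what data shaped them.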

"Most of the time when people talk about algorithmic accountability these days, we recognize that the algorithms themselves aren't necessarily what we want to see; what we really want is information about how they were developed," says Jennifer Cobbe, a postdoctoral research associate at the University of Cambridge. That's in large part because of concerns that AI algorithms can perpetuate the human biases in the data used to train them. Who develops algorithms, and what data they use, can make a meaningful difference to the results they spit out.
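
Cobbe's point about training data can be made concrete with a small, purely illustrative example. It uses scikit-learn and invented numbers, and has nothing to do with Twitter's systems: identical model code, fitted to two different datasets, produces opposite decisions.

    # Illustrative only: the same model code gives different results depending
    # on the data it was trained on, so reading the code is not enough.
    from sklearn.linear_model import LogisticRegression

    # Toy task: decide whether to promote a post, using one made-up feature
    # (the author's follower count, in thousands).
    X = [[1], [5], [50], [100]]
    labels_a = [1, 1, 0, 0]  # one hypothetical labelling of the same posts
    labels_b = [0, 0, 1, 1]  # a different, equally hypothetical labelling

    model_a = LogisticRegression().fit(X, labels_a)
    model_b = LogisticRegression().fit(X, labels_b)

    # Identical code, opposite behaviour for the same new post.
    print(model_a.predict([[10]]), model_b.predict([[10]]))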

For Cobbe, the risks outweigh the potential benefits. The computer code doesn't give us any insight into how algorithms were trained or tested, what factors or considerations went into them, or what sorts of things were prioritized in the process, so open-sourcing it may not make a meaningful difference to transparency at Twitter. Meanwhile, it could introduce some significant security risks.

Companies often publish impact assessments that probe and test their data protection systems to highlight weaknesses and flaws. When weaknesses are discovered, they get fixed, but the underlying data is often redacted to prevent security risks. Open-sourcing Twitter's algorithms would make the entire code base of the website accessible to all, potentially allowing bad actors to pore over the software and find vulnerabilities to exploit.

"I don't believe for a moment that Elon Musk is looking at open-sourcing all the infrastructure and security side of Twitter," says Eerke Boiten, a professor of cybersecurity at De Montfort University.
