Let the caged bird fly? — Elon wants to open Twitter’s algorithm, here’s why it’s a bad idea

Gianluca Mauro
3 min read · Apr 26, 2022


So…Elon is actually buying Twitter in the name of “free speech”, and Twitter’s algorithm seems to be suspect #1 in the platform’s alleged censorship problem.

Elon’s solution is to make the algorithm open source. I think this idea is not well thought out and could turn into a disaster. Here are three reasons why.

1. AI = software + data.

AI algorithms are pieces of software that learn from the data they’re fed with. This means that their behavior depends on both the software and the data, and opening just one doesn’t ensure the system can be properly scrutinized.

Let’s take an extreme example: you open up the code of an algorithm that spots violent political tweets, and people conclude it’s built in a fair way. But if the data you fed it contains only tweets from right-wing parties, the outcome will still be biased.
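A toy sketch of the problem (the data, labels, and model choice below are entirely made up for illustration):

```python
# Toy illustration with made-up data: the *same* "algorithm" (code),
# trained on two different datasets, can behave very differently.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_violence_classifier(tweets, labels):
    """Identical code every time; only the training data changes."""
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(tweets, labels)
    return model

# Dataset A: violent examples labeled from both political sides.
tweets_a = ["burn their offices down", "they should be attacked",
            "great policy speech today", "looking forward to the debate"]
labels_a = [1, 1, 0, 0]

# Dataset B: only one side's tweets were ever labeled as violent.
tweets_b = ["party X must be crushed", "party X deserves violence",
            "party Y rally was peaceful", "party Y announced a new policy"]
labels_b = [1, 1, 0, 0]

model_a = train_violence_classifier(tweets_a, labels_a)
model_b = train_violence_classifier(tweets_b, labels_b)

# The two models can disagree on the very same tweet; reading
# train_violence_classifier() alone tells you nothing about which
# behavior the deployed system actually has.
print(model_a.predict(["party Y must be crushed"]))
print(model_b.predict(["party Y must be crushed"]))
```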

To actually open Twitter’s algorithmic system (again, not “just” an algorithm) to proper scrutiny, Elon has two options:

  1. Release Twitter’s training data (and data pre-processing pipeline). I don’t need to explain why this can’t be done (hint: privacy).
  2. Release pre-trained models. This would be absolutely destructive, and the reasons are in point #3.

2. There’s no single “Twitter algorithm”

Twitter doesn’t have “an algorithm”. It has a system of algorithms.

Whenever a tweet is posted, it goes through a series of algorithmic checkpoints: presumably a pornography classifier, a violence classifier, and, somewhere down the line, a recommender system.
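To give a sense of what that might look like, here’s a purely illustrative sketch (the stage names, thresholds, and scores are my invention, not Twitter’s actual pipeline):

```python
# Hypothetical sketch of a moderation + ranking pipeline: several
# models chained together, not a single "algorithm".
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    text: str

# Stand-ins for separately trained models (names and thresholds invented).
def nsfw_score(tweet: Tweet) -> float:
    return random.random()

def violence_score(tweet: Tweet) -> float:
    return random.random()

def ranking_score(tweet: Tweet, user_id: str) -> float:
    # The recommender only ever sees tweets that survived the filters,
    # so its behavior is entangled with every upstream model.
    return random.random()

def process(tweet: Tweet, user_id: str) -> Optional[float]:
    """One tweet, several algorithmic checkpoints."""
    if nsfw_score(tweet) > 0.9:
        return None  # blocked before the recommender ever sees it
    if violence_score(tweet) > 0.8:
        return None  # blocked by a different model, trained on different data
    return ranking_score(tweet, user_id)

print(process(Tweet("hello world"), user_id="alice"))
```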

What does Elon want to open? All these systems are interdependent to some degree, so meaningful transparency would mean opening the whole thing. Yet, do we really want to explain to the world how Twitter blocks child pornography? More on why that would be insanity in point #3.

3. Obscurity is (part of) security.

If people can understand how the algorithmic system behaves, they can also use that knowledge to game it. This is particularly dangerous if a trained model is released.

Assume I really want to post some violent content on Twitter. With a closed algorithm, I have to keep trying (and getting banned) until something slips through. If Twitter releases a trained model, I can simulate 1,000,000 attempts on my own machine and post only the ones the system fails to catch.
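Here’s a rough sketch of that offline attack, assuming a trained text-moderation model had been published (the model, scoring function, and threshold below are all made up for illustration):

```python
# Sketch of offline evasion against a hypothetically released model.
import itertools

def load_released_model():
    """Stand-in for a published checkpoint: returns P(violation) for a text."""
    def score(text: str) -> float:
        return 0.95 if "attack" in text.lower() else 0.10
    return score

score = load_released_model()

# An attacker generates many paraphrases of the same message...
template = "we should {verb} them at the {place}"
verbs = ["attack", "confront", "descend on", "pay a visit to"]
places = ["rally", "march", "office"]
candidates = [template.format(verb=v, place=p)
              for v, p in itertools.product(verbs, places)]

# ...scores them all locally (no bans, no rate limits, no audit trail)
# and keeps only the ones the released model fails to flag.
evading = [c for c in candidates if score(c) < 0.5]
print(f"{len(evading)} of {len(candidates)} variants slip past the filter offline")
```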

The argument for open source software is that people who spot bugs can patch them, but again, AI is not just software: spotting and patching holes is not that straightforward.

So what?

I’m an engineer, so I studied theories and techniques that have been researched and applied for hundreds, if not thousands, of years. Social media created the opportunity/problem of connecting all of humanity in the last…15 years? All these problems are extremely new, and we’re trying to solve them with tools (AI) that are…10 years old, maybe? (Numbers may vary based on what you consider “social media” and “modern AI”, but you get my point.)

I don’t think humanity has even started to understand the consequences of social media and AI. Thinking that we can fix them by publishing some code on GitHub seems very naive. These are complex problems with no silver bullet.

So while I think Elon’s idea is absurd, I wish him the best of luck and will watch with trepidation as one of the most interesting chapters in tech unfolds before our eyes. It may not be the right thing to do, but there will surely be a lot to learn.

p.s.: I don’t like to post complaints without proposing solutions, but I don’t think I can do both in a single post. I’ll write a post on how I think these issues should be approached. Follow me to read that once it’s out.


Gianluca Mauro

Founder of AI Academy and author of Zero to AI. On a mission to empower organizations and people to prosper in the AI era.