The tech / security legacy of January 6

I wanted to offer a slightly different perspective and contextualise what happened on January 6 in terms of tech and security, because my background is in security and social movements, and this event can be deconstructed in these terms.

So, to keep things as simple as possible and look at the event from a different perspective, I will start with a basic summary of what happened on January 6, 2021:

A radicalised group of people, mobilised by the belief that President Trump had defeated Biden in the presidential election, stormed the Capitol and used violence in the hope of stopping the formal counting of the Electoral College votes.

BUT, this did not come about on a whim.

A radicalised group of people that was, and still is, willing to use violence was mobilised over a long period of time. In this case, the radicalisation, the mobilisation, and the willingness to use violence to prevent the certification of the Electoral College votes all developed publicly and predominantly online.

What we already know: 

  • Data harvesting and micro-targeting have given hate speech and calls to action wings.
  • Social media algorithms rank outrage-provoking posts higher because they weight certain forms of engagement more heavily than others.
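The second point can be illustrated with a minimal sketch. This is not Facebook's actual ranking formula; the engagement types and weights below are hypothetical, chosen only to show how weighting comments, reshares and angry reactions above plain likes pushes outrage-bait to the top of a feed.

```python
# Illustrative sketch of engagement-weighted ranking.
# Weights and engagement categories are hypothetical, not any platform's real values.

def engagement_score(post, weights):
    """Weighted sum of a post's engagement counts."""
    return sum(weights[kind] * count for kind, count in post["engagement"].items())

# Hypothetical weights: comments, reshares and angry reactions count far more than likes.
WEIGHTS = {"like": 1, "comment": 4, "reshare": 5, "angry": 5}

posts = [
    {"id": "measured-take", "engagement": {"like": 120, "comment": 5, "reshare": 2, "angry": 1}},
    {"id": "outrage-bait",  "engagement": {"like": 40, "comment": 60, "reshare": 30, "angry": 50}},
]

# Rank the feed by weighted score, highest first.
ranked = sorted(posts, key=lambda p: engagement_score(p, WEIGHTS), reverse=True)
print([p["id"] for p in ranked])
```

Even though the measured post collects three times as many likes, the outrage-bait post wins the ranking because the reactions it provokes carry heavier weights.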

New findings: 

  • Brookings researchers have traced how podcasts fanned the flames in the run-up to the storming of the Capitol.
  • ProPublica and The Washington Post published their analysis of 650,000 Facebook posts, which amounted to roughly 10,000 posts a day in the run-up to the storming of the Capitol.

Are social media giants at fault here?

There were certainly some decisions made for the sake of protecting profit margins that facilitated this event.

Frances Haugen has repeatedly emphasised that Facebook could be made safer through marginal changes that do not involve more content moderation: non-content-based safety strategies that work for everybody in the world, without singling out individual ideas or relying on extensive moderation.

Access to the algorithms and data, as proposed by EU platform regulation legislators, could enhance accountability and force big tech to change some of its practices. How effective these measures will be cannot be predicted at this point, but such access would be especially important for understanding how disinformation spreads, how micro-targeting works, and so on.

But the biggest lesson here is:

Radicalisation does not happen in the dark!

It is a process, and as such it takes time. In this case, it was public: the great majority of the posts and podcasts that facilitated radicalisation were available online. Paired with a call to action to stop the counting of the Electoral College votes, as it was on January 6, that radicalisation unfolded into events that are still being investigated on various levels.

There is a lot of effort going into understanding what happened, in order to predict and prevent such outcomes from occurring again.

Enter AI.

Data scientists have been saying that artificial intelligence can help forecast insurrections. What this means in simple terms is that complex machine-learning methods are being calibrated to the still poorly understood roots of political violence.
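To make the idea concrete, here is a deliberately tiny sketch of the kind of model this implies: a hand-rolled logistic regression trained on invented feature vectors (say, the rate of violent keywords and of calls to action in a stream of posts). Every feature, number and threshold here is made up for illustration; real forecasting systems use far richer data and models.

```python
import math

# Toy sketch: logistic regression as an "unrest risk" classifier.
# All features and training data below are synthetic, invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that unrest follows, given a feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic training data: [violent-keyword rate, call-to-action rate]
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]  # 1 = unrest followed, 0 = it did not

w, b = train(X, y)
print(predict(w, b, [0.85, 0.9]))  # high-signal input
print(predict(w, b, [0.10, 0.1]))  # low-signal input
```

The model outputs a probability, not a certainty, which is exactly where the policy question below begins: what a state chooses to do with a "high risk" score matters as much as the score itself.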

Will this be effective at predicting outcomes? The art of forecasting political violence can be more scientific than ever because of the multitude of data points available, so it certainly has potential, and the article in The Washington Post makes a case for the expanded use of predictive models.

BUT, prediction tools could also be used to justify crackdowns on peaceful protests, with AI serving as a fig leaf.

There are still many efforts underway to connect the dots of what actually happened. In terms of the legacy for tech and security, let’s just say… so far, it’s complicated.

Don’t want to miss new posts? Then don’t forget to like, subscribe and follow this space.

Leave a comment