Why do we need regulation for online advertising?
Algorithms can infer sensitive personal traits such as ethnicity, gender, sexual orientation and religious beliefs from our browsing behaviour. This data is highly valuable to advertisers.
BUT creating user groups based on this kind of sensitive data is illegal under the current EU data protection regulation (the GDPR).
To sidestep this ban, algorithms are instead taught to group users by apparently neutral characteristics such as ‘reader of Cosmopolitan magazine’ or ‘interested in kung fu films’, which quite often map closely onto protected personal categories such as gender, ethnicity or sexual orientation. Algorithms then segregate users into groups based on this less direct categorisation and offer or withhold different products, services or prices on the basis of affinity.
In other words, Big Tech has still been able to profile users at scale and target ads effectively, and, importantly, it has so far done so legally.
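To make this proxy mechanism concrete, here is a minimal, illustrative sketch in Python of how an affinity group built from an apparently neutral interest can end up skewed along a protected category. All users, interests and numbers below are invented for illustration and are not taken from any real platform.

```python
from collections import Counter

# Hypothetical user profiles: "interest" is what an ad platform observes;
# "gender" is never collected as a targeting criterion, but correlates with it.
users = [
    {"id": 1, "interest": "Cosmopolitan magazine", "gender": "female"},
    {"id": 2, "interest": "Cosmopolitan magazine", "gender": "female"},
    {"id": 3, "interest": "Cosmopolitan magazine", "gender": "male"},
    {"id": 4, "interest": "kung fu films", "gender": "male"},
    {"id": 5, "interest": "kung fu films", "gender": "male"},
    {"id": 6, "interest": "kung fu films", "gender": "female"},
]


def build_affinity_group(users, interest):
    """Select users by an apparently neutral interest, as an ad platform might."""
    return [u for u in users if u["interest"] == interest]


# Target an ad (say, a job offer) only at the "kung fu films" affinity group.
audience = build_affinity_group(users, "kung fu films")

# Although gender was never used as a criterion, the audience is skewed:
skew = Counter(u["gender"] for u in audience)
print(skew)  # Counter({'male': 2, 'female': 1}) -- a de facto gender filter
```

Even though the protected attribute never appears in the targeting rule, the selected audience largely reproduces it, which is exactly what makes this form of targeting so hard to police.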
According to Sandra Wachter, ‘discrimination by association’ captures how even the most innocuous groupings can have harmful consequences, and grouping people according to their assumed interests has become the baseline of online advertising. She calls this practice inferential analytics.
EU regulation has so far failed to protect disadvantaged groups from this kind of algorithmic discrimination in decision-making, but the need to do so within the framework of the Digital Services Act has crystallised, and the plans to regulate targeted ads fall into two camps:
1. Either impose a ban on micro-targeting, which would curb discrimination by association;
2. or enforce stricter transparency measures, in tandem with more power to restrict, or even temporarily ban, targeted ads from companies that repeatedly flout the new digital competition rules.
…
Don’t want to miss new posts? Then don’t forget to like, subscribe and follow this space.
