I came across an excellent (if disturbing) article today on the hidden algorithms that trap people in poverty(external link). The examples given are US-specific, but the issues raised would be relevant in many other countries. It talks about two major types of algorithms that screw people over:

  1. credit scores, and broader “worthiness scores”, which can determine access to housing, employment, etc.
  2. algorithms used by state institutions (or the private providers the state has outsourced its responsibilities to) to determine access to welfare, health care, and public services generally

One of the issues is that these algorithms are being used so that nobody can be held accountable, even when the consequences of a bad decision literally ruin lives. If a disabled person gets cut off from all aid due to a “faulty” algorithm, and suffers a health emergency or homelessness, or even dies as a result… then as far as the institution that made that callous decision is concerned, it’s pretty sad, but it’s not their fault. The algorithm did it! And the fact that humans configure algorithms and make the purposeful choice not to review their decisions? Well, you know, let’s just ignore that. It’s the kind of system you arrive at when the rich and powerful see the disadvantaged as statistics, or even mere burdens on the budget, and not as real people who deserve the same level of dignity and quality of life that the privileged themselves enjoy.

Another important point is the way that “big data”, fed into these algorithms via tech behemoths like Google or Facebook, can work against you. A lot of people wonder why they’d care about, say, the tech giants tracking their location constantly through their phones, or scanning and analysing their e-mails and messages to serve ads targeted at their demographic. They might figure the data’s all anonymised and aggregated anyway by the time it reaches advertisers, so why worry? One reason is that generally, reidentifying “anonymised” data is very easy(external link) if enough data points are provided (there’s a toy sketch of how that works below)… and you know these major companies have a lot of data points to sell. Another is that we already know this data is not fully anonymised; the original article I linked mentions:

  Credit reports aren’t new, but these days their footprint is far more expansive. Consumer reporting agencies, including credit bureaus, tenant screening companies, or check verification services, amass this information from a wide range of sources: public records, social media, web browsing, banking activity, app usage, and more. The algorithms then assign people “worthiness” scores, which figure heavily into background checks performed by lenders, employers, landlords, even schools.

If you’re relatively privileged, and have nothing in your data sets that throws up alarm bells for these algorithms, you may never notice what’s going on. But if you’re the one on the wrong end of these algorithmic decisions, it can absolutely become a self-reinforcing downward spiral that ruins your life.
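
To make the re-identification point concrete, here’s a toy sketch of a linkage attack, in the spirit of Latanya Sweeney’s famous demonstration that ZIP code, birth date, and gender alone uniquely identify the large majority of Americans. All the data below is made up and the pandas library is assumed; real attacks are the same join, just at enormous scale:

```python
# Toy sketch of a linkage attack on an "anonymised" data release.
# All data here is invented. The point: stripping names does not help
# if the release can be joined to a public record on quasi-identifiers
# (ZIP code, birth date, gender).
import pandas as pd

# "Anonymised" release: names removed, sensitive attribute kept.
release = pd.DataFrame({
    "zip":       ["02139", "02139", "90210"],
    "birthdate": ["1985-03-02", "1991-11-17", "1985-03-02"],
    "gender":    ["F", "M", "F"],
    "diagnosis": ["depression", "diabetes", "asthma"],
})

# Public record with names attached, e.g. a voter roll.
voters = pd.DataFrame({
    "name":      ["Alice Smith", "Bob Jones"],
    "zip":       ["02139", "02139"],
    "birthdate": ["1985-03-02", "1991-11-17"],
    "gender":    ["F", "M"],
})

# Join on the quasi-identifiers: names reattach to diagnoses.
reidentified = voters.merge(release, on=["zip", "birthdate", "gender"])
print(reidentified[["name", "diagnosis"]])
# -> Alice Smith | depression
#    Bob Jones   | diabetes
```

The “anonymised” release never contained a name, but the join reattaches one anyway. The real defence isn’t stripping names; it’s not hoarding (or releasing) linkable data in the first place.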

I think the sheer scale of the data collected can also be lost on some people. This Irish Times article(external link) does a great job of conveying just how much data Google collects on you. As we know, Facebook also collects masses of data on you(external link), which it has been known to use for creepy purposes like outing many of this psychiatrist’s patients to each other as “People You May Know”(external link). Many other tech giants (Amazon, Microsoft, Apple, Twitter, et al.) will also have collected and stored lots of your data, even if some seem more innocuous than others.

Having said all this, I’m not some perfect privacy activist who fully abstains from all these data-mining services. Indeed, I don’t think “just don’t use Google/Facebook/etc.!!!” is really a solution at all. Many of the alternatives cost money, making them inaccessible to the very people who would benefit most from them(external link), and being too absolutist about avoiding mainstream social media can make it hard to stay in touch with friends and family who have more pressing things to worry about. The correct response to the nightmarish potential of big data is not to blame the individuals who take up tools that purport to (and in some ways, do) make our lives better and easier. The blame needs to be laid squarely at the feet of the companies that harvest this data to monetise it, the governments that refuse to regulate online privacy, and the institutions that design and use algorithms to make totally fucked-up decisions without accepting any accountability for them. A systemic issue needs a systemic solution, not just rhetoric to make ordinary people feel bad. Of course, if you are able and willing to make a switch, do! I certainly put some effort into it. But the problem is much too big to be fixed by consumer choices alone.

Update 2020-12-10: Another relevant example of horrible algorithms in health care: one used in Massachusetts factored race into whether patients qualified for kidney transplants(external link); 64 Black patients who were denied would have qualified had the algorithm not treated race as a relevant factor. I’d read about things like this before, but didn’t have a link handy yesterday. Adding it now that I do. This shit is fucked.
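
For a sense of how a race term in a medical algorithm can flip an outcome like this: the case above concerns a specific Massachusetts programme I haven’t reproduced here, but as a general illustration, the widely used (and since-revised) CKD-EPI 2009 equation for estimated kidney function (eGFR) multiplied its result by 1.159 for Black patients, and transplant waitlisting commonly requires an eGFR of 20 or below. The patient numbers in this sketch are made up; only the equation’s coefficients are real:

```python
# Sketch of how the race coefficient in the (since-revised) CKD-EPI
# 2009 eGFR equation can flip transplant eligibility. Patient values
# below are invented; the threshold of 20 mL/min/1.73m^2 is a common
# waitlisting cutoff, not necessarily the Massachusetts policy.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: float,
                      female: bool, black: bool) -> float:
    """CKD-EPI 2009 creatinine equation (superseded in 2021,
    partly because of the race term)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

THRESHOLD = 20.0  # eGFR at or below this commonly allows waitlisting

# Identical labs and age; only the "black" flag differs.
for black in (False, True):
    egfr = egfr_ckd_epi_2009(scr_mg_dl=3.4, age=55,
                             female=False, black=black)
    print(f"black={black}: eGFR {egfr:.1f} -> "
          f"eligible: {egfr <= THRESHOLD}")
# black=False: eGFR 19.2 -> eligible: True
# black=True:  eGFR 22.3 -> eligible: False
```

Same blood work, same patient; the race multiplier alone reports the Black patient as “too healthy” to join the waitlist. The 2021 revision of the equation dropped the race term for exactly this reason.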