Thom Behrens

April 11, 2021

Privacy & Autonomy in the Age of Big Tech

This essay was originally published on the “old” thombehrens.com on 02/26/2020.

I recently listened to this episode of the Sam Harris podcast, which features Center for Humane Technology co-founder Tristan Harris (no relation), who makes a really great point about Big Tech and the reasons people have for disliking it. “Data collection” is often cited as one of the great evils of tech companies or app developers, and is often invoked with the assumption that people either 1. have reasons to be outraged that their data is being collected, or 2. understand the consequences of having their data collected. In the podcast episode, Tristan makes the point that the threat of data collection isn’t likely to be perceived as a problem by most people. Personal data collection by big tech companies may even be seen as a perk – personally, I love seeing all the aggregate data and listening habits Spotify is able to collect about me… and others may enjoy receiving ads for relevant products.

While there are relevant objections to the isolated practice of data collection – a diluted sense of privacy and legal liabilities are most commonly cited – the harm of companies like Facebook and Google collecting data can be seen most potently as part of the larger addictive and manipulative cycle employed by many online companies.

Different aspects of what is explained below may be common knowledge for many of the tech-literate, but I still think it’s worthwhile to articulate the shortcomings of internet-era tech giants within this larger strategy of control. What’s even more interesting to me is whether and how we’re obligated to respond to this system of manipulation, which is covered further down 🙂.

Big Tech’s Manipulation Strategy


Step 1: Persuasive Design

Data collection online first requires individuals to be online, interacting with websites that are able to scrape their data. As you may well know, it’s no lucky coincidence that apps like Facebook and Snapchat are wildly popular – scores of engineers, designers, and psychologists work hard to ensure that users spend as much time as possible on these platforms. “Persuasive design” refers to the set of techniques used to keep you returning to an app, and to keep you on the app once you’re there, all in service of maximizing metrics like Number of Weekly Users or Time Spent on Site. It’s important to note here that whether or not data collection & targeted advertising is your goal, persuasive design is an important part of capturing people’s attention. Whether a company makes money from ads – like YouTube – or from subscriptions – like Netflix – it relies on persuasive design features like autoplay and suggested content to keep its users from spending their time elsewhere. This concept of companies competing for slices of individuals’ time & focus has been written about as “The Attention Economy.” This open letter from the American Psychological Association focuses on the effects persuasive design has specifically on children, and gives illuminating examples of these techniques at work.
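To make that incentive concrete, here’s a minimal toy sketch of the logic behind an autoplay / “up next” feature – entirely my own illustration, not any platform’s actual code (the video list and the watch-time model behind it are invented):

```python
# A toy sketch of autoplay/"up next" logic: whenever one video ends, the next
# is chosen to maximize predicted watch time, so a session has no natural
# stopping point. Illustration only -- not any real platform's code.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # assumed output of some engagement model

def pick_up_next(candidates: list[Video]) -> Video:
    # The ranking objective is session length, not user intent or wellbeing.
    return max(candidates, key=lambda v: v.predicted_watch_minutes)

def autoplay_session(queue: list[Video], minutes_available: float) -> None:
    watched = 0.0
    while queue and watched < minutes_available:
        nxt = pick_up_next(queue)
        queue.remove(nxt)
        watched += nxt.predicted_watch_minutes
        print(f"Autoplaying: {nxt.title} (total so far: {watched:.0f} min)")

if __name__ == "__main__":
    autoplay_session(
        [Video("cat fails", 4), Video("outrage clip", 12), Video("tutorial", 7)],
        minutes_available=20,
    )
```

Nothing in that loop ever asks what the user came to do – the session only ends when the queue or the clock runs out.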

Step 2: Data Collection

With users investing hours of their day in online ecosystems, posting and liking and interacting with others, getting users to forfeit personal data about themselves is almost the easy part. But what may still be hidden from view for many is the extent to which large tech companies like Facebook are able to track not just your Facebook habits, but all your web traffic. Data forfeiture can be performed either voluntarily or involuntarily; examples of voluntary data forfeiture include your birthday, where you were born, your interests/hobbies, what your face looks like (uploaded photos), and your political/religious beliefs (extracted from reactions & interactions with shared posts) – basically, anything you knowingly input into Facebook. Involuntary data forfeiture includes some other data provided directly to Facebook, like where you live and work (inferred from the IP addresses you log in from at different times of day), but also includes data provided to Facebook by many, many other websites. This article gives a great primer on how Facebook is able to track almost all your web browsing habits, and build a personality profile based on those habits.
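As a small, entirely hypothetical illustration of how innocuous login metadata becomes location data, here’s a toy Python sketch that guesses someone’s home and work networks purely from which IP addresses show up at which hours of the day – the log format, the 9-to-5 split, and the addresses are all my own invention:

```python
# A toy sketch of "involuntary" data forfeiture: inferring where someone
# likely lives and works from nothing but the IP addresses attached to
# their logins at different times of day. Invented data and thresholds --
# not any real platform's pipeline.

from collections import Counter

# (hour_of_day, ip_address) pairs, as they might appear in a login log
logins = [
    (10, "203.0.113.7"), (12, "203.0.113.7"), (15, "203.0.113.7"),    # daytime
    (21, "198.51.100.4"), (23, "198.51.100.4"), (7, "198.51.100.4"),  # off-hours
]

def most_common_ip(entries):
    return Counter(ip for _, ip in entries).most_common(1)[0][0]

work_hours = [(h, ip) for h, ip in logins if 9 <= h <= 17]
home_hours = [(h, ip) for h, ip in logins if h < 9 or h > 17]

print("likely work network:", most_common_ip(work_hours))
print("likely home network:", most_common_ip(home_hours))
```

Each inferred IP can then be run through an ordinary IP-geolocation database – and no one ever had to type in their address.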

Step 3: Manipulative Marketing & Misinformation

With a highly addicted user base and with comprehensive knowledge of their likes, dislikes, hopes, and fears, it’s no wonder that online advertisement is so effective, and that ad sales have turned tech firms into some of the most successful companies in history. And while it may be true that certain targeted advertisements can seem to act as a service – it’s always annoying to get completely irrelevant ads – targeted ads even at their most innocuous still persuade you to consume goods you probably never would’ve wanted otherwise. Worse, targeted ads can be used to make you feel insecure, lonely, or unfulfilled – not with blatant messaging, but by playing on the emotional weaknesses you have, and which advertisers have access to. Additionally, a platform like Facebook itself might (and does) exploit your psychological profile to show you posts that work to addict you even further; if posts that make you outraged are what keep you engaged on Facebook, then Facebook will work to show you posts that spark outrage. On an individual level, this manipulative marketing and advertising might convince you to buy expensive and useless things you don’t need, convince you that you don’t have anything in common with your friends, or change your outlook on political, social, and environmental issues. At a societal level, disinformation campaigns can be used to bolster or attack presidential candidates, fabricate the popularity of brands and events, or squash important news stories.
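Here’s a deliberately caricatured sketch of that dynamic – a toy model of my own, not Facebook’s actual ranking system – showing how a feed that sorts purely on predicted engagement ends up surfacing outrage without anyone explicitly programming it to:

```python
# A caricature, not a real ranker: if the only objective a feed optimizes is
# predicted engagement, and outrage reliably drives engagement, outrage
# floats to the top without anyone asking for it. Scores are invented.

posts = [
    {"text": "local bake sale this weekend",  "predicted_engagement": 0.02},
    {"text": "THEY are coming for YOUR town", "predicted_engagement": 0.31},
    {"text": "photos from my hiking trip",    "predicted_engagement": 0.05},
]

# The model never sees a feature called "outrage"; it just learns that
# whatever keeps you scrolling and commenting scores highest.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(post["text"])
```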

For an informative look at how the cycle described above was implemented by Cambridge Analytica in the 2016 U.S. Presidential Election, check out the Netflix documentary The Great Hack.

How Do We Respond?


When viewed not just as its own end but in the context of this deeper and more sinister system, the true colors of “data collection” begin to shine through. I find it put into sharpest relief when I think of it as a manipulation “sandwich” – our interactions with these platforms work to control our behavior on the front end, with persuasive, addictive design techniques, as well as on the back end, through targeted advertising and algorithmic manipulation. What’s more, the behavior manipulation that occurs as part of Step 3 isn’t based on some pre-determined agenda of the platform itself… control over ad access and user impressionability is simply sold to the highest bidder.

We’re back in an election year, which is why this is all on my mind – the 2016 election interference is what brought these issues into the light for many, and questions of how 2020 will differ are at the forefront of many people’s minds, including mine. Facebook has announced that it won’t take measures to stop the spread of political disinformation through paid advertisements. There are many committees, foundations, policy proposals, and campaign talking points dedicated to breaking up and/or regulating the power and influence Big Tech has, but what seems more immediate to me is how we can work as individuals to keep social media from influencing our lives.

Once you understand the strategy of manipulation outlined in the three steps above, solutions to the problems become much clearer, and can be more easily mapped to progress against Big Tech’s impingement on our autonomy. Want to keep yourself from getting addicted to social media? Research ways to “hack your brain” so that you can break bad habits, resist persuasive design, and spend more time doing the things you want to do. Want to keep your data from being collected online? Research how and where your data is collected, and look to switch technologies – switch away from Google Chrome and Gmail, install blockers to prevent Facebook and Twitter from tracking your browsing habits, and start using a VPN. Want to keep manipulative marketing and misinformation from coloring your spending habits and your worldview? Installing ad blockers will help… but deleting social media accounts (and ditching free email services that mine your messages, like Gmail) is the only true cure here.
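For a sense of what those tracker blockers are actually doing under the hood, here’s a minimal sketch – my own toy illustration, with an invented three-entry blocklist – of the core decision a blocking extension makes before the browser sends each request:

```python
# A toy sketch of tracker blocking: before the browser makes a request,
# compare its hostname to a blocklist and drop third-party matches.
# The blocklist and helper below are illustrative, not a real extension.

from urllib.parse import urlparse

# Domains known to host tracking scripts (toy list; real blockers use
# community-maintained lists with thousands of entries).
BLOCKLIST = {"facebook.com", "facebook.net", "ads-twitter.com"}

def should_block(request_url: str, page_host: str) -> bool:
    host = urlparse(request_url).hostname or ""
    # Only block *third-party* requests: facebook.com may still load its
    # own resources when you are actually visiting facebook.com.
    is_third_party = not host.endswith(page_host)
    on_blocklist = any(host == d or host.endswith("." + d) for d in BLOCKLIST)
    return is_third_party and on_blocklist

# A news site quietly loading Facebook's tracking script:
print(should_block("https://connect.facebook.net/en_US/fbevents.js",
                   page_host="example-news.com"))  # True -> request dropped
```

Real blockers work from community-maintained lists like EasyPrivacy rather than a hand-written set, but the decision is the same: drop third-party requests to known tracking domains before they ever leave your machine.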

Literature and methodology exist out there (mostly online) for learning how to take back control over your online life – I shared in my last post that Twitter has become a huge black hole for my time & attention, both when I’m purposefully online and when I’m trying instead to focus on what’s going on in the “real” world around me. I am striving to take back control of my privacy – and, by extension, my autonomy – from the online products and platforms that dominate my life, and as part of that striving, I’d like to examine the principle of Privacy through three different lenses. Privacy, in this context, should be understood not just as the “data collection” element of the three-step process described above, but as the anchor of the entire process – in the age of digital advertisement and misinformation, privacy and personal autonomy have become inextricably linked.

Privacy: Privilege, Right, Responsibility


I was first struck by the concept of Privacy As A Privilege when thinking about all the strategies for protecting your data listed above: getting a VPN, avoiding “free” software services which sell your data, etc. One extreme example is smartphone choice: most of the world’s phones come pre-loaded with an operating system provided by Google, along with Google’s apps. Google uses all of its “free” tech offerings to incessantly mine data about you, which it can then sell to advertisers, feeding the cycle described above. The easiest way to escape the Google OS is to upgrade to an iPhone… which can cost 2-3x what an Android does. This casts the iPhone as a luxury item not only with respect to the cost of the device, but with respect to the cost of the privacy the device provides. Beyond this, many products (such as Spotify) give users a choice: either you pay a monthly fee, or you listen to targeted ads between songs. VPN services also charge monthly fees, and understanding how your data travels takes time and research. Seen through this lens, the privilege of data privacy falls to those with pockets deep enough to pay for it. Taken in combination with Privacy As A Right as described below, data privacy joins other monetized commodities like healthcare, education, and housing – rights promised only as the result of “opportunity”, and subject to the vicissitudes of the free market.

A bit less oblique is Privacy As A Right, maybe the most common way privacy is framed in public discourse. “Liberty” is certainly a core value among all strains of classical liberalism, and inasmuch as autonomy is tied to freedom, and inasmuch as privacy is tied to autonomy, the preservation of freedom and the preservation of privacy should be viewed as one and the same. And even without stipulating the connection between liberty and privacy, there are certainly advocates for treating data as “property”, who would claim that protecting data privacy is fundamental to protecting man’s right to property. Either way, privacy expressed as a right – as an inalienable and fundamental part of who someone is – gives an interesting hue to our data as it exists on the web, casting it as a resource worth protecting.

Privacy As A Responsibility starts as the answer to the conclusion above – if our online data should indeed be seen as part of our “self”, then it is our duty to ourselves to keep that data out of the hands of those who would manipulate it, sell it, or use it to manipulate us. But the other responsibility we have here is to others – if the outcome of sacrificing our data to Facebook is that Facebook gets a say in who we vote for as President, then we clearly have a responsibility to the other members of our democracy not to let that influence contradict the common good. How far does this go? Is it a conscientious citizen’s duty to get rid of their Facebook account? Or to use an ad blocker? If the users of an online platform or message board can be used as a predictor for how someone may vote, then the answer may be “yes”. I know for certain that Twitter has made me more sympathetic to Democrats. And to take it one step further: regardless of how much misinformation or bile is spread on any given site, our greatest loss of power over social media comes with our initial decision to participate in its virtual world – we lend it our ear, but we also sacrifice the voice that we have in the real world. In the case of democracy: the person whose mind you are most likely to change is the person with whom you have a real-life, personal relationship, not the person you interact with online – no matter how much you post, and no matter how convincingly. The same goes for interpersonal pleasures: showing compassion, forgiveness, deep listening – these are all relational muscles that simply aren’t as effective online as they are when sharing a couch, a walk, or a few cups of coffee.




For the most part, online data about you only exists if you let it. For the most part, online manipulation only happens to you if you let it. Unfortunately, technology has developed in such a way that the default is to “opt in” to data collection and online manipulation… but the silver lining is that we have the agency to change that (to the degree we can afford it!). The web is still a great place – for storing the data you need to run your business, for purchasing things you can’t find near where you live, and for writing long blog posts. We’re extremely misinformed users of some very dangerous tools, but that doesn’t mean those tools can’t be harnessed for good.