Anonymity, privacy, transparency, integrity: Do we even know the future we want?

Reading Time: 8 minutes

In the late 2000s, research blossomed around our use of online avatars. Did our videogame icons and social media profiles represent our actual selves, our ideal selves, or something else entirely? And did they have a reciprocal impact, a “Proteus effect” that transformed self-perception? Did we change in the real world based on how our online personas were perceived and moved through digital realms?

Academically, the jury’s still out on the exact relationship we have to the versions of “us” we put online. Some research suggests that we can’t hide our “real” selves, while other studies indicate that it’s our ideal selves that manifest predictably in the online figures we use. Likewise, we’ve definitely seen some reciprocal relationships between our digital representations and real-world behaviors, if only in short-term bursts.

But the world of corporate enterprise, ever rushing ahead of peer review, has wasted no time these past few years in figuring out how to leverage our online data to manipulate the people behind it. Sometimes simply for profit. Other times for politics as well.

The latest crisis with Twitter, which is undergoing a rapid series of compromises to data security amid chaotic restructuring and disintegration of key staff teams, is just one of many moments in digital history when we’ve had to confront the impoverished nature of our relationship to data in general. How little we’ve understood about the implications and import of both privacy and transparency. How little we’ve consented to, and how much we’ve had to trust in good faith actors on the other side. And how rarely we’ve moved through systems that encourage us to think conscientiously about it all.

Could it ever go differently for us? If we had our druthers, what future would we choose for data management and dissemination? Has privacy ever been as robust a right as we sometimes presume it to be—and is there anything we could do to see a more resilient form of privacy emerge in the online world of tomorrow?

The annoyingly academic 8-ball answer is, of course, “more research required”.

But let’s review at least some of the concepts we’ll need to face up to when planning a future digital world that better serves us all.

The human problem behind data management

We leak data wherever we go. We can’t help it.

Habits persist, and shape behavioral profiles, even when we’ve scrubbed surface-level intel: our addresses, our real names, our photos, our financials, our contacts. Creatures of habit, we reveal ourselves every time we click on a link, linger while scrolling, or return time and time again to a set of preferred forums and user interactions.
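To make that concrete, here is a minimal Python sketch, with invented site names and invented visit data, of why scrubbing names and addresses isn’t enough: a simple frequency profile of where someone clicks can be matched across datasets, re-identifying the person behind an “anonymized” session.

# Minimal sketch with invented data: even after names, emails, and addresses
# are scrubbed, a frequency profile of browsing habits can re-identify a user.
from collections import Counter
import math

def habit_vector(events):
    """Turn a raw list of visited sites into a normalized frequency profile."""
    counts = Counter(events)
    total = sum(counts.values())
    return {site: n / total for site, n in counts.items()}

def cosine_similarity(a, b):
    """Compare two habit profiles; 1.0 means identical habits."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# An "anonymized" session from one dataset...
anonymous_session = habit_vector(
    ["forum_a", "forum_a", "news_b", "shop_c", "forum_a"]
)

# ...matched against identified profiles from another.
known_profiles = {
    "user_1": habit_vector(["news_b", "news_b", "shop_c", "video_d"]),
    "user_2": habit_vector(["forum_a", "forum_a", "forum_a", "news_b", "shop_c"]),
}

best_match = max(
    known_profiles,
    key=lambda user: cosine_similarity(anonymous_session, known_profiles[user]),
)
print(best_match)  # "user_2": the habits, not the name, give them away

Real trackers lean on far richer signals (dwell time, scroll patterns, device fingerprints), but the principle is the same: the pattern is the identity.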

Worse still, companies are fully aware that if they can provide some semblance of value in exchange for data acquisition, we’re more likely to tolerate the invasion of privacy. Algorithms for auto-response in our emails, AIs grouping photos for us by the faces and metadata in them, and even some forms of targeted advertising based on our search terms all walk the line between offering “ease of use” and acclimating us to companies collecting immense amounts of our user data for their own purposes.

It is very easy to look at something like Facebook’s 2012 news feed experiment, in which almost 700,000 users had their emotions manipulated to study the impact of bombarding us with positive or negative posts, and identify the ethical issues there. As Forbes writer Kashmir Hill glibly put it in a 2014 series on privacy and tech futures: “Facebook is the best human research lab ever. There’s no need to get experiment participants to sign pesky consent forms as they’ve already agreed to the site’s data use policy.”

But when confronted with the everyday choice to do something that is easy, something that reduces the friction of our interactions with one another and offers ways to streamline overwhelming facets of our lives, privacy issues become lower priorities. And that’s doubly true when “everyone else is doing it”: if this is just the way the world works now, and if these buy-in costs are required to stay close to people in our lives, what else are we to do?

Except that privacy still matters to us, too

And yet, concepts like anonymity and privacy retain a curious power in our lives. In some contexts, anonymity is praised as a necessary act of resistance: the cover under which activists can fight oppressive governments and corporations. We also routinely use services, like Wikipedia and other online encyclopedias, where we have only a vague understanding of the many as-good-as-anonymous editors who’ve curated everyday knowledge for us.

In other spaces, though, being anonymous is treated as a threat: something automatically “shady” and “suspicious”, as if the only reason a person might want to keep their personal identity separate from online life is because they have “something to hide”.

Privacy is even more mercurial: people who share personal data via online products rely on a trust system that private companies routinely breach, while parents wade into complex territory for consent-based use of the internet whenever they share photos, videos, and stories of their children online. (We’re only just starting to see pushback on this last practice, but even then, online users will often resist seeing the sharing of a “cute” video as every bit as exploitative as sharing a video where a child is having difficulty managing their emotions.)

And yet, we all understand that “doxxing” someone can put them in immediate peril of direct violence. This usually comes as a follow-up to online intimidation, but sometimes also from governments ready to imprison or assassinate an outed undercover identity.

We also have a very selective approach to the idea of respecting one another’s preferred versions of their past. “Deadnaming” a trans person, by actively using a name aligned with a gender performance from their life before coming out, is fully recognized as an unacceptable (and therefore easily weaponized) online practice. However, why stop there? Why do we retain an entitlement to talk about the past lives of a whole slew of other people who have sought to become their best selves in more recent years?

Personal versus state security

Which leads more broadly to the question of state security: an issue that raises the stakes around every single data breach. When major companies are hacked, we’re often given to understand the cost in terms of other private companies then getting a whole slew of names, numbers, and user profiles with which to assail us with unwanted products and financial scams.

But the cost runs far deeper, as the Facebook-Cambridge Analytica scandal should have taught us. Whether through an outside hack or conscious internal exploitation, our data can be used for political sabotage: to raise the heat and toxicity of a whole nation’s democratic discourse, and to shuffle us off into smaller and more blinkered information silos.

We’re not good at processing this risk, though, for two important reasons. For one, the average user has no idea how extensive digital attacks and exploitative practices truly are. (If they did, they might unplug forever!) For another, the idea that a company, lobby, super-PAC, or government with access to our data could somehow steer us into a series of political beliefs and actions is an affront to our sense of self. We’re so thoroughly convinced of our rationalist integrity, as agents of critical thought impervious to our environments and their priorities, that we ironically become even better patsies for external manipulation.

And yet, our incoherent thinking on issues of digital data management precedes us. Issue by issue, we lean towards short-term convenience, and are quicker to forgive ourselves for any transgressions these choices might yield than we are to forgive others. If we mobbed or brigaded someone online, we were just following the moment’s righteous push for greater transparency and accountability. But if someone else mobs or brigades an individual? How horrible! Why is the internet so awful? Why can’t people just leave each other alone?

What are we even looking for when we call for greater transparency, especially from prominent cultural and political figures? We generally believe that certain people and organizations have less of a right to privacy—at least, in any domain of relevance to their public actions—and behind that belief lies a vague promise of future accountability. If we only knew how awful certain people and corporations really were, we could stop them! We could make a different choice!

But has history given us much evidence for that conviction? When we’ve known for decades that major oil companies actively suppressed research illustrating the damage they were doing to our environment? When we know they’re the biggest contributors to climate change even now? Or when multiple political and entertainment industries have been rife with “open secrets” about gross misconduct on the part of top actors for years, even decades?

How often does common knowledge about a person or company’s reprehensible behavior actually lead to their downfall?

Even worse: in a herd species that loves its cults of personality, and routinely flocks to authoritarian figures, how often does revealing that a powerful entity has done terrible things lead not to its downfall, but to a widespread defense of those terrible actions?

Planning for a better digital future

We all participate in non-ideal systems, online and off. Unfortunately, that fact can make us defensive about discussing better alternatives—because to suggest that we might be inconsistent in our application of concerns about privacy and transparency cannot help but feel like a criticism of us. An attack on personal integrity. An insult to our higher reasoning.

So let’s reframe. Humans are informed by the systems we inhabit, and the content they expose us to. But that can be a point of strength as much as a disaster. After all, if we’ve been unable to manifest consistent ethical positions on anonymity, privacy, transparency, and the pursuit of greater integrity in our secular commons, that might just be because we’ve been operating in some pretty terrible systems to date. Change the systems—lay down better tracks for all to follow—and we can coax better natures from ourselves.

Part of the problem lies in relying on private, profit-driven companies to serve key civic and democratic functions in our lives. It’s simply not fair to them, let alone to us, to expect organizations with an ultimate responsibility to their shareholders to prioritize moral care for the digital ecosystems they’re running. Private enterprise can be excellent for innovation and invention, but it requires checks and balances from a thriving set of public spaces guided by an engaged citizenry. And no, that doesn’t automatically mean “government is best”—because our governments are also routinely affected by competing priorities: not shareholders, exactly, but voters in the most immediate upcoming election.

But there is an opportunity in recognizing that, if we truly want more agency in deciding the future of data privacy, and the appropriate roles for anonymity and transparency in our pursuit of genuine public integrity, we have to act. And we can.

We know that people, writ large, will always favor low-friction digital solutions. Just as we will always struggle with the difference between short- and long-term self-care (the plate of comfort food with Netflix, versus shutting off the phone before bed, getting a good night’s sleep, and keeping up with preventative healthcare). Just as we will always struggle with caving to short-term economic pressures versus showing up as a united force against global tyranny.

The trick isn’t to expect human nature to change.

The trick is to work with who we are—our dear, sweet, coherently irrational selves—and lay down smoother tracks toward a future where the lowest friction personal choices also happen to be the healthiest for us. Healthiest for the maintenance of personal well-being. Healthiest for the state of our democracies and civic life.

This will require an immense decoupling of public life from corporate monopolies, which thrive on making it as difficult as possible for us to jump ship if we object to individual business practices. In a thriving free-market economy, consumers also need to be able to transition quickly and with minimal loss if any given enterprise goes down. A “too big to fail” approach to corporations is antithetical to robust civic life. Likewise, governments need to provide more civic autonomy, which is what both mainstream right-wing and left-wing movements are ostensibly rooting for—only through very different notions of how our governments can best serve to empower their citizenry.

(Or maybe not so different, thanks to neoliberalism’s presence across traditional political parties: promoting overconfidence in private solutions to public problems for decades now.)

Prepping for the work ahead

Even then, though, there will come one great big hurdle. Once we’ve achieved these better ways of sharing, once we’ve created online worlds that better reflect a coherent ethos of public action, we’ll also need to forgive one another, at least to some extent, for all the trespasses we’ve visited upon each other in the inferior state of our digital here and now. That doesn’t mean we have to forget the wounds we’ve caused each other, in all our heavy-handed attempts to craft meaningful discourse out of flawed tools and platforms. But it does mean that we have to embrace our messy biochemical natures (authority-following, tribalist-leaning herd species that we are) and allow for different systems to bring out different facets of the people we’ve always been.

Because at the crux of all our hand-wringing over privacy and transparency, anonymity and integrity, lies a much deeper question about how to craft realms where we’re safer to explore, learn, grow, and engage in the issues that matter most to us.

Onward, then, to a world (both online and off) whose citizens all get a better say.
