Privacy, ads and confusion — Benedict Evans:

In practice, ‘showing car ads to people who read about cars’ led the adtech industry to build vast piles of semi-random personal data, aggregated, disaggregated, traded, passed around and sometimes just lost, partly because it could and partly because that appeared to be the only way to do it. After half a decade of backlash, there are now a bunch of projects trying to get to the same underlying advertiser aims - to show ads that are relevant, and get some measure of ad effectiveness - while keeping the private data private.

Apple has pursued a very clear theory that analysis and tracking is private if it happens on your device and is not private if it leaves your device or happens in the cloud. Hence, it’s built a complex system of tracking and analysis on your iPhone, but is adamant that this is private because the data stays on the device. People have seemed to accept this (so far - or perhaps they just haven’t noticed it), but acting on the same theory Apple also created a CSAM scanning system that it thought was entirely private - ‘it only happens on your device!’ - that created a huge privacy backlash, because a bunch of other people think that if your phone is scanning your photos, that isn’t ‘private’ at all. So is ‘on device’ private or not? What’s the rule? What if Apple tried the same model for ‘private’ ads in Safari? How will the public take FLoC? I don’t think we know.

On / off device is one test, but another and much broader is first party / third party: the idea it’s OK for a website to track what you do on that website but not OK for adtech companies to track you across many different websites. This is the core of the cookie question.

At this point one answer is to cut across all these questions and say that what really matters is whether you disclose whatever you’re doing and get consent. Steve Jobs liked this argument. But in practice, as we've discovered, ‘get consent’ means endless cookie pop-ups full of endless incomprehensible questions that no normal consumer should be expected to understand, and that just train people to click ‘stop bothering me’. […] Perhaps ‘consent’ is not a complete solution after all.

If you can only analyse behaviour within one site but not across many sites, or make it much harder to do that, companies that have a big site where people spend lots of time have better targeting information and make more money from advertising. If you can only track behaviour across lots of different sites if you do it ‘privately’ on the device or in the browser, then the companies that control the device or the browser have much more control over that advertising.

I think this captures the complexity of privacy in practice. “Protecting privacy” sounds good, but what exactly do we mean by “privacy,” and at what threshold do we consider it protected? Who gets to enforce and control those standards, and how?

Using the internet shouldn’t be so complicated. I want to read an article / buy a thing / watch a video without being tracked or funneled into a web of convoluted and shady data pipelines.
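
A concrete aside on that cookie question: the mechanics are mundane. Here’s a minimal sketch in Python (hypothetical domains, standard library only) of how an ad server’s cookie follows a browser across unrelated sites:

```python
# Minimal sketch of third-party cookie tracking (hypothetical domains).
# news.example and shop.example both embed an ad from ads.example.
# The cookie belongs to ads.example, so the SAME id comes back from
# both pages -- that cross-site visibility is the "third-party" part.
from http.cookies import SimpleCookie

def ad_server_response(request_cookies: str) -> tuple[str, str]:
    """Return (user_id, Set-Cookie header) as ads.example might."""
    jar = SimpleCookie(request_cookies)
    if "uid" in jar:
        uid = jar["uid"].value          # recognized a returning browser
    else:
        uid = "u-12345"                 # would be random in practice
    cookie = SimpleCookie()
    cookie["uid"] = uid
    cookie["uid"]["domain"] = "ads.example"
    # Browsers now require these attributes for cross-site cookies:
    cookie["uid"]["samesite"] = "None"
    cookie["uid"]["secure"] = True
    return uid, cookie.output(header="Set-Cookie:")

# First visit, via an ad embedded on news.example: an id is assigned.
uid1, header = ad_server_response("")
# Later visit, via an ad on shop.example: the browser sends the
# ads.example cookie again, linking both visits to one profile.
uid2, _ = ad_server_response("uid=u-12345")
assert uid1 == uid2
```

Blocking third-party cookies, or partitioning them per site, is precisely what breaks that final `assert`.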

The Advertising Industry Is Running Itself Into the Ground – Pixel Envy:

Not only are ads present on every physical surface you can imagine, they are also on every digital surface. Websites have ads, the most popular apps in the world are made by advertising companies, and many of the accounts people follow are walking billboards.

I don’t think ads = bad; I appreciate the art, craft, and work that goes into a good advertisement. But I feel cynical about the size and omnipresence of the ad industry, which is a staggering $600 billion globally. That kind of weight skews the priorities of the entire economy toward attention as a commodity, incentivizing superficiality and spam in the decision-making processes and objectives of businesses everywhere. What’s that money doing for ordinary people?

See also:

“Emily in Paris” and the Rise of Ambient TV:

“Ambient” denotes something that you don’t have to pay attention to in order to enjoy but which is still seductive enough to be compelling if you choose to do so momentarily. Like gentle New Age soundscapes, “Emily in Paris” is soothing, slow, and relatively monotonous, the dramatic moments too predetermined to really be dramatic. […] The shows are functionally screen savers, never demanding your attention; they do draw it, but only as much as a tabletop bouquet of flowers.

[…] TikTok’s For You tab serves an endless stream of short videos that algorithmically adapt to your interests, sorting the content most likely to engage you. Using it feels like having your mind read, because all you do is watch or skip, focus or ignore, a decision made too fast to be fully conscious. Individual videos or accounts matter less than categories or memes; at the moment, my feed is mostly clips of skateboarding, cooking, and carpentry, not unlike the mundanity of the Netflix shows but also accelerated into media gavage. TikTok is an app for ambience.

The Instrumentalist | Zadie Smith | The New York Review of Books. On Tár’s interesting lens on generational differences & evolving cultural semantics:

[…] if you grew up online, the negative attributes of individual humans are immediately disqualifying. The very phrase ad hominem has been rendered obsolete, almost incomprehensible. An argument that is directed against a person, rather than the position they are maintaining? Online a person is the position they’re maintaining and vice versa. Opinions are identities and identities are opinions. Unfollow!

Every generation makes new rules. Every generation comes up against the persistent ethical failures of the human animal. But though there may be no permanent transformations in our emotional lives, there can be genuine reframings and new language and laws created to name and/or penalize the ways we tend to hurt each other, and this is a service each generation can perform for the one before.

ChatGPT Is a Blurry JPEG of the Web:

When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them […] [Users have] discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.

In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression. […] The more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
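
Chiang’s averaging analogy is easy to make concrete. A toy sketch of the reconstruction he describes, filling in a lost pixel with the mean of its surviving neighbors (a made-up grayscale grid, not any real codec):

```python
# Toy version of Chiang's analogy: estimate a lost pixel by averaging
# its neighbors. Real codecs are far more sophisticated; this is only
# the "blur" intuition, not actual JPEG decoding.
grid = [
    [100, 110, 120],
    [105, None, 125],   # None marks the pixel lost in compression
    [110, 120, 130],
]

def reconstruct(grid, row, col):
    """Estimate a missing pixel as the mean of adjacent known pixels."""
    neighbors = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] is not None:
            neighbors.append(grid[r][c])
    return sum(neighbors) / len(neighbors)

print(reconstruct(grid, 1, 1))  # 115.0 -- plausible, but the original detail is gone
```

The estimate is plausible precisely because it’s an average; that smoothed-over plausibility is what Chiang means by the same move in “lexical space.”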

I originally planned on updating this website with new posts about twice a month, but that didn’t pan out. Probably for the following reasons:

  • "Originality." Self-imposed burden of needing to make everything I put online “original” or substantial
  • General perfectionism. Agonizing over whether any sentence was the best way to crystallize a particular idea
  • Friction in posting. I made this blog from scratch and add new posts here as Github commits. The workflow is a bit labor-intensive compared to tweeting.
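
For what it’s worth, some of that friction can be scripted away. A minimal sketch, assuming (hypothetically) that posts live as dated Markdown files in a posts/ folder of this site’s repo:

```python
# Rough "post via git commit" helper (assumes a hypothetical layout:
# Markdown posts in posts/, with a push publishing the site).
import subprocess
from datetime import date
from pathlib import Path

def publish(title: str, body: str) -> None:
    slug = "-".join(title.lower().split())
    path = Path("posts") / f"{date.today().isoformat()}-{slug}.md"
    path.parent.mkdir(exist_ok=True)
    path.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    subprocess.run(["git", "add", str(path)], check=True)
    subprocess.run(["git", "commit", "-m", f"Post: {title}"], check=True)
    subprocess.run(["git", "push"], check=True)

publish("Hello again", "A lower-friction way to get words online.")
```

Still a far cry from a tweet box, but it beats hand-assembling and committing each file.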

Until I learn enough backend development to build a CMS-driven blog, I’m probably not going to fully resolve the last point. But the first two are more addressable with a review of why I started this blog in the first place.

When I first made this site, I found this post by Alexey Guzey especially influential:

Consider a university professor teaching a course. Does she say anything original? Do you think she should cancel her course because somebody else discovered the things she wants to teach? Or does she have to cancel her course simply because there is a similar course at some other university?

Or consider yourself. Do you avoid having conversations with your friends when you think you have nothing original to say? Do you share things with them? Do you give advice? Do you help them understand things?

The general attitude here and in other posts about blogging / writing online encourages a more casual approach—that blogging should be for yourself, and incidentally for others:

[…] worry less about audience and the 'worth' of publishing something – blogging is an act for yourself. you can assume different identities, objectives, and selves at any moment. which one will you take?

  1. a blog doesn't ever have to be finished
  2. a blog doesn't only have to contain your own words
  3. a blog can be ephemeral, you can have several blogs, you can kill many blogs at any time, you can revive it at any time
  4. a blog doesn't have to be consistent – never let this stop you
  5. a blog doesn't have to have any expectations
  6. a blog doesn't have to be chronological - what if you arrange blogs by topic and continuously edit the post? or your setup is 1-3 posts that you persistently make?
  7. a blog doesn't have to be for anyone but yourself, even if it lives publicly – blogging for yourself tends to be the most authentic way to connect to others, anyway

I need to remember why I made a standalone site rather than just using a blogging platform or tweeting more:

  • To exist online, to have a space on the internet that would be entirely my own. Here, I can post public-but-isolated thoughts and have an internet existence away from algorithmic flurries and social comparisons.
  • To engage better with what I consume and to “give back” to the internet. I’m a digital hoarder and constant reader, obsessively documenting and tagging everything. The internet raised me through my teenage years and has had an endless impact on my personality.1

Blogging should be more about earnestness than uniqueness. That probably also goes for posting in general.


  1. I think Tumblr was so influential because, unlike Facebook, it centered self-expression and cultural pollination. ↩︎

The Dynamic Island in Apple’s iPhone 14 Pro received a ton of media attention. Here’s Apple’s description:

The Dynamic Island blurs the line between hardware and software, fluidly expanding into different shapes to clearly convey important activities like Face ID authentication. […] Without impeding content on the screen, the Dynamic Island maintains an active state to allow users easier access to controls with a simple tap-and-hold. Ongoing background activities like Maps, Music, or a timer remain visible and interactive.


The reception is positive, save for this critique by former Apple designer Ken Kocienda:

I worry that a whole bunch of designer effort is now going to be spent on making blobby wobbly pixels—but for what actual benefit? Making great software is still mostly about delivering useful and meaningful features for people.

Augmented Reality blurs hardware, software, and the world around us. I wonder if Apple is introducing this fluid motion style and “hardware blurring” as groundwork for AR interfaces that extend and elaborate on our physical surroundings (e.g., a credit card viewed in AR whose information gushes or flows outward to transform into a menu with more account details).

In the skeuomorphic early days of UI design, toolbar styles simulated real textures like leather and brushed metal, and buttons had a glossy, protruding appearance to signal that users could press them.

Supposedly, XR experiences will become the primary place where people interact with software. In that case, UI designers need to rethink their approach to visuals that have always lived in 2D.

Just as skeuomorphism accustomed users to touch interactions with 2D interfaces, I wonder how the Dynamic Island’s life-like fluidity and physics could similarly make 3D interface objects more approachable in AR or VR.

Yes, as Kocienda suggests, this could go too far. Ideally, design should get out of our way with no frills or whimsical flairs. Still, there’s a longing for digital experiences with more personality and warmth.

September 6, 2022

August reading

Some links I found interesting throughout last month:

Public life

Societies operate on infrastructures: physical, digital, and social. At the intersection of digital and social infrastructures is a set of spaces that host critical conversations about civic, political, and social issues. At present, these spaces primarily are built and governed by large media companies who maintain them to collect user data and serve advertisements. What would happen if we built digital public infrastructures, digital social spaces built with taxpayer dollars with explicit civic goals?

Subject after subject reported that they were on Tinder to find someone to love and to love them back and defined love in the most traditional of terms: something that took work, a container in which sex was sacred and where intimacy built over time. They acknowledged that their encounters on Tinder didn’t offer that, yet they went to Tinder to find it. The contradiction was confusing: They wanted sex to be meaningful but felt that Tinder removed the sacredness. They wanted bonds to be lasting but acknowledged they were easily broken.

Conversational affordances often require saying something at least a little bit intimate about yourself, so even the faintest fear of rejection on either side can prevent conversations from taking off. That’s why when psychologists want to jump-start friendship in the lab, they have participants answer a series of questions that require steadily escalating amounts of self-disclosure (you may have seen this as “The 36 Questions that Lead to Love”).


[…] There is no known cure for egocentrism; the condition appears to be congenital. The best we can do is offer our interlocutors all sorts of doorknobs––ornate French door handles, commercial-grade push bars, ADA-compliant auto-open buttons––and listen closely for any that they might give us in return. The best improvisers, like the best conversation partners, have very sharp hearing; they can echolocate a door slightly left ajar, waiting for a gentle push from the outside.

[…] Anyone can build highly specific corners of cyberspace and quickly invent digital tools, changing their own and others’ technological realities.
A technologist makes reason out of the messiness of the world, leverages their understanding to envision a different reality, and builds a pathway to make their vision happen.
[…] We are an unprecedentedly self-augmenting species, with a fundamental drive to organize, imagine, construct and exercise our will in the world. And we can measure our technological success on the basis of how much they increase our humanity. What we need is a vision for that humanity, and to enact this vision. What do we, humans, want to become?

Industry shifts

Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.

Managers at the new company have taken a hatchet to HBO’s offerings in particular, culling a wide variety of popular content to cut costs. That includes roughly 200 episodes of popular shows like Sesame Street and dozens of films and shows overall. Why? In part because the new consolidated company doesn’t want to pay residuals in a bid to make deal financials make sense.

[…] Again, this is just another example of the U.S.’ harmful obsession with megamergers, consolidation, purposeless (outside of stock fluffing) deal making, and growth for growth’s sake. All of these deals make perfect sense to the executives, lawyers, and accounting magicians exploiting them for tax breaks and various financial benefits, but that doesn’t make this whole saga any less preposterously pointless.

I think one of the issues causing all the problems we are lamenting is the infiltration of MBA-style management at studios. […] Much of the American auto industry's early century woes have been attributed to not having enough "car guys" in charge--people that cared about the products. Over time this eroded the market. I think the same thing is happening in movies right now. It's not that Jack Warner and his contemporaries didn't want to make money, they did. But they wanted to make great movies that made money. […] The payout wasn't just the movie's profit, but name recognition and interest when those young stars move on to bigger stakes pieces, and the opportunity for the actors to develop.

…in Instagram apologies, even when someone ostensibly confronts their ugliness, it’s hard to read the gesture as anything but an effort to publicly reclaim their image. But at least the Notes App Apology permitted us a semblance of sincerity, and suggested there might be a human being who typed the message—even if that human was an intern or assistant. There’s nothing sincere about a trickle-down excuse crafted to look pretty for Instagram grids, and the processed nature of Photoshopped Apologies implies the absence of the one thing all genuine apologies must possess: accountability straight from the person who committed the transgression.

  • Similarly, Greenwashing Certified™ - garden3d.net, a look into corporate sustainability’s shortcomings—namely, the trend of superficialization that tends to come with anything capitalism-oriented:

The problem is that the main incentive for pursuing corporate sustainability work is its ability to create more marketable products. Outside of sustainability efforts that are pursued based on compliance or regulation, much of this work is built around the idea of pandering to consumers' understanding of sustainability, something that’s notoriously variable.

…there are those who believe highly-targeted advertisements are a fair trade-off because they offer businesses a more accurate means of finding their customers, and the behavioural data collected from all of us is valuable only in the aggregate. That is, as I understand it, the view of analysts like Seufert, Benedict Evans, and Ben Thompson. Frequent readers will not be surprised to know I disagree with this premise. Regardless of how many user agreements we sign and privacy policies we read, we cannot know the full extent of the data economy. Personal information about us is being collected, shared, combined, and repackaged. It may only be profitable in aggregate, but it is useful with finer granularity, so it is unsurprising that it is indefinitely warehoused in detail.

The first trend is the shift towards ever more immersive mediums. Facebook, for example, started with text but exploded with the addition of photos. Instagram started with photos and expanded into video. Gaming was the first to make this progression, and is well into the 3D era. The next step is full immersion — virtual reality — and while the format has yet to penetrate the mainstream this progression in mediums is perhaps the most obvious reason to be bullish about the possibility.

The second trend is the increase in artificial intelligence. I’m using the term colloquially to refer to the overall trend of computers getting smarter and more useful, even if those smarts are a function of simple algorithms, machine learning, or, perhaps someday, something approaching general intelligence.

[…] The third trend is the change in interaction models from user-directed to computer-controlled. The first version of Facebook relied on users clicking on links to visit different profiles; the News Feed changed the interaction model to scrolling. Stories reduced that to tapping, and Reels/TikTok is about swiping. YouTube has gone further than anyone here: Autoplay simply plays the next video without any interaction required at all.

Thoughts on design and video games

  • Optimizing For Feelings, an essay from The Browser Company that makes me feel a bit validated for having pursued a liberal arts education.

You see — if software is to have soul, it must feel more like the world around it. Which is the biggest clue of all that feeling is what’s missing from today’s software. Because the value of the tools, objects, and artworks that we as humans have surrounded ourselves with for thousands of years goes so far beyond their functionality. In many ways, their primary value might often come from how they make us feel by triggering a memory, helping us carry on a tradition, stimulating our senses, or just creating a moment of peace.

Our digital products are trapped behind a hard pane of glass. We use the term “touch”, but we never really touch them. To truly Feel a digital experience and have an app reach through that glass, requires the Designer to employ many redundant techniques. Video games figured this out decades ago. What the screen takes away, you have to add back in: animation, sound, and haptics.

“Fun is just another word for learning.” In order to have fun, you must learn. I find this inspiring: app design wants users to learn, but we’ve rarely appreciated that learning could be fun. Games understand that in order to learn you must start thinking in layers. Begin with a basic skill and slowly add more, getting better one layer at a time.

In design for emergence, the designer assumes that the end-user holds relevant knowledge and gives them extensive control over the design. Rather than designing the end result, we design the user’s experience of designing their own end result. In this way we can think of design for emergence as a form of ‘meta-design.’ […] In other words, to address the long-tail problem, the tool must be flexible enough that it can be adapted to unexpected and idiosyncratic problem spaces—especially those unanticipated by the tool’s designer.

Material-based software can also have gentler learning curves, because the user only needs to learn a small set of rules about how different metaphors in the software interact with each other rather than learning deep hierarchies of menus and terminologies. In the best case, users can continue to discover new capabilities and workflows long after the initial release of the software.

Misc

Britell’s C.V. reads like the setup for a comedy flick: a Harvard-educated, world-class pianist who studied psychology and once played in a moderately successful hip-hop band, who wound up managing portfolios on Wall Street.

The dominant narrative about child welfare is that it is a benevolent system that cares for the most vulnerable. The way data is correlated and named reflects this assumption. But this process of meaning making is highly subjective and contingent. Similar to the term “artificial intelligence,” the altruistic veneer of “child welfare system” is highly effective marketing rather than a description of a concrete set of functions with a mission gone awry.

The whole public-facing system of college admissions—in which admissions decisions are based on rigorous academic standards and financial aid is supposedly provided to those who are most academically and financially deserving—is an elaborate stage play meant to flatter privileged families and the reputations of colleges themselves. The real system, hidden behind the scenery, is much closer to the mechanics of pure capitalism, driven by an industry of for-profit consultants and relentlessly focused on the institutional bottom line.

The Browser Company on “Emotional Design”:

Humor us for a moment and picture your favorite neighborhood restaurant. Ours is a corner spot in Fort Greene, Brooklyn. It has overflowing natural light, handmade textile seat cushions, a caramel wood grain throughout, and colorful ornaments dangling from the ceilings. Can you picture yours? Do you feel the warmth and spirit of the place?

A Silicon Valley optimizer might say, “Well, they don’t brew their coffee at exactly 200 degrees. And the seats look a little ratty. And the ceiling ornaments don’t serve any function.”

But we think that’s exactly the point. That these little, hand-crafted touches give our environment its humanity and spirit. In their absence, we’re left with something universal but utterly sterile — a space that may “perfectly” serve our functional needs, but leave our emotional needs in the lurch.

[…] If you try hard, you can remember a time when our tools and platforms were designed by people, for people. Operating systems were bubbly and evanescent, like nature. Apps were customizable, in every shape and size. And interfaces drew on real-life metaphors to help you understand them, integrating them effortlessly into your life.

But as our everyday software tools and media became global for the first time, the hand of the artist gave way to the whims of the algorithm. And our software became one-size-fits-all in a world full of so many different people. All our opinions, beliefs, and ideas got averaged out — producing the least common denominator: endless sequels that everyone enjoys but no one truly loves.

When our software optimizes for numbers alone — no matter the number — it appears doomed to lack a certain spirit, and a certain humanity.

[…] We wanted to optimize for feelings.

While this may seem idealistic at best or naive at worst, the truth is that we already know how to do this. The most profound craftsmanship in our world across art, design, and media has long revolved around feelings.

Capitalism forces us to measure everything against time, budget, and profit concerns. Under those constraints, it’s easy to prioritize metrics and efficiency over emotional qualities like aesthetic polish, taste, and delight, whose outcomes are harder to quantify & qualify in terms of “solutions.”

Nick Foster, head of design at Google X, wrote about how the standardized design process (i.e., Empathize, Define, Ideate, Prototype, and Test) doesn’t account for intangible qualities like elegance and craft—only solutions. Designer Benek Lisefski also spoke to this:

… It’s easy to make data-driven design decisions, but relying on data alone ignores that some goals are difficult to measure. Data is very useful for incremental, tactical changes, but only if it’s checked and balanced by our instincts and common sense.

[The commercial design process] creates more generic-looking interfaces that may perform well in numbers but fall short of appealing to our senses.

Ironically, those emotional qualities are what people register before any numbers. We feel before we think.

An experience optimized for “engagement,” clicks, attention, or profit does not guarantee an experience that is fun, nurturing, or insightful (ex. Facebook1).

The entertainment industry faces a similar imbalance. It’s remarkable how much the discourse about the metrics-driven sterilization of digital design parallels that of movies.

Martin Scorsese, in a viral critique of Marvel films:2

Cinema was about revelation — aesthetic, emotional and spiritual revelation. It was about characters — the complexity of people and their contradictory and sometimes paradoxical natures, the way they can hurt one another and love one another and suddenly come face to face with themselves. It was about confronting the unexpected on the screen and in the life it dramatized and interpreted, and enlarging the sense of what was possible in the art form.

[…] Many of the elements that define cinema as I know it are there in Marvel pictures. What’s not there is revelation, mystery or genuine emotional danger. Nothing is at risk. The pictures are made to satisfy a specific set of demands, and they are designed as variations on a finite number of themes. They are sequels in name but they are remakes in spirit, and everything in them is officially sanctioned because it can’t really be any other way. That’s the nature of modern film franchises: market-researched, audience-tested, vetted, modified, revetted and remodified until they’re ready for consumption.

[…] The most ominous change has happened stealthily and under cover of night: the gradual but steady elimination of risk. Many films today are perfect products manufactured for immediate consumption.

These negative reviews for Uncharted and Jurassic World Dominion echo the same sentiments—that execs and strategists overlook texture, originality, or anything that people actually care about:

All an “Uncharted” movie had to accomplish — all that it possibly could accomplish — was to capture the glint and derring-do that helped the series port the spirit of Indiana Jones into the modern world. […] It fails in the areas where history says it should have been able to exceed it. The areas where movies have traditionally had the upper hand over video games: Characters. Personality. Humor. Humanity! […] Fleischer’s competently anonymous direction contributes to the film’s general flavorlessness, as Nathan and Sully chase new clues to the treasure’s whereabouts (and to the location of Nathan’s missing brother) from Barcelona to the Philippines without any sense of urgency or purpose.

Whoever is behind the scenes of these movies fails to understand that you can actually make more money by making something that is "good". That making a "good" movie means people will want to watch your movie multiple times and then purchase it again later, and purchase more tickets, and purchase the streaming service with your movie, and purchase the box-set with your movie.

Blockbusters do not need to be deep or profound. But even with mega-budget funding, state-of-the-art crews, and audience research, they still fall emotionally flat. Neglecting to optimize for these less-measurable things misses out on a chance for a deeper, more human kind of “engagement.”

Likewise, here’s Braden Kowitz, design partner at Google Ventures, explaining a decision to preserve brand quality by stepping back from optimizing a checkout button for attention:

We cared about more than just clicks. We had other goals for this design: It needed to set expectations about what happens next, it needed to communicate quality, and we wanted it to build familiarity and trust in our brand. We could have easily measured how many customers clicked one button versus another, and used that data to pick an optimal button. But that approach would have ignored the big picture and other important goals.

And again, Benek Lisefski:

Not everything that can be counted counts. Not everything that counts can be counted. Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important. While you’re chasing a 2% increase in conversion rate you may be suffering a 10% decrease in brand trustworthiness. You’ve optimized for something that’s objectively measured, at the cost of goals that aren’t so easily codified.
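
Part of what makes clicks so seductive is how easy the measurement is. A minimal sketch of the kind of button comparison Kowitz describes (made-up numbers, a standard two-proportion z-test):

```python
# Toy A/B readout (made-up numbers): picking the "optimal" button is
# trivially easy to compute -- which is exactly why harder-to-measure
# goals like trust tend to lose the argument.
from math import erf, sqrt

def compare_buttons(clicks_a, views_a, clicks_b, views_b):
    pa, pb = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (pb - pa) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return pa, pb, p_value

pa, pb, p = compare_buttons(480, 10_000, 540, 10_000)
print(f"A: {pa:.1%}  B: {pb:.1%}  p = {p:.3f}")
# Button B "wins" -- but nothing in this readout measures
# expectation-setting, perceived quality, or brand trust.
```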

Yes, product designs and content need to make money, to be functional, and (in the case of movies, at least) to be entertaining. But it’s a shame to leave it at that. Why should we care? Why is it meaningful or even just nice?

“The most profound craftsmanship in our world across art, design, and media has long revolved around feelings,” but the drive toward measurability and profit in all fields has made that harder to grasp.

Optimizing for feelings spans disciplines: the warmth and spirit in a cafe, the refreshing sound & animations in a boot-up screen, the serenity in a reading app, the resonance of complicated characters in a movie… a sense of Scorsese’s revelation in any media.


  1. Regarding the misinformation issue: Facebook’s engagement optimization does appeal to junk-food-level feelings (anger, indignation, shock, surprise), but it’s not the kind of enriching emotion I’m trying to discuss here (read on). ↩︎

  2. Another example: George Lucas commenting on this re: Star Wars ↩︎

August 19, 2022

A love letter to Wikipedia

I love Wikipedia. Its abundant hyperlinks make it easy to indulge any fleeting, tangential interests, and its network of articles is so dense that you can reach any Wikipedia page from any other just by “hyperlink-hopping.” I wish there were a word for the pleasure of having 20+ tabs open while going down hyperlink-driven internet wormholes.
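
In graph terms, hyperlink-hopping is just a breadth-first search over Wikipedia’s link graph. A rough sketch against the public MediaWiki API (uses the third-party requests library; naive on purpose, since it ignores pagination, redirects, and rate limits):

```python
# Breadth-first "hyperlink-hopping" over Wikipedia article links via
# the public MediaWiki API. Deliberately naive: it fetches only the
# first batch of links per article, adds no politeness delays, and the
# fan-out grows huge fast -- a real crawl should paginate and rate-limit.
from collections import deque
import requests

API = "https://en.wikipedia.org/w/api.php"

def links_from(title: str) -> list[str]:
    params = {
        "action": "query", "format": "json", "prop": "links",
        "titles": title, "plnamespace": 0, "pllimit": "max",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return [link["title"] for link in page.get("links", [])]

def hops(start: str, goal: str, max_depth: int = 2) -> int | None:
    """Number of link-hops from one article to another, if found."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        title, depth = queue.popleft()
        if title == goal:
            return depth
        if depth < max_depth:
            for nxt in links_from(title):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return None

print(hops("Epizeuxis", "Rhetoric"))  # likely 1: the articles are adjacent
```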

When I think about ways the internet has raised me, I like to think that spending so much time on Wikipedia helped me become more curious and see the world in a more interconnected way.

This deep dive by WIRED on Wikipedia ties those attributes to the grander ideal of the internet:

Like Instagram, Twitter, and Facebook, it broadcasts user-generated content. Unlike them, it makes its product de-personified, collaborative, and for the general good. More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web. A free encyclopedia encompassing the whole of human knowledge, written almost entirely by unpaid volunteers: Can you believe that was the one that worked?

It has its problems, but Wikipedia does often feel like the last bastion of good info on the internet. You can find in-depth coverage of nearly any topic quickly and without ads, paywalls, upsells, or fluff.

That’s not necessarily the case for fan-made “Wikis,”1 which depart from Wikipedia’s strict moderation and objectivity processes. But without Wikipedia’s guardrails, fans can document their obsessions and niche knowledge more freely. The long runtimes and deep lore of TV shows and franchises generate so much character and world info that they deserve their own dedicated Wikis.

In that sense, offshoots like fan wikis continue Wikipedia’s spirit of specialized self-indulgence:

[…] Wikipedia is built on the personal interests and idiosyncrasies of its contributors; in fact, without getting gooey, you could even say it is built on love. Editors' passions can drive the site deep into inconsequential territory—exhaustive detailing of dozens of different kinds of embroidery software, lists dedicated to bespectacled baseball players, a brief but moving biographical sketch of Khanzir, the only pig in Afghanistan. No knowledge is truly useless, but at its best, Wikipedia weds this ranging interest to the kind of pertinence where Larry David's “Pretty, pretty good!” is given as an example of rhetorical epizeuxis. At these moments, it can feel like one of the few parts of the internet that is improving.

There's a necessary tension between moderation and public interactivity, and it's interesting to see how different executions of this balance on Wikipedia variants can allow for new web subcultures to thrive.


  1. Typically hosted on Fandom.com. ↩︎

August 4, 2022

Ideology & the social web

This e-flux essay on social media ideology makes a few expansive arguments about what we consider when we think about the social web:

Social networking is much more than just a dominant discourse. We need to go beyond text and images and include its software, interfaces, and networks that depend on a technical infrastructure consisting of offices and their consultants and cleaners, cables and data centers, working in close concert with the movements and habits of the connected billions. (...)

We often overlook the internet’s physical presence. The ethereal “cloud” is really a massive network of data centers, with server arrays as formidable & imposing as ancient terra-cotta armies. Software designers and product strategists obsess over ways to encourage or discourage user behavior through design.

In general, we should think more holistically about the systems we use, especially foundational ones we typically take for granted.

… Before we enter the social media sphere, everyone first fills out a profile and chooses a username and password in order to create an account. Minutes later, you’re part of the game and you start sharing, creating, playing, as if it has always been like that [...] The platforms present themselves as self-evident. They just are—facilitating our feature-rich lives.

It's easy to overlook the role that trial and happenstance have played in the processes we take for granted today. Before movies matured into a standard media format, every movie was an "experimental" movie. Today, mainstream audiences can expect a main character (or several), conflict, plot, rising action, and climax. But things didn't have to turn out this way, and they don't have to remain this way.1

When I bring that mindset to social media, I think about its valuation of things like engagement metrics and “personalization.” Like everything else, social media has evolved a set of standards and familiar design practices optimized for capitalism (profit over wellbeing), hyper-individuality, and “optics” over lived reality:

Treating social media as ideology means observing how it binds together media, culture, and identity into an ever-growing cultural performance (and related “cultural studies”) of gender, lifestyle, fashion, brands, celebrity, and news from radio, television, magazines, and the web—all of this imbricated with the entrepreneurial values of venture capital and start-up culture, with their underside of declining livelihoods and growing inequality.

Extending that holistic view to all software helps the argument that software is a form of ideology:

Software “fulfills almost every formal definition of ideology we have, from ideology as false consciousness to Louis Althusser’s definition of ideology as a ‘representation of the imaginary relation of individuals to their real conditions of existence.’”
Software, or perhaps more precisely operating systems, offer us an imaginary relationship to our hardware: they do not represent transistors but rather desktops and recycling bins. Software produces users. Without an OS there would be no access to hardware; without an OS, no actions, no practices, and thus no user. Each OS, through its advertisements, interpellates a “user”: calls it and offers it a name or image with which to identify.

The current social media product trends…

  • Assign value to actions/engagement in a way that's optimized more for advertisers & companies than for people
  • Establish templates for us to express ourselves
  • Turn "visibility" into something that's less about existing and more about self-branding

Social media ideology depends on metaphors and values we take for granted in software design. Instagram & Facebook in their current states (unfocused, forcing engagement patterns on users for metrics over product quality) seem like extreme signs that we need to change course.


  1. Today's arthouse films could have been another universe's overcommercialized Marvel movies. I wonder how today's standards could come to seem rudimentary / experimental / quaint in the future. At the time of its release in 1929, critics dismissed Man with a Movie Camera, an avant-garde documentary with no actors or "plot," even though it popularized now-common film techniques like fast & slow motion, jump cuts, split screens, and match cuts. Now, elements like its quick cuts and self-referentiality are common in contemporary films/series. ↩︎