
Blog: Recent Posts

Personal observations and private preferences that jump between the internet, design, writing, music, film, self-development, and work culture.

This is a blog post about how social networks can structurally inspire negativity by making positivity a feature. But before we get to that, I want to tell you two stories. One is about loose change, the other is about Larry David, the creator of TV shows like Seinfeld and Curb Your Enthusiasm. Let’s start with Larry.

You boo me? You hiss?

The only thing Americans love more than sports is celebrities, which makes featuring famous faces on the big screen a mainstay at live sporting events. A few years ago, a cameraperson was panning across the crowd at a Yankees game and spotted Larry David. The camera zoomed in, Larry did his usual disinclined grimace for the Jumbotron, and the crowd cheered. Wild, thunderous applause from almost 60,000 people. Larry David is so beloved in New York that we don’t mind that he abandoned the city for LA like the Dodgers.

Well, almost universally beloved. The story goes: a heckler was sitting a few rows behind David and during the applause they screamed all kinds of bone-headed disparagements at him. (It’s a Yankees game, after all—there’s always something to be upset about.)

After the cheering died down, the only thing David could talk about was the one person shouting things at him, completely ignoring the tens of thousands of people cheering. Now, I can’t confirm this story, but I am inclined to believe it, because it fits the Larry David way: something’s always wrong.

Loose change

A nickel is larger than a dime, but, until I was seven years old, I never fully accepted that a nickel is worth less than a dime. “It must be some kind of elaborate joke,” I’d think to myself. Kids on the block swindled me out of all kinds of things (money, baseball cards, candy) because of my refusal to understand that bigger isn’t always more.

I eventually caught up, but I still think nickels should be smaller than dimes. Probably half as big, right?

Small and vague positivity vs. big and specific negativity

The features of software with massive reach always have unintended consequences. For instance, social media, by making positivity easy and quantifiable, has ensured that negativity looms large. It’s become a place where we count the good things and experience the bad things.

Suppose a person comes across something quite nice on Instagram or Twitter. They could write a pleasant comment, but they are probably going to Like the post instead (or heart, thumbs up, whatever term the platform uses). This creates a slight hitch in the brain of any reader: positivity gets compressed into a little block of Likes metadata about 100 pixels wide, whether it represents one like or one thousand. Visually diminished positivity makes it hard to truly grasp the scale of a response. One must detach size from scale on social media, similar to how we detach size from value with a nickel and dime. A thousand likes doesn’t look much bigger than one, and this becomes important when considering the form of negativity on social media.
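To make the compression concrete, here is a minimal sketch (hypothetical TypeScript, not any platform’s actual implementation) of the abbreviation scheme most services apply to counts:

    // Compress a like count into a short label, the way most platforms do.
    function formatLikeCount(n: number): string {
      if (n >= 1_000_000) return (n / 1_000_000).toFixed(1) + "M";
      if (n >= 1_000) return (n / 1_000).toFixed(1) + "K";
      return String(n);
    }

    console.log(formatLikeCount(1));      // "1"
    console.log(formatLikeCount(1_000));  // "1.0K"
    console.log(formatLikeCount(58_000)); // "58.0K": still a handful of characters

One like or fifty-eight thousand: the label grows by a few characters, while a single written reply can run to hundreds.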

There is no feature for displeasure on social media, so if a person wants to express that, they must write. Complaints get wrapped in language, and language is always specific. This creates a situation similar to the Larry David stadium effect, where one heckler with incisive comments can drown out the generalized applause of many more people. Specificity overrides vagueness. The nickel-and-dime size relationship amplifies the situation: one negative reply literally takes up more visual space than tens of thousands of undifferentiated likes.

The arrangement is even worse on Twitter. Liking stays attached to the original tweet and makes most positive interactions static. Negative reactions must be written as tweets, creating more material for the machine. These negative tweets can spread through retweets and further replies. This means negativity grows in number and presence, because most positivity on the service is silent and immobilized.

A like can’t go anywhere, but a compliment can go a long way. Passive positivity isn’t enough; active positivity is needed to counterbalance whatever sort of collective conversations and attention we point at social media. Otherwise, we are left with the skewed, inaccurate, and dangerous nature of what’s been built: an environment where most positivity is small, vague, and immobile, and negativity is large, precise, and spreadable.

We could also chuck the whole thing out the window, but that’s a different blog post, I suppose.

Monkey Trap

Every once in a while you come across a fact so sumptuous it begs to be considered a metaphor. And then there are other facts so perfectly just-so that they must be apocryphal.

I have read about the monkey trap in multiple places, from Tolstoy to Zen and the Art of Motorcycle Maintenance to your run-of-the-mill self-help books. Some say it’s from South India, others assign it African origins, possibly Namibia, some don’t even bother with origins. Regardless, the details of the trap are the same: take a hollow gourd or coconut and drill a small hole in it. Size matters. The hole should be just barely big enough for a monkey to get a hand inside. Place a treat—bananas, rice, etc.—in the gourd and tie it to a tree. Then wait.

Eventually a hungry monkey will come by, stick in their hand, grab the food, and become trapped. The monkey’s open hand fits through the hole, but their fist doesn’t fit back out. They will scream and struggle, clinging tightly to their reward, until someone comes to collect them. The irony, of course, is that the monkey could have escaped at any time. All they had to do was let go.

Preconceptions can blind us to better ways of doing things. Sometimes expertise gets in the way. Buddhists push against this by seeking “beginner’s mind.” Over-devotion to the possibility of a specific reward can trap us in precarious situations. Poker players call it being “pot-committed.” All are forms of cognitive bias, but perhaps labeling it “mental rigidity” is a more immediate and helpful way to think about all of this.

Stay loose. Let go. There are other bananas.

Tweenage Computing

Last week’s Apple event came and went without surprise. The phone is a camera for text messages, work communication, and social media. This is the extent of the vision—for now?

It feels like we are in a tween era for hardware—the in-between years that set the table for an adolescence of great development. That development, I hope, is the integration of the current device sprawl set off by the success of smartphones. Here’s the current situation: each device costs about the same and does mostly the same things, yet none has enough functionality or flexibility to eliminate any of the others.

In the Apple ecosystem, I currently own:

  • Apple Watch with data, which I sometimes wear to leave my phone at home
  • iPhone X
  • iPad Pro, for writing and sofa-mode “passive computing”
  • MacBook, for mobile work, usually kept on the home bookshelf
  • iMac, for big-screen design work at the studio
  • Apple TV, for watching things on a screen even bigger than the iMac

I know. Let’s do a price check on a few of these:

  • $1000 for an iPhone Pro
  • $1000 for an iPad Pro
  • $1300 for a MacBook Pro
  • $1300 for an iMac

Which seems… logical? They’re all slightly different combinations of a few basic ingredients: a black rectangle in the size of your choice, with either keyboard-and-mouse or touch input. Beyond form factor, they all access the same things: your work, communication, and media. The more consistent that access becomes, the more arbitrary the distinctions between the devices seem. The only significant differentiator is the camera on the phone, which is why it is relentlessly updated.

With each year that goes by, it feels like less and less is happening on the device itself. And the longer our work maintains its current form (writing documents, updating spreadsheets, using web apps, responding to emails, monitoring chat, drawing rectangles), the more unnecessary high-end computing seems. Who needs multiple computers when half of one would do? Which leads to a couple of different thoughts.

First, how long until I can do straightforward design work on something like a Raspberry Pi? A pocket-sized, $50 computer is an interesting proposition.

Second, isn’t the iPhone now more powerful than the computer I had ten years ago? Why can’t I hook my phone up to a larger screen through a dock, Nintendo Switch style? Some Samsung phones have been able to do this for years with DeX, but in my opinion you need to offer a desktop-like experience, not a super-sized tablet experience. (And yes, I know that DeX can run Linux—hello Linux people.)

That seems to be what Apple is working towards with the upcoming updates to Catalina and Xcode, which will make it easier to bring iOS apps to the Mac using a more unified design language. Microsoft has been working in this arena for ages, but the goal in my mind isn’t to turn a tablet into a computer and vice versa—that only feels like you’re switching between input methods. The magic comes from turning a small device like a phone into a full computing environment. (I’ll stick to the phone. Rumors say Apple is working on AR glasses, but this violates a primary rule of computers: don’t put them on your face.)

My wish is for computing to head towards a more integrated, terminal-based approach—one ur-device that is small yet robustly powerful, and that can be boosted for heavier usage by docking it to a larger display and alternative methods of input. If extra processing is necessary for computation or graphics, additional hardware can reside in the dock, or processes can be handled remotely, using—ok, go with me here—a “streaming” concept similar to what Google is doing with its Stadia game streaming service. Doesn’t it make sense to do a trial run of this distributed computation method in a technically complex but low-stakes environment like games? And, just perhaps, couldn’t the sale of accessories for the ur-device, plus the subscription cost for extra computation power, partially offset the lost profits from the three or four other devices that’d be eclipsed in this arrangement?
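For what it’s worth, here is a rough sketch of the routing logic such an ur-device might use (all names and endpoints invented; a thought experiment, not a description of any shipping system):

    // Hypothetical: decide where a job runs based on dock state and job weight.
    type Job = { kind: "light" | "heavy"; payload: string };

    async function run(job: Job, docked: boolean): Promise<string> {
      // Light work, or no dock available: stay on the phone-class hardware.
      if (job.kind === "light" || !docked) {
        return runLocally(job.payload);
      }
      // Docked: stream the heavy job to remote hardware, Stadia-style,
      // and treat the result like any local computation.
      const response = await fetch("https://compute.example.com/jobs", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(job),
      });
      return response.text();
    }

    function runLocally(payload: string): string {
      return "processed locally: " + payload;
    }

The dock becomes the accessory, and the remote computation becomes the subscription.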

We’ve gladly adopted off-site storage for its flexibility to sync between devices. It’s time to do it with processing so we can keep one device on hand but switch how we use it based on our needs. I don’t want more devices, I want more flexibility with the ones that I have.

Leave the Phone at Home

This post was originally written for Harvard’s Nieman Lab, as part of their yearly “Predictions for Journalism” feature, looking into what may happen in 2019. You can read the original here, published January 2019.

Buying things is more fun than running, which is why I convinced myself that an Apple Watch was the perfect inspiration to get back into my trainers. It is now a few months on—I’m still not running regularly, but the watch has provided a different and unexpected benefit. I can now leave the house without my phone and still maintain a line of connection to the world through messages, email, and maps. It is freeing. I have no social media on the watch, so there are no snares in which to get stuck in idle moments. It’s a tremendous relief to be free of the drag of demented global consciousness, and I predict that many others will find this arrangement appealing.

Rather than a prediction, I’d like to offer a plausible wish: that more people opt to leave their phones behind and use smaller, more integrated devices that exist inside the everyday rather than eclipsing it. Small screens, like the watch’s, are incompatible (or at least hostile) homes for social media in its current form. As a result, media companies can begin reestablishing direct relationships with their audience by exploring what media is at home on such small devices. Headphones are the watch’s natural extension, so if I had to offer a place to start, I’d build on top of podcasting’s momentum and explore timely, short-burst audio that’s about the length of a pop song, similar in format to NPR’s hourly news updates.

My wish is a recipe: tiny screens, small snatches of time, clear endpoints, limited engagement, information density, and obvious pathways for more context. If the watch can become people’s primary device, it may provide the opportunity to switch the media paradigm from an endless stream to a concentrated dispatch.

Built-In Resistance

From Ted Hughes’ 1995 interview with the Paris Review, discussing the different yields of writing tools:

I made an interesting discovery about myself when I first worked for a film company. I had to write brief summaries of novels and plays to give the directors some idea of their film potential—a page or so of prose about each book or play and then my comment. That was where I began to write for the first time directly onto a typewriter. I was then about twenty-five. I realized instantly that when I composed directly onto the typewriter my sentences became three times as long, much longer. My subordinate clauses flowered and multiplied and ramified away down the length of the page, all much more eloquently than anything I would have written by hand.

Recently I made another similar discovery. For about thirty years I’ve been on the judging panel of the W. H. Smith children’s writing competition. Annually there are about sixty thousand entries. These are cut down to about eight hundred. Among these our panel finds seventy prizewinners. Usually the entries are a page, two pages, three pages. That’s been the norm. Just a poem or a bit of prose, a little longer. But in the early 1980s we suddenly began to get seventy- and eighty-page works. These were usually space fiction, always very inventive and always extraordinarily fluent—a definite impression of a command of words and prose, but without exception strangely boring. It was almost impossible to read them through. After two or three years, as these became more numerous, we realized that this was a new thing. So we inquired. It turned out that these were pieces that children had composed on word processors.

What’s happening is that as the actual tools for getting words onto the page become more flexible and externalized, the writer can get down almost every thought or every extension of thought. That ought to be an advantage. But in fact, in all these cases, it just extends everything slightly too much. Every sentence is too long. Everything is taken a bit too far, too attenuated. There’s always a bit too much there, and it’s too thin. Whereas when writing by hand you meet the terrible resistance of what happened your first year at it when you couldn’t write at all… when you were making attempts, pretending to form letters. These ancient feelings are there, wanting to be expressed. When you sit with your pen, every year of your life is right there, wired into the communication between your brain and your writing hand. There is a natural characteristic resistance that produces a certain kind of result analogous to your actual handwriting. As you force your expression against that built-in resistance, things become automatically more compressed, more summary and, perhaps, psychologically denser.