Back vs Up

Almost every UI confuses and frustrates users by conflating two kinda-similar but importantly different controls: Back and Up.

“Back” is the navigational sibling of Undo - take me back one step, from where I am now to where I just was. But this is insufficient in many scenarios where the user is navigating a rich hierarchical control space (e.g. a file browser, a streaming app, a music service). The user also needs to go up a level in the hierarchy. If the user had previously been drilling down into narrower categories, then Back will indeed take them Up, but often these aren’t the same, and having only one control (I’m looking at you, Apple TV) is maddening.

The best UI to handle this, I’m sorry to admit, is the Windows XP file explorer.

Look at the first three buttons in the nav: Back, Forward, and… Up! And it works as designed!

No matter how the user arrived at a location, they can click Back to return to their previous location, or they can go Up a level in the hierarchy from their current location.

These are different controls! Both are important! Stop conflating them!

Sincerely, annoyed users everywhere.

User Story Mapping

I’m featured in Chapter 3 ("Plan to Learn Faster") of Jeff Patton’s book User Story Mapping. My work as a product manager demonstrates powerful ways to build user value at the intersection of business strategy and user needs.


The Roundtrip Problem

A lot of work goes into design. We start with a product strategy created to support business goals. From this we create a design strategy, and then design concepts based on this strategy, and test them in various ways with users or user-proxies.

Based on what we learn from tests, we create design artifacts representing the direction we have the most confidence in. These artifacts may be low-fi wireframes, or high-fidelity pixel perfect prototypes. This work is, at some point, approved for development.

In even the best teams, in the most collaborative environments, I see a recurring problem that seems to be a result of our tools and technology more than team structure.

The Problem

So much work goes into design artifacts, and then developers have to start from square one building the actual UI. The carefully crafted, sweated-over design work is available at best as a guide.  The process of turning design work into a working product is 99% human-powered.

If, in the development process, the UI deviates from the design intent (often for good reason), rarely is the design work updated to match production. When a new request comes for a new feature or change to the existing product, the design artifacts and production UI are not in sync, so a variety of workarounds come into play. These vary from the more responsible (update the design files to match production) to the crude (take a screenshot and build on top of it).

The Solution?

There must be a better way to keep design in sync with the actual product, but I haven't figured out how yet. Every potential solution I find ends up trying to turn designers into developers or vice versa.

What I want is a deterministic, bi-directional isomorphism between the design representation and the production UI. I want a way to transform the real production app into design files that a design team can experiment with. I want a smart diff that shows development teams what has changed and what hasn't. I want technology assistance in connecting the design intent to the application as delivered.

Right now this connection is 99% dependent on smart humans being conscientious. I don't know yet what the solution is, but I am convinced that when it is solved, the industry will shift en masse to the new paradigm, and our status quo will seem barbaric in hindsight.

Fourball

In the mid 90s, my middle-school best friends Drew and Steven and I invented a game which we then played for years in Drew's backyard. We adjusted the rules when necessary, and as a result it's a surprisingly balanced and fun game. I introduced it to my 6- and 10-yr-olds and now they ask to play it.

All you need is three players, four chairs, a medium size bin or basket, and a ball. It can be adapted to most yards. Check out how to play at the link below.

📕 The Official Rules of Fourball

The Future Is Fast and Weird

Most of history is about people and organizations that successfully dominated their environment. Winners are winners within the context in which they compete.

Although we think of humans as a very adaptive species — and we are — we aren't adapting because we like it; we're adapting in order to win. The process of adaptation is expensive and difficult. The best adaptation is one that only needs to be done once: after you've won, you continue to stay on top. The most efficient adaptation results in successful domination.

This is possible because contexts have historically been relatively stable. Not that things didn't change, but the rate at which contexts change has been far lower than the rate at which actors within them can act, change, adapt, and win.

This fact has been true for all of human history, but it’s about to change forever.

The world is now changing too fast. This change is being driven mostly by technology, with climate change providing the assist. It used to be that you could adapt in a burst and reach a winning position, then use your power to stay in power. That only works if the environment in which you have won continues to exist. The half-life of environments keeps getting shorter and will soon be less than the decision-making cycle of most actors.

The winners of the future will have one thing in common: the ability to sustain continuous adaptation. Soon it will be necessary to continuously increase an organization’s ability to adapt.

I don’t know what this looks like, but I suspect it’s radically different from the companies, governments, and other organizations that are today considered best in class.

Design Is a Two Cycle Engine

When a team struggles to design a product or service that resonates with users, the diagnosis often focuses on which part of the design process might be the source of the problem. We should have done more research, or we need to determine user goals, or the UI is unclear, or the visual design turns people off.

These conclusions are often correct, yet they miss the real problem. The problem isn't one part of the design process, but the process as a whole.

I think of the design process as a two-cycle engine.

Image by A. Schierwagen, licensed CC BY-SA 3.0

A two-cycle engine is a relatively simple internal combustion engine with a high power-to-weight ratio. It works like this:

  1. Air and fuel are drawn into the combustion chamber during intake.

  2. The piston squeezes the fuel/air mix during compression.

  3. The spark ignites the fuel/air mix during ignition, pushing the piston back.

  4. The movement of the piston both expels the exhaust and draws new fuel and air into the chamber, restarting the cycle.

If your engine isn't working, it could be for a lot of reasons. There might be something stopping fuel or air from reaching the chamber. Or maybe the spark isn't igniting it during the compression. Or the exhaust is blocked. Or any other part could be malfunctioning, and therefore the engine won't run and it won't power your weedeater or motorcycle or AWZ P70 Zwickau.

If you discover some part of the engine is malfunctioning, you're right to fix it. But fixing the malfunctioning part doesn't guarantee a smoothly running engine. And if you fix & improve that one thing over and over, you might still lack engine power if some other part you missed is also malfunctioning.

So it is with design.

  • Intake is feedback and other data that informs the design work

  • Compression is the creation of design work

  • Ignition is testing the work with users

  • Exhaust is discarding work that's proven ineffective

In the same way that “writing is rewriting”, design is redesign, and it requires this cycle to function. Too often we focus on just the compression — how good are we at creating design work — when the problem is often the quality of the feedback, whether we're honestly testing the work, or the ability to discard things that aren't working.

A team struggles when one or more of these component parts is malfunctioning or unbalanced relative to the others. If this cycle isn't running smoothly, it could be a failure of any part of the cycle, so to properly diagnose it, you need to look at every part of the machine.

Credible Strategy Motivates

I have watched compelling visions, pitched by charismatic leaders, nonetheless fail to motivate. I think I've figured out why.

I agree that vision — a compelling description of a future state — is an important, necessary part of motivating a team. The vision must be something the team actually wants to achieve. But that alone isn’t motivating.

What bona fide motivation demands, in addition to an inspiring vision, is a strategy for how the vision will be achieved. Before a person is motivated, they must conclude, in their heart of hearts, that this strategy can succeed. The strategy must be genuinely credible.

What makes this difficult is that belief is impossible to counterfeit. You can force people to profess their belief, but if they don't privately think the strategy can work, you'll experience friction the whole time. Conversely, if they do think the strategy will work, no obstacle will feel insurmountable.

Execution follows strategy, and I don't have anything to say about that except: most of our day-to-day attention goes into execution, and there's a world of difference between work in service of a strategy you believe in and one you don't. It's the difference between playing-to-win and playing-not-to-lose.

So here's the recipe for a motivated team:

  1. Describe what you want (a compelling vision).

  2. Describe a plan to go from here to there (a credible strategy). The important thing is that each person, privately, in their heart of hearts, believes this will work.

  3. Follow this plan (competent execution).

When, inevitably, the plan doesn't survive contact with reality, adjust as needed based on what you've learned. If you did step 2 well enough, the team's belief in the plan will be a renewable source of motivation powering virtually any execution work. The vision can inspire; the strategy is what motivates.

Speed Wins

In order to achieve product-market fit in a competitive industry, you need to get smart faster than your competition, and you must adapt faster than the environment is changing. In war, it’s called guerrilla tactics or maneuver warfare. In business, it’s the core of disruptive innovation and it’s how incumbents get out-competed by startups. A team that learns and adapts faster can beat a bigger, richer, slower team.

The best organizations have systems in place to efficiently acquire and make use of high quality feedback. But sometimes the only honest feedback loop a company has is building-and-releasing-the-product, which is the slowest, costliest way to discover how your users will respond.

For interactive products and services, the best way to get smart faster is with prototypes. The label "prototype" is applied to a broad category of design work, but every example has a purpose in common: to simulate the experience of using your product. Simulating the experience is easier and faster than building the real thing, which means prototypes produce actionable feedback earlier.

Prototypes come in many forms, but I think of them in three categories:

Makeshift Prototypes are created with whatever tools and resources you have at hand. A makeshift prototype could be a paper prototype, using printed wireframes or mockups. It could be a PowerPoint with click targets. It might even be a bunch of JPEGs. Makeshift prototypes can be the quickest to create, but they also require the most work from the facilitator to sufficiently simulate the user experience.

Interactive Prototypes are created using a prototyping tool like Figma, InVision, Adobe XD, Sketch, Framer, Axure, Principle, Proto.io, or UX Pin. These tools make it easy to design screens, wire up click-throughs, simulate navigation, and demonstrate hover effects and animation, but their most important feature is that they make it easier to simulate the chronological (changing over time) nature of a product experience.

Much of my work has been on products with complex back-end logic, analytics, or trading models that would take significant development resources to create. By simulating this content in an interactive prototype, we can test and refine product ideas before committing engineering resources.

Interactive prototypes can be very realistic, but complex workflows can exceed what these tools are able to manage.

Complex Prototypes are created using the same front-end technologies as production applications. These prototypes are still faster to build than the actual product because they simulate the backend, omit out-of-scope features, and skip the testing requirements of production code. These prototypes can scale to the same level of complexity as the actual UI.

In my experience, complex prototypes generate the most eye-opening feedback, especially when the product involves multiple users interacting with each other in realistic conditions. When features are interconnected, a single event may trigger changes to onscreen text, update data visualizations, modify status notifications, and spawn new windows. The value of the application is not any one of these features in isolation, but rather how they interact with each other to provide a rich user experience. The only way to simulate that is with a complex prototype.

It's meaningless to rank these categories; none of them is "best," and they each have a role in a robust design process. With new ideas, use makeshift prototypes to flesh out concepts and flush out problems. Use interactive prototypes to iterate design concepts and get rapid feedback from more realistic scenarios. And finally, invest in complex prototypes to test interconnected concepts and develop the product before committing precious development and engineering resources.

Prototypes offer the most efficient path to delivering maximum user value. As designers, we are in a unique position to make our organizations smarter, and we should use every tool we have to do so.

Home Movies

This post is adapted from a tweet thread from December 2020, a @vgr style #threadapolooza (cc @threadapalooza) on the topic of “Home Movies” as suggested by @Conaw.

First some definitions: We're talking about movies, not photos. We're talking about personal, not professional. We're talking (mostly) about non-fiction, quotidian, vérité video, not amateur filmmaking. (A wedding video captured by a videographer is right on the border; most of the time we're talking about regular life shot by amateurs.)

People have been shooting home movies for a long time, through multiple changes in equipment and technology. 8mm film was invented by Kodak in 1932 as a low-cost alternative to 16mm. By the late 30s there were cheap cameras and projectors that cost the equivalent of a few hundred dollars today. This format remained more or less unchanged until the 70s, when a magnetic sound strip was added. Think about that - multiple generations grew up with home movies as *silent* movies. (This is one reason the home-movie opening credits of The Wonder Years resonate so strongly. Well, that, and Joe Cocker.)

While affordable, 8mm was a pain in the ass. Each 50ft reel would shoot 2-4 minutes depending on your framerate. Before anybody could watch it, the film had to be developed, threaded onto a projector, and then projected onto a screen (and you thought AirPlay was annoying). So until camcorders came along in the mid 80s, home movies were a cinematic anachronism, a living memorial to filmmaking's roots well past the advent of talkies, into the New Hollywood era of the 60s and 70s. Then came the camcorders. By the mid 1980s, Hitachi, RCA, Panasonic, and others all offered camcorders that took full size VHS tapes.

Let's talk about how amazing tapes are. I went to film school in the early 2000s with a deep disdain for videotapes. The look of film, even 8mm film, was held pedestal-high. While I still prefer the film aesthetic, I'm so impressed by videotape as a technological feat. A standard VHS tape captures 2-3 hours of video. For those doing the math, compared to a 50ft 8mm reel that's a 60x increase in how much of Christmas morning your uncle can capture.

This is why we all have so much footage starting in the 80s - the camcorder era made it trivial to turn on the camera and let it roll. My wife's family found a box of tapes one time when somebody was moving. We paid to have them digitized. It cost $x00 and we got back terabytes of .mov files that we skimmed through once.

For a truly fascinating and comprehensive account of one (technically sophisticated) person's quest to digitize old home movies, please read @deliberatecoder's blog post: My Eight-Year Quest to Digitize 45 Videotapes (Part One)

Sometimes I think about the hours and hours of captured life sitting in boxes in the back of closets. Most of which will never be seen again. So many boxes of so many tapes. And yet it's comically dwarfed by what we're doing today.

The camcorder made playback monumentally easier too - as shown in Back to the Future, the camcorder was both a recorder and a playback device. A remarkable feat of product design. The camcorder era ran for another two decades - tape technology and cameras got smaller and better, but the user experience remained largely the same.

The best thing about tapes is capacity. The worst thing about tapes is that they're linear. With tapes, copying something happens in real time. Need two copies? I hope you enjoy babysitting two tape decks for a couple hours. So much of what we take for granted with digital formats was unavailable in the tape era. But the shape of the future was apparent as CDs, with their obviously superior format, eclipsed cassettes.

In the mid-2000s, the Flip video camera was the most popular camcorder on Amazon, storing video to flash media as .mp4 files. Sure the video quality sucks, but so does standard def video recorded to magnetic media. In 2009, the iPhone 3GS was released with a camera capable of capturing VGA video at 30 fps. The camera-in-your-pocket became a video camera, and the camcorder era soon ended. Like most era transitions, the upstart was worse than the incumbent in many ways, but better in at least one important way. The convenience of shooting (and sharing) digital video on a small device outweighed the initially-poor video quality and the limitations on length.

Today, more than a decade later, the technical capabilities of smartphone video are jaw-dropping. Hundreds of millions of people walk around with a device capable of at least HD video, often 4k, with slow-mo and other features I would have killed for as a kid in the 90s. In retrospect, the camcorder era of let-it-roll long form home movies was an anomaly. When's the last time you shot a video longer than, say, 10 min on your iPhone? Despite the massive technical differences, the way we shoot smartphone movies today has more in common with the 8mm era than the camcorder era.

What hasn't changed is the motivation. For at least 90 years, through major and minor changes in technology, people - regular people - have been motivated to spend not-insignificant amounts of time and money capturing their lives. The family camera is an aspirational buy - a commitment to capture our lives, a promise that the lives we live are worth capturing. My budget-conscious parents bought a then-expensive video camera in the early 90s after watching a documentary about Lucy and Desi that featured 8mm home movies. (Like a Hallmark channel version of The Ring, the previous era of home movies somehow triggers the next) Halloween, Christmas, Easter, the Fourth of July. Sporting events. Birthdays. The time the Olympic torch came through town on its way to Atlanta in '96. Our move from the old house to the new house. By rolling the camera, we put a little bit of our environment in a container. We put a small amount of the river-you-can-never-step-in-twice in a bottle. It's not the river any more, but you can drink it.

Humans (all apes but especially humans) have a lot of neural tissue devoted to modeling the minds of others, which enables our rich social structures. A camera activates that neural hardware in an odd way: the camera is an observer, but not a participant; we have millions of years of evolution assuming the two are the same. To our monkey minds, the camera is uncanny. The experience of shooting home movies changes an environment. People react differently to the presence of a camera - some light up, some shrink. It's uncanny for the camera operator too. The camera, the viewfinder, is a screen between you and the thing happening right in front of you. (For us introverts this can be a godsend)

But the really captivating illusion, especially for the camera operator, is that shooting feels like "most of the work" (it's not). While shooting, it feels like you're all but done with the creating-home-movies activity. It turns out there's an enormous gap between that feeling and reality. The shooting is the beginning, the start. It's the first half of the deal. The second half is the watching, which is - surprisingly - just as much work.

Why are we doing all this? Why did people go to the trouble of shooting 8mm film, or pack the camcorder on their vacation, or point their phone at their kids' school play? We want to watch it again. We want to be there again. We want to be reminded of what it was like.

The home movie is a time machine.

While there is some interest in watching a stranger's home movies, the degree to which you care about a particular video is determined almost exclusively by how well you know the participants. Watching two rando teenagers from the 70s at a diner is mildly interesting, a historical curiosity. Watching the same video of your parents' first date is captivating. This partly explains why the expenditure of time and energy and money on home movies has been so dispersed - the "blast radius" of interest in the work is inherently limited. A commercial piece of art, even from an unknown artist, can go viral. A heavily-marketed factory-produced "must-watch" feature can fail loudly. Both due to one factor: whether the piece resonates with audiences. Art, in general, is meant to resonate. Home movies aren't! They emerge as a side effect of having the camera roll while life happens. And yet they DO resonate, deeply, for a very small audience. Which makes the habits and behaviors around watching home movies fascinating.

Speaking of behavior, the idea of "home movies" conflates a handful of things that are worth separating: shooting, editing, watching, and sharing. It's not the case that these have ever been cleanly separated, and the ways they have been conflated is really interesting.

Shooting has always been pretty self contained - there's a camera, you operate it somehow, and point it at the subject. The limitations imposed by the tech have changed (sometimes significantly) but not the essence: your uncle from the 40s would recognize how an iPhone works; a TikTok teen could pick up the 8mm format. But the rest of it has morphed over the years. I called out "editing" as one of the activities, but now that I'm thinking about it, this is probably the most controversial one. One of the hallmarks of home movies is how little they're edited, if at all.

Editing is a weird piece of the puzzle because I feel like I'd be leaving something out if I didn't mention it, even though it rarely happens. Editing makes a lot more sense if we're talking about the sub-genre of "home movies" that is amateur filmmaking. Like a lot of kids, I employed my siblings and friends in increasingly ambitious genre pictures. The results always fell short of my dreams, but were fun to watch. My parents were, I assume, happy for us to be preoccupied for an entire afternoon. Editing is a task directly affected by the technology. Editing (physical) film is a charmingly physical process - literally cutting and taping. Editing videotape requires two tape decks, a setup most people could manage with the camera itself and a separate VCR.

Editing digital video files is where digital suddenly shines. I fell in love with digital video the first time I saw my uncle use the Dazzle video recorder to digitize a clip from Jurassic Park and then edit it on his then-blazingly-fast Pentium PC. A year later I had saved up the $200 to buy one for myself. With the ability to digitize footage from the hi8 camcorder and edit it on our home desktop, my editing capabilities exploded. With their included non-linear-editing app, I could cut shots with to-the-frame precision, with reversible editing choices! I could add titles and transitions! Not knowing exactly what I was doing, I clicked "Render" and then had to inform my Mom that no, you can't use the computer for the next... [checks the dialog box] ...8 hours. The next morning: a real movie!

My perspective on how much editing really happens is distorted by my personal willingness to put up with the work inherent in amateur filmmaking. When my son was 6 months old, we took him to a Halloween costume party with about a dozen other families we had met in birthing class. I brought my "good camera", took a bunch of footage, and edited it into a snappy music video. Watching it now confirms my hypothesis about home movies' limited appeal - I love the shots of my now-8-year-old as a baby, and I hardly remember (and have no interest in) everybody else.

I'm really happy to have made this stylish little video for our then-friends, but there's a reason it doesn't happen that often - editing is a pain in the ass, even in digital. Since the beginning, there's always been a surplus of boring home movie footage. Because most of the time, mostly nothing interesting is happening. Even during an exciting historic event, most of the time we're just waiting around; for only a minute of the hour-long video does the Olympic torch get carried by a local celebrity past cheering crowds.

(While we're talking about it, I have to confess I ran out of battery just as the torch reached us. I couldn't believe it. I'm still haunted by all the nonsense footage I had shot of the surroundings as we waited, burning through the battery. I think we (still) underestimate the impact of modern lithium ion batteries on the trustworthiness of our most used electronics.)

So for virtually everything we shoot home movies of, most people want to see the highlights. Which means, ideally, we'd all have somebody cutting together a highlight reel. And instead we're producing this bank of mostly-not-watched, mostly-not-interesting footage.

Smartphones already create AI-edited highlights, but the quality is inconsistent. They're not-bad, but also not reliably good. Sometimes they get the tone right, sometimes the editing choices are nothing a human would do. That said, I'm pretty sure we're near a tipping point - soon we'll have automatically-generated, professional quality highlight reels so good there will be no reason to watch the source videos.

One reason I'm confident about this is that the amount of video being generated is only increasing. Video codecs improve, storage gets cheaper and cheaper, and the amount of video captured continues to increase. Soon, we'll dump random video content in, and the same AI that generates clickbait today will create uniquely captivating, well-edited home movies.

Watching home movies has always been significantly affected by technology. In the film era, watching home movies required a projector and a screen. The camcorder era simplified this by connecting directly to a TV. The digital era detached the video from the screen, enabling smooth transitions from computer to mobile to whatever-screen-comes-next.

Once we have a reliable, desirable, automatic editing step, people will be more and more motivated to shoot more and more video. If there's a brilliant, never tiring robo-editor to sort through it all, why not give it more raw materials? After this happens, I predict people will notice how limiting it is to have to hold a phone up to shoot something. Maybe by then everybody's wearing AR glasses; if not, there'll be a market for bodycams and other solutions that capture video with minimal input from the user.

It's already the case, in many parts of the world, that every car has a dashcam that rolls automatically when the car is turned on, recording their drive in advance of any traffic incidents. Not long ago, bodycams for police were a futuristic technology, a silver bullet to solve police misconduct. Now they're widely deployed (and we find their limitations political, not technical), and it's not uncommon for "bodycam footage" to feature in court cases. In the future, the camera is always-on, and the footage is automatically edited into videos at whatever frequency we request. The future version of today's "influencers" will broadcast continuously; most people will share about as much as they post to Instagram today.

What of the original footage, in a world where billions of people generate up to 24hrs per day? Given a sufficient robo-editor, does the source material get discarded? At first, the idea of keeping 24hrs of 4k video every day will seem unreasonable, and we will keep only the "best" clips where "something happens". But the cost of storage will keep approaching zero, and the potential value of all the video where "nothing happens" remains non-zero. 24hrs of 4k video (at ~5GB/hr) is "only" 120GB per day, or less than 44TB per year. That sounds like a lot of data, but it's not THAT much. Not long ago, 44GB seemed like a lot, and I remember when 44MB seemed like a lot. It won't be long before a YEAR'S worth of 4k video will be more easily stored than discarded.
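(If you want to check that math: at an assumed ~5GB per hour, a figure that varies a lot with codec and bitrate, the back-of-the-envelope calculation looks like this.)

```typescript
// Back-of-the-envelope storage math for continuous 4k capture.
// The 5 GB/hour figure is a rough assumption; real bitrates vary by codec.
const gbPerHour = 5;
const gbPerDay = gbPerHour * 24;             // 120 GB per day
const tbPerYear = (gbPerDay * 365) / 1000;   // ~43.8 TB per year
console.log(`${gbPerDay} GB/day, ~${tbPerYear.toFixed(1)} TB/year`);
```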

I'm reminded of the "Remem" technology in Ted Chiang's short story "The Truth of Fact, the Truth of Feeling". It's told (in part) from the perspective of a father resistant to adopting the "lifelogging" technology used by most others. Chiang's story is a reflection on how recording changes the lived experience. We can all recall scenarios where events were remembered differently from how they occurred. Technology changes what's easy and what's hard, and thus changes human behavior. e.g., Texting replaced advance planning with just-in-time coordination. What will change when any event is able to be recalled, when any event can be seen as highlights, or in its entirety, or anything in between?

For one, it takes what we today call the "home movie" and vaporizes it. We'll be surrounded by an atmosphere of video. Like electricity, running water, internet service - video of anything will be available at utility scale. For decades, home movies represented the equilibrium between the desire to capture and relive parts of our lives, and the limitations of the technology that made this possible. Soon, one side of that equation will evaporate, and the only thing limiting our ability to relive moments will be ourselves.

It may be that home movies will be considered a transitional artifact. There was a long time before moving pictures. And there will be a long time after full fidelity video of every moment from every perspective is available. But for one brief century, millions of people spent time and money manually capturing some of their moments, and watching them again, together. Choosing to buy the camera. Learning to operate it. Remembering to bring it out. Inserting it, however awkwardly, into our most important events. And then assembling the audience. Playing back the videos. Traveling through time, together.

I think people in the future will look back on this time with curiosity, struggling to imagine a world where video doesn't surround life like oxygen. The same way we try to imagine a world before communication, or electricity, or roads, or agriculture, or language. We know that humans - biologically identical to us - lived for tens of thousands of years in these alien ways, and yet it feels so far away from our lives that understanding is impossible. And so it will be for our descendants - our world will be impossible to believe:

"They had to HOLD the camera. And point it at only one thing in the room. And then they only had like, a few hours of video an it only showed what the camera was pointed at. And you had to sit in front of a screen to watch it."

"What if something happened in another room? Or when they weren't holding the camera?"

"They just...couldn't watch it."

"Ever?"

"Ever."

We've been in lockdown since March, and I've spent more hours home with my wife and two kids this year than any previous year. I've found myself, throughout the year, taking more video, and watching more video, than ever before. The kids love to watch themselves at previous ages, and every time I go to pull up video of a certain kid at a certain age, I'm struck by two things:

First, it's amazing that I can usually find a video of whatever age has been requested, within a month or so.

And second, I'm struck by how little of life I capture. Each video a pale blue dot, a little world floating in the endless uncaptured expanse of life.

We are living through the end of the era when "home movies" is still a meaningful concept. For now, for the rest of this romantic era, it's all on you. Whatever part of life you want captured, frozen in amber, recallable, relivable - go point a camera at it, so that you may, someday, watch it later with the ones you love.

FADE OUT

THE END

Batman: Riddler's Ransom

In 1999 my friend Jason Woods and I, inspired by the Adam West Batman TV series, wrote a screenplay for a short film called Batman: Riddler’s Ransom.

Traditional screenplay format has the handy property of representing approximately 1 minute of screen time per page. Unfamiliar with this standard, we wrote it in a homemade format that looked more like the script for a play — inline character names and dialog, numbered scenes instead of sluglines, action in parentheses or co-mingled with dialog, and line numbers for some reason.

The screenplay as originally formatted

Even in this condensed format, it was 17 pages long. I boldly estimated we could shoot it all in three, maybe four days.

We recruited our friends to be the cast and crew, and our mothers to sew the costumes. We assembled props and got approval to shoot after hours in key locations around Stillwater. Using Noteworthy Composer I composed and recorded a remix of the theme song, played on 2000-era midi instruments.

Unfortunately, due to equipment failures, logistical oversights, and teenage flakiness, we failed to shoot more than a few pages. I was heartbroken.

Recently I wondered how long our script would be in a standard screenplay format. This is something I had been curious about in the past, but had balked at the tedium of doing the conversion in something like Final Draft. A combination of the fountain plaintext format and the vim text editor made it possible to do it in an afternoon.

Our “movie” turned out to be twenty-five pages, about the length of a standard episode of the TV series. Reading it now, I’m surprised how well it holds up — I think we had a bead on the tone.

That said, no matter how well we had planned the production, our shot-on-hi8-video version would have inevitably fallen short of the version in my head. But I’m proud of what we accomplished, and I’m most proud of the script, which you can read below in its modern format.

Table Paste

I wrote a plugin for Figma called Table Paste, which addresses a very common problem my team and I face when creating mockups and prototypes.

Stakeholders prefer and sometimes insist on realistic sample data, and for good reason: a meeting with a customer or prospect can go sideways when sample data is unrealistic or repetitive. An institutional trader will find it hard to discuss a trading interface if the market data in the prototype differs too much from live feeds they watch all day.

The design team could get realistic sample data in a spreadsheet, but there was never an easy way to get that data into a prototype. Correctly inserting data from a spreadsheet, cell by cell, into dozens or hundreds of text boxes is crushingly tedious, but Figma's plugin API makes it possible to, among other things, automate repetitive tasks.

Table Paste is a plugin that takes tab-separated content (which you get in your clipboard when you select and copy part of a spreadsheet) and applies the cell values one by one into text boxes in Figma table row components.

The insertion order is determined by the order of the text boxes in the sidebar, but the visual arrangement could be literally anything, so while the standard use case is mockups and prototypes with styled tables, Table Paste will insert data into any repeating layout.
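For the technically curious, here is a rough sketch of how a plugin along these lines can work. This is not the actual Table Paste source: the function name is made up, and the TSV string is assumed to arrive from the plugin's UI via postMessage, since plugin code can't read the clipboard directly. It shows the core loop of splitting the pasted text into rows and cells, then walking the text layers of each selected row in layer order and writing one cell into each.

```typescript
// Illustrative sketch only, not the actual Table Paste implementation.
// Assumes the plugin's UI posts the pasted TSV string to the plugin code.

async function applyTsvToSelection(tsv: string): Promise<void> {
  // Split the clipboard text into rows, then into tab-separated cells.
  const rows = tsv.trim().split(/\r?\n/).map(line => line.split("\t"));
  const selection = figma.currentPage.selection;

  for (let i = 0; i < selection.length && i < rows.length; i++) {
    const rowNode = selection[i];
    if (!("findAll" in rowNode)) continue; // skip nodes that have no children

    // Text layers in layer (sidebar) order determine which cell goes where.
    const textNodes = rowNode.findAll(n => n.type === "TEXT") as TextNode[];

    for (let j = 0; j < textNodes.length && j < rows[i].length; j++) {
      const node = textNodes[j];
      // Fonts must be loaded before a text node's characters can be changed.
      // (Mixed-font text layers would need extra handling; ignored here.)
      await figma.loadFontAsync(node.fontName as FontName);
      node.characters = rows[i][j];
    }
  }
}

figma.showUI(__html__, { width: 320, height: 160 });
figma.ui.onmessage = async (msg: { tsv: string }) => {
  await applyTsvToSelection(msg.tsv);
  figma.closePlugin("Table data applied");
};
```

Because the sketch keys off layer order rather than visual position, it works the same whether the text layers form a styled table row or any other repeating layout.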

The plugin has been well-received and was included in "Best Figma plugins for 2020 which deserve your attention." If you have feedback, we'd love to hear from you.