The Trials of Storytelling with Code

My goal was to say something meaningful with as few words as possible.

I wanted to emphasize the importance of mindfulness in this visual story. Like many pessimistic skeptics drinking coffee and scowling at news headlines, I didn’t put much stock in mindfulness when people first started talking about it. I thought it sounded nice, but I didn’t see how it could apply to me and my busy life of being a full-time student while working full-time.

Mindfulness seems like something lofty and difficult to practice. It wasn’t simple, melancholic disinterest in my case though; I had a lot of preconceived notions about mindfulness and spirituality that just weren’t accurate.

Even if you do believe mindfulness could help you, fitting it into a busy life isn’t so simple. There are so many different ways to practice mindfulness that it’s hard to imagine which types could work for you. In general, the practice has a “stop and smell the roses” vibe, which doesn’t always seem very practical.

I wasn’t interested in mindfulness for a long time. However, there are scientifically documented benefits to practicing mindful meditation. I can argue with myself about the validity of something, but it’s harder to argue with findings published in research journals. Brain, Behavior, and Immunity published a study on how a meditation style called “mindfulness meditation” significantly reduced levels of stress hormones. In stressful times, we could all use a little more relaxation.

I’m on a journey toward really implementing mindfulness in my life, but I’m early in that journey. That’s why, when embarking on figuring out a simple story to tell with code, I wanted mindfulness to be a central theme.

The process of creating this simple story had a lot of not-so-simple steps.

First, identifying the tools to make the code happen.

After some guidance, I set out to use the ml5.js library, specifically its pose-recognition model, PoseNet. Both sound simple at the surface level—use an onboard webcam to recognize poses—but I realized rather quickly that implementing this cool party trick of a library into a project was harder than I anticipated.

The idea of creating a story that people could play through with just a webcam was what brought me to PoseNet. I wanted a physically interactive story to be very easy for someone to play through. Webcams are far more common than VR headsets, so the idea would open the door to making interactive storytelling with physical elements more available to the average user.

Second, I set out for a visual direction.

Initially, when I started writing, I was going for a lighter and snarkier tone. I was thinking I’d start out on a humorous note then slide toward something more serious and motivational. I tried to reflect the levity in the design with the little sketchy stars in the corners.

Though I wanted to get to that theme of mindfulness, I wanted to start out with something that people could resonate with more easily. It’s hard to tell people to start keeping gratitude journals and try to be present each day. People will nod and think “that’s nice” but lack the interest or desire to really try. So I thought starting out with something lighter, hooking them in, then moving into more meaningful topics would be a good pathway.

My initial idea was a hope that PoseNet would have the capability to recognize rather small, complicated motions like the act of lifting a coffee mug to your lips. This was a rather optimistic assessment of the library’s abilities.

As I kept going in this direction, I tried to get a little more serious in the second portion. My first thought was to get the person acting out motion in front of the webcam to do portions of very simple yoga poses. Though it sounds incredibly cliche, I feel like yoga is one of the biggest acts of self-care a person can do for themselves.

This third part was even more optimistic about what PoseNet could handle. Still, I was hoping to craft the perfect morning routine and guide the player through acting it out: starting with something pleasant, then shifting to some simple, beneficial stretches to nurture the body.

Then I wanted to get into the psychologically restorative activities, like gratitude journaling. It’s another thing that sounds very hammy if you’ve never done it before. However, I started trying it out this past October when I faced one of the craziest juggling acts I’ve done—moving, working, and doing grad school all at once. I moved rather suddenly and had to pack everything up in just three weeks. I was losing my mind a little and desperately needed something to keep my head organized and keep me grounded.

I decided to try out a form of gratitude journaling that felt a little more grounded to me: journaling paired with a content ideation technique, a morning routine championed by one of my favorite authors, Ayodeji Awosika.

Put very simply, the routine involves writing down 3 things you’re grateful for and 10 ideas for things you want to write. This is incredible for writers, but it’s definitely a mix-and-match thing: if you aren’t a writer, you can use the ideation time for something more relevant to your discipline. That fusion of self-care and productivity made gratitude journaling feel a lot more approachable.

Regardless, it was about this point in the project that I stopped and decided that I needed to test these things out before I wrote anything more or dove any deeper into the graphics.

Third, I lost myself completely, swallowed whole by code.

I’m somewhat embarrassed to say that I’ve gone through seven different iterations of this project. I started trying to test out ml5 and PoseNet to see if it would be able to handle tiny gestures like writing or sipping from a cup of coffee. I realized pretty fast that it wasn’t going to work.

Amid following tutorials for using ml5, I ended up trying to build out an ml5 neural network. I made a simple sketch—we’ll call it version 3—with a lot of borrowed code from the ml5 website and some very simple buttons to try to train the network. Even though this was a lot of Frankencode, it still took quite a while to get even that much working.

However, there were some serious problems there. Users would need to train the brain behind the sketch themselves before it could recognize poses. That wouldn’t be a fun game in the slightest, and it wouldn’t be possible to give the player any simple cues.

Fourth, I scrapped that first visual direction.

Though I liked it, I realized that my original idea was a little too wordy for a story with just a few stages. Beyond that, I wanted to keep the theme of movement and motivation, but I knew I needed to get more creative and work with fewer words to make that happen.

I don’t have quite as much background story to share with this iteration; some of the things are similar, though they are less specific. They’re vague enough to still have some meaning—albeit perhaps not as much gratitude journaling—but they’re simple enough that PoseNet’s limitation to reading wrists and not hands makes the interactions workable.

Grappling with burnout was a big influence on this project.

By nature, I’m a diehard perfectionist. For example, even once I changed the visual direction, I remade most of these images twice and ended up with some odd numbering issues. At first, I jumped right into the “story” without setting the stage at all. I had simply “Look at your reflection” as the first image. When I uploaded it to p5.js and sat down to code it in, I realized how useless that was, since it didn’t give any kind of usable prompt to advance the story.

There were a few scrapped visual styles.

Bigger is better? It took a few different drafts and versions to find something workable. I also tried making the canvas a bit larger to improve clarity and give a few more options for the visual formatting, but that spectacularly broke the PoseNet keypoint system.
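If I had to guess at why, it’s that PoseNet reports keypoints in the video’s native resolution, so drawing the canvas at a different size leaves every keypoint misaligned unless it’s rescaled. This is my assumption, not something the original sketch confirms, and the helper below is an illustrative sketch rather than code from the project:

```javascript
// Hypothetical helper (not from the actual sketch): rescale a keypoint
// reported in video coordinates so it lines up with a larger canvas.
function scaleKeypoint(kp, videoW, videoH, canvasW, canvasH) {
  return {
    x: kp.x * (canvasW / videoW),
    y: kp.y * (canvasH / videoH),
  };
}

// A wrist at the center of a 640x480 video maps to the center
// of a 1280x960 canvas.
const scaled = scaleKeypoint({ x: 320, y: 240 }, 640, 480, 1280, 960);
```

Without a step like this, any hitbox drawn in canvas coordinates is comparing against stale video coordinates, which would explain the spectacular breakage.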

Gifs are fun, right? I considered using .gifs for a little more visual flair, but p5.js’s options for formatting gifs are a bit complicated. Considering that layering the still images over each other already broke the sequencing of p5.js and ml5’s ability to read poses, I walked away from the .gif experiment pretty quickly.

On a less poetic note, there was a fair amount of Frankencode in this project.

I must give credit where credit is due! It’s broken down pretty clearly in the p5.js sketch where things were pulled from sources like the PoseNet GitHub page. I’ve marked it out in the pseudocode. Ironically, as I moved through different iterations of the project, a lot of the Frankencode died out.

Once I abandoned the neural network, I tried to get the interactions working with simple hitboxes: if/then statements that advanced the story whenever PoseNet detected a joint entering the current hitbox.
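The core of that idea can be sketched as plain logic, independent of p5.js. The names, hitbox coordinates, and stage actions here are illustrative, not the actual sketch’s code:

```javascript
// Returns true when a keypoint falls inside a rectangular hitbox.
function inHitbox(point, box) {
  return point.x >= box.x && point.x <= box.x + box.w &&
         point.y >= box.y && point.y <= box.y + box.h;
}

// One hypothetical hitbox per story stage, in a 640x480 webcam frame.
const stageHitboxes = [
  { x: 400, y: 100, w: 200, h: 150 }, // stage 0: e.g. raise right wrist
  { x: 40,  y: 100, w: 200, h: 150 }, // stage 1: e.g. raise left wrist
];

let stage = 0;

// Called with each new wrist position from the pose detector;
// advances to the next image when the current hitbox is entered.
function updateStory(wrist) {
  if (stage < stageHitboxes.length && inHitbox(wrist, stageHitboxes[stage])) {
    stage += 1;
  }
  return stage;
}
```

In the real sketch, the wrist position would come from PoseNet’s keypoints each frame, and the stage number would decide which story image gets drawn.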


I also made myself a ridiculous cheat sheet to plot the coordinates for the hitboxes.

Photoshop, her beautiful gridlines, and a canvas the same size as my sketch made mapping out the coordinates of each hitbox a lot easier.

I thought eyeballing it would be easy enough at first, but then I got my greater-than and less-than signs mixed up, like the infallible mathematician I am.

Thus, I made this little cheat sheet for myself with a rough estimate of where the player should be located in their webcam feed. This made coding out the hitboxes for the various actions a lot easier to size up.

All of that brings us to the final project.

After several weeks of troubleshooting, several office-hours sessions, and a lot of flailing in front of the webcam, the project was done. I recorded the project two ways: through my computer’s webcam and with my camera. I wanted to show the play process as comprehensively as possible.

At long last, it was version seven—lucky number seven—that, after a little more guidance and help, sorted out all of the issues. The full code is happily living in that p5.js sketch.

If I could go back in time, I’d focus on the hitbox interaction from day one.


As it looks right now, it might just make Don Norman cry into his coffeepot for masochists. Looking at the finished product, I feel like messing with the neural network was a colossal waste of time. I wish I’d spent those days tinkering with the hitboxes and redoing the images another time.

However, from a learning perspective, it wasn’t entirely a loss. When I look back at my first project, where I tediously positioned hundreds of ellipses and other simple shapes, I realize that this is a lot more sophisticated than what I started out with. Beyond that, though it took a while, I have a rudimentary understanding of variables. There are certain things, like if/then statements, that I actually understand and can implement now without sitting around reading hours of tutorials.

Regardless, I spent so much time on the scrapped iteration of the project that I feel like my final visuals aren’t quite as impressive as they could have been. This is partially my own fault; I should have reached out for help and other ideas a little sooner. The best I can do is tell myself that this is a good lesson in the value of community, and in why I shouldn’t keep stabbing in the dark with lines of code.

My ultimate goal in this project was to convey the importance of rest and how vital it is to keep moving forward.

These are two things in life that I’m always trying to balance — resting enough to avoid burnout and still moving forward. I’ve learned the hard way that pushing until you break and can’t lift a finger for days is the worst way to manage yourself.

Stagnation is never the answer. Sitting back and doing nothing doesn’t change a bad situation. It’s taking those small steps forward that really can make a difference in improving a situation.

In the future, I’d love to explore this style of interactive storytelling in VR.

Although the idea of using a webcam to get a person moving is interesting, I think I’d need to use different tools to really make the most of it. Beyond that, it is rather awkward to limit people to motions they can do sitting down. If I wanted people to do larger movements, it would need to be something more like the Xbox Kinects of the olden days where people can use a larger screen and a camera that they're further away from.

Looking ahead at emerging technologies, interactive storytelling in VR is something I’d like to concentrate on more in the future. Even so, I’m still walking away with some important takeaways.

Visual storytelling sometimes needs to be brief.

“[S]ince brevity is the soul of wit,
And tediousness the limbs and outward flourishes,
I will be brief.”

—William Shakespeare

My good old friend Willy Shakes said it best. Brevity is the soul of wit.

Visual storytelling can be effective with minimalist visual elements.

At risk of sounding lazy, this is something I want to explore more. Sometimes, stories with simple visuals are able to convey visceral emotions thanks to their simplicity. Visual simplicity lets people place themselves in the stories more easily than if they were trying to form a connection or resonate with a very specific cast of characters or setting.

I feel like it would be beneficial for me to brainstorm ways I could tell an excellent story with a humble amount of visual assets. For example, if I was trying to tell a story by building it in Unreal with all free assets, what kind of unique world could I build with assets that already exist? It’s not currently in my plans to learn the full pipeline of 3D modeling, which would limit my abilities as a digital storyteller.

Subsequently, a lot of the things I grappled with in p5.js are principles that I can learn from and apply to a variety of different projects. Whether that’s trying to code another story in p5.js or piecemeal something together in Unreal Engine down the road, it’s the same type of interaction planning and the same style questions. The backbone of this project—trying to tell an interactive story—is a skill that I want to keep developing in the years to come.

Writer and poet from Neptune. Instructional designer in NYC. Grad student at @NYUTandon studying Integrated Digital Media.
