I’ve been working with Jared on Interference for a long time now, but it’s not every day I really take the time to step back and think about what a journey it’s been to get to where we are today. Doing this developer interview was a great way to reflect on everything from the moment we first decided to make a video game (spoiler alert: it was on a bus to Six Flags), to how the themes of the story we’re telling still resonate with us today.
I hope that this video gives you a unique insight into what drives me to work on this game every day. If not, Jared will have forced me to get out of bed to do this interview on a Saturday morning for nothing.
We’d never so much as attended a gaming conference, so finding out that we’d been selected to exhibit Interference at DreamHack Anaheim was equal parts exciting and terrifying. It meant that a bunch of people would have a chance to experience the game we’d been working on for nearly 2 years for the first time, and it also meant that we’d have a little under 2 months to whip up an updated demo and handle all the logistics of setting up a booth on the other side of the country. So, okay, maybe it was slightly more terrifying than exciting.
But once we had everything wrapped up and we arrived in Anaheim with our demo and all our booth decor in tow, the excitement really took over. Neither of us is particularly outgoing, but it quickly became surprisingly easy to shed that anxiety and talk to people about the game. Seeing the interest on people’s faces when we gave the elevator pitch and hearing their genuine positive feedback after playing the demo really helped break down some of the imposter syndrome we’d been dealing with along the way. Huh, okay, so people do actually seem to enjoy playing this game; maybe a few people actually will play it when we release it!
Perhaps the most rewarding part of the whole experience was meeting other devs and learning more about their projects and processes. We’d come to learn from Twitter and Discord that indie game development is a strong and supportive community, but getting to experience that in person with a bunch of other devs all dealing with the same anxieties and successes that we’ve been experiencing the last 2 years really made that clear. And on top of that, every single game we saw or played impressed us in some way. We’ll give a special plug to some of our booth neighbors — Chromatose, Adventures of Chris, and Inscryption will all be day one purchases — but we can honestly say that if any of the DreamHack Anaheim Indie Playground games catch your attention, check them out because they are all worth playing.
When it was all said and done, we were both thoroughly exhausted, physically and mentally, and we’re both still getting our sleep schedule back on track a week later. But I don’t think either of us would trade that experience for all the rest in the world, and we’re so thankful that we had the opportunity to go out there and preview Interference to the world. Here’s hoping it won’t be the last time!
Creating a 3D game asset is not all that different from creating a physical object in the real world. The skills and labor required don’t line up perfectly (you’re more likely to develop carpal tunnel than lose a finger, just as one example), but there’s more overlap there than one might think.
This series of posts will go through our workflow for creating 3D assets from the ground up. I’ll mention the tools and services we use in each step, but won’t get too bogged down in technical specifics. Instead I will focus on the thought process and theory behind each step, which can be applied to any pipeline or workflow.
To kick things off, we will look at modeling a 3D object. We use Blender for this step of the process, and the deliverable will be one or several FBX files with all the data we need for our next steps.
Before we even open our 3D software, the first and arguably most critical step is to research what exactly it is we want to make. We have a lot of flexibility and artistic license when creating things to populate our game’s world, but it’s still important to keep real world dimensions and design standards in mind, since inconsistencies across objects can be very noticeable when playing the game.
The specific things to research vary wildly from item to item, but here is a general checklist we always compare against before we start modeling:
Dimensions: Make sure the object is the correct size relative to the booth and, more importantly, other objects.
Era-appropriate: A coffee maker from 2015 looks a lot different than one from 1985.
Feasibility: If a reference object has complex geometry or relies on unnecessary physics simulation, it’s probably best avoided, both for our own sanity and for the ultimate performance of the game.
Once we’ve completed our research and gathered some reference photos and dimensions, it’s time to open up Blender and get to work. We’ve done a deep dive into Blender in a previous post, so definitely check that out if you are interested in the program specifics.
On a higher level, the goal here is to create an approximation of the real-world object while keeping the geometry as simple as possible. 3D modeling for games and real-world manufacturing share a key limitation: complex geometry is more expensive. In physical manufacturing, it costs time and money to get a person (or machine) capable of creating an object to match a complex spec. In computer modeling, that complexity results in more processing power required to render the virtual object.
So while we never want to cut corners in noticeably detrimental ways, some smart planning can optimize the model’s geometry while maintaining the standard of realism we strive for. Working on a model with lots of complex, beveled edges? Maybe save the higher resolution bevels for the most prominent edges facing the player. Have a really cool design carved into the side of an object? Don’t model it, leave the surface flat and add the design to the material with a normal map (we’ll get to this in a later post!).
By thinking in terms of optimization before and while we model, we can end up with something game-ready the first time, rather than trying to optimize an ultra-complex model after the fact.
UV Mapping is cool because there isn’t really a clear parallel to real-world manufacturing. To explain it simply, 3D models are made up of 2D faces. When applying a 2D texture to a 3D model, a program needs data to know which parts of the texture image should map to which faces. This data is provided in the form of a UV map.
A UV map is essentially a 2D plane where faces of a 3D model can be laid out and arranged on a 2D image. If you’re curious to learn more, you can check out this great post that explores it in a bit more depth.
In practice, our approach to UV mapping generally involves trying to pack as many faces as possible into a single 2048×2048 texture while trying to maintain consistent texture resolution and avoiding too many obvious “seams” when a texture should flow seamlessly between faces.
It sort of creates a three-way juggling act, because trying to accomplish one goal often comes at the expense of the others. For example, shrinking the faces on the UV map allows you to pack more in, but that results in lower texture resolution for those faces, which might be an issue.
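To make that trade-off concrete, here’s a rough sketch of a texel-density check (the helper name and the sample face areas are made up for illustration, not from our actual pipeline):

```python
import math

def texel_density(world_area, uv_area, texture_size=2048):
    """Texture pixels per world unit for a single face.

    world_area: the face's area in world units squared
    uv_area:    the same face's area in 0..1 UV space
    """
    # Linear scale factor from world space to texture space.
    return math.sqrt(uv_area / world_area) * texture_size

# Two faces of identical 3D size; the second was shrunk on the UV map
# to make room for more faces in the 2048x2048 texture.
full = texel_density(world_area=0.25, uv_area=0.04)
packed = texel_density(world_area=0.25, uv_area=0.01)
assert packed < full  # the packed face renders blurrier up close
```

Halving a face’s UV edge length quarters its UV area and halves its texel density, which is exactly the blurriness you trade for tighter packing.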
At the end of the day, it’s really more of an art than a science, and something that just begins to come a little more naturally with time. It’s maybe not the flashiest work, but I truly think that getting high-resolution UVs evenly mapped to a complex mesh is one of the most satisfying feelings in this entire process.
Once our model is complete and the UVs are mapped, it’s time to export our FBX file(s). An FBX file is readable by Unreal Engine and contains all the geometry and UV data we just spent all that time perfecting in Blender, so it’s a great universal exchange format for just about any 3D modeling workflow.
Our rule of thumb is that any piece that will be animated in-engine will be its own FBX. As these FBX files will contain all the data we need for the remainder of the process, we are done with Blender for now!
Next up we will be looking at graphic design, so stay tuned for another post next week!
The last few posts have walked through Basic Interactables and Physics Objects, two-thirds of the available interaction types in Interference. However, there is a third distinct type of interactive object known as Focusables. Let’s check them out!
Focusables involve perhaps the most complex set of interactions available to the player. Not to be confused with Basic Interactables, these objects require the player to engage with them before interaction can occur, and the interactions are specifically tailored to each object.
At the outset, Focusables seem relatively simple: “focus” on the object to use it and “de-focus” from the object to return to the game. However, it’s within that focus where interaction becomes complicated, as each object has its own set of usage requirements, many of which are intrinsic to the core gameplay of Interference.
All the critical elements of Interference are Focusable objects: the radio, the map, and the computer. Each of these objects requires its own set of actions to, well, play the game, and dispenses the information the player needs to engage with the narrative; the radio provides the interface for communicating with your trapped friend, the map gives you the necessary spatial information about the facility she is trapped in, and the computer clues you into the dangers lurking around every corner.
The other Focusables, while not as critical to the immediate narrative, give a greater sense of the world of Interference. CCTV monitors give you external views from your desert outpost, a TV and VCR tune you into 80s-era programming, and memos and other scraps of paper provide insight into the lives and personalities of your coworkers. And the linchpin to the entire experience? The book of wordsearch puzzles lying open on the desk.
It’s difficult not to talk about the wordsearch (after all, it’s a featured object in our teaser trailer). While the object itself doesn’t necessarily differentiate itself from the myriad of other distractions scattered around the guard booth, its presence as a Focusable object is representative of the kind of game we set out to make. It was one of the first interactions Brad and I conceptualized when we started talking about player autonomy: give the player something else to focus on (ha), and maybe they’ll forget (or want to forget) about the life-threatening situation playing out through the crackling speakers of the radio.
Even though the functionality of each object may vastly differ, all Focusables share the same set of properties:
“Using” the object requires that the player is focused on said object
The player cannot move freely while using a Focusable object
The player must “exit” the Focusable to return to the play-space
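To give a rough idea of how those shared properties might translate to code (Interference is built in Unreal Engine, but here’s a simplified Python sketch with hypothetical names, not our actual classes):

```python
class Player:
    def __init__(self):
        self.movement_locked = False

class Focusable:
    """Base behavior every Focusable shares."""

    def __init__(self, player):
        self.player = player
        self.focused = False

    def focus(self):
        # Entering focus locks free movement until the player exits.
        self.focused = True
        self.player.movement_locked = True

    def exit_focus(self):
        # Exiting returns the player to the play-space.
        self.focused = False
        self.player.movement_locked = False

    def use(self, action):
        # "Using" the object requires being focused on it; subclasses
        # supply their own tailored interactions via handle_action.
        if not self.focused:
            return False
        return self.handle_action(action)

    def handle_action(self, action):
        raise NotImplementedError

class Radio(Focusable):
    # One object-specific interaction, purely for illustration.
    def handle_action(self, action):
        return action == "tune"

player = Player()
radio = Radio(player)
assert radio.use("tune") is False   # can't use it without focusing first
radio.focus()
assert player.movement_locked and radio.use("tune")
radio.exit_focus()
assert not player.movement_locked
```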
Let’s focus (ha, again) on the facility map to see these properties in action.
The facility map available to the player in Interference is one of the most important objects in the game. Walking through the above steps, let’s see how interacting with the map plays out:
The player clicks on the map, and the camera zooms into the map for a closer look
The player cannot move freely while using the map but can do the following:
Move the camera around the surface area of the map by moving the mouse
Pick up, move, and place pushpins on the map by clicking and moving the mouse
Zoom into the map even closer by pressing the spacebar
The player right-clicks to exit the map and return to the play-space
While Basic Interactables and Physics Objects are important to the overall experience of Interference, Focusables do much of the heavy lifting through their contribution to gameplay and story.
Next time, we’ll put everything together and wrap things up with some final thoughts. Until then!
Interference is chock-full of interactive objects. Last time we talked about Basic Interactables (objects that accept an input to perform a specific task) and our methodology behind ascribing functionality to interactive objects. Today, our topic is much simpler: we’re gonna talk about Physics Objects!
Physics Objects provide a straightforward interaction to the player: pick up and drop. In a sense, Physics Objects are just another implementation of Basic Interactables, the key difference being that these objects aren’t specifically used to perform a task—they simply exist, waiting for a task to be performed on them.
But the binary choice of picking up and dropping an object does not encompass what it’s like to manipulate objects in the real world. Objects can be moved. They can be inspected at all angles. And most importantly, they can be thrown. And in Interference, the same is true.
There isn’t much to talk about regarding the implementation of these actions. Unreal Engine’s physics engine does much of the heavy lifting in terms of simulation. We merely provide the input schemes and applied forces behind tossing your favorite mug at the wall. However, when accounting for player variability in the way these objects are manipulated, we must keep a few things in mind:
1. How do Physics Objects interact with other objects?
There are two ways of interpreting this question. A Physics Object colliding with another object will do just that: collide. This collision might result simply in the Physics Object bouncing back. If the Physics Object is breakable, it might break. If the collided object is also a Physics Object, both objects will exert their forces on each other. These collisions are all handled by the engine.
But what about objects that are for other objects, like a key? A key can’t be a Basic Interactable since it needs to be manipulated by the player and moved to its corresponding lock to trigger an interaction. These Physics Objects are given additional functionality and are designated as an Object-for-Object (or OFO), a subset of Physics Objects that make up a small number of items in the guard booth.
OFOs are told what object they are for, and from there a collection of generic functions determine whether the OFO is in position to trigger an action on the receiving object. As per our method of anticipating intention and breaking down tasks into their fundamentals, we don’t belabor the player with nuances such as twisting the key. Once the key overlaps with a lock’s collision box, it automatically snaps into place and opens the lock.
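Here’s a simplified Python sketch of that generic OFO check (names are hypothetical, and in-engine the overlap comes from a collision-box event rather than a distance test):

```python
from dataclasses import dataclass

@dataclass
class Lock:
    position: tuple
    trigger_radius: float  # stand-in for the lock's collision box
    opened: bool = False

@dataclass
class KeyOFO:
    """An Object-for-Object: a Physics Object that knows its target."""
    position: tuple
    target: Lock

    def tick(self):
        # Generic OFO check: once we overlap the target's trigger volume,
        # snap into place and fire the action -- no key-twisting required.
        dx, dy, dz = (a - b for a, b in zip(self.position, self.target.position))
        if dx * dx + dy * dy + dz * dz <= self.target.trigger_radius ** 2:
            self.position = self.target.position  # snap into place
            self.target.opened = True

lock = Lock(position=(0.0, 0.0, 0.0), trigger_radius=0.1)
key = KeyOFO(position=(0.05, 0.0, 0.0), target=lock)
key.tick()
assert lock.opened and key.position == lock.position
```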
2. How do Physics Objects stay within reach of the player?
I’m glad you asked! All our Physics Objects share a parent class that defines common characteristics of all Physics Objects. One of these variables drives the size of the object’s sphere of influence. If the player character is inside one of these spheres, then the interaction system knows that object is available for interaction.
Keeping objects within the player’s reach at all times mostly comes down to making sure the sphere radius is large enough for every situation in which another object (like the desk) may impede the player’s reach. But that alone isn’t enough: since objects in front of other objects take precedence in our interaction system, an object that is fully blocked by another object (even if the player is within its sphere) can’t be interacted with at all.
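A stripped-down sketch of how that selection might work (hypothetical names; in-engine the “blocked” test is a line trace, which we stand in for here with a simple predicate):

```python
import math

def in_sphere(player_pos, obj):
    # The player must be inside an object's sphere of influence
    # for the interaction system to consider it at all.
    return math.dist(player_pos, obj["position"]) <= obj["influence_radius"]

def interactable_object(player_pos, objects, blocked):
    """Return the object the interaction system would offer, or None.

    blocked is a predicate standing in for the engine's line-of-sight
    trace: a fully blocked object can't be interacted with even when
    the player is inside its sphere.
    """
    candidates = [o for o in objects if in_sphere(player_pos, o) and not blocked(o)]
    # Objects in front (closer to the player) take precedence.
    return min(candidates,
               key=lambda o: math.dist(player_pos, o["position"]),
               default=None)

mug = {"position": (1.0, 0.0), "influence_radius": 2.0}
pen = {"position": (0.5, 0.0), "influence_radius": 2.0}
assert interactable_object((0.0, 0.0), [mug, pen], blocked=lambda o: False) is pen
assert interactable_object((0.0, 0.0), [mug], blocked=lambda o: o is mug) is None
```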
We try to place objects in such a way that these blockages cannot occur. However, there are a few places objects can still go that need a bit more attention. To be honest, we haven’t spent much time addressing these interaction-voids since we’ve yet to lock down the final placement of objects, but the intent is that angled collision planes with low friction will be placed in these locations to gently encourage objects that find themselves there to slide into view of the player.
It is important to note that due to the inherent unpredictability of physics engines, we determined that no Physics Objects will be needed to engage with the narrative components of Interference, so any solutions to the blocked-object issue exist primarily to reduce player frustration rather than to prevent situations that break the experience completely.
3. What happens if a Physics Object “breaks”?
If a Physics Object decides to rebel against our carefully crafted safeguards, your computer will crash and all your files will be erased.
Just kidding! (At least that’s not supposed to happen. We are not liable if it does.)
There are situations where the physics engine will perform erratically. Objects may clip through other objects, get stuck inside geometry, or warp through walls. We’ve done what we can in both the physical and conceptual design of Interference to mitigate any issues, but in a game that takes a sandbox approach to the play-space, the unexpected is, quite frankly, expected.
As mentioned above, no Physics Objects are required to engage with the narrative experience of Interference. However, as an experiment in player autonomy, it’s important that these non-critical elements are functional so that any approach in playstyle is treated fairly. If you don’t want to help your friend Valerie escape, that’s fine. And it shouldn’t feel as if we cut corners on the non-narrative components of Interference to allow for such a thing, because doing so would be antithetical to the entire purpose of the game. Even if it’s supremely hilarious when something like this happens:
Physics Objects comprise most object types found around the guard booth in Interference. Books, writing utensils, BERF balls, you name it! All can be moved around the space completely at the player’s discretion, and as we continue to develop Interference, great care is being taken to make sure these interactions behave properly.
On the next post in this series, we will talk about the third and final type of interaction: Focusables.
Video games exist on a bedrock of player interaction. But have you ever been absorbed in a game only for that engagement to be compromised by questions like, “why can I open this door but not that door” or “why can’t I look at this object closer”?
Interference is a game that experiments with player autonomy, and we knew early in the design process that constraining the playable area to a single room would necessitate heaps of interactive elements to successfully conduct said experiment. And it became clear fairly quickly that we would need to address those types of questions above or risk pulling the player out of the experience altogether.
Long story short: yes, you can interact with virtually everything in Interference.
We spent months prototyping interaction systems before settling on the one we are currently using. The first implementation completely broke our game and sent us straight back to the drawing board, and even now, we are continuously modifying and refining the current system to feel cohesive, yet expansive enough to cover the many “types” of interactions we’ve scattered throughout the guard booth.
Objects are broken into various categories to provide a framework for how we assign interactivity. This classification affords us the ability to create generalizations for how the player interacts with that object; through the use of parent classes, we are able to make an adjustment to one interaction type and have those changes reflected in every object of that type. For the player, categorizing these objects provides a shorthand understanding for how an object will work and what control scheme to expect.
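As a rough illustration of how parent classes let one change ripple through every object of a type (a Python sketch with hypothetical names, not our actual Unreal classes):

```python
class Interactable:
    """Parent class: a change here reaches every interactive object."""
    prompt_key = "E"  # shared control-scheme hint shown to the player

    def interact(self, player):
        raise NotImplementedError

class BasicInteractable(Interactable):
    # Accepts an input to perform a specific task (e.g. a light switch).
    def __init__(self):
        self.on = False

    def interact(self, player):
        self.on = not self.on

class PhysicsObject(Interactable):
    # Picked up, carried, and dropped; the physics sim does the rest.
    def interact(self, player):
        player["held"] = self

class Focusable(Interactable):
    # Must be engaged with before its tailored interactions open up.
    def interact(self, player):
        player["focused_on"] = self

# One adjustment to the parent type is reflected in every subclass.
Interactable.prompt_key = "F"
assert BasicInteractable.prompt_key == PhysicsObject.prompt_key == "F"
```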
Over the next few weeks, we will talk in-depth about the various “types” of interactions Interference has to offer: Basic Interactables, Physics Objects, and Focusables. Let’s start with Basic Interactables.
Interactive objects in their simplest form accept an input to perform a specific task. Some simple examples:
Light switches: turn lights on and off
Doors and drawers: can be opened and closed
Cactus: can be touched
Basic interactivity can be expanded on to create seemingly more complex interactions based on object states and external factors. For instance, objects that use electrical power in Interference cannot be turned on and off during power fluctuations or outages.
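That sort of state gating can sit on top of a basic toggle without much fuss. A simplified sketch, with hypothetical names:

```python
class PowerGrid:
    def __init__(self):
        self.stable = True  # flips to False during fluctuations/outages

class Lamp:
    """A Basic Interactable gated by an external factor (booth power)."""

    def __init__(self, grid):
        self.grid = grid
        self.on = False

    def interact(self):
        # The basic input is a toggle; the external power state decides
        # whether that toggle does anything right now.
        if not self.grid.stable:
            return False
        self.on = not self.on
        return True

grid = PowerGrid()
lamp = Lamp(grid)
assert lamp.interact() and lamp.on        # normal toggle works
grid.stable = False
assert not lamp.interact() and lamp.on    # outage: input is ignored
```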
For any interaction, we break down its real-world fundamentals to create a shorthand version for use in-game. Intention becomes important in crafting these simplified representations. Take our coffee system, for example.
How does one use a coffeemaker in real life? Well…
Water, filter, and coffee grounds go into the coffeemaker
A button is pressed to brew the coffee
Coffee is poured from the carafe into a mug
The coffee is consumed
This process is one that’s ingrained into our daily lives; we don’t often have to think about the steps in the coffee-making process. However, while pouring a liquid from a carafe to a mug is generally a task that requires very little thought in real life, transferring that interaction to a video game is not so simple. All of a sudden you’re dealing with liquid simulation, multiple objects, and the need to replicate the motor skills necessary to delicately pour scalding hot liquid into a mug. Not to mention, what if the mug overflows? Does it burn the player? Do we let the player pour coffee on everything in the booth?
And the questions cascade. Believe us, we’d love to allow for such complicated interactions as soaking books with coffee, but we’ll leave that for real life. With the added layer of screen and controller, performing these steps becomes a clunky, complicated ordeal (VR perhaps eliminates this issue, but that’s a whole other story). Generally, we keep interactions self-contained, which is beneficial to both us (we want to, ya know, finish the game) and the player (through setting simple expectations for how objects work).
So, how do we approach the coffee-making process in-game? First, we distill the steps above into their fundamentals of intent: brewing the coffee and drinking it.
Then, we break down how best to represent each fundamental step:
If the carafe is empty and power is stable, interacting with the coffeemaker will simply brew coffee. While the coffee is being brewed, the player cannot interact with the coffeemaker.
Once the coffee is brewed and the carafe is full, interacting with the coffeemaker will allow the player to drink coffee (assuming there is at least one unbroken mug left in the booth). While the player is drinking coffee, player input is disabled. The level of available coffee lowers until the carafe is once again empty.
The rest of the process is simply represented by objects that can be found in the booth: a coffee tin, coffee filters, and mugs.
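Put together, those representations amount to a small state machine. A simplified sketch (the names and the instant brew/drink transitions are illustrative only):

```python
EMPTY, BREWING, FULL, DRINKING = "empty", "brewing", "full", "drinking"

class CoffeeMaker:
    def __init__(self, mugs_available=1, power_stable=True):
        self.state = EMPTY
        self.mugs_available = mugs_available
        self.power_stable = power_stable
        self.coffee_level = 0.0

    def interact(self):
        # Empty carafe + stable power: interacting starts a brew.
        if self.state == EMPTY and self.power_stable:
            self.state = BREWING
        # Full carafe + at least one unbroken mug: interacting drinks.
        elif self.state == FULL and self.mugs_available > 0:
            self.state = DRINKING  # player input disabled while drinking
        # While brewing or drinking, interaction is simply unavailable.

    def tick(self):
        # Collapses the timed states into single steps for the sketch.
        if self.state == BREWING:
            self.coffee_level = 1.0
            self.state = FULL
        elif self.state == DRINKING:
            self.coffee_level = 0.0  # level lowers until empty again
            self.state = EMPTY

maker = CoffeeMaker()
maker.interact()                 # starts brewing
maker.interact()                 # no effect mid-brew
maker.tick()
assert maker.state == FULL
maker.interact()                 # drink (a mug is available)
maker.tick()
assert maker.state == EMPTY and maker.coffee_level == 0.0
```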
By breaking down complex processes into simple, straightforward interactions, we are able to quickly add all sorts of interactivity throughout the booth. However, Basic Interactables are merely one classification of interactivity; we are only scratching the surface on the depth of interactivity Interference has to offer.
Next time, we will talk about our next type of interaction: Physics Objects.
We put it off for as long as we could, but the fact of the matter is developing a video game costs money. In our case, we’ve been incredibly frugal, but since every cent has come out of our own pockets, we’ve turned to Patreon to help offset some of our necessary expenses.
First and foremost, Interference will be made whether we only have one Patreon supporter or 1000. Development may take a bit longer with fewer supporters, but it’s a pace we’ve been accustomed to for the last year and a half since we started.
Second, any money we do receive through Patreon will not be pocketed. 100% of donations (after Patreon’s share, of course) will be put towards covering the following:
And lastly, there’s something in it for you! Our Patreon supporters will receive exclusive access to development builds, behind-the-scenes images and videos, opportunities to provide feedback that may affect the game itself, their name in the credits, and more! We won’t be locking any of these goodies behind arbitrary “tiers” either, not now nor in the future since we believe anyone should be able to be part of our community.
And that’s that! Hit us up on Twitter if you have any questions, comments, or concerns. And if you would like to become a Patreon supporter, we’d be honored to have ya:
After nearly 15 months of development, we are very excited to formally announce to the world our first game slated for public release: Interference.
Well, technically we “announced” Interference back in January. And we’ve been posting periodically about it on various channels since then. We’re not exactly releasing state secrets here, is what we’re saying. But we do have some cool new stuff to share with you all, and we think that’s more than enough reason to get excited!
A New Name
First things first, our team has a new name. We’ve transitioned from our longtime moniker of The Scary Farm to adopt Fear of Corn because we wanted a name that was unique and memorable. We loved “The Scary Farm” for its simplicity, but we feel like Fear of Corn speaks more to our approach to making games: it’s multilayered, has a sense of humor, and it’ll stick with you.
But the new name is just the tip of the iceberg.
A New Purpose
Our primary motivation for an official unveiling of Interference now, as opposed to back in January, is a renewed sense of purpose for why we are pouring countless hours into this game. What is driving us to get it over the finish line? Because investing this much time into a game with no concrete purpose would be crazy, right?
We are serious about seeing Interference through because we know that it’s going to be a narrative experience that’s not exactly like anything else that’s been made before it. We’ve been working with two brilliant storytellers to craft true-to-life dialogue and build it into a compelling interactive narrative framework. All the while we’ve continued to explore the idea that the player can participate in a game’s story without being the driving force behind it.
In Interference, the player is not the most important figure in the game’s world. They’re not even the most important figure in the particular story we’re telling. They’re only connected to the action via a two-way radio connection, which can be totally ignored if the player so desires. But the story is still gonna happen with or without them.
We feel that this approach to storytelling is refreshing because it’s more true to the consequences of real-life decision-making, which in turn makes the narrative stakes feel more real. Indecisiveness in real life doesn’t mean lingering on the dialogue options while you run to look up which narrative path will yield the best quest rewards. Indecisiveness in real life can end friendships and cost lives, and we want a game that captures that in some way.
We all love being the center of the universe in games from time to time, but what effect does this have on storytelling? Can true player freedom and a realistic sense of narrative consequence coexist in a game? We’re out to prove that they can.
A New Trailer
Finally, we’re very, VERY excited to announce our first official trailer for Interference! If you haven’t seen it yet, check it out right here:
That’s all for now, but we will have much more in the coming weeks, so we encourage you to follow us on Twitter, check out our TIGSource thread, and join our mailing list! We’d love to hear from you so please don’t hesitate to reach out on any of those platforms with your thoughts.
Okay, that’s not totally true, but when it’s 3:00 AM and you’ve been pulling your hair out for hours over what should have been an easy-to-fix bug, it’s easy to question why you’ve chosen this life for yourself in the first place.
There’s not much elaboration that needs to be made on this topic, but I must warn anyone reading this of the dangers of this guideline: it won’t be fun all the time. There will always be those late nights, those inane, frustrating tasks. I just spent the last few weeks reformatting a flowchart for the intro dialogue sequence for Interference. Was it fun? Hell no. Does it allow us to turn our focus back to the parts of development that are fun? You betcha.
It’s all about balance; the “fun” parts should counteract the “not-so-fun” parts. And the “fun” can come from any number of places. It can be found in the feeling of accomplishment in implementing a new game feature, the hilarity of designing fake brands to populate the game environment, the relief of finally squashing that bug at 3:37 AM. If at any point Brad and I take a holistic view of our work and realize we aren’t enjoying it, we’re in big trouble, because having fun is the driving force behind our motivation.
The other guidelines are the fortifications, the safeguards we’ve put into place to protect our sanity. And by periodically checking our progress against the guidelines we set for ourselves over a year ago, we’ve stayed strong, we’ve persevered… we’ve had fun.
One thing that Jared and I have always come back to over the course of our nearly 4 years of experience building games with Unreal Engine is the fundamental fact that we both enjoy it. So why is it that we’ve given up in the past?
We’ve discussed in previous posts a few of the pitfalls that have contributed to our inability to follow through on past attempts at releasing a finished game project, but the truth is that it wasn’t over-ambition or perfectionism that kept us from finishing our previous games. We didn’t finish those games for a very simple reason: we eventually stopped working on them.
And that is exactly what we tried to address with our third guideline: Maintain Accountability.
It’s easy to say at the outset of a long creative project that you simply won’t stop working on it until it’s done. But that project will eventually start to feel daunting, and if you don’t take measures to hold yourself accountable, letting it fizzle out will become a natural outcome, despite your best intentions.
Working on Interference is not a full-time commitment for us. Jared and I both have jobs and lives and only so many hours in the day in which we can fully commit to working on the game. We also live in different timezones. We knew going in that if we wanted to make this work, we’d need to be better about communication and task management than we had been on past projects.
For the last year, we’ve been committed to tracking our work with Trello cards and checking in at least once per week via video call, with tons of communication over Slack in between. Life can be unpredictable and there are certainly days where we don’t really accomplish much. But by setting a cadence of weekly check-ins, we’ve been really effective at never letting those periods of unproductivity stretch beyond a few days. We knew from experience that a week of not working on a game very easily becomes two weeks, which becomes two months and eventually three years.
Weekly check-ins disrupt this natural depression in productivity over time by making sure we can always reassess our goals and objectives if we ever find ourselves losing steam. We avoid hard deadlines, but always set reasonable weekly goals so we can keep a pulse on how we’re doing and stay flexible in our approach to avoid burnout. Working on the game for 14 months straight sounds exhausting. Working on it for just a few more days until the next check-in though? How can I give up when I’m just three days away from hitting the goal I set for myself just last week!?
Of course, even with all the accountability in the world, other factors can still sap the fun out of a project and make it start to feel like a drag. But the beauty of staying accountable is that we’re always in control of the fate of the project.
Had we decided 3 months in that we just weren’t feeling it and wanted to put the game on hiatus, it would have been a bummer, but it at least would have been a conscious decision and not something we look back on years later and just think “man what ever happened with Interference…?”.
Thankfully, our other guidelines have kept us motivated well beyond that 3 month mark, and their importance cannot be overstated. But it’s been the commitment to accountability that has given us a framework to channel that motivation into an actual game, and seeing our progress unfold week by week has been the biggest motivator of all.