My son loves Disney's "Frozen 2." I must have watched it with him a hundred times. In the film, ice queen Elsa ventures into the unknown with courage and curiosity to discover her true identity and purpose.
Software engineers sometimes need to be like Elsa—minus the ice magic, of course. Finding solutions to novel problems requires that we brave the fog of uncertainty. We need to get things done with incomplete knowledge. But when does strategic ignorance cross the line into reckless risk-taking?
In this post, I explore systematic approaches to handling uncertainty in software projects.
A theory of knowledge
Engineering decisions require balancing what we know against what we don't. If perfect knowledge were possible, we'd always implement the optimal solution. But reality is messy.
Categorising knowledge isn't an academic exercise—it's a tool for deciding where to invest our limited research time and when to start building. With this in mind, we can categorise information into three groups:
Known: Facts and constraints we're confident about.
Knowable: Information we could discover with some effort.
Unknowable: Uncertainties that cannot be resolved in advance.
The first two categories are straightforward—what we know and what we could learn. The third category is trickier: things we can't reasonably know until we're in the weeds of building the software.
Some “unknowables” are what Nassim Nicholas Taleb calls “Black Swan” events. Taleb demonstrates how our planning consistently underestimates such events, not because we're careless, but because they lie outside our predictive capabilities.
In tech, as in most domains, some things are unknowable, like how markets and cultures will evolve in relation to what we're building. In the structured world of software, many things are theoretically knowable. Practically, though, some information requires so much effort to obtain that it is very nearly unknowable. It’s a continuum ranging from easy-to-know to practically unknowable.
When venturing into something new, whether it's a greenfield project or major changes to existing systems, it can be helpful to map the information that matters to us most (knowable or not) so we can increase our chances of building the right thing in the right way.
The knowledge quadrant
Every project begins with decisions that require learning. How else can we make informed choices in a vast, complex problem space without first acquiring some knowledge?
To keep things practical, I use a 2x2 matrix to prioritise what I need to learn. It’s like Eisenhower’s classic “Urgent-Important” matrix, except I’m not concerned with a temporal dimension; instead, I prioritise learning according to impact and effort.
When making decisions under uncertainty, it can help to create a short-list of “things you’d like to know” and organise it to maximise certainty with minimal effort. At the end of the day, we gotta ship products.
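To make this concrete, here's a toy sketch of how such a short-list might be bucketed. The questions, labels, and scoring below are invented for illustration; in practice the judgment calls behind "important" and "easy" are the hard part.

```python
# Toy sketch: bucket open questions by impact and effort.
# The questions and labels below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    important: bool  # would the answer change what we build?
    easy: bool       # can we answer it in a day or two?

def quadrant(q: Question) -> str:
    if q.important and q.easy:
        return "Research now"
    if q.important and not q.easy:
        return "Build, measure, learn"
    if not q.important and q.easy:
        return "Pick something and move on"
    return "Ignore for now"

backlog = [
    Question("Build or buy payment processing?", important=True, easy=True),
    Question("Best conflict resolution for co-editing?", important=True, easy=False),
    Question("Which logging library?", important=False, easy=True),
    Question("Exact scaling needs in three years?", important=False, easy=False),
]

for q in backlog:
    print(f"{quadrant(q):<28} <- {q.text}")
```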
You can use the knowledge quadrant to do exactly that. Each quadrant suggests a default move:
Important + Easy
These are no-brainers. Do quick research, build a prototype, resolve the uncertainty.
Consider payment processing in a web app—deciding whether to build your own system or use an established provider like Stripe. Basic research into security requirements, PCI compliance, and implementation time will quickly reveal that using an existing payment provider is a good idea. This high-impact decision can be made with a small investment in upfront research.
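For a sense of scale, here's roughly what the "buy" option looks like with Stripe's Python SDK. This is a sketch: the key and amount are placeholders, and error handling is omitted.

```python
# Sketch: charging a customer via a payment provider (Stripe's Python SDK).
# The API key and amount are placeholders; error handling is omitted.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

# A PaymentIntent represents one charge attempt; the provider handles
# card storage, PCI compliance, and authentication flows for you.
intent = stripe.PaymentIntent.create(
    amount=2000,          # amount in the smallest currency unit (20.00 USD)
    currency="usd",
    automatic_payment_methods={"enabled": True},
)
print(intent.client_secret)  # handed to the front end to confirm payment
```

Compare those few lines against building card storage and PCI compliance in-house, and the quick upfront research practically does itself.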
Important + Hard
This is where agile development methods shine. Start working, target the uncertainty, and create feedback loops to learn as you build.
When designing a real-time collaboration feature (like multiple users editing the same document), the exact conflict resolution strategy that will provide the best user experience isn't something that’s easy to find out. Interactions between concurrency, latency, and user expectations make that complicated. An answer may theoretically be knowable, but probably not without significant effort. Instead of trying to design a perfect system upfront, start small, test it with real people, and improve it based on feedback and your own observations.
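As a concrete starting point, a deliberately naive strategy gives you something real to put in front of users. The sketch below assumes the simplest approach I can think of, version-checked last-write-wins that rejects stale edits; it is a seed for feedback, not a recommendation.

```python
# Sketch: the simplest possible conflict strategy as a starting point.
# Each document carries a version; edits based on a stale version are
# rejected, and the client must re-fetch and retry.

class StaleEditError(Exception):
    pass

class Document:
    def __init__(self, text: str = ""):
        self.text = text
        self.version = 0

    def apply_edit(self, new_text: str, base_version: int) -> int:
        if base_version != self.version:
            # A concurrent edit won; surface the conflict instead of
            # guessing how to merge. User feedback will tell us whether
            # people need real merging (e.g. OT or CRDTs) or retry is fine.
            raise StaleEditError(f"based on v{base_version}, now at v{self.version}")
        self.text = new_text
        self.version += 1
        return self.version

doc = Document("hello")
doc.apply_edit("hello world", base_version=0)    # succeeds -> v1
try:
    doc.apply_edit("hello there", base_version=0)  # concurrent edit, stale
except StaleEditError as err:
    print("conflict:", err)
```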
Not Important + Easy
Make assumptions, pick something and move on. If it's not important, it doesn't matter if you're wrong.
When choosing a logging library or deciding on code formatting standards, just pick something reasonable based on quick research or personal preference. These decisions rarely make or break a project.
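For example, in Python, the standard library's logging module is a perfectly defensible "pick something" answer:

```python
# Sketch: a "good enough" logging choice. The stdlib logger is easy to
# swap out later if the decision ever turns out to matter.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("checkout")
log.info("picked a logging setup in five minutes, moving on")
```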
Not Important + Hard
Don't waste precious engineering time here.
Like trying to predict exactly how your application might need to scale three years from now if you succeed beyond all expectations. Instead of over-engineering for hypothetical future scale, build a solid architecture that handles current needs well with reasonable extension points, knowing you might change parts of the system as you learn more.
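In code, a reasonable extension point can be as small as an interface seam. Here's a minimal sketch; the Storage protocol and in-memory backend are hypothetical examples, not a prescription.

```python
# Sketch: a small seam instead of speculative scale engineering.
# Callers depend on the Storage protocol, not a concrete backend, so a
# sharded or distributed implementation could slot in later if needed.
from typing import Protocol

class Storage(Protocol):
    def get(self, key: str) -> bytes | None: ...
    def put(self, key: str, value: bytes) -> None: ...

class InMemoryStorage:
    """Handles today's needs; replaceable behind the same interface."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def get(self, key: str) -> bytes | None:
        return self._data.get(key)

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

def save_profile(store: Storage, user_id: str, blob: bytes) -> None:
    store.put(f"profile:{user_id}", blob)

store = InMemoryStorage()
save_profile(store, "42", b"{}")
print(store.get("profile:42"))
```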
The solution landscape
Think of your journey into the unknown as navigating an unfamiliar mountainous landscape. Your goal is to plant a flag on the highest mountain, but you don’t have a map.
You have two activities at your disposal:
Scouting the terrain (exploration): Climbing nearby high points to see what’s ahead, sending small parties in different directions to discover paths, obstacles and resources. This is your research, prototypes and user interviews.
Traveling the path (exploitation): Committing to a direction and making efficient progress along it. This is your focused development, optimisation, and refinement work.
Both activities are important, but they compete for your effort. Every day spent scouting is a day not spent traveling towards the destination. Every day traveling without terrain awareness risks taking you down the wrong path and forcing you to backtrack.
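This trade-off has a textbook formalisation: the multi-armed bandit problem. A minimal epsilon-greedy sketch (the paths and payoffs below are invented) makes the budget tension explicit; every round spent sampling a random path is a round not spent traveling the best-known one.

```python
# Sketch: epsilon-greedy exploration/exploitation on invented payoffs.
# With probability epsilon we scout (try a random path); otherwise we
# travel the best path found so far.
import random

random.seed(7)
true_payoffs = [0.3, 0.5, 0.8]       # hidden quality of each path
estimates = [0.0, 0.0, 0.0]
pulls = [0, 0, 0]
epsilon = 0.1                        # fraction of effort spent scouting

for step in range(1000):
    if random.random() < epsilon:
        path = random.randrange(3)               # explore
    else:
        path = estimates.index(max(estimates))   # exploit
    reward = random.gauss(true_payoffs[path], 0.1)
    pulls[path] += 1
    estimates[path] += (reward - estimates[path]) / pulls[path]  # running mean

print([round(e, 2) for e in estimates], pulls)
```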
The landscape metaphor isn't just helpful imagery—it connects to optimisation theory in computer science. When building software, we're solving what mathematicians would call an "optimisation problem with incomplete information"—finding the best solution without seeing the whole map. A journey through this imaginary landscape reveals relevant optimisation constraints:
Local / global optima: You might climb the nearest hill (local optimum), never realising a much taller mountain (global optimum) was just outside your line of sight. This is the same challenge optimisation algorithms like gradient descent face: getting stuck on a good solution while missing the best one (see the sketch after this list).
Gradient locality: The further ahead you look, the less accurate your predictions become. Plans made for distant terrain will be inaccurate. In optimisation theory, this mirrors how gradients provide only local information—they tell you which direction to climb at your current position, but can't reliably predict the landscape beyond your immediate vicinity.
Path dependence: The cost of changing direction increases dramatically the further you travel along a given path. Optimisation theorists would recognise this as a form of "hysteresis", where your history constrains your future options.
Unknown unknowns: No matter how thoroughly you scout, some features of the terrain only reveal themselves when you encounter them directly. This echoes the "No Free Lunch" theorems in optimisation, which show that no search strategy beats any other across all possible landscapes; there's no substitute for sampling the terrain you're actually on.
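As promised above, here's a tiny sketch of the local-optimum trap on an invented one-dimensional terrain: greedy hill climbing from a fixed start plants its flag on the nearest hill, while a few random restarts (scouting) find the taller one.

```python
# Sketch: greedy hill climbing on an invented 1-D terrain with two peaks.
# A single climb gets stuck on whichever hill is nearest; random
# restarts (scouting) reveal the taller one.
import math
import random

def height(x: float) -> float:
    # Two hills: a small one near x=1, a tall one near x=6.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-((x - 6) ** 2) / 2)

def climb(x: float, step: float = 0.05) -> float:
    while True:
        uphill = max(x - step, x + step, key=height)
        if height(uphill) <= height(x):
            return x          # no neighbour is higher: a summit
        x = uphill

random.seed(0)
print("single climb from x=0:", round(height(climb(0.0)), 2))    # local optimum
best = max((climb(random.uniform(0, 10)) for _ in range(10)), key=height)
print("best of 10 random restarts:", round(height(best), 2))     # taller peak
```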
Software engineers often face this dilemma: do we invest time exploring potential solutions (scouting) or commit to a path and start building (traveling)? This challenge is particularly complex because software landscapes, unlike physical terrain, exist in multiple dimensions, evolve continuously, and shift in response to our own actions.
Lost in the woods
I've seen it happen before (and done it myself, too): teams get stuck in analysis paralysis, debating theoretical concerns while competitors ship products. Analysis paralysis is like spending all your time on the nearest vantage point, mapping every possible route but never actually traveling. The quest for perfect knowledge is understandable—but it's ultimately self-defeating.
Behavioural economists have studied this tendency. In classic experiments on decision-making under uncertainty, Daniel Ellsberg identified "ambiguity aversion": our irrational preference for known risks over unknown ones, even when they're mathematically equivalent. Ambiguity aversion makes us prefer well-mapped routes with known obstacles over potentially shorter but uncharted paths.
I’ve also seen the opposite: teams charging ahead with blind confidence, only to build something nobody wants or that collapses under real-world conditions.
Daniel Kahneman and Amos Tversky identified the cognitive bias at work here, the "planning fallacy", where we systematically underestimate complexity and overestimate our understanding. The planning fallacy has us underestimating how difficult a stretch of terrain will be to traverse, even when similar journeys have taken longer in the past.
As such, the magic is in the middle—enough exploration to set a clear direction, followed by decisive action with built-in feedback.
Conclusion
Uncertainty isn't a bug in software engineering—it's a feature. It's what makes our field challenging, dynamic, and fun. Mastery in software isn't about memorising APIs or frameworks (hello LLMs). It's about developing judgment under uncertainty. At least, that’s part of it.
I've provided a few ideas to help you tackle uncertainties more systematically. The rest is up to you—mastering this skill requires artistic intuition and scientific rigor. Though I’m far from perfecting this myself, I'm committed to developing this skill because I think it's essential for successful software projects.
Like Elsa venturing into the unknown, we must move forward with courage, humility, and a willingness to adapt as we learn. The path reveals itself to those in motion.
May you find the questions you seek,
// // /////
// // // //
//////// //
// // /
// ////////