How to make hard decisions and have impact

Want an insider's view into what successful career exploration looks like?

Want to understand why particular approaches fail?

Read on! 

Some career decisions are easy. There’s a clear best option. 

Many are not easy. Where are you a good fit? What would you be best at? What work is most important? 

These questions can feel baffling. How do you even start getting the information to make these decisions well? 

Do you wish you could just look at how successful people made their decisions? What made them confident in their plans?

Well, here you can. Here are 10+ real stories of career decisions, each a firsthand account from a real person. You can see what works -- and what doesn’t. 

Even better, these stories illustrate three big tools for making important, complex decisions. You’ll learn high-level strategies for efficiently gathering vital information, so that you’ll be able to confidently make decisions.  

In case you’re wondering how I came up with these, I brainstormed 25+ examples, clustered them, got feedback from a few people, and then selected the best examples for the most important clusters.

So, why should you read 6k words of stories? Why don’t I just give you the tools in a bullet list? 

Because abstract concepts alone aren’t enough. There are already lots of resources for generic advice. But hard career decisions are nuanced and complex. 

When you only have the abstract advice, it can be hard to apply it to all of the situations where you need it. Personal fit tests for someone exploring bio careers are going to look very different from my tests for journalism. It’s also easy to think you understand the advice when actually you’re missing important details. 

For example, 80,000 Hours has some great advice on investigating uncertainties. However, beyond talking to people and reading their posts, their advice for testing is “Look for ways to test your uncertainties.” They give one example of what this might look like, but one example isn’t enough. That sentence deserves its own entire blog post! 

This is that blog post. The stories teach you the details to flexibly apply these tools. They’re meant to convey the tools in depth so you can apply them yourself in a variety of situations. Because, of course, career decisions don’t stop once you have a job. Doing research, starting a charity, or planning your team’s next project all benefit from the same tools.

And, hey, if you’re not sold on reading the case studies, you can always jump to the conclusion to read the super high level summary of the whole process. 

But I hope you’ll read the whole thing. I think most of the value comes from seeing the strategies in action. 

Data-rich experiments 

Identify your key uncertainties, then find the richest real-world data possible to help you reduce them. 

For data-rich experiments, you want to get as close to the real-world setting as possible. If you’re testing job fit, try doing work that resembles the real job as closely as you can. 

You’ll need to start by identifying your key uncertainties, then find ways to quickly get data to reduce those uncertainties. 

Data-rich experiments usually structure effort in the order that decreases your uncertainty most quickly (e.g. start with the steps most likely to fail and work toward those least likely to fail). You want to tackle your biggest uncertainties early, with tests that yield a lot of information.

Rich data is usually external. There’s a detailed, nuanced world out there, and you need to have lots of data points from bumping into it. Get your hands dirty! Go talk to people, try things, learn how the world works. 

For my career exploration, “finding the richest data” meant looking for experiments where I expected to be surprised -- where I couldn’t easily predict the outcome. For example, I didn’t know what I would discover by submitting pitches. I expected more information on whether I liked the activity, but it was novel enough that I expected to learn surprising things. Just trying something new is often a rich source of data, since my prior models of it are poor. (If I’m repeating things I’ve done many times, I don’t expect to learn much.)

I did some tests as “sanity checks” to confirm I didn’t have gaping misunderstandings. But in general, if I expect a certain outcome, I don’t learn much by testing it. So I look for those tests where I can’t confidently predict the outcome. 

Better yet, I look for tests that will give me detailed, nuanced feedback. If I’m just testing whether I can draft a post in a day, the answer is yes or no. A richer test investigates which factors predict how fast I draft a post. An even richer test (for career exploration) would have been seeing how quickly I could draft a journalism-style piece and comparing it against a published piece. The data gets richer the more detail I extract and the closer the test is to the real-world scenario I care about. 

Question to ask yourself: What are my key uncertainties right now? What are the most detailed, closest-to-real scenario tests I could run to reduce those uncertainties? 

Extra:

1. Seek out high-variance tests. 

This method clicked for me when I sought out tests where the result was high variance - they could go either way, and I couldn't predict in advance what I would learn. I suspect that’s often a good litmus test for whether an experiment might yield rich data. 

Asking other people for advice or reading about the topic is a good starting point for finding data-rich tests. My career tests were based on advice from people in the field. My user-testing interviews were based on the process in the book Sprint. 

2. Do what’s uncomfortable. 

By definition, testing things means you’re not sure what the outcome will be. That’s often uncomfortable. For example, you can spend hours debating the relative merits of two jobs, but if your biggest uncertainty is whether you can get job offers, you should probably just apply and then compare the offers you receive. You’ll save yourself a lot of deliberation that wasn’t ultimately moving you forward. 

In my therapy platform project, I was so tempted to just start designing evaluations. I could have worked quite happily to create a project I loved – without realizing it had serious flaws that would later make it fail. Instead, I got feedback early, far before I felt ready to do so. Because I got that feedback early, I could pivot without wasting too much effort on the discarded idea. 

Don’t wait until you feel totally ready to share your idea. Test your assumptions, get feedback, and crash your idea against reality early and often. 

Question to ask yourself: Is there a test that would give me valuable information that I’m avoiding because it feels ughy or scary or uncomfortable? 

Ben Kuhn’s hiring process

Ben Kuhn plans behavioral interviews carefully so that he extracts enough data to make informed hiring decisions. 

“One of the worst mistakes you can make in a behavioral interview is to wing it: to ask whatever follow-up questions pop into your head, and then at the end try to answer the question, “did I like this person?” If you do that, you’re much more likely to be a “weak yes” or “weak no” on every candidate, and to miss asking the follow-up questions that could have given you stronger signal.”

He plans detailed questions in order to get rich data about how candidates would likely perform if he hired them. For example, he emphasizes digging into the details. “Almost everyone will answer the initial behavioral interview prompt with something that sounds vaguely like it makes sense, even if they don’t actually usually behave in the ways you’re looking for. To figure out whether they’re real or BSing you, the best way is to get them to tell you a lot of details about the situation—the more you get them to tell you, the harder it will be to BS all the details.” 

Giving What We Can’s failed forum 

Michelle: When we were building a forum for Giving What We Can, we were trying to build a forum that was exactly of the kind that we wanted, and it ended up really blowing out as a project. 

I hadn't managed a tech project before. We were still mostly volunteers, and it ended up taking many months and just totally wasn't worth it. It didn't end up being used very much at all. 

I think that's a case where I hadn't properly zoomed out and been like, "Okay, how important actually is this, and at what point should we pivot away from working on this, even if we put quite a bit of time into it?" 

Lynette: Sounds good. Knowing what you do now about zooming out, what would you have done differently early on? Concretely, what would that have looked like?

Michelle: I think it probably would have looked like doing more of a minimum viable product. I think we considered this at the time and our worry was that a forum only works if you get enough people on it. If you do something that's fine but not great, then you get a few people, and you just don't get enough. It's bound to fail. 

I think, for example, the Giving What We Can Community Facebook group, which I think is what we ended up going with, has actually done pretty well and got fairly good engagement on it. I think I might have just ended up sticking with that. 

I might have tried something like a Google Group with some different threads and then sent that around and been like, "Do people want something like this?" I guess something that was quick to build and could immediately see whether people were using it. 

Testing key uncertainties in my career exploration

Around the beginning of 2023, I started a career review to decide if I wanted to do something besides coaching. 

I liked writing and thought journalism seemed potentially impactful. However, I knew basically nothing about what journalism actually involved day-to-day and I had only a vague theory of change. So my key uncertainties were: What even is journalism? Would journalism be high impact/was there a good theory of change? Would I be a good fit for journalism? 

Cheapest experiments: 

So I started by doing the quickest, cheapest test I could possibly do: I read 80,000 Hours’ profile on journalism and a few other blog posts about journalism jobs. This was enough to convince me that journalism had a reasonable chance of being impactful.

Meanwhile, EA Global rolled around, and I did the second-cheapest quick test I could do: I talked to people. I looked up everyone on Swapcard (the profile app EAG uses) who worked in journalism or writing jobs and asked to chat. Here my key uncertainties were: What was the day-to-day life of a journalist like? Would I enjoy it? 

I quickly learned about the day-to-day work: for example, the differences between staff and freelance journalism jobs, and how writing is only one part of journalism – the ability to interview people and land stories also matters. I also received advice to test personal fit by sending out freelance pitches.

Deeper experiment 1: 

On the personal fit side, one key skill the 80,000 Hours profile emphasized was the ability to write quickly. So a new, narrowed key uncertainty was: Can I write fast enough to be a journalist?

So I tried a one-week sprint to draft a blog post each day (I couldn’t), and then a few rounds of deliberate practice exercises to improve my writing speed. I learned a bunch about scoping writing projects. (Such as: apparently, I draft short posts faster than I do six-thousand-word research posts. Shocking, I know.) 

It was, however, an inconclusive test for journalism fit. I think the differences between blogging and journalism meant I didn’t learn much about personal fit for journalism. In hindsight, if I were optimizing for “going where the data is richest”, I would have planned a test more directly relevant to journalism. For example: picking the headline of a shorter Vox article, trying to draft a post on that topic in a day, and then comparing it with the original article. 

Deeper experiment 2: 

At this point, I had a better picture of what journalism looked like. My questions had sharpened from “What even is this job?” to “Will I enjoy writing pitches? Will I get positive feedback? Will raising awareness of AI risks still seem impactful after I learn more?” 

So I proceeded with a more expensive test: I read up on how to submit freelance pitches and sent some out. In other words, I just tried doing journalism directly. The people I’d spoken with had suggested some resources on submitting pitches, so I read those, brainstormed topics, and drafted up a few pitches. One incredibly kind journalist gave me feedback on them, and I sent the pitches off to the black void of news outlets. Unsurprisingly, I heard nothing back afterwards. Since the response rate for established freelance writers is only around 20%, dead silence wasn’t much feedback. 

Instead, I learned that I enjoyed the process and got some good feedback. I also learned that all of my pitch ideas had been written before. Someone, somewhere, had already published a take on each of my ideas. The abundance of AI writing undermined my “just raise awareness” theory of change. 

Deeper experiment 3: 

Since I was now optimistic I would enjoy some jobs in journalism, my new key uncertainties were: Could I come up with a better, more nuanced theory of change? Could I get pieces published or get a job in journalism?  

I applied to the Tarbell Fellowship. This included work tests (i.e. extra personal fit tests), an external evaluation, and a good talk about theories of change, which left me with a few promising routes to impact. (Yes, applying to roles is scary and time-consuming! It’s also often a very efficient way to test whether a career path is promising.) 

Future tests: 

Now my key uncertainties are about how I’ll do on the job: Will I find it stressful? Will I be able to write and publish pieces I’m excited about? Will I still have a plausible theory of change after deepening my models of AI journalism? 

It still feels like I’m plunging into things I’m not fully prepared for. I could spend years practicing writing and avoiding doing anything so scary as scaling down coaching to work at a journalism org – at the cost of dramatically slowing down the rate at which I learn. 

Learning my idea sucked 

At EAG 2022, I stumbled onto some conversations about the need for vetted EA therapists and coaches. Could we thoroughly evaluate a few providers? We could provide detailed insight into their methods and results, so that EAs could find providers with solid evidence of impact. Would it be possible to identify coaches who get 10x the results for their clients? 

I was really excited about this idea. 

Since I’m a fan of lean methods, I designed some user-testing interviews for my idea. If I were to build something like a mini-GiveWell-style evaluator for therapists and coaches, it would probably take months or years of effort. Before I invested that, I wanted to refine my idea. The goal at first wasn’t to build the vetting process more quickly; the goal was to figure out what vetting process to build. 

So I followed the process outlined in Sprint. I built mock prototypes with fake data showing the kinds of evaluation info I hoped to include, plus Calendly links and availability data. I scheduled user-testing interviews and watched how participants interacted with the prototypes. I also hopped on calls with a few providers to discuss the possibility of including them in the vetting process. 

Basically, I jumped into the data as richly as possible. And it told me that my precious, exciting idea...sucked.

First, the providers had limited space for new clients. One had a ten-month waiting list to work with her. Even if each provider could take on 20 or 30 clients, they already had full schedules and would only gradually have space for new clients. My plan for “a few, highly vetted providers” would only benefit a small number of clients.  

Second, the potential users didn’t place much weight on the evaluations. They cared more about whether a provider was covered by their insurance or whether they specialized in a particular issue. 

Both of these indicated that a larger database of providers would be better, even if that meant forgoing the deep vetting process I’d wanted.  

So I let go of my original plan. That sucked a bit. I’d been excited about it. 

One piece of advice I got for handling this kind of thing was: “It’s much easier to go towards rich data if you’re curious or open. If you’re finding internal resistance, then try to ramp up curiosity.” Here I felt curious about what would help EAs most, so I got excited about the new idea pretty quickly. 

I worked with the EA Mental Health Navigator to revamp and expand their provider database. This was a much smaller and quicker project that, I think, turned out strictly better than my original idea. 

Jacob Steinhardt’s approach to failing fast in research 

For those of you who are more mathematically minded, Jacob Steinhardt has a beautiful post about how to reduce uncertainty most efficiently. Specifically, he recommends structuring your research in the order that decreases your uncertainty most quickly, e.g. starting with the steps most likely to fail and working toward those least likely to fail. (Multiple clients have told me it was helpful for speeding up their research as well.)

He claims this method quadrupled his research output. 

I’ve included a few excerpts from it here: 

Suppose you are embarking on a project with several parts, all of which must succeed for the project to succeed. For instance, a proof strategy might rely on proving several intermediate results, or an applied project might require achieving high enough speed and accuracy on several components. What is a good strategy for approaching such a project? For me, the most intuitively appealing strategy is something like the following:

(Naive Strategy)
Complete the components in increasing order of difficulty, from easiest to hardest.

This is psychologically tempting: you do what you know how to do first, which can provide a good warm-up to the harder parts of the project. This used to be my default strategy, but often the following happened: I would do all the easy parts, then get to the hard part and encounter a fundamental obstacle that required scrapping the entire plan and coming up with a new one. For instance, I might spend a while wrestling with a certain algorithm to make sure it had the statistical consistency properties I wanted, but then realize that the algorithm was not flexible enough to handle realistic use cases.

The work on the easy parts was mostly wasted--it wasn't that I could replace the hard part with a different hard part; rather, I needed to re-think the entire structure, which included throwing away the "progress" from solving the easy parts.

Rather, we should prioritize tasks that are more likely to fail (so that we remove the risk of them failing) but also tasks that take less time (so that we've wasted less time if one of the tasks does fail, and also so that we get information about tasks more quickly).

A Better Strategy: Sorting by Information Rate

We can incorporate both of the above desiderata by sorting the tasks based on which are most informative per unit time.

(Better Strategy)
Do the components in order from most informative per unit time to least informative per unit time.

Example 1: All of the steps of a project have roughly equal chance of success (80%, say) but take varying amounts of time to complete.

In this example we would want to do the quickest task first and slowest last, since the later a task occurs, the more likely we will get to skip doing it. Sorting "easiest to hardest" is therefore correct here, but it is rare that all steps have equal success probability.

Example 2: An easy task has a 90% success probability and takes 30 minutes, and a hard task has a 40% success probability and takes 4 hours.

Here we should do the easy task first: if it fails we save 240 minutes, so 0.1 * 240 = 24 minutes in expectation; conversely if the hard task is done first and fails, we save 30 minutes, for 0.6 * 30 = 18 minutes in expectation. But if the hard task takes 2 hours or the easy task has a 95% chance of success, we should do the hard task first.

Thus, in this method we formalized "most informative per unit time" by looking at how much time we save (in expectation) by not having to do the tasks that occur after the first failure. 
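To make the comparison concrete, here is a minimal sketch in Python (my own illustration, not code from Steinhardt’s post) that reproduces the Example 2 arithmetic and turns the pairwise comparison into an ordering rule: do tasks in decreasing order of failure probability per unit time, (1 - p) / t, which is equivalent to comparing expected time saved for each pair.

```python
# A minimal sketch (mine, not from Steinhardt's post) of "sorting by
# information rate": reproduce the Example 2 arithmetic, then order tasks
# by failure probability per unit time.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    p_success: float  # probability the task succeeds
    minutes: float    # time the task takes


def expected_minutes_saved(first: Task, second: Task) -> float:
    """Expected time saved by doing `first` before `second`:
    if `first` fails, we get to skip `second` entirely."""
    return (1 - first.p_success) * second.minutes


def order_by_information_rate(tasks: list[Task]) -> list[Task]:
    """Most informative per unit time first: highest (1 - p) / t."""
    return sorted(tasks, key=lambda t: (1 - t.p_success) / t.minutes, reverse=True)


easy = Task("easy", p_success=0.90, minutes=30)
hard = Task("hard", p_success=0.40, minutes=240)

print(expected_minutes_saved(easy, hard))  # 0.1 * 240 = 24.0 minutes
print(expected_minutes_saved(hard, easy))  # 0.6 * 30  = 18.0 minutes
print([t.name for t in order_by_information_rate([easy, hard])])  # ['easy', 'hard']
```

If you bump the easy task’s success probability to 95% or shrink the hard task to two hours, the ordering flips and the hard task goes first, matching the excerpt. 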

Iterated depth

The idea of iterated depth is that you start with a wide range of options, shallowly explore them, move the most promising onto the next round of exploration, and repeat with increasingly in-depth explorations. 

1. Start with small tests and build up. 

Start with the cheapest, easiest tests that still teach you new information. For Kit, this meant asking a friend about their job, reading a blog post about what it’s like to work in an industry, or looking at available job postings. For Allan or Charity Entrepreneurship, it meant doing a 30-minute write-up before committing hours or months to a project. 

Go deeper with more extensive and expensive tests as you need to, but ONLY as you need to. Don’t do a month-long project before you’ve spent a weekend trying it out. Don’t do a weekend-long project until you’ve spent an hour reading about the job. 

Question to answer: What are the smallest, cheapest tests that would have a decent chance of changing my mind about which options are the best fit for me? 

2. Start with a big top of the funnel and gradually narrow. 

Don’t make decisions based on only one data point. If the first option you try goes okay, still try more options. You need to be able to compare multiple options to see when something is going unusually well, just okay, or unusually badly. 

Serendipity and chance play a big role in career exploration, but you want at least a few data points so that you’re optimizing for the best out of five or ten options, rather than the best out of one or two. 

Start with more career paths or specific jobs or project ideas than you can follow, and do a tiny bit of exploration in several. Then go deeper on a select few, narrowing the number you’re trying with each step. By the time you’re accepting a job or committing to a project, you want to have explored several other paths. 

Question to answer: Am I considering at least three options (ideally 10+ options) in the early stages of the funnel? 

Extra:

You likely won’t feel convinced you know enough to cut off all the other paths. You’re trying to gain more robust clarity – a state where it’s harder to get new information that would change your choice. (See How much career exploration is enough?)

You might end up going back up a step if a deep dive doesn’t pan out, say if an internship doesn’t go as well as you would like. That’s a good time to reconsider other paths that seemed promising. 

Kit’s career exploration

When Kit Harris wanted to reassess his career, he started from scratch. (You can read his longer account of the process here.)

He identified 50 potentially high-impact roles spanning operations, generalist research, technical and strategic AI work, grantmaking, community building, earning to give, and cause prioritization research.

Then he roughly ranked them by promisingness and explored the top 10 ideas more deeply. “At first, the idea of choosing 1 next step from 50 ideas was quite overwhelming. Explicitly arranging the ideas made exploration much more approachable.”

He started with brief explorations, such as: 

  • Talking to someone working in a similar role

  • Talking briefly with potential collaborators

  • Beginning but not finishing an application, learning from the process what might make him a good fit

Then he advanced to deeper investigations of the most promising ideas, such as: 

  • Selecting small projects which seemed representative of the work and trying them

  • Applying for a position which had work tests in the application process

  • Contracting for a relevant organization

  • Interviewing relevant people and presenting an organization with a project plan

At the end of the process, he spent time contracting for Effective Giving UK (now Longview Philanthropy), and ultimately accepted a full-time job there. Kit wrote that he felt “quite confident” in his decision. 

Founding charities 

To choose which charities to found, Charity Entrepreneurship uses an iterated depth approach. 

“With our 2020 charity research, that meant doing a quick 30-minute prioritization of hundreds of ideas, then a longer two-hour prioritization of dozens of ideas, and, finally, an 80-hour prioritization of the top five to ten. Each level of depth examines fewer ideas than the previous round, but invests considerably more time into each one.” From How to launch a high-impact nonprofit

GiveWell and Open Philanthropy use a similar process for selecting priority charities. 

You can read a longer description of Charity Entrepreneurship’s process from an early round here.
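To make the funnel mechanical, here is a toy sketch in Python (my own illustration, not Charity Entrepreneurship’s actual process or tooling; the round sizes and the noisy evaluate stand-in are invented) of iterated depth: each round keeps fewer ideas and spends more evaluation time on each one.

```python
# A toy illustration of iterated depth (mine, not CE's actual tooling):
# each round keeps fewer ideas and spends more evaluation time per idea.

import random


def iterated_depth(ideas, rounds, evaluate):
    """rounds: list of (hours_per_idea, number_to_keep) pairs, shallow to deep."""
    remaining = list(ideas)
    for hours_per_idea, keep in rounds:
        remaining.sort(key=lambda idea: evaluate(idea, hours_per_idea), reverse=True)
        remaining = remaining[:keep]
    return remaining


def evaluate(idea, hours):
    # Stand-in for real research: more hours gives a less noisy estimate.
    return idea["true_value"] + random.gauss(0, 1.0 / hours)


# Round sizes loosely shaped like the 2020 numbers quoted above:
# ~30 minutes each on hundreds of ideas, 2 hours on dozens, 80 hours on a handful.
ideas = [{"name": f"idea {i}", "true_value": random.random()} for i in range(200)]
rounds = [(0.5, 30), (2, 10), (80, 3)]
print([idea["name"] for idea in iterated_depth(ideas, rounds, evaluate)])
```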

Allan Dafoe’s shallow research tests

Jade Leung’s account of how her former professor, Allan Dafoe, selected research projects: 

“He always has a running list of research ideas, way more than he could ever get done. He explores more ideas than he would actually be able to follow through with.

By this process of light experimentation, he delves into the idea and tries to understand it a little bit better. Maybe writing up a couple of pages on it and getting into the mode of actually investigating it. 

I think that gave him a bunch more data about whether this was a project that actually had the legs that he thought it could have, whether it felt pleasant and fun to work on, whether it felt like it was exploring a bunch of other ideas, or whether it felt a bit flat.”

An easy way to try this is to keep your attention open for promising ideas and to have a place where you can jot them down with minimal friction.

Closing the loop

Closing the loop is about having systems to constantly experiment, learn from the tests, and plan new experiments based on your updated hypotheses. This process of continual iteration enables the iterated depth and data-rich methods above. 

There’s a leap of faith to committing to a career path. You have to make commitments before you have deep models that can really inform you, so you’re always making the choice to start something based on less information than you will have later. 

But you want to reevaluate that path once you get more information. You want to check whether you’re still on the right path as your model becomes more granular. Related: Theory of Change as a Hypothesis

To do this, you want to be able to run longer experiments, and know you’re going to circle around and update your hypothesis based on the data you gathered. 

For me, reevaluation points are a good way to balance bigger experiments with frequent check-ins. Quarterly reevaluation points especially help when I’m feeling down or pessimistic about my work. When this happens, my instinct is to immediately reevaluate my entire career path. Having a record of the reasons behind my plans is immensely helpful when I feel like my blog draft is terrible and should never see the light of day and maybe I should immediately quit blogging forever. 

Extra: 

Keep your feedback loops small. 

When running experiments, you don’t want the end of a year “exploring” research to be the first time you analyze how it’s going. Even for longer experiments, you want smaller loops of collecting more granular data. 

Build up to big conclusions with regular bits of data. Keep close to the data, so that you’re constantly making little bits of contact with it. 

Failing to learn from exploration 

I spent the year after college doing psych research as a test for whether I wanted to do grad school. At the end, disillusioned with psych research, I knew the answer was no. 

I just had no idea what I did want to do. 

I spent a year “testing” whether I wanted to do research, but at no point was I actually collecting data or even thinking about what I wanted to learn from that test. 

I wasn’t repeatedly circling around to plan what data to collect, collecting it, updating beliefs, and then planning new tests. In other words, I wasn’t closing the loop. 

I could have broken my key uncertainties (e.g. will I be a good fit for a PhD?) into small components and looked for data (or ways to get data) on them. I could have kept a log of which elements of my job I enjoyed and which I slogged through. I could have noted where I got praised and where I received silence or negative feedback. I could have done some tests of other options in my free time. I could have asked other people about their labs, to check whether my experience was typical for the field.

Ideally, I would have had a system where I thought weekly about what I learned and kept a log of updates. I could have planned questions in advance to pay attention to during the week. 

Instead, I just did my work and tried to reflect at the end. At that point, I could generate career ideas, but I didn’t have good data points with which to evaluate them. I didn’t have a good sense of my strengths or weaknesses. I had to work to even figure out what I liked. 

I think this was really a wasted opportunity. 

Developing a weekly system 

I wanted a good way to regularly circle back and close the loop on little experiments. So I came up with this process of hypothesis-driven loops. 

The basic idea was: while planning my week, I would also plan what data I would collect to help with career exploration. I wrote down my questions, how I would collect the data, and what I predicted I would find (plus how confident I was). 

All of this made it easier to notice when I was surprised. I was collecting data from recent experiences when it was fresh in my mind. Because I was writing in advance what I guessed I’d find, it was easy to notice when I actually found something quite different. 

It allowed me to plan what I wanted to learn from my goals each week, keep that in the back of my mind, and reliably circle around the next week to write down any new information. This left a written trail documenting the evolution of my plans.  

Some examples: 

When I was trying to assess whether I should continue working on ARENA, an ML engineering accelerator program, after its first iteration, one of my key uncertainties was whether the other leaders running it thought it was worth continuing. 

Hypothesis: When I talk with Matt and Callum about ARENA, it will pass this first test for scalability (65%). 

Result: It passed the scalability test, but Callum is going to try exploring it himself. I think he's a much better fit than I am, so I’m happy to let him take it on. I'll reevaluate later if he doesn't continue or I get new information. 

When I was drafting the ADHD post, I wanted to test how long it took me to conduct interviews, since I hadn’t done that for a post before. 

Hypothesis: Reach out to people/schedule interviews/have interviews/draft post in 1 week: ambitious goal, but seems good to try for my goal of writing faster. 30% I can have a draft by the end of the week. 70% I can do at least 2 interviews. 

Result: I didn’t get around to drafting, but I did six interviews. 
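If you want a bit more structure than a notes document, here is a minimal sketch in Python (purely illustrative; the fields and example values are my own, loosely based on the ADHD-post example above) of what a weekly hypothesis log could look like:

```python
# An illustrative sketch of a weekly hypothesis log (my own format, not a
# prescribed tool): write the prediction and confidence down before the week
# starts, then record the result at the next weekly review.

from dataclasses import dataclass
from typing import Optional


@dataclass
class WeeklyHypothesis:
    question: str                     # the key uncertainty this data speaks to
    prediction: str                   # what I expect to find
    confidence: float                 # probability the prediction holds
    result: str = ""                  # filled in at the next review
    surprised: Optional[bool] = None  # did the result differ from the prediction?


log: list[WeeklyHypothesis] = []

# Planning the week: write the prediction and confidence down in advance.
log.append(WeeklyHypothesis(
    question="How long does it take me to conduct interviews for a post?",
    prediction="Draft the post and do at least 2 interviews this week",
    confidence=0.30,
))

# Closing the loop at the next weekly review.
log[-1].result = "Did 6 interviews, no draft"
log[-1].surprised = True  # interviewing went faster than predicted, drafting slower
```

The exact format matters far less than writing the prediction and confidence down before you collect the data, so that surprise is easy to notice later. 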

Cedric Chin’s hypothesis system 

You Aren't Learning If You Don't Close the Loops by Cedric Chin, author of Commoncog, emphasizes the importance of studying the results of experiments to actually learn from your trials. His system is a good example of how to explore bigger key uncertainties over time. Excerpt: 

“Here at Commoncog we spent four months working on our Burnout Guide, summarizing the bulk of burnout research in one easy-to-read piece. The original intention for this guide was to see if we could use a set of commonly known SEO techniques to grow the site’s rankings. But the guide went viral on launch. I was so distracted by the distribution, the positive feedback and the attention that the guide was getting that I nearly forgot about the original hypotheses that we had. It was only when I consulted the original 6-pager I wrote at the outset that I realised we needed to test certain things; the virality was nice but not the main purpose of this particular execution loop.

(This is, by the way, an explicit recommendation to write out your hypotheses before you execute. It doesn’t matter if you jot it down in a 6-pager format or a Google Doc or whatever; the point is that you’re likely to forget your original goals by the end of a loop if you don’t put things down on a page.)”

Conclusion

The next time you need to make a complex, important decision where it’s worth putting in 100 hours, you should plan a process up front that looks something like this. 

  1. Start with brainstorming a broad top of the funnel.

  2. Identify your key uncertainties. 

  3. Get data points as close to the real-world setting as possible so you can compare.

  4. Iteratively narrow and deepen the best ones. 

  5. Structure your effort in the order that decreases your uncertainty most quickly, e.g. start with the steps most likely to fail and work toward those least likely to fail.  

  6. Have systems that let you regularly circle back to check what you’re learning and update your hypotheses.

Hopefully the case studies helped you think about how to do that. If you want more help, please reach out! I’d love to help design great career tests. 


Related resources: 

My blog post Theory Of Change As A Hypothesis: Choosing A High-Impact Path When You’re Uncertain was an earlier attempt to apply this idea of regular iteration to reducing uncertainty in career choice.  

Logan Strohl’s naturalism posts are one way to think more about getting in close contact with the data. My guess is that you’ll know pretty quickly whether you click with his style, so feel free to check it out and move on if this one isn’t for you. 

Reading about lean methods, particularly The Lean Startup and similar treatments of lean as a “method for iterating quickly to reduce uncertainty,” is another helpful angle.