Impact Case Studies
You can view all public case studies here.
Bio: Persis was the Executive Director of the Wild Animal Suffering Research Project (WASR) at the Effective Altruism Foundation. She has since joined the Open Philanthropy Project as a researcher for Farm Animal Welfare.
What did the coaching help them accomplish? We worked together to optimize her deep work, strategies for knowledge retention, and productive routines. She estimates the coaching increased her output by the equivalent of 3 hours a day.
In their own words: “During my sessions, Lynette taught me how to organize my various project management tasks, break down long-term goals into concrete activities, and increase the quality of my deep work time. She also gave me tools to address procrastination, low motivation, and aversion to tasks. I found these sessions incredibly valuable and could not have acquired the necessary knowledge and skills independently.”
Bio: Krystal is a Researcher at Animal Charity Evaluators. She conducts experimental research and data analyses to identify and understand promising methods for reducing animal suffering.
What did the coaching help them accomplish? She estimates the coaching increased her productive time by 10-20 hours over a month via breaking large projects into manageable tasks and finding new productivity tools.
In their own words: “Lynette helped me identify behavioral patterns that were harming my productivity and taught me how to break down large goals into manageable tasks. She approached our meetings as an unbiased observer, asked insightful questions, offered tools tailored to my needs and situation, and ended each meeting with concrete steps. I enjoyed having an impartial person to talk to about my productivity challenges.”
Bio: Matthijs is a Research Affiliate with the Center for the Governance of AI and a PhD Fellow in Law and Policy on Global Catastrophic and Existential Threats at the University of Copenhagen.
What did the coaching help them accomplish? I advised him on setting daily goals, implementing commitment devices, and planning a better work environment. He estimates these interventions increased his deep work time by 20 hours a month.
In their own words: “As an academic, I've at times struggled with prioritizing my key research, and protecting my time for deep work. Lynette's coaching has helped me considerably in addressing these problems. She helped me formulate clear, concrete and measurable weekly goals--calling out overtly vague commitments. Moreover, she has helped me develop a toolkit for developing and 'installing' new, productive habits, which I feel has already allowed me to improve my productivity, and which will serve me well in future self-improvement experiments.”
This report summarizes the impact evaluation for EA Coaching’s first year and a half from its founding in October 2017 until May 2019. It’s supplemented by a longer document that includes the appendixes and footnotes.
What Does EA Coaching Do?
EA Coaching helps people working on the most pressing problems get more done. As of May 2019, I have had 800+ sessions with 100+ clients.
I work with professionals who already accomplish a lot -- consultants, professors, software engineers, managers, researchers -- to pinpoint their bottlenecks and help them solve the biggest problems holding them back from accomplishing more. Together, we clarify their goals, implement more effective strategies, and increase focused work time on their top priorities. Coaching typically consists of four to twelve 50-minute calls.
Most of my work is with clients who are likely to contribute to top cause areas, since marginal improvements in productivity for this group may have a disproportionately large impact on the world. Half of my current clients are at FHI, Open Phil, CEA, MIRI, DeepMind, the Forethought Foundation, and ACE.
I expect productivity coaching to have an impact by improving prioritization and increasing focused work. Clients report an average of 16 extra productive hours a month, and it’s not uncommon for them to claim the coaching doubled their output via prioritization changes.
Clients think coaching is useful, as evidenced by client surveys, impact case studies, and revealed preferences. These metrics support the conclusion that the coaching is valuable as implemented, and not just in theory. However, it seems likely these metrics imprecisely correlate with objective output, the ultimate goal, due to biases in self-report and uncertainty about counterfactual impact.
I built a rough model quantifying the impact for a cost-benefit comparison, which suggests that the benefit from coaching is about twice the opportunity cost.
I plan to keep refining the coaching over the next year. My calculations indicate clients reported 20% more benefit on average per session in the first half of 2019 compared to 2018 (see Appendix B), and I think there’s still significant room to improve.
Unfortunately, many details can’t be shared publicly due to confidentiality.
Table of contents:
Who Do I Work With?
How Does Coaching Have an Impact? Prioritization and Increasing Focused Work
Uncertainties regarding “Real” Impact
Evidence Coaching has an Impact: Client Surveys, Impact Case Studies, and Revealed Preferences
Dollar Value of Impact
Cost-Benefit and Caveats
Who Do I Work With?
I work with people I think can contribute toward important cause areas, primarily those identified on 80,000 Hours’ global problems page. Most of my expected impact comes from working with this group, since marginal improvements in their productivity may have a disproportionately large impact on the world.
Half of my current clients (a third of all clients I’ve worked with) are at FHI, Open Phil, CEA, MIRI, DeepMind, the Forethought Foundation, and ACE.
Approximately half of my current clients are working on X-risk areas, primarily artificial intelligence safety and policy. The other half mostly work on meta-EA causes, global priorities research, animal welfare, earning-to-give, and building career capital.
The majority of clients are currently doing direct work on one of the top causes. For these clients, my focus is on improving the efficiency with which they make progress.
Some of my clients are currently building career capital or making a career transition into a higher impact job. For example, 80,000 Hours recently started referring people to me if productivity coaching might help them transition into a top cause area. I help these clients plan how to explore career paths, build the required skills, and consistently make time for those plans in addition to their regular jobs. I currently have a couple of data points indicating this approach might increase successful career transitions but need more time to evaluate given how long career transitions take.
You can view the impact case studies below for more details.
How Does Coaching Have an Impact?
I’m most excited about my impact via improving prioritization, and my focus has increasingly shifted toward prioritization over the past year. It’s likely that improving prioritization increases productive output by 2-10x per hour, such that a couple of hours of top-priority work might be worth a full day of less important work. It’s not uncommon for clients to report that the coaching doubled their output. Because improved prioritization is hard to quantify, I haven’t been collecting that data in surveys, and hence it’s not included in my impact models. I’m now working to better quantify prioritization changes by having clients track output in relevant key areas each month before and during coaching.
I look at prioritization from career to day-to-day actions, and everything in between. I frequently work with clients to decide which projects to take on (e.g. which papers to work on, which jobs to apply for, which skills to build), how to structure projects to efficiently capture the value (e.g. which actions should be skipped, when and who to ask for help, what actions capture most of the value), and how to focus day-to-day on those priorities (e.g. goal setting, accountability).
On the other hand, I think changing people’s priorities has the biggest potential for harm. While it’s possible I’m making people less productive in the number of hours worked (e.g. by leading them to spend too much time organizing to-do lists instead of doing research), I would be quite surprised if this were the case. However, if explicit reasoning produces worse priorities than intuition (seems highly unlikely in most cases) or if people are missing crucial considerations, it’s possible that changes in prioritization may lead to net negative outcomes. It seems unlikely I’m influencing people to prioritize worse -- e.g. I prompt people to consider how things could go wrong, so I would expect them to be less likely to miss some crucial consideration.
Increasing Focused Work
The next biggest focus area is increasing working time and/or output, e.g. strategies to reduce procrastination, improve focus, handle email, or manage to-do systems. Tackling these areas increases work on both big priorities and maintenance tasks, and often also improves motivation and happiness.
It’s plausible that these strategies increase valuable output by 10%, and are most valuable as a multiplier along with improving priorities. If clients actually gain the reported average of 16 hours per month, that’s a 10% gain on a 40-hour workweek. Despite my uncertainties about self-reported results, 10% seems intuitively plausible.
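As a sanity check on that 10% figure, here is the back-of-envelope arithmetic. The 16 hours/month average comes from the client surveys above; the 40-hour workweek and the weeks-per-month conversion are my assumptions.

```python
# Back-of-envelope check: reported hours gained as a share of full-time hours.
HOURS_GAINED_PER_MONTH = 16      # average reported in client surveys
WORKWEEK_HOURS = 40              # assumed full-time schedule
WEEKS_PER_MONTH = 52 / 12        # ~4.33 weeks in an average month

monthly_work_hours = WORKWEEK_HOURS * WEEKS_PER_MONTH   # ~173 hours
gain_fraction = HOURS_GAINED_PER_MONTH / monthly_work_hours

print(f"{gain_fraction:.1%}")    # roughly 9%, i.e. on the order of a 10% gain
```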
The emphasis here is on increasing focused work, because my focus is not on getting people to work as many hours as they can. Using hours spent working as the main metric of success may increase chances of burnout. In fact, I often redirect clients’ focus to prioritization when they want to maximize their hours spent working.
Uncertainties regarding “Real” Impact
It’s uncertain exactly how much coaching results in objective increased output. It’s likely that on average clients get a few times more value than the time and money they invest, but both no net value added and 10x ROI are possible.
I’m uncertain how accurate self-reported measures are. While a client feeling more productive probably at least correlates with their actually being more productive, the objective gain could differ significantly from the reported hours, given that most people don’t track their time. I’m introducing time tracking to more of my clients, so I may get more objective hour reports soon. Given biases in self-report, it seems likely that the "real" gain would be smaller rather than larger: e.g. clients may feel social pressure to tell me I was helpful, or want to feel like the time they invested was worthwhile.
I’m also uncertain about the counterfactual impact of coaching. It’s possible that a large part of the reported hours are a regression to the mean if people sought out coaching because their productivity was unusually bad compared to normal for them. On the other hand, some people report seeking coaching because they’ve been trying to improve some area for a long time without success, which would suggest the coaching is valuable for otherwise intractable problems.
Evidence Coaching has an Impact
Clients report large benefits during the period they received coaching.
In client surveys, clients reported an average increase of 16.4 productive hours per month during coaching (n=48), and they valued the benefit of four sessions of coaching at an average of $1,522 (n=45).
92% would recommend it to their friends. (n=26)
I am less confident about the lasting impact after coaching ceases; clients reported an average of 15 hours a month up to a year later, but the variance was high.
In July 2018, I added more quantitative questions to the feedback survey that clients fill out every four sessions. These graphs reflect the data from those quantitative questions.
Equivalent Grant Value
“Assuming you got a grant instead of receiving the most recent four sessions of coaching, how much would you have needed to receive in order to be indifferent between the grant and coaching?” (n=45)
Productive Hours Added
“If you were more productive, how many additional productive hours or hours’ worth of output would you estimate you had over the past month because of the coaching?” (n=48)
These numbers are not frequency adjusted. My survey question asked for impact over the past month, not over the past four calls, so each data point represents the gains from between 1 and 4 calls.
% would recommend
92% of clients responded 7 or above on a 10-point scale to “How likely would you be to recommend EA Coaching to a friend or colleague?”. (n=26)
In July and August 2018, I sent a survey to the 29 clients who had ceased coaching at least two months previously and completed at least 4 coaching sessions; 15 responded. In 2019, I sent a follow-up survey to 11 clients who had ceased coaching more than a year previously; three responded, and two more people filled out the survey when they resumed coaching after having ceased for several months.
Productive Hours Added
“How many extra productive hours would you estimate you had over the past month because of the coaching?” (n=20)
People qualitatively reported not remembering what they got from coaching vs elsewhere after a year. I expect that even if there is a change on the order of 10 hours a month after a year, people wouldn't be able to easily trace it to coaching. Of course, the coaching might just have no lasting impact after a year.
Impact Case Studies
Qualitative reports (usually verbal) inform my expectation of impact in addition to the above. I compiled case studies from all clients who were willing to share them, which you can view here.
The case studies in the document are not cherry-picked; the document contains all studies where the client gave permission to share their name publicly. However, it is possible that clients are less likely to share if they didn’t find the coaching valuable. While I’m confident that this is not the main reason clients choose to remain anonymous, it’s likely there are a small number of people for whom this is the reason.
Empirically, clients continue attending coaching sessions and paying money. This strongly indicates that clients value the coaching more than the time and money they spend on it. This is a more costly signal than positive reports on surveys, so it probably deserves more weight.
Clients sign up for 4 calls when they start coaching. 93% of clients complete those four calls, 52% continue after the initial 4 calls, and 16% continue for longer than 12 sessions. It’s often fine for clients to stop after four calls, since my aim is to graduate people from coaching once they have the tools they need. However, we can infer that the people who continued thought the coaching was still worthwhile after having done it for four sessions.
My standard rate is $125 a session, with a generous sliding scale if that would be a burden financially. Clients who can afford the full rate pay that amount, which indicates they think it’s worth that much. Clients paying the full rate continue after the initial four sessions as often as those paying less, which indicates that they still think the coaching is worth paying for after having experienced it for four sessions.
Several EA organizations are working with me to offer coaching to their employees and/or affiliates, including 80,000 Hours (for their coachees), CEA, Gov AI, and the Forethought Foundation (for the Global Priorities Fellows).
One caveat to revealed preferences is that they may be measuring a different benefit from the one I care about. It’s plausible that these preferences are capturing how much my clients enjoy coaching, rather than how much I’m increasing their output.
Dollar Value of Impact
I used my own evaluations to calculate a range for the value added by increasing productive time. I built the model mostly around added productive hours because their value is much easier to quantify than the value of priority changes, and because I expect them to represent one of the main ways clients gain value. See Appendix A for more details about the model.
I take this calculation only as a very rough approximation of benefit, given that my model is rough, the self-reported inputs are likely imprecise, and I’m not attempting to measure prioritization impact. Hence, this model matters less to my overall assessment than client self-reports and the expected value of prioritization changes.
Given that I expect the larger portion of my impact to come via prioritization, this is a conservative estimate of impact. Taken as a lower threshold, this model slightly increased my confidence that the benefits of coaching likely compare favorably to its opportunity costs.
I estimate extra productive hours clients gained minus the time they spent in coaching sessions (net added hours) based on the information in Appendix A. I adapted 80,000 Hours’ system for evaluating career paths and their 2018 Talent Gap Survey to approximate the value of added hours. The value of time numbers from the Talent Gap Survey may be high for reasons given here.
Based on those methods, my best guess and ranges are:
The cost was the opportunity cost of my full-time work and the funding for the org.
Taking an outside view, I estimate the opportunity cost of my time to be equivalent to that of a junior hire at an EA org, so somewhere in the ballpark of $150,000 to $350,000 per year. While this approximation of the value of time may be high, the net added value number above and the opportunity cost of my time come from the same source, so the errors from that source should cancel out.
Funding for 2018 was approximately $40,000 in grants and $30,000 from clients.
Cost-Benefit and Caveats
My best guess is the coaching delivered at least $720,000 worth of benefit at the opportunity cost of $370,000 worth of time and funding.
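The headline comparison reduces to simple arithmetic. A minimal sketch using the report’s own figures: the $300,000 time cost is implied by subtracting the funding from the $370,000 total, and the funding figure combines the approximate 2018 grants and client payments.

```python
# Cost-benefit sketch using the figures quoted in this report.
BENEFIT = 720_000          # best-guess dollar value of the coaching benefit
TIME_COST = 300_000        # opportunity cost of my time (implied by the $370k total)
FUNDING_COST = 70_000      # ~$40k in grants + ~$30k from clients (2018)

total_cost = TIME_COST + FUNDING_COST
ratio = BENEFIT / total_cost

print(f"benefit/cost ratio: {ratio:.2f}")  # ~1.95, i.e. roughly 2x
```

This matches the summary claim earlier in the report that the benefit is about twice the opportunity cost.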
These numbers seem surprisingly high. Both dollar values of time are based on this survey, which 80,000 Hours suggests is likely to overestimate the value. In addition, there are many judgment calls and sources of uncertainty that went into the benefit calculations. Hence, I recommend prospective donors make their own assessments of the impact compared to the opportunity costs.