Why Overheads Go Over Our Heads
‘Overhead’ is often treated as a dirty word in the nonprofit sector, consistently perceived as a gluttonous money monster that drains resources from the otherwise noble mission. As Dan Pallotta notes in his popular TED talk, “Charitable giving has been stuck at 2% of gross domestic product [in the US] since the 1970s. In 40-plus years, the nonprofit sector has not been able to wrestle market share from the for-profit world.”
Donors’ aversion to overhead, the non-programmatic costs that cover administration, significantly impedes charitable contributions. As Pallotta points out, we complain about getting two pieces of mail in a short period of time from a charity, yet we rarely fret after receiving three full catalogues from Pottery Barn. Similarly, we think it unconscionable for the Breast Cancer Foundation to spend $25 million on marketing, yet few bat an eye after hearing that L’Oreal spent $1.5 billion to sell products to the same women.
This puzzling, disproportionate reaction can be explained by a powerful unconscious bias. This article examines that bias and offers solutions drawn from behavioral insights.
Evaluability Bias
The data reveal a negative correlation between the amount donated and the amount organizations spend on overhead, suggesting that donors are sensitive to how collected funds are managed (Tinkelman & Mankaney, 2007). From a purely rational perspective, donors should be most interested in efficiency: they should compare charitable options that produce the same good and choose the one that delivers optimal quality at the lowest price.
The actual decision-making process fails to meet this criterion, as donors are typically unaware of both the quality and the price of the goods. Researchers suggest that without clear information on cost-effectiveness, donors fall back on evaluability bias to make a decision (Caviola et al., 2014). In one study, 94 participants were presented with two possible charities under two experimental conditions: a joint-evaluation group (participants saw both Charity A and Charity B, enabling comparison) and a separate-evaluation group (participants saw either Charity A or Charity B alone).
Charity A spent $600 on administrative costs and saved 5 lives
Charity B spent $50 on administrative costs and saved 2 lives
Table 1. Charity options in Study 1
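The study's two conditions can be sketched as a toy comparison. This is a hedged illustration: the charity labels and figures come from Table 1, but the two selection rules are this article's interpretation of evaluability bias, not code from the study itself.

```python
# Figures from Table 1 of the study described above.
charities = {
    "A": {"overhead": 600, "lives_saved": 5},
    "B": {"overhead": 50, "lives_saved": 2},
}

# Separate evaluation: overhead is easy to judge in isolation,
# so the low-overhead option looks better on its own.
lowest_overhead = min(charities, key=lambda c: charities[c]["overhead"])

# Joint evaluation: with a side-by-side comparison, donors can
# weigh outcomes and prefer the charity that saves more lives.
most_lives = max(charities, key=lambda c: charities[c]["lives_saved"])

print(lowest_overhead)  # "B" wins when overhead is all we can evaluate
print(most_lives)       # "A" wins once outcomes become comparable
```

The reversal in the code mirrors the reversal in the data: the same two charities produce opposite winners depending on which attribute is easy to evaluate.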
Results showed that participants donated significantly more to Charity B than to Charity A in the separate-evaluation condition. Interestingly, this preference reversed in the joint-evaluation condition, where participants donated more to Charity A (the more cost-effective option). This suggests that donors make suboptimal choices when left without a point of comparison. The researchers explain that this is not a deliberative process: Caviola et al. (2014) argue that donors primarily value cost-effectiveness but fall prey to evaluability bias when cost-effectiveness is difficult to evaluate, defaulting to the more easily judged overhead figure.
Our aversion to overhead spending is problematic because it undermines charities’ ability to run fundraising campaigns, support their infrastructure, and invest in future planning. This pressure often leads charities to underreport costs, which widens the information gap (Pollak, 2004). To fill this gap, effective-altruism-based charity evaluators like GiveWell use an in-depth scientific approach to rank charities on cost-effectiveness metrics (e.g., cost per life saved). Such websites take much of the guesswork out of the decision-making process.
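A cost-per-life-saved ranking boils down to simple division over total cost and outcomes. A minimal sketch, with invented charity names and figures for illustration (these are not actual GiveWell data):

```python
def cost_per_life_saved(total_cost: float, lives_saved: int) -> float:
    """Lower is better: dollars spent per life saved."""
    return total_cost / lives_saved

# Hypothetical charities: (total cost in dollars, lives saved).
options = {
    "Charity X": (1_000_000, 250),  # $4,000 per life saved
    "Charity Y": (1_000_000, 125),  # $8,000 per life saved
}

best = min(options, key=lambda c: cost_per_life_saved(*options[c]))
print(best)  # "Charity X"
```

Note that the metric uses total cost, not overhead alone: a charity with higher overhead can still be the better buy if it delivers more of the outcome per dollar.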
It is important to note, though, that increased information availability does not always produce behavioral change. The sections below outline three nudges that effectively reframe fundraising solicitations.
Nudge 1: Seed Money
In an experiment that mailed solicitation letters for a capital campaign at the University of Central Florida (n = 3,000), recipients were told that a lead donor had already covered a portion of the campaign’s costs. When the described seed donation, the initial money allocated to funding the project, was raised from 10% to 67% of the goal, giving became roughly twice as likely and contributions increased nearly sixfold (List & Lucking-Reiley, 2002). This suggests that donors’ contributions rise as the amount of seed money rises. Interestingly, the researchers note that beyond a critical level of seed money, contributions start to decrease, so nonprofits should look for the optimal amount of seed money rather than simply maximizing it.
Nudge 2: Matching
In a large-scale field experiment, researchers sent direct mail solicitations to donors who were unfamiliar with the charity (n = 61,483). One group was informed that their contributions would be matched by the Bill & Melinda Gates Foundation (BMGF); a second group was told that contributions would be matched by an anonymous donor. Those who knew their gift would be matched by BMGF were 26% more likely to donate and gave 51% more on average (Karlan & List, 2014). Donors reportedly trusted the foundation and felt comfortable enough to contribute. Matching leverages the power of social referencing: in the absence of information, donors base judgments on the decisions of other members of society and of authority figures. In this case, donors likely assume that BMGF has done its due diligence, as it would not otherwise have agreed to match contributions.
Nudge 3: Once and Done
In a recent field study, researchers sent direct mail solicitations for Smile Train, a nonprofit that treats children with cleft palates. One group received a conventional invitation letter; a second group received a “once and done” solicitation that let participants check a box to “never receive any other mail requests”. The control group had an initial response rate of 0.34% and total revenue of $178,609; the “once and done” group had a higher initial response rate of 0.66% and higher total revenue of $260,783. Most surprisingly, the “once and done” group also gave more in future donations than the control group, even though the control group had no option to opt out of future mail solicitations (Mullaney et al., 2015).
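The reported figures imply sizeable lifts. A quick back-of-envelope check (the revenue comparison assumes the two mailing groups were the same size, which the article does not state):

```python
# Reported results from the Smile Train "once and done" field study.
control = {"response_rate": 0.0034, "revenue": 178_609}
once_and_done = {"response_rate": 0.0066, "revenue": 260_783}

# Relative lifts of the "once and done" treatment over the control.
response_lift = once_and_done["response_rate"] / control["response_rate"]
revenue_lift = once_and_done["revenue"] / control["revenue"]

print(f"{response_lift:.2f}x response rate")  # 1.94x response rate
print(f"{revenue_lift:.2f}x revenue")         # 1.46x revenue
```

Giving donors an explicit exit nearly doubled initial responses while raising revenue by almost half, before counting the surplus in future donations.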
Conclusion
Taken together, these nudges reassure donors when they lack sufficient information for optimal decision-making. They are designed to leverage our natural tendency to engage in social referencing: when we are told that a lead donor has covered running costs or agreed to match contributions, we automatically assume the charity has been vetted. That perceived credibility gives us the trust and comfort to follow through on our decision to donate.
References
Caviola, L., Faulmüller, N., Everett, J. A. C., Savulescu, J., & Faber, N. S. (2014). “The Evaluability Bias in Charitable Giving: Saving Administration Costs or Saving Lives?” Judgment and Decision Making, 9(4).
Karlan, D., & List, J. A. (2014). “How Can Bill and Melinda Gates Increase Other People’s Donations to Fund Public Goods?” National Bureau of Economic Research.
List, J. A., & Lucking-Reiley, D. (2002). “The Effects of Seed Money and Refunds on Charitable Giving: Experimental Evidence from a University Capital Campaign.” Journal of Political Economy, 110: 215–232.
Mullaney, B., et al. (2015). “Once and Done: Leveraging Behavioral Economics to Increase Charitable Contributions.” University of Chicago.
Pollak, H. (2004). “What We Know About Overhead Costs in the Nonprofit Sector” (Brief No. 1). Washington, DC: The Urban Institute.
Tinkelman, D., & Mankaney, K. (2007). Nonprofit and Voluntary Sector Quarterly, 36(1): 41–64.
About the Author
Arash Sharma
Arash is a Behavioural Scientist at the Government of Canada.