Blog

Network Evaluation Should Be Transparent

Several third-party firms collect data on the performance of U.S. wireless networks. Over the last few months, I’ve tried to dig deeply into several of these firms’ methodologies. In every case, I’ve found the public-facing information to be inadequate. I’ve also been unsuccessful when reaching out to some of the firms for additional information.

It’s my impression that evaluation firms generally make most of their money by selling data access to network operators, analysts, and other entities that are not end consumers. If this were all these companies did with their data, I would understand the lack of transparency. However, most of these companies publish consumer-facing content. Often this takes the form of awards granted to network operators that do well in evaluations. It looks like network operators regularly pay third-party evaluators for permission to advertise the receipt of awards. I wish financial arrangements between evaluators and award winners were a matter of public record, but that’s a topic for another day. Today, I’m focusing on the lack of transparency around evaluation methodologies.

RootMetrics collects data on several different aspects of network performance and aggregates that data to form overall scores for each major network. How exactly does RootMetrics do that aggregation?

The results are converted into scores using a proprietary algorithm.[1]

I’ve previously written about how difficult it is to combine data on many aspects of a product or service to arrive at a single, overall score. Beyond that, there’s good evidence that different analysts working in good faith with the same raw data often make different analytical choices that lead to substantive differences in the results of their analyses. I’m not going to take it on faith that RootMetrics’ proprietary algorithm aggregates data in a highly defensible manner. No one else should either.

Opensignal had a long history of giving most of its performance awards to T-Mobile.[2] Earlier this year, the trend was broken when Verizon took Opensignal’s awards in most categories.[3] It’s not clear why Verizon suddenly became a big winner. The abrupt change strikes me as more likely to have been driven by a change in methodology than a genuine change in the performance of networks relative to one another. Since little is published about Opensignal’s methodology, I can’t confirm or disconfirm my speculation. In Opensignal’s case, questions about methodology are not trivial. There’s good reason to be concerned about possible selection bias in Opensignal’s analyses. Opensignal’s Analytics Charter states:[4]

Our analytics are designed to ensure that each user has an equal impact on the results, and that only real users are counted: ‘one user, one vote’.

Carriers will differ in the proportion of their subscribers that live in rural areas versus densely populated areas. If the excerpt from the analytics charter is taken literally, it may suggest that Opensignal does not control for differences in subscribers’ geography or demographics. That could explain why T-Mobile has managed to win so many Opensignal awards when T-Mobile obviously does not have the best-performing network at the national level.

Carriers advertise awards from evaluators because third parties are perceived to be credible. The public deserves to have enough information to assess whether third-party evaluators merit that credibility.

Fancy Phones: Now An Even Worse Deal

About twelve years ago, Apple released the first iPhone.[1] While it was an expensive device, the original iPhone had all sorts of features that the competition lacked. Of course, the first iPhone is a pretty extreme example.

Even in 2012, the year the iPhone 5 was released, I felt that there were significant differences between the latest, high-end phones and phones that were sold at lower price points.[2] I didn’t think the latest, greatest devices were actually worth the cost for the typical consumer back then, but I could at least understand the appeal of those fancy devices.

Today, things are different. Hardware has continued to improve, but it’s not clear that hardware improvements have had much to offer to the typical consumer. Today, you can get an unlocked Motorola G6 or G6 Play without any carrier subsidy for less than $200.[3]

The G6 performs well for the sorts of things typical consumers use their phones for. The phone’s cameras are pretty good. It has a fingerprint reader. Hell, the phone even does pretty well in terms of aesthetics. I’m struggling to come up with meaningful limitations it has compared to higher-end phones. It’s not waterproof?

Perhaps the high quality of low-end phones these days explains why the latest iPhone models haven’t sold well. My point is not that the higher-end phones don’t have better features. They do. Having the best phone these days may matter if you’re an Instagram star, you want to play the most intense mobile video games with the highest possible performance, or you want to make your friends jealous. On the other hand, if you use your cell phone for pretty typical purposes, you can save a lot of money without forgoing many useful features.

Magical Growth in Subscriber Numbers

Yesterday, one of my favorite journalists covering wireless, Mike Dano, published an article with the title “Why Wireless Carriers Magically Keep Growing Every Quarter.”

Dano notes that there’s been a roughly 2.5% growth in wireless subscribers for each of the past several quarters. This growth rate is tricky to make sense of:

[MoffettNathanson analysts] noted that the industry’s growth rate appears to be outstripping population growth rates and the growing number of teenagers getting phones, and isn’t attributable to other factors like the growing number of secondary phone-type devices like the Apple Watch. ‘The most likely answer appears to be the simplest,’ wrote the MoffettNathanson analysts. ‘Carriers are offering free or partially subsidized phones in return for adding additional lines.’

They continued: ‘It is all but certain that some customers have taken advantage of these offers even if it means adding a line they don’t need, and won’t use. The customer would simply reassign the new BOGO handset to an existing (used) line, moving an old unwanted handset to the new (unused) line.’

I’m not convinced this is the simplest explanation. A lot of consumers would find the process of adding a line to take advantage of a buy-one-get-one (BOGO) offer and then switching service between devices complicated or sketchy. Around 2012, massive phone subsidies on post-paid plans were extremely common. At the time, I was involved in the cell phone resale business. I noticed that a surprising number of people were eligible for subsidized upgrades but not using them. In these scenarios, a subscriber could upgrade to a new ~$400 device for free, switch back to an old device, and quickly resell the new device. Even though the opportunity was relatively simple, I got the impression that people rarely took advantage of it.

The other problem I have with the explanation is that if consumers are taking advantage of BOGO offers in large numbers, carriers ought to notice what’s going on. Perhaps some carriers want to pad their subscriber numbers, but I find it unlikely that there’s an industry-wide willingness to pad subscriber numbers today since that will lead to higher churn in a year or two. I would guess that some carriers seeing consumers regularly add new lines to get free devices would be inclined to promote device-financing options. I expect financing options would often be simpler for consumers and more profitable for carriers.

That said, it’s pretty clear that at least some new lines come from those taking advantage of BOGO offers. A recent FCC filing stated the following (emphasis mine):

Sprint’s postpaid net additions recently have been driven by ‘free lines’ offered to Sprint customers and the inclusion of less valuable tablet and other non-phone devices, as well as pre to post migrations that do not represent ‘new’ Sprint customers.[1]

What I wonder is whether BOGO offers are the primary driver of unexpected growth. Dano mentions a claim found in a recent Wall Street Journal article (paywalled) that’s based on work from New Street Research:

Telecom consultant New Street Research estimated that customers signing unneeded wireless contracts to pocket more valuable smartphones added 1.7 million ‘fake’ lines to cellphone carriers’ tallies in 2018.

If we take that number at face value, that’s roughly 400,000 lines per quarter.[2] However, BOGO offers are not new. They reached their peak several years ago. For “fake” BOGO lines to be driving growth, there must be more “fake” lines getting activated now than there are “fake” lines falling off.[3] From my vantage point, it looks like BOGO offers might be less appealing than they were in the past. It used to be the case that devices that had been out for a few years were substantially worse than recently-released devices. That seems less true today. Recent declines in iPhone sales may indicate that other people feel the way I do.

What else might explain the large growth in subscriber numbers? On Twitter, industry analyst Roger Entner mentioned that the growth could be due to subscribers transferring off of the Lifeline subsidization program.

It’s an interesting puzzle, and I might just be missing something. Despite my skepticism, I still don’t think it’s implausible that BOGO promotions really are driving lots of growth in subscriber numbers.

Lies, Damned Lies, and AT&T’s 5GE

There are three kinds of lies: lies, damned lies, and statistics. –Benjamin Disraeli*

Fortunately, the sentiment behind this quote isn’t always accurate. AT&T has recently taken a lot of heat for misleadingly branding advanced 4G networks as “5GE.” Ian Fogg at Opensignal published a post where he draws on Opensignal’s data to assess how AT&T’s 5GE-enabled phones perform compared to similar phones on other carriers. The results: AT&T’s 5GE-enabled phones saw speeds similar to those of comparable phones on other carriers’ networks.[1]

In response to AT&T’s misleading branding, Verizon launched a video advertisement showing a head-to-head speed comparison between Verizon’s network and AT&T’s 5GE network.

In that video, Verizon’s 4G LTE network comes out with a download speed near 120Mbps while AT&T’s 5GE network comes out around 40Mbps. That, of course, seems funny given the Opensignal data suggesting the networks deliver similar speeds on average.

A portion of the Verizon video—not long enough to show the final results—showed up in a Twitter ad. That ad led to a Twitter exchange between myself; Light Reading’s editorial director, Mike Dano; and Verizon’s PR manager, Steven Van Dinter. Van Dinter explained that Verizon chose to film in a public spot where AT&T’s 5GE symbol was very strong. I take Van Dinter’s word that there wasn’t foul play or blatant manipulation, but it is funny to see Verizon fighting misleading branding from AT&T with misleading ads of its own.

The Optimizer’s Curse & Wrong-Way Reductions

Following an idea to its logical conclusion might be extrapolating a model beyond its valid range. –John D. Cook

Summary

I spent about two and a half years as a research analyst at GiveWell. For most of my time there, I was the point person on GiveWell’s main cost-effectiveness analyses. I’ve come to believe there are serious, underappreciated issues with the methods the effective altruism (EA) community at large uses to prioritize causes and programs. While effective altruists approach prioritization in a number of different ways, most approaches involve (a) roughly estimating the possible impacts funding opportunities could have and (b) assessing the probability that possible impacts will be realized if an opportunity is funded.

I discuss the phenomenon of the optimizer’s curse: when assessments of activities’ impacts are uncertain, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. I argue that the optimizer’s curse should be extremely concerning when prioritizing among funding opportunities that involve substantial, poorly understood uncertainty. I further argue that proposed Bayesian approaches to avoiding the optimizer’s curse are often unrealistic. I maintain that it is a mistake to try to understand all uncertainty in terms of precise probability estimates.

This post is long, so I’ve separated it into several sections:

  1. The optimizer’s curse
  2. Models, wrong-way reductions, and probability
  3. Hazy probabilities and prioritization
  4. Bayesian wrong-way reductions
  5. Doing better

Part 1: The optimizer’s curse

The counterintuitive phenomenon of the optimizer’s curse was first formally recognized in Smith & Winkler 2006.

Here’s a rough sketch:

  • Optimizers start by calculating the expected value of different activities.
  • Estimates of expected value involve uncertainty.
  • Sometimes expected value is overestimated, sometimes expected value is underestimated.
  • Optimizers aim to engage in activities with the highest expected values.
  • Result: Optimizers tend to select activities with overestimated expected value.

Smith and Winkler refer to the difference between the expected value of an activity and its realized value as “postdecision surprise.”

The optimizer’s curse occurs even in scenarios where estimates of expected value are unbiased (roughly, where any given estimate is as likely to be too optimistic as it is to be too pessimistic[1]). When estimates are biased—which they typically are in the real world—the magnitude of the postdecision surprise may increase.
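The selection effect is easy to see in a quick simulation. The sketch below is my own illustration, not anything from Smith & Winkler; the number of activities, the true values, and the noise level are arbitrary assumptions:

```python
import random

random.seed(0)

N_TRIALS = 10_000
N_ACTIVITIES = 10
NOISE_SD = 0.5  # spread of estimation error (arbitrary assumption)

total_surprise = 0.0
for _ in range(N_TRIALS):
    # Ten activities with varying true values.
    true_values = [random.uniform(0.0, 2.0) for _ in range(N_ACTIVITIES)]
    # Estimates are unbiased: each is the true value plus zero-mean noise.
    estimates = [v + random.gauss(0, NOISE_SD) for v in true_values]
    # The optimizer picks the activity whose *estimate* is highest.
    best = max(range(N_ACTIVITIES), key=lambda i: estimates[i])
    # Postdecision surprise: the winner's estimate minus its true value.
    total_surprise += estimates[best] - true_values[best]

print(f"average postdecision surprise: {total_surprise / N_TRIALS:.2f}")
```

Even though every individual estimate is unbiased, the estimate of the chosen activity is systematically optimistic: selecting on the highest estimate also selects for positive noise.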

A huge problem for effective altruists facing uncertainty

In a simple model, I show how an optimizer with only moderate uncertainty about factors that determine opportunities’ cost-effectiveness may dramatically overestimate the cost-effectiveness of the opportunity that appears most promising. As uncertainty increases, the degree to which the cost-effectiveness of the optimal-looking program is overstated grows wildly.
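The pattern can be sketched with toy numbers (my own, not the model from the post): as the noise on unbiased estimates grows, so does the amount by which the best-looking option is overestimated.

```python
import random

random.seed(1)

def average_overestimate(noise_sd, n_options=20, n_trials=5_000):
    """Average amount the best-looking option's estimate exceeds its true value."""
    total = 0.0
    for _ in range(n_trials):
        # All options share the same true value (1.0), so any edge the
        # winner appears to have comes entirely from estimation noise.
        estimates = [1.0 + random.gauss(0, noise_sd) for _ in range(n_options)]
        total += max(estimates) - 1.0
    return total / n_trials

for sd in (0.1, 0.5, 1.0, 2.0):
    print(f"noise sd {sd}: winner overestimated by ~{average_overestimate(sd):.2f}")
```

With 20 options, the expected overestimate is roughly 1.87 times the noise standard deviation, so doubling the uncertainty doubles the inflation of the winner’s apparent value.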

I believe effective altruists should find this extremely concerning. They’ve considered a large number of causes. They often have massive uncertainty about the true importance of causes they’ve prioritized. For example, GiveWell acknowledges substantial uncertainty about the impact of deworming programs it recommends, and the Open Philanthropy Project pursues a high-risk, high-reward grantmaking strategy.

The optimizer’s curse can show up even in situations where effective altruists’ prioritization decisions don’t involve formal models or explicit estimates of expected value. Someone informally assessing philanthropic opportunities in a linear manner might have a thought like:

Thing X seems like an awfully big issue. Funding Group A would probably cost only a little bit of money and have a small chance leading to a solution for Thing X. Accordingly, I feel decent about the expected cost-effectiveness of funding Group A.

Let me compare that to how I feel about some other funding opportunities…

Although the thinking is informal, there’s uncertainty, potential for bias, and an optimization-like process.[2]

Previously proposed solution

The optimizer’s curse hasn’t gone unnoticed by impact-oriented philanthropists. Luke Muehlhauser, a senior research analyst at the Open Philanthropy Project and the former executive director of the Machine Intelligence Research Institute, wrote an article titled The Optimizer’s Curse and How to Beat It. Holden Karnofsky, the co-founder of GiveWell and the CEO of the Open Philanthropy Project, wrote Why we can’t take expected value estimates literally. While Karnofsky didn’t directly mention the phenomenon of the optimizer’s curse, he covered closely related concepts.

Both Muehlhauser and Karnofsky suggested that the solution to the problem is to make Bayesian adjustments. Muehlhauser described this solution as “straightforward.”[3] Karnofsky seemed to think Bayesian adjustments should be made, but he acknowledged serious difficulties involved in making explicit, formal adjustments.[4] Bayesian adjustments are also proposed in Smith & Winkler 2006.[5]

Here’s what Smith & Winkler propose (I recommend skipping it if you’re not a statistics buff):[6]

“The key to overcoming the optimizer’s curse is conceptually quite simple: model the uncertainty in the value estimates explicitly and use Bayesian methods to interpret these value estimates. Specifically, we assign a prior distribution on the vector of true values μ=(μ1,…,μn) and describe the accuracy of the value estimates V = (V1,…,Vn) by a conditional distribution V|μ. Then, rather than ranking alternatives based on the value estimates, after we have done the decision analysis and observed the value estimates V, we use Bayes’ rule to determine the posterior distribution for μ|V and rank and choose among alternatives based on the posterior means E[μi|V] for i = 1,…,n.”

For entities with lots of past data on both the (a) expected values of activities and (b) precisely measured, realized values of the same activities, this may be an excellent solution.
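To make the proposal concrete, here’s a minimal sketch of the simplest case: a normal prior combined with normally distributed estimation error. The specific numbers are illustrative assumptions, not figures from Smith & Winkler or any evaluator:

```python
# Normal-normal Bayesian shrinkage: the simplest instance of the
# correction Smith & Winkler describe. All numbers are made up.

PRIOR_MEAN = 1.0   # typical true value of an activity, per past data
PRIOR_VAR = 0.25   # how much true values vary across activities
NOISE_VAR = 1.0    # variance of the estimation error

def posterior_mean(estimate):
    """Precision-weighted average of the prior mean and the noisy estimate."""
    weight = PRIOR_VAR / (PRIOR_VAR + NOISE_VAR)  # 0.2 with these numbers
    return weight * estimate + (1 - weight) * PRIOR_MEAN

# An eye-popping estimate gets shrunk hard toward the prior...
print(posterior_mean(5.0))  # 0.2 * 5.0 + 0.8 * 1.0 = 1.8
# ...while a modest estimate barely moves.
print(posterior_mean(1.2))  # 0.2 * 1.2 + 0.8 * 1.0 = 1.04
```

Ranking options on these posterior means, rather than on raw estimates, is what counteracts the curse. Notice that the whole correction hinges on having a defensible prior mean and variance.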

In most scenarios where effective altruists encounter the optimizer’s curse, this solution is unworkable. The necessary data doesn’t exist.[7] The impact of most philanthropic programs has not been rigorously measured. Most funding decisions are not made on the basis of explicit expected value estimates. Many causes effective altruists are interested in are novel: there have never been opportunities to collect the necessary data.

The alternatives I’ve heard effective altruists propose involve attempts to approximate data-driven Bayesian adjustments as well as possible given the lack of data. I believe these alternatives either don’t generally work in practice or aren’t worth calling Bayesian.

To make my case, I’m going to first segue into some other topics.

Part 2: Models, wrong-way reductions, and probability

Models

In my experience, members of the effective altruism community are far more likely than the typical person to try to understand the world (and make decisions) on the basis of abstract models.[8] I don’t think enough effort goes into considering when (if ever) these abstract models cease to be appropriate for application.

This post’s opening quote comes from a great blog post by John D. Cook. In the post, Cook explains how Euclidean geometry is a great model for estimating the area of a football field—multiply field_length * field_width and you’ll get a result that’s pretty much exactly the field’s area. However, Euclidean geometry ceases to be a reliable model when calculating the area of truly massive spaces—the curvature of the earth gets in the way.[9] Most models work the same way. Here’s how Cook ends his blog post:[10]

Models are based on experience with data within some range. The surprising thing about Newtonian physics is not that it breaks down at a subatomic scale and at a cosmic scale. The surprising thing is that it is usually adequate for everything in between.

Most models do not scale up or down over anywhere near as many orders of magnitude as Euclidean geometry or Newtonian physics. If a dose-response curve, for example, is linear for observations in the range of 10 to 100 milligrams, nobody in his right mind would expect the curve to remain linear for doses up to a kilogram. It wouldn’t be surprising to find out that linearity breaks down before you get to 200 milligrams.

Wrong-way reductions

In a brilliant article, David Chapman coins the term “wrong-way reduction” to describe an error people commit when they propose tackling a complicated, hard problem with an apparently simple solution that, on further inspection, turns out to be more problematic than the initial problem. Chapman points out that regular people rarely make this kind of error. Usually, wrong-way reductions are motivated errors committed by people in fields like philosophy, theology, and cognitive science.

The problematic solutions wrong-way reductions offer often take this form:


“If we had [a thing we don’t usually have], then we could [apply a simple strategy] to authoritatively solve all instances of [a hard problem].”


People advocating wrong-way reductions often gloss over the fact that their proposed solutions require something we don’t have or engage in intellectual gymnastics to come up with something that can act as a proxy for the thing we don’t have. In most cases, these intellectual gymnastics strike outsiders as ridiculous but come off more convincing to people who’ve accepted the ideology that motivated the wrong-way reduction.

A wrong-way reduction is often an attempt to universalize an approach that works in a limited set of situations. Put another way, wrong-way reductions involve stretching a model way beyond the domains it’s known to work in.

An example

I spent a lot of my childhood in evangelical, Christian communities. Many of my teachers and church leaders subscribed to the idea that the Bible was the literal, infallible word of God. If you presented some of these people with questions about how to live or how to handle problems, they’d encourage you to turn to the Bible.[11]

In some cases, the Bible offered fairly clear guidance. When faced with the question of whether one should worship the Judeo-Christian God, the commandment, “You shall have no other gods before me”[12] gives a clear answer. Other parts of the Bible are consistent with that commandment. However, “follow the Bible” ends up as a wrong-way reduction because the Bible doesn’t give clear answers to most of the questions that fall under the umbrella of “How should one live?”

Is abortion OK? One of the Ten Commandments states, “You shall not murder.”[13] But then there are other passages that advocate execution.[14] How similar are abortion, execution, and murder anyway?

Should one continue dating a significant other? Start a business? It’s not clear where to start with those questions.

I intentionally used an example that I don’t think will ruffle too many readers’ feathers, but imagine for a minute what it’s like to be a person who subscribes to the idea that the Bible is a complete and infallible guide:

You see that the hard problem of deciding how to live has a demanding but straightforward solution! You frequently observe people—including plenty of mainstream Christians—experience failure and suffering when their actions don’t align with the Bible’s teachings.

You’re likely in a close-knit community with like-minded people. Intelligent and respected members of the community regularly turn to the Bible for advice and encourage you to do the same.

When you have doubts about the coherence of your worldview, there’s someone smarter than you in the church community you can consult. The wise church member has almost certainly heard concerns similar to yours before and can explain why the apparent issues or inconsistencies you’ve run into may not be what they seem.

A mainstream Christian from outside the community probably wouldn’t find the rationales offered by the church member compelling. An individual who’s already in the community is more easily convinced.[15]

Probability

The idea that all uncertainty must be explainable in terms of probability is a wrong-way reduction. More specifically, the idea that if one knows the probabilities and utilities of all outcomes, then she can always behave rationally in pursuit of her goals is a wrong-way reduction.

It’s not a novel proposal. People have been saying versions of this for a long time. The term Knightian uncertainty is often used to distinguish quantifiable risk from unquantifiable uncertainty.

As I’ll illustrate later, we don’t need to assume a strict dichotomy separates quantifiable risks from unquantifiable uncertainty. Instead, real-world uncertainty falls on something like a spectrum.

Nate Soares, the executive director of the Machine Intelligence Research Institute, wrote a post on LessWrong that demonstrates the wrong-way reduction I’m concerned about. He writes:[16]

It doesn’t really matter what uncertainty you call ‘normal’ and what uncertainty you call ‘Knightian’ because, at the end of the day, you still have to cash out all your uncertainty into a credence so that you can actually act.

I don’t think ignorance must cash out as a probability distribution. I don’t have to use probabilistic decision theory to decide how to act.

The physicist David Deutsch has tweeted on a related topic.

What is probability?

Probability is, as far as we know, an abstract mathematical concept. It doesn’t exist in the physical world of our everyday experience.[17] However, probability has useful, real-world applications. It can aid in describing and dealing with many types of uncertainty.

I’m not a statistician or a philosopher. I don’t expect anyone to accept that position based on my authority. That said, I believe I’m in good company. Here’s an excerpt from Bayesian statistician Andrew Gelman on the same topic:[18]

Probability is a mathematical concept. To define it based on any imperfect real-world counterpart (such as betting or long-run frequency) makes about as much sense as defining a line in Euclidean space as the edge of a perfectly straight piece of metal, or as the space occupied by a very thin thread that is pulled taut. Ultimately, a line is a line, and probabilities are mathematical objects that follow Kolmogorov’s laws. Real-world models are important for the application of probability, and it makes a lot of sense to me that such an important concept has many different real-world analogies, none of which are perfect.

Consider a handful of statements that involve probabilities:


  1. A hypothetical fair coin tossed in a fair manner has a 50% chance of coming up heads.

  2. When two buddies at a bar flip a coin to decide who buys the next round, each person has a 50% chance of winning.

  3. Experts believe there’s a 20% chance the cost of a gallon of gasoline will be higher than $3.00 by this time next year.

  4. Dr. Paulson thinks there’s an 80% chance that Moore’s Law will continue to hold over the next 5 years.

  5. Dr. Johnson thinks there’s a 20% chance quantum computers will commonly be used to solve everyday problems by 2100.

  6. Kyle is an atheist. When asked what odds he places on the possibility that an all-powerful god exists, he says “2%.”

I’d argue that the degree to which probability is a useful tool for understanding uncertainty declines as you descend the list.

  • The first statement is tautological. When I describe something as “fair,” I mean that it perfectly conforms to abstract probability theory.
  • In the early statements, the probability estimates can be informed by past experiences with similar situations and explanatory theories.
  • In the final statement, I don’t know what to make of the probability estimate.

The hypothetical atheist from the final statement, Kyle, wouldn’t be able to draw on past experiences with different realities (i.e., Kyle didn’t previously experience a bunch of realities and learn that some of them had all-powerful gods while others didn’t). If you push someone like Kyle to explain why they chose 2% rather than 4% or 0.5%, you almost certainly won’t get a clear explanation.

If you gave the same “What probability do you place on the existence of an all-powerful god?” question to a number of self-proclaimed atheists, you’d probably get a wide range of answers.[19]

I bet you’d find that some people would give answers like 10%, others 1%, and others 0.001%. While these probabilities can all be described as “low,” they differ by orders of magnitude. If probabilities like these are used alongside probabilistic decision models, they could have extremely different implications. Going forward, I’m going to call probability estimates like these “hazy probabilities.”
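A toy calculation makes the stakes of those order-of-magnitude differences visible. The payoff figure below is an arbitrary assumption of mine:

```python
# Toy expected-value model: EV = probability * payoff.
# "Low" probabilities that differ by orders of magnitude
# imply wildly different expected values.
PAYOFF = 1_000_000  # arbitrary stakes, in dollars

for p in (0.10, 0.01, 0.00001):
    print(f"p = {p}: expected value = ${p * PAYOFF:,.0f}")
```

Three probabilities that all sound “low” in conversation produce expected values of about $100,000, $10,000, and $10, which could drive completely different decisions in a probabilistic model.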

Placing hazy probabilities on the same footing as better-grounded probabilities (e.g., the odds a coin comes up heads) can lead to problems.

Part 3: Hazy probabilities and prioritization

Probabilities that feel somewhat hazy show up frequently in prioritization work that effective altruists engage in. Because I’m especially familiar with GiveWell’s work, I’ll draw on it for an illustrative example.[20] GiveWell’s rationale for recommending charities that treat parasitic worm infections hinges on follow-ups to a single study. Findings from these follow-ups are suggestive of large, long-term income gains for individuals that received deworming treatments as children.[21]

There were a lot of odd things about the study that make extrapolating to form expectations about the effect of deworming in today’s programs difficult.[22] To arrive at a bottom-line estimate of deworming’s cost-effectiveness, GiveWell assigns explicit, numerical values in multiple hazy-feeling situations. GiveWell faces similar haziness when modeling the impact of some other interventions it considers.[23]

While GiveWell’s funding decisions aren’t made exclusively on the basis of its cost-effectiveness models, they play a significant role. Haziness also affects other, less-quantitative assessments GiveWell makes when deciding what programs to fund. That said, the level of haziness GiveWell deals with is minor in comparison to what other parts of the effective altruism community encounter.

Hazy, extreme events

There are a lot of earth-shattering events that could happen and revolutionary technologies that may be developed in my lifetime. In most cases, I would struggle to place precise numbers on the probability of these occurrences.

Some examples:

  • A pandemic that wipes out the entire human race
  • An all-out nuclear war with no survivors
  • Advanced molecular nanotechnology
  • Superhuman artificial intelligence
  • Catastrophic climate change that leaves no survivors
  • Whole-brain emulations
  • Complete ability to stop and reverse biological aging
  • Eternal bliss that’s granted only to believers in Thing X

You could come up with tons more.

I have rough feelings about the plausibility of each scenario, but I would struggle to translate any of these feelings into precise probability estimates. Putting probabilities on these outcomes seems a bit like the earlier example of an atheist trying to precisely state the probability he or she places on a god’s existence.

If I force myself to put numbers on things, I have thoughts like this:

Maybe whole brain emulations have a 1 in 10,000 chance of happening in my lifetime. Eh, on second thought, maybe 1 in 100. Hmm. I’ll compromise and say 1 in 1,000.

An effective altruist might make a bunch of rough judgments about the likelihood of scenarios like those above, combine those probabilities with extremely hazy estimates about the impact she could have in each scenario and then decide which issue or issues should be prioritized. Indeed, I think this is more or less what the effective altruism community has done over the last decade.

When many hazy assessments are made, I think it’s quite likely that some activities that appear promising will only appear that way due to ignorance, inability to quantify uncertainty, or error.

Part 4: Bayesian wrong-way reductions

I believe the proposals effective altruists have made for salvaging general, Bayesian solutions to the optimizer’s curse are wrong-way reductions.

To make a Bayesian adjustment, it’s necessary to have a prior (roughly, a probability distribution that captures initial expectations about a scenario). As I mentioned earlier, effective altruists will rarely have the information necessary to create well-grounded, data-driven priors. To get around the lack of data, people propose coming up with priors in other ways.

For example, when there is serious uncertainty about the probabilities of different outcomes, people sometimes propose assuming that each possible outcome is equally probable. In some scenarios, this is a great heuristic.[24] In other situations, it’s a terrible approach.[25] To put it simply, a state of ignorance is not a probability distribution.
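The classic “cube factory” thought experiment (a standard example I’m borrowing, not the author’s) shows why indifference can’t be a general rule: a uniform prior over one description of an unknown quantity contradicts a uniform prior over an equivalent description of it.

```python
import random

random.seed(0)
N = 100_000

# A factory makes cubes with side length somewhere in (0, 2] meters.
# Question: what's the probability a cube's volume is at most 1 cubic meter?

# Indifference over SIDE LENGTH: uniform on (0, 2].
sides = [random.uniform(0, 2) for _ in range(N)]
p_via_side = sum(s ** 3 <= 1 for s in sides) / N  # volume <= 1 iff side <= 1

# Indifference over VOLUME: uniform on (0, 8].
volumes = [random.uniform(0, 8) for _ in range(N)]
p_via_volume = sum(v <= 1 for v in volumes) / N

# Same state of ignorance, same event, two incompatible answers.
print(f"uniform over side:   P(volume <= 1) ~ {p_via_side:.2f}")
print(f"uniform over volume: P(volume <= 1) ~ {p_via_volume:.2f}")
```

Both priors express total ignorance about the same cube, yet they assign probabilities of roughly 1/2 and 1/8 to the same event. Ignorance alone doesn’t pick out a distribution.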

Karnofsky suggests a different approach (emphasis mine):[26]

It’s my view that my brain instinctively processes huge amounts of information, coming from many different reference classes, and arrives at a prior; if I attempt to formalize my prior, counting only what I can name and justify, I can worsen the accuracy a lot relative to going with my gut…Rather than using a formula that is checkable but omits a huge amount of information, I’d prefer to state my intuition – without pretense that it is anything but an intuition – and hope that the ensuing discussion provides the needed check on my intuitions.

I agree with Karnofsky that we should take our intuitions seriously, but I don’t think intuitions need to correspond to well-defined mathematical structures. Karnofsky maintains that Bayesian adjustments to expected value estimates “can rarely be made (reasonably) using an explicit, formal calculation.” I find this odd, and I think it may indicate that Karnofsky doesn’t really believe his intuitions cash out as priors. To make an explicit, Bayesian calculation, a prior doesn’t need to be well-justified. If one is capable of drawing or describing a prior distribution, a formal calculation can be made.
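To illustrate that last point, here is a minimal formal calculation using normal distributions. All the numbers are invented for illustration; nothing here requires the prior to be well-justified, only describable:

```python
# A formal Bayesian adjustment requires only a describable prior.
# Suppose an explicit cost-effectiveness estimate says a charity produces
# 100 units of good per dollar, with large uncertainty (std dev 50), and a
# gut-level prior says charities in this reference class average 10 units
# per dollar (std dev 15). With normal distributions, the posterior mean
# is a precision-weighted average of the estimate and the prior.

def bayesian_adjustment(estimate, estimate_sd, prior_mean, prior_sd):
    """Posterior mean and std dev for a normal prior and a normal estimate."""
    estimate_precision = 1 / estimate_sd ** 2
    prior_precision = 1 / prior_sd ** 2
    posterior_mean = (
        (estimate * estimate_precision + prior_mean * prior_precision)
        / (estimate_precision + prior_precision)
    )
    posterior_sd = (estimate_precision + prior_precision) ** -0.5
    return posterior_mean, posterior_sd

mean, sd = bayesian_adjustment(estimate=100, estimate_sd=50,
                               prior_mean=10, prior_sd=15)
print(f"adjusted estimate: {mean:.1f} ± {sd:.1f}")
```

The hard part is not this arithmetic. It's whether an intuition really determines numbers like the prior's mean and standard deviation in the first place.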

I agree with many aspects of Karnofsky’s conclusions, but I don’t think what Karnofsky is advocating should be called Bayesian. It’s closer to standard reasonableness and critical thinking in the face of poorly understood uncertainty. Calling Karnofsky’s suggested process “making a Bayesian adjustment” suggests that we have something like a general, mathematical method for critical thinking. We don’t.

Similarly, taking our hunches about the plausibility of scenarios we have a very limited understanding of and treating those hunches like well-grounded probabilities can lead us to believe we have a well-understood method for making good decisions related to those scenarios. We don’t.

Many people have unwarranted confidence in approaches that appear math-heavy or scientific. In my experience, effective altruists are not immune to that bias.

Part 5: Doing better

When discussing these ideas with members of the effective altruism community, I felt that people wanted me to propose a formulaic solution—some way to explicitly adjust expected value estimates that would restore the integrity of the usual prioritization methods. I don’t have any suggestions of that sort.

Below I outline a few ideas for how effective altruists might be able to pursue their goals despite the optimizer’s curse and difficulties involved in probabilistic assessments.

Embrace model skepticism

When models are pushed outside the domains where they were built and tested, caution is warranted. Be especially skeptical when a model is presented as a universal method for handling problems.

Entertain multiple models

If an opportunity looks promising under a number of different models, it’s more likely to be a good opportunity than one that looks promising under a single model.[27] It’s worth trying to foster several different mental models for making sense of the world. For the same reason, surveying experts about the value of funding opportunities may be extremely useful. Some experts will operate with different models and thinking styles than I do. Where my models have blind spots, their models may not.

Test models

One of the ways we figure out how far models can reach is through application in varied settings. I don’t believe I have a 50-50 chance of winning a coin flip with a buddy for exclusively theoretical reasons. I’ve experienced a lot of coin flips in my life. I’ve won about half of them. By funding opportunities that involve feedback loops (allowing impact to be observed and measured in the short term), a lot can be learned about how well models work and when probability estimates can be made reliably.

Learn more

When probability assessments feel hazy, the haziness often stems from lack of knowledge about the subject under consideration. Acquiring a deep understanding of a subject may eliminate some haziness.

Position society

Since it isn’t possible to know the probability of all important developments that may happen in the future, it’s prudent to put society in a good position to handle future problems when they arise.[28]

Acknowledge difficulty

I know the ideas I’m proposing for doing better are not novel or necessarily easy to put into practice. Despite that, recognizing that we don’t have a reliable, universal formula for making good decisions under uncertainty has a lot of value.

In my experience, effective altruists are unusually skeptical of conventional wisdom, tradition, intuition, and similar concepts. Effective altruists correctly recognize deficiencies in decision-making based on these concepts. I hope that they’ll come to accept that, like other approaches, decision-making based on probability and expected value has limitations.


Huge thanks to everyone who reviewed drafts of this post or had conversations with me about these topics over the last few years!

Added 4/6/2019: There’s been discussion and debate about this post over on the Effective Altruism Forum.

Become Certified Awesome TODAY!

Hello friends!

Today I’m happy to announce a new, innovative project! The seeds of this idea were planted several months ago when I published Third-party Evaluation: Trophies for Everyone! In that post, I mentioned how legitimate companies seem surprisingly comfortable advertising awards from entities that totally lack credibility.

Since then, I’ve noticed more forms of bogus website endorsements. For example, Comodo Group’s trusted site seals:

A Comodo SSL trust seal indicates that the website owner has made customer security a top priority by securely encrypting all their transactions. This helps build confidence in the site and increases customer conversion rates…For a site seal to be effective, customers have to have confidence in the ‘endorsement brands’ that are on your site. If visitors are to trust you, they must trust the companies behind the logos on your site…Comodo is now the world’s largest SSL certificate authority and over 80 million PC’s and mobile devices are protected using Comodo desktop security solutions. That adds up to a lot of online visitors trusting you because they trust us.[1]

You can get these seals for free here. You don’t even have to verify that you’re using any kind of security! I indicated that I have a UCC SSL certificate. I don’t have one of those, but look at the cool seal I got!


UC SSL Certificate

SiteLock also offers cool security seals. They look like this:

That’s just an image for illustrative purposes. It’s not a real, verified seal. Getting an actual seal costs money and involves verification. The verification component is interesting. If SiteLock realizes a site is not safe for visitors, will the seal make that clear?

Nope!

If a scan fails site visitors will not be alerted to any problem. The SiteLock Trust Seal will simply continue to display the date of the last good scan of the website. If the site owner fails to rectify the problem SiteLock will remove the seal from the site and replace it with a single pixel transparent image within a few days. At no point will SiteLock display any indication to visitors that a website has failed a scan.[2]

All this got me thinking. What if I offered free, honest endorsement seals?

This idea had an obvious flaw: a total lack of credibility or credentials on my part. I decided it was time I got myself some credentials. I went to the Universal Life Church (ULC) website and began the arduous process of becoming an ordained minister. After painstakingly entering my personal details and clicking the “Get Ordained Instantly” button, I had my first credential:

My Universal Life Church Ordination

A few days later, I had physical proof:

A lot of people have been ordained by the ULC. To make sure people could know I’m really trustworthy, I went ahead and got a few less common credentials:

After acquiring my credentials, I spent an intense eight minutes creating a professional endorsement seal:

You can get one of these seals for your own website if you certify its awesomeness. If you’re not sure if your website is awesome, the book On Being Awesome: A Unified Theory of How Not to Suck might be able to help. Click below if you’re ready:

✔ Yes, my website is awesome!
Congratulations! Your website is now certified awesome and you have permission to use the Confusopoly Endorsement Seal™ displayed below. The seal can be shared with the following code:

<a href="https://confusopoly.com/2019/04/01/become-certified-awesome-today/"><img src="https://confusopoly.com/wp-content/uploads/2019/03/ConfusopolyEndorsment.png" width="800" height="600" class="aligncenter size-full wp-image-2338" /></a>

May the force be with you,
Dr. Christian Smith, PhD

Misleading Gimmicks from Consumer Reports

You better cut the pizza in four pieces because I’m not hungry enough to eat six.
Yogi Berra (allegedly)

The other day I received an envelope from Consumer Reports. It was soliciting contributions for a raffle fundraiser. The mailing had nine raffle tickets in it. Consumer Reports was requesting that I send back the tickets with a suggested donation of $9 (one dollar for each ticket). The mailing had a lot of paper:

The raffle had a grand prize that would be the choice of an undisclosed, top-rated car or $35,000. There were a number of smaller prizes bringing the total amount up for grabs to about $50,000.

The materials included a lot of gimmicky text. Things like:

  • “If you’ve been issued the top winning raffle number, then 1 of those tickets is definitely the winner of a top-rated car — or $35,000 in cash.”
  • “Why risk throwing away what could be a huge pay day?”
  • “There’s a very real chance you could be the winner of our grand prize car!”

Consumer Reports also indicates that they’ll send a free, surprise gift to anyone who donates $10 or more.

It feels funny to be asked to donate money based on the premise that I might win more than I donate, but I get it. Fundraising gimmicks work.

I don’t think it would be productive or even reasonable to pretend that humans are (or even ought to be) creatures that donate to non-profits for exclusively coldly-calculated, altruistic reasons.

That said, I get frustrated when the gimmicks border on dishonest. Doubly so when the deception comes from an organization like Consumer Reports that brands itself as committed to integrity.

One of the pieces of paper in the mailing came folded with print on each side. Here’s the first part that was visible:

Unfolding that paper and looking on the other side, I found a letter from someone involved in Consumer Reports marketing. The letter goes to some length to explain that it would be silly to not at least see if I had winning tickets. Here’s a bit of it:

It amazes me that among the many people who receive our Consumer Reports Raffle Tickets — containing multiple tickets, mind you, not just one — some choose not to mail them in. And they do this, despite the fact there is no donation required for someone to find out if he or she has won…So when people don’t respond it doesn’t make any sense to me at all.

This is ridiculous on several levels.

First, the multiple tickets bit is silly. It’s like the Yogi Berra line at the opening of the post. It doesn’t matter how many tickets I have unless I get more tickets than the typical person.

Second, it seems pretty obvious that Consumer Reports doesn’t care if a non-donor decides not to turn in tickets. The most plausible explanation for why Consumer Reports includes the orange letter is that people who would ignore the mailing may end up feeling guilty enough to make a donation. Checking the “I choose not to donate at this time, but please enter me in the Raffle” box on the envelope doesn’t feel great.

Finally, it makes perfect sense why I might not respond. Writing my name on each ticket, reading the materials, and mailing the tickets takes time. My odds of winning are low. I’d also have to pay for a stamp. Nothing about the rationale for not sending in the tickets is complicated. Consumer Reports knows that.

I’ll pretend that the only reason not to participate is that the stamp used to mail in the tickets isn’t free. That stamp is 55 cents at the moment.[1] Is my expected reward greater than 55 cents?

Consumer Reports has about 6 million subscribers.[2]

Let’s give Consumer Reports the benefit of the doubt and assume it can print everything, send initial mailings, handle the logistics of the raffle, and send gifts back to donors for only $0.50 per subscriber. That puts the promotion’s costs at about 3 million dollars. The $50,000 of prizes is trivial in comparison. Let’s assume that Consumer Reports runs the promotion based on the expectation that additional donations brought in will cover the promotion’s cost.

The suggested donation is $9. Let’s say the average, additional funding brought in by this campaign comes out to $10 per respondent.[3]

To break even, Consumer Reports needs to have 300,000 respondents.

With 300,000 respondents, nine tickets each, and $50,000 in prizes, the expected return is about 1.7 cents per ticket.[4] Sixteen cents total.[5] Not even close to the cost of a stamp.
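A back-of-the-envelope version of that arithmetic, ignoring the refinements in the footnotes (this simplified version lands a bit higher, around 1.9 cents per ticket, but the conclusion is the same):

```python
# Back-of-the-envelope expected value of mailing in the raffle tickets,
# using the break-even assumptions above.
subscribers = 6_000_000
cost_per_subscriber = 0.50               # printing, mailing, logistics, gifts
promotion_cost = subscribers * cost_per_subscriber   # $3,000,000
avg_donation = 10                        # additional funding per respondent

respondents = promotion_cost / avg_donation          # 300,000 to break even
tickets_in_play = respondents * 9                    # nine tickets each
total_prizes = 50_000

ev_per_ticket = total_prizes / tickets_in_play       # in dollars
stamp_cost = 0.55

print(f"expected value per ticket: {ev_per_ticket * 100:.1f} cents")
print(f"expected value for all nine: {ev_per_ticket * 9 * 100:.0f} cents")
print(f"cost of a stamp: {stamp_cost * 100:.0f} cents")
```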

4/12/2019 Update: I received a second, almost-identical mailing in early April.

Most Websites Are Not Resource-Intensive

I think many customers shopping for web hosting dramatically overestimate how demanding their website will be on a server.

Most individuals and small businesses create content-oriented websites that serve text and images. These websites usually only have modest numbers of visitors. They’re often built on content management systems like WordPress and Joomla. Neither system is particularly demanding.

Let’s use Confusopoly.com as an example. The hosting environment includes a WordPress installation, plugins, images, an email account with about 1,000 messages, and a few MySQL databases.[1] Total disk space used: under one gigabyte.

My homepage has a size of about 300 kilobytes. Let’s pessimistically assume that every page view requires all the content to be loaded (no files are stored in visitors’ browser caches). How many page loads could occur with just 100 gigabytes of bandwidth? Over 300,000.[2] Admittedly, my homepage is fairly lightweight. It’s not that unusual though. Wikipedia’s main page came in at a similar 350 kilobytes when I just tested it.

How about my Mobile Phone Service Confusopoly page that has a couple of images? It comes in at about 900 kilobytes. With 100 gigabytes of bandwidth and no caching, the page could be loaded over 100,000 times.[3]
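The arithmetic behind those page-load figures, under the same no-caching assumption:

```python
# How far 100 GB of monthly bandwidth goes, pessimistically assuming no
# browser caching (every page view re-downloads the full page).
bandwidth_bytes = 100 * 10**9            # 100 GB

page_sizes_kb = {
    "homepage (~300 KB)": 300,
    "image-heavy page (~900 KB)": 900,
}

loads = {
    label: bandwidth_bytes // (size_kb * 10**3)
    for label, size_kb in page_sizes_kb.items()
}

for label, n in loads.items():
    print(f"{label}: about {n:,} page loads")
```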

Average Download Speed Is Overrated

I’ve started looking into the methodologies used by entities that collect cell phone network performance data. I keep seeing an emphasis on average (or median) download and upload speeds when data-service quality is discussed.

  • Opensignal bases its data-experience rankings exclusively on download and upload speeds.[1]
  • Tom’s Guide appears to account for data-quality using average download and possibly upload speeds.[2]
  • RootMetrics doesn’t explicitly disclose how it arrives at final data-performance scores, but emphasis is placed on median upload and download speeds.[3]

It’s easy to understand what average and median speeds represent. Unfortunately, these metrics fail to capture something essential—variance in speeds.

For example, Opensignal’s latest report for U.S. networks shows that Verizon has the fastest average download speed of 31 Mbps in the Chicago area. AT&T’s average download speed is only 22 Mbps in the same area. Both of those speeds are easily fast enough for typical activities on a phone. At 22 Mbps, I could stream video, listen to music, or browse the internet seamlessly. For the rare occasion when I download a 100MB file, Verizon’s network at its average speed would beat AT&T’s by about 10.6 seconds.[4] Not a big deal for something I do maybe once a month.
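A quick check of that download-time arithmetic:

```python
# Time to download a 100 MB file at each network's average speed.
# Mbps means megabits per second; 100 MB is 800 megabits.
file_megabits = 100 * 8

verizon_mbps = 31
att_mbps = 22

verizon_seconds = file_megabits / verizon_mbps   # about 25.8 s
att_seconds = file_megabits / att_mbps           # about 36.4 s

print(f"Verizon saves about {att_seconds - verizon_seconds:.1f} seconds")
```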

On the other hand, variance in download speeds can matter quite a lot. If I have 31 Mbps speeds on average, but I occasionally have sub-1 Mbps speeds, it may sometimes be annoying or impossible to use my phone for browsing and streaming. Periodically having 100+ Mbps speeds would not make up for the inconvenience of sometimes having low speeds. I’d happily accept a modest decrease in average speeds in exchange for a modest decrease in variance.[5]