Network Evaluation Should Be Transparent

Several third-party firms collect data on the performance of U.S. wireless networks. Over the last few months, I’ve tried to dig deeply into several of these firms’ methodologies. In every case, I’ve found the public-facing information to be inadequate. My attempts to get additional information directly from some of the firms have also been unsuccessful.

It’s my impression that evaluation firms generally make most of their money by selling data access to network operators, analysts, and other entities that are not end consumers. If this were all these companies did with their data, I would understand the lack of transparency. However, most of these companies publish consumer-facing content. Often this takes the form of awards granted to network operators that do well in evaluations. It looks like network operators regularly pay third-party evaluators for permission to advertise the receipt of awards. I wish financial arrangements between evaluators and award winners were a matter of public record, but that’s a topic for another day. Today, I’m focusing on the lack of transparency around evaluation methodologies.

RootMetrics collects data on several different aspects of network performance and aggregates that data to form overall scores for each major network. How exactly does RootMetrics do that aggregation?

The results are converted into scores using a proprietary algorithm.[1]

I’ve previously written about how difficult it is to combine data on many aspects of a product or service to arrive at a single, overall score. Beyond that, there’s good evidence that different analysts working in good faith with the same raw data often make different analytical choices that lead to substantive differences in the results of their analyses. I’m not going to take it on faith that RootMetrics’ proprietary algorithm aggregates data in a highly defensible manner. No one else should either.
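To see how consequential those hidden analytical choices can be, consider a toy example. The carriers, scores, and weights below are entirely invented, and nothing here reflects RootMetrics’ actual algorithm; the point is just that two defensible weighting schemes applied to the same raw data can flip the overall ranking.

```python
# Hypothetical raw scores (0-100) for two carriers on three metrics.
scores = {
    "Carrier X": {"download": 90, "upload": 70, "reliability": 60},
    "Carrier Y": {"download": 70, "upload": 75, "reliability": 85},
}

def overall(carrier_scores, weights):
    """Aggregate per-metric scores into one score via a weighted sum."""
    return sum(carrier_scores[m] * w for m, w in weights.items())

# Analyst A believes download speed matters most.
weights_a = {"download": 0.6, "upload": 0.2, "reliability": 0.2}
# Analyst B believes reliability matters most.
weights_b = {"download": 0.2, "upload": 0.2, "reliability": 0.6}

for name, w in [("Analyst A", weights_a), ("Analyst B", weights_b)]:
    ranking = sorted(scores, key=lambda c: overall(scores[c], w), reverse=True)
    print(name, "ranks:", ranking)
# Analyst A puts Carrier X first; Analyst B puts Carrier Y first.
```

Both weightings are reasonable, yet they disagree about which network is “best.” Without knowing the weights, an overall score is nearly impossible to evaluate.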

Opensignal had a long history of giving most of its performance awards to T-Mobile.[2] Earlier this year, the trend was broken when Verizon took Opensignal’s awards in most categories.[3] It’s not clear why Verizon suddenly became a big winner. The abrupt change strikes me as more likely to have been driven by a change in methodology than a genuine change in the performance of networks relative to one another. Since little is published about Opensignal’s methodology, I can’t confirm or disconfirm my speculation. In Opensignal’s case, questions about methodology are not trivial. There’s good reason to be concerned about possible selection bias in Opensignal’s analyses. Opensignal’s Analytics Charter states:[4]

Our analytics are designed to ensure that each user has an equal impact on the results, and that only real users are counted: ‘one user, one vote’.

Carriers will differ in the proportion of their subscribers that live in rural areas versus densely populated areas. If the excerpt from the analytics charter is taken literally, it may suggest that Opensignal does not control for differences in subscribers’ geography or demographics. That could explain why T-Mobile has managed to win so many Opensignal awards when T-Mobile obviously does not have the best-performing network at the national level.

Carriers advertise awards from evaluators because third parties are perceived to be credible. The public deserves to have enough information to assess whether third-party evaluators merit that credibility.

Fancy Phones: Now An Even Worse Deal

About twelve years ago, Apple released the first iPhone.[1] While it was an expensive device, the original iPhone had all sorts of features that the competition lacked. Of course, the first iPhone is a pretty extreme example.

Even in 2012, the year the iPhone 5 was released, I felt that there were significant differences between the latest, high-end phones and phones that were sold at lower price points.[2] I didn’t think the latest, greatest devices were actually worth the cost for the typical consumer back then, but I could at least understand the appeal of those fancy devices.

Today, things are different. Hardware has continued to improve, but it’s not clear that hardware improvements have had much to offer to the typical consumer. Today, you can get an unlocked Motorola G6 or G6 Play without any carrier subsidy for less than $200.[3]

The G6 performs well for the sorts of things typical consumers use their phones for. The phone’s cameras are pretty good. It has a fingerprint reader. Hell, the phone even does pretty well in terms of aesthetics. I’m struggling to come up with meaningful limitations it has compared to higher-end phones. It’s not waterproof?

Perhaps the high quality of low-end phones these days explains why the latest iPhone models haven’t sold well. My point is not that higher-end phones don’t have better features. They do. Having the best phone these days may matter if you’re an Instagram star, you want to play the most intense mobile video games with the highest possible performance, or you want to make your friends jealous. On the other hand, if you use your cell phone for pretty typical purposes, you can save a lot of money without forgoing many useful features.

Magical Growth in Subscriber Numbers

Yesterday, one of my favorite journalists covering wireless, Mike Dano, published an article with the title “Why Wireless Carriers Magically Keep Growing Every Quarter.”

Dano notes that there’s been a roughly 2.5% growth in wireless subscribers for each of the past several quarters. This growth rate is tricky to make sense of:

[MoffettNathanson analysts] noted that the industry’s growth rate appears to be outstripping population growth rates and the growing number of teenagers getting phones, and isn’t attributable to other factors like the growing number of secondary phone-type devices like the Apple Watch. ‘The most likely answer appears to be the simplest,’ wrote the MoffettNathanson analysts. ‘Carriers are offering free or partially subsidized phones in return for adding additional lines.’

They continued: ‘It is all but certain that some customers have taken advantage of these offers even if it means adding a line they don’t need, and won’t use. The customer would simply reassign the new BOGO handset to an existing (used) line, moving an old unwanted handset to the new (unused) line.’

I’m not convinced this is the simplest explanation. A lot of consumers would find the process of adding a line to take advantage of a buy-one-get-one (BOGO) offer and then switching service between devices complicated or sketchy. Around 2012, massive phone subsidies on post-paid plans were extremely common. At the time, I was involved in the cell phone resale business. I noticed that a surprising number of people were eligible for subsidized upgrades but not using them. In these scenarios, a subscriber could upgrade to a new ~$400 device for free, switch back to an old device, and quickly resell the new device. Even though the opportunity was relatively simple, I got the impression that people rarely took advantage of it.

The other problem I have with the explanation is that if consumers are taking advantage of BOGO offers in large numbers, carriers ought to notice what’s going on. Perhaps some carriers want to pad their subscriber numbers, but I find it unlikely that there’s an industry-wide willingness to pad subscriber numbers today since that will lead to higher churn in a year or two. I would guess that some carriers seeing consumers regularly add new lines to get free devices would be inclined to promote device-financing options. I expect financing options would often be simpler for consumers and more profitable for carriers.

That said, it’s pretty clear that at least some new lines come from those taking advantage of BOGO offers. A recent FCC filing stated the following (emphasis mine):

Sprint’s postpaid net additions recently have been driven by ‘free lines’ offered to Sprint customers and the inclusion of less valuable tablet and other non-phone devices, as well as pre to post migrations that do not represent ‘new’ Sprint customers.[1]

What I wonder is whether BOGO offers are the primary driver of unexpected growth. Dano mentions a claim found in a recent Wall Street Journal article (paywalled) that’s based on work from New Street Research:

Telecom consultant New Street Research estimated that customers signing unneeded wireless contracts to pocket more valuable smartphones added 1.7 million ‘fake’ lines to cellphone carriers’ tallies in 2018.

If we take that number at face value, that’s roughly 400,000 lines per quarter.[2] However, BOGO offers are not new. They reached their peak several years ago. For “fake” BOGO lines to be driving growth, there must be more “fake” lines getting activated now than there are “fake” lines falling off.[3] From my vantage point, it looks like BOGO offers might be less appealing than they were in the past. It used to be the case that devices that had been out for a few years were substantially worse than recently released devices. That seems less true today. Recent declines in iPhone sales may indicate that other people feel the way I do.
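The back-of-the-envelope math here is simple. Spreading New Street’s 1.7 million estimate evenly across 2018 gives roughly 425,000 “fake” lines per quarter, and those lines only contribute to net growth to the extent that new activations outpace old “fake” lines churning off. The churn figure below is an assumption made purely for illustration:

```python
# New Street Research's 2018 estimate, spread evenly across four quarters.
fake_adds_per_quarter = 1_700_000 / 4  # = 425,000

# Assumed for illustration: if older "fake" lines drop off at a similar
# rate, the net contribution to subscriber growth is small.
fake_drops_per_quarter = 400_000
net_fake_growth = fake_adds_per_quarter - fake_drops_per_quarter

print(f"{fake_adds_per_quarter:,.0f} adds, net growth {net_fake_growth:,.0f}")
# prints "425,000 adds, net growth 25,000"
```

In other words, even a large stock of “fake” lines produces little growth once additions and drop-offs roughly balance.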

What else might explain the large growth in subscriber numbers? On Twitter, industry analyst Roger Entner mentioned that the growth could be due to subscribers transferring off of the Lifeline subsidization program.

It’s an interesting puzzle, and I might just be missing something. Despite my skepticism, I still don’t think it’s implausible that BOGO promotions really are driving lots of growth in subscriber numbers.

Lies, Damned Lies, and AT&T’s 5GE

There are three kinds of lies: lies, damned lies, and statistics.
– Benjamin Disraeli*

Fortunately, the sentiment behind this quote isn’t always accurate. AT&T has recently taken a lot of heat for misleadingly branding advanced 4G networks as “5GE.” Ian Fogg at Opensignal published a post where he draws on Opensignal’s data to assess how AT&T’s 5GE-enabled phones perform compared to similar phones on other carriers. The results suggested that the networks deliver similar speeds on average.[1]

In response to AT&T’s misleading branding, Verizon launched a video advertisement showing a head-to-head speed comparison between Verizon’s network and AT&T’s 5GE network.

In that video, Verizon’s 4G LTE network comes out with a download speed near 120 Mbps while AT&T’s 5GE network comes out around 40 Mbps. That, of course, seems funny given the Opensignal data suggesting the networks deliver similar speeds on average.

A portion of the Verizon video—not long enough to show the final results—showed up in a Twitter ad. That ad led to a Twitter exchange between myself; Light Reading’s editorial director, Mike Dano; and Verizon’s PR manager, Steven Van Dinter. Van Dinter explained that Verizon chose to film in a public spot where AT&T’s 5GE signal was very strong. I take Van Dinter’s word that there wasn’t foul play or blatant manipulation, but it is funny to see Verizon fighting misleading branding from AT&T with misleading ads of its own.

Average Download Speed Is Overrated

I’ve started looking into the methodologies used by entities that collect cell phone network performance data. I keep seeing an emphasis on average (or median) download and upload speeds when data-service quality is discussed.

  • Opensignal bases its data-experience rankings exclusively on download and upload speeds.[1]
  • Tom’s Guide appears to account for data-quality using average download and possibly upload speeds.[2]
  • RootMetrics doesn’t explicitly disclose how it arrives at final data-performance scores, but emphasis is placed on median upload and download speeds.[3]

It’s easy to understand what average and median speeds represent. Unfortunately, these metrics fail to capture something essential—variance in speeds.

For example, Opensignal’s latest report for U.S. networks shows that Verizon has the fastest average download speed in the Chicago area, at 31 Mbps. AT&T’s average download speed is only 22 Mbps in the same area. Both of those speeds are easily fast enough for typical activities on a phone. At 22 Mbps, I could stream video, listen to music, or browse the internet seamlessly. For the rare occasion where I download a 100 MB file, Verizon’s network at its average speed would beat AT&T’s by about 10.6 seconds.[4] Not a big deal for something I do maybe once a month.
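That 10.6-second figure is just the difference in transfer time for 800 megabits (100 MB) at each network’s average speed, ignoring real-world overhead:

```python
FILE_SIZE_MB = 100
file_megabits = FILE_SIZE_MB * 8  # 100 MB = 800 megabits

def download_seconds(speed_mbps):
    """Transfer time at a constant speed, ignoring protocol overhead."""
    return file_megabits / speed_mbps

verizon = download_seconds(31)  # ~25.8 seconds
att = download_seconds(22)      # ~36.4 seconds
print(f"Difference: {att - verizon:.1f} seconds")
# prints "Difference: 10.6 seconds"
```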

On the other hand, variance in download speeds can matter quite a lot. If I have 31 Mbps speeds on average, but I occasionally have sub-1 Mbps speeds, it may sometimes be annoying or impossible to use my phone for browsing and streaming. Periodically having 100+ Mbps speeds would not make up for the inconvenience of sometimes having low speeds. I’d happily accept a modest decrease in average speeds in exchange for a modest decrease in variance.[5]
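To make the point concrete, here’s a toy comparison (the speed samples are invented): two networks with the same 31 Mbps average, where one delivers usable speeds consistently and the other mixes very fast samples with sub-1 Mbps dead spots.

```python
from statistics import mean

# Hypothetical speed samples (Mbps); both sets average 31 Mbps.
steady = [29, 31, 33, 30, 32]
bursty = [0.8, 120, 0.9, 30, 3.3]

def share_unusable(samples, floor_mbps=5):
    """Fraction of samples too slow for comfortable streaming/browsing."""
    return sum(s < floor_mbps for s in samples) / len(samples)

print(mean(steady), mean(bursty))                      # both ~31 Mbps
print(share_unusable(steady), share_unusable(bursty))  # 0.0 vs 0.6
```

The averages are identical, but only the steady network would feel reliable in practice. A variance or low-percentile metric would capture the difference; an average won’t.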

Issues with Consumer Reports’ 2017 Cell Phone Plan Rankings

Consumer Reports offers ratings of cellular service providers based on survey data collected from Consumer Reports subscribers. Through subscriber surveying in 2017, Consumer Reports collected data on seven metrics:[1]

  1. Value
  2. Data service quality
  3. Voice service quality
  4. Text service quality
  5. Web service quality
  6. Telemarketing call frequency
  7. Support service quality

I don’t understand why data service and web service were evaluated with separate metrics. Unfortunately, Consumer Reports doesn’t offer much information about the methodology behind the survey.

The surveys collected data from over 100,000 subscribers.[2] I believe Consumer Reports would frown upon a granular discussion of the exact survey results, so I’ll remain vague about exact ratings in this post. If you would like to see the full results of their survey, Consumer Reports subscribers can do so here.

Survey results

Results are reported for 20 service providers. Most of these providers are mobile virtual network operators (MVNOs). MVNOs don’t operate their own network hardware but make use of other companies’ networks. For the most part, MVNOs use networks provided by the Big Four (Verizon, Sprint, AT&T, and T-Mobile).

Interestingly, the Big Four do poorly in Consumer Reports’ evaluation. Verizon, AT&T, and Sprint receive the lowest overall ratings and take the last three spots. T-Mobile doesn’t do much better.

This is surprising. The Big Four do terribly, even though most MVNOs are using the Big Four’s networks. Generally, I would expect the Big Four to offer network access to their direct subscribers that is as good or better than the access that MVNO subscribers receive.

It’s possible that the good ratings can be explained by MVNOs offering prices and customer service far better than the Big Four—making them deserving of the high ratings for reasons separate from network quality.

Testing the survey’s validity

To test the reliability of Consumer Reports’ methodology, we can compare MVNOs to the Big Four using only the metrics about network quality (ignoring measures of value, telemarketing call frequency, and support quality). In many cases, MVNOs use more than one of the Big Four’s networks. However, several MVNOs use only one network, allowing for easy apples-to-apples comparisons.[3]

  • Boost Mobile is owned by Sprint.
  • Virgin Mobile is owned by Sprint.
  • Cricket Wireless is owned by AT&T.
  • MetroPCS is owned by T-Mobile.
  • GreatCall runs exclusively on Verizon’s network.
  • Page Plus Cellular runs exclusively on Verizon’s network.

When comparing network quality ratings between these MVNOs and the companies that run their networks:

  • Boost Mobile’s ratings beat Sprint’s ratings in every category.
  • Virgin Mobile’s ratings beat Sprint’s ratings in every category.
  • Cricket Wireless’s ratings beat or tie AT&T’s ratings in every category.
  • MetroPCS’s ratings beat or tie T-Mobile’s ratings in every category.
  • GreatCall doesn’t have a rating for web quality due to insufficient data. GreatCall’s ratings match or beat Verizon in the other categories.
  • Page Plus Cellular doesn’t have a rating for web quality due to insufficient data. Page Plus’ ratings match or beat Verizon in the other categories.
Taken at face value, these are odd results. There are complicated stories you could tell to salvage the results, but I think it’s much more plausible that Consumer Reports’ surveys just don’t work well for evaluating the relative quality of cell phone service providers.

Why aren’t the results reliable?

I’m not sure why the surveys don’t work, but I see three promising explanations:

  • Metrics may not be evaluated independently. For example, consumers might take a service’s price into account when providing a rating of its voice quality.
  • Lack of objectivity. Consumers may not provide objective evaluations. Perhaps consumers are aware of some sort of general stigma about Sprint that unfairly affects how they evaluate Sprint’s quality (but that same stigma may not be applied to MVNOs that use Sprint’s network).
  • Selection bias. Individuals who subscribe to one carrier are probably, on average, different from individuals who subscribe to another carrier. Perhaps individuals who have used Carrier A tend to use small amounts of data and are lenient when rating data service quality. Individuals who have used Carrier B may get more upset about data quality issues. Consumer Cellular took the top spot in the 2017 rankings. I don’t think it’s coincidental that Consumer Cellular has pursued branding and marketing strategies to target senior citizens.[4]

Consumer Reports’ website gives the impression that their cell phone plan rankings will be reliable for comparison purposes.[5] They won’t be.

The ratings do capture whether survey respondents are happy with their services. However, the ratings have serious limitations for shoppers trying to assess whether they’ll be satisfied with a given service.

I suspect Consumer Reports’ ratings for other product categories that rely on similar surveys will also be unreliable. However, the concerns I’m raising only apply to a subset of Consumer Reports’ evaluations. A lot of Consumer Reports’ work is based on product testing rather than consumer surveys.