
Are You a Two-Star Passenger? The Problem with Uber’s Hidden Customer Rating System

Uber has been hailed as one of the success stories in “disruptive innovation” — giving customers the ability to order cabs from their phones without having to hail a taxi. This weekend, a little-known Uber internal practice — allowing Uber drivers to rate passengers — came to light, upsetting consumers and highlighting the need for transparency when it comes to what companies do with data. Uber’s core value proposition is making it easier to get a cab, no matter the time of day, your location, or what you look like. But if passengers receive lower ratings from drivers for specious or unfair reasons, patterns of discrimination could repeat themselves, negating the benefit of convenience.

In a Medium post on Sunday, software engineer Aaron Landy described the practice. Both drivers and customers are assigned a rating, but while most customers are aware they can rate their drivers, they seem to have no idea that they’re being rated themselves. When you request an Uber and are assigned a driver, you get to see the driver’s contact information and rating; presumably the reverse is true as well. In his post, Landy explained a workaround — which Uber has since disabled — that allowed customers to figure out their rating.

While there are clearly reasons that having passenger ratings may be helpful — for example, if a customer is abusive or inappropriate — Uber should have done a better job of communicating this practice to its customers. Moreover, even if individuals did know about the rating system, they had no easy way of finding out what their ratings were. There also doesn’t seem to be a way to correct or respond to inaccurate or unfair ratings (for example, if a driver rated a passenger more poorly because of race, gender, sexual orientation, or other suspect classifications), and users have no way of knowing what criteria they’re being rated on.

These kinds of private scoring systems are prone to abuse if companies don’t publicize them or make the ratings sufficiently transparent. The system Uber uses to rate passengers could easily come to function like the credit rating system, which lenders use to determine which customers to extend credit to. Presumably, Uber drivers use passenger ratings to decide which fares to pick up and which to decline. But there are key differences. Under the credit rating system, individuals can request their credit reports for free, once per year, from each of the three credit reporting agencies, and they can correct inaccurate information that goes into determining their scores. Neither of these protections is available to Uber customers.

The credit rating system is designed to reduce discrimination in lending — a shameful recent chapter in this country’s history — and the Fair Credit Reporting Act (FCRA) helps consumers feel confident that such discrimination has been mitigated (though probably not eliminated). FCRA also helps promote accuracy in credit reports; before its passage, reports were riddled with errors yet were still treated as accurate.

If companies are going to derive similar types of scores from our electronic transactions and treat those scores as illustrative of consumer behavior, they need to be more upfront about their practices and offer consumers avenues for redress. These principles are central to the Fair Information Practice Principles, which CDT has advocated as a necessary framework to govern data broker practices and other types of scoring systems. Notice, transparency, and redress are crucial to promoting consumer trust and reducing the risk that these types of scores are used inappropriately. We hope that Uber and other companies that rate their customers take these principles to heart. If not, Congress should consider broadening the application of FCRA in order to prevent discrimination.