Uber’s Fingerprinting Foibles and the Costs of Not Complying with Industry Self-Regulation
Written by Joseph Jerome
No stranger to privacy kerfuffles, Uber is once again in the news for its business practices and invasive use of technology. This time, the headlines are focused on Uber’s intentional circumvention of Apple’s developer rules, which prohibit apps from collecting certain technical identifiers from iPhones. Uber went so far as to obscure its collection practices from Apple by geofencing the company’s headquarters.
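To make the geofencing tactic concrete, here is a minimal sketch of how such a check could work: if the device reports coordinates near Apple's headquarters, the offending code path simply stays dormant. The coordinates, radius, and function names are illustrative assumptions, not Uber's actual implementation.

```python
import math

# Hypothetical geofence check. The coordinates (near 1 Infinite Loop,
# Cupertino) and the 5 km radius are assumptions for illustration only.
APPLE_HQ = (37.3318, -122.0312)
FENCE_RADIUS_KM = 5.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(lat, lon):
    """True if the device is within the fence around Apple HQ."""
    return haversine_km(lat, lon, *APPLE_HQ) <= FENCE_RADIUS_KM

# A device in Cupertino falls inside the fence; one in New York does not,
# so only reviewers at Apple would see the "clean" behavior.
print(inside_fence(37.33, -122.03))  # near Apple HQ
print(inside_fence(40.71, -74.01))   # New York
```

The unsettling part is how little code this takes: a dozen lines of arithmetic is enough to show one behavior to an app reviewer and another to everyone else.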
The story points to how pervasive fingerprinting technologies have become, tagging and identifying individual web browsers, mobile devices, and even motor vehicles, with little to no notice or choice available to consumers. But it also demonstrates some of the limitations of using developer guidelines and self-regulation as the primary privacy backstop online.
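The core of fingerprinting is simple: collect a handful of stable attributes, hash them, and you have a pseudonymous identifier that survives cookie deletion or an app reinstall. The sketch below assumes a generic attribute set; the names are invented for illustration and do not reflect any vendor's actual schema.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a set of device attributes into a stable identifier.

    Canonicalizing with sorted keys means the same attributes always
    produce the same ID, regardless of the order they were collected in.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical attribute set; real trackers may use dozens of signals.
device = {
    "model": "iPhone 6s",
    "os_version": "9.3.1",
    "screen": "750x1334",
    "timezone": "America/Los_Angeles",
    "language": "en-US",
}
print(fingerprint(device))  # same device, same ID, no cookie required
```

Note what is missing: no consent prompt, no settings toggle, nothing the user can clear. That asymmetry is what makes the notice-and-choice model so poorly suited to fingerprinting.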
What consequences came from this flagrant violation of the rules? Uber’s CEO was summoned to a meeting at Apple, where he was told to stop the practice or see Uber removed from the App Store. Uber agreed to the demand and continues to fingerprint iOS devices in line with Apple’s rules. In effect, the consequences were a slap on the wrist. It’s also unclear whether and to what degree Uber is a special case, or what criteria actually make an app worthy of being booted from the App Store.
This isn’t the first time a company has played fast and loose with both Apple’s developer guidelines and consumers’ expectations. Just last year, one advertising network was able to get around device location settings by inferring an individual’s physical location by pinging WiFi networks and comparing them to a network database. This technical trickery has forced Apple to play an unwanted game of privacy whack-a-mole, and it is part of the reason that Apple has had to limit access to universal device identifiers, change how the “Limit Ad Tracking” features work, and even take steps to randomize device MAC addresses.
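The WiFi workaround described above exploits the fact that nearby network identifiers are themselves a location signal: look them up in a wardriving-style database of known networks and you can place a device without ever touching its location settings. A hedged sketch, with an invented database and made-up network identifiers:

```python
# Illustrative database mapping WiFi BSSIDs (access point hardware
# addresses) to known coordinates. Real lookup databases of this kind
# contain hundreds of millions of entries; these are invented examples.
BSSID_DB = {
    "aa:bb:cc:00:11:22": (37.7749, -122.4194),  # e.g. a San Francisco cafe
    "aa:bb:cc:33:44:55": (40.7128, -74.0060),   # e.g. a New York office
}

def infer_location(visible_bssids):
    """Estimate position by averaging coordinates of recognized networks.

    Returns (lat, lon), or None if no visible network is in the database.
    """
    hits = [BSSID_DB[b] for b in visible_bssids if b in BSSID_DB]
    if not hits:
        return None
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)

# One recognized network is enough to place the device in a city,
# even with the OS-level location permission denied.
print(infer_location(["aa:bb:cc:00:11:22", "ff:ff:ff:ff:ff:ff"]))
```

The lesson for platform owners is that a permission toggle only governs the APIs it wraps; any side channel that correlates with location has to be locked down separately, which is precisely the whack-a-mole dynamic described above.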
These constant privacy and security violations arise because of a systemic problem with privacy enforcement in the United States. Absent sector-specific rules for certain categories of information, privacy practices are generally left for industry to determine. The Federal Trade Commission’s privacy enforcement rests largely on its ability to police deceptive statements, which has effectively turned unclear and vague privacy statements into a legal shield. Self-regulatory codes of conduct layer on additional disclosures, but they vary in their specificity and general applicability, particularly outside the narrow context of marketing. Technical compliance terms from major players like Apple, Google, and Amazon may also attempt to restrict what mobile applications, wearables, and other IoT technologies can do. But how consistently these obligations are enforced is hard to determine, and while Apple may devote time and resources to rooting out violations of its rules, it is ultimately beholden to its bottom line. The end result is that technology companies and Silicon Valley start-ups operate under rules loose enough to let them seek any advantage they can.
Perhaps that is part of an innovative business, but as Uber’s behavior demonstrates, there is no real clarity about the consequences companies should face for failing to follow what rules do exist. Accountability has been held up as an organizing principle for enforcing industry compliance with privacy rules, but we haven’t determined what the actual cost of non-compliance should be. For nearly two decades now, we have discussed what constitutes meaningful self-regulation without any clear insight into what should happen when there are violations. Should Uber’s flagrant violation of Apple’s developer terms, for example, be viewed as something like a misdemeanor or a felony? Uber’s ultimate sanction appears to be a meeting and a minimal shift in its operating practices. Is that fair? Is that consistent with the consequences other developers face? The answers to all of these questions are unknown to regulators and the public, greatly reducing the meaningfulness of developer terms and industry codes of conduct.
The larger challenge is determining whether Uber’s violation of Apple’s developer terms could or should raise regulatory ire. The Federal Trade Commission has held companies liable under Section 5 for potentially misleading statements that are made to other businesses, particularly if those representations affect consumers, but that raises tough questions about business-to-business contracts and developer terms of service. Sanctions should be tailored to fit the crime, but when it comes to privacy and security mishaps with technology, consumers and their advocates are left being fingerprinted in the dark.