A “Smart Wall” That Fails to Protect Privacy and Civil Liberties Is Not Smart
Written by Mana Azarmi
Congress is working on a border security spending package ahead of the February 15th deadline before another partial government shutdown. Democratic leadership’s vision for border security is a “smart wall,” a non-physical barrier composed of technology like drones and sensors. Last Thursday, Democrats published their initial border funding proposal, which provided more detail: it would give U.S. Customs and Border Protection (CBP) $400 million for border security technology procurement and deployment. While the deal is negotiated, we urge Congress not to write CBP a no-strings-attached check to build a “smart wall.” Technology is not a panacea for the problems at the border.
Congress should take note of the following:
Some Technology is Particularly Invasive
CBP currently deploys a variety of sensors, radar, thermal imaging devices, and cameras along the border, as well as drones and other types of aerial surveillance, automatic license plate readers, and biometric collection and identification systems like facial recognition technology. Identification systems can be particularly invasive, as these technologies allow the government to monitor, identify, and track individuals. Individuals living near the border currently bear the brunt of border surveillance, and should not be subject to the constant warrantless monitoring that such surveillance tools facilitate. These are the technologies that demand more careful deliberation before Congress authorizes continued or expanded funding.
Some of these surveillance technologies are ineffective or can lead to “false positive” identification of individual targets. A 2015 DHS Inspector General (IG) review of CBP’s drone program determined that drones are “dubious achievers” and expensive to operate. The IG concluded that “[n]otwithstanding the significant investment, we see no evidence that the drones contribute to a more secure border, and there is no reason to invest additional taxpayer funds at this time.” CBP has also solicited small drones equipped with facial recognition technology. We urge Congress not to approve funding for this kind of technology. Accuracy concerns loom over facial recognition—research demonstrates that error rates are not evenly distributed across race and gender. CBP seeks to use this technology to identify the border crossers Border Patrol officers encounter in the field. A mistaken match could result in a fatality—the stakes are simply too high to green-light funding for this technology.
If Congress decides to fund technology to surveil people at the “border,” such funding must be conditioned upon safeguards to ensure the preservation of rights. One such condition is a geographic limitation on use. CBP claims the authority to operate in the entire border zone, which encompasses any land within 100 miles of the actual border of the U.S.—an area containing over 200 million people. Some states are entirely enveloped in the border zone. CBP conducts surveillance with few limitations in this area. For example, CBP’s Federal Aviation Administration (FAA) drone authorization allows CBP drones to operate along and within 100 miles of the northern border, and along and within 25 to 60 miles of the southern border. CBP operates automatic license plate readers (ALPRs) at ports of entry and checkpoints, and claims the authority to set up ALPRs anywhere within the border zone. This technology, which automatically collects a car’s license plate number and location with a timestamp, can be used to create detailed maps of individual movements. CBP claims authority to gather and analyze all of this data without a warrant. Efforts have been made to limit CBP’s ability to operate these checkpoints to within 25 miles of the border. A similar restriction on drone surveillance would help rein in border security missions that can become, and have become, overbroad.
Congress should also limit the extent to which CBP shares and retasks its surveillance technology. CBP shares its drones with police departments, and retasks them to support non-Border Patrol missions. Data collected from these technologies must be subject to stringent collection, retention, sharing, and use limitations. Finally, any procurement and deployment of a technology should be subject to an independent evaluation of rights compliance and efficacy in accomplishing the purpose for which it was deployed.
Surveillance At The Border Will Not Stay at The Border
The tools we provide CBP today will be the tools of tomorrow’s law enforcement. Technology is tested on border crossers and border communities. CBP normalizes this surveillance and serves as a bridge for technology to move from war zones like Afghanistan into the interior of the United States. In the past, CBP has shared technology with law enforcement, which can lead to efforts by local police to procure their own. Drones and facial recognition technology are examples of this border-to-police pipeline. Congress should be aware that it is shaping not only border security, but also the future of law enforcement.
To this end, Congress should take steps to ensure that broader uses of technology in the interior are, and will remain, rights-respecting. The limitations discussed above would help achieve these goals—restricting retasking and use, as well as requiring a study of effectiveness. Additionally, Congress should require CBP to study the impact its use of technology has on the privacy rights of individuals living in the United States, and identify means of mitigating the encroachments it identifies. For example, such a study could lead CBP to identify tools and techniques that are less invasive while still allowing it to achieve its mission.
We’ve Been Here Before: Smart Wall, Meet Smart Fence
The fervor to solve border security with technology is not new. Before anyone had uttered the words “smart wall,” there was a “smart fence.” In 2005, DHS launched the Secure Border Initiative (SBI), which called for physical fencing at the U.S.-Mexico border, complemented by a “virtual fence” of cameras and sensors. This second layer, termed SBInet, would alert CBP whenever anyone hopped the fence. In 2010, the Government Accountability Office (GAO) determined the project did not “live up to expectations,” and SBInet was ultimately scrapped in 2011 after almost $1 billion in accrued expenses. The project failed due to poor management and technology that did not function as promised. SBInet serves as a cautionary tale about overreliance on technology, and a reminder that DHS and CBP must adequately assess how technology meets operational needs. Reviewing the project, the DHS Inspector General observed that “SBInet clearly illustrates that poorly defined and documented operational requirements, and failure to adequately plan, results in missed milestones and wasted resources.” There’s little reason to believe the agencies have learned this painful lesson.
In 2017, the DHS IG said CBP should learn from SBI while planning a number of acquisitions to secure the southern border: “because CBP lacks strong well-defined operational requirements and an overall strategy framework for securing the 2,000 miles of border, CBP may not properly focus and stabilize the direction of the acquisition.” Furthermore, CBP has not demonstrated an ability to measure the effectiveness of the technology it deploys. A March 2018 GAO report reviewing CBP’s technology deployment along one section of the border observed that “the Border Patrol has not yet used available data to determine the contribution of surveillance technologies to border security efforts.” CBP’s drone program is again an example of the agency’s failure to match technology with needs: the IG determined that drone surveillance assisted in fewer than 2% of CBP’s apprehensions at the border.
Congress needs to be smart about this “smart wall.” CBP’s history of grossly mismanaging technology projects, and its liberal use of surveillance tools beyond the physical border, caution against a hands-off approach. Any funding Congress provides to invasive border surveillance technologies should be conditioned on efficacy requirements and limitations on use that are designed to preserve the human and civil rights of those against whom they will be used.