Last summer, U.S. Immigration and Customs Enforcement (ICE) announced that it was looking for machine-learning tools to continuously monitor U.S. visitors’ and immigrants’ social media posts and predict who should be allowed into the U.S. and who should be deported. Since then, CDT has worked alongside other advocates—including the Brennan Center for Justice, Georgetown Law’s Center on Privacy & Technology, and many, many others—to stop this dangerous and discriminatory program from going forward. After some searching, ICE has discovered what advocates have been saying all along: no software exists that can make these predictions. Any software built to make them would violate civil and human rights, chill speech, and introduce arbitrary and discriminatory criteria into the immigration vetting process. For now, ICE appears to be backing off its pursuit of this technology and will instead hire analysts to expand vetting using “current technological capabilities.”
A lot of questions remain about ICE’s new “Visa Lifecycle Vetting” initiative. For example, ICE previously said that it wanted tools to predict visa applicants’ “probability of becoming a positively contributing member of society as well as their ability to contribute to the national interests.” It’s unclear whether ICE analysts will evaluate applicants based on these vague, highly subjective standards, which are not derived from immigration laws (the language comes from the Trump Administration’s January 27, 2017 executive order known as the original “Muslim ban”). Such an amorphous standard could give analysts too much discretion to rely on arbitrary or discriminatory criteria, such as whether a visa applicant has criticized government policies or how much money she makes.
ICE’s former plan also involved a quota system, requiring the automatic generation of at least 10,000 leads per year for deportation and visa denial. According to reporting by the Washington Post’s Drew Harwell and Nick Miroff, ICE analysts won’t be held to a strict quota, but ICE reportedly still has a goal of continuously monitoring 10,000 people per year deemed “risky.” Again, it’s unclear what standards or criteria determine who is risky, or how ICE came up with that number. These types of quotas or “goals” can create perverse incentives for agents to over-flag people for exclusion from the U.S.
ICE also has not clarified how these analysts will monitor or evaluate social media and other online information (such as blog posts and academic websites). Will people’s social media posts be used as evidence that they are a security risk? Depending on how these determinations are made, social media surveillance may generate more false positives rather than improve ICE investigations. Social media information is often untrustworthy (e.g., sarcasm, parody accounts, fake biographical information) and difficult to verify.
ICE says it is only looking at “publicly available information,” but will the agency have special access to social media platforms beyond what the average user would have? In July 2017 conversations with industry, ICE indicated that it was looking for “workaround[s]” to circumvent technical limitations on accessing information from social media platforms and other websites (this likely refers to “scraping” information in spite of websites’ API restrictions against doing so).
We have a lot more questions about the government’s plans to continuously monitor U.S. visitors and immigrants, and we will continue to push the administration to answer these questions. We also have no guarantee that ICE—or any other agency—won’t use unreliable and discriminatory machine-learning tools to do automated vetting in the future.
Even if ICE backs off of this procurement plan, announcing such an ill-conceived program in the first place poses a serious threat to human rights and public trust in government. ICE has attempted to couch its industry day and accompanying documents as exploratory, and not necessarily reflective of the agency’s policy. But when an agency publicly announces its intent to solicit dangerous and discriminatory-by-design technologies, it incentivizes industry to develop these tools. Vendors could then attempt to sell these tools to other agencies—perhaps state and local governments—if ICE doesn’t want them. Regardless of whether ICE moves forward with this initiative, the threat to immigrant communities remains.
Accountability measures should stop these types of irresponsible procurement practices. But the onus is also on industry to push back when agencies pursue technologies and systems that don’t work and that threaten civil and human rights. Potential vendors and other influential companies are best positioned to correct technical oversights and to refuse to let their systems be used for discriminatory enforcement of laws, privacy violations, and speech-chilling activities.