It is well known that individual police officers can have personal biases. Say that an officer is prejudiced against people of a certain race or ethnic background. That officer may be more likely to arrest people of that race, or to assume they are criminals when they have done nothing wrong.
Predictive policing, by contrast, uses computers to analyze data such as historical arrest records and reports of criminal activity. The computer then predicts where crime is likely to occur and helps dispatch police officers in advance, sometimes preventing criminal activity or leading to a prompt arrest.
Since the system is run by a computer that simply analyzes data, it might seem immune to bias. However, some researchers argue that it is still biased, and that it should not be used. How does bias creep into an automated system?
Limited data options
The issue is that the algorithm can only draw on a limited range of data sources, and much of that data is generated by police officers themselves: it records where officers chose to patrol and whom they chose to stop or arrest, not where crime actually occurred.
In other words, if officers are prejudiced against people of a certain race and arrest them more often, the algorithm will learn that those individuals and their neighborhoods are more closely associated with crime. It then sends more patrols there, those patrols produce more recorded arrests, and the new arrests feed into the next round of predictions. The result reflects and amplifies the initial bias, which is why critics argue that predictive policing can be racist even though the computer has no concept of race itself.
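The feedback loop is easier to see in a toy simulation. The sketch below is purely illustrative; the neighborhood labels, the TRUE_CRIME_RATE constant, and the patrol-allocation rule are made up for this example and are not taken from any real predictive policing product. Two neighborhoods have identical underlying crime rates, but one starts with more recorded arrests. Because patrols are allocated in proportion to past arrests, the gap in the record keeps widening and the system never corrects itself toward the equal underlying rates.

```python
import random

random.seed(0)

# Assumption for this sketch: both neighborhoods have IDENTICAL underlying
# crime rates, so any difference in the record comes from patrol patterns.
TRUE_CRIME_RATE = 0.05   # chance a patrol encounters a crime, per patrol
TOTAL_PATROLS = 100      # patrols dispatched each week

# Historical arrest counts: neighborhood A starts with more recorded
# arrests only because it was patrolled more heavily in the past.
recorded_arrests = {"A": 30, "B": 10}

for week in range(1, 11):
    total = sum(recorded_arrests.values())
    # The "predictive" step: send patrols in proportion to past arrests.
    patrols = {
        hood: round(TOTAL_PATROLS * count / total)
        for hood, count in recorded_arrests.items()
    }
    # More patrols in a neighborhood means more crimes are observed there,
    # even though the underlying rate is the same in both places.
    for hood, n_patrols in patrols.items():
        new_arrests = sum(
            1 for _ in range(n_patrols) if random.random() < TRUE_CRIME_RATE
        )
        recorded_arrests[hood] += new_arrests
    print(f"week {week:2d}: patrols={patrols} arrests={recorded_arrests}")
```

Note that this code never sees race or any other demographic attribute; it only sees arrest counts. Yet it faithfully reproduces whatever pattern those counts already encode, which is the core of the critics' argument.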
Legal defense options
Technology raises a lot of questions when it comes to police procedures and operations. Those who have been arrested or are facing charges must be aware of their legal options.