Lying, corrupt, anti-American cops are running amok with AI

Posted: July 29, 2021 at 8:55 pm

Hundreds of thousands of law enforcement agents in the US have the authority to use black-box AI to conduct unethical surveillance, generate evidence, and circumvent our Fourth Amendment protections. And there's little reason to believe anyone's going to do anything about it.

The problem is that black-box AI systems are a goldmine for startups, big tech, and politicians. And, since the general public is ignorant about what these systems do or how they're being used, law enforcement agencies have carte blanche to do whatever they want.

Let's start with the individual officers.

Any cop, regardless of affiliation or status, has access to dozens (if not hundreds) of third-party AI systems.

When I mention an AI system, you may be imagining a server with a bunch of blinking lights or a well-dressed civilian leaning over a console with half a dozen monitors.

But I'm talking about an Android or iPhone app that officers and agents can use without their supervisors even knowing.

A cop installs software from a company such as Clearview AI on their personal smartphone. This allows them to take a picture of anyone and surface their identity. The cop then runs the identity through an app from a company such as Palantir, which surfaces a cornucopia of information on the individual.

So, without a warrant, Officer Friendly now has access to your phone carrier, ISP, and email records. They have access to your medical and mental health records, military service history, court records, legal records, travel history, and your property records. And it's as easy to use as Netflix or Spotify.
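
To make the workflow concrete, here's a minimal sketch in Python of the two-step lookup described above. The function names and returned fields are invented stand-ins, not any real vendor's API; the point is how short the chain is.

```python
# A minimal sketch of the two-step lookup described above. These
# function names and fields are invented stand-ins, NOT any real
# vendor's API; they illustrate the shape of the workflow only.

def identify_person(photo_path: str) -> str:
    """Stand-in for a face-search app: photo in, identity out."""
    return "Jane Doe"  # a real service would query a face database

def pull_records(identity: str) -> dict:
    """Stand-in for a data-aggregation app: identity in, dossier out."""
    return {
        "carrier": "...", "isp": "...", "email": "...",
        "medical": "...", "court": "...", "travel": "...",
    }

# The whole "investigation" is two calls on a personal phone,
# with no warrant and no supervisor in the loop.
dossier = pull_records(identify_person("suspect.jpg"))
print(dossier)
```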

Best of all, at least for the corrupt cops using these systems unethically, there's no oversight whatsoever. Vendors often offer these systems to officers directly as free trials, so a cop can try them out before deciding whether to ask their department to adopt them at scale.

Officers use these systems because they make their jobs much easier. They allow a police officer to skip the warrant process and act as their own judge.

Law enforcement agencies around the country spend billions on AI services every year, many of which are scams or surveillance tools. These include facial recognition systems that don't work for Black faces, predictive-policing systems that allow cops to blame the over-policing of poor minority communities on the algorithm, and niche services whose only purpose is generating evidence.

Predictive policing is among the most common unethical AI applications in law enforcement. These systems are basically snake-oil scams that claim to use data to determine where crimes are going to happen. But, as we all know, you can't predict when or where a crime will happen. All you can do is determine, historically, where police tend to arrest the most people.

What predictive policing systems actually do is give the police a scapegoat for over-policing minority and poor communities. The bottom line is that you cannot, mathematically speaking, draw inferences from data that doesn't exist. And there is no data on future crime.
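
A toy simulation makes the circularity plain. The districts and numbers below are invented for illustration; the "prediction" is nothing more than a ranking of past arrest counts, and acting on it manufactures the very data that confirms it.

```python
# A toy feedback loop. Districts and counts are invented for
# illustration. The "prediction" is just a ranking of past arrest
# counts, and acting on it generates the data that confirms it.

arrests = {"District A": 120, "District B": 30, "District C": 25}

for week in range(1, 6):
    # "Predict" crime by ranking districts on historical arrests;
    # there is no data on future crime, only on past policing.
    hotspot = max(arrests, key=arrests.get)
    # Extra patrols in the flagged district produce extra arrests there.
    arrests[hotspot] += 10
    print(f"Week {week}: patrol {hotspot}, arrests now {arrests}")

# District A gets flagged every week, not because more crime happens
# there, but because more arrests were recorded there to begin with.
```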

Anyone who says these systems can predict crime is obviously operating on faith alone, because nobody can explain why a black-box system generates the output it does, not even the developers who created it.

Simply put: any time an AI system used by law enforcement can, in any way, affect an outcome for a human, it's probably harmful.

Vice published an article today detailing the Chicago Police Department's use of ShotSpotter, an AI system purported to detect gunshots.

According to the company, it can detect gunshots in large areas with up to 95% accuracy. But in court, the company claims that's just a marketing guarantee and that background noise can affect the accuracy of any reading.

Which means it's a black-box system that nobody can explain, and no legal department will defend.
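
Even taking the 95% figure at face value, a back-of-the-envelope Bayes calculation shows how little it says about any individual alert. The base rate below is an assumption invented purely for illustration, not a measured number.

```python
# Back-of-the-envelope Bayes calculation. The 95% figure is the
# company's marketing claim; the base rate below is an invented
# assumption, purely for illustration. Suppose 1 in 1,000 loud
# impulses in a coverage area is actually a gunshot, and the
# detector is "95% accurate" in both directions.

sensitivity = 0.95      # P(alert | gunshot)
false_positive = 0.05   # P(alert | no gunshot)
base_rate = 0.001       # P(gunshot) among loud impulses (assumed)

p_alert = sensitivity * base_rate + false_positive * (1 - base_rate)
p_gunshot_given_alert = sensitivity * base_rate / p_alert

print(f"P(gunshot | alert) = {p_gunshot_given_alert:.1%}")  # about 1.9%
# Under these assumptions, the overwhelming majority of alerts are
# false alarms, which is why a headline accuracy number means little
# as courtroom evidence.
```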

Vice reports that police instructed ShotSpotter employees to alter evidence to make it appear as though the system detected gunshots it didn't in several cases. In one, the police had an employee change the location of a detection to reflect the location of a crime. And in another, they had an employee change the designation "fireworks" to "gunshot" in order to facilitate an arrest.

When challenged in court, prosecutors merely withdrew the evidence. That's it. To the best of our knowledge, nobody was arrested or indicted.

The problem here isn't that ShotSpotter doesn't work (although, if you have to use the Tucker Carlson defense in court, it probably doesn't). It's that, even if it did work, it serves absolutely no purpose.

Have you ever heard a firearm discharge? They're loud. They don't go undetected if there are people around, and a gun fired in any given area of Chicago would be heard by tens of thousands of people.

And people, unlike black-box algorithms, can testify in court. They can describe what they heard, when they heard it, and explain to a jury why they thought what they heard was or was not a gunshot.

If witnesses say they heard a gunshot because prosecutors told them to, and then admit in court that they lied, that's called perjury and it's a crime. We can hold people accountable.

There's so much unethical cop AI because it's incredibly profitable. The startups and big tech outfits selling the AI are getting paid billions by taxpayers who either don't care or don't understand what's going on.

The politicians authorizing the payouts are raking in money from lobbyists. And the cops using it can ignore our Constitutional rights at their leisure with absolutely no fear of reprisal. It's a perfect storm of ignorance, corruption, and capitalism.

And its only going to get worse.

The US founding fathers, much like AI, could not predict the future. When they drafted the Second Amendment, for example, they had no way of knowing that hundreds of thousands of heavily armed government agents would one day patrol our communities around the clock, thus making our right to keep and bear arms a moot form of protection against tyranny.

And now the same has happened to our Fourth Amendment rights. When our private information was locked away in filing cabinets and the only way to access it was with a judge's signature on a search warrant, our right to privacy was at least somewhat safeguarded against corruption.

Now those protections are gone. You don't need a predictive algorithm to understand, historically speaking, what happens next.
