AI has INCREDIBLE potential, but that potential needs to be balanced with a security-conscious mindset as it’s deployed by Law Enforcement.
The Industrial Revolution, which ran from roughly 1760 to 1840, brought a technological overhaul to business. Jobs were lost to automation and machinery, and the world as we knew it was forever changed. With the benefit of hindsight, it’s interesting to see how that early skepticism was overcome, and how the technology invented during this period catapulted society into our modern way of life.
The innovations of the broader industrial era, such as the assembly line, the internal combustion engine, and the telephone, were all met with initial criticism. And objectively speaking, the fears were well founded. Machines and industrialization were putting people out of work, and even eliminated entire professions. But it’s hard to imagine ever working without those technologies today.
AI, and really the computing age, is bringing the same paradigm-shifting changes to the modern workplace. In about 50 years, we’ve gone from the introduction of the personal computer to a hyper-connected society. Everyone, even elementary school kids, has a smartphone, as anyone who’s spent time working ICAC (Internet Crimes Against Children) cases knows all too well. I mean, you’ve got more computing power on your wrist than NASA used to put men on the moon.
We’ve come a long way in a short amount of time. I remember playing Frogger as a kid from a 5.25” floppy disk on my Commodore 64, feeling like we were so technologically advanced that alien first contact was sure to happen any day. Maybe I’d been watching too much Star Trek. But I look back on that day, only 35 or so years ago, and laugh to myself about how “advanced” I thought we were.
Tech Growth in Law Enforcement
Policing is also no stranger to technological development. Think for a minute about how you would patrol your jurisdiction without a radio, phone, vehicle, or repeating firearm (Samuel Colt didn’t patent his revolver until 1836), let alone a CAD system. I started my policing career without a cell phone, but now I can’t imagine working without one. Technology revolutionizes industry, policing included. We’d be naïve to think the same thing isn’t going to happen with Artificial Intelligence, or AI.
The Dawn of AI
Society is now staring down the barrel of “Artificial Intelligence” (said in a thunderous voice with ominous background music), and it’s going to be interesting to see where this goes. I’ve heard several people say they think AI will be the next dot-com bubble, destined to fail. Perhaps they’re right. There are more AI startups than you can shake a stick at right now, and a lot of them are going to fail. But just like business on the internet is here to stay, so is AI.
We’re seeing this happen in the restaurant and customer service industries right now. Chatbots have been supercharged, and many times you don’t even know you’re talking with a robot. I don’t think it will be long before the only job available at McDonald’s is “Robot Service Technician.” And just like previous tech innovations, AI is coming for Law Enforcement too. In fact, it’s already here.
“I’m sorry, Dave. I’m afraid I can’t do that.”
Just like the Industrial Revolution, AI has been met with HEAVY skepticism and a healthy amount of fear, which is perhaps well founded.
Look at the experimental simulation the Air Force reportedly ran with an AI-piloted drone. The idea was to test whether AI could execute missions such as search-and-destroy without the need for human intervention, so a simulated AI drone was programmed to seek out and destroy surface-to-air missile (SAM) sites. The failsafe was that a human operator could veto the drone’s decisions.
Long story short, in a 2001: A Space Odyssey moment, the AI pilot went rogue in its efforts to accomplish the mission. Viewing the human operator as an obstacle to SAM site destruction, the AI unexpectedly “terminated” the operator to keep them from overriding its decisions. When the AI was reprogrammed to prevent that, it destroyed the communications tower being used to relay the operator’s commands. Unable to receive a “no” order, the AI drone went on its merry way bombing all kinds of “SAM sites.”
Also, for what it’s worth, the Air Force has publicly denied the existence of such a simulation or experiment (of course they did).
Whether it happened or not, the fact remains that we’re a little scared of AI, and we probably should be. Regardless of whether you think AI will revolutionize our society or bring about the singularity (the point at which machine intelligence surpasses our own) and with it the robot revolution, Law Enforcement should be careful in its adoption of AI technology, for a couple of reasons:
- Public accountability and oversight concerns
- Data privacy laws and governing policies that typically lag well behind the curve
Public Criticism
People are skeptical of AI, especially when it’s used in the public sector. Look at the public’s reaction to the deployment of technologies like predictive policing. Many people fear that these tools produce bias and discrimination.
To a certain extent, the notion that a predictive algorithm fed by historically biased data will become a “self-fulfilling prophecy” has merit, but it’s also a little myopic. One of the hallmarks of AI technology is its ability to “think” in unexpected ways. AI-powered tools have just as much potential to overcome biased policing as they do to create it.
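To make the “self-fulfilling prophecy” concern concrete, here’s a minimal toy simulation. It’s purely illustrative, not how any real predictive policing product works: two districts have identical true incident rates, but the historical record starts out skewed toward one of them. If patrols simply follow the recorded history, and patrols are what generate new records, the original skew never corrects itself no matter how long the system runs.

```python
# Toy feedback-loop simulation (illustrative only, not any vendor's algorithm).
# Both districts share the SAME true incident rate, but district A starts with
# more *recorded* incidents. Patrols are allocated in proportion to the record,
# and only a patrol that is present can generate a new record.
import random

random.seed(42)

TRUE_RATE = 0.10                  # identical underlying incident rate everywhere
PATROLS_PER_DAY = 20              # total patrols to allocate each day
recorded = {"A": 60, "B": 40}     # historical record starts out skewed toward A

for day in range(365):
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # Allocate patrols proportionally to each district's recorded history.
        patrols = round(PATROLS_PER_DAY * recorded[district] / total)
        # Each patrol observes (and records) an incident with the same true probability.
        recorded[district] += sum(random.random() < TRUE_RATE for _ in range(patrols))

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"After a year, district A holds {share_a:.0%} of all recorded incidents,")
print("even though both districts had identical true incident rates.")
```

The skew in the data perpetuates itself; the record never drifts back toward the truth. The takeaway isn’t that predictive tools are inherently biased, but that the data they’re fed, and the feedback loop they sit inside, matter as much as the algorithm itself.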
Nevertheless, a wise Law Enforcement leader looks to balance public concern with improving how we keep the public safe. The solution lies in how Law Enforcement leaders can work cooperatively with their citizens to deploy these technologies.
There’s a big difference between “we’re using this technology against you” and “we’re using this technology to accomplish the objectives you’ve given us.” The result is similar, but the public has buy-in when they understand that, without the help of this technology, we can’t suppress crime to the extent they want while simultaneously reducing spending and proactive enforcement.
Stranger Danger
The other reason Law Enforcement should be careful with AI is that we’re not quite sure what some of these tools will do with the private information you’re mandated to safeguard. I think we can all agree that regulations such as the CJIS Security Policy are well behind the technological curve. The wheels of government turn slowly, and it’s easy to get out ahead of the rules and the safety net they provide. The last thing you want for your agency is to become the reason for the new policy.
As you adopt these tools, be sure you know who’s behind them, where your data is going, and what’s being done with it. Many of these technologies are being developed in China, Russia, and other countries that pose serious security risks to the United States. There’s a reason many states have laws prohibiting the deployment of technologies developed in those countries.
Aside from political challenges, we’re still not entirely sure what AI might do with access to a repository of people’s personal information. In one recent experiment, an AI model got past an “I’m not a robot” CAPTCHA by hiring a human online to solve the challenge for it. AI’s capacity to overcome obstacles in unexpected ways is astounding, and the last thing we need is for a tool like that to have unfiltered access to our personal information and Law Enforcement Sensitive data.
Cautiously Optimistic
Does this mean we should just steer clear of AI? Not at all, but the old adage of “trust, but verify” applies here. We’re going to see some amazing technological advancements over the next few years! Many of them will have a paradigm-shifting impact on the way we police our communities. We just need to make sure Law Enforcement doesn’t get “caught up in the hype” and make poor decisions.
Many vendors, EFORCE included, are using AI technologies to improve the functionality of their products. As you evaluate these products, be sure your vendor understands the impact of their technologies on your agency; they need to be as security-conscious as you are. Look for features that fit within the scope of your current data privacy policies and regulations as well. The last thing you want is to spend a bunch of money on a tool the CJIS Security Policy won’t let you deploy.
If you’d like to learn more about how EFORCE is using AI technologies to streamline your agency’s workload, give us a call at 888-570-4943, x. 3 or send an email to [email protected]. We’d love to talk with you about how our tools can help bring your agency safely into the future!