From spotting weapons and detecting smoke to predicting where incidents are most likely to occur, artificial intelligence is reshaping the future of physical security. In this episode of Today in Tech, host Keith Shaw and guest James Benum explore how AI-powered cameras, drones, and large language models are improving response times, reducing false alarms, and helping security teams work smarter. They also discuss the balance between safety and privacy, the convergence of cyber and physical security, and why humans will always play a vital role in protecting people and property.
Keith Shaw: Integrating artificial intelligence into physical security processes is starting to happen, but it’s more than just using AI software to scan for potential threats. Will this technology help or hurt the way companies secure their physical assets? We’ll discuss that on today’s episode of Today in Tech.
Hi everybody, I’m Keith Shaw. Joining me on the show today is James Benum, Chief Strategy and Growth Officer at Trackforce. Welcome, James.
James Benum: Thanks, Keith. Happy to be here.
Keith: Here’s a quick question to start: When you think about the first time you saw AI in a science fiction movie or a crime show — something like RoboCop, Minority Report, or even 24 — what stuck with you about how computers or AI were being used for physical security?
James: My first reaction was, “This is fake — it’s made for TV. This will never become real.” But now, when you look at what’s out there, even the cost of humanoid robots, it’s clear things are changing.
Look at Tesla’s Optimus or Unitree robots for around $5,000 — those things are becoming real. Much of what we see on television eventually does become reality, either because people already imagine it or by coincidence. But originally, I thought it was fake.
Keith: Do you think we’re getting closer to that reality?
James: Absolutely.
You can’t deny it. If you look at the pace of world events — the tech bubble, the financial crisis, COVID — each event is larger and closer together. Things are happening much faster now, driven by technology and by how connected society is.
Keith: Before the show, we talked about my assumption that AI in physical security was just about facial recognition — like in 24 or Mission Impossible where they “enhance, enhance” a video feed and instantly recognize someone. But the reality is AI is being used in completely different ways.
What are you seeing in the space?
James: The well-established category is computer vision. Models trained on video can identify things like guns, fights, smoke, or loitering. They’re very accurate because they’ve been trained on vast amounts of data, long before ChatGPT and large language models. Traditionally, security staff had to sit and watch CCTV feeds 24/7.
Now, AI helps them focus only on exceptions. That’s a perfect use case — reacting to what matters. And with large language models smoothing out the context, you get fewer false positives and better decisions.
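The exception-based monitoring James describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual pipeline: detections below a confidence threshold are dropped, and a simple context list stands in for the LLM-style layer that suppresses known false positives. All names, thresholds, and data here are invented for the example.

```python
# Hypothetical sketch of exception-based monitoring: surface only the
# detections a human operator should review, instead of every frame.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "weapon", "smoke", "loitering"
    confidence: float  # model score between 0 and 1
    camera_id: str

ALERT_THRESHOLD = 0.85
# Known benign contexts (assumed): this camera watches a theater prop room,
# so "weapon" hits there are expected. A context-aware LLM layer could play
# this role more flexibly.
BENIGN_CONTEXT = {("weapon", "prop-storage-cam")}

def filter_exceptions(detections):
    """Return only the detections worth an operator's attention."""
    exceptions = []
    for d in detections:
        if d.confidence < ALERT_THRESHOLD:
            continue  # low-confidence hit: likely noise
        if (d.label, d.camera_id) in BENIGN_CONTEXT:
            continue  # contextual suppression of a known false positive
        exceptions.append(d)
    return exceptions

feed = [
    Detection("loitering", 0.40, "lobby-cam"),
    Detection("smoke", 0.92, "warehouse-cam"),
    Detection("weapon", 0.95, "prop-storage-cam"),
]
print([d.label for d in filter_exceptions(feed)])  # only "smoke" survives
```

The point of the sketch is the filtering, not the model: guards see one smoke alert rather than three raw detections, which is the false-positive reduction discussed above.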
Keith: Does that help humans in the loop react faster, or just reduce false alarms?
James: Both.
You’ll see fewer false positives and faster response times. The critical factor in real incidents is how quickly you can respond. Cameras can only do so much — you may still need to deploy humans, because humans are comforting and can de-escalate situations.
AI ensures guards respond to real threats instead of just watching feeds all day.
Keith: You also mentioned generative AI helping guards write reports more efficiently, right?
James: Exactly.
Most guards are typing on small mobile screens while focused on their surroundings. Large language models can clean up their reports, generate text from photos, and reduce their cognitive load. It also improves report quality, which matters to clients and compliance.
Keith: And that report data can be fed back into AI to improve processes?
James: Yes.
This industry has gone from pen-and-paper to mobile phones in every guard’s hand. Now, it’s all digitized. Each phone becomes a data collection device, producing indisputable records — photos, tours, checkpoints — that can feed predictive models.
Combine private data from guards with public data like crime stats, and you get powerful insights for prediction and prevention.
Keith: Are you seeing demand grow, even though fewer people are entering the physical security field?
James: Absolutely.
Demand is being driven by global conflict, social unrest, climate events, crime, and overstretched law enforcement. But the number of security firms has stayed flat — around 8,000 in the U.S. That widening supply-demand gap means technology and AI must step in.
Keith: Environmental monitoring also seems key — not just spotting guns or shoplifting, but smoke, fire, or floods.
James: Exactly.
With the proliferation of cameras — fixed, mobile, or drone-based — you can detect smoke, fire, or perimeter breaches. Drones with FAA waivers can patrol like guards on a vertical plane, collecting images and feeding them into AI for classification.
Keith: And drones can carry sensors too — like gas leak detectors or heat monitors.
James: Yes, and you can instrument humans as well. Body cams are common, but additional devices could provide more telemetry for safety and environmental monitoring.
Keith: But many firms are still behind the curve — using pen and paper?
James: Right.
Out of 8,000 firms, some remain small and haven’t scaled technology yet. Larger international players have more resources, but as tech gets cheaper, even smaller firms can adopt AI. That will be the real game-changer.
Keith: Could AI prediction models lead us toward a Minority Report world of preventing crime before it happens?
James: To some extent. Machine learning uses historical data to predict future probabilities. Public crime data plus private incident data can build stronger models to guide resource allocation — deploying guards where they’re most needed.
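The "public crime data plus private incident data" idea can be made concrete with a toy scoring model. This is an illustrative sketch only — the sites, counts, and weights are all invented assumptions, not a real predictive model — but it shows the resource-allocation logic: blend the two signals per site, then deploy guards to the highest-scoring locations first.

```python
# Hypothetical sketch: rank sites by blended risk from public crime stats
# and private guard-report counts. All data and weights are illustrative.
public_crime = {"site_a": 12, "site_b": 3, "site_c": 7}      # e.g. city stats
private_incidents = {"site_a": 4, "site_b": 9, "site_c": 1}  # internal reports

# Assumed weighting: internal, site-specific data counts for more.
PUBLIC_WEIGHT, PRIVATE_WEIGHT = 0.4, 0.6

def risk_scores(public, private):
    """Blend the two signals into a per-site risk score."""
    sites = set(public) | set(private)
    return {
        s: PUBLIC_WEIGHT * public.get(s, 0) + PRIVATE_WEIGHT * private.get(s, 0)
        for s in sites
    }

scores = risk_scores(public_crime, private_incidents)
# Staff the riskiest sites first.
priority = sorted(scores, key=scores.get, reverse=True)
print(priority)  # ['site_a', 'site_b', 'site_c']
```

A production model would of course learn weights from historical outcomes rather than hard-coding them; the sketch only captures the direction of the reasoning.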
Keith: How does physical security investment compare to cyber security?
James: Cyber gets far more funding, but physical security is just as critical. Bad actors may bypass the internet and walk through the front door using tailgating, unattended assets, or social engineering. That’s why physical and cyber need convergence.
Keith: Is convergence happening?
James: Large enterprises are making progress, connecting cyber and physical systems with APIs. Smaller firms lag behind, but integration is key to closing security gaps. For example, AI cameras detecting tailgating can connect with access systems to identify risks.
Keith: Just to clarify — tailgating here isn’t football, right?
James: Correct.
In security, tailgating means someone slipping in behind an authorized badge swipe. Cameras with AI can detect that anomaly, which employees often don’t report.
Keith: What advice do you have for companies looking to adopt AI in physical security?
James: First, get comfortable with AI by experimenting. Apply privacy-by-design principles and impact assessments, especially for sensitive areas like facial recognition or HR. Start small — optimize reports, then scale into broader integrations.
Keith: How big are concerns about bias and discrimination in AI deployments?
James: Everyone’s aware of it. Facial recognition and HR use cases raise concerns, but even simple use cases like report optimization need audit trails for compliance. Regulations like GDPR and the EU AI Act are setting standards that will shape deployment.
Keith: What are AI’s limitations — things humans still do better?
James: Presence and original thought. People feel safer seeing human guards, and humans are better at nuanced de-escalation. AI is trained on averages, not creativity. Guards still provide a psychological deterrent that machines can’t match.
Keith: How do frontline guards feel about AI entering their work?
James: Most see it as help, not threat. Standards bodies like ASIS are beginning to include AI knowledge in certifications. AI augments guards’ work, reduces risks, and makes their jobs safer.
Keith: So in the future, will job descriptions say “AI-assisted security guard”?
James: Maybe not explicitly, but AI will be embedded seamlessly. Younger workers expect it, and if done elegantly, they may not even notice.
Keith: Of course, some worry about living in a surveillance society. Where’s the line?
James: Education and regulation are key. Systems are designed to identify bad actors, not everyday people. Privacy laws require data deletion after set periods. If you’re a normal citizen, you should feel safer; if you’re a bad actor, you should be worried.
Keith: Have you seen real cases where AI prevented incidents or improved safety?
James: Definitely.
Many don’t make headlines, but examples include gun detection in school parking lots or early smoke detection in waste facilities. These interventions prevent escalation, even if the successes go unnoticed.
Keith: Great insights, James. Thanks for joining me today.
James: Thanks, Keith. Great to be here.
Keith: That’s it for this week’s show. Be sure to like, subscribe, and leave your thoughts below. Join us next time on Today in Tech. Thanks for watching.