I have my disagreements with Larry Keane, but this piece at NSSF conveys buried but recently released views by the administration on so-called “smart guns.”
He’s right on one count. As vice president, he did work with tech leaders in an attempt to incorporate authorized-user, or so-called “smart gun,” technology into firearms. It didn’t work. It never even got to the point where it could be properly tested.
[ … ]
Early on in the presidential campaign, President Biden claimed, “… we have the capacity now in a James Bond-style to make sure no one can pull a trigger unless their DNA and fingerprint is on it.” That’s some serious science-fiction fantasy technology. It makes for a good movie. In real life, it’s clumsy and failure-prone at best and impossible at worst.
The president’s campaign trail claim of DNA-enabled smart guns is completely false. No one has introduced technology that would match a DNA sample to activate a firearm. However, attempts have been made at fingerprint-style authorized-user technology. Think of the way a fingerprint is used to open a smartphone. Now, think of all the times a smartphone won’t open when a fingerprint is applied. A little wet, not the right angle, dirty, God forbid bloody… all of these can cause the fingerprint lock to fail to activate.
In a life-or-death situation, when an individual is under duress and trying to activate the tool that would save their life, fumbling with a fingerprint sensor is the last thing they should have to worry about. If your iPhone doesn’t open, you’re inconvenienced. If your firearm doesn’t work at the moment you need it, you could be dead. That’s why study and survey work on this topic shows that reliability is of paramount concern. Because the technology is not yet sufficiently reliable, there is very limited consumer interest in purchasing authorized-user equipped firearms.
[ … ]
Let me be explicitly clear: contrary to the false claims of gun control groups, the firearm industry does not oppose research into, and potential development of, this technology as applied to firearms. Consumers are best left to decide what they want, and the free market does a good job of weeding out bad ideas so good ones flourish. What NSSF strongly opposes, however, is a mandate of such technology, like the one recently proposed by U.S. Rep. Carolyn Maloney (D-N.Y.). She introduced H.R. 1008, legislation that would mandate that, within five years, every gun sold be equipped with the unworkable technology. It goes further. It would also require all legacy firearms to be retrofitted within 10 years. That’s sure to go over well with collectors.
Here Larry simply isn’t precise enough. I would love nothing more than for investors to throw their money away on this only to find out that no one wanted it.
What we must oppose, however, is government-sponsored (think: taxpayer-funded) research. But one of the real reasons behind such research wasn’t discussed.
A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout.
The big idea here is to stop mass shootings and other ethically incorrect uses of firearms through the development of an AI that can recognize intent, judge whether it’s an ethical use, and ultimately render a firearm inert if a user tries to ready it for improper fire.
That sounds like a lofty goal (in fact, the researchers themselves refer to it as a “blue sky” idea), but the technology to make it possible is already here.
According to the team’s research:
Predictably, some will object as follows: “The concept you introduce is attractive. But unfortunately it’s nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?” We answer in the affirmative, confidently.
The research goes on to explain how recent breakthroughs involving long-term studies have led to the development of various AI-powered reasoning systems that could serve to trivialize and implement a fairly simple ethical judgment system for firearms.
This paper doesn’t describe the creation of a smart gun itself, but the potential efficacy of an AI system that can make the same kinds of decisions for firearms users as, for example, cars that can lock out drivers if they can’t pass a breathalyzer.
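To make the proposal concrete, here is a minimal, entirely hypothetical sketch (in Python) of the decision loop such a lockout implies. None of the function names, intent categories, or rules below come from the paper; they are assumptions for illustration only.

```python
# A sketch of the decision loop the proposed lockout implies. Everything
# here is hypothetical; the paper proposes the concept, not this code.

def infer_intent(sensor_data: dict) -> str:
    """Stand-in for the AI intent classifier, which would have to decide
    'defense' vs. 'assault' from ambiguous, real-time sensor input."""
    return sensor_data.get("intent", "unknown")

def is_ethical(intent: str) -> bool:
    """Stand-in for the ethical-judgment module."""
    return intent in ("defense", "hunting", "sport")

def may_fire(sensor_data: dict, fail_open: bool = False) -> bool:
    """The interlock. The unavoidable design question: when the AI is
    uncertain, does the gun fail open (the lockout accomplishes nothing)
    or fail closed (the gun is inert when its lawful user needs it)?"""
    intent = infer_intent(sensor_data)
    if intent == "unknown":
        return fail_open
    return is_ethical(intent)

print(may_fire({"intent": "defense"}))  # True
print(may_fire({}))                     # False under the fail-closed default
```

Note the choice baked into may_fire: fail open and the lockout is pointless; fail closed and the firearm is inert precisely when its lawful user is under duress.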
This just gets better and better. As I’ve said before, “Perform a fault tree analysis of smart guns. Use highly respected guidance like the NRC fault tree handbook.
Assess the reliability of one of my semi-automatic handguns as the first state point, and then add smart gun technology to it, and assess it again. Compare the state points. Then do that again with a revolver. Be honest. Assign a failure probability of greater than zero (0) to the smart technology, because you know that each additional electronic and mechanical component has a failure probability of greater than zero.
Get a PE to seal the work to demonstrate thorough and independent review. If you can prove that so-called “smart guns” are as reliable as my guns, I’ll pour ketchup on my hard hat, eat it, and post video for everyone to see. If you lose, you buy me the gun of my choice. No one will take the challenge because you will lose that challenge. I’ll win. Case closed. End of discussion.”
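To illustrate what that comparison looks like, here is a minimal sketch of the series-reliability arithmetic behind such a fault tree. Every probability below is a made-up assumption for illustration, not measured data:

```python
# A minimal series-reliability sketch of the challenge above. Every
# probability here is an illustrative assumption, not measured data.

def p_failure_series(component_failure_probs):
    """Probability that at least one component in a series system fails.

    In a fault tree this is an OR gate over independent component
    failures: the gun fires only if every component works.
    """
    p_all_work = 1.0
    for p in component_failure_probs:
        p_all_work *= (1.0 - p)
    return 1.0 - p_all_work

# State point 1: a quality semi-automatic handgun. Assume (for
# illustration) a 1-in-1,000 chance of failure per trigger pull.
baseline = [0.001]

# State point 2: the same handgun plus authorized-user components,
# each with an assumed per-use failure probability.
smart_gun = baseline + [
    0.02,   # fingerprint sensor false reject (wet, dirty, off-angle)
    0.005,  # battery and power management
    0.002,  # electronics and firmware
    0.002,  # electromechanical interlock on the firing mechanism
]

print(f"baseline:  P(fail) = {p_failure_series(baseline):.4f}")   # 0.0010
print(f"smart gun: P(fail) = {p_failure_series(smart_gun):.4f}")  # ~0.0298
```

Even with generously low assumed failure probabilities for the added components, the modeled chance of a failure on demand goes from 1 in 1,000 to roughly 1 in 34.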
Now, consider the superimposition of an AI ethical lockout on top of all of the other failure modes introduced by this “technology” (I use the word loosely, because improved technology should make a machine simpler and less prone to failure, not more complex and more failure-prone).
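Extending the same illustrative model, put the AI lockout in series with everything else. The false-lockout figure is again an assumption, and an optimistic one for any classifier judging ambiguous, high-stress input:

```python
# Extending the same illustrative series model with an AI ethical-lockout
# stage. All numbers remain assumptions for illustration only.

p_works_smart = 1.0
for p in [0.001, 0.02, 0.005, 0.002, 0.002]:
    p_works_smart *= (1.0 - p)

p_ai_false_lockout = 0.01  # lawful defensive use misjudged "unethical"

p_fail = 1.0 - p_works_smart * (1.0 - p_ai_false_lockout)
print(f"smart gun + AI lockout: P(fail) = {p_fail:.4f}")  # ~0.0395
```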
Also, as I’ve observed before, the desire to control others is the signal pathology of the wicked. In the instance of smart guns, the control is simply exercised remotely rather than only at the point of purchase.