I wish ‘Judeo-Christian norms’ were enough to ensure ethical AI warfare. They’re not.
What will we do when AI goes to war?
After all, a future of “killer robots” isn’t far off. We already have unmanned aircraft: the US pioneered drone warfare in the Middle East in the post-9/11 era, and subsequent conflicts, including Russia’s invasion of Ukraine, have led other countries and combatants to get in on the action. The technology needed to make drones, drone swarms, and other weapons operate autonomously is in active development or, more likely, already exists. The question isn’t whether we’ll soon be able to bomb by algorithm, but whether we’ll judge it good and right.
That’s the future Lt. Gen. Richard G. Moore Jr., deputy chief of staff for plans and programs of the US Air Force, was considering when he made widely reported comments about ethics in AI warfare at a Hudson Institute event last week. While America’s adversaries may use AI unethically, Moore said, the United States would be constrained by a foundation of faith.
“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” he argued, “and there are those that are willing to go for the ends regardless of what means have to be employed.”
Mainstream headlines about Moore’s remarks were subtly but unmistakably skeptical. “‘Judeo-Christian’ roots will ensure U.S. military AI is used ethically, general says,” read a representative example at The Washington Post, to which Moore sent a statement clarifying that his aim was merely “to explain that the Air Force is not going to allow AI to take actions, nor are we going to take actions on information provided by AI unless we can ...
from Christianity Today Magazine