Allegations surfaced last week indicating that the Israel Defense Forces (IDF) are choosing targets in the assault against Hamas in Gaza by means of an artificial intelligence (AI) system named Habsora, Hebrew for “The Gospel.” The system has reportedly been used to identify additional targets for bombing, to link locations to Hamas operatives, and to estimate in advance the number of civilian deaths likely to result.

What is the significance of using AI targeting systems in combat? My research on the ethical, social, and political ramifications of the military’s use of remote and autonomous systems suggests that AI is already changing the nature of war. Militaries deploy autonomous and remote systems as “force multipliers” to boost the effectiveness of their forces and safeguard the lives of their personnel. Even as humans become less visible on the battlefield, gathering intelligence and striking targets from a distance, AI technologies can increase a military’s efficiency and are likely to accelerate combat and escalate its lethality.

Will the present ethical framework for war persist when militaries can kill at will while posing minimal risk to their own soldiers? And could the growing use of AI further dehumanize adversaries and widen the distance between the societies that wage wars and the battles themselves?