Algorithmic Blind Spots: Operational Risks of Gender Bias in Indonesia’s Defense AI

Vol. VII / No. 1 | February 2026

Authors:

Wendy Prajuli – Lecturer at the Department of International Relations, Binus University
Cynthia Sipahutar – Lecturer at the Department of International Relations, Binus University; Doctoral Student in the Department of International Relations, Universitas Indonesia
Curie Maharani – Lecturer at the Department of International Relations, Binus University

 

Summary

Artificial intelligence (AI) is increasingly integrated into military systems, reshaping defense architectures and operational practices worldwide. This article argues that as Indonesia integrates AI into its defense architecture, from C4ISR systems to recruitment, it risks embedding significant gender biases. Drawing on global research by UNESCO and the Berkeley Haas Center for Equity, Gender & Leadership, and on an analysis of Indonesian policy documents (Stranas KA, Jakumhanneg), the authors contend that a “governance gap” exists. They warn that failing to address bias in datasets and algorithms could lead to operational failures (e.g., misidentifying female combatants) and normative harms (e.g., reinforcing militarized masculinity).

Keywords: Artificial Intelligence, AI Defense, Indonesia, Gender, Security.

Artificial intelligence (AI) has become an integral component of contemporary life, with applications ranging from ride-hailing and navigation services to facial recognition. In the defense sector, AI has evolved from conceptual speculation to operational reality, as evidenced by the progression of defense systems from C2 to C4ISR, C5ISR, and C6ISR architectures. The deployment of precision-guided munitions exemplifies the prominence of AI-enabled capabilities.

The rapid expansion of AI in military applications has prompted many states, including Indonesia, to pursue the development and acquisition of AI-based defense systems. In Indonesia, research and development in this field have begun, but progress remains constrained by budgetary limitations and competing priorities. Despite the growing ubiquity of AI across multiple sectors, there is limited recognition that AI technologies are not inherently gender-neutral.

A growing body of scholarship demonstrates that AI systems can replicate and amplify gender biases because they are created, trained, and deployed by humans whose perspectives are shaped by prevailing cultural norms. How cultural norms amplify the gender gap in AI can be seen in research by Tunjungbiru et al. Their study shows that 74.24% of Indonesian women lack AI literacy, with only 12.12% at the basic level and 13.64% at the advanced level; among Indonesian men, 60% lack AI literacy, 21.54% have basic AI literacy, and 18.46% have advanced AI literacy. Recognizing and addressing these biases is essential if Indonesia is to develop and adopt AI-based defense systems that are both operationally effective and socially equitable.

 

AI and Gender Bias: Evidence from Global Research

Evidence from global research underscores the persistence of gender bias in AI. A UNESCO study analyzing three AI-based applications (GPT, ChatGPT, and LLaMA) found that these systems tended to associate women with the domestic sphere, including home, family, children, and marriage, while linking men to business, executive leadership, careers, and salaries. The same study also revealed negative bias against homosexuality. Comparable findings emerged from research by the Berkeley Haas Center for Equity, Gender & Leadership, which examined 133 AI systems across industries and found that 44 percent exhibited gender bias, while 25 percent exhibited both gender and racial bias.

According to UNESCO, gender bias in AI stems from three primary sources: bias in datasets, bias in algorithm selection, and bias in implementation. Dataset bias arises when training data lack variation and adequate representation. Algorithmic bias occurs when modeling processes fail to account for diversity. Implementation bias arises when AI systems are applied beyond their original context or adjusted in response to user feedback without accounting for demographic diversity.
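The dataset-bias mechanism described above can be illustrated with a minimal sketch. The corpus, words, and counts below are invented for illustration only; the point is that even a trivial model trained on skewed data faithfully reproduces the skew, mirroring the associations the UNESCO study reported.

```python
from collections import Counter

# Hypothetical toy corpus, deliberately skewed the way UNESCO found
# real training data to be (women ~ domestic sphere, men ~ careers).
corpus = [
    "the woman stayed home with the children",
    "the woman cared for her family",
    "the man led the business meeting",
    "the man negotiated his salary",
    "the woman cooked dinner for the family",
    "the man advanced his career",
]

def cooccurrence(target):
    """Count words appearing in sentences that contain `target`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

woman_assoc = cooccurrence("woman")
man_assoc = cooccurrence("man")

# The skew of the corpus resurfaces as model "knowledge": domestic
# terms attach only to "woman", career terms only to "man".
print(woman_assoc["family"], man_assoc["family"])  # 2 0
print(woman_assoc["career"], man_assoc["career"])  # 0 1
```

Nothing in the counting procedure itself is gendered; the bias enters entirely through the unrepresentative data, which is why dataset curation is the first of UNESCO's three sources.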

The underrepresentation of women in AI development compounds these problems. Research by Interface indicates that women constitute only 22 percent of the global AI workforce, with even lower representation at senior levels. This lack of diversity not only limits perspectives in the design and deployment of AI systems but also perpetuates the risk of embedding existing social inequalities into technological systems.

 

Potential Gender Bias in Defense AI

In defense applications, gender bias can originate from both technological and normative sources. Technological and data-related biases arise when skewed datasets are used in AI training, resulting in algorithms that systematically disadvantage certain genders. Since humans are responsible for selecting and labeling training data, any pre-existing gender biases they hold can be reproduced in AI outputs. This can have significant implications, such as in AI-assisted military recruitment, where biased algorithms could reject non-male candidates by deeming them unfit based on gendered standards for height or physical composition. Such practices can significantly restrict women’s access to military careers. These risks are not hypothetical; the United Kingdom has already begun integrating AI into its military recruitment processes.
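The recruitment scenario above can be sketched in a few lines. All names and numbers here are illustrative assumptions, not a real screening system: a rule fitted to a historical, male-dominated cohort of hires inherits a height cutoff that then rejects otherwise qualified female candidates.

```python
# Hypothetical sketch of bias in AI-assisted recruitment screening.
# Historical hires (cm) from a mostly male cohort; values are invented.
historical_hires_cm = [172, 175, 178, 180, 170, 174]

# "Learned" rule: accept anyone at or above the minimum height
# observed among past hires.
min_height = min(historical_hires_cm)  # 170 cm

def screen(candidate_height_cm):
    """Return True if the candidate passes the inherited cutoff."""
    return candidate_height_cm >= min_height

applicants = {
    "candidate_a": 181,  # height typical of the historical cohort
    "candidate_b": 160,  # typical female height, fully fit for the role
}

results = {name: screen(h) for name, h in applicants.items()}
print(results)  # candidate_b is rejected purely by the inherited cutoff
```

The rejection is not an explicit gender rule; it is a gendered standard (height) laundered through historical data, which is exactly how such systems can restrict women's access to military careers.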

Operationally, biased algorithms may also jeopardize mission success. In reconnaissance-strike operations, for example, if AI systems are trained solely on male combatant profiles, they may fail to recognize female combatants deployed by an adversary. This could result in misclassification of threats, targeting errors, or mission failure.
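The operational failure mode described above can be made concrete with a toy detector. The features, numbers, and matching rule below are illustrative assumptions, not any real targeting system: a recognizer trained only on male combatant profiles simply never matches profiles outside that distribution.

```python
# Hypothetical sketch: a threat recognizer whose training data contain
# only male combatant profiles. Each profile is (height_cm, stride_cm);
# all values are invented for illustration.
male_combatant_training = [(175, 78), (180, 82), (172, 76), (178, 80)]

def is_recognized_combatant(profile, tolerance=6):
    """Flag a profile only if it lies near some training example."""
    h, s = profile
    return any(abs(h - th) <= tolerance and abs(s - ts) <= tolerance
               for th, ts in male_combatant_training)

# A profile resembling the training data is flagged correctly...
print(is_recognized_combatant((176, 79)))  # True

# ...but a female combatant outside the male-only training
# distribution is missed entirely: a misclassified threat.
print(is_recognized_combatant((160, 66)))  # False
```

The detector has no concept of gender at all; the blind spot is created solely by what the training set omits, which is why representative data matter for mission success, not only for equity.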

Furthermore, algorithmic gender bias in military AI applications can translate into civilian suffering and gender-based violence in conflict zones. Many AI-based target profiles portray men as violent, dangerous, and predatory, a framing rooted in the concept of militarized masculinity. Such biased features can effectively treat cisgender men as undeserving of civilian status in targeting decisions, and in a larger sense this assumption increases the danger of gender-based harm to civilians in conflict zones.

A hyper-militarized masculine culture in defense technology, including military AI applications, undermines inclusivity and intersectionality in AI development. Although ethical considerations are essential to building human-centered and responsible AI weapons, male-dominated teams of engineers and military officers continue to lead armaments design and production, prioritizing efficiency, speed, and scale. Without inclusivity in the development and production of military AI, elite power also dominates the business, deepening discrimination, marginalization, and exclusion, and entrenching biased and unethical uses of AI in military applications.

On the normative front, there is a notable absence of national or international regulations explicitly addressing gender bias in defense AI. While some states have endorsed initiatives such as the REAIM Blueprint for responsible military use of AI, these frameworks generally omit provisions to mitigate gender bias, particularly in the development and implementation of AI defense systems. The REAIM Blueprint, for example, only generally acknowledges that all AI applications in the military “must be developed, deployed, and used in accordance with international law, including, as applicable, the UN Charter, international humanitarian law, international human rights law; and, as appropriate, other relevant legal frameworks, including regional instruments.” It makes no specific mention of addressing gender bias in defense AI.

In some cases, states prefer the term “social bias” over “gender bias” in official documents, potentially obscuring the issue and reflecting the incomplete integration of gender norms into international governance of defense AI.

 

Indonesia’s Emerging Defense AI Capabilities and the Gender Bias Gap

Indonesia’s development of defense AI remains in its early stages but shows considerable potential. At present, its use is primarily confined to virtual reality and augmented reality applications in military training. The Indonesian Air Force (TNI AU) has announced plans to introduce AI-based airspace security systems by 2025, and AI has already been incorporated into exercises such as Angkasa Yudha 2024, which used AI in air communications. The Air Command and Staff College (Seskoau) has also sent officer cadets to China to study AI-based defense technologies.

The Indonesian Army (TNI AD) collaborated with a private defense company to develop an AI-enabled unmanned aerial vehicle (UAV). Meanwhile, the Navy (TNI AL) created an AI-powered data analytics tool called System Performance Readiness and Tactical Analysis (Spartan). Spartan helps detect unusual vessel traffic and activities, automates vessel tracking, and suggests possible actions based on real-time data from naval warships at sea.

At the strategic level, the Commander of the Indonesian Armed Forces has expressed intentions to adapt military doctrine to accommodate AI integration. AI is also being used for public relations and information dissemination, and plans are underway to implement AI in defense-related functions in the new capital city, Nusantara.

Despite these advancements, gender bias has yet to become a formal consideration in Indonesia’s defense AI development. Even Stranas KA 2020-2045 (the National Strategy for Indonesian Artificial Intelligence), the blueprint for Indonesia’s AI development, does not address the importance of gender in AI development. The same applies to two key Indonesian defense documents, Jakumhanneg 2020-2024 (General National Defense Policy) and Jakgarahanneg 2020-2024 (National Defense Implementation Policy), neither of which mentions gender in defense AI development.

Where gender elements are present, they tend to be incidental, as in the case of Navy Second Lieutenant (E/W) Fitria Dwi Ratnasari’s involvement in the creation of Antasena, a maritime surveillance system. Without deliberate measures to address bias, Indonesia risks embedding gender inequalities into its defense AI systems, with potentially adverse consequences for both operational effectiveness and social inclusion.

Addressing this challenge requires sustained awareness that AI is neither value-free nor gender-neutral. It also demands integrating feminist perspectives into defense AI development, increasing gender diversity among the researchers and engineers responsible for such systems, and collecting more representative datasets that explicitly incorporate gender variables.

Moreover, the formulation of both national and international regulatory frameworks that explicitly prohibit gender bias in defense AI will be crucial to ensuring that Indonesia’s defense innovation trajectory aligns with the principles of equality, accountability, and responsible technological governance. Additionally, Indonesia’s early-stage AI-based defense development presents opportunities to design gender-sensitive systems from the outset, thereby establishing a strong foundation for future growth.

 

References

Antara News, TNI AU kembangkan teknologi pertahanan berbasis AI pada 2025. 2024, https://www.antaranews.com/berita/4524067/tni-au-kembangkan-teknologi-pertahanan-berbasis-ai-pada-2025

Antara News, Ratusan pasis Seskoau kunjungi China pelajari AI di bidang pertahanan. 2024, https://www.antaranews.com/berita/4260415/ratusan-pasis-seskoau-kunjungi-china-pelajari-ai-di-bidang-pertahanan

Borchert, Heiko, Schutz, Torben, and Verboszky, Joseph (2024). The Very Long Game: 25 Case Studies on the Global State of Defense AI. Springer.

Bimo, Emanuel Ario and Jatmiko, Bagus. A Careful Walk on Thin Ice: Defence AI in Indonesia, DAIO Study 25/26. 2025, https://defenseai.eu/wp-content/uploads/2025/11/daio_study2531_a_careful_walk_on_thin_ice_emanuel_ario_bimo_bagus_jatmiko.pdf

BPPT, Strategi Nasional Kecerdasan Artifisial Indonesia 2020-2045. 2024, https://korika.id/document/strategi-nasional-kecerdasan-artifisial-indonesia-2020-2045/

Daniel, Brett. C2 vs. C4ISR vs. C5ISR vs. C6ISR: What’s the Difference?. 2016, https://www.trentonsystems.com/en-us/resource-hub/blog/c2-c4isr-c5isr-c6isr-differences#:~:text=Essentially%2C%20the%20only%20real%20difference,respectively%2C%20to%20the%20C2%20framework.

Detik News, TNI AD Kini Punya Presenter Berita Kowad AI. 2024, https://news.detik.com/berita/d-7176682/tni-ad-kini-punya-presenter-berita-kowad-ai.

Haas Berkeley, Mitigating Bias in Artificial Intelligence. 2020, https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf

Indonesia Defense, Kemhan Pertimbangkan Integrasi Penggunaan AI untuk Kekuatan Alutsista. 2024, https://indonesiadefense.com/kemhan-pertimbangkan-integrasi-penggunaan-ai-untuk-kekuatan-alutsista/

Interface, AI’s Missing Link: The Gender Gap in the Talent Pool. 2024, https://www.interface-eu.org/publications/ai-gender-gap

Mohan, Shimona, Filling the Blanks: Putting Gender into Military A.I. 2023, http://orfonline.org/research/filling-the-blanks-putting-gender-into-military-a-i

Payne, K. (2018). Artificial Intelligence: A Revolution in Strategic Affairs? Survival, 60(5), 7–32. https://doi.org/10.1080/00396338.2018.1518374

Provan, Anna, Militarising Artificial Intelligence: A Feminist Analysis of Algorithmic Warfare. Robert Bosch Stiftung: The Centre for Feminist Foreign Policy. 2025, https://centreforfeministforeignpolicy.org/wordpress/wp-content/uploads/2025/06/CFFP_PolicyBrief_MilitaryAIBriefing.pdf

REAIM Blueprint of Action. 2024, https://www.mofa.go.kr/www/brd/m_4080/down.do?brd_id=235&seq=375378&data_tp=A&file_seq=9.

RRI, TNI AL Ciptakan Aplikasi Pemantau Bawah Laut “Antasena”. 2023, https://rri.co.id/iptek/389176/tni-al-ciptakan-aplikasi-pemantau-bawah-laut-antasena

The Telegraph, Army uses AI to speed up recruitment as staffing crisis bites. 2024, https://www.telegraph.co.uk/news/2024/02/18/army-ai-speed-up-recruitment/#:~:text=Army%20recruiters%20are%20using%20artificial,records%20provided%20alongside%20their%20application.

Tunjungbiru, A. D., Pranggono, B., Sari, R. F., Sanchez-Velazquez, E., Purnamasari, P. D., Liliana, D. Y., & Andryani, N. A. C. (2025). AI Literacy and Gender Bias: Comparative Perspectives from the UK and Indonesia. Education Sciences, 15(9), 1143. https://doi.org/10.3390/educsci15091143.

UNESCO, Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. 2024, https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
