U.S. Army General Says He’s “Really Close” With ChatGPT
A senior U.S. Army commander has revealed that he regularly uses ChatGPT — describing the AI tool as a close partner in his command role.
The disclosure is one of the most concrete public acknowledgements to date that a senior military leader is relying on a commercial generative-AI chatbot for decision support.
What Was Said — And How It’s Being Used
William “Hank” Taylor, commanding general of the Eighth Army in South Korea and chief of staff for the U.N. Command/ROK-U.S. Combined Forces, told reporters at the Association of the United States Army conference:
“I’ve become — Chat and I are really close lately… As a commander, I want to make better decisions. I want to make sure that I make decisions at the right time to give me the advantage.”
He clarified that the tool is not being used to make combat decisions or to autonomously issue orders. Instead, he said, it supports decision-making in non-combat domains: staff tasks, personal choices, writing weekly reports, modelling scenarios, and sharpening his leadership decisions.
Why This Disclosure Is Significant
- Leadership rhetoric shift: AI use in the military has mostly been confined to classified systems or internal research, so a general openly discussing a public AI model is unusual.
- Decision-support evolution: Using an AI chatbot to help build analytic models or frameworks reflects how fast technology is being integrated into command thinking.
- Transparency & risk: The public nature of the statement invites scrutiny over how reliable, secure, or appropriate a commercial tool is in a military context.
- Symbolic of a broader trend: The U.S. military and allied forces are increasingly embracing AI; this may be a visible example of that movement.
Limits, Warnings, and Considerations
Taylor was careful to stress, “This tool is not for combat decisions. I still have to decide.” That distinction matters because AI in military settings raises major concerns:
- Reliability & trust: Large language models like ChatGPT can hallucinate, misinterpret prompts, and offer little transparency into how they produce answers.
- Security & data classification: Commercial AI models may not meet military requirements for classification, sensitive-data handling, or adversarial resilience.
- Autonomy vs. human oversight: The move from “tool” to “decision-maker” is a line many experts caution against crossing.
- Escalation risk & legal/ethical implications: Introducing AI into decision frameworks, even non-combat ones, may influence timing, thresholds, and strategy in unpredictable ways.
What to Watch Going Forward
- Will other senior commanders follow suit and publicly discuss using commercial AI tools in their processes?
- Will military policies or doctrine evolve to classify, regulate, or restrict the use of generative AI in leadership or operational roles?
- How will the Army document, audit, and validate use-cases to ensure AI supports rather than undermines human judgment?
- What safeguards will be adopted to prevent misuse, flawed modelling, or over-reliance on AI-generated analysis?