

Carolyn Geason-Beissel/MIT SMR | Getty Images
How can you tell whether you’re conducting your sessions with ChatGPT or Claude wisely? One executive’s solution was to build a self-audit. He shares the macro-prompt he created to assess GenAI interactions across five goals and 30 habits that are key to getting richer insights from your generative AI outputs. Copy and paste that resource into an existing GenAI conversation you’ve been having and learn where to refocus to achieve better results.
More than a year into using generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude — until I created a self-audit.
I did what I’ve always done when faced with a process that lacked measurement. I studied every method I could find — prompting guides, conversations with colleagues, my own session patterns. I used AI to help me use AI better. Over time, I built a single self-audit prompt — one that encapsulates more than 30 habits for getting the most from AI.
Each time I ran the self-audit prompt, the output got sharper. The discipline became reflexive for me. That’s the real value of the self-audit: It made me better at using AI, in every session.
Now, at the end of any significant AI session, I simply prompt: “Review this session and assess it against my AI habits guide. Score how I did, identify what I missed, and guide me to apply missed habits.” Within a few minutes, I get a diagnostic that is uncomfortably specific about what I missed. It answers a key question: Was my process good, not just the GenAI output?
A recent field experiment confirmed what I found through my experience. A research team that included MIT Sloan professor Jackson Lu randomly assigned 250 employees at a technology consulting firm in China to either use ChatGPT to assist with their work or to work without it.1 The employees with ChatGPT access were judged as significantly more creative by both their supervisors and outside evaluators. But the gains showed up exclusively among employees with strong metacognitive strategies — those who reflected on their own thinking, recognized knowledge gaps, and refined their approach when results were weak. That finding underscores that metacognition — thinking about your thinking — is the missing link between simply using AI and using it well.
AI widens the gap between disciplined and undisciplined professionals. People who skip the discipline generate more volume without more insight — a pattern consistent with what researchers at the University of California, Berkeley’s Haas School of Business called “unsustainable intensity” in findings published in early 2026.2
Knowing how to use AI is good — but to get the most value from the tool, you need to know whether you’re using it well. The self-audit gives you that.
A Self-Audit That Measures Five Key Goals
My self-audit prompt is organized across five goals: set up, refine, verify, own, and systematize.
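For readers who want to operationalize this, here is a minimal Python sketch of how such a self-audit prompt could be assembled programmatically. The five goal names come from the article; the sample habits under each goal and the 1-5 scoring scale are hypothetical placeholders, not the author's actual 30-habit guide:

```python
# Sketch: assemble a self-audit prompt from a session transcript.
# The five goals are from the article; the example habits listed
# under each goal are hypothetical placeholders.

AUDIT_GOALS = {
    "Set up": ["State the role, audience, and constraints before asking."],
    "Refine": ["Iterate on weak outputs rather than accepting the first draft."],
    "Verify": ["Ask the model to justify or source its factual claims."],
    "Own": ["Rewrite key passages in your own voice before using them."],
    "Systematize": ["Save prompts that worked so they can be reused."],
}

def build_audit_prompt(session_transcript: str) -> str:
    """Return one prompt asking the model to score the session
    against the habits guide, goal by goal."""
    guide = "\n".join(
        f"- {goal}: " + "; ".join(habits)
        for goal, habits in AUDIT_GOALS.items()
    )
    return (
        "Review the session below and assess it against my AI habits guide.\n"
        "Score how I did on each goal (1-5), identify what I missed, "
        "and guide me to apply missed habits.\n\n"
        f"HABITS GUIDE:\n{guide}\n\n"
        f"SESSION TRANSCRIPT:\n{session_transcript}"
    )

# Paste the assembled prompt into the same GenAI conversation to run the audit.
audit_prompt = build_audit_prompt("User: Draft a launch memo...\nAssistant: ...")
print(audit_prompt)
```

The point of the sketch is the structure, not the wording: a fixed habits guide plus the session transcript, combined into one reusable audit request.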
References
1. S. Sun, Z.A. Li, M.-D. Foo, et al. “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” Journal of Applied Psychology 110, no. 12 (December 2025): 1561-1573, https://psycnet.apa.org/doi/10.1037/apl0001296.
2. A. Ranganathan and X.M. Ye, “AI Doesn’t Reduce Work — It Intensifies It,” Harvard Business Review, Feb. 9, 2026, https://hbr.org.

