Analysis Group Launches In-House Experiment to Understand Impact of Structured Guidance on Use of AI

December 19, 2025

The rapid rise of AI has fueled hopes of major productivity gains across industries. Yet many organizations still struggle to provide guidance on AI adoption, and many have failed to identify measurable improvements in performance or output. According to a recent survey, 61% of organizations opt to restrict employee use of tools like ChatGPT, and more than a quarter have banned them outright. The question is no longer whether tomorrow’s firms will use tools like ChatGPT; that seems a foregone conclusion. The pressing issue is how AI can be deployed in real-world workflows to produce safe, reliable gains in quality and productivity.

An Experiment to Understand the Impact of Structured Guidance on the Use of AI

Analysis Group and Dr. Timothy DeStefano – Associate Research Professor at Georgetown University’s McDonough School of Business and an applied economist specializing in digital technology, AI, and firm productivity – have partnered in an ongoing study of AI integration, including a series of scientifically designed and implemented field experiments. The firm recently launched a randomized field experiment focused on systematic literature review (SLR) tasks, which demand accuracy, synthesis, and judgment in evaluating previously published works. The experiment compared employee performance when using AI tools independently versus using AI tools paired with structured guidance and training tailored to the task, in order to isolate the impact of that guidance on work quality and efficiency.

Initial findings are promising, indicating that structured, task-specific guidance grounded in workflow design, rather than mere tool familiarity, can transform AI use from a mere time-saver into a quality-improvement tool.

Pilot study participants highlighted that the structured guidance helped them shift their cognitive focus: they were able to reallocate time and effort from manual extraction to critical review, reconciliation, and decision making. Participants also noted that they benefited from the guidance’s emphasis on systematically reviewing AI outputs, which improved the overall quality of their work.

Key Takeaway

These early findings point to a powerful insight for organizations: The real value of AI may depend less on the technology itself and more on the structure that surrounds its use. Firms eager to adopt AI tools often focus on access and speed, but effective integration requires thoughtful design around human workflows, guidance, and feedback loops. The likely result is not only greater productivity, but also higher-quality insights.

What’s Next

Professor DeStefano is supported by an Analysis Group team led by Managing Principals Chris Borek and Mihran Yenikomshian, Principal Jimmy Royer, Managers Timothy Spittle and Gregory Weiss, and Associate Karen Yang. They will continue to expand this research, broadening participation and tracking outcomes over a sustained period. Findings will be shared in an upcoming paper.