– and why it’s changing how I engage with research again
I graduated from my clinical psychology program in 2006 and, after a few short bursts working as a clinician, spent most of the years between 2007 and 2016 doing project and research work. I was okay at it. I worked on some cool projects, with some very cool people, and even published a few papers along the way.
In 2017, I shifted into a new role at Flinders University, focusing on mental health promotion. It was a meaningful change, and one that brought with it a different set of daily tasks. One of the big shifts I noticed early on was in how I consumed information. I found myself spending less time with peer-reviewed research and more time in other formats — online lectures, podcasts, books, websites, blogs, and existing programs.
These sources speak a different language — one that is often more immediately useful for giving talks, running workshops, and building campaigns. I still try to stick with sources grounded in evidence — interviews with researchers, evidence-based programs, reputable health websites — but my information diet changed significantly. It has moved away from “what does the academic literature say?” to “what has already been translated into something practical?”
Recently though, I’ve found myself turning back toward the research literature again. One reason for this is that I’m helping to write the next wellbeing strategy for Flinders University. As part of that process, I want to feel more confident that what we’re recommending is informed by a good understanding of what’s showing up in the research — not just what’s trending on social media or captured in a book summary.
So each week, I run a quick Google Scholar search combining keywords like “university students” with “mental health” or “wellbeing”. I browse some of the more cited or recent papers, pick one that seems relevant, and dive in. I then create a digest of that paper to share with a few of my colleagues who are also involved in setting strategic direction.
Here’s the part that has made this process more interesting: AI.
After reading the article myself, I now use AI to interrogate it further. I feed the article into ChatGPT (or a similar tool), and apply a set of research digest prompts I’ve developed over time. These prompts ask a wide range of questions — about the study’s context, methods, findings, implications, limitations, and more.
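If you'd rather script this workflow than paste everything into a chat window, the same idea can be sketched against a chat-style LLM API. This is a minimal illustration, not my exact setup: the abbreviated question list and the `build_digest_messages` helper are stand-ins, and you'd substitute the full prompt set included at the end of this post.

```python
# Sketch of the digest workflow: pair an article's text with a set of
# digest questions and build the messages you'd send to a chat-style LLM.
# The question list here is abbreviated; swap in the full set below.

DIGEST_QUESTIONS = [
    "What's the broader context of the research?",
    "What specific questions or hypotheses does the study address?",
    "What are the primary outcomes of the research?",
    "What limitations did the authors acknowledge?",
]

def build_digest_messages(article_text: str, question: str) -> list[dict]:
    """Build a chat-completion message list for one digest question."""
    return [
        {"role": "system",
         "content": "You are helping a practitioner digest a research paper. "
                    "Answer only from the supplied article text."},
        {"role": "user",
         "content": f"Article:\n{article_text}\n\nQuestion: {question}"},
    ]

# Each question would then be sent to your chosen provider, e.g. with the
# OpenAI client (model name is illustrative):
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

Looping over the questions one at a time, rather than sending them all at once, tends to keep the model's answers focused on a single angle, which mirrors the back-and-forth conversation I describe below.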
What I love about this process is that it widens my lens. Left to my own devices, I might latch onto one or two interesting points in a paper and call it a day. But when I use these prompts, I’m encouraged to look at the paper from different angles — including some I wouldn’t normally think to explore. And when I’m reading a study in an area I’m not very familiar with, the prompts help break down the content into more digestible chunks.
The initial summaries from AI often spark new questions, and before I know it, I’ve spent 30 minutes having a back-and-forth conversation about the paper — thinking through implications, brainstorming how to communicate the key ideas, and drafting a shareable summary for my colleagues. It becomes less about passive reading and more about active engagement.
Of course, I still read the paper first. AI isn’t perfect — it can misread tone, get facts and numbers wrong, or oversimplify. But as a thought partner, it’s proven to be surprisingly useful.
Below, I’ve included the prompt questions I use when digesting a study. Feel free to borrow, adapt, or expand them. I find they’re particularly useful when I’m reading outside of my comfort zone — where the jargon and density of academic language would otherwise make comprehension a slog.
I’m hopeful that as tools like ChatGPT continue to evolve — and as platforms like Consensus (a research-specific AI search engine) mature — they will open up research to a much broader audience. If we want to live in a world where people can choose between getting their information from a random influencer on TikTok or from a peer-reviewed article, we need to make the latter option easier to access, navigate, and apply.
AI might just be the bridge that helps more people turn to research — and actually understand it — when they have real questions about their mental health, learning, or wellbeing.
Digest Questions
1. Big Picture / Context
- What’s the broader context of the research?
- Why is this topic important or relevant right now?
- What problem or gap in the literature is this research addressing?
- What issue or challenge is this study trying to better understand or solve?
- Why might this be important to people working in education, wellbeing, or mental health today?
2. Research Aims & Objectives
- What specific questions or hypotheses does the study address?
- What did the authors aim to achieve or clarify with their research?
- Is the study aiming to compare, evaluate, propose, or consolidate knowledge?
3. Methodology & Design
- What was the overall research design? (Experimental, correlational, qualitative, etc.)
- Who were the participants? (Demographics, how recruited, how representative?)
- What methods or procedures did the researchers use? (Interviews, surveys, experiments, observations?)
- What measures or instruments did the researchers use to capture key variables?
- Are there any notable strengths or weaknesses in the study’s design or methods?
- For secondary research (like reviews):
  - What types of studies were included (e.g., RCTs, observational, qualitative)?
  - What inclusion/exclusion criteria were used?
  - How was study quality or bias assessed (e.g., AMSTAR, GRADE)?
  - What methods were used to synthesise findings (e.g., meta-analysis, narrative synthesis)?
4. Key Findings & Results
- What are the primary outcomes of the research?
- Were the results statistically and/or practically significant?
- Are the results consistent or inconsistent with previous findings in this area?
- What unexpected findings or insights emerged?
- Which findings were strongest or most consistent across studies?
- What findings were less conclusive or had mixed results?
5. Interpretation of Results
- How do the authors explain their results?
- Are their interpretations reasonable given the data and methods used?
- Are there alternative explanations for the findings?
- What assumptions do the authors make in drawing their conclusions?
6. Implications & Applications
- What are the practical implications of this study’s findings?
- Who could directly benefit from these findings, and how?
- What actions, recommendations, or changes might be inspired by this research?
- If someone were to implement these insights, what first steps could they take?
- What types of actions (e.g., program design, policy changes, teaching strategies) does this research seem to support?
7. Limitations & Critical Considerations
- What limitations did the authors acknowledge?
- Are there other limitations or critiques not mentioned in the study?
- How generalisable are these findings beyond the specific context studied?
- What caveats or cautions should readers bear in mind?
- What are the limitations of the evidence base itself (e.g., missing populations, outcomes, contexts)?
8. Future Directions
- What unanswered questions remain?
- How could future research build on or clarify this study’s findings?
- What could be the next steps in practical or applied terms?
- What are the research gaps, and what applied or policy directions could address them?
9. Accessibility & Communication
- How clearly and engagingly are the findings presented?
- What terminology or concepts need simplifying for broader understanding?
- What analogies, metaphors, or practical examples could clarify key concepts?
- If you had to explain this study in under 2 minutes to a colleague, what would you say?