How we use AI
About the Healthy Forests Foundation’s use of artificial intelligence (AI)
We use AI carefully, transparently, and always under human direction.
The Healthy Forests Foundation is committed to restoring ecosystems, respecting cultural knowledge, and shifting how Australia thinks about healthy forests. As part of that mission, we use digital tools, including AI systems such as large language models (LLMs), for example OpenAI’s ChatGPT.
This page explains how and why we use AI, the rules we follow, and what we believe responsible use looks like in a values-led organisation like ours.
Why we use AI
AI is a tool that helps us work smarter. It is not a substitute for expertise, ethics or relationships. When used well, it can:
- summarise complex research
- help us draft early versions of technical reports, grant submissions or policy briefs
- organise field data or identify trends across large datasets
- suggest better structure, phrasing or clarity for our written outputs.
Every AI-assisted output is reviewed by a human. Final responsibility always sits with our team.
How we guide AI: source control matters
We don’t let AI generate content freely from the internet.
We instruct AI tools to work only with sources we select, including:
- peer-reviewed scientific papers in our Zotero research library
- pre-selected news and website articles
- First Nations-led publications and protocols (where permission has been granted)
- operational manuals, monitoring frameworks, and policy standards used in our projects
- official sources (e.g. government, university, or trusted NGO websites)
- documents we’ve authored and published ourselves.
This keeps our work accurate, traceable, and aligned with ethical and scientific standards.
What we don’t use AI for
We do not use AI to:
- simulate, rewrite or interpret First Nations knowledge
- make decisions about project direction, policy, or communications
- invent data, citations or references
- represent a human author, co-author or stakeholder.
We will never use AI to replace real voices or lived experience.
Want to see what’s in our library?
We’re happy to share a read-only list of what’s currently in our Zotero collection, so partners, students and collaborators can see the breadth of material we’re working from. Just get in touch and we’ll send you a snapshot list. (We don’t share full-text access, but we’re transparent about what informs our work.)
How we disclose AI use
We believe in honest, proportionate disclosure. If AI tools have contributed meaningfully to a piece of content, for example by helping to generate its structure, narrative or analysis, we will say so.
You may see a statement like this:
“This content was prepared with assistance from ChatGPT (OpenAI, May 2025) to summarise research and support early drafting. All content was reviewed and finalised by Healthy Forests Foundation staff.”
We don’t disclose trivial uses (e.g. grammar suggestions in Word). We always disclose substantive uses, because trust matters.
Our internal ethical guideline
Our staff follow a detailed internal ethical guideline for the responsible use and declaration of AI.
It outlines when AI use must be declared, how to maintain authorship integrity, how to avoid misuse, and how we uphold cultural safety in every AI interaction.
Where our thinking comes from
We’ve based our AI approach on respected ethical frameworks and research. Three sources that shape our position are:
- “Can academics use AI to write journal papers? What the guidelines say” – The Conversation
- Resnik & Hosseini (2025), “Disclosing AI Use in Scientific Research and Publication” – Accountability in Research
- Gorraiz (2025), “Acknowledging the New Invisible Colleague” – Journal of Informetrics
We adapt these insights to suit the work we do, grounded in land care, education, and ecological repair.
Questions or feedback?
We welcome honest conversations about how AI is changing the way we work, and about how it shouldn’t. Please reach out to us at general