The Australian workplace has always been a complex ecosystem of interpersonal dynamics and regulatory compliance. However, over the past year, a new variable has entered the fray: Generative Artificial Intelligence. At Central HR Australia, we have observed a significant shift in the reporting of workplace disputes. While AI tools are often praised for productivity, our experience also suggests that they are increasingly being used to “weaponise” the grievance process, creating a surge in formal complaints that are as sophisticated in their wording as they are thin on factual substance.
We are not suggesting that every complaint drafted with AI is a problem; this article focuses on the growing number of inaccurate, AI-generated complaints we have seen recently.
The Complainant’s Paradox: Professionalism Without Provenance
The most immediate impact of AI is the sheer volume of grievances being filed. Previously, making a formal complaint required effort to articulate the issue, which in turn demanded reflection and thought. Today, an employee can input a few bullet points of frustration into an AI tool and receive a three-page, legally tempered formal grievance in seconds.
We find that an increasing number of complaints are worded with extreme formality, often appearing so legally sound that they meet internal policy thresholds for a formal investigation by default. This has created a significant administrative burden: we are seeing a growing number of investigations triggered by documents that look like High Court submissions but lack the basic “who, what, when, and where.”
For the investigator, this makes the fact-finding process more tedious and involved, as every allegation must still be thoroughly tested. Interviews with complainants tend to be emotionally heightened: emboldened by the “legalistic” weight of their AI-drafted document, the complainant frequently struggles when asked to provide the specific evidence or context required to substantiate their claims. The result is a cycle of exhaustive investigations that ultimately lead to unsubstantiated findings, leaving the complainant feeling ignored and the employer trapped in an expensive, inconclusive process.
The Respondent’s Pitfall: The Danger of Algorithmic Advice
It isn’t only the complainants who are turning to AI. We are observing a worrying trend where respondents are seeking strategic advice from AI sources before consulting a professional. Frequently, these tools provide inaccurate or overly aggressive interpretations of Australian employment law, misleading employees into thinking that cooperation with an internal process is unnecessary or optional.
One such example involved an employee who was a respondent in a misconduct investigation. The employee repeatedly asserted that he had been advised by his lawyers not to cooperate with the investigator who was independently examining the matter. However, upon further inquiry, it became apparent that the employee did not have genuine legal representation. Instead, he was being ‘coached’ by a generative AI tool and held significant misconceptions about his rights and obligations within an investigation process, and his responsibilities toward the employer.
We have observed that this case, and others like it, is often underpinned by premature threats of external legal representation while the individual remains actively employed; non-compliance with organisational policies (including refusal to attend interviews, observe confidentiality protocols, or provide evidence); and a generally disruptive approach to the process. In many instances, this behaviour irreparably damages trust and ultimately renders the employment relationship untenable.
For HR managers, this creates complexity: what begins as a reasonable business process can descend into an argument over technicalities and misgivings, permanently detracting from the core issues. Meanwhile, the psychological support needs of all parties – complainant, respondent, and the HR teams managing the process – are compromised. When a respondent enters an investigation with a “defensive” AI-generated script, the opportunity for an honest, restorative conversation is often lost.
The Internal Burden: Errors and Perception
Perhaps the most concerning trend is the pressure placed on internal investigators. Stretched for time and facing an influx of these “polished” grievances, many internal teams are falling into common traps:
- Outdated Knowledge: Relying on superseded legislation or poorly interpreted internal policies that don’t account for the nuance of modern AI-generated claims.
- The Fact-Perception Gap: Failing to spend the necessary time separating an employee’s perception (often amplified by the emotive tone of AI) from verifiable facts, and succumbing to the false sense of urgency created by AI-generated letters.
- Lack of Training: We are seeing an increase in non-evidence-based investigations conducted by untrained staff who make “gut-feel” calls, or work to predetermined outcomes because they are overwhelmed by the litigious language of the initial complaint.
The Death of Constructive Resolution
The overarching result of this AI influence is a shift toward litigious language and a dwindling appetite for “soft” resolutions. Mediation and conflict resolution—once the gold standard for maintaining workplace harmony—are being sidelined in favour of a “winner-takes-all” investigative approach. When a dispute is framed in the cold, adversarial language of an algorithm, the human element can be easily lost. The desire for a constructive path forward is replaced by a desire for a “verdict,” which rarely heals a fractured workplace culture.
Frequently Asked Questions
Q: If a grievance is clearly written by AI, can we dismiss it?
A: No. The drafting tool does not determine the merit of a complaint. Assess the substance – the basic “who, what, when, and where” – against your policy thresholds, rather than the polish of the wording.
Q: How do we handle a respondent who refuses to cooperate based on "AI-sourced" legal advice?
A: Clarify, in writing, the respondent's rights and obligations under your policies and the investigation process, and encourage them to seek genuine professional advice. Document any continued non-compliance, as it may itself become a conduct issue and can ultimately render the employment relationship untenable.
Q: Why are we seeing more "unsubstantiated" findings lately?
A: AI-drafted complaints often meet formal thresholds on wording alone while lacking factual detail. Once the allegations are tested against evidence in interviews, many cannot be substantiated.
Q: How can we reduce the number of investigations triggered by AI-drafted complaints?
A: Triage complaints on factual substance rather than tone, seek particulars before escalating to a formal investigation, and preserve early, informal resolution pathways such as mediation.
The Path Forward
Navigating this new era requires a return to fundamentals: evidence, empathy, and expertise. At Central HR Australia, we specialise in helping organisations look past the algorithm to find the facts and restore workplace harmony.