What Ear Wax Taught Me About Drug Checking

A reflection from the end of a year inside the NSW Drug Checking Trial

Tuesday, 7 March 2026

Clancy Beckers is the Drug Checking Lead at NUAA, where they work at the intersection of peer-led harm reduction and health policy. Over the past year, Clancy has coordinated drug checking services across NSW music festivals as part of the NSW Drug Checking Trial, leading teams of peer workers to deliver front-line health support in festival settings.

With an understanding of the practical realities of service delivery and a commitment to community-led approaches, Clancy brings a distinctive perspective to questions about how health systems engage, and often fail to engage, with the people they're designed to serve.

In this reflection, written at the close of the Drug Checking Trial and their time in the role, Clancy draws on a year of insight to ask a deceptively simple question: if most harm reduction happens invisibly, in communities, without any professional input at all, what are we actually measuring when we evaluate the services we can see?


So, tonight was my last festival. After a year as a peer lead coordinating drug checking services across NSW music festivals, the season, and the role with it, is wrapping up. I'm feeling reflective. And the thing I keep coming back to isn't what I expected.

It started with ear wax.


A swab, a lab, and a bottle of Waxsol

Picture this: someone's dealing with a bit of ear wax. They drop by their GP, who takes a look and thinks, "Let's just swab this to be safe." Makes sense, yeah?

But let's walk through what happens next.

The patient books an appointment, makes their way to the clinic, sits in the waiting room, then gets seen. The GP spends ten minutes chatting through symptoms and having a look. Either a nurse or the GP grabs a swab, labels it, packs it up. Admin sorts out the pathology request. A courier swings by and picks it up with a stack of other samples. Over at the lab, someone logs it in, someone else plates it, it sits overnight in an incubator, a scientist checks the culture the next day and maybe runs some sensitivity tests, a pathologist gives it the tick, and the results get sent back. The GP has a look, gives the patient a ring.

Roughly speaking, that's 35–50 minutes of professional time spread across six or more people, $50–150 in direct costs to the system, and 2–5 days on the calendar.

The outcome? "No infection. It's just wax. Pop some ear drops in."

So the patient heads to Chemist Warehouse, picks up an eight-dollar bottle of Waxsol, and uses it twice a week for ten minutes. That self-care routine (unglamorous, invisible, completely on their own terms) is actually what sorts the problem. Seventeen hours a year of their time. Zero system cost. And it just... works.

I'm not saying the swab was pointless. It ruled out infection. It covered the GP legally and clinically. It genuinely served a purpose. But the ratio is pretty wild: this huge institutional machine spins up to land on a solution that happens entirely outside the institution.

Keep that in mind for a moment.

The tent and the iceberg

Drug checking at music festivals lives in exactly this same tension.

I've spent the past year coordinating peer workers who run drug checking services. The infrastructure behind it is pretty substantial: spectroscopy gear, trained analysts, peer workers delivering health chats, supervision setups, reporting systems, ethics protocols, coordination with event medical teams, police liaison. It's resource-heavy, carefully thought through, and - I really want to be clear here - genuinely valuable. It picks up dangerous adulterants. It creates a rare space for non-judgemental health contact with people who use drugs. It saves lives.

But here's what's only become clear to me as the work's wound down: the drug checking tent is the ear swab.

The answer it gives, more often than not, is basically: "That's MDMA, but it's strong. Take less, wait longer, stay hydrated, look after your mates." Which is the harm reduction version of "use some ear drops."

Because the people coming through that tent aren't blank slates hearing this stuff for the first time. They're people who, in most cases, already dose carefully based on what they've learned, check in with friends about batches, time their use around what else they've got on, carry water, keep an eye on each other. They've got years of built-up, lived knowledge about managing drug-related risk.

That ongoing self-care, the everyday, invisible, community-level harm reduction that happens without any service, any funding, any professional input, that's the ear drops. And it's where the vast majority of actual harm reduction happens.

If you picture it as an iceberg:

Above the waterline sits everything that's visible, funded, and measured: the drug checking tent, the health chats, the clinical bits, the Trial itself. This is what gets reported to government, written up in evaluations, and mentioned in the media.

Below the waterline sits everything that's invisible, unfunded, and uncounted: peer knowledge networks, individual ways of managing risk, community monitoring of supply, people looking after each other at 3am. No KPIs. No Medicare item numbers. No press releases.

The above-the-waterline stuff might add up to a few hours of contact per festival-goer per year. The below-the-waterline stuff runs all the time.

Why the visible always wins

There's a pattern here that stretches way beyond drug checking. Governments and institutions really prefer what political scientist James C. Scott might call legible interventions: things that can be seen, counted, controlled, and reported on.

A drug checking tent is beautifully legible. You can count how many samples got tested. You can log how many health chats happened. You can track referrals. You can write in a ministerial brief: "We kept people safe at festivals."

Community-level harm reduction is illegible by its very nature. You can't count the overdose that didn't happen because someone's mate said "go easy on that batch." There's no line item for "knew what they were doing because they'd been doing it carefully for years." You can't stick it in a pie chart.

And here's where it gets tricky: because the legible intervention can be measured, it's the thing that gets funded. Because it's funded, it's the thing that gets evaluated. Because it's evaluated, it's the thing that gets expanded. The feedback loop just keeps reinforcing itself, regardless of where most of the actual harm reduction is taking place.

This isn't some conspiracy. Nobody's being dodgy about it. It's just a structural pattern that shapes how all health systems allocate resources. The ear swab gets a pathology rebate. The ear drops don't.

The question nobody's asking

If the Drug Checking Trial gets evaluated mainly on tent-based numbers, and those numbers look solid, which they should, because the services are well-run, the policy takeaway will probably be: "Brilliant. Let's fund more tents."

That's not wrong. But it misses the bigger opportunity.

The more interesting question is: how do you use the tent as a spark for building community-level harm reduction that keeps going when there's no tent around?

Because festivals are just a handful of weekends each year. The same people are making choices about drug use the other 340 days. The tent's not there then. But the peer networks are. The knowledge is. The relationships are.

What would it look like to design for that? To build the ear drops equivalent for drug checking? Something that's:

  • Easy to access and self-directed

  • Woven into people's existing routines and relationships

  • Cheap to keep going at scale

  • Not dependent on a professional gatekeeping every interaction

  • Backed by the high-infrastructure system when something genuinely unusual pops up

Peer workers, the people I've spent this year learning from, sit right at this crossroads. They're funded and legible enough to work within the system, but they come from the community and work with the trust and knowledge that no institutional service can just manufacture. They're the bridge between what's visible on the surface and what's happening in the depths.

Five tensions worth sitting with

I don't have tidy answers. But I reckon the following tensions deserve more attention than they're getting in the evaluation and policy chat around drug checking in Australia:

Episodic vs. Continuous. The system funds episodes of care. Harm reduction is ongoing. How do you bridge that gap without forcing community practices into clinical boxes?

Professional vs. Peer. The system trusts credentialled professionals. But so much of what makes harm reduction effective lives in peer knowledge built up through lived experience. How do you recognise that knowledge without co-opting it or turning it into something it's not?

Measurement vs. Impact. What gets measured gets managed. But the most powerful harm reduction actively resists measurement. How do you make the invisible visible without making it institutional, without turning the ear drops into another swab?

Control vs. Autonomy. The system wants to deliver interventions to people. Harm reduction works best when people have agency over their own risk. How do you fund autonomy? How do you write a grant application for "people already know what they're doing, we'd just like to support that"?

The tent as infrastructure vs. the tent as symbol. Is drug checking mainly a health service, or is it mainly a signal that the state's willing to take a different approach to people who use drugs? Both matter, but they point to drastically different design choices, and ways of evaluating success.

Writing from the threshold

I'll be honest: I'm a bit embarrassed this is only clicking for me now, at the end of the role rather than at the start. The evaluation working group meetings have wrapped. My window to directly shape this particular policy process has mostly closed.

But maybe that's exactly why it's visible now. When you're in the thick of the work (briefing peer workers, sorting logistics, managing the day-to-day reality of running services at festivals), you're naturally focused on the tent. On making the visible thing work as well as it possibly can. It's only from this threshold, looking back, that the shape of the iceberg becomes clear.

So this is me getting it down before it fades. Not as a dig at the Trial, which has been genuinely important work that I'm proud to have been part of. But as a reflection on what comes next, and what we might be missing if we only evaluate what we can see.

The ear drops are working. They've always been working. The question is whether the system can learn to see that - and to support it - without turning everything into another swab.
