Using AI responsibly in an evolving technology landscape
As AI tools become more prevalent and embedded in our everyday working lives, they can make some processes and tasks more efficient and streamlined.
However, we should adopt a degree of caution. AI is still prone to making assumptions, hallucinating, or misunderstanding context, so human input remains vital.
Lewis Jones, Administration Support Officer in our Governance and Performance team, explores the benefits and potential dangers involved in utilising AI in his role.
“As an Administration Support Officer, the proliferation of AI tools can feel like a double‑edged sword: so many processes can be sped up but, as they still don’t work perfectly, a lot of time can be wasted checking for accuracy and correcting mistakes. They can also bring risks that don’t suit an environment where accuracy, audit trails, and immaculate information governance matter.
“A big plus is how advances in voice recognition, note‑taking, and summarisation have made the process of turning my messy scribblings into a clean meeting summary much easier. Human thought and judgement are still used throughout the process; AI simply accelerates the polishing.
“AI can also help with information organisation. When wrangling spreadsheets, tools can help identify duplicates, flag missing fields, suggest categories, and draft plain‑English summaries for stakeholders who don’t want to stare at raw data.
“However, as we all know, AI can be confidently wrong. AI tools can misread context, invent details, or fill gaps with hallucinated assumptions. In an NHS operational setting, that’s dangerous. A fabricated reference number, the wrong date, or a made‑up policy can cause real disruption and leave us open to audit issues down the line. That means AI output is never the final answer. It can only create a draft that needs checking against source documents, shared folders, and established processes. If it can’t cite where a detail came from, it can’t be trusted as true.
“There is also significant data risk. Admin work involves sensitive operational information, and even when it is not patient data, it still requires proper handling in line with rigorous GDPR policies. We have to ask whether information put into an AI system can be trusted not to be reused or exposed elsewhere in the future. Fortunately, we are rolling out a robust AI policy, allowing us to stay compliant while still benefiting from the convenience.
“As these tools become embedded in more of the software we use every day, a balanced approach is essential. We can trust the machine to do what it is good at, but only with human input on both sides of the process. The use of AI for drafting, structuring, summarising, and pattern‑spotting can exceed human capacity, but it must always be paired with human scrutiny for accuracy, context, and compliance. In my role, AI is best seen as a capable assistant for the admin workload, not an authority. The value is real, but only if I stay in charge of what goes out under my name.”

