The Applicability of LLMs in the Framework of Unqualified Data

November 18, 2025

By: Margarita Garcia, Managing Director, Naoitech

The current landscape of Gen AI business implementation and development relies heavily on generic LLMs as the foundation for projects. The problem with this approach is that it starts from a fallacy: that all the data generic LLMs use is accurate, fair, and usable. In reality, generic LLMs have been trained on massive amounts of unqualified data, which makes them unreliable for important and relevant tasks without the assistance of Subject Matter Experts (SMEs) who can validate the accuracy and relevance of their responses.

LLMs are great tools when used in the correct context and under an appropriate implementation strategy; however, the current wave of hasty and haphazard Gen AI project implementations is proving ineffective and, in some instances, harmful. Companies develop and deploy projects assuming that Gen AI will self-correct, and they circumvent the planning and evaluation this work requires. In many cases, companies rely completely on the output of Gen AI tools without appropriate governance.

A recent example involved a large consulting firm that was tasked with creating a report for the Australian government and was caught using Gen AI (GPT-4o). The issues were uncovered by a government Subject Matter Expert, who found several sections of the document that had been fabricated by the LLM: judges cited with incorrect last names and attributed books that do not exist, along with fictitious references. The consulting firm had failed to perform quality assurance, validate its sources, or disclose its use of Gen AI, which led the Australian government to request a partial refund for the service. It was later reported that the same company repeated the same mistake in a report requested by the government of Newfoundland and Labrador, Canada.

The importance of Subject Matter Experts in developing a research and implementation strategy for Gen AI-powered projects cannot be overstated. SMEs understand the processes and the validity of the data. As domain experts, they can discern the accuracy and relevance of the responses an LLM produces, mitigate issues, and devise strategies that ensure the success of the project.

Governance is non-negotiable: explainability, human intervention, logging, and security must be integrated from the design phase, and decisions must be visible to avoid "black box" behavior. The successful deployment of AI and Gen AI tools depends on a structured roadmap that prioritizes business value, data quality, ethical governance, and strategic partnerships. At Naoitech, we have developed a comprehensive implementation roadmap designed to navigate the complexities of AI deployment, ensuring that every project follows risk management practices (operational, cybersecurity, data) and preserves the explainability, transparency, and human oversight of its models, in accordance with ethical principles and personal data protection regulations. This is how we guarantee successful project implementation for our clients.


Sources

https://www.firstpost.com/explainers/deloitte-ai-citations-canada-australia-controversy-explained-13954229.html

https://globalnews.ca/news/11541408/deloitte-report-newfoundland-errors/

https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/