Automating the Formulation of Competency Questions in Ontology Engineering

dc.contributor.advisor: Tamma, Valentina
dc.contributor.advisor: Grasso, Floriana
dc.contributor.advisor: Payne, Terry
dc.contributor.author: Alharbi, Reham
dc.date.accessioned: 2025-03-05T08:20:14Z
dc.date.issued: 2025
dc.description.abstract: Ontology reuse is a fundamental aspect of ontology development, ensuring that new ontologies align with established models to facilitate seamless integration and interoperability across systems. Despite decades of research promoting ontology reuse, practical solutions for semi-automatically assessing the suitability of candidate ontologies remain limited. A key challenge is the lack of explicit requirement representations that allow for meaningful comparisons between ontologies. Competency Questions (CQs), which define functional requirements in the form of natural language questions, offer a promising means of evaluating ontology reuse potential. However, in practice, CQs are often not published alongside their ontology, making it difficult to assess whether an existing ontology aligns with new requirements, ultimately hindering reuse. This thesis tackles the challenge of ontology reuse by introducing an automated approach to retrofitting CQs into existing ontologies. Leveraging Generative AI, specifically Large Language Models (LLMs), this approach generates CQs from ontological statements, enabling the systematic extraction of functional requirements even when they were not explicitly documented. The performance of both open-source and closed-source LLMs is evaluated, with key parameters such as prompt specificity and temperature explored to control hallucinations and improve the quality of retrofitted CQs. Results indicate high recall and stability, demonstrating that CQs can be reliably retrofitted and aligned with an ontology’s intended design. However, precision varies due to long-tail data effects, and potential data leakage may artificially inflate recall, necessitating further research. By enabling the reconstruction of CQs, this approach provides a foundation for assessing ontology reuse based on requirement similarity.
Specifically, CQ similarity can serve as an indicator of how well an existing ontology aligns with the needs of a new ontology development effort. To operationalize this idea, this thesis proposes a reuse recommendation phase within ontology development methodologies. This phase systematically identifies candidate ontologies based on requirement overlap, offering a structured approach to reuse assessment. The methodology is validated through a practical case study, demonstrating its effectiveness in real-world ontology design. By embedding an explicit reuse recommendation step in the ontology engineering process, this approach provides ontology engineers with a systematic method to identify suitable candidate ontologies, enhancing the overall design process.
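The reuse recommendation idea sketched in the abstract can be illustrated as a simple ranking: score each candidate ontology by how well its retrofitted CQs cover the CQs of the new development effort. The snippet below is a minimal, hypothetical sketch using Jaccard token overlap as a stand-in similarity measure; the function names and toy CQs are illustrative only, not the thesis's actual method.

```python
# Hypothetical sketch of a reuse-recommendation step: rank candidate
# ontologies by competency-question (CQ) overlap with new requirements.
# Jaccard token overlap is a placeholder for a real similarity measure.

def cq_similarity(cq_a: str, cq_b: str) -> float:
    """Jaccard overlap between the token sets of two CQs."""
    a, b = set(cq_a.lower().split()), set(cq_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(new_cqs, candidates):
    """Score each candidate by the mean best-match similarity of its
    retrofitted CQs against the new requirements, highest first."""
    scores = {}
    for name, cqs in candidates.items():
        best = [max(cq_similarity(n, c) for c in cqs) for n in new_cqs]
        scores[name] = sum(best) / len(best)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: two new requirements against two candidate ontologies.
new_cqs = ["What venues host a conference?", "Who chairs a workshop?"]
candidates = {
    "conf-onto": ["Which venues host a conference?", "Who organises an event?"],
    "pizza-onto": ["What toppings does a pizza have?"],
}
ranking = rank_candidates(new_cqs, candidates)
```

In this toy run the conference ontology outranks the pizza ontology because its retrofitted CQs overlap more with the new requirements, which is the intuition behind using CQ similarity as a reuse indicator.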
dc.format.extent: 222
dc.identifier.citation: Reham Alharbi, Valentina Tamma, Floriana Grasso, and Terry Payne. 2025. Automating the Formulation of Competency Questions in Ontology Engineering.
dc.identifier.uri: https://hdl.handle.net/20.500.14154/74982
dc.language.iso: en
dc.publisher: University of Liverpool
dc.subject: Competency Questions
dc.subject: Large Language Models
dc.subject: Ontology Reuse
dc.title: Automating the Formulation of Competency Questions in Ontology Engineering
dc.type: Thesis
sdl.degree.department: Department of Computer Science
sdl.degree.discipline: Artificial Intelligence
sdl.degree.grantor: University of Liverpool
sdl.degree.name: Doctor of Philosophy

Files

License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed to upon submission
Copyright owned by the Saudi Digital Library (SDL) © 2025