Automating the Formulation of Competency Questions in Ontology Engineering

Date

2025

Publisher

University of Liverpool

Abstract

Ontology reuse is a fundamental aspect of ontology development, ensuring that new ontologies align with established models to facilitate seamless integration and interoperability across systems. Despite decades of research promoting ontology reuse, practical solutions for semi-automatically assessing the suitability of candidate ontologies remain limited. A key challenge is the lack of explicit requirement representations that allow for meaningful comparisons between ontologies. Competency Questions (CQs), which define functional requirements in the form of natural language questions, offer a promising means of evaluating ontology reuse potential. However, in practice, CQs are often not published alongside their ontologies, making it difficult to assess whether an existing ontology aligns with new requirements and ultimately hindering reuse. This thesis tackles the challenge of ontology reuse by introducing an automated approach to retrofitting CQs into existing ontologies. Leveraging Generative AI, specifically Large Language Models (LLMs), this approach generates CQs from ontological statements, enabling the systematic extraction of functional requirements even when they were not explicitly documented. The performance of both open-source and closed-source LLMs is evaluated, with key parameters such as prompt specificity and temperature explored to control hallucinations and improve the quality of retrofitted CQs. Results indicate high recall and stability, demonstrating that CQs can be reliably retrofitted and aligned with an ontology's intended design. However, precision varies due to long-tail data effects, and potential data leakage may artificially inflate recall, necessitating further research. By enabling the reconstruction of CQs, this approach provides a foundation for assessing ontology reuse based on requirement similarity.
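The retrofitting step described above can be illustrated with a minimal, deterministic sketch: verbalising ontology axioms (represented here as subject-predicate-object triples) into candidate CQs. The thesis itself prompts an LLM for this step; the template table below is a hypothetical stand-in used only to make the input/output shape concrete.

```python
# Minimal sketch of retrofitting CQs from ontological statements.
# The thesis generates CQs with LLM prompts; the hand-written templates
# here are a hypothetical, deterministic stand-in for illustration.

def retrofit_cq(subject: str, predicate: str, obj: str) -> str:
    """Turn one axiom (triple) into a candidate competency question."""
    templates = {
        "hasTopping": "What {obj} does a {subj} have?",
        "subClassOf": "Is a {subj} a kind of {obj}?",
    }
    # Fall back to a generic relational question for unknown predicates.
    template = templates.get(predicate, "How is {subj} related to {obj}?")
    return template.format(subj=subject.lower(), obj=obj.lower())

axioms = [
    ("Pizza", "hasTopping", "Topping"),
    ("Margherita", "subClassOf", "Pizza"),
]
retrofitted = [retrofit_cq(*axiom) for axiom in axioms]
```

In the actual approach, the LLM replaces the template lookup, which is what makes prompt specificity and temperature the key quality controls.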
Specifically, CQ similarity can serve as an indicator of how well an existing ontology aligns with the needs of a new ontology development effort. To operationalize this idea, this thesis proposes a reuse recommendation phase within ontology development methodologies. This phase systematically identifies candidate ontologies based on requirement overlap, offering a structured approach to reuse assessment. The methodology is validated through a practical case study, demonstrating its effectiveness in real-world ontology design. By embedding an explicit reuse recommendation step in the ontology engineering process, this approach provides ontology engineers with a systematic method to identify suitable candidate ontologies, enhancing the overall design process.

Keywords

Competency Questions, Large Language Models, Ontology Reuse

Citation

Reham Alharbi, Valentina Tamma, Floriana Grasso, and Terry Payne. 2025. Automating the Formulation of Competency Questions in Ontology Engineering.

Copyright owned by the Saudi Digital Library (SDL) © 2025