Knowledge-Graph-Embeddings-to-Implement-Explainability
Knowledge Graph Embeddings (KGE) were used to implement Explainable Artificial Intelligence. As AI develops, users must be able to understand how algorithms make their decisions, especially in safety-critical tasks such as driverless cars. Knowledge graphs are an inherently understandable form of text-based data, structured as an interconnected network of information. They can be converted into KGE by transforming the unique entities in the graph into vector representations. With these embeddings, predictions were made for missing or incorrect links in the network, and further explanations were produced by plotting the clusters formed by the data.

Knowledge graphs and their embedding models were researched, and four KGE models were created and tested on their ability to rank the correct links from a Covid-19 dataset. This dataset was extracted from research papers about the virus so that information could be retrieved more quickly. The most accurate model was then used to implement knowledge graph completion and to explain the dataset through visual and textual interpretations. A 29,000-word thesis describes the research, testing and interpretation carried out in this project.
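As an illustration of the general approach, the sketch below trains a minimal TransE-style embedding over a handful of triples and then ranks candidate tails for a query link. The triples, embedding size, training settings and the use of PyTorch are assumptions made for demonstration only; they are not the four models or the Covid-19 data used in the thesis.

```python
# Minimal TransE-style sketch: embed entities/relations as vectors and rank
# candidate links. Triples below are hypothetical placeholders.
import torch
import torch.nn as nn

triples = [
    ("remdesivir", "treats", "covid-19"),
    ("covid-19", "caused_by", "sars-cov-2"),
    ("sars-cov-2", "binds", "ace2"),
]

# Index the unique entities and relations so each gets its own vector.
entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
relations = sorted({t[1] for t in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim = 16
ent_emb = nn.Embedding(len(entities), dim)
rel_emb = nn.Embedding(len(relations), dim)
opt = torch.optim.Adam(list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=0.01)

def score(h, r, t):
    # TransE assumption: a correct triple satisfies h + r ≈ t,
    # so a smaller distance means a more plausible link.
    return (ent_emb(h) + rel_emb(r) - ent_emb(t)).norm(p=1, dim=-1)

heads = torch.tensor([e_idx[h] for h, _, _ in triples])
rels = torch.tensor([r_idx[r] for _, r, _ in triples])
tails = torch.tensor([e_idx[t] for _, _, t in triples])

for epoch in range(200):
    # Negative sampling: corrupt each tail with a random entity and use a
    # margin ranking loss so true triples score better than corrupted ones.
    neg_tails = torch.randint(len(entities), tails.shape)
    loss = torch.relu(1.0 + score(heads, rels, tails) - score(heads, rels, neg_tails)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Link prediction: rank every entity as a candidate tail for (remdesivir, treats, ?).
h = torch.tensor([e_idx["remdesivir"]])
r = torch.tensor([r_idx["treats"]])
cands = torch.arange(len(entities))
dist = score(h.expand(len(entities)), r.expand(len(entities)), cands)
for i in dist.argsort():
    print(entities[int(i)], float(dist[i]))
```

Evaluating how highly the true tail is ranked among all candidates is the basis of the ranking metrics used to compare the models.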
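One possible way to produce the visual explanations described above is to cluster the learned entity vectors and project them to two dimensions, so that related entities appear close together on a plot. The snippet continues from the sketch above (it reuses `ent_emb` and `entities`); the scikit-learn/matplotlib calls and the choice of two clusters are illustrative assumptions, not the exact plots produced in the thesis.

```python
# Cluster and project the entity embeddings from the sketch above for a
# simple visual explanation of which entities the model places together.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

vectors = ent_emb.weight.detach().numpy()              # entity vectors from the sketch above
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
coords = PCA(n_components=2).fit_transform(vectors)    # reduce to 2-D for plotting

plt.scatter(coords[:, 0], coords[:, 1], c=labels)
for name, (x, y) in zip(entities, coords):
    plt.annotate(name, (x, y))
plt.title("Entity embedding clusters")
plt.show()
```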