- The advantage of DMX is that it builds structured knowledge that is reliable and verifiable. Its disadvantages are the need for manual construction, low efficiency, and the difficulty of updating and extending the knowledge.
- The advantages of Large Language Models are quick access to data and easy updating. Their drawbacks are the black-box problem and the risk of presenting users with false or fabricated knowledge.
- So the question is whether a large language model can be integrated into DMX to combine the strengths of both. I would like to ask the experts whether they have any good methods for implementing this idea.
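One conceivable shape for such an integration, sketched very loosely: keep DMX as the source of truth, let the LLM draft answers only from facts retrieved from the structured store, and always return the sources alongside the answer. Everything below is hypothetical illustration, not the DMX API; `query_llm` is a stand-in stub for a real model call.

```python
# Hedged sketch: combining a structured knowledge base with an LLM,
# assuming the LLM may only draft from retrieved facts and must cite them.
# All names here are invented for illustration; this is not DMX's API.

def retrieve(knowledge_base: dict, query: str) -> list:
    """Toy retrieval: select facts whose topic name appears in the query."""
    return [(topic, fact) for topic, fact in knowledge_base.items()
            if topic.lower() in query.lower()]

def query_llm(prompt: str) -> str:
    """Stub for a real LLM call; here it simply echoes the grounded facts."""
    return "Draft answer based on: " + prompt

def grounded_answer(knowledge_base: dict, query: str) -> dict:
    facts = retrieve(knowledge_base, query)
    if not facts:
        # Refuse rather than hallucinate when nothing can be retrieved.
        return {"answer": None, "sources": []}
    prompt = "; ".join(f"{t}: {f}" for t, f in facts)
    return {"answer": query_llm(prompt), "sources": [t for t, _ in facts]}

kb = {"DMX": "a platform for structured, associative knowledge"}
result = grounded_answer(kb, "What is DMX?")
print(result["sources"])  # ['DMX']
```

The point of the sketch is the division of labour: the LLM only phrases, the structured store decides what may be said, and every answer stays retraceable to its sources.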
See, e.g., “How LLMs teach you things you didn’t know you didn’t know” [post]
RalfBarkow, thank you for your reply.
Dear Liangbing, I really appreciate your question and thoughts, since they closely match ideas I have had myself. Unfortunately I have not had the time to learn more about LLMs recently. A few years ago we applied for EU funding for a medical project with very similar requirements. It was about the stochastic analysis of computed-tomography images of the brain via self-learning inference systems, to identify irregular patterns that might be relevant for the early diagnosis of Alzheimer’s disease. The main argument for using DMX in this context was that the “reasoning” would not be done by the machine but by a (human) doctor, while the computer is much better at running image comparisons. The goal was that the machine would select and suggest candidates for the doctor to look at. It was also key that access to the sources used in the selection process was provided via paths of associations within DMX, so that the process could be retraced at any point. And last but not least, the doctors would be able to “feed” the system by creating new associations or deleting wrong ones, aka semantic editing.
Dear jpn, the use case you described is very inspiring to me. Thank you very much for your patient explanation!