FREE online courses on Expert Systems - Explanations
One of the more interesting features of expert systems is
their ability to explain themselves. Because the system records which rules
were used during the inference process, it can replay
those rules to the user as a means of explaining its results.
This type of explanation can be very compelling for some
systems, such as the bird identification system. It could report that it knew the
bird was a black-footed albatross because it knew it was dark colored and an
albatross. It could similarly justify how it knew it was an albatross.
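The mechanism behind this kind of explanation can be sketched with a few lines of code. The following is a minimal, illustrative sketch (not the course's implementation): rules and fact names such as "albatross" and "dark colored" are assumptions chosen to echo the bird example. The key idea is that the inference engine records, for each derived fact, which rule produced it; the explanation facility then replays that record.

```python
# Illustrative sketch of an explanation facility for a rule-based system.
# Rule and fact names are assumptions modeled on the bird example.

RULES = [
    # (rule name, antecedent facts, concluded fact)
    ("R1", {"dark colored", "albatross"}, "black-footed albatross"),
    ("R2", {"large wingspan", "seabird"}, "albatross"),
]

def infer(given_facts):
    """Forward-chain over RULES, recording which rule derived each fact."""
    facts = set(given_facts)
    justifications = {}  # derived fact -> (rule name, antecedents used)
    changed = True
    while changed:
        changed = False
        for name, antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                justifications[consequent] = (name, antecedents)
                changed = True
    return facts, justifications

def explain(fact, justifications, depth=0):
    """Replay the recorded rules as a human-readable explanation."""
    indent = "  " * depth
    if fact not in justifications:
        print(f"{indent}{fact!r} was given as input.")
        return
    name, antecedents = justifications[fact]
    print(f"{indent}{fact!r} was concluded by rule {name} because:")
    for antecedent in sorted(antecedents):
        explain(antecedent, justifications, depth + 1)

facts, just = infer({"dark colored", "large wingspan", "seabird"})
explain("black-footed albatross", just)
```

Running the sketch prints a nested justification: the albatross conclusion is traced back through rule R2 to the given facts, just as the bird system justifies each intermediate identification.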
At other times, however, the explanations are relatively
useless to the user. This is because the rules of an expert system typically
represent empirical knowledge, and not a deep understanding of the problem
domain. For example, a car diagnostic system has rules that relate symptoms to
problems, but no rules that describe why those symptoms are related to those
problems.
Explanations are always of great value to the knowledge
engineer, however. They are the program traces for knowledge bases. By looking at
explanations, the knowledge engineer can see how the system is behaving and how
the rules and data are interacting. This is an invaluable diagnostic tool during
development.