Symbolic Reasoning & Knowledge Representation
Symbolic reasoning refers to artificial intelligence systems that reason over explicitly represented knowledge rather than learning from data. It is closely associated with knowledge representation and qualitative reasoning: roughly speaking, the computer works with explicit words and statements. A data-driven machine learning system, by contrast, reasons statistically over its training data, and its outputs can be confidently wrong because the system does not understand the concepts behind the words it produces. Knowledge in a knowledge-based system can be represented simply by writing out facts and relationships, either in a programming language or in an agreed-upon syntax. For example, you could record "grass is green" and "grass is on the ground" as facts in whatever knowledge-base syntax you are using. With those facts, a knowledge-based reasoner could respond to a statement like "there was grass in the sky" with "grass cannot be in the sky because grass is on the ground."
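The grass example above can be sketched as a tiny fact store. This is a minimal illustration, not any particular knowledge-base syntax; the triple representation, the `KnowledgeBase` class, and the `check_claim` method are all hypothetical names chosen for this sketch.

```python
# Minimal sketch of a knowledge base that stores facts as
# (subject, relation, object) triples and rejects claims that
# contradict a stored location fact. Illustrative only.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, subject, relation, obj):
        # Record an explicit fact, e.g. ("grass", "is on", "ground").
        self.facts.add((subject, relation, obj))

    def check_claim(self, subject, relation, obj):
        # A claim that something "is in" a place contradicts a stored
        # "is on" fact that puts the subject somewhere else.
        if relation == "is in":
            for (s, r, o) in self.facts:
                if s == subject and r == "is on" and o != obj:
                    return f"{subject} cannot be in the {obj} because {subject} is on the {o}"
        return "no contradiction found"

kb = KnowledgeBase()
kb.assert_fact("grass", "is", "green")
kb.assert_fact("grass", "is on", "ground")

print(kb.check_claim("grass", "is in", "sky"))
# -> grass cannot be in the sky because grass is on the ground
```

The point of the sketch is that the response is derived from explicitly stated facts, not from statistical patterns in data.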
Knowledge representation systems can capture facts from subject matter experts and answer other people's questions at scale, drawing on reliable information. With ChatGPT, users often report that it gives an answer that sounds rational and plausible but turns out to be incorrect once fact-checked. This is because ChatGPT learns statistically from language as data rather than from explicitly stated facts: it answers with whatever is statistically most likely given the text it has scraped from the internet. This has led researchers to call such systems "stochastic parrots," because they repeat what other people have said on the internet without actually understanding what they are saying. Symbolic reasoning systems are more reliable in this regard because they draw inferences only from the tightly specified knowledge an expert has embedded in the system.
Many modern AI experts believe that combining neural machine learning systems with symbolic reasoning systems could alleviate many of the difficulties facing more advanced AI applications, such as self-driving cars. Recent self-driving crashes are often caused by a neural network briefly confusing one object it is seeing with something else, leaving the system unable to commit to a decision (brake, turn right, turn left, and so on). A symbolic reasoning layer could make executive-level decisions that a neural network cannot: for example, it could retain the knowledge that there is a stop sign at a street corner even if the computer vision system momentarily classifies the sign as something different.
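The stop-sign scenario can be sketched as a symbolic "map memory" layered over a noisy perception stream. This is a toy illustration under stated assumptions, not a real autonomous-vehicle API; the `known_signs` map, the intersection names, and the `decide` function are all hypothetical.

```python
# Hedged sketch: a symbolic layer holds persistent map knowledge and
# overrides a momentary misclassification from the vision system.

# Prior knowledge embedded by an expert or a map: this intersection
# has a stop sign, regardless of what perception reports frame-to-frame.
known_signs = {("Main St", "1st Ave"): "stop_sign"}

def decide(intersection, perceived_label):
    # Stored knowledge takes priority at a known location.
    if known_signs.get(intersection) == "stop_sign":
        return "brake"
    # Otherwise fall back on what the vision system reports.
    if perceived_label == "stop_sign":
        return "brake"
    return "proceed"

# Vision momentarily misreads the sign as a billboard, but the
# symbolic layer still brakes because the map fact persists.
print(decide(("Main St", "1st Ave"), "billboard"))
# -> brake
```

The design point is that the symbolic fact is stable across frames, so a last-minute flicker in the classifier's output does not change the executive decision.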