Herbert A. Simon's Computer-Modelled Theory of Scientific Discovery

IFSR Newsletter 1988 No. 4 (20) October/November
Wolfgang Schinagl
Ludwig Boltzmann Institut für Wissenschaftsforschung Graz, Mozartgasse 14, 8010 Graz, Austria
A report on the lecture held by Prof. Herbert A. Simon (Carnegie Mellon University, Pittsburgh, USA; Nobel Prize 1978) at the Karl Franzens University in Graz, Austria.
Herbert Simon distinguishes between two orientations of Artificial Intelligence. The first is concerned with finding ways of getting computers to do tasks which, if they were done by human beings, would call for intelligence. The second, designated as Cognitive Science, consists of trying to understand how the human mind does intelligent things: how we human beings solve problems, use language, make decisions, design new objects, etc.
Cognitive scientists try to cast their theories of how human beings think in the form of computer programs. Simon asked what we have learned about human thinking by attempting to model human intelligence with the computer. The methodology of these investigations is taken from all areas of science: “from the simple to the complex”. Explaining simple, well-structured tasks, such as puzzle solving, was one of the main interests of early research in this field, but today scientists are asking how ill-structured problems can be solved. Scientific work and discovery are examples: the goals are at first only incompletely stated and are gradually elaborated during the process of problem solving. One aim of this research is to determine whether we can provide a satisfactory explanation of the processes of scientific discovery using only the same basic procedures that were employed in elucidating simpler forms of human thinking and problem-solving, namely: reading information from outside to inside, writing information from inside to outside, storing information and relations between symbols and structures in some kind of memory, manipulating and modifying these structures, and comparing patterns. “So the hypothesis, more specifically stated, is that a system which is capable of performing these elementary functions has the basic resources it needs for intelligence and, in fact, the basic resources it needs for the kind of intelligence that solves scientific problems and makes scientific discoveries.” Computers have these properties.
“The hypothesis is, first, that since computers can do that, they are capable of intelligent behaviour and of modelling the intelligent behaviour of human beings, including the making of scientific discoveries, and second, that it is because we human beings have the same capabilities that we are able to do those things.” This empirical hypothesis is tested first by examining the available historical data on human scientific discovery and, second, by writing computer programs which model the processes of discovery. If the models are correctly designed, they too will be able to make discoveries, and if these programs are given the same initial conditions as a human scientist, they will recreate that person’s discovery.
A computer rediscovers scientific laws
In testing these hypotheses, Simon first of all analyzed data-driven science, in which scientists often start with some data and try to find regularities that can be summed up in a neat scientific law, perhaps a mathematical function. Simon introduced the computer program BACON, developed by H. Simon, P. Langley, G. Bradshaw and J. Zytkow, which formulates empirical laws from given numerical data by using simple heuristics. In several examples he showed that BACON had rediscovered numerous laws and concepts from the history of physics and chemistry, including Kepler’s third law and the concepts of inertial mass, specific heat and the index of refraction. By referring to the history of science he analyzed the arguments of those who deny that finding laws from data is a real discovery: “If it isn’t a real discovery, then we are going to have to revise our histories of science, we are going to have to make new decisions as to who the heroes are, we are going to have to take the pictures of Kepler, Black, Ohm, Planck and Balmer out of the history books, or at least take out those parts of their work which were merely finding a law from empirical data.” He asserted that this process is in fact an important aspect of scientific discovery. Others are finding problems, finding data, inventing instruments, planning experiments and inventing representations; these too are understandable and can be explained in information-processing terms. In analyzing the phenomenon of finding problems, Simon pointed out the importance of being surprised. You can, however, only be astonished when you know enough about a situation, when you have expectations about what should happen. Fleming, for example, was able to find penicillin by using the following heuristics: defining the scope of the discovered phenomenon, trying to find the mechanism it involved, purifying and then identifying the chemical constitution. The same pattern can be recognized in the discovery of uranium, X-rays, etc.
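The flavor of BACON-style heuristics can be illustrated with a small sketch. What follows is a hypothetical reconstruction in Python, not the published BACON program: when two terms vary in the same direction, it introduces their ratio; when they vary in opposite directions, their product; it stops when a derived term is nearly constant. Applied to planetary distances D and periods T, it arrives at a term algebraically equal to D^3/T^2, i.e. Kepler's third law. All function names and tolerance values here are illustrative assumptions.

```python
# Hypothetical sketch of BACON-style law finding (not the original program).

def nearly_constant(vals, tol=0.02):
    """True if every value lies within tol (relative) of the mean."""
    mean = sum(vals) / len(vals)
    return all(abs(v - mean) <= tol * abs(mean) for v in vals)

def increasing(vals):
    """True if the values rise strictly monotonically."""
    return all(a < b for a, b in zip(vals, vals[1:]))

def duplicates(vals, terms, tol=1e-6):
    """Reject a new term whose values merely reproduce an existing column."""
    return any(all(abs(a - b) <= tol * max(abs(a), abs(b), 1.0)
                   for a, b in zip(vals, old))
               for old in terms.values())

def bacon(terms, max_depth=5):
    """terms maps a term name to its list of observed values."""
    for _ in range(max_depth):
        names = list(terms)                 # snapshot before adding new terms
        for i, x in enumerate(names):
            for y in names[i + 1:]:
                xs, ys = terms[x], terms[y]
                if increasing(xs) == increasing(ys):
                    name = f"({x})/({y})"   # same trend -> try the ratio
                    vals = [a / b for a, b in zip(xs, ys)]
                else:
                    name = f"({x})*({y})"   # opposite trends -> the product
                    vals = [a * b for a, b in zip(xs, ys)]
                if name in terms or duplicates(vals, terms):
                    continue
                terms[name] = vals
                if nearly_constant(vals):
                    return name, sum(vals) / len(vals)
    return None

# Mercury, Venus, Earth, Mars: mean distance (AU) and period (years)
D = [0.387, 0.723, 1.000, 1.524]
T = [0.241, 0.615, 1.000, 1.881]

result = bacon({"D": list(D), "T": list(T)})
# result[0] is a term algebraically equal to D^3/T^2; result[1] is close to 1.0
print(result)
```

Starting from D and T alone, the search first forms D/T (not constant), then D·(D/T) = D²/T, and finally (D/T)·(D²/T) = D³/T², which is constant across the planets to within the tolerance.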
After analyzing the importance of finding new instruments for scientific discoveries, Simon talked about a computer program which plans experiments. It has some knowledge of the sciences for which it is supposed to plan research and, after being fed the results of previous experiments, it devises new ones, which produce new results for further input, and so on. When historical data was fed in, this particular program proved successful and arrived at almost the same discoveries as the human scientists. Simon’s general conclusion is “that like the other phenomena of nature, the human mind is understandable, it is explainable by natural laws”, and that the things the human mind does in order to think are exactly the things that the computer, the ordinary digital computer, is capable of doing.
