Problemistics - Problémistique - Problemistica
The Art & Craft of Problem Dealing
Chapter 3
Knowledge and knowledge engineering
Introducing the hypothesis
Operationalizing the hypothesis
The Artificial Expert
The expert system
The knowledge base
The inference engine
The Human Expert
Memory
Mastery
Consistency
Creativity
Chapter 1 has pointed out the existence of problems related to the way most institutions (schools) traditionally deal with the teaching-learning process. The lack of individualization, personalization and integration of, and within, learning experiences has been specifically underlined.
Chapter 2 has put forward a possible theoretical answer based on the utilization of multimedia instructional tools capable of providing a solution to those very deficiencies.
Chapter 3 will re-formulate the theoretical solution in terms of a specific hypothesis.
Introducing the hypothesis (^)
The hypothesis puts forward the argument that the use of computer-assisted learning courseware would promote individualization, personalization and integration of learning processes and would generally (i.e. in most cases) lead to an increased efficiency in learning performances and in users' skills.
The way the hypothesis is formulated (N. Bennet, 1973; D. Van Dalen 1979; Nachmias-Nachmias, 1982) takes into account two exigencies related to:
- content. The hypothesis should present "a relevant and logical explanation of phenomena" (D. Van Dalen, 1979);
- form. The statements contained in the hypothesis ought to be made operational (N. Bennet, 1973).
To make hypothetical statements operational means that the variables involved should be expressed as specific facts whose occurrence might be appropriately observed and measured.
This means:
i) to define the variables in practical specific terms, employing clearly expressed and understandable concepts (i.e. whose use and meaning is or might be shared by the scientific community);
ii) to undergo/devise appropriate (relevant in scope and magnitude) experiences (observations) and experiments (tests), scientifically conducted (verifiable and/or replicable), that will clarify (explain, assess, measure) the nature (kind, degree) of the relationships amongst the variables under examination.
Operationalizing the hypothesis
In order to make the hypothesis previously expressed operational, three steps need to be undertaken, related to:
i) Topic. An area of learning has to be chosen, in which respect the hypothesis will be tested; inside the chosen area, specific topics have to be selected that will provide materials for the building of the courseware.
ii) Skills. Capacities relevant to scholars and learners within the chosen area need to be pointed out and analysed in their components (what they are made of), causes (what made them) and consequences (what is likely to be made using them, i.e. performances). [Chapter 4]
iii) Tool. The instructional devices (hardware-software) that will operate as promoters for learning and for enhancing the capacities required in operating within the chosen area need to be described in general [Chapter 5].
After having undertaken these three steps, it will be possible to express in operational terms the hypothesized relationships, with reference to the specific area (the learning topic), between the independent variables (the learning tool) and the dependent variables (the learning performances and the skills' enhancement) and to test/assess the nature (kind, degree) of these relationships.
The topic (^)
The topic chosen belongs to the area of research methods.
Leedy (1980) defines 'research' as "the manner in which we solve knotty problems in our attempt to push back the frontiers of human ignorance."
Two aspects appear to be stressed: the emergence of knowledge (as opposed to ignorance) and the endeavour towards the solution of problems.
On the basis of this and other scientists' definitions (Churchman, 1971; J. Zeisel, 1981; R. Bennet, 1983), which view research as the development of knowledge for some specific purpose, research will be referred to throughout the paper as any activity characterized by two aspects:
i) knowledge engineering
ii) problem solving.
What is meant by knowledge engineering and problem solving needs to be expressed in order to be able to point out, at a later stage, what the demands of performing these tasks are and, consequently, which capacities are required to satisfy those demands (i.e. to actually conduct a research process).
Knowledge Engineering (^)
During the '50s and '60s, Allen Newell, Cliff Shaw and Herbert Simon at the Carnegie Institute of Technology developed an Information Processing Theory of Human Problem Solving ("The theory proclaims man to be an information processing system, at least when he is solving problems" - Newell & Simon, 1972).
An information processing system is, basically, made of receptors (to receive data), a processor (to interpret information) and a memory (to store symbols).
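As a minimal sketch (with invented names, purely illustrative, not drawn from Newell & Simon's own formalism), such a system can be rendered as three cooperating parts:

```python
# A schematic rendering of the three components just listed:
# receptors take in data, a processor interprets it into symbols,
# and a memory stores the symbols for later use.

class InformationProcessingSystem:
    def __init__(self):
        self.memory = []                  # store of symbols

    def receive(self, data):
        """Receptor: take in raw data from the environment."""
        return data

    def process(self, data):
        """Processor: interpret data into a symbol (here, trivially)."""
        return str(data).upper()

    def run(self, data):
        symbol = self.process(self.receive(data))
        self.memory.append(symbol)        # store for later recall
        return symbol

ips = InformationProcessingSystem()
print(ips.run("signal"))   # -> SIGNAL
print(ips.memory)          # -> ['SIGNAL']
```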
Hawkins, Levy, Montgomery (1988) define data as signals detectable by the human senses and information as a collection of data, generally of different types and from different sources. The same authors refer to "the structure or the organization of information" as "knowledge".
As for 'research', Churchman (1971) characterizes it as "any activity that produces knowledge".
In the course of his analysis, he rejects the definition of knowledge as a collection of information because "knowledge resides in the user and not in the collection". For him "knowledge is a potential for a certain type of action" (C. W. Churchman, 1971).
The action the author refers to, is that leading to problem solving.
So, even accepting the substance of the Newell, Shaw & Simon approach, it seems more appropriate, with regard to the research activity, to replace the term information processing with that of knowledge engineering.
Knowledge engineering, used mainly with reference to computer science, has been defined by Feigenbaum (1980) as the process of reducing a large body of knowledge to a precise set of facts and rules.
The concept has come into widespread use in the '80s following the development and implementation (during the '60s and '70s) of a new kind of computer programs (expert systems) aiming at embodying and communicating in a structured organized way the expertise related to specific domains of knowledge.
Knowledge engineering will be referred to as any activity demanding acquisition and representation of knowledge.
This activity will be seen as a necessary pre-requisite for problem solving.
Problem solving (^)
Problem solving, as pointed out by A. Newell (1968), can be regarded as "a matter of search - of starting from some initial position (state of knowledge) and exploring until a position is attained that includes the solution - the desired state of knowledge".
A problem solving process could be considered as made of two main components: the problem itself or the goal to be achieved (what) and the solution to it or the way to achieve the goal (how).
In a paper for the National Society for the Study of Education (USA), Getzels (1964) presented a classification of "types of problems", ranging from cases in which the problem is given and there is a standard method for solving it to cases in which the problem itself exists but remains to be identified or discovered and no standard method for solving it is known.
Given this variety of possibilities that might emerge within the problem solving process, Getzels (1964) remarked that "there is a difference between working on the solution to a problem that is presented and discovered and discovering a problem that needs solution."
In relation to this spectrum of possibilities, two main different problem solving approaches have been identified and expounded by the scientific community: insight and heuristics.
Insight
Insight is a psychological concept introduced by W. Kohler (1925) and developed by the Gestalt school (Wertheimer, Dunker) in order to account for and stress the relevance of the intuitional and perceptual aspects in the problem solving process (D. Dellarosa, 1988).
This has meant to place problem finding/problem restructuring as a pre-requisite for problem solving. To this aim, it has been pointed out (Wertheimer, 1968) that flexibility in thinking-perceiving could lead to a productive representation of the problem that embodies, at the same time, its solution. The grasping of the solution happens in a flash, when a new conceptualization (mental representation) of the problematic case is achieved.
Heuristics
Heuristics is a product of the information processing approach.
It refers to plausible strategies for interconnecting, step by step, bits of organized information in order to approach and eventually reach a solution (for example the "means and end analysis" of Newell, Shaw and Simon).
Lenat (1982) refers to heuristics as "a large array of informal judgmental rules" and Gibbs (1978) defines it as a "rule of thumb which enables problems to be solved by the use of intelligent guesses or short cuts or progressive trial and error".
The demands of the problem solving activity refer to the (physical & mental) manipulation and structuring of different (material/symbolic) systems in order to achieve a specific result (goal).
If the demands are successfully met, i.e. if the problem is found and the steps towards its solution are clearly and openly sequenced, then, the result is the emergence of an algorithm (H. R. Lewis and C. H. Papadimitriou, 1978).
An algorithm could be defined as the representation of procedural steps for performing a specific problem solving task.
Computer programs are fed with algorithms, while human beings learn and use algorithms throughout their lives.
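To make the notion concrete, a classic fixed algorithm can be sketched (an illustration, not drawn from the sources cited): Euclid's procedure for the greatest common divisor, a finite sequence of formally expressed steps that always terminates in the desired result.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed, openly sequenced set of steps
    that terminates with the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b   # replace the pair with (b, a mod b)
    return a

print(gcd(48, 18))  # -> 6
```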
The skills needed to devise/activate algorithms and, more generally, to perform knowledge engineering and problem solving tasks (i.e. to carry out a research process) will now be searched for and pointed out.
The Skills (^)
The performing of a task sees the interaction of three elements:
- the capacities of the performer,
- the demands of the task and
- the strategies (methods, techniques) to relate demands to capacities (Welford, 1980; 1986).
Welford (1980) defines skill as "the use of efficient strategies", efficient in terms of input applied (time, effort) and output achieved (quantity/ quality of results).
Following the characterization of skill as an entity composed of various capacities, the researcher's skill will be thought of as the qualitative sum and interaction of those capacities that lead to the performing of an efficient research strategy that gives (satisfactory) answers to the demands of the research process.
The research strategy having been equated to a strategy for knowledge engineering and problem solving, the capacities composing the skill to perform these tasks will be searched for and pointed out.
The search will be conducted in two stages that will deal with:
i) the skills embodied in an artificial expert (expert system);
ii) the skills required by an ideal type human expert (researcher).
The Artificial Expert (^)
It has been widely acknowledged (Miller, 1981; Feigenbaum, 1982; N. Dehn and R. Schank, 1985) that the analysis of how artificial experts work is apt to throw light, to some extent, on the way the human mind works.
The study and development of artificial experts belongs to the area of Artificial Intelligence that is "the branch of computer science that envisions the development of intelligent systems" (Barr and Feigenbaum, 1982).
Within the general framework of Artificial Intelligence, the analysis will focus primarily on a kind of programmes, the expert systems, whose aim is to assist the human expert, with structured knowledge, in decision making/problem solving processes. The analysis will expand only insofar as the "capacities" embodied in this kind of artificial expert are brought to emerge.
The Expert System
An expert system is composed of:
i) a Knowledge Base;
ii) an Inference Engine.
i) The Knowledge Base.
A Knowledge Base is "a collection of facts, relationships and rules which embody current expertise in a particular area" (A. J. Meadows et alii, 1987).
In order to build a Knowledge Base, knowledge needs to be engineered.
To do so, two factual operations are required:
- knowledge acquisition
- knowledge representation.
Knowledge Acquisition
The relevant knowledge pertinent to a specific knowledge domain needs to be elicited from wherever it might be (i.e. human experts, records).
The main ways of knowledge elicitation from human experts (A. Hart, 1986; S. Gronow & I. Scott, 1986; A. Walker, 1987) are based on :
- Explanation. The human expert describes, in a very thorough way, how he performs a certain task.
- Observation. The human expert presents concrete examples of what he does offering material for rule induction.
What is required from the artificial expert at this stage is the capability of storing the acquired knowledge for future use. To do so knowledge must be memorized. Memory represents the founding capacity on which a Knowledge Base is built. Without memory there is no Knowledge Base but memory, by itself, as a simple storage of data, is not sufficient to produce a Knowledge Base.
What is required is an organized memory that allows for the quick and easy retrieval of what has been previously stored.
To this purpose, knowledge needs to be represented in a way that fits the computer as a symbol manipulating machine (J. Haugeland, 1986).
This exigency is fulfilled by the subsequent step in knowledge engineering.
Knowledge Representation
Knowledge representation consists in the symbolic way the acquired knowledge is expressed and structured in order to be dealt with in a meaningful way by the machine.
In an expert system, knowledge can be represented (A. Bennet, 1985; Q. Takashima, 1986; A. Walker, 1987) by:
- Rules expressed generally in the if ... then form (condition-action), meaning that whenever a certain situation is encountered, a certain action is suggested (rule-based expert systems: Emycin, Syllog).
- Nets expressed in the form of nodes (containing evidence or hypotheses) and arcs (signifying causal linkages between nodes) (net-based expert system: Prospector).
- Objects expressed in the form of attributes (properties that characterize the object) able to send and receive messages (interact) (Example: Smalltalk Language for building expert systems).
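The three forms of representation can be sketched in a few lines of plain Python (a hypothetical illustration, not the internals of Emycin, Prospector or Smalltalk; the medical vocabulary is invented):

```python
# Rules: condition-action pairs in if ... then form
rules = [
    ({"fever", "cough"}, "suspect influenza"),
    ({"fever", "stiff neck"}, "suspect meningitis"),
]

# Nets: nodes (evidence/hypotheses) joined by causal arcs
net = {"smoke": ["fire"], "fire": ["alarm"]}

# Objects: attributes characterizing the object, able to interact
class Patient:
    def __init__(self, symptoms):
        self.symptoms = set(symptoms)

# A rule fires whenever its conditions are met by the object's attributes
p = Patient(["fever", "cough"])
matches = [action for cond, action in rules if cond <= p.symptoms]
print(matches)  # -> ['suspect influenza']
```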
Whatever the way of representing knowledge, what seems essential is the fact that knowledge needs to be "meaningful" to the artificial expert if problem solving tasks have to be performed.
Past attempts, for instance in the Artificial Intelligence area of machine translation, failed because of lack of "understanding", on the part of the artificial translator, of the meaning of the text to be translated (N. Dehn and R. Schank, 1985).
Understanding and meaning are, clearly, in the minds of those who program and those who use the artificial expert (H. L. Dreyfus, 1972; J. R. Searle, 1987).
Nevertheless, the artificial expert, in order to "behave intelligently", has to act as if it autonomously possessed a sufficient level of "understanding."
To do so it needs to embody the capacity of Mastery.
In an expert system mastery should result in the ability to:
i) organize (classify, use taxonomies);
ii) evaluate (compare, discriminate, connect);
iii) explain (make clear the reasons underlying a certain advice/decision).
Mastery produces understanding and understanding "entails making inferences and results in the ability to make further inferences" (N. Dehn and R. Schank, 1985).
This leads straight to the subsequent component of an expert system: the Inference Engine.
ii) The Inference Engine
The knowledge acquired and represented is activated by a symbol manipulator known as the Inference Engine.
The Inference Engine is "the reasoning core of the system which infers logical conclusions from the Knowledge Base" (A. J. Meadows et alii, 1987).
Expert systems generally present a separation between the declarative knowledge (semantics) stored in the Knowledge Base and the procedural knowledge (syntactics) activated by the Inference Engine.
This is said to promote flexibility (in terms of changes) and transparency (in terms of explanations) (B. G. Buchanan & R. O. Duda, 1983).
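The separation just described can be illustrated with a toy forward-chaining engine (a hypothetical sketch; the facts and rule names are invented): the declarative knowledge sits in plain data structures, while the procedural engine merely fires whatever rule applies until nothing new can be inferred.

```python
# Declarative knowledge: facts and if-then rules, kept as data
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird"}, "has_wings"),
]

def infer(facts, rules):
    """Procedural engine: repeatedly fire any rule whose conditions
    hold until no new fact can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer(facts, rules)))
# -> ['has_feathers', 'has_wings', 'is_bird', 'lays_eggs']
```

Changing the knowledge (the data) requires no change to the engine (the code), which is precisely the flexibility the separation is claimed to promote.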
Clearly the correctness of the acquired knowledge and the effectiveness of its representation play, on the whole, a very important role, and reservations have been expressed (M. Minsky, 1987) about "the strategy of complete separation of specific knowledge from general rule of inference."
Nevertheless, what is here under focus are the capacities related to the process of making valid inferences and not the content of the statements to which the inference process is applied.
An expert system presents two kinds of inference methods based on the implementation of:
i) fixed algorithms. The expert system solves problems executing a sequence of finite formally expressed steps.
The capacity required by the system can be defined as Consistency and it amounts to a correct linking and sequencing of statements according to the rules of logic.
ii) heuristic algorithms. The expert system tries to solve problems using plausible strategies that reduce (prune away) in a selective way the number of searches to be performed (D. B. Lenat, 1982; Byte, 1987).
Heuristic algorithms are implemented when the trap of combinatorial explosion is impending (i.e. when the number of possibilities to be investigated in order to solve the problem is enormous) or when the problem itself presents highly structured formalizable variations (D. B. Lenat, 1982) not reducible to the finite steps of a fixed algorithm.
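The contrast between exhaustive search and selective pruning can be made concrete with a deliberately simple stand-in (illustrative only: binary search is, strictly, a fixed algorithm that exploits ordering, but it shows how pruning away half of the possibilities at each step escapes the brute-force count):

```python
def exhaustive_search(items, target):
    """Brute force: examine every position in turn."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def pruned_search(items, target):
    """Prune away half of the remaining search space at each step
    (requires the items to be sorted)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1000))
print(exhaustive_search(data, 900)[1])  # -> 901 steps
print(pruned_search(data, 900)[1])      # -> 10 steps
```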
The capacity required to perform a heuristic algorithm or, as Newell & Simon called it, a heuristically guided search (Haugeland, 1986) goes beyond that of formal logical consistency.
According to Rich (1986) it needs a form of rationality able to deal with a very large search space in which many or most of the paths are not productive. Creativity, he points out, is the capacity required to answer the demands of implementing heuristics (E. A. Rich in T. Bernold ed., 1986).
Creativity for an artificial expert should mean varieties of common sense at its best (rules of thumb, tricks, simplifications - E. A. Feigenbaum & J. Feldman, 1964), and this appears indispensable in many problem solving cases to replace what M. Boden (1977) calls "the weakness of brute force", i.e. the exhaustive search through an algorithmic sequence.
This brief analysis of a type of artificial expert (expert system) has led to the emergence of four capacities (memory, mastery, consistency, creativity) composing the total skill of an artificial expert.
Since artificial experts are made to resemble the "behaviour" of human experts, the same capacities are likely to reappear, with different connotations, in the analysis of a human expert's skill.
The Human Expert (^)
As previously stated, the human expert involved in the research process is seen as a knowledge engineer and a problem solver.
In order not to be unduly restrictive, this definition applies to anybody involved in engineering knowledge in the attempt of solving problems, irrespective of whether he/she succeeds or not.
What needs to be assumed as implicit in this definition is that the researcher is characterized by a system of values and personality traits that manifest themselves in an attitude and aptitude to receive from (perception) and to respond to (action) the external world (inter-change) in a way suitable for the process of knowledge engineering and problem solving to take place (Bloom et alii, 1964).
Taking as given (i.e. as an indispensable pre-requisite) this openness and appropriateness to interact with the external world, the analysis will focus on the elements (capacities) that compose the skill of making research (i.e. of implementing effective research strategies).
The capacities will be pointed out each on its own, but it must be stressed that, in practical terms, they cannot be assumed to be separate entities because, as previously said, the researcher's skill refers to the total strategy of conducting the research, and this is based on the efficient synchronic working of all (or most of) the elements (capacities) in response to all (or most of) the research requirements (demands).
On the basis of what has emerged referring to artificial experts and of the evidence offered by human experts involved in research projects, four capacities seem to be required in a research activity.
Memory
The openness towards the external world, matched by the endowment of appropriate receptors, leads to the phenomenon of perception.
Forgus (1966) has defined perception as "the process of information extraction".
The extracted information needs to become, in a selective way and to a certain extent, registered (encoded) in the human's mind.
Without this fixing/encoding, the external world would not be apperceived (K. Danziger, 1987) other than in a totally ephemeral way and the process of perception (information extraction) would start anew every time.
In other words, without memory and the related demand/outcome that memory is supposed to satisfy, i.e. recall, learning could not take place (B. J. Underwood, 1985) and neither could any kind of research activity.
Memory then appears to be the founding stone for knowledge engineering activities (acquisition, representation). The memory referred to is the capacity for long-term, structured (organized), active (recall-prone) storage of knowledge.
At the same time, this indispensable capacity for knowledge engineering, while necessary, is not sufficient, as is clear from the case of "les idiots savants" (S. Scarr and L. Carter-Saltzman, 1985), endowed with a prodigious memory but unable to go beyond reproducing what they have memorized.
Something further is required of the researcher as human knowledge engineer, and this is the mastery of the perceived/acquired symbols.
Mastery
Mastery can be seen as the capacity, by the researcher, to grasp the semantics and the syntactics of the research process, i.e. understanding the meaning(s) of symbols and the rule(s) that guide their meaningful use.
The main difference between artificial and human experts consists in the different nature and level of this mastery. In fact, while an artificial expert embodies a weak semantic and a strong syntactics capability, the human expert is supposed to show a balanced capability in mastering both the meaning(s) carried by an event (in material and/or symbolic form) and the related rule(s).
This results, as in an expert system but at a different more sophisticated level, in an ability to:
i) organize (from classifying to theory building);
ii) evaluate (from scaling to producing value systems);
iii) explain (from hypothesis formulation to the discovery of relationships).
The demands mastery should answer and the outcome it should produce ought to result in what Reynolds (1986) characterizes as:
i) a sense of understanding (about the causes of events);
ii) a potential for control (over the causes of events).
Mastery could be seen in terms of knowledge (understanding) and in terms of action (control), as required and allowed by the specificity of the case.
In other words, the role of mastery is to produce understanding and understanding is the essential pre-requisite for a problem-solving action (H. Simon, 1979).
In order to implement this link between understanding (knowledge engineering) and control (problem solving action) a further capacity needs to be expressed by the researcher.
Consistency
Consistency is referred here as a capacity composed of two complementary aspects:
i) factual consistency : correspondence between theoretical statements and factual evidence, as advocated by the correspondence theory of truth (S. Haack, 1978);
ii) logical consistency : coherence amongst theoretical statements of an argument (i.e. within premises and between premises and conclusion), as advocated by the coherence theory of truth (S. Haack, 1978).
These two aspects are both necessary to the researcher's strategy because, while factual consistency gives strength to the enunciation of an inductive argument (semantic declarative robustness), logical consistency gives validity to the deployment of a deductive argument (syntactics procedural correctness).
In order to be a problem solver the researcher needs to employ both factual and logical consistency (Dewey, 1938).
This satisfies the demands for logical reasoning and effectual acting as required, for example, by devising and implementing fixed algorithms or heuristically guided searches.
By itself, consistency, while being a guarantee of the correctness of the researcher's behaviour (thinking/acting) in attempting to solve problems, nevertheless lacks a sort of overall adequacy. Like memory in knowledge engineering, it is apt to re-produce more than produce knowledge.
For this reason, in order to display productive acting and thinking (M. Wertheimer, 1968), the researcher needs to develop a further capacity.
Creativity
Creativity can be considered as the capacity that satisfies the demands for original productive thinking and acting during the process of solving problems.
Using the semantic/syntactic analogy, creativity can be viewed under two different angles related to:
i) representation (declarative). It refers to a productive original restructuring of the problem as pointed out by the Gestalt school (insight); this produces semantically new images that are likely to be conducive to the solution of the problem.
ii) activation (procedural). It refers to a productive original search for the solution of the problem as pointed out by the information processing approach (heuristics); this produces syntactically new linkages that could form the path towards the solution of the problem.
Generally, the way the problem is represented contributes (greatly) to its solution (G. Mosconi and V. D'Urso, 1974) and the solution achieved reflects back on its representation.
For this reason, the analytical distinction between declarative and procedural does not emerge so clearly in the real process of being creative that appears more likely to be a composite intermingling of productive representation and activation of the elements of the problem.
What here matters is to stress the fact that creativity, as pointed out by many scientists (S. J. Parnes and H. F. Harding ed., 1962; A. Koestler, 1970; P. E. Vernon ed., 1970) is a necessary component of the skill of a researcher as problem solver.
Summary (^)
The aim of this chapter has been to search and point out the kind of skills needed to perform a research activity. It has emerged that the researcher needs the working of four interrelated skills (memory, mastery, consistency, creativity) to perform related tasks in order to devise and implement an (efficient) research strategy that provides answers to the demands of the research process (knowledge engineering and problem solving).
In Chapter 4, skills and tasks, which have so far only been pointed out, will be analysed in depth, especially in their aetiology (what originates them) and phenomenology (how they manifest themselves).
References (^)
- [1925] Wolfgang Köhler, The Mentality of Apes, Penguin Books, Harmondsworth, 1957
- [1938] John Dewey, Logic : The Theory of Inquiry, Holt, Rinehart and Winston, New York, 1960
- [1945] K. Duncker, On Problem Solving, in P. C. Wason and P. N. Johnson-Laird (editors), Thinking and Reasoning, Penguin Books, Harmondsworth, 1968
- [1954] (First Enlarged Edition 1959) Max Wertheimer, Productive Thinking, Tavistock Publications, London, 1968
- [1956] Benjamin S. Bloom et alii, Taxonomy of Educational Objectives, Book 1 Cognitive Domain, Longman, London, 1979
- [1962] Deobold Van Dalen, Understanding Educational Research, McGraw-Hill Book Co., New York, 1979
- [1962] S. J. Parnes and H. F. Harding (editors), A Source Book for Creative Thinking, Scribner’s sons, New York, 1962
- [1964] Arthur Koestler, The Act of Creation, Hutchinson, London, 1965
- [1964] Benjamin S. Bloom et alii, Taxonomy of Educational Objectives, Book 2 Affective Domain, Longman, London, 1973
- [1964] J. W. Getzels, Creative Thinking, Problem-Solving, and Instruction, in Theories of Learning and Instruction, The Sixty-third Yearbook of the National Society for the Study of Education, USA, University of Chicago Press, Chicago, 1964
- [1966] Ronald H. Forgus, Perception, McGraw-Hill, New York, 1966
- [1968] Allen Newell, On the Analysis of Human Problem Solving Protocols, in P. N. Johnson-Laird and P. C. Wason (editors), Thinking. Readings in Cognitive Science, Cambridge University Press, Cambridge, 1980
- [1970] P. E. Vernon (editor), Creativity (readings) Penguin, Harmondsworth, 1975
- [1971] C. West Churchman, The Design of Inquiring Systems : Basic Concepts of Systems and Organization, Basic Books, New York, 1971
- [1971] Paul Davidson Reynolds, A Primer in Theory Construction, Macmillan Publishing Co., New York, 1986
- [1972] Allen Newell and Herbert A. Simon, Human Problem Solving, Prentice Hall, Englewood Cliffs, N.J., 1972
- [1972] Hubert L. Dreyfus, What Computers Can’t Do, The MIT Press, Cambridge, Mass., 1972
- [1973] G. Mosconi e V. D'Urso (a cura), La Soluzione di Problemi, Giunti-Barbera, Firenze, 1991
- [1973] Neville Bennet, Research Design, Open University Press
- [1974] G. Mosconi e V. D'Urso, Il Farsi e il Disfarsi del Problema, Giunti-Barbera, Firenze
- [1977] Margaret Boden, Artificial Intelligence and Natural Man, The Harvester Press, Hassocks, Sussex, 1979
- [1978] G. I. Gibbs, Dictionary of Gaming, Modelling and Simulation, E. & FN Spon Ltd., London
- [1978] Harry R. Lewis and Christos H. Papadimitriou, Elements of the Theory of Computation, Prentice Hall International, London, 1978
- [1978] Susan Haack, Philosophy of Logics, Cambridge University Press, Cambridge, 1978
- [1979] Herbert Simon, Models of Thought, Yale University Press, New Haven, 1979
- [1980] A. T. Welford, The Concept of Skill and Its Application to Social Performance, in W. T. Singleton, P. Spurgeon and R. B. Stammers, The Analysis of Social Skill, Plenum Press, New York, 1980
- [1980] John R. Searle, Minds, Brains, and Programs, in John Haugeland editor, Mind Design, The MIT Press, Cambridge, Mass., 1981
- [1980] Paul D. Leedy, Practical Research : planning and design, Macmillan, London, 1980
- [1981] Edward A. Feigenbaum and Julian Feldman (editors), Computers and Thought, Robert E. Krieger Publishing, Malabar, Florida, 1981
- [1981] John Haugeland editor, Mind Design, The MIT Press, Cambridge, Mass., 1987
- [1981] John Zeisel, Inquiry by Design. Tools for Environment-Behavior Research, Cambridge University Press, Cambridge, 1985
- [1981] Marvin Minsky, A Framework for Representing Knowledge, in John Haugeland editor, Mind Design, The MIT Press, Cambridge, Mass., 1981
- [1982] Avron Barr and Edward A. Feigenbaum (editors), The Handbook of Artificial Intelligence, vol. 2, HeurisTech Press, Stanford, 1982
- [1982] Chava Frankfort Nachmias and David Nachmias, Research Methods in the Social Sciences, Edward Arnold, London
- [1982] Douglas B. Lenat, Heuretics : Theoretical and Experimental Study of Heuristic Rules, in Proceedings of the AAAI 1982, Carnegie Mellon University, Pittsburgh, 1982
- [1982] Natalie Dehn and Roger Schank, Artificial and Human Intelligence, in Robert J. Sternberg (editor), Handbook of Human Intelligence, Cambridge University Press, Cambridge, 1985
- [1982] Sandra Scarr and Louise Carter-Saltzman, Genetics and Intelligence , in Robert J. Sternberg (editor), Handbook of Human Intelligence, Cambridge University Press, Cambridge, 1985
- [1983] Roger Bennet, Management Research, ILO, Geneva, 1983
- [1984] Alain Bonnet, Artificial Intelligence. Promise and Performance, Prentice Hall, Englewood Cliffs, N. J., 1985
- [1985] Elaine A. Rich, Rationality and Creativity in Artificial Intelligence, in Thomas Bernold (editor), Expert Systems and Knowledge Engineering, North Holland, Amsterdam, 1986
- [1986] Anna Hart, Knowledge Acquisition for Expert Systems, Kogan Page, London, 1986
- [1986] Qunio Takashima, Characteristics and Technical Challenges of Current Expert Systems, in T. Bernold (editor), Expert Systems and Knowledge Engineering, North-Holland, 1986
- [1986] Stuart Gronow and Ian Scott, Learning to Place a Value on Knowledge, Expert Systems User, August 1986
- [1987] Adrian Walker, Knowledge Systems : Principles and Practice, in Adrian Walker, Michael McCord, John F. Sowa and Walter G. Wilson (editors), Knowledge Systems and Prolog, Addison Wesley, Reading, Mass., 1987
- [1987] Byte, Educational Computing, February 1987
- [1988] Denise Dellarosa, A history of thinking, in Robert J. Sternberg and Edward E Smith, The Psychology of Human Thought, Cambridge University Press, Cambridge, 1988
- [1988] Donald T. Hawkins, L. Levy and K. Montgomery, Knowledge Gateways : the Building Blocks, in "Information Processing & Management", vol. 24, n. 4