
CHAPTER TWO. SYSTEM BEHAVIOR AND THE CURRENT EDUCATIONAL PARADIGMS (continued)


Convergent Taxonomy and Social and Educational Inquiry

In the paradigm model of Chapter One, the general claim has been that there is a strong link between explanatory views of convergency and the higher levels of axioms, research and practice, and in particular, educational research and practice. This concept will be further considered from the physical science, social science and educational perspectives as related to the industrial age paradigm. The rationale for this pursuit of the convergent is to examine historical and current thinking in order to articulate implicit and hidden assumptions. While engaged in this pursuit, another question arises: to what degree has social and educational inquiry fully attended to the described categories of convergent system behavior? There is evidence to suggest that opportunities for new, unexplored approaches to educational research exist even within the convergent division, the division more readily amenable to dominant scientific procedures. As a consequence, the interactions of convergent behavior and their linkage to higher paradigm levels will be explored from two perspectives: fixed point and limit cycle or periodic frames of reference.

Fixed Point Perception

The discovery of fixed point systems led to generalizations of broad historical application:
    Given an approximate knowledge of a system's initial conditions and an understanding of natural law, one can calculate the approximate behavior of the system. The basic idea of Western science is that you don't have to take into account the falling of a leaf on some planet in another galaxy when you're trying to account for the motion of a billiard ball on a pool table on earth. Very small influences can be neglected. There's a convergence in the way things work, and arbitrarily small influences don't blow up to have arbitrarily large effects (Gleick, 1987, p.15).

These ideas of predictability, equilibrium, convergence and order grow naturally from regular macroscopic behavior: the growth of plants, the solar day, the lunar month and the orbital year. At the microscopic level, Schrödinger's quantum mechanics equations "are equally deterministic. In consequence, mid-twentieth century man could believe in the triumph of determinism at both the microscopic and macroscopic levels" (Ford, 1989, p.2).

This knowledge of the convergent nature of some dynamic systems is used in many well known ways in our culture. Optimum conditions for the growth of many plants are well known. Equations allow programs running on even an inexpensive personal computer to show the night sky from any latitude and longitude point on the planet, from any point in time, 10,000 years past or forward from the present. Space experimenters can plan years in advance for the gravitational fields of planets to appear in time to pull satellites away from earth and out of the solar system in a cosmic relay of catch and fling. The evolution of a simple pendulum can be predicted with great accuracy.

Social and Statistical Science
Many disciplines sought the advantage of such prediction and control. It is only natural that the ideas and tools that succeeded with the aforementioned phenomena were adopted by others, even to the inclusion of social philosophies (e.g., Marxism and historical determinism). Social systems wish to determine their futures, to improve them, to make them safer and better. Physicists and sociologists of the 19th century built on Newton's work with the dynamics of mass and motion and applied it to thermodynamic settings, the distribution of particles (physicists Maxwell and Boltzmann) and the "average" man (sociologist Quetelet) (Prigogine, 1983).

These ideas about a tendency towards equilibrium have been carried through into the social science of the twentieth century.

    Another paradigm, perhaps the most influential one in American social science, is the image of society associated with Talcott Parsons (1902-1979). It is known as structural functionalism in sociology and political science, and as systems theory in social work and business management. ...Functionalists argue that any social system is always moving toward a state of equilibrium. ...Functionalism has been an attractive paradigm because it confirms the scientific notion of an orderly universe, in which there is a place and a reason for every element of society. (Williamson, Karp, Dalphin & Gray, 1982, p. 23)

These assumptions about the convergent nature of dynamic systems enabled the development of the concept of central tendency and thereby the average, upon which commonly used statistical treatments rest, such as the mean, median, mode, standard deviation and bivariate analysis. Multivariate analysis, involving three or more factors, has been built upon these concepts:

    The partial correlation between variable x1 and variable x2, controlling for x3 ...conceptually can be thought of as the mean of the correlations between x1 and x2 for each of the scattergrams that would result if a separate scattergram were plotted between x1 and x2 for each value of x3. It is a measure of the average correlation between x1 and x2 when x3 is controlled. (Williamson, Karp, Dalphin & Gray, 1982, p. 399)
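
The quoted idea can be made concrete with a short computational sketch. The code below is an invented illustration rather than anything drawn from Williamson et al.; the data and the stratify-then-average procedure are assumptions chosen only to mirror the quoted description of a partial correlation as the mean of within-stratum correlations.

    from statistics import mean, pstdev

    def pearson_r(xs, ys):
        # Pearson correlation computed directly from its definition
        mx, my = mean(xs), mean(ys)
        cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (pstdev(xs) * pstdev(ys))

    # invented records of the form (x1, x2, x3), with x3 the control variable
    records = [(1, 2, 0), (2, 2, 0), (3, 4, 0), (4, 5, 0),
               (2, 1, 1), (3, 3, 1), (4, 3, 1), (5, 5, 1)]

    per_stratum = []
    for level in sorted({r[2] for r in records}):
        xs = [r[0] for r in records if r[2] == level]   # x1 within this stratum
        ys = [r[1] for r in records if r[2] == level]   # x2 within this stratum
        per_stratum.append(pearson_r(xs, ys))
        print("x3 =", level, " x1-x2 correlation:", round(per_stratum[-1], 3))

    print("average within-stratum correlation:", round(mean(per_stratum), 3))

Averaging the scattergram-by-scattergram correlations in this way presumes that each stratum exhibits the kind of stable central tendency discussed next.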

If we have an expectation that central tendency commonly exists in many systems of interest, including those systems of educational interest, we can assume that generally stable perceptions of certain relationships can be found. Relationships identified in one setting should remain similar in similar circumstances and should continue to hold in similar ways over time. Today's snapshot of a potentially valid experimental situation should produce a recognizable countenance if taken of a similar situation at another place and time. Even random events should continue to produce similar random behaviors. Such behavior provides predictability. With predictability comes the potential for control and directed improvement.

These concepts about central tendency have been a part of human thought for hundreds of years. "Probability begins with games of chance, most notably dice throwing in the 1400's, whose outcomes were abstracted to frequency charts, up through the 1600's. It is hard, though, to be more specific given the obscurity of the information" (Kendall, 1970, p.19). Note here that such conclusions were first formed from simple gravitational effects, gravity acting on an inanimate cube, a point that will be expanded on later. "...(A)lthough the instruments of gaming had existed for several thousand years, probability theory as a conceptual abstraction of the laws of chance did not come into being until the 16th century. So it is with statistics.... The true ancestors of modern statistics is not 17th century statistics but Political Arithmetic" (Kendall, 1970, p.45). In other words, "statistics originated as state - istics, an accessory to governments wanting to know how many taxable farms or military-age men their realms contain" (Moore, 1985, p.xv).

    In 1660 A.D., statistics begins in Europe with the publishing of life assurance tables, a part of the larger arena of political arithmetic. Key early works were John Graunt's Observations, 1662 and Gregory King's Observations, 1696. Thru the 18th century there were enumerative studies (i.e. Sinclair's 1793 Statistical Account of Scotland) which parallel, but not link with the demographic and actuarial science. They do not merge until the 1800's, the 19th century. (Moore, 1985, p. 45)

However, the importance of modern statistics lies not in the phenomena of counting and itemizing, but in the creation of the basis for estimations or inferences about the data (predictions about the larger population).

    All statistical analyses can be assigned to one of two broad research models, ...descriptive and inferential statistics. ...As a body of knowledge, descriptive statistics corresponds to the tip of the iceberg. It is a small, but important, part of statistical methodology. (Marascuilo & Serlin, 1988, p.12)

Two fundamental tools in the quantitative armory, the normal curve (Gaussian distribution) and probability theory, assume characteristics that have a good fit with convergent systems.

    ...Statistical inference procedures are based on the probabilities of the possible outcomes of a chance process. (p. 247) ...The key element in defining critical regions and decision rules is the proper choice of a probability distribution to approximate the distribution of the variable in question. ...It is surprising that many, if not most of the statistical tests that are used by researchers can be derived from a single theory, based upon a very specific probability distribution with a long history. This distribution is called the normal distribution. (Marascuilo & Serlin, 1988, p.248)

The normal curve is a model of convergent tendencies. It is the standard, Gaussian distribution of things, the normal distribution. When things vary, they seek a middle point and distribute themselves around this point in a reasonably smooth way. Or as it can be stated in a traditional graphic form:

                               The
                              normal
                           law of error
                        stands out in the
                      experience of mankind
                     as one of the broadest
                   generalizations of natural
                  philosophy. It serves as the
                guiding instrument in researches
            in the physical and social sciences and
           in medicine, agriculture, and engineering.
        It is an indispensable tool for the analysis and
        the interpretation of the basic data obtained by
          continual simple observation and experiment.

There are two corollaries to this law of error. First, the central limit theorem, together with the law of large numbers, implies that the more data there are, the more completely this normal distribution will be filled out. Secondly, short term changes have little or nothing to do with large, long term changes. Short term events are but the building blocks of the normal distribution falling into place in a haphazard fashion:

    Fast fluctuations come randomly. The small-scale ups and downs during a day's events are just noise, unpredictable and uninteresting. Long-term changes, however, are a different species entirely. The broad swings of change over months or years or decades are determined by deep macro-forces, the trends, forces that should in theory give way to understanding. (Gleick, 1987, p. 85)
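
The first of these corollaries can be illustrated with a brief simulation. The sketch below is an invented example, not drawn from the sources cited here, and assumes only the Python standard library: means of repeated small samples of die throws pile up around the true mean of 3.5, and the pile fills out a bell-like shape as more samples accumulate.

    import random
    import statistics

    def sample_mean(n_rolls):
        # mean of n_rolls throws of an honest six-sided die
        return statistics.mean(random.randint(1, 6) for _ in range(n_rolls))

    # 10,000 sample means, each based on 30 throws
    means = [sample_mean(30) for _ in range(10_000)]
    print("grand mean of the sample means:", round(statistics.mean(means), 3))
    print("spread of the sample means:", round(statistics.stdev(means), 3))

    # crude text histogram: counts of sample means in bins of width 0.25
    for lo in [2.75 + 0.25 * i for i in range(6)]:
        count = sum(1 for m in means if lo <= m < lo + 0.25)
        print(f"{lo:.2f}-{lo + 0.25:.2f} {'#' * (count // 200)}")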

The shift to probability from exact measurement is in part a consequence of the Heisenberg uncertainty principle. Probability theory is used to analyze random and fuzzy phenomena, to determine the relative role of chance and cause. Probability begins with the description of a uniform sample space, the set of possible outcomes. Subsets of the sample space are called events, each iteration a trial. Probability requires that the objects (events) are distinct. Also, this theory requires an assumption of statistical independence. The result of one trial must not affect the result of another (Goldberg, 1960; Marascuilo & Serlin, 1988). "This idea of independence is one of the basic building blocks of the statistical inference model to be developed in later chapters" (Marascuilo & Serlin, 1988, p.135). Probability is determined through repeated trials. "Toss a coin a large number of times and divide the number of heads that appear by the total number of tosses. If this quotient approaches a fixed limit as the number of tosses is increased, call that limit the probability of obtaining a head in one toss" (Encyclopedia Britannica, 1986, v.26, p.132). The concept of fixed limit lies at the root of the mathematical concept of probability. Many further axioms are built on these assumptions.

It should be noted that probability theory is, of course, not wedded to the Gaussian distribution or any particular distribution. An honest die should approach a fixed limit distribution of 1/6, 1/6, 1/6, 1/6, 1/6, 1/6. What these distributions all share is the concept of stability in approaching a fixed limit. This stability should provide stability in prediction. The distribution is expected to be very similar in later trials for the purpose of later analysis. The normal curve merely represents a bias of industrial age researchers, not a bias of probability theory per se.
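
As a further invented illustration of the fixed limit idea (again a sketch using only the Python standard library, not material from the sources above), the relative frequency of each face of a simulated honest die settles toward 1/6 as the number of throws grows, even though nothing about this distribution is bell shaped.

    import random
    from collections import Counter

    for n in (60, 6_000, 600_000):
        counts = Counter(random.randint(1, 6) for _ in range(n))
        freqs = [round(counts[face] / n, 3) for face in range(1, 7)]
        print(n, "throws:", freqs)

It is this stability of the limiting frequencies, not any particular curve, that underwrites the expectation that later trials will resemble earlier ones.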

The industrial age concepts of convergence and determinism are also applied to another important set of ideas previously discussed, Hegel's internal relationships, or in more modern terms: feedback, mutual causation and recursion. The essays of Strotz and Wold (1971), Fisher and Ando (1971) and Kenny (1979) serve as examples of this application. For a recursive system, "the essential property is that each relation is provided a causal interpretation in the sense of a stimulus-response relationship" (Strotz & Wold, 1971, p. 179). Strotz and Wold further note that "to assume that the values of the two variables determine each other makes sense only in an equilibrium system" (1971, p. 185). The advantage of the equilibrium assumption is that "...it enables us to predict something about equilibrium values under control" (Strotz & Wold, 1971, p. 186). Note that Strotz and Wold make this analysis based on Violato's simplest recursive model of interaction, the stimulus-response model, consisting of just two variables. A slightly more daring model, adding just two more constants, can produce startling outcomes, as was demonstrated earlier with the logistic equation.
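
The startling outcomes referred to here can be sketched in a few lines. The code below is an invented illustration of the logistic equation in its chaotic range (the growth constant r = 3.9 is an assumed value, and the sketch is not a reproduction of the earlier demonstration): two trajectories that begin almost identically soon bear no resemblance to one another, so approximate knowledge of initial conditions no longer yields approximate knowledge of later behavior.

    # logistic map: x(next) = r * x * (1 - x)
    def logistic_trajectory(x0, r=3.9, steps=40):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.500000)
    b = logistic_trajectory(0.500001)   # differs only in the sixth decimal place
    for n in (0, 10, 20, 30, 40):
        print("step", n, round(a[n], 4), round(b[n], 4),
              "gap:", round(abs(a[n] - b[n]), 4))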

Fisher and Ando push the analysis of recursion even further through consideration of secondary factors. Their work is based on econometric studies of feedback as applied to the arena of political science. The equilibrium assumption is not stated explicitly but is assumed. In this reductionist frame of reference, a:

    ...completely decomposable system ...consists of several independent systems each one of which can be analyzed without reference to any of the others. ...This would be the case if every country's borders were really closed so that there were not inter-country effects or if every member of a group had no other roles to play outside the group or if those other roles had no effect whatsoever on actions within the group. (Fisher & Ando, 1971, p.191)

Since such systems are rare if not nonexistent, proof of the general irrelevance of secondary factors is important. This proof begins with the Simon-Ando theorem for nearly decomposable systems which claims:

    Provided that inter-set dependencies are sufficiently weak relative to intra-set ones, in the short run that analysis will remain approximately valid in all respects - that is the system will behave as if it were completely decomposable.

    Now if this were all, it would not be very remarkable, for it would merely mean that if neglected influences are sufficiently weak they take a long time to matter much; however, the theorem does not stop there. ...It asserts that even when influences which have been neglected have had time to make themselves fully felt, the relative behavior of the variables within any set will be approximately the same as would have been the case had those influences never existed, and this despite the fact that the absolute behavior of the variables - their levels and rates of change - may be very different indeed. (Fisher & Ando, 1971, p.192)

In short, Fisher and Ando are making general assumptions about the sensitivity of social systems to the initial conditions of an observation period. They conclude that negligible factors can be ignored, a claim that is refuted in the study of chaotic dynamics.
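
The Simon-Ando claim can be pictured with a small numerical sketch. The linear system below is an invented example, not taken from Fisher and Ando: two pairs of variables are strongly coupled within each pair and only weakly coupled across pairs (epsilon), and over a short run the weakly coupled system tracks the completely decomposable (epsilon = 0) system almost exactly. Whether neglected couplings stay negligible in the long run, and in nonlinear systems, is precisely what the study of chaotic dynamics calls into question.

    # one step of a linear system with two blocks, (x1, x2) and (y1, y2);
    # epsilon is the weak cross-block coupling
    def step(state, epsilon):
        x1, x2, y1, y2 = state
        return (
            0.5 * x1 + 0.4 * x2 + epsilon * y1,
            0.4 * x1 + 0.5 * x2,
            0.5 * y1 + 0.4 * y2 + epsilon * x1,
            0.4 * y1 + 0.5 * y2,
        )

    coupled = decoupled = (1.0, 0.0, 0.0, 1.0)
    for _ in range(5):                      # a "short run" of five steps
        coupled = step(coupled, epsilon=0.01)
        decoupled = step(decoupled, epsilon=0.0)
    print("weakly coupled:         ", [round(v, 4) for v in coupled])
    print("completely decomposable:", [round(v, 4) for v in decoupled])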

The Ando-Fisher theorem extends the Simon-Ando theorem further to include even less decomposable systems, those whose secondary variables more clearly influence the primary variables than in nearly decomposable systems (Fisher & Ando, 1971, p. 194). This theorem also applies for both the short and the long run. "The importance of this result for the usefulness of intra-disciplinary studies in an interrelated world needs no emphasis" (Fisher & Ando, 1971, p.194). (Conversely, one could reason that if this theorem is falsified, the need for interdisciplinary curriculum and instruction becomes critical instead of unessential.)

Strotz and Wold argue that recursive systems are important, while Fisher and Ando argue for the wide application of such dynamic systems, which supports the idea in causal modeling that "...paring a theory down to a smaller number of paths involves some distortion, but if the omitted paths are small, then such distortion should be minor" (Kenny, 1979, p.262). But Kenny raises two problems with non-hierarchical (recursive) systems for applied social scientists: first, there is difficulty in finding an appropriate instrument; second, the strong assumption of equilibrium for such systems is implausible, since the necessary equilibrium requires iterative or non-recursive systems. However, Kenny encourages researchers to introduce interaction terms in order to test for error or to explore the need for the introduction of further variables.

    Unfortunately, the topic of interaction has received scant treatment in the causal modeling literature. To some degree researchers do not search for interactions because they are afraid to find them. ...The view here is more like the view of Campbell and Stanley (1963) who define an interaction as a threat to external validity. (Kenny, 1979, p. 255)

Kenny also emphasizes the importance of examining systems in equilibrium, because mistaken conclusions are easy to reach before a system has passed through its initial general damping-out stage.

To summarize, this recursive form of interaction was also generally perceived convergently, though this analysis also showed that tentative doubts about this perspective were beginning to surface, at least on the part of Kenny. Chapter three will explore the work of a number of researchers who appear to have overcome the difficulty of finding instruments for non-equilibrium systems and will re-examine the claim that weak inter-dependencies and short omitted paths can generally be ignored.

To review this analysis of physical and social science, the machine-like aspects of tools, factories, solar systems and other phenomena provide an excellent fit between assumptions and practice. Today, "(t)he systematic study of data has now infiltrated most areas of academic or practical endeavor" (Moore, 1985, p.xv). However, as this section has shown, this study of data is generally based on convergent assumptions, confirming Porter's analysis that the assumption is "that random fluctuations must in the long-run average out" (1986, p. 318). These analytical tools work well with all the previously discussed variations of convergent dynamic systems: fixed point systems, the limit cycle and the torus.

Educational Science and Counter Movements
With the incorporation of science into the study of social systems, the universal assumptions about convergence and "averaging out" (Porter, 1986) carried naturally from the development of the social sciences to the development of educational science. This section considers further the conceptual transfer from social science to education. However, the dominance of this perception in both the social sciences and education has not kept criticism of the mainstream and a counter-movement from emerging.

The mainstream.
Mainstream development of the convergent perspective has a long history before even this century, but only more recently does it connect with the social sciences. As will be further developed, though perspectives in education within curriculum and instruction splintered in this century, basic explanatory assumptions including convergency were retained in all these movements. The influence of this perspective and its assumptions is even felt by curriculum that could be perceived as remote from this view, as we will see in the example of art curriculum.

The historical roots of today's educational experimentalism can be traced at least to the renaissance of the eleventh century, with the rediscovery of the teachings of classical Greece by medieval Europe (Burke, 1985), though the case for transfer of experimentalism to social and educational issues took much longer to develop. Erickson (1986) points to the development of these roots in the last century:

    Comte, in the mid-19th century, proposed a positivist science of society, modeled after the physical sciences, in which causal relations were assumed to be analogous to those of mechanics in Newtonian physics (Comte, 1875/1968). Durkheim, Comte's pupil, may or may not have adopted the metaphor of society as a machine, but in attempting to contradict the notion that the individual is the fundamental unit of society, he argued that society must be treated as an entity in itself - a reality sui generis. Such a position can easily be interpreted as a view of society as an organism or machine. At any rate, what was central for Durkheim was not the meaning-perspectives of actors in society, but the "social facts" of their behaviors (Durkheim, 1895/1958). (Erickson, 1986, p. 124)

It is informative here to consider an example of the argument for educational experimentalism presented in the early part of this century, when its assumptions were not as readily accepted as they are today. Scholars were as anxious to move away from the fixed positions of the humanists as they were to move towards the principles of experimentalism and the scientific method. Dewey railed against "absolute philosophies which pretend that fixed and eternal truths are known by means of organs and methods that are independent of science" (1934/1964, p. 16).

In order to move away from the older foundational essentials, Dewey provides a philosophy to move towards. Dewey felt the problem sufficiently important to devote a book-length work to the issue, The Quest for Certainty (1929). The title, as accurately as any text quotation, hints at the promissory note of determinism. In this work, Dewey places experimentalism in a line of liberators of human thought. Ritual freed humankind from the terror of nature. Kant's pure reason unchained thought from the dogma of the church. Experimentalism freed thought from the rigidity of pure reason. The psychological motivation for such progress is the "...quest for a certainty which shall be absolute and unshakeable" (p.4). "The thing which concerns all of us as human beings is precisely the greatest attainable security of values in concrete existence" (p.35). The statistical tools of experimental method promised a way to deal with the fuzzier and more oscillatory nature of social convictions through experimental test.

Dewey spelled out the nature of the scientific method: forced change, measurement and prediction:

    The method of physical inquiry is to introduce some change in order to see what other change ensues; the correlation between these changes, when measured by a series of operations, constitutes the definite and desired object of knowledge. (p.84).

    ...The important thing in the history of modern knowing is the reinforcement of these active doings by means of instruments.... Among these operations should be included, of course, those which give a permanent register of what is observed and the instrumentalities of exact measurement by means of which changes are correlated with one another. (p.87) The work of Galileo was not a development, but a revolution. It marked a change from the qualitative to the quantitative or metric. (p.94)

    The chief consideration in achieving concrete security of values lies in the perfecting of methods of action.... Regulation of conditions upon which results depend is possible only by doing, yet only by doing which has intelligent direction, which takes cognizance of conditions, observes relations of sequence, and which plans and executes in the light of this knowledge. (p.36)

The use of forced change, measurement and prediction paves the way for system improvement through system control. A significant portion of educators has been drawn to this line of thought, applying the previously discussed descriptive and inferential statistics in education for this purpose.

Given that prediction and control were assumed within reach through these convergent assumptions, the curriculum movement divided on just what would be best to study and control or guide through the curriculum for what ends: the formation of the positive aspects of adult life (the social efficiency movement); the child (the developmentalists); or the elimination of negative social aspects (the social meliorists).

Kliebard (1986) sees the social efficiency movement achieve dominance over the decades:

    ...the social efficiency educators, were also imbued with the power of science, but their priorities lay with creating a coolly efficient, smoothly running society. The Rice exposes, begun in 1892, and impelled by genuine humanitarian motives, turned out to be a portent of a veritable orgy of efficiency that was to dominate American thinking.... In fact, efficiency, in later years, became the overwhelming criterion of success in curriculum matters. By applying the standardized techniques of industry to the business of schooling, waste could be eliminated and the curriculum, as seen by such later exponents of social efficiency as David Snedden and Ross Finney, could be made more directly functional to the adult life-roles that America's future citizens would occupy. People had to be controlled for their own good, but especially for the good of society as a whole. (p. 28)

A world view that proved somewhat successful with engineers and physicists proved less successful with educational systems. Problems and challenges arose from within the educational culture of the social sciences, from art, and from within the natural sciences.

In Kliebard's analysis of educational curriculum history from the 1890's through the 1950's, two other movements chose to confront the social efficiency viewpoint. "Each represents a different conception of what knowledge should be embodied in the curriculum and to what ends the curriculum should be directed" (Kliebard, 1986, p.27). Their basic research and practice spread in opposite directions on the social-educational system scale. The developmentalists chose a finer scale of reference, the social meliorists the larger scale:

    Hall and the others in the child-study movement led the drive for a curriculum reformed along the lines of a natural order of development in the child. Although frequently infused with romantic ideas about childhood, the developmentalists pursued with great dedication their sense that the curriculum riddle could be solved with ever more accurate scientific data, not only with respect to the different stages of child and adolescent development, but on the nature of learning. From such knowledge, a curriculum in harmony with the child's real interest, needs and learning patterns could be derived.... (p.27)

The social meliorists formed the third movement:

    They saw the schools as the major, perhaps the principal, force for social change and social justice. The corruption and vice in the cities, the inequalities of race and gender, and the abuse of privilege and power could all be addressed by a curriculum that focused directly on those very issues, thereby raising a new generation equipped to deal effectively with those abuses. Change was not, as the Social Darwinists proclaimed, the inevitable consequence of forces beyond our control; the power to change things for the better lay in our hands and in the social institutions that we create. (p. 27)

Their approaches came from different scales of reference. The social meliorists or social reconstructionists took the larger socio-cultural perspective. The developmentalists took the personal route of the individual. These basic research establishments, bounded by their chosen frames of reference, sought to attack and improve the basic school model of social efficiency. However, the result, says Kliebard, was an uneasy truce, a detente between the basic research positions. That the end result is a truce instead of major change can be attributed in part to the common acceptance of foundational assumptions that are also tied to the industrial age: reductionism, mechanism and determinism. It can also be attributed in part to the deeper common assumptions about the convergent nature of system behavior shared by these three educational movements.

Kliebard's conclusions about the educational system through the 1950's are still being reached in the 1980's. Barger's critique of the educational reform reports, including "A Nation at Risk", is that the reports are themselves a part of the problem. They call for a redoubling of educational effort within the present educational context without advocating any essential changes in educational structure, content, or methodology. Most of these reform studies have recommended measures that would merely reinforce the status quo (Barger, 1987).

In addition to the aforementioned instructional concerns, Kliebard's analysis of educational movements makes it clear that the curriculum itself is informed by and contributes to the industrial age perspective, both through its presentation by an instructional system deeply embedded in this perspective and through the nature of the ideas presented in the curriculum. However, if one envisions a curriculum continuum anchored on the left by art (e.g., painting, music, dance) and on the right by science (physics, chemistry, biology), with history and social studies somewhere in the middle, one might suspect that the hold this perspective maintains on the curriculum is not uniform, not as all-encompassing as the previous discussion might suggest. A first-glance surmise might be that students experience ideas across this continuum of curriculum subjects during their tenure in American schools, and that this breadth would leaven the industrial age perspective with counterpoint to its theme.

Deeper reflection on this continuum raises several points that need further consideration. The first point concerns a phenomenon recognized as the "two-cultures" discussion. In the two-cultures analysis, there are two broad areas of thought, art and science, that have few if any communication lines between them. If there is some truth to this analysis, then there would be little cross-reference between art and science curriculum content. By art, I refer not just to the physical tasks commonly associated with art education such as drawing, painting and so on, but also to conceptual issues such as the development of aesthetic appreciation and the education of intuition. Secondly, the market-share or curriculum-share of art content in education should be weighed. Finally, the content of art, art curriculum development and their relationship to this industrial paradigm should be noted.

Such discussion should begin with science curriculum itself. There is little question that the elementary and secondary curriculum of science by definition supports reductionist, mechanical and deterministic frames of reference integrated with the scientific method. Regarding the area of art, based on personal experience in observing and teaching art and science at the elementary level and some exposure to its instruction at the secondary level, I conclude that there is little cross-reference in the curriculum between the ends of this continuum at the level of practice, but I am not aware of studies that would further document this claim. As to the second point, the arena of thought represented by art curriculum certainly declines in influence when the number of art teachers and time devoted to art is significantly eroded, both trends noted disparagingly by Eisner. "On the average, elementary school teachers devote about 4% of school time each week to instruction in the fine arts. And this time is not prime time... for the fine arts, Friday afternoons are very popular" (Eisner, 1985b, p.202).

The third point regarding the degree of influence by the industrial age paradigm on art curriculum development and art education research is perhaps best made by a publication in art education that aims to be a "state of the art assessment of some of the major recent developments" (Farley & Neperud, 1988, p.vii). In seeking to regain greater influence with curriculum gate-keepers, the authors seek to both identify with and yet to some extent provide criticism of the industrial paradigm, furthering the detente previously noted by Kliebard. It is important here to note their effort to find and highlight deterministic and convergent features of art education through the introduction of recent developments in cognitive science. "...(T)he possible influences of some of these developments on education in the arts is sorely needed, as art and aesthetic education needs the elucidation of lawful principles and guidelines for effective pedagogy if it is to establish a strong position in the contemporary curriculum" (p.vii).

This influence however extends beyond art education research and curriculum to the practice of art itself. As Vitz (1988) notes, the entire movement of modern art has close ties to this industrial paradigm perspective. Modern art begins with "Manet and Impressionism and proceeding through Post-Impressionism, Cubism, and the severe abstractions of Malevich, Mondrian, and others, until it culminates in minimal and conceptual art" (Vitz, 1988). The logic of science pervades and inspires much of modernist painting. In turn, art theorists and avant-garde artists heavily criticize modern art and its ties to the industrial age paradigm (Morawski, 1988; Vitz, 1988).

So, art education plays a relatively small and perhaps declining curriculum role. In practice, the more dominant subjects give little emphasis to artistic ways of thinking. Further, much of the framework for understanding the art ideas that are taught is heavily influenced by the scientific method paradigm. These considerations lead to the conclusion that significant aspects of art, art research and art education, and thereby the entire curriculum, are also dominated by the industrial age perspective.

In summary of the convergent industrial age perspective, this generally fixed range of system behavior undergirds the assumptions of reductionism, mechanism and determinism. These assumptions proved sufficient to generate a wide range of statistical tools useful to the experimental approach. These tools in turn provided a rich variety of machines and concepts that built twentieth century culture and its educational system. The hope of greater prediction and control has led to a strong belief in the ability of humankind to administer direction to the solution of its problems.

Counter-movement and negation.
The educational counter-movement critiques all levels of the industrial age paradigm: the explanatory, research and practice levels. This criticism has served as the deeper infrastructure for the development of the reconceptualists introduced in chapter one.

As to the explanatory level, a number of challenges have arisen relative to foundational concerns. Of further interest is that the challenges presented here arise not from "soft-headed" philosophy departments, but from mathematics and physics, the core of the hard sciences. The ultimate defense (Foss and Rothenberg, 1987) of the practices of the scientific method rested with what has been called the logical possibility argument (Nagel, 1961). This argument considers it an elementary blunder to confuse what cannot be explained with what has yet to be explained. From the deterministic standpoint, it is only logical that there always remains the possibility of new data and new analytical techniques that will enable researchers in this century or the next to gain control of a system that eludes them. Centuries of scientific research confirm this inference. As a consequence, abnormalities and singularities become merely unsettled issues within the paradigm's research agenda. In other words, routes out of the paradigm are turned inward, providing a self-reinforcing mechanism that turns possible challenges to the paradigm into unresolved parts of the paradigm's agenda.

Two other principles of early twentieth century mathematics and physics provide some challenge to this basic industrial age defense. The discovery of non-Euclidean geometry began the challenge in mathematics. Later, Godel's triumph as a mathematician was to show in a generalizable way that no system of mathematics was complete (Hofstadter, 1979). Every explanatory system has blind spots and weaknesses. "(After Godel's First Incompleteness Theorem.) Any normal, stable consistent ...system must be incomplete" (Smullyan, 1987, p. 253). The second incompleteness theorem states that "any consistent mathematical system with enough power to do what is known as elementary arithmetic must suffer from the surprising limitation that it can never prove its own consistency!" (Smullyan, 1987, p.xi) That the scientific method (as a broadly conceived mathematical system) implies through the logical possibility argument that it is superior to other methods leaves it open to attack by "Godel's Ax", his theorems about explanatory systems. Of course, this challenge does not solve the problem by showing the way out. It only says that any stable consistent system (e.g., the scientific method) must be incomplete and unable to prove its own consistency. In other words, if the scientific method provides a good explanatory system, there must be frames of reference it cannot explain, frames that require other frames of reference. But what are they? Naturally, Godel's theorems propose no answer for educational forays into curriculum and instruction.

Physics introduces another challenge. Foss and Rothenberg note the physicists' discovery of the critical importance of the subjective observer at the quantum level. Thoughts in the mind have become, in part, the cause. The thrust of their case is supplied by Morowitz:

    By combining the positions of Sagan, Crick, and Wigner as spokesmen for various outlooks, we get a picture of the whole that is quite unexpected. First, the human mind, including consciousness and reflective thought, can be explained by activities of the central nervous system, which, in turn, can be reduced to the biological structure and function of that physiological system (Sagan). Second, biological phenomena at all levels can be totally understood in terms of atomic physics, that is, through the action and interaction of the component atoms of carbon, nitrogen, oxygen, and so forth (Crick). Third and last, atomic physics, which is now understood most fully by means of quantum mechanics, must be formulated with the mind as a primitive component of the system (Wigner).

    We have thus, in separate steps, gone around an epistemological circle - from the mind, back to the mind.... (1980, p. 16).

This argument is interesting in that it further destabilizes the presumed objectivity of the old educational paradigm. However, this criticism does not indicate that the use of deterministic analysis has limitations other than limitations of external validation.

Both of these arguments, for various reasons, have proved insufficient individually or collectively to generate a wholesale Kuhnian paradigm shift, though their citation by reconceptualists indicates they have contributed to new paradigm formation. As with previous criticism of the industrial age paradigm, they do not directly confront the belief in the convergent nature of systems or the assumption that humans can in principle track deterministic systems. Godel's theorems would indicate that the scientific method must have limitations but cannot indicate what they are, while substantiating the subjective loop of physics fails to show how subjectivity itself inherently flaws scientific creation and understanding, though it does question the grounds for scientific validation. However, their potential at this theoretical level is to falsify, not to create, and as Kuhn (1970) notes, criticism in and of itself is insufficient to force a conceptual shift.

At the paradigm level three of research (figure 1-2), criticism is equally vocal and broad in range. Mathematicians raise concerns about the unprovability of the behavioral sciences (Davis, 1987). Art theorists (Eisner, 1985a) and physical scientists (Ziman, 1978) both question the possibility of social scientists ever meeting the criteria for scientific investigation. Scientific philosophers debate the inviolability of scientific methodology (Feyerabend, 1975).

This broadening criticism raised the stress on the industrial age paradigm and encouraged ad hoc modifications and the search for successors, but it did not reverse long-standing traditions in education as reflected by the published educational literature:

    Causal research is the dominant mode of inquiry in education, where experimental approaches are viewed as the most praiseworthy of methods. Indeed, one can hardly find an article in the American Educational Research Journal which does not employ some form of experimental design. Through their preoccupation with causal methods, researchers have narrowed the range of meaningful questions that are addressed in most educational studies. Only causal questions tend to be asked since only causal questions match the methodological predilections of most educational researchers. There are, of course, large numbers of questions which are not causal and which cannot be answered with experimental methods. (Smith, 1981, p.23)

(But it is one thing to point to items which appear to be non-causal and another to confront the nature of causality itself.) There is evidence that this bias in the selection of research approaches continues (Eisner, 1985a). Of course different fields take interest in new research directions at different rates, but in an area in which I take special interest, educational computing, the last four years of the Journal of Educational Computing Research still show causal research to be by far the dominant mode of published research with experimental studies predominating.

As previously discussed, there are, however, social scientists who have raised concerns about the nature of causality and the possible complications within interaction for research in education. These ideas deserve further emphasis:

    Generalizations decay. At one time a conclusion describes the situation rather well, at a later time it accounts for rather little variance, and ultimately is valid only as history. The half-life of an empirical proposition may be great or small. The more open the system, the shorter the half-life of relations within are likely to be. (Cronbach, 1975, p.22)

Cronbach's (1975) concern about a "half-life" for educational research generalizations and other ideas are often cited by the reconceptualists (e.g., Lincoln and Guba, 1985).

Of equal relevance is Cronbach's affirmation of Krathwohl's insistence (1985) on two stages of science, the creative stage and the confirmatory stage, both concepts that were incorporated into the distinctions between the two paradigms' research stages as described in chapter one. Cronbach's explanation for the dominance of one over the other points to the absence of an important theoretical development. Cronbach concludes that social scientists have:

    ...concentrated on formal tests of hypotheses - confirmatory studies - despite the fact that R. A. Fisher, the prime theorist of experimental design, demonstrated over and over again in his agricultural investigations that inquiry works back and forth between the heuristic and the confirmatory. But since he could offer a formal theory only for the confirmatory studies, that part came to be taken as a whole. (Cronbach, 1982, pp. ix-x)

A more compelling model of interaction and research would then require a formal theory for the heuristic and the half-life nature of studies.

The reconceptualists (discussed earlier as represented by Popkewitz and Lincoln and Guba) have sought to progress beyond the intense criticism and move a set of views from the margin to the mainstream of educational research. Popkewitz's view (1989) supports a co-existence with the old paradigm or at least its post-positivist reincarnation. Lincoln (1989) argues for a replacement and an abandonment of their prime antagonist. Yet both writers have similar views of interaction and determinism, and though I discussed basic problems with their positions on this issue in chapter one, further amplification is required.

Popkewitz on the whole appears to say, as with Cronbach, that generalizations decay and become history and have no positive predictive value. "...I can find no evidence that social science has anything to say qua science [his emphasis] about the future...." (1989, p.18). He views the nature of interaction as turbulent, "...a history which is dynamic and social" (1989, p.5), and dissonant, "...poking holes in the causality that confronts us daily" (p.5). Yet he cannot completely bury causal predictive perception, noting his concern that "...deep description of interactions can also provide new methods of supervision, observation and control of individuals...." (1989, p.17). One could reject his analysis as contradictory, and risk a naive view. I choose to suspend disbelief and approach the observations as one would a zen koan, seeking a deeper resolution of this honest tension in later chapters.

Lincoln (1989) conceives of interaction in her alternative naturalist paradigm as turbulent, with "...uncertainty, flux, and transformation as hallmarks of the paradigm itself..." (Lincoln, 1989, p.7). She also insists that the empiricists must give up their rigor, their need for concrete orderly knowledge, for certainty will never be possible. But she also carries her definitions of the naturalist paradigm back into rigor, and perhaps rigidity as well. "The immediate realization is that accommodation between the paradigms is impossible" (1989, p.24). Intra-paradigm perception apparently also requires convergence and uniformity. "The thoroughly universal [her emphasis] nature of any paradigm eventually forces the choice between one camp or another. The intra-psychic need for coherence, order and logic demands that an individual behave in ways which are congruent and non-conflicting" (p. 25). Within this self-contradictory totalitarian view, there would seem little incentive for those pinnacles of intellect - creativity, invention or innovation. Fortunately, there is a rigorous analysis with which to falsify and thereby remove from her vision the perceived universality, Godel's Ax. That is, as previously discussed, consistent (read universal) systems can never prove their own consistency and must be incomplete; a system that was truly universal, in the sense of complete, would have to be inconsistent. The intra-psychic need for coherence may be logically refutable but, even for the reconceptualists, it carries significant staying power.

At the practice level, level four of the paradigm, assumptions and philosophies in turn shape the practice of systems that adopt them so that such assumptions can be better applied. The reductionist values that accompany the industrial age position make it further logical to apply this methodology to ever finer units on the system scale when solutions are not satisfactory. At the level of educational practice, this is reflected in the structure of today's schools:

    The dominant metaphor for today's education is the Newtonian Machine. ...In this sense, the education system (school) comes complete with production goals (desired end states); objectives (precise intermediate end states); raw material (children); a physical plant (school building); a 13-stage assembly line (grades K-12); directives for each stage (curriculum guides); processes for each stage (instruction); managers for each stage (teachers); plant supervisors (principals); trouble shooters (consultants, diagnosticians); quality control mechanisms (discipline, rules, lock-step progress through stages, conformity); interchangeability of parts (teacher-proof curriculum, 25 students per processing unit, equality of treatment); uniform criteria for all (standardized testing interpreted on the normal curve); and basic product available in several lines of trim (academic, vocational, business, general). (Sawada and Caley, 1985, p.15)

In other words, the end result of a theory of system behavior that assumes convergency is a system of curriculum and instruction which expects and works to produce students who average out or converge towards scientifically determined norms. It is this system which the many national reports have so heavily criticized, yet, as Barger (1987) notes, the reports basically insist on doing more of what schools are now doing instead of advocating more radical change. This appears to be a part of a larger perspective which finds that deterministic convergence applied to education reduces variance and along with it retards risk, innovation and creativity.

That such criticism would emerge from the field of instructional design and educational technology, a field dominated by deterministic top-down "prescriptions" (Reigeluth, 1989, p.79), is all the more significant. For example, Streibel concludes that both computer technology (1986) and the field of instructional design in general (1989) reduce variance and thereby retard learning. Curiously, Streibel's position that "technology determines" could actually be used to support belief in the ability of a central system to exercise top-down control through technological concepts and computer devices, and it reinforces a view of human beings as mindless drudges accepting the dictates of the software they use, a position that seems contrary to his real view.

Streibel's case against computer technology (1986) will be analyzed first. He discusses how the three major approaches to the use of the computer in education (drill and practice, tutorial, and simulation and programming) rely on not just another delivery system, but a system that "...has certain values and biases associated with it" (p.138), inherent convergent deterministic biases which decrease rather than increase the learner's personal intellectual agency, and "...delegitimize non-technological ways of learning and thinking about the world" (p.158).

It is important to emphasize that these criticisms apply not only to the software categories he criticizes, but to the hardware itself:

    The general question therefore becomes whether the student who controls the computer can go beyond the technological framework of the computer (the values associated with the computer and the symbol systems that can be manipulated by the computer). My answer will proceed as follows: tools tend to insist that they be used in certain ways and intellectual tools tend to define the user's mental landscape. (p. 153)

He finds further support for this idea in both Bruner (1975) and Greene (1978) and to their work I would add the broad perspective of Mumford (1967).

It is important, however, to separate the culturally determined common use of the tool from any inherent nature of the tool itself. Doing so reveals a basic problem with both his software determinism and his tool determinism. Evidence against such determinism can be found in the study of innovation and its more recent focus on the concept of re-invention, "the degree to which an innovation is changed or modified by a user in the process of its adoption and implementation" (Rogers, 1983, p. 175).

Beginning with the first scholars to acknowledge the occurrence of re-invention, Charters and Pellegrin (1972), the concept that innovation adoption is relatively invariant has been called into serious question. Re-invention, to varying degrees, from educational innovations to computer planning tools, is not only now seen as common, but is found difficult to prevent, requiring "re-invention proofing" on the part of some designers who claim concerns about quality control. This has a certain resonance with the now vulgar concept of "teacher-proofing" a curriculum. Rogers then concludes that there is a strong psychological need to re-invent, partly explainable in that it is more fun, challenging and creative than a simple clone or transfer.

We can then in turn re-examine Streibel's software categories and consider the degree to which each class is open to re-purposing and design changes. With regard to re-invention, his class of simulation and programming represents an extreme in opportunity for re-invention, while the closed implementation of the other categories does generally support his criticism of them. The development of chaos theory must represent the ultimate falsification of tool determinism, chaos theory being a breakthrough concept in the study of complexity. Its development appears to have required the invention of computer technology in order to reveal conditions opposite to the perception of the designers of computing: that deterministic systems can deny long term prediction and control. For example, von Neumann, one of the principal architects of the computer, was after more than weather prediction; he wanted weather control and sought to use his creation to that end (Gleick, 1987).

I turn now to Streibel's case (1989) against general facets of the field of instructional design that relate to themes considered thus far. This field is dominated by the frame of cognitive science, which experiments with thinking from an information processing perspective and builds on the concept of plan. In the cognitivist paradigm:

    ...interaction is restricted to the physical science concept of "reciprocal action or influence." A human learner who wants to work within an instructional system therefore has to assume the ontology of a machine for themselves in order to "learn" from the machine. (Streibel, 1989, p. 7)

A number of the discussed characteristics of the industrial age paradigm are again visible in his analysis, interaction in Hegel's external sense, reductionism, mechanism and determinism. Streibel's criticism is that the cognitive scientists are in conflict with the "life-world" of the learner and in conflict with the reality of human learning. In short, an "...instructional designer can therefore not [his emphasis] predict which aspect of the instructional plan or which feature of the instructional system will be interpreted by the learner as a learning event, and so cannot design a plan for developing learning strategies" (1989, p.17).

Not considered in Streibel's analysis are human will, purposing and goal-setting situations in which the learner can be clear about what is needed and can communicate this to an instructor. But if stateable purpose and followable advice exist and represent the successes of instructional design, what explanation is there for Cronbach's (1975) decay in successful educational generalizations? This query will be continued in the next chapter. However, nothing in this critique of Streibel's analysis should be construed as denying that cultural perspective forms or determines our cultural practice, in Hansen's sense of theory ladenness. That is, the problem is within us, not necessarily in tools that are open to re-purposing.

Curiously, these critics and their criticism within the levels of explanation, research and practice appear to seldom reference the points of view of the other identified levels of the paradigm model (figure 1-1); that is, they have generally focused on discourse bounded by particular levels of the paradigm shell, befitting the divisions within their academic disciplines. Viewed in this broader perspective, their combined criticism provides further support for their individual claims.

In summation of this look at assumptions about interaction in education, the convergent perspective, and even more narrowly, the fixed point division of our interaction taxonomy, has had a profound influence on our dominant educational paradigm. In addition, it is generating significant criticism. But before concluding this chapter, another area of the convergent taxonomy deserves a second look.

Limit Cycle and Torus Perception

It is now of some interest to compare the general perspective of the system behavior of our industrial age paradigm for educational science with the divisions of the convergent branch of the taxonomy described earlier: fixed point, limit cycle and torus. My conclusion is that if, in this thought experiment, one sorted the varieties of educational behavior into these three boxes, the latter two would be relatively empty in comparison with general fixed point phenomena. In contrast, physics, chemistry and biology would all have a number of interesting specimens in these latter two sections. I find this curious given the importance of such behavior in these other disciplines. Consideration of the cause of this development in education may open the door to a new area for research emphasis. This point will not be pursued in depth; instead, a trail will be left that may give others incentive to explore further. This trail will consist of a list of search terms related to periodic phenomena, a terse sample of citation types found in the ERIC database for the full years 1983 to 1988, and some thoughts on the lack of work in this area and the tools necessary to deal with periodicity.

First, however, further explanation is in order regarding the idea that educational behavior is primarily of the fixed point variety. By this I do not mean that there are many precisely defined points that educational phenomena have been observed to reach. Instead, these phenomena are observed to hover or gather around a point with some general distribution, often the normal distribution. The question raised here is to what degree some of the other patterns, such as the limit cycle (periodic) or the torus (the result of interacting periodic systems), have been regularly observed, discussed and dealt with in curriculum and instruction.
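
As a purely illustrative aside, the distinction among these three convergent patterns can be made concrete with a small numerical sketch. The systems below (a damped oscillator, a van der Pol oscillator and a pair of incommensurate periodic motions) are standard textbook examples chosen only to show what fixed point, limit cycle and torus behavior look like; none of the equations or parameters is drawn from educational data.

    # A minimal sketch of the three convergent regimes discussed above.
    # All systems, parameters and step sizes are illustrative assumptions,
    # not models of any educational phenomenon.
    import numpy as np

    dt, steps = 0.01, 20000

    # 1. Fixed point: a damped oscillator, dx/dt = v, dv/dt = -x - 0.5*v,
    #    spirals in toward the single equilibrium point (0, 0).
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - 0.5 * v)
    print("fixed point: state ends near", round(x, 4), round(v, 4))

    # 2. Limit cycle: the van der Pol oscillator, dv/dt = mu*(1 - x**2)*v - x,
    #    forgets its starting point and settles onto one closed periodic orbit.
    x, v, mu = 0.1, 0.0, 1.0
    radii = []
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (mu * (1 - x ** 2) * v - x)
        radii.append(np.hypot(x, v))
    print("limit cycle: orbit radius oscillates between",
          round(min(radii[-2000:]), 2), "and", round(max(radii[-2000:]), 2))

    # 3. Torus: two independent periodic motions with incommensurate
    #    frequencies (1 and sqrt(2)); the combined state winds around a
    #    torus, never exactly repeating yet never diverging.
    t = np.arange(steps) * dt
    theta1 = (1.0 * t) % (2 * np.pi)
    theta2 = (np.sqrt(2) * t) % (2 * np.pi)
    print("torus: two phases at final step:", round(theta1[-1], 2), round(theta2[-1], 2))

In the vocabulary of the taxonomy, the first system forgets its history entirely, the second remembers only a single repeating orbit, and the third weaves two periods together without ever diverging.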

The gathered search terms are drawn from a wide range of scientific literature, but especially from biology, where, as Rapp (1987) noted and as was discussed earlier, extensive work has been done on periodic phenomena in the human system. A number of the terms used in other sciences did not produce any hits (citations) even when searching every field of the record (free text searching): entrainment, phase locking, tuning parameters, limit cycle or circannual. Entrainment, synchronization, phase locking and tuning parameters refer to phenomena in which the individual periods of the members of a system grow closer together over time. An example noted in the educational literature is the convergence of menstrual cycles among women working and living closely together for some length of time. There was no indication in the few hits (citations) obtained that anyone had investigated a correlation between cognition (learning/thinking) and synchronization. There were hundreds of hits on variations of the following terms: periodic, rhythmic, cycle, cyclic, oscillation and frequency. But extended sampling turned up very few citations studying a linkage to cognition; most concerned content areas such as rhythmic music or sound frequency studies in physics.
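
For readers unfamiliar with entrainment, a minimal numerical sketch may help show what "periods growing closer together" means. The coupled-oscillator form used below is the standard Kuramoto coupling; the frequencies and coupling strength are arbitrary illustrative assumptions, not values taken from any of the cited studies.

    # A minimal sketch of entrainment (phase locking) between two periodic
    # processes, using the standard Kuramoto coupling form.
    import numpy as np

    def phase_difference(coupling, w1=1.00, w2=1.15, dt=0.01, steps=50000):
        """Integrate two coupled oscillators and return how much their
        phase difference still drifts over the final quarter of the run."""
        th1, th2 = 0.0, 2.0
        history = []
        for _ in range(steps):
            d1 = w1 + coupling * np.sin(th2 - th1)
            d2 = w2 + coupling * np.sin(th1 - th2)
            th1, th2 = th1 + dt * d1, th2 + dt * d2
            history.append(th2 - th1)
        tail = history[-steps // 4:]
        return max(tail) - min(tail)

    # Uncoupled: the phase difference keeps growing (no entrainment).
    print("drift with no coupling:  ", round(phase_difference(0.0), 3))
    # Coupled: the difference locks to a nearly constant value (entrainment).
    print("drift with coupling 0.2: ", round(phase_difference(0.2), 3))

Run as written, the uncoupled pair shows a steadily growing phase difference, while the coupled pair locks to a nearly constant difference; it is this locking that the terms entrainment, synchronization and phase locking denote.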

The citations collected noted periodic behavior, but usually in qualitative terms. The periodicity found referred to textbook cycles, learning cycles, communication, suicide, menstruation, behavior, biorhythms and life cycles, with the most hits (5) referencing circadian rhythms, from those studying circadian thermal cycles as related to stress in schools.

There are a number of unconfirmed possibilities for the general quantitative inattention to this area. The result may stem from a real absence of periodicity in educational data sets. Or, with Hansen and his theory-ladenness, we might decide that researchers have not found what they have not suspected to be of interest. That is, they simply have not collected data in a fashion that would reveal periodicity, or examined what they had from this perception. Because of this lack of suspicion, the tools required for confirmation and falsification, such as adequate time series techniques, may not be sufficiently developed. Further, educational systems and their phenomena may not have been given the freedom, or been driven, to explore their entire range of possibility sufficiently to reveal such behavior, an exploration unlikely when converging patterns are assumed and preferred.

I regard the Hansen perspective as worth further consideration. One needs to examine how often time series and longitudinal studies are conducted that generate a large number of data points with a measurement capacity allowing for considerable variance, so that periodicity could be revealed. But there is another aspect of dynamic systems that prevents clues and hints of other behavior from arising. The problem is simply the nature of phase transitions. That is, as the stress or energy level of a dynamic system increases, behavior in the old phase bears little if any resemblance to the outcome of interactions in the new phase. For example, if nature had not frequently provided us with the variance to reveal the full range of behavior, what is there in the behavior of water that would lead one to suspect that at a certain lower temperature the liquid would not only turn into a solid but expand as well? This point about phase transition is repeatedly emphasized in the engineering literature, especially with the incorporation of chaotic dynamics into engineering models (Thompson & Stewart, 1986). Systems must be subjected to a wide range of stresses to allow engineers to deal with the surprises that may be in store for them. Therefore, it may simply be that classroom management and instruction dampen and prevent periodic phenomena from occurring. Further, it must be noted that the discovery of dynamic phenomena in chemistry and biology which self-organize and achieve periodic behavior is relatively recent, and that it followed the conceptualization of such phenomena in physics (Rapp, 1987).
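
To suggest what an "adequate time series technique" might look like in practice, the following sketch applies a simple periodogram (the squared magnitude of the discrete Fourier transform) to a synthetic daily series in which a weekly cycle is buried in noise. The series, its period and its noise level are invented solely for illustration; real longitudinal educational data would have to be substituted.

    # A minimal sketch of a time-series check for hidden periodicity.
    # The "scores" series is synthetic: a weekly (7-day) cycle buried in
    # noise stands in for a hypothetical daily classroom measure.
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(180)                      # one synthetic semester of daily data
    scores = 70 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, days.size)

    # Periodogram: power of each frequency in the de-meaned series.
    power = np.abs(np.fft.rfft(scores - scores.mean())) ** 2
    freqs = np.fft.rfftfreq(days.size, d=1.0)   # cycles per day

    peak = freqs[np.argmax(power[1:]) + 1]      # skip the zero-frequency bin
    print("strongest cycle is about every", round(1 / peak, 1), "days")

If educational measurements were collected at regular intervals and in sufficient numbers, a peak in such a periodogram standing well above the noise floor would be one way that otherwise hidden limit cycle behavior could announce itself.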

Consequently, the fact that a review of articles linked by keyword to concepts such as period, cycle and oscillation in the educational literature fails to turn up relevant educational research is not necessarily a sign that little further work is required. Another observation should also be noted. The work done on cyclic phenomena in other disciplines was an important springboard to their work on divergent phenomena as well, for it developed the tools and techniques that enabled discovery and experiment with chaotic phenomena. This may indicate that, to exploit the ideas of chapter three on chaotic dynamics in education, the development of tools and concepts in the convergent area of periodicity may be an important foundational step. Clearly, these thoughts are speculative and require further confirmation, but the unexpected discovery of an underdeveloped range of educational behavior that is open to the prediction and control of the convergent branch of the taxonomy would be of considerable interest and importance.

Overview

These criticisms of education's industrial age paradigm from educational, artistic and scientific perspectives do not in and of themselves make the shape of the new paradigm obvious, but they do point to substantial cracks in the old edifice.

At this demarcation at the end of chapter two, it may be helpful to consider our position in the overall trail of thought of this thesis. The prior chapter indicated the existence of a dominant paradigm and its challengers, and noted that the educational literature lacked a key idea or ideas that might integrate and organize the challengers into a more compelling alternative. This chapter has shown that a well understood range of classifications exists for system behavior which is convergent, and that this convergency is intimately tied to deterministic assumptions. It has shown that all levels of the industrial age paradigm of education are linked to positive expectations about convergent behavior, and that these levels are linked to the paradigm levels of educational practice and research. It has also indicated that challenges of varying strength to the industrial paradigm exist and that a standard ad hoc modification to the paradigm exists in response to those challenges.

New considerations are in order. What if a direct challenge to the core assumption of the industrial age paradigm exists, one that would tackle the falsification of convergency not on philosophical grounds but on scientific, deterministic grounds? Would this be argument enough to complete a paradigm shift? The next chapter reveals systems whose inherent nature is the opposite of the converging behavior that this chapter has shown to be an integral part of the industrial age paradigm. These systems diverge. They are common. They change the meaning of our expectation of prediction. They provide a more significant challenge to the standard defense of the industrial age paradigm, the logical possibility argument. And they amplify the prior challenges presented in this chapter.

