
As we have explored throughout this section, models and theories have informal internal components, and there are distinct modes of modeling and theorizing. Any synchronic analysis of the structure of models and theories must respect intentional diachronic processes of interpreting and using, as we shall now see. This account is thoroughly intentionalist and anti-essentialist. That is, mapping relations between model and world are left open and overdetermined. Specifying the relevant relations depends on contextual factors such as the questions asked, and the kinds of similarities and isomorphisms deemed to be of interest.

The appropriate relations are selected from an infinite, or at least near-infinite, variety of possible relations. Regarding practice (thesis 5), in addition to the ample work on the experimental life mentioned above, consider a small example: the London brothers' model of superconductivity. They too saw the diamagnetic analogy as central to their theoretical practices. Criteria and practices of theory confirmation also differ from the ones typical of the Syntactic and Semantic Views.

While predictive and explanatory power as well as empirical adequacy remain important, the Pragmatic View also insists on a variety of other justificatory criteria, including pragmatic virtues (sensu Kuhn; Longino) such as fruitfulness and utility. In a nutshell, the Pragmatic View argues that scientific theory structure is deeply shaped and constrained by functions and practices, and that theory can be interpreted and applied validly according to many different criteria.

The analytical framework of the Pragmatic View remains under construction. The emphasis is on internal diversity, and on the external pluralism of models and theories, of modeling and theorizing, and of philosophical analyses of scientific theories. The Pragmatic View acknowledges that scientists use and need different kinds of theories for a variety of purposes. There is no one-size-fits-all structure of scientific theories.

A further qualification, in addition to the one above regarding American Pragmatism, is in order: the Pragmatic View also draws on the historicist philosophy of science. That tradition focused on themes such as meaning change and the incommensurability of terms across world views. The historicists also opposed the idea that theories can secure meaning and empirical support from a theory-neutral and purely observational source, as the Syntactic View had insisted with its strong distinction between theoretical and observational vocabularies (cf. Galison).

This work was concerned more with theory change than with theory structure, and had fewer conceptual resources from the sociology and history of science than contemporary Pragmatic View work. Moreover, paradigms never quite caught on the way analyses of models and modeling have. Even so, this work did much to convince later scholars, including many of the Pragmatic View, of certain weaknesses in understanding theories as deductive axiomatic structures. As a final way to contrast the three views, we return to population genetics and, especially, to the Hardy-Weinberg Principle (HWP).

Both Woodger and Williams provide detailed axiomatizations of certain parts of biology, especially genetics, developmental biology, and phylogenetics (see Nicholson and Gawne). Woodger elaborates these logical predicates or relations to produce a careful reconstruction of Mendelian genetics. This was a reference to HWP.

However, she does not address HWP. Were such principles considered secondary or uninteresting by Woodger and Williams? Proponents of the Semantic View claim that their perspective on scientific theory accurately portrays the theoretical structure of population genetics. Thompson provides both set-theoretical and state-space renditions of Mendelian genetics. The first involves defining a set-theoretic predicate for the system. In contrast, the state-space approach of the Semantic View articulates a phase space, with each dimension representing allelic or genotypic frequencies.

Mathematically, the diploid version of HWP is written thus: (p + q)^2 = p^2 + 2pq + q^2, where p and q are the frequencies of the two alleles and p + q = 1. The left-hand side represents the allele frequencies in the parental generation and a random mating pattern, while the right-hand side captures the genotype frequencies in the offspring generation, as predicted from the parental generation. This is a null theoretical model—actual genotypic and allelic frequencies of the offspring generation often deviate from the predicted frequencies.
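To make the null-model character of HWP concrete, here is a minimal sketch in Python (ours, not from the text; the sample figures are hypothetical) that computes HWP-expected genotype frequencies and compares them with an observed sample:

```python
def hardy_weinberg(p):
    """HWP-expected genotype frequencies (AA, Aa, aa) for a biallelic locus.

    p is the frequency of allele A; q = 1 - p is the frequency of allele a.
    """
    q = 1.0 - p
    return p**2, 2 * p * q, q**2

# Hypothetical observed genotype frequencies (AA, Aa, aa) in a sample
observed = (0.36, 0.44, 0.20)
p_hat = observed[0] + observed[1] / 2   # allele frequency of A implied by the sample
expected = hardy_weinberg(p_hat)
print(expected)   # approximately (0.3364, 0.4872, 0.1764): the sample deviates
```

Departures of observed from expected frequencies are exactly what the laws of succession mentioned below (for selection, mutation, and so forth) are invoked to explain.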

Indeed, HWP holds strictly only in abstracted and idealized populations with very specific properties (e.g., random mating and the absence of selection, mutation, migration, and drift). HWP is useful also in the way it interacts with laws of succession for selection, mutation, and so forth. This powerful population genetic principle is central to Semantic View analyses of the mathematical articulation of the theoretical structure of population genetics. Recall that the Pragmatic View highlights the internal and external pluralism—as well as the purposiveness—of model and theory structure.

Consider recent uses of population genetic theory to specify the kinds and amounts of population structure existing in Homo sapiens. In particular, different measures and mathematical modeling methodologies are employed in investigating human genomic diversity. It is possible to distinguish at least two different research projects, each of which has a unique pragmatic content. Diversity partitioning assesses genetic variation within and among pre-determined groups using Analysis of Variance (also crucial to estimating heritability; Downes). Recall also that any two humans are, on average, already identical at the overwhelming majority of their nucleotides.
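As a rough illustration of what diversity partitioning computes, here is a minimal sketch (ours; the group frequencies are invented for illustration) that splits the variance at a single biallelic locus into among-group and within-group components:

```python
import numpy as np

# Hypothetical allele frequencies of one biallelic locus in three pre-defined groups
p = np.array([0.50, 0.55, 0.45])
w = np.array([1 / 3, 1 / 3, 1 / 3])          # relative group sizes

p_bar = np.sum(w * p)                        # pooled allele frequency
among = np.sum(w * (p - p_bar) ** 2)         # variance among groups
total = p_bar * (1 - p_bar)                  # total variance at the locus
print(among / total)                         # F_ST-like fraction, ~0.007 here
```

With frequencies this similar across groups, only a tiny fraction of the total variation lies among groups, which is the qualitative pattern famously reported for human populations.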

This is one way among many to deploy the population-genetic principle of HWP. In contrast, clustering analysis (e.g., Edwards) can readily be performed even with the small amount of among-continent genetic variance in Homo sapiens.

That is, for a cluster eventually to be robust in the modeling runs, it should meet HWP expectations (Winther). Clustering analysis has sometimes been interpreted as a justification for a realist stance towards biological race (see discussions in Hochman; Kaplan and Winther; Winther and Kaplan; Winther; Edge and Rosenberg forthcoming; Spencer forthcoming).
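In the spirit of model-based clustering, and only as a loose illustration of the role HWP plays there (the cluster frequencies and genotype below are invented), one can score an individual's genotype against candidate clusters by its HWP likelihood given each cluster's allele frequency:

```python
def genotype_likelihood(genotype, p):
    """P(genotype | cluster) under HWP; genotype is the count of allele A (0, 1, or 2)."""
    q = 1.0 - p
    return {2: p**2, 1: 2 * p * q, 0: q**2}[genotype]

clusters = {"cluster_1": 0.8, "cluster_2": 0.2}   # hypothetical cluster allele frequencies
genotype = 2                                      # an individual homozygous for allele A
best = max(clusters, key=lambda c: genotype_likelihood(genotype, clusters[c]))
print(best)   # cluster_1: the assignment that makes the genotype most HWP-probable
```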

This example of the mathematical modeling of human genomic diversity teaches that basic and simple formal components can be used in different ways to develop and apply theory, both inside and outside of science. These ontological ruptures can be identified despite the fact that both research projects assess population structure by examining departures from HWP. This exploration of how the three views on the structure of scientific theory address population genetics, and in particular HWP, invites a certain meta-pluralism. That is, the Syntactic View carefully breaks down fundamental concepts and principles in genetics and population genetics, articulating definitions and relations among terms.

The Semantic View insightfully decomposes and interweaves the complex mathematical edifice of population genetics. The Pragmatic View sheds light on modeling choices and on distinct interpretations and applications of the same theory or model, both within and without science. The three perspectives are hardly mutually exclusive.

The structure of scientific theories is a rich topic. Theorizing and modeling are core activities across the sciences, old and new alike. Furthermore, theory remains essential to developing multipurpose tools such as statistical models and procedures. Given the strength and relevance of theory and theorizing to the natural sciences, and even to the social sciences, the topic deserves continued philosophical attention. This piece has focused on a comparison of three major perspectives: the Syntactic View, the Semantic View, and the Pragmatic View.

In order to handle these complex debates effectively, we have sidestepped certain key philosophical questions, including questions about scientific realism; scientific explanation and prediction; theoretical and ontological reductionism; knowledge-production and epistemic inference; the distinction between science and technology; and the relationship between science and society.

Each of these topics bears further philosophical investigation in light of the three perspectives here explored. The Syntactic, Semantic, and Pragmatic views are often taken to be mutually exclusive and, thus, to be in competition with one another. They indeed make distinct claims about the anatomy of scientific theories. But one can also imagine them to be complementary, focusing on different aspects and questions of the structure of scientific theories and the process of scientific theorizing. For instance, in exploring nonformal and implicit components of theory, the Pragmatic View accepts that scientific theories often include mathematical parts, but tends to be less interested in these components.

Moreover, the questions the three views ask overlap. How are they ultimately related? A standard philosophical move is to generalize and abstract, understanding a situation from a higher level (Chakravartty). Finally, the Pragmatic View, which did not exist as a perspective until relatively recently, imagines theory as pluralistic and can thus ground a holistic philosophical investigation. It envisions a meta-pluralism in which reconstructive axiomatization and mathematical modeling remain important, though not necessary for all theories.

By design, the ecumenical meta-pluralism sanctioned by the Pragmatic View does not completely offset identity and combat strategies. Even so, the complementarity strategy might be worth developing further. Compared to identity and combat meta-perspectives, it provides broader—or at least different—insights into the structure of scientific theories. More generally, exploring the relations among these views is itself a rich topic for future philosophical work.

Introduction

In philosophy, three families of perspectives on scientific theory are operative: the Syntactic View, the Semantic View, and the Pragmatic View. Savage distills these philosophical perspectives thus: The syntactic view that a theory is an axiomatized collection of sentences has been challenged by the semantic view that a theory is a collection of nonlinguistic models, and both are challenged by the view that a theory is an amorphous entity consisting perhaps of sentences and models, but just as importantly of exemplars, problems, standards, skills, practices and tendencies.

In a classic exposition, the logical positivist Carnap writes: If in an investigation explicit reference is made to the speaker, or, to put it in more general terms, to the user of a language, then we assign it to the field of pragmatics. Whether in this case reference to designata is made or not makes no difference for this classification. If we abstract from the user of the language and analyze only the expressions and their designata, we are in the field of semantics.

And if, finally, we abstract from the designata also and analyze only the relations between the expressions, we are in logical syntax. The whole science of language, consisting of the three parts mentioned, is called semiotic. Examples of such queries are: What would be the most convenient metamathematical axiomatization of evolutionary processes? In which formal language(s) would and could such axiomatizations be articulated? Which formal and methodological tools would permit a smooth flow from the metamathematical axiomatization to the mathematical theory of population genetics?

Very generally, this exploration involves the following questions: What is the form and content of the directly presented class of mathematical models of evolutionary theory? How could and should we organize the cluster of mathematical models (sensu Levins) of population genetics? Which additional models are needed? What are the relations among theoretical mathematical models, data models, and experimental models? How does theory explain and shape data? How do the data constrain and confirm theory? The following questions about the structure of population genetic theory might be here addressed: What role did R. A. Fisher's foundational work play in its development?

How did the development of computers and computational practices, statistical techniques, and the molecularization of genetics shape theory and theorizing in population genetics, from the mid-twentieth century to today?

The Syntactic View

According to the Syntactic View, which emerged mainly out of the work of the Vienna Circle and Logical Empiricism (see Coffa; Friedman; Creath; Uebel), philosophy, as most generally practiced, is, and should be, the study of the logic of natural science, or Wissenschaftslogik (Carnap; Hempel).

The Semantic View

An overarching theme of the Semantic View is that analyzing theory structure requires employing mathematical tools rather than predicate logic.

Standard objections to the Syntactic View include the following.

First-Order Predicate Logic Objection: This places heavy explanatory and representational responsibility on relatively inflexible and limited languages.

Theory Individuation Objection: Since theories are individuated by their linguistic formulations, every change in high-level syntactic formulation will bring forth a distinct theory.

Unintended Models Objection: There is no clear way of distinguishing between intended and unintended models for syntactically characterized theories.

Confused Correspondence Rules Objection: Correspondence rules are a confused medley of direct meaning relationships between terms and world, means of inter-theoretic reduction, causal relationship claims, and manners of theoretical concept testing.

Trivially True yet Non-Useful Objection: Presenting scientific theory in a limited axiomatic system, while clearly syntactically correct, is neither useful nor honest, since scientific theories are mathematical structures.

Practice and History Ignored Objection: Syntactic approaches do not pay sufficient attention to the actual practice and history of scientific theorizing and experimenting.

Consider the axioms of a projective plane:

For any two points, exactly one line lies on both.
For any two lines, exactly one point lies on both.

There exists a set of four points such that no line has more than two of them.

[Figures 1–3 in the original display geometric models that make this theory true.]
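Although the original figures are not reproduced here, the smallest such model is easy to exhibit and check by brute force. The following sketch (ours) verifies that the seven-point Fano plane satisfies all three axioms:

```python
from itertools import combinations

# The Fano plane: seven points, seven lines (the labels are ours, for illustration)
points = range(1, 8)
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Axiom 1: any two points lie on exactly one common line
ax1 = all(sum(p in l and q in l for l in lines) == 1 for p, q in combinations(points, 2))

# Axiom 2: any two lines have exactly one point in common
ax2 = all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))

# Axiom 3: some four points are such that no line contains more than two of them
ax3 = any(all(len(set(quad) & l) <= 2 for l in lines) for quad in combinations(points, 4))

print(ax1, ax2, ax3)   # True True True
```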











Bas van Fraassen: The Semantic Approach to Science, After 50 Years

In his commentary, van Fraassen insists that the identification of physical quantities is not a problem to be evaded. But the way in which this sort of question appears in scientific practice does not answer to the sense it is given in metaphysics. It is rather a way of raising a question of adequacy for a scientific theory which concerns the extent to which values of quantities in its models are in principle determinable by procedures that the theory itself counts as measurements.

And that requires a pragmatist critique. While experimentation and modeling were studied in philosophy of science throughout the twentieth century, their delicate entanglement and mutuality have recently come increasingly into focus. The essays in this collection concentrate on the experimental side of modeling, as well as, to be sure, the modeling side of experimentation.

In order to provide adequate background to these essays, we shall first outline some of the historical development of this philosophical approach, and then present in very general terms a framework in which we understand this inquiry into scientific practice. This will be illustrated with case studies in which modeling and experimentation are saliently intertwined, to provide a touchstone for the discussions that follow. Philosophical views of scientific representation through theories and models have changed radically over the past century.

In the early twentieth century there was a rich and complex interplay between physicists, mathematicians, and philosophers, stimulated by the revolutionary impact of quantum theory and the theory of relativity. Recent scholarship has illuminated this early development of the philosophy of science in interaction with avant-garde physics (Richardson; Friedman; Ryckman) but also with revolutionary progress in logic and the foundations of mathematics. After two decades of seminal work in the foundations of mathematics, including the epochal Principia Mathematica (1910–13) with Alfred North Whitehead, Bertrand Russell brought the technique of logical constructs to the analysis of physics in Our Knowledge of the External World (1914) and The Analysis of Matter (1927). Instants and spatial points, motion, and indeed the time and space of physics as well as their relations to concrete experience were subjected to re-creation by this technique.

This project was continued by Rudolf Carnap in his famous Der logische Aufbau der Welt (1928) and was made continually more formal, more and more a part of the subject matter of mathematical logic and meta-mathematics. Whether aptly or inaptly, this view was seen as the core of the logical positivist position initially developed in the Vienna Circle.

But by this time it was also seen as contestable. It is perhaps not infrequent that a movement reaches its zenith after it has already been overtaken by new developments. Montague stays with the received view of theories as formulated in first-order predicate languages, but his work is rich enough to include a fair amount of mathematics. On his explication, first, a theory is deterministic exactly if any two histories realizing it that agree at one time agree at all times; second, a physical system or its history is deterministic exactly if its history realizes some deterministic theory.
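Schematically, and in our reconstruction rather than Montague's own notation, the first definition can be put as follows:

```latex
% Determinism of a theory T over histories h (our schematic rendering):
\[
  \mathrm{Det}(T) \;:\iff\;
  \forall h_1, h_2 \models T \;\; \forall t \;
  \bigl( h_1(t) = h_2(t) \;\rightarrow\; \forall t'\; h_1(t') = h_2(t') \bigr).
\]
```

Note how the definition quantifies over the models of T; this is what lets the cardinality argument below get a grip.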

Montague announces novel results, such as a clear disconnection between periodicity and determinism, contrary to their intimate relationship as depicted in earlier literature. But it is instructive to note how the result is proved. First of all, by this definition, a theory that is satisfied only by a single history is deterministic—vacuously, one might say—even if that history is clearly not periodic. Second, given any infinite cardinality for the language, there will be many more periodic systems than can be described by theories (axiomatizable sets of sentences) in that language, and so many of them will not be deterministic by the definition.

Disconcertingly, what we have here is not a result about science, in and by itself, so to speak, but a result that is due to defining determinism in terms of what can be described in a particular language. Whereas the logical positivist tradition had viewed scientific theoretical language as needing far-reaching interpretation to be understood, both these seminal developments involved viewing scientific language as part of natural language, understood prior to analysis. Suppes provided guiding examples through his work on the foundations of psychology (specifically, learning theory) and of physics (specifically, classical and relativistic particle mechanics).

Then he introduced, if still in an initial sketch form, the importance of data models and the hierarchy of modeling activities that both separate and link the theoretical models to the practice of empirical inquiry:

The concrete experience that scientists label an experiment cannot itself be connected to a theory in any complete sense. That experience must be put through a conceptual grinder. … These canonical data constitute a model of the results of the experiment, and direct coordinating definitions are provided for this model rather than for a model of the theory. The assessment of the relation between the model of the experimental results and some designated model of the theory is a characteristic fundamental problem of modern statistical methodology.

(Suppes, 7)

What was not truly possible, or advisable, was to banish philosophical issues about language entirely from philosophy of science.



Suppes offered a correction to the extremes of Carnap and Montague, but many issues, such as the character of physical law, of modalities, possibilities, and counterfactuals, and of the terms in which data may be presented, would remain. In actuality the semantic approach presented a focus on models, understood independently of any linguistic formulation of the parent theory but associated with limited languages in which the relevant equations can be expressed to formulate relations among the parameters that characterize a target system.

Any scientist is thoroughly familiar with equations as a means of representation, and since Galois it has been common mathematical practice to study equations by studying their sets of solutions. When Tarski introduced his new concepts in the study of logic, he had actually begun with a commonplace in the sciences: to understand an equation is to know its set of solutions. It is in this fashion that one arrives at what in logical terminology is a model. The set of all models of the equations, so understood, is precisely the set of solutions of those equations.
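A toy example (ours, not from the text) makes the Tarskian point vivid: an equation, read model-theoretically, just is a way of picking out its solution set.

```latex
% The unit circle, read model-theoretically (illustrative example):
\[
  x^{2} + y^{2} = 1
  \qquad\leadsto\qquad
  \{\,(x, y) \in \mathbb{R}^{2} \mid x^{2} + y^{2} = 1\,\}.
\]
% Each assignment (x, y) satisfying the equation is, in logical terminology,
% a model of it; the solution set as a whole is its set of models.
```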

The elements of a sequence that satisfy an equation may, of course, not be numbers; they may be vectors or tensors or scalar functions on a vector space, and so forth. Thus, the equation picks out a region in a space to which those elements belong—and that sort of space then becomes the object of study. In meta-mathematics this subject is found more abstractly: the models are relational structures, domains of elements with relations and operations defined on them.

Except for its generality, this does not look unfamiliar to the scientist. A Hilbert space with a specific set of Hermitean operators, as a quantum-mechanical model, is an example of such a relational structure. In the standard example of classical mechanics, the state of a particle is represented by three spatial and three momentum coordinates; the state of an N-particle system is thus represented by 3N spatial and 3N momentum coordinates. The space for which these 6N-tuples are the points is the phase space common to all models of N-particle systems.
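For the simplest case, N = 1, the construction can be displayed in two lines (a standard textbook gloss, ours rather than the authors'); the harmonic-oscillator condition mentioned next is a constraint of exactly this kind:

```latex
% Phase space of a single particle, and the harmonic-oscillator energy condition:
\[
  (q_1, q_2, q_3,\, p_1, p_2, p_3) \in \mathbb{R}^{6},
  \qquad
  H(q, p) = \frac{\lVert p \rVert^{2}}{2m} + \frac{1}{2}\, m \omega^{2} \lVert q \rVert^{2} = E.
\]
% The admitted trajectories are those lying on a constant-energy surface H = E;
% each such condition picks out a class of models within the common phase space.
```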

A given special sort of system will be characterized by conditions on the admitted trajectories in this space. For example, a harmonic oscillator is a system defined by conditions on the total energy as a function of those coordinates. At the least, the semantic approach was brought into many areas of philosophical discussion about science, with applications extended, for example, to the philosophy of biology. As Cartwright puts it: To have a theory of the ruby laser [for example], or of bonding in a benzene molecule, one must have models for those phenomena which tie them to descriptions in the mathematical theory.

In short, on the simulacrum account the model is the theory of the phenomenon. This sounds very much like the semantic view of theories, developed by Suppes and Sneed and van Fraassen. But the emphasis is quite different. (Cartwright) This provocative phrasing appeared in a joint paper in the mid-1990s, where the difference between the received view and the semantic approach was critically presented:

According to Cartwright and her collaborators, the models developed in application of a theory draw on much that is beside or exterior to that theory, and hence not among whatever the theory could have carried in its belly. What is presented here is not a different account of what sorts of things models are, but rather a different view of the role of theories and their relations to models of specific phenomena in their domain of application. One type of model, at least, has the traditional task of providing accurate accounts of target phenomena; these they call representative models.

They maintain, however, that we should not think of theories as in any sense containing the representative models that they spawn. Their prime example is the Londons' model of superconductivity. This model is grounded in classical electromagnetism, but that theory only provided tools for constructing the model and was not by itself able to provide the model. That is, it would not have been possible to just deduce the defining equations of the model in question after adding data concerning superconductivity to the theory. Examples of this are actually ubiquitous: a model of any concretely given phenomenon will represent specific features not covered in any general theory, features that are typically represented by means derived from other theories or from data.

From the earlier point of view, a model of a theory is a structure that realizes (satisfies) the equations of that theory, in addition to other constraints. Cartwright and colleagues do not present an account of models that contradicts this. The important point is instead that the process of model construction in practice was not touched on or illuminated in the earlier approaches. The important change on the philosophical scene that we find here, begun around this time, is the attention to the fine structure of detail in scientific modeling practice that was not visible in the earlier, more theoretical focus.

A brief history of this sort may give the impression of a straightforward, linear development of the philosophy of science. That is misleading. Just a single strand can be followed here, guided by the need to locate the contributions in this volume in a fairly delineated context. Many other strands are entangled with it. Equally, we can see Hans Halvorson as continuing in as well as correcting the semantic approach in the twenty-first century (Halvorson; for discussion see van Fraassen).

But we will leave aside those developments as well as much else to restrict this chapter to a proper introduction to the articles that follow. Although the philosophy of science community is by no means uniform in either focus or approach, the new attitude displayed by Cartwright did become prevalent in a segment of our discipline, and the development starting there will provide the context for much work being done today. The redirection of attention to practice thus initiated by Cartwright and her collaborators was systematically developed by Morgan and Morrison in their contributions to their influential collection Models as Mediators. Emphasizing the autonomy and independence of models from theory, they describe the role or function of models as mediating between theory and phenomena.

What, precisely, is the meaning of this metaphor? Earlier literature typically assumed that in any scientific inquiry there is a background theory, of which the models constructed are, or are clearly taken to be, realizations. The metaphor takes on a stronger meaning with a new focus on how models may make their appearance, to represent experimental situations or target phenomena, before there is any clear application of, let alone derivation from, a theory.

A mediator effects, and does not just instantiate, the relationship between theory and phenomenon. That is the main content of the first thesis: independence in construction. The second thesis, autonomy of models, to the extent that it goes beyond this independence, can be illustrated by the delay, even negligence, with respect to the task of showing that the model proposed for a given phenomenon does actually satisfy the basic equations of the main theory.

For example, in fluid mechanics the basic principle in the background of all theorizing and model construction is the set of Navier–Stokes equations. There is no sense in which a specific fluid mechanics model, such as a model of turbulence in the wake of an obstacle, was ever deduced from those equations alone. But there is so little suspicion that a given model, proposed in practice, violates those equations that eventual checking of the consistency is left to mathematicians, without the experimenter waiting for reassurance.
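For reference, the equations in question, in a standard incompressible form (textbook notation, not quoted from the volume):

```latex
% Incompressible Navier–Stokes equations:
\[
  \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
  \qquad
  \nabla \cdot \mathbf{u} = 0,
\]
% where u is the velocity field, p the pressure, rho the density,
% and nu the kinematic viscosity.
```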

That models are autonomous and independent in this sense does not by itself reveal the character of the role of mediation. So it is crucial to appreciate the two sides: while theories are drawn on to construct models, conversely the models aid in theory construction. In particular cases, a model may come along first, the extension of theory following upon what was learned while a resistant phenomenon was being modeled.

That models function as mediators between theory and the phenomena implies then that modeling can enter in two ways. The process of modeling may start with a phenomenon (a physical system or process) and draw on theory to devise a representation of that phenomenon; or it may start with a theory, draw on other sources such as data or auxiliary theories to complement that theory, and introduce as model a structure satisfying that theory.

In the first case it is, or is intended to be, an accurate representation of a phenomenon; in the second case it is a representation of what the theory depicts as going on in phenomena of this sort. As in science, so in philosophy: practice goes far beyond what is preached. By the second decade of the twenty-first century, in which the present collection is situated, the previous developments had borne ample fruit. In two new series of conferences, Models and Simulations and The Society for Philosophy of Science in Practice, starting in France and in the Netherlands, respectively, the new orientation to scientific practice has been saliently displayed.

More or less concurrently with those workshops, a bevy of new studies appeared on how computer simulation was changing conceptions of modeling, measurement, and experiment, including work by Parke and by Wendy Parker. The main issue about computer simulation, evoking sustained debate in the literature, was saliently expressed by Margaret Morrison: Do computer simulations ever have the same epistemic status as experimental measurement?


Morrison had argued earlier, as we have seen, that models function, in some sense, as measuring instruments; she now argued that there is a way in which simulation can be said to constitute an experimental activity (see further Morrison). At the same time, as debated in the workshops, there were new arguments concerning the theory-dependence of measurement. This surfaces in the present collection especially in the contributions by Joseph Rouse and Paul Teller, but such discussion continued in a series of subsequent publications by Ann-Sophie Barwich and Hasok Chang, Nancy Cartwright, Teru Miyake, Eran Tal, and Bas van Fraassen.

At issue here are the roles of idealization, abstraction, and prediction in establishing measurement outcomes as well as the theoretical status of criteria to determine what counts as a measurement and what it is that is measured. In the remainder of this introduction we will sketch in broad outlines, with illustrative case studies, a framework in which we see the experimental side of modeling currently approached. As the philosophical take on modeling changed, so did views of how experimentation relates to modeling.

The earlier stages inherited the logical positivist view of experimentation as the tribunal that tests models against data delivered by experimental and observational outcomes. In later stages much attention was paid to how measurement itself involves modeling from the outset. There is, as might be expected, another side to the coin as well: that, conversely, experimental activity makes a constructive contribution to the processes of modeling and simulating.

The autonomy of modeling was a new theme that is to be conjoined with another new theme: the constructive relation of experimentation to modeling as an interactive, creative, open-ended process that modifies both along the way. There are two aspects to this interaction. The first is that the specification of the relevant parameters of a phenomenon is not given from the outset. The phenomenon, vaguely defined at the beginning of the investigation, needs to be specified, and this is done by specifying what data represent the phenomenon and what data are to be regarded as a manifestation of the phenomenon.

What needs to be settled through experimentation will then include the conditions in which measurement outcomes qualify as relevant data—that is, data to which a putative model of the phenomenon is accountable. The second, which follows upon this, is conceptual innovation, as a result of an effort to make sense, through modeling, of the phenomenon and the conditions of its occurrence.

Looking back from our present vantage point we may discern illustrative examples in the past: surely it was experimentation that led to the reconceptualization of lightning as electric discharge, for example. But the intricate ballet between experimental and modeling progress can only become clear through a detailed analysis, and for this we shall choose a case in fluid mechanics.

The formation of a wake is a very common phenomenon that happens when air or liquid goes over a bluff (not streamlined) body, which can be a pole, a rock, or an island. The work on a better theoretical understanding of wakes spread from meteorology to the stability of bridges or platforms, from the design of cars and airplane wings to that of helicopter vanes—and more generally to all cases where periodic instabilities or transitions toward chaotic behavior are possible. A simple physical model of this phenomenon in a laboratory can show the wake of a flow behind a cylinder when the velocity of the upstream flow reaches a certain critical value.

In a diagram, such a flow going, say, from left to right can be visualized in the plane perpendicular to the axis of the cylinder, with the wake formed by vortices that are emitted alternately on each side of the cylinder and carried away with the downstream flow. As simple as it may look, the attempt to construct a theoretical model of this sort of wake triggered an enormous number of studies, and no less controversy. As our main example from this literature let us take one, seemingly simple, question that was the object of a debate involving experimental as well as numerical and theoretical studies in fluid mechanics in the second half of the twentieth century.

Formulation of such a question begins inevitably within a pre-existing modeling tradition. The most significant quantity, defined in terms of the upstream flow velocity U, the cylinder diameter d, and the kinematic viscosity ν of the fluid, is the dimensionless Reynolds number Re = Ud/ν. The wake is formed when this number reaches a critical value, beyond which vortices are emitted with a certain frequency, the shedding frequency.
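A minimal sketch (ours; the numerical values are merely illustrative) shows how little is involved in computing this control parameter:

```python
def reynolds_number(u, d, nu):
    """Dimensionless Reynolds number Re = U*d/nu for flow past a cylinder."""
    return u * d / nu

# Illustrative values: water (nu ~ 1e-6 m^2/s) at 5 cm/s past a 1 mm cylinder
re = reynolds_number(u=0.05, d=0.001, nu=1e-6)
print(re)   # 50.0, just above the critical value (~47) at which shedding sets in
```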

Question: What happens when Re is increased within a certain interval beyond the critical value? How does the evolution of the shedding frequency of the vortices vary with Re? Specifically, as Re is increased within a certain interval beyond the critical value, is the variation of the shedding frequency with Re a continuous linear variation, or is there some discontinuity? The question can itself arise only within a theoretical background, but it clearly asks for data from experiment before models of the formation of the wake can be assessed and also, in effect, before a suitable model of the entire phenomenon can be constructed.

For the stable range Roshko provided an empirical formula for the increase of shedding frequency with velocity: a linear variation of the shedding frequency with the Reynolds number. Tritton argued, on the basis of new measurements, for the existence of a discontinuity in the curve that displays the frequency plotted against the velocity. Beyond the discontinuity—that is, for values of Re greater than the one for which the discontinuity occurs—the shedding of the vortices along the cylinder is not simultaneous.

To put it differently, the imaginary lines joining side-by-side vortices along the cylinder are not parallel to the axis of the cylinder—they are oblique.


That the successive lines of the vortices are or are not parallel to the axis of the cylinder translates into a claim about the dimension of the dynamics of the wake. Parallel lines of vortices correspond to a two-dimensional dynamics of the wake. In contrast, nonparallel lines of vortices testify to the existence of a dynamics in the direction of the cylinder, which, added to the two-dimensional dynamics, would make the total dynamics of the wake three-dimensional. But three-dimensional effects on the dynamics were thought to be associated with the development of turbulence, which according to Roshko took place beyond the stable range.

Is or is not the discontinuity, and the oblique shedding, an intrinsic, fluid-mechanical phenomenon, irrespective of the experimental setup? It was only later that a model of the wake was proposed to account for its temporal dynamics—that is, for the temporal evolution of the amplitude of the vortices and of the frequency at which they are emitted (Mathis, Provansal, and Boyer). The model in question was obtained from the general model proposed by Landau to describe the development of a periodic instability, which he viewed as the first step toward turbulence.
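In a standard real-amplitude form (our gloss of the Landau model, using textbook notation), the model reads:

```latex
% Landau model for the amplitude A of a periodic instability:
\[
  \frac{dA}{dt} = \sigma A - \ell A^{3},
  \qquad \sigma \propto (\mathrm{Re} - \mathrm{Re}_{c}).
\]
% For sigma > 0 the rest state is unstable and the amplitude saturates at
% A = sqrt(sigma / l), growing continuously from zero above the threshold.
```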

As illustrated in Figure I below, the measurements of the amplitude that were made showed that, in this respect, the model works beautifully—even better than expected. So for the evolution of the amplitude, at least, one and the same model can account for the development of the instability over the whole range of the Reynolds number. But the same model also predicts that the evolution of the frequency with the Reynolds number is linear, with no discontinuity! Yet the measurement results continue to show the existence of a discontinuity. And additional measurements made along the cylinder indicate the existence of a three-dimensional dynamics, an oblique shedding.

[Figure I. Two-dimensional geometry: cross-section of the cylinder. Courtesy P. Le Gal. Adapted by Nousoul Digital Publishers.]

So is this, then, an adequate model of the wake? That depends. It depends on whether that discontinuity has to be accounted for by a model of the wake. If the discontinuity is an artifact, the model of the wake not only does not have to account for it, but should not account for it.

On the other hand, if it is an intrinsic feature of the wake, a model that does not account for it cannot, in that context, count as a model of the wake. Even if procedures of analysis are assumed to be in place and a data model is produced—a data model of the evolution of the shedding frequency with Re—we are still left with an open question: Is this data model one that the model of the wake should match? This has the form of a normative question. How is it settled in practice? Since it was in fact settled in practice, the subsequent history provides us with an instructive lesson in how fact and normativity interact in scientific inquiry.

If the discontinuity is intrinsic, it should not depend on the experimental setup; if it is shown to depend on the experimental setup, then it is not intrinsic. This motivated experimental studies of the effect of an increase of the non-uniformities in the flow or in the diameter, as well as of the effect of making a cylinder vibrate. In each case, the idea was to show that the discontinuity is generated by some specific features of the experimental setup and consequently is not a feature of the wake itself. It is not sufficient, however, to show that the discontinuity is only the effect of non-uniformities or vibrations.

It would also have to be shown that without non-uniformities or vibrations there is no discontinuity. This is precisely the challenge that some numerical studies were going to try to address. It is not easy to show that when there is no non-uniformity or no vibration there is no discontinuity. Both a flowing fluid and the diameter of a cylinder keep a certain level of non-uniformity, however carefully they are prepared. Fortunately, by the end of the 1980s the situation of the wake lent itself to the modern alternative: numerical simulation.

A simulation of the Navier–Stokes equations, which are fundamental equations in fluid mechanics, was performed to find out how the flow behind a cylinder develops when there are no non-uniformities of any sort and no vibration (Karniadakis and Triantafyllou). The results of the simulation were presented as pointing to a definite answer to the question of the nature of the discontinuity. And once again, as with the Landau model, the answer was that the evolution of the frequency with Re is linear, with no discontinuity.



But the parameters of Navier–Stokes are those whose effect is constitutive of fluid-mechanical phenomena. So if one trusts the method used for the simulation (a spectral-element method, used successfully in previous studies) and does not envisage calling into question the validity of the fundamental equations, the most obvious conclusion would be that the effect of these additional factors constitutes an artifact and should therefore be shielded.

This conclusion about the discontinuity only holds, however, under certain assumptions. For the results to be relevant to the understanding of this phenomenon, the simulation must be an imitation, an accurate mimetic representation, of the phenomenon we are interested in. Whether it is indeed such a representation is where the problem of the identification of the relevant parameters sneaks in.

To speak of simulating the fundamental equations is not exactly right in at least two respects. First of all, the computer can only run a discrete model. Thus, the simulation requires the construction of a system of discrete equations and a method of discretization for time and space to obtain the simulation model.
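What the construction of a simulation model involves can be suggested with a deliberately tiny sketch (ours; a one-dimensional diffusion equation stands in for the far more demanding Navier–Stokes system): the continuous equation is replaced by discrete update rules on a grid.

```python
import numpy as np

# Discretizing du/dt = nu * d2u/dx2 with finite differences (explicit Euler).
nu, dx, dt, steps = 0.1, 0.1, 0.01, 100
x = np.arange(0.0, 1.0 + dx, dx)
u = np.exp(-((x - 0.5) / 0.1) ** 2)          # initial temperature-like profile

for _ in range(steps):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # discrete Laplacian
    u = u + dt * nu * lap                                    # discrete time step
    u[0] = u[-1] = 0.0                                       # fixed boundary values

print(round(u.max(), 4))   # the peak has decayed, as diffusion predicts
```

Choices made here (the grid spacing, the time step, the treatment of boundaries) belong to the simulation model rather than to the original equation; that is the point Lenhard makes in the passage that follows.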


As Lenhard shows, the construction of the simulation model may become a modeling process in its own right, where the norm that guides and regulates the construction is precisely agreement with the observations of the phenomenon in question. The main normative requirement being successful imitation, the simulation model may be as far from the theoretical model as from a phenomenological model. Something else may be overlooked when one speaks of simulating the fundamental equations, something that is independent of the way in which the simulation model is obtained.

The fundamental equations are abstract, and going from the fundamental equations to the simulation of a target phenomenon must involve specifications that determine what particular situation is the target of the simulation. On closer inspection, what raises doubt as to the significance of the result of the simulation is the geometry of the simulated situation. As may already have been apparent from Figure I above, the simulated situation was two-dimensional.

The simulation is meant to tell what the observation should be, what the phenomenon is really like, and whether the discontinuity is part of it or not. But how could this simulation of the development of a flow in a two-dimensional plane tell what it is like when a flow goes around a cylinder and develops in a space that contains not only the plane perpendicular to the axis of the cylinder but also the plane that contains that axis? There is an assumption that answers this question.

What the two-dimensional simulation shows then is how the wake develops, according to the Navier–Stokes equations, in conditions where all the points on the cylinder are relevantly interchangeable. But why should we think that all the points are interchangeable? The presence of the ends obviously creates an asymmetry contradicting the assumptions of the simulation. To this question too there is an answer.

Suppose that a cylinder that is long enough can be regarded as infinite, as a cylinder that has no end. If there is no end, then we are in the situation where all points are interchangeable. All that is needed to satisfy this assumption of an infinite cylinder is that, for a long enough cylinder, what happens in the middle part of the cylinder be independent from what happens at or near the ends. And it could then be admitted that the two-dimensional simulation will, at least, show what should happen in a long enough cylinder, far enough from the ends.

Taking the simulation as relevant is, consequently, taking the ends of the cylinder as being irrelevant to the understanding of the fluid-mechanical features, amplitude or frequency, of the wake. Another way to put this: so regarded, the ends of the cylinder are treated in the same way as non-uniformities of the flow or vibrations of the cylinder.

If they have an effect on the outcomes of measurement, this effect will be classified as an artifact and should be shielded. Thus, the implicitly made assumption is that the ends are taken not to be a relevant parameter of the system, and that the effects on the dynamics of the wake that are due to a finite cylinder having ends are not intrinsic characteristics of the dynamics.

This assumption about the ends of the cylinder would be temporarily supported by measurements that had shown that for a long enough cylinder the frequency of shedding in the middle of the cylinder is different from that found near the ends. But that should not mislead us into thinking that the assumption was an empirical assumption. With this in place, the conditions under which the ends would have an effect on the measurement results would simply not count as the proper conditions of measurement. There was for the experimenter an additional assumption in place: that the difference between a finite cylinder and one with no ends depends just on the length of the cylinder.

Concretely, this implies the assumption that the way to shield off the effect of the ends is to have a sufficient length. These two assumptions were called into question by Williamson in a thoroughgoing experimental study of the evolution of the shedding frequency, which was a turning point for our understanding of the discontinuity and the development of three-dimensional effects.

Measurements of the shedding frequency with a probe moving along the span of the cylinder showed the existence of different regions characterized by different shedding frequencies. In particular, a region of lower frequency was found near the ends. Note, however, that Williamson recognized only the absence of a direct influence. Why did Williamson underline the absence only of a direct influence of the ends on the wake in the central region? In the case where there was a difference in frequency between the ends and the central part of the cylinder, visualizations of the temporal development of the wake along the cylinder were made.

They showed that, initially, the lines of vortices traveling downstream were parallel to the cylinder and that progressively the parallel pattern was transformed into a stable oblique pattern, which propagated from the ends of the cylinder toward the central region. These observations suggested that there is an effect propagated toward the center.

If so, this could be attributed to an influence of the ends, which is indirect in that it is not on the value of the frequency itself. But whether this influence should be part of our understanding of the wake would still be a question. So far, all the observations and measurements had been made with endplates perpendicular to the axis of the cylinder.

But with this new focus on the possibility of effects due to the ends, further measurements were made for different values of the angle between the axis of the cylinder and the plates. And, lo and behold, for a certain angle the shedding becomes parallel—that is, two-dimensional—and the discontinuity disappears, even though the length did not change. Changing the angle of the plates has the effect of changing the pressure conditions responsible for the existence of a region of lower frequency toward the ends.

When there is such a region of lower frequency, a phase difference propagates from the ends toward the central region, and this propagation creates the pattern of oblique shedding. For a certain interval of angles of the endplates, when the pressure and the vortex frequency match those values over the rest of the span, there is no region of lower frequency and no propagation of phase difference, and the shedding is parallel.

The infamous discontinuity only appears in the oblique mode of shedding and is found to correspond to the transition of one oblique pattern to another with a slightly different geometry. The two forms of shedding correspond to different values of the angle between the endplates and the axis of the cylinder. If no special status is bestowed on certain values of this angle in contrast to the others, there is no reason to take only one of the shedding patterns as being normal or intrinsic.

In this new perspective, the parallel and the oblique pattern are not two distinct phenomena, with only one being the normal form of the wake. They are two possible configurations of the flow corresponding to different values of a parameter of the experimental system, two possible solutions for the same system in different conditions. But this new way of seeing implies that the two assumptions, on which the relevance of the simulation depended, must be rejected. First, a new parameter should be added to the set of relevant parameters of the system, namely, one that characterizes the end conditions of the cylinder.

This is to insist that the phenomenon under study is a process involving a finite cylinder, because it is exactly finite cylinders that have ends; the effect that the end conditions have on the development of the wake is now part of the structural characteristics of the wake.

Second, this parameter is independent of the length of the cylinder. The difference between the ends and the central part needs to be reconceived in terms of pressure difference and the value of the angle of the end plates that determines the value of this pressure difference.

By integrating this parameter among the set of relevant parameters the gain is one of conceptual unification: what were seen as two distinct phenomena have been unified under the same description. To integrate the ends among the relevant factors through the definition of a new relevant parameter and not to bestow a special status on a particular range of values of the angle are normative transformations of the investigation. To sum up, the elaboration of an experimental system is an interactive, creative, open-ended process and contributes constructively to the processes of modeling and simulating.

The constructive contribution is mediated by the identification of the relevant parameters. The relevant parameters are characteristics of the experimental system such that not only does their variation have an effect on the phenomenon but this effect is constitutive of the phenomenon—intrinsic to the phenomenon. As we have seen, the classification of characteristics into intrinsic versus interference or artifact is not there beforehand; it is during the inquiry, with its successive steps of experiment and model construction, that the phenomenon is identified.

To paraphrase a line from a quite different philosophical scene, the phenomenon under study is what it will have been: what was studied is what it is seen to have been in retrospect. The identification of the relevant parameters is required for determining the conditions in which measurements provide the empirical touchstone of a putative model of the phenomenon.

Before that, a putative model is untestable. The specification of the relevant parameters involves a systematic empirical investigation of the effects of different factors, but the line that is drawn between which effects are relevant and which are not is normative.

The effects of the relevant parameters are those a model of the phenomenon should account for. We had an interactive process in that both the predictions of a still untestable model and the results of a prejudiced simulation contributed to shaping the experimental search for the relevant parameters. The new relevant parameter that was introduced in the conception of the phenomenon amounted to a conceptual innovation. The process of mutually entangled steps of experimentation and modeling is a creative process. And it is open ended.

A new model was formulated in response to the reconception of the phenomenon. New measurements were to follow as well as new simulations and a modified version of the model, and so it goes on. An experiment is a physical, tangible realization of a data-generating procedure, designed to furnish information about the phenomena to which a theoretical model or hypothesis pertains. But while it is correct that the experiment and the procedure performed are physical and tangible, it would be thoroughly misleading to regard them as merely that.

The experimenter is working with a model of the instrumental setup, constructed following the general theoretical models afforded in the theoretical background. Duhem describes graphically the synoptic vision required of the experimenting scientist:

We would say it somewhat differently today, if only to place less emphasis on the instrument than on the setup as a whole. Several steps in this process need to be distinguished and analyzed. Thus, alongside the model of the experimental setup and the model of the phenomenon, we see a third level of modeling: the modeling of the data. But as Suppes emphasized, even that is an abstraction, because many levels within levels can be distinguished.

In those seminal texts of Suppes's we see a conceptual development that was far from over. Unexpected complexities came to light in both aspects of the interactive process of experimentation and modeling. To show how data models are constructed, we will select two examples from experimental work in physics, approximately one hundred and twenty years apart, so as to highlight the commonality of features in modern experimental work.

The terrestrial determination of the velocity of light was one of the great experimental problems of the nineteenth century. Among the results obtained, Foucault's, from 1862, was of special interest to Michelson. In the rotating-mirror arrangement Foucault used, light from a source strikes a rotating mirror R and is reflected to a fixed mirror M, which returns it to R; by the time the beam returns, R has turned slightly, so the beam leaves in a new direction. The deflection (the angle between the original and the new direction of the beam) is a function of three factors: the distance between the mirrors, the rotational velocity of R, and the speed of light. The first two being directly measurable, the latter can be calculated.
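The functional dependence just mentioned can be made explicit. The following reconstruction is the standard textbook derivation for the rotating-mirror method, not a quotation from Michelson's report: with the mirrors a distance D apart and R turning at angular velocity ω, the round trip takes t = 2D/c, during which R turns through ωt; since reflection from a mirror doubles any rotation of the mirror, the measured deflection is

\[
\delta \;=\; 2\,\omega t \;=\; \frac{4\,\omega D}{c},
\qquad\text{so}\qquad
c \;=\; \frac{4\,\omega D}{\delta}.
\]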

But this omits an important detail. There is a lens placed between R and M to produce an image of the light source on M. That image moves across M as R rotates, and if the distance is to be increased (so as to increase the deflection), mirror M must be made larger. Foucault managed to increase the distance to about sixty-five feet, but that left the deflection still so small that the imprecision in its measurement was significant.

Michelson realized that by placing the light source at the principal focus of the lens he would produce the light beam as a parallel bundle of rays, which could then be reflected back as a parallel beam by a plane mirror placed at any desired distance from the rotating mirror. The experiment of 1879 improved significantly on that, and the value he obtained was within a small fraction of one percent of the value accepted today.

This value is what is reported in Michelson's published paper. So what were the data generated on which this calculation was based? They are presented in tables, with both raw data and their means given explicitly. The columns of Table I set out the individual measured quantities for a specimen set of observations; following this specimen comes the summary of all results from June 5 through July 2, 1879.
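As a schematic illustration of this step from raw readings to a data model, consider the following sketch. All numerical values are hypothetical placeholders of roughly the right order of magnitude, not Michelson's reported figures, and the relation c = 4ωD/δ is the standard rotating-mirror formula given earlier.

```python
import statistics

# Hypothetical raw deflection readings in radians -- placeholders for
# illustration, NOT Michelson's reported figures.
deflections = [0.013033, 0.013036, 0.013031, 0.013034]

# Illustrative values for the other two measured factors (SI units).
omega = 2 * 3.141592653589793 * 257.0  # rotation rate of mirror R (rad/s)
D = 605.0                              # distance between the mirrors (m)

# The simplest data model here is just the mean of the raw readings.
delta_mean = statistics.mean(deflections)

# The speed of light then follows from c = 4 * omega * D / delta.
c = 4 * omega * D / delta_mean
print(f"mean deflection: {delta_mean:.6f} rad -> c approximately {c:.4e} m/s")
```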

Concentrating on this part of the report omits many of the details and the ancillary reported observations, including those that serve to certify the calibration and reliability of the instruments used. For the steel tape used to measure distances, the micrometer, and the rate of the tuning fork for time measurement, the report includes a separate data-generating procedure and its results.

Although just a few paragraphs are devoted to the theory (in effect, the previous schematic description), what is described in detail is the procedure by which the data are generated, together with the care and precautions taken to ensure reliability, issuing in a summary presentation of those data and the calculated value of the speed of light.

Like so much else in fluid mechanics, the flow of a fluid between two plates, with one rotating relative to the other (Couette flow), can be modeled starting from the Navier–Stokes equations.

But the study of idealized models—for example, with two infinite parallel plates, even with a pressure gradient imposed—leaves the experimenter with only clues as to what to expect in real situations of this sort. In the experiment reported here, the gap h between the two disks could be varied continuously between 0 and 21 mm, and the rotational velocity could be varied as well, from rest up to the maximum rate of the apparatus (measured in revolutions per minute, rpm).
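For orientation, the idealized model mentioned above does admit a simple closed-form solution. The following is a standard textbook result for steady flow between two infinite parallel plates a distance h apart, the upper plate moving at speed U, with an imposed pressure gradient dp/dx and dynamic viscosity μ; it is offered as background, not as part of the report under discussion:

\[
u(y) \;=\; U\,\frac{y}{h} \;+\; \frac{1}{2\mu}\,\frac{dp}{dx}\,\bigl(y^{2} - h\,y\bigr),
\qquad 0 \le y \le h .
\]

With no pressure gradient the profile is simply linear, which is exactly why such idealizations give only clues about the structured, turbulent regimes observed in the real setup.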

The specific results included in the report, presented as a typical case, were for a fixed gap of approximately 2 mm. The raw data are obtained by use of a video camera. The water is seeded with reflective anisotropic flakes to make the flow visible; the orientation of these particles depends on the shear stress of the flow, so the structures that develop in the fluid layer can be observed. These images become the raw material for the production of the data model. The turbulent regions appear dark because of the completely disordered motion of the reflective flakes in those areas.

To extract physical characteristics, the diagrams are binarized: the disordered states appear in black, the ordered in white.

[Figure: Experimental setup for Couette flow.]

This procedure already involves a good deal of processing of the data, but it is still an intermediate step on the way to the data model that depicts the significant results. At a first threshold, as the rotation rate is increased, defects appear in the flow; at a second threshold, these defects trigger the turbulent spirals. With further increases in the rotation rate, the lifetime of these turbulent structures grows until a threshold is reached at which they form permanent turbulent spirals arranged nearly periodically all around a circumference.

However, because the number of these turbulent spirals decreases with the rotational frequency, the transition to a fully turbulent regime is not achieved. The data model thus described, constructed from a series of experiments of the type described here, is graphically, and rather dramatically, presented in a figure in the report. The difference in the procedures followed at a concrete level is clear: here, large amounts of data are managed, collected not by individual observation but by the application of technology.

To appreciate what is important about these two examples of experimentation, we need to appreciate their function, which is essentially the same in both but must be discerned within these very different technological realizations. The data model constructed on the basis of the outcomes of measurement can be viewed as the pivotal element in the experimental process. What makes the data model a pivotal element is its dual epistemic function: it is at once what the experimental procedure must produce and what the theoretical model must account for. Given this pivotal function of the data model, to investigate the experimental process that connects theoretical models to phenomena is to clarify the different steps that lead to the construction and interpretation of data models.

[Figure: The experimental process diagrammatically displayed.]

To put it bluntly, even paradoxically, experimentally based modeling starts neither with a model to be tested nor with a data-generating procedure. It starts with a problem for which relevant data need to be produced and evidentially supported models need to be constructed.

The different stages in this process, from the abstract terms of the problem to the specification and realization of experimental procedures, from issues about evidential relevance to an explication of the similarity relation between models and data, are the subject of the contributions in this collection. The philosophical essays on modeling and experimentation in this volume are representative of the current literature in the philosophy of science in many ways: in their diversity of approach, in the tensions between them, and in the differences in focus that divide them.

One central concern is found in all of them: the most striking, groundbreaking result perceivable in these contributions is their radically new conception of the role of data in the modeling process. Specifically, they show a new awareness of what is problematic about data. Instead of being a passive element of modeling or experimenting, already standing ready to play the role of tribunal or simply flowing from an experimental procedure, data have become the central point of the experimentally based modeling process. Data are both what need to be produced and what need to be accounted for to create the prospect of articulating both the theoretical model and the model of the experiment.

Data are the manifestation of the phenomenon under study; they are what needs to be interpreted; and their very relevance or significance itself becomes the issue.

Notes

Logical terminology is typically foreign to the background mathematical practice: a set of models that is precisely thus, the set of models that satisfy a given set of sentences, is in logical terminology an elementary class.

The point is, of course, not that one cannot deduce all consequences of a conjunction from one conjunct—obviously—but rather to draw attention to the cooperation with other input that is so saliently and importantly needed in actual practice. Superconductivity presented an instructive example. See also appendix 1 of van Fraassen.

For a more comprehensive version of this section, see Peschard.

Indeed, to Margaret Morrison it did not seem inapt to say that models, including those models that are simulation programs on computers, can function as measuring instruments.

But this was certainly more controversial, especially for computer simulations. There is definitely an understanding of models as having a role in experimenting and measuring that does not make them measuring instruments (cf. Peschard; Parke). For the larger story, see further Peschard.

This usage is to be distinguished from its use in other fields, such as software engineering, in database construction for business and industry.

The erosion is the elimination of a black pixel if a chosen number s or more of white pixels is found among its eight neighbors. The first step is an erosion, in order to eliminate the undesirable black dots due to noise. Then seven successive dilations and a final erosion give the binarized diagrams a visual aspect relevantly equivalent to the original ones.
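To make the cleanup procedure concrete, here is a minimal sketch in Python. The erosion rule (delete a black pixel when s or more of its eight neighbors are white) is taken directly from the note above; the complementary dilation rule, the border handling, and the default threshold value s are assumptions introduced for illustration, since the note does not specify them.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 neighborhood kernel: the eight neighbors of a pixel, excluding itself.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def erode(black, s):
    """Delete a black pixel when s or more of its eight neighbors are white.

    `black` is a 2D boolean array, True = black (disordered) pixel.
    Pixels beyond the border are treated as white (an assumption).
    """
    white_neighbors = convolve((~black).astype(int), KERNEL,
                               mode="constant", cval=1)
    return black & (white_neighbors < s)

def dilate(black, s):
    """Dual rule (assumed): a white pixel turns black when s or more
    of its eight neighbors are black."""
    black_neighbors = convolve(black.astype(int), KERNEL,
                               mode="constant", cval=0)
    return black | (black_neighbors >= s)

def clean(diagram, s=5):
    """The pipeline reported in the note: one erosion to remove noise,
    seven successive dilations, and a final erosion."""
    out = erode(diagram, s)
    for _ in range(7):
        out = dilate(out, s)
    return erode(out, s)
```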
