Black box

In science, computing, and engineering, a black box is a system which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings.[1][2] Its implementation is "opaque" (black). The term can be used to refer to many inner workings, such as those of a transistor, an engine, an algorithm, the human brain, or an institution or government.

To analyze an open system with a typical "black box approach", only the stimulus/response behavior is accounted for, in order to infer the (unknown) contents of the box. The usual representation of this "black box system" is a data flow diagram centered on the box.

The opposite of a black box is a system where the inner components or logic are available for inspection, which is most commonly referred to as a white box (sometimes also known as a "clear box" or a "glass box").

Overview

A black box is any system whose internal workings are hidden from or ignored by an observer, who instead studies it by examining what goes in (inputs) and what comes out (outputs).[3] The observer looks for patterns in how inputs relate to outputs and uses those patterns to predict the system's behavior, without ever accessing the mechanism inside.

W. Ross Ashby, who first formalized the concept, offered a hypothetical scenario: imagine a sealed device from an alien source. An experimenter can flip its switches, push its buttons, and observe the results: a change in the sound it emits, a rise in temperature, a movement of a dial. By recording many such input-output pairs over time and looking for consistencies, the experimenter builds up a working model of how the device behaves. This model allows prediction ("if I flip this switch, the pitch will change") even though the internal mechanism remains entirely unknown.[4]

The black box approach is useful because many systems—an electronic circuit, a living organism, an economy—are either too complex to analyze component by component or have internals that are physically inaccessible, proprietary, or simply beside the point for the question at hand. Rather than requiring complete knowledge before acting, black box methods let an observer work with what can actually be observed.[5]

In black-box testing, inputs and outputs are observed to verify behavior without examining internal mechanisms.

The concept is applied in varying ways across fields. In software testing and engineering, black box analysis is typically a methodological choice: the tester treats the system as a black box to verify that specified inputs produce expected outputs, even though the source code could in principle be examined.[3]

According to Ashby, even a simple bicycle is a black box: its mechanical forces can be observed, but not the interatomic forces holding it together.

In cybernetics and philosophy of science, the concept sometimes carries a stronger implication: that all systems are ultimately black boxes because complete knowledge of internal mechanisms is never fully attainable. Even familiar objects like a bicycle involve forces and processes—interatomic bonds, material properties—that thwart direct inspection.[6] Most practical uses fall somewhere between these poles: a researcher may begin with black box methods because internals are currently inaccessible, then gradually "open" the box as new tools or techniques permit, while recognizing that some opacity will always remain.[7]

Because the observer decides what counts as input and output, designs the probes or experiments, and constructs explanatory patterns from observed regularities, knowledge gained from black box analysis is shaped by the investigation itself. Different observers, or the same observer using different instruments, may arrive at different—and possibly useful—descriptions of how the system behaves.[8]

History

The modern meaning of "black box" emerged from World War II radar research.[9] Peter Galison traces the term's popularity to the Radiation Laboratory at MIT, where components like amplifiers, receivers, and filters were housed in black-speckled enclosures.[10] Philipp von Hilgers proposes an earlier origin: the 1940 Tizard Mission, which transported an experimental cavity magnetron from Britain to the United States in a black metal deed box.[11] The magnetron was itself difficult to explain functionally, a "black box" inside a black box. It became the basis of MIT's microwave radar development program, and von Hilgers argues that from there both the object and the metaphor began to spread.[12]

The concept's theoretical development drew on related wartime work at MIT on feedback mechanisms and fire control. In the early 1940s, Norbert Wiener developed an antiaircraft predictor designed to characterize enemy pilots' evasive maneuvers, anticipate future positions, and direct artillery fire accordingly. Wiener came to view the pilot "like a servo-mechanism" whose behavior could be predicted through statistical analysis of inputs and outputs.[13] In a June 1942 letter, he described this approach as a component of communication engineering, "where the function of an instrument between four terminals is specified before anyone takes up the actual constitution of the apparatus in the box."[14] The black boxes accumulating at MIT thus became, as Elizabeth Petrick notes, a bridge between physical technology and a new way of thinking about systems in terms of inputs and outputs.[15]

Before the term emerged during World War II, similar thinking had developed in electronic circuit theory. Vitold Belevitch identifies Franz Breisig's 1921 treatment of two-port networks, characterized solely by their voltage equations, as an early instance of an input-output approach.[16] Similarly, Wilhelm Cauer's program for network synthesis (1926–1941), which studied circuits through their transfer functions rather than internal structure, has been described retrospectively as black-box analysis.[17]

Cross-disciplinary communication about the concept began during the war. In 1944, experimental psychologist Edwin Boring corresponded with Wiener about modeling psychological functions as electrical systems, describing the brain as "a mysterious box with binding posts and knobs on it."[18] The term "black box" itself entered cybernetics discourse in the early 1950s. When Wiener visited the Burden Neurological Institute in January 1951, W. Ross Ashby recorded in his journal that Wiener discussed "the problem of the black box"—how to observe a box with unknown contents, feed an input, observe the output, and deduce a machine with equivalent performance.[19]

A full treatment was given by Ashby in 1956 in An Introduction to Cybernetics, which devoted an entire chapter to black boxes.[20][7] Ashby argued that "the real objects are in fact all Black Boxes" since complete knowledge of any system's internal workings is impossible.[6][21] Wiener provided his most complete discussion in the 1961 second edition of Cybernetics, distinguishing between "black boxes" (systems whose internal structure is unknown) and "white boxes" (systems built with a known structural plan).[22][23] Many other engineers, scientists, and epistemologists, such as Mario Bunge, used and refined black box theory in the 1960s.[24]

Systems theory

The black box is basic to open systems theory, which focuses on the input and output flows a system exchanges with its surroundings.

In systems theory, the black box is a fundamental abstraction for analyzing open systems: systems that exchange matter, energy, or information with their environment. The key insight is that a system's behavior can be characterized entirely by the relationship between its inputs (stimuli from the environment) and outputs (responses to the environment), without reference to internal structure.[3]

Formal characterization

Mario Bunge formalized black box theory in 1963, defining it as the study of systems where "the constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological."[3] On this view, a black box is characterized by:

  1. A distinction between what lies inside and outside the system boundary
  2. Observable inputs that the experimenter can control or measure
  3. Observable outputs that result from the system's internal processes
  4. An assumed causal relationship connecting inputs to outputs (the "explanatory principle")[25]

The theory assumes only that inputs precede their associated outputs in time—what Bunge called "antecedence."[26] No specific variables, laws, or constraints on internal mechanism are required. This generality makes black box theory applicable to physical, biological, economic, and social systems alike.
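
As a rough sketch, this characterization can be written out in Python. The class names, the timestamps, and the hidden rule inside _respond are hypothetical; only the externally observable input and output and the antecedence assumption are taken from the description above.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        """One protocol entry: a stimulus, the response it produced, and their times."""
        stimulus: float
        t_in: float
        response: float
        t_out: float

    @dataclass
    class BlackBox:
        """A system observed only from outside; its mechanism stays behind _respond()."""
        protocol: list = field(default_factory=list)

        def _respond(self, stimulus: float) -> float:
            # Hidden mechanism (hypothetical); the observer never inspects this.
            return 2.0 * stimulus + 1.0

        def stimulate(self, stimulus: float) -> float:
            t_in = time.monotonic()
            response = self._respond(stimulus)
            t_out = time.monotonic()
            assert t_in <= t_out  # antecedence: the input precedes its associated output
            self.protocol.append(Observation(stimulus, t_in, response, t_out))
            return response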

The role of the observer

The only source of knowledge about a black box is the protocol: a record of input-output pairs observed over time. As Ashby emphasized, "all knowledge obtainable from a Black Box (of given input and output) is such as can be obtained by re-coding the protocol; all that, and nothing more."[27]

By examining the protocol, an observer may detect regularities—patterns in which certain inputs reliably produce certain outputs. These regularities permit prediction. If input X has always produced output Y, the observer may reasonably expect it to do so again. Ashby called a systematized set of such regularities a canonical representation of the box.[28] When the observer can also control the inputs, the investigation becomes an experiment, and hypotheses about cause and effect can be tested directly.[27]
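
A minimal sketch of this procedure in Python, assuming a protocol of discrete (input, output) pairs; the device and its responses are hypothetical. Re-coding the protocol into a table of regularities gives a crude canonical representation that supports prediction.

    from collections import defaultdict

    def build_canonical_table(protocol):
        """Re-code a protocol of (input, output) pairs into a table of observed regularities."""
        table = defaultdict(set)
        for stimulus, response in protocol:
            table[stimulus].add(response)
        return table

    def predict(table, stimulus):
        """Predict the response to a stimulus; None if the protocol gives no unambiguous regularity."""
        responses = table.get(stimulus, set())
        return next(iter(responses)) if len(responses) == 1 else None

    # A protocol recorded from some opaque device (values hypothetical)
    protocol = [("switch A", "high pitch"), ("switch B", "low pitch"), ("switch A", "high pitch")]
    table = build_canonical_table(protocol)
    print(predict(table, "switch A"))  # "high pitch": a regularity found in the protocol
    print(predict(table, "switch C"))  # None: nothing in the protocol supports a prediction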

Limits of black box analysis

Black box analyses face a fundamental limitation: multiple internal mechanisms can produce identical input-output behavior. Claude Shannon demonstrated that any given pattern of external behavior in an electrical network can be realized by indefinitely many internal structures.[29] Black box observation can reveal what a system does but cannot uniquely determine how it does it.
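
A small illustration of this underdetermination, using two hypothetical Python implementations: their internal mechanisms differ, but their observable input-output behaviour is identical, so no protocol of probes can distinguish them.

    def box_arithmetic(n: int) -> int:
        """Mechanism 1: squares the input directly."""
        return n * n

    def box_accumulator(n: int) -> int:
        """Mechanism 2: sums the first |n| odd numbers, a different internal structure."""
        total = 0
        for k in range(abs(n)):
            total += 2 * k + 1
        return total

    # Externally the two boxes are indistinguishable on any finite set of probes
    assert all(box_arithmetic(n) == box_accumulator(n) for n in range(-10, 11))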

Bunge identified three related problems:[30]

  1. The prediction problem: given knowledge of the system's properties and an input, find the output
  2. The inverse prediction problem: given the system's properties and an output, find which input caused it
  3. The explanation problem: given observed input-output pairs, determine what kind of system could produce them

The prediction problem is typically well-defined. The inverse problems are often ill-posed: infinitely many combinations of inputs and mechanisms could produce the same observed output.[31]
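
Continuing the hypothetical squaring box from the sketch above, the inverse prediction problem is already ill-posed there: a single observed output is compatible with more than one input.

    def box(n: int) -> int:
        # Hypothetical black box whose observed behaviour is squaring
        return n * n

    observed_output = 9
    candidate_inputs = [n for n in range(-10, 11) if box(n) == observed_output]
    print(candidate_inputs)  # [-3, 3]: the output alone does not determine which input caused it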

White, grey, and black

Wiener contrasted the black box with a white box: a system built according to a known structural plan so that the relationship between input and output is determined in advance.[22] Most investigated systems fall between these extremes. They are partially transparent, with some internal structure known and some remaining opaque. Such systems are sometimes called grey boxes.

"Whitening" a black box—the process by which an initially opaque system becomes understood—is a central aim of science and engineering. However, some theorists argue that complete whitening is impossible: every white box, examined more closely, reveals further black boxes within.[32] As Ashby observed, even a familiar bicycle is a black box at the level of interatomic forces.[6]

Other theories

The observed hydrograph plots the response of a watershed (a black box) to an input of rainfall (blue) as runoff (red).

Black box theories are those theories defined only in terms of their function.[33][34] The term can be applied in any field where some inquiry is made into the relations between aspects of the appearance of a system (exterior of the black box), with no attempt made to explain why those relations should exist (interior of the black box). In this context, Newton's theory of gravitation can be described as a black box theory.[35]

Specifically, the inquiry focuses on a system whose characteristics are not immediately apparent: most of the relevant information is held inside the system, out of reach of direct observation, and the observer is assumed to be initially ignorant of it. The black box element of the definition is characterised by a system in which observable inputs enter a (perhaps notional) box and a set of different, also observable, outputs emerges.[36]

Adoption in humanities

In humanities disciplines such as philosophy of mind and behaviorism, black box theory is used to describe and understand psychological factors, for example in marketing analyses of consumer behaviour.[37][38][39]

Black box theory

Black Box theory is even wider in application than professional studies:

The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.

— Ashby[40]

(...) This simple rule proved very effective and is an illustration of how the Black Box principle in cybernetics can be used to control situations that, if gone into deeply, may seem very complex.
A further example of the Black Box principle is the treatment of mental patients. The human brain is certainly a Black Box, and while a great deal of neurological research is going on to understand the mechanism of the brain, progress in treatment is also being made by observing patients' responses to stimuli.

— Duckworth, Gear and Lockett[41]

Applications

When the observer (an agent) can also apply a stimulus (input), the relation with the black box is not merely an observation but an experiment.

Computing and mathematics

  • In computer programming and software engineering, black box testing is used to check that the output of a program is as expected, given certain inputs.[42] The term "black box" is used because the actual program being executed is not examined (see the sketch after this list).
  • In computing in general, a black box program is one where the user cannot see the inner workings (perhaps because it is a closed source program) or one which has no side effects and the function of which need not be examined, a routine suitable for re-use.
  • Also in computing, a black box refers to a piece of equipment provided by a vendor for the purpose of using that vendor's product. It is often the case that the vendor maintains and supports this equipment, and the company receiving the black box typically is hands-off.
  • In mathematical modeling, a black-box model is the limiting case in which no a priori information about the system's internal structure is used.
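
A minimal sketch of the black-box testing idea from the first bullet above, in Python: the test exercises a hypothetical sort_numbers routine only through its inputs and outputs, never inspecting its implementation.

    import unittest

    def sort_numbers(values):
        # Stand-in implementation; a black-box test treats this body as invisible
        return sorted(values)

    class SortNumbersBlackBoxTest(unittest.TestCase):
        def test_specified_inputs_produce_expected_outputs(self):
            # Only inputs and outputs are checked; the internal algorithm is never examined
            self.assertEqual(sort_numbers([3, 1, 2]), [1, 2, 3])
            self.assertEqual(sort_numbers([]), [])
            self.assertEqual(sort_numbers([5, 5, 1]), [1, 5, 5])

    if __name__ == "__main__":
        unittest.main()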

Science and technology

  • In neural networks and heuristic algorithms (terms generally used to describe "learning" computers or "AI simulations"), a black box describes the constantly changing part of the program environment that cannot easily be tested by the programmers. Such a system is sometimes also called a white box, in the sense that the program code can be seen, but the code is so complex that it is functionally equivalent to a black box (see the probing sketch after this list).
  • In physics, a black box is a system whose internal structure is unknown, or need not be considered for a particular purpose.
  • In cryptography, the black box captures the notion of the knowledge obtained by an algorithm through the execution of a cryptographic protocol, such as a zero-knowledge proof protocol: if the algorithm's output when interacting with the protocol can be matched by a simulator given only certain inputs, then the algorithm needs to know no more than those inputs.
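
As a rough illustration of the first bullet in this list, one way to study such an opaque model is to probe it: vary one input at a time and record how the output moves. The opaque_model below is a hypothetical stand-in for a trained network whose code is unavailable or impractical to read.

    def opaque_model(x: float, y: float) -> float:
        # Hypothetical trained model; the probing code below treats it as a black box
        return 3.0 * x - 0.5 * y * y

    def probe_sensitivity(model, x: float, y: float, eps: float = 1e-3) -> dict:
        """Estimate how strongly each input influences the output, using only input-output queries."""
        base = model(x, y)
        return {
            "x": (model(x + eps, y) - base) / eps,
            "y": (model(x, y + eps) - base) / eps,
        }

    print(probe_sensitivity(opaque_model, x=1.0, y=2.0))
    # roughly {'x': 3.0, 'y': -2.0}: behaviour characterized without reading the model's internals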

Other applications

See also

Notes

References

  1. ^ Bunge 1963.
  2. ^ Haskel-Ittah, Michal (April 2023). "Explanatory black boxes and mechanistic reasoning". Journal of Research in Science Teaching. 60 (4): 915–933. Bibcode:2023JRScT..60..915H. doi:10.1002/tea.21817. ISSN 0022-4308.
  3. ^ a b c d Bunge 1963, p. 346.
  4. ^ Ashby 1956, pp. 88–89.
  5. ^ Ashby 1956, pp. 86–88.
  6. ^ a b c Ashby 1956, p. 110.
  7. ^ a b Petrick 2020, p. 585.
  8. ^ Glanville 2009, pp. 155–157.
  9. ^ Petrick 2020, pp. 577–578.
  10. ^ Galison 1994, p. 247.
  11. ^ von Hilgers 2011, pp. 47–48.
  12. ^ von Hilgers 2011, p. 48.
  13. ^ Galison 1994, pp. 229–236.
  14. ^ Galison 1994, p. 242.
  15. ^ Petrick 2020, p. 578.
  16. ^ Belevitch 1962.
  17. ^ Cauer, Mathis & Pauli 2000, p. 4.
  18. ^ Petrick 2020, p. 579.
  19. ^ Petrick 2020, p. 581.
  20. ^ Ashby 1956, pp. 86–117.
  21. ^ Petrick 2020, pp. 587–588.
  22. ^ a b Wiener 2019, p. xi.
  23. ^ Petrick 2020, p. 590.
  24. ^ Bunge 1963, pp. 346–358.
  25. ^ Glanville 2009, pp. 153–154.
  26. ^ Bunge 1963, p. 357.
  27. ^ a b Ashby 1956, p. 89.
  28. ^ Ashby 1956, pp. 90–91.
  29. ^ Ashby 1956, p. 102.
  30. ^ Bunge 1963, p. 347.
  31. ^ Bunge 1963, pp. 347, 357.
  32. ^ Glanville 2009, p. 163.
  33. ^ Definition from Answers.com
  34. ^ Bunge, Mario (1963). "A General Black Box Theory". Philosophy of Science. 30 (4): 346–358. doi:10.1086/287954. S2CID 123014360. Retrieved 23 December 2020.
  35. ^ Vincent Wilmot, "Sir Isaac Newton – mathematical laws Black Box theory", new-science-theory.com, retrieved 13 October 2022.
  36. ^ Bunge, M. (1963). "A General Black Box Theory". Philosophy of Science. 30 (4): 346–358. doi:10.1086/287954. JSTOR 186066. Retrieved 8 January 2024.
  37. ^ Institute for Working Futures, part of Advanced Diploma in Logistics and Management. Archived 26 June 2012 at the Wayback Machine. Retrieved 11/09/2011.
  38. ^ Sandhusen, Richard L. Marketing (black-box theory used to understand consumer behaviour). Retrieved 11/09/2011.
  39. ^ Designing of websites. Retrieved 11/09/2011.
  40. ^ Ashby 1956.
  41. ^ Duckworth, W. E.; Gear, A. E.; Lockett, A. G. (1977). A Guide to Operational Research. doi:10.1007/978-94-011-6910-3.
  42. ^ Beizer, Boris (1995). Black-Box Testing: Techniques for Functional Testing of Software and Systems. ISBN 0-471-12094-4.
  43. ^ "Mind as a Black Box: The Behaviorist Approach", pp. 85–88, in Friedenberg, Jay; and Silverman, Gordon; Cognitive Science: An Introduction to the Study of Mind, Sage Publications, 2006.

Bibliography