
In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. Like the more familiar addition and multiplication operations of arithmetic, it is both associative and commutative.
This operation can be used in the conversion of Cartesian coordinates to polar coordinates, and in the calculation of Euclidean distance. It also provides a simple notation and terminology for the diameter of a cuboid, the energy-momentum relation in physics, and the overall noise from independent sources of noise. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature.[1] A scaled version of this operation gives the quadratic mean or root mean square.
It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. Donald Knuth has written that "Most of the square root operations in computer programs could probably be avoided if [Pythagorean addition] were more widely available, because people seem to want square roots primarily when they are computing distances."[2]
Definition
| Hypotenuse calculator | |
|---|---|
| a | 3 |
| b | 4 |
| c = a ⊕ b | 5 |
According to the Pythagorean theorem, for a right triangle with side lengths a and b, the length c of the hypotenuse can be calculated as

c = √(a² + b²).

This formula defines the Pythagorean addition operation, denoted here as ⊕: for any two real numbers a and b, the result of this operation is defined to be[3]

a ⊕ b = √(a² + b²).

For instance, the special right triangle based on the Pythagorean triple (3, 4, 5) gives 3 ⊕ 4 = 5.[4] However, the integer result of this example is unusual: for other integer arguments, Pythagorean addition can produce a quadratic irrational number as its result.[5]
Properties
The operation ⊕ is associative[6][7] and commutative.[6][8] Therefore, if three or more numbers are to be combined with this operation, the order of combination makes no difference to the result:

a ⊕ b = b ⊕ a and (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c).

Additionally, on the non-negative real numbers, zero is an identity element for Pythagorean addition. On numbers that can be negative, the Pythagorean sum with zero gives the absolute value:[3]

a ⊕ 0 = √(a² + 0²) = |a|.

The three properties of associativity, commutativity, and having an identity element (on the non-negative numbers) are the defining properties of a commutative monoid.[9][10]
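These identities can be checked numerically with Python's standard-library math.hypot function (a sketch; associativity holds only up to floating-point rounding, so a tolerance is used for that check):

```python
import math

a, b, c = 3.0, 4.0, 12.0

# Commutativity: a ⊕ b = b ⊕ a
assert math.hypot(a, b) == math.hypot(b, a)

# Associativity, up to floating-point rounding:
# (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c)
left = math.hypot(math.hypot(a, b), c)   # hypot(5, 12) = 13
right = math.hypot(a, math.hypot(b, c))
assert math.isclose(left, right)

# Identity on non-negatives; absolute value in general: a ⊕ 0 = |a|
assert math.hypot(-5.0, 0.0) == 5.0

print(left)  # 13.0, since 3 ⊕ 4 ⊕ 12 = 13
```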
Applications
Distance and diameter

The Euclidean distance between two points in the Euclidean plane, given by their Cartesian coordinates (x₁, y₁) and (x₂, y₂), is[11]

d = (x₂ − x₁) ⊕ (y₂ − y₁) = √((x₂ − x₁)² + (y₂ − y₁)²).

In the same way, the distance between three-dimensional points (x₁, y₁, z₁) and (x₂, y₂, z₂) can be found by repeated Pythagorean addition as[11]

d = (x₂ − x₁) ⊕ (y₂ − y₁) ⊕ (z₂ − z₁).
Repeated Pythagorean addition can also find the diagonal length of a rectangle and the diameter of a rectangular cuboid. For a rectangle with sides a and b, the diagonal length is a ⊕ b.[12][13] For a cuboid, the diameter is the longest distance between two points, the length of the body diagonal of the cuboid. For a cuboid with side lengths a, b, and c, this length is a ⊕ b ⊕ c.[13]
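In Python, for example, this repeated Pythagorean addition is a single library call, since math.hypot has accepted any number of coordinates since Python 3.8 (a sketch; the function name distance_3d is illustrative):

```python
import math

def distance_3d(p, q):
    """Euclidean distance between 3D points via Pythagorean addition.

    Equivalent to the nested form hypot(hypot(dx, dy), dz); since
    Python 3.8, math.hypot accepts any number of arguments directly.
    """
    return math.hypot(q[0] - p[0], q[1] - p[1], q[2] - p[2])

# Body diagonal of a 1 × 2 × 2 cuboid: 1 ⊕ 2 ⊕ 2 = √9 = 3
print(distance_3d((0, 0, 0), (1, 2, 2)))  # 3.0
```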
Coordinate conversion
Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function (a two-parameter form of the arctangent) to convert from Cartesian coordinates (x, y) to polar coordinates (r, θ):[14][15]

r = x ⊕ y = √(x² + y²),
θ = atan2(y, x).
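As a concrete sketch in Python, where both functions live in the standard math module:

```python
import math

def to_polar(x, y):
    """Convert Cartesian (x, y) to polar (r, theta)."""
    r = math.hypot(x, y)      # r = x ⊕ y
    theta = math.atan2(y, x)  # angle in radians, correct in all quadrants
    return r, theta

# The point (-1, 1) lies at distance √2 from the origin, at angle 3π/4.
r, theta = to_polar(-1.0, 1.0)
print(r, theta)
```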
Quadratic mean and spread of deviation
The root mean square or quadratic mean of a finite set of n numbers is 1/√n times their Pythagorean sum. This is a generalized mean of the numbers.[16]
The standard deviation of a collection of observations is the quadratic mean of their individual deviations from the mean. When two or more independent random variables are added, the standard deviation of their sum is the Pythagorean sum of their standard deviations.[16] Thus, the Pythagorean sum itself can be interpreted as giving the amount of overall noise when combining independent sources of noise.[17]
If the engineering tolerances of different parts of an assembly are treated as independent noise, they can be combined using a Pythagorean sum.[18] In experimental sciences such as physics, addition in quadrature is often used to combine different sources of measurement uncertainty.[19] However, this method of propagation of uncertainty applies only when there is no correlation between sources of uncertainty,[20] and it has been criticized for conflating experimental noise with systematic errors.[21]
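In code, combining independent uncertainties in quadrature is again just Pythagorean addition; the sketch below wraps Python's math.hypot, with the caveat from the text that this is valid only for uncorrelated error sources:

```python
import math

def combine_uncertainties(*sigmas):
    """Add independent 1-sigma uncertainties in quadrature.

    Only valid when the error sources are uncorrelated; correlated
    errors require the full covariance treatment instead.
    """
    return math.hypot(*sigmas)

# Independent errors of 0.3 and 0.4 units combine to an overall
# uncertainty of 0.5 units, not 0.7.
print(combine_uncertainties(0.3, 0.4))  # ≈ 0.5
```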
Other

The energy-momentum relation in physics, describing the energy of a moving particle, can be expressed as the Pythagorean sum

E = mc² ⊕ pc,

where m is the rest mass of a particle, p is its momentum, c is the speed of light, and E is the particle's resulting relativistic energy.[22]
When combining signals, it can be a useful design technique to arrange for the combined signals to be orthogonal in polarization or phase, so that they add in quadrature.[23][24] In early radio engineering, this idea was used to design directional antennas, allowing signals to be received while nullifying the interference from signals coming from other directions.[23] When the same technique is applied in software to obtain a directional signal from a radio or ultrasound phased array, Pythagorean addition may be used to combine the signals.[25] Other recent applications of this idea include improved efficiency in the frequency conversion of lasers.[24]
In the psychophysics of haptic perception, Pythagorean addition has been proposed as a model for the perceived intensity of vibration when two kinds of vibration are combined.[26]
In image processing, the Sobel operator for edge detection consists of a convolution step to determine the gradient of an image followed by a Pythagorean sum at each pixel to determine the magnitude of the gradient.[27]
Implementation
In a 1983 paper, Cleve Moler and Donald Morrison described an iterative method for computing Pythagorean sums, without taking square roots.[3] This was soon recognized to be an instance of Halley's method,[8] and extended to analogous operations on matrices.[7]
Although many modern implementations of this operation instead compute Pythagorean sums by reducing the problem to the square root function, they do so in a way that has been designed to avoid errors arising from the limited-precision calculations performed on computers. If calculated using the natural formula

r = √(x² + y²),

the squares of very large or small values of x and y may exceed the range of machine precision when calculated on a computer. This may lead to an inaccurate result caused by arithmetic underflow and overflow, although when overflow and underflow do not occur the output is within two ulp of the exact result.[28][29][30] Common implementations of the hypot function rearrange this calculation in a way that avoids the problem of overflow and underflow and are even more precise.[31]
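The failure mode of the naive formula is easy to demonstrate in Python: squaring a large double-precision value overflows to infinity, while math.hypot scales internally and succeeds.

```python
import math

x = y = 1e200  # representable in double precision, but x*x is not

# Naive formula: the squares overflow to infinity.
print(math.sqrt(x * x + y * y))  # inf

# math.hypot avoids the intermediate overflow.
print(math.hypot(x, y))          # ≈ 1.414e+200, the correct value
```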
If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).[32]
Calculation order
The difficulty with the naive implementation is that x² or y² may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that |x| ≥ |y|, and then to use the equivalent form

r = |x| √(1 + (y/x)²).

The computation of y/x cannot overflow unless both x and y are zero. If it underflows, the final result is equal to |x|, which is correct within the precision of the calculation. The value whose square root is computed lies between 1 and 2. Finally, the multiplication by |x| cannot underflow, and overflows only when the result is too large to represent.[31]
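A minimal Python sketch of this exchange-and-divide technique (illustrative only; production implementations add further case analysis and accuracy refinements):

```python
import math

def hypot_rearranged(x, y):
    """Compute sqrt(x*x + y*y) without overflow on large inputs.

    Ensures |x| >= |y|, then evaluates |x| * sqrt(1 + (y/x)**2),
    so the value under the square root stays between 1 and 2.
    """
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x        # now x >= y >= 0
    if x == 0.0:
        return 0.0         # both inputs were zero
    t = y / x              # in [0, 1]; cannot overflow
    return x * math.sqrt(1.0 + t * t)

# The naive formula would overflow on these inputs; this form does not.
print(hypot_rearranged(3e200, 4e200))  # ≈ 5e+200
```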
One drawback of this rearrangement is the additional division by x, which increases both the time and inaccuracy of the computation. More complex implementations avoid these costs by dividing the inputs into more cases:
- When x is much larger than y, x ⊕ y ≈ |x|, to within machine precision.
- When x² would overflow, multiply both x and y by a small scaling factor (e.g. 2⁻⁶⁴ for IEEE single precision), use the naive algorithm, which will now not overflow, and multiply the result by the (large) inverse factor (e.g. 2⁶⁴).
- When y² would underflow, scale as above but reverse the scaling factors to scale up the intermediate values.
- Otherwise, the naive algorithm is safe to use.
Additional techniques allow the result to be computed more accurately than the naive algorithm, e.g. to less than one ulp.[31] Researchers have also developed analogous algorithms for computing Pythagorean sums of more than two values.[33]
Fast approximation
The alpha max plus beta min algorithm is a high-speed approximation of Pythagorean addition using only comparison, multiplication, and addition, producing a value whose error is less than 4% of the correct result. It is computed as

α·max(|x|, |y|) + β·min(|x|, |y|)

for a careful choice of parameters α and β.[34]
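A sketch in Python, using the commonly quoted near-optimal parameters α ≈ 0.9604 and β ≈ 0.3978 (these specific values are an assumption not stated in the text above; other choices trade accuracy for simpler constants):

```python
def alpha_max_beta_min(x, y, alpha=0.9604, beta=0.3978):
    """Approximate sqrt(x*x + y*y) with no square root or division.

    With these commonly quoted parameters, the relative error stays
    below roughly 4% over all inputs.
    """
    x, y = abs(x), abs(y)
    return alpha * max(x, y) + beta * min(x, y)

# 3 ⊕ 4 = 5 exactly; the approximation lands within the 4% bound.
print(alpha_max_beta_min(3.0, 4.0))  # ≈ 5.04
```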
Programming language support
Pythagorean addition is available as the hypot function in many programming languages and their libraries. These include:
CSS,[35]
D,[36]
Fortran,[37]
Go,[38]
JavaScript (since ES2015),[11]
Julia,[39]
MATLAB,[40]
PHP,[41] and
Python.[42]
C++11 includes a two-argument version of hypot, and a three-argument version for x ⊕ y ⊕ z has been included since C++17.[43]
The Java implementation of hypot[44] can be used by its interoperable JVM-based languages, including Apache Groovy, Clojure, Kotlin, and Scala.[45]
Similarly, the version of hypot included with Ruby extends to Ruby-based domain-specific languages such as Progress Chef.[46] In Rust, hypot is implemented as a method of floating-point values rather than as a two-argument function.[47]
Metafont has Pythagorean addition and subtraction as built-in operations, under the symbols ++ and +-+ respectively.[2]
History
The Pythagorean theorem on which this operation is based was studied in ancient Greek mathematics, and may have been known earlier in Egyptian mathematics and Babylonian mathematics; see Pythagorean theorem § History.[48] However, its use for computing distances in Cartesian coordinates could not come until after René Descartes invented these coordinates in 1637; the formula for distance from these coordinates was published by Alexis Clairaut in 1731.[49]
The terms "Pythagorean addition" and "Pythagorean sum" for this operation have been used at least since the 1950s,[18][50] and its use in signal processing as "addition in quadrature" goes back at least to 1919.[23]
From the 1920s to the 1940s, before the widespread use of computers, multiple designers of slide rules included square-root scales in their devices, allowing Pythagorean sums to be calculated mechanically.[51][52][53] Researchers have also investigated analog circuits for approximating the value of Pythagorean sums.[54]
References
- ^ Johnson, David L. (2017). "12.2.3 Addition in Quadrature". Statistical Tools for the Comprehensive Practice of Industrial Hygiene and Environmental Health Sciences. John Wiley & Sons. p. 289. ISBN 9781119143017.
- ^ a b Knuth, Donald E. (1986). The METAFONTbook. Addison-Wesley. p. 80.
- ^ a b c Moler, Cleve; Morrison, Donald (1983). "Replacing square roots by Pythagorean sums". IBM Journal of Research and Development. 27 (6): 577–581. CiteSeerX 10.1.1.90.5651. doi:10.1147/rd.276.0577.
- ^ This example is from Moler & Morrison (1983). Dubrulle (1983) uses two more integer Pythagorean triples, (119,120,169) and (19,180,181), as examples.
- ^ Ellis, Mark W.; Pagni, David (May 2008). "Exploring segment lengths on the Geoboard". Mathematics Teaching in the Middle School. 13 (9). National Council of Teachers of Mathematics: 520–525. doi:10.5951/mtms.13.9.0520. JSTOR 41182606.
- ^ a b Falmagne, Jean-Claude (2015). "Deriving meaningful scientific laws from abstract, "gedanken" type, axioms: five examples". Aequationes Mathematicae. 89 (2): 393–435. doi:10.1007/s00010-015-0339-1. MR 3340218. S2CID 121424613.
- ^ a b Incertis, F. (March 1985). "A faster method of computing matrix pythagorean sums". IEEE Transactions on Automatic Control. 30 (3): 273–275. doi:10.1109/tac.1985.1103937.
- ^ a b Dubrulle, Augustin A. (1983). "A class of numerical methods for the computation of Pythagorean sums". IBM Journal of Research and Development. 27 (6): 582–589. CiteSeerX 10.1.1.94.3443. doi:10.1147/rd.276.0582.
- ^ Penner, R. C. (1999). Discrete Mathematics: Proof Techniques and Mathematical Structures. World Scientific. pp. 417–418. ISBN 9789810240882.
- ^ Deza, Michel Marie; Deza, Elena (2014). Encyclopedia of Distances. Springer. p. 100. doi:10.1007/978-3-662-44342-2. ISBN 9783662443422.
- ^ a b c Manglik, Rohit (2024). "Section 14.22: Math.hypot". Java Script Notes for Professionals. EduGorilla. p. 144. ISBN 9789367840320.
- ^ Meyer, J. G. A. (1902). "225. – To find the diagonal of a rectangle when its length and breadth are given". Easy Lessons in Mechanical Drawing & Machine Design: Arranged for Self-instruction, Vol. I. Industrial Publication Company. p. 133.
- ^ a b Grieser, Daniel (2018). "6.2 The diagonal of a cuboid". Exploring Mathematics: Problem-Solving and Proof. Springer Undergraduate Mathematics Series. Springer International Publishing. pp. 143–145. doi:10.1007/978-3-319-90321-7. ISBN 9783319903217.
- ^ "SIN (3M): Trigonometric functions and their inverses". Unix Programmer's Manual: Reference Guide (4.3 Berkeley Software Distribution Virtual VAX-11 Version ed.). Department of Electrical Engineering and Computer Science, University of California, Berkeley. April 1986.
- ^ Beebe, Nelson H. F. (2017). The Mathematical-Function Computation Handbook: Programming Using the MathCW Portable Software Library. Springer. p. 70. ISBN 9783319641102.
- ^ a b Weisberg, Herbert F. (1992). Central Tendency and Variability. Quantitative Applications in the Social Sciences. Vol. 83. Sage. pp. 45, 52–53. ISBN 9780803940079.
- ^ Schneider, D. B. (1962). "Error analysis in measuring systems". Proceedings of the 1962 Standards Laboratory Conference. p. 94.
- ^ a b Hicks, Charles R. (March 1955). "Two problems illustrating the use of mathematics in modern industry". The Mathematics Teacher. 48 (3). National Council of Teachers of Mathematics: 130–132. doi:10.5951/mt.48.3.0130. JSTOR 27954826.
- ^ Smith, Walter F. (2020). Experimental Physics: Principles and Practice for the Laboratory. CRC Press. pp. 40–41. ISBN 9781498778688.
- ^ Drosg, Manfred (2009). "Dealing with Internal Uncertainties". Dealing with Uncertainties. Springer Berlin Heidelberg. pp. 151–172. doi:10.1007/978-3-642-01384-3_8. ISBN 9783642013843.
- ^ Barlow, Roger (March 22, 2002). "Systematic errors: facts and fictions". Conference on Advanced Statistical Techniques in Particle Physics. Durham, UK. pp. 134–144. arXiv:hep-ex/0207026.
- ^ Kuehn, Kerry (2015). A Student's Guide Through the Great Physics Texts: Volume II: Space, Time and Motion. Undergraduate Lecture Notes in Physics. Springer New York. p. 372. doi:10.1007/978-1-4939-1366-4. ISBN 9781493913664.
- ^ a b c Weagant, R. A. (June 1919). "Reception thru static and interference". Proceedings of the IRE. 7 (3): 207–244. doi:10.1109/jrproc.1919.217434. See p. 232.
- ^ a b Eimerl, D. (August 1987). "Quadrature frequency conversion". IEEE Journal of Quantum Electronics. 23 (8): 1361–1371. doi:10.1109/jqe.1987.1073521.
- ^ Powers, J. E.; Phillips, D. J.; Brandestini, M.; Ferraro, R.; Baker, D. W. (1980). "Quadrature sampling for phased array application". In Wang, Keith Y. (ed.). Acoustical Imaging: Visualization and Characterization. Vol. 9. Springer. pp. 263–273. doi:10.1007/978-1-4684-3755-3_18. ISBN 9781468437553.
- ^ Yoo, Yongjae; Hwang, Inwook; Choi, Seungmoon (April 2022). "Perceived intensity model of dual-frequency superimposed vibration: Pythagorean sum". IEEE Transactions on Haptics. 15 (2): 405–415. doi:10.1109/toh.2022.3144290.
- ^ Kanopoulos, N.; Vasanthavada, N.; Baker, R.L. (April 1988). "Design of an image edge detection filter using the Sobel operator". IEEE Journal of Solid-State Circuits. 23 (2): 358–367. doi:10.1109/4.996.
- ^ Jeannerod, Claude-Pierre; Muller, Jean-Michel; Plet, Antoine (2017). "The classical relative error bounds for computing √(x²+y²) and x/√(x²+y²) in binary floating-point arithmetic are asymptotically optimal". In Burgess, Neil; Bruguera, Javier D.; de Dinechin, Florent (eds.). 24th IEEE Symposium on Computer Arithmetic, ARITH 2017, London, United Kingdom, July 24–26, 2017. IEEE Computer Society. pp. 66–73. doi:10.1109/ARITH.2017.40.
- ^ Muller, Jean-Michel; Salvy, Bruno (2024). "Effective quadratic error bounds for floating-point algorithms computing the hypotenuse function". arXiv:2405.03588 [math.NA].
- ^ Ziv, Abraham (1999). "Sharp ULP rounding error bound for the hypotenuse function". Mathematics of Computation. 68 (227): 1143–1148. doi:10.1090/S0025-5718-99-01103-5. JSTOR 2584955. MR 1648423.
- ^ a b c Borges, Carlos F. (2021). "Algorithm 1014: An Improved Algorithm for hypot(x, y)". ACM Transactions on Mathematical Software. 47 (1): 9:1–9:12. arXiv:1904.09481. doi:10.1145/3428446. S2CID 230588285.
- ^ Fog, Agner (April 27, 2020). "Floating point exception tracking and NAN propagation" (PDF). p. 6.
- ^ Lefèvre, Vincent; Louvet, Nicolas; Muller, Jean-Michel; Picot, Joris; Rideau, Laurence (2023). "Accurate calculation of Euclidean norms using double-word arithmetic" (PDF). ACM Transactions on Mathematical Software. 49 (1) 1: 1–34. doi:10.1145/3568672. MR 4567887.
- ^ Lyons, Richard G. (2010). "13.2 High-speed vector magnitude approximation". Understanding Digital Signal Processing (3rd ed.). Pearson. pp. 13-6 – 13-8.
- ^ Cimpanu, Catalin (March 10, 2019). "CSS to get support for trigonometry functions". ZDNet. Retrieved 2019-11-01.
- ^ "std.math.algebraic". Phobos Runtime Library Reference, version 2.109.1. D Language Foundation. Retrieved 2025-02-21.
- ^ Reid, John (March 13, 2014). "9.6 Error and gamma functions". The new features of Fortran 2008 (PDF) (Report N1891). ISO/IEC JTC 1/SC 22, WG5 international Fortran standards committee. p. 20.
- ^ Summerfield, Mark (2012). Programming in Go: Creating Applications for the 21st Century. Pearson Education. p. 66. ISBN 9780321774637.
- ^ Nagar, Sandeep (2017). Beginning Julia Programming: For Engineers and Scientists. Apress. p. 105. ISBN 9781484231715.
- ^ Higham, Desmond J.; Higham, Nicholas J. (2016). "26.9 Pythagorean sum". MATLAB Guide (3rd ed.). Society for Industrial and Applied Mathematics. pp. 430–432. ISBN 9781611974669.
- ^ Atkinson, Leon; Suraski, Zeev (2004). "Listing 13.17: hypot". Core PHP Programming. Prentice Hall. p. 504. ISBN 9780130463463.
- ^ Hill, Christian (2020). Learning Scientific Programming with Python (2nd ed.). Cambridge University Press. p. 14. ISBN 9781108787468.
- ^ Hanson, Daniel (2024). Learning Modern C++ for Finance. O'Reilly. p. 25. ISBN 9781098100773.
- ^ Horton, Ivor (2005). Ivor Horton's Beginning Java 2. John Wiley & Sons. p. 57. ISBN 9780764568749.
- ^ van der Leun, Vincent (2017). "Java Class Library". Introduction to JVM Languages: Java, Scala, Clojure, Kotlin, and Groovy. Packt Publishing Ltd. pp. 10–11. ISBN 9781787126589.
- ^ Taylor, Mischa; Vargo, Seth (2014). "Mathematical operations". Learning Chef: A Guide to Configuration Management and Automation. O'Reilly Media. p. 40. ISBN 9781491945117.
- ^ "Primitive Type f64". The Rust Standard Library. February 17, 2025. Retrieved 2025-02-22.
- ^ Maor, Eli (2007). The Pythagorean Theorem: A 4,000-Year History. Princeton, New Jersey: Princeton University Press. pp. 4–15. ISBN 978-0-691-12526-8.
- ^ Maor (2007), pp. 133–134.
- ^ van Dantzig, D. (1953). "Another form of the weak law of large numbers" (PDF). Nieuw Archief voor Wiskunde. 3rd ser. 1: 129–145. MR 0056872.
- ^ Morrell, William E. (January 1946). "A slide rule for the addition of squares". Science. 103 (2665): 113–114. doi:10.1126/science.103.2665.113. JSTOR 1673946.
- ^ Dempster, J. R. (April 1946). "A circular slide rule". Science. 103 (2677): 488. doi:10.1126/science.103.2677.488.b. JSTOR 1671874.
- ^ Dawson, Bernhard H. (July 1946). "An improved slide rule for the addition of squares". Science. 104 (2688): 18. doi:10.1126/science.104.2688.18.c. JSTOR 1675936.
- ^ Stern, T. E.; Lerner, R. M. (April 1963). "A circuit for the square root of the sum of the squares". Proceedings of the IEEE. 51 (4): 593–596. doi:10.1109/proc.1963.2206.