ƒ or f: conclusion
Okay, I guess I concede this one. --Dark Charles 10:08, 20 November 2010 (UTC)

I think we can conclude this discussion by noting that there was a consensus that f is preferred to ƒ.  // stpasha » 05:37, 23 November 2010 (UTC)


== Request for addition ==

Revision as of 05:37, 23 November 2010

Overall and general population

I noted in the example in the 'age standardisation' entry that the heart disease rates amongst indigenous Australians are compared to the general population and to the overall population. Are the general population and the overall population the same?

Should the curve and surface fitting web site zunzun.com be discussed anywhere in Wikipedia? It seems relevant to some of the statistics topics.

What is a histogram?

There is a dispute between me and User:Nijdam on the definition of histogram. To me, a histogram is a mathematical method to estimate a distribution, which is usually plotted with a bar chart. To Nijdam, it is simply "a diagram". I personally think (and have been taught) that the identification of "histogram" with "bar chart" is a common misconception, but I am having a bit of a hard time finding a positive reference on Gbooks (I can't go to a library today since I am at home with a cold). Can someone knowledgeable here come and help untangle the dispute? Thanks! --Cyclopiatalk 13:40, 6 September 2010 (UTC)[reply]

For a dataset D with N elements Di, a histogram plots the frequency F of the Di residing in each of the intervals Ii of the set I. The limits of I and D are equal. Relative frequency histograms, which plot F/N in place of F, are also used. For clarity, I carries the same units as D, but F and F/N are unitless. Hope that was bookish enough for you. SEB —Preceding unsigned comment added by 63.245.15.11 (talk) 20:30, 17 September 2010 (UTC)[reply]
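SEB's description above can be sketched as code: a minimal Python illustration under the same notation (equal-width intervals and the function name are my own assumptions, not part of the comment).

```python
def histogram(data, num_bins):
    # For a dataset D with N elements, count the frequency F_i of the
    # elements falling in each of num_bins equal-width intervals I_i
    # spanning [min(D), max(D)] -- so the limits of I and D are equal.
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for x in data:
        # The maximum value is clamped into the last interval.
        i = min(int((x - lo) / width), num_bins - 1)
        counts[i] += 1
    edges = [lo + k * width for k in range(num_bins + 1)]
    return edges, counts

edges, counts = histogram([1, 2, 2, 3, 3, 3, 9], 4)
# counts == [3, 3, 0, 1]; dividing each F_i by N = 7 gives the
# relative frequencies F_i / N.
```

Plotting those counts as bars is what yields the familiar picture, which is where the histogram-versus-bar-chart confusion discussed here comes from.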

I believe a histogram and a bar chart are both plots, in the sense that they are ways of visualising a given dataset or looking at its distribution, although neither gives any analytical information on the distribution of the data, only qualitative information. As to the difference between the two, a histogram is typically a more rigorously defined thing, consisting of equally spaced intervals with the end of each interval being the start of the next, while a bar chart can be a little more open to variations, with the 'bars' being separate, possibly on uneven intervals, and often used for categorical data. Armadilloa (talk) 06:21, 10 September 2010 (UTC)[reply]

Thanks, however I'd prefer to see some reference from a book of mathematical statistics. --Cyclopiatalk 12:29, 10 September 2010 (UTC)[reply]

Biased coin and urn model experimental designs

I recently was searching through the wiki for articles on biased coin design studies and urn model studies (an extension thereof), and realised there weren't any, so I was hoping to start articles on these topics. I am an absolute beginner at making and/or editing wiki articles, so any and all help/advice would be very helpful, i.e. how to go about doing this, where to create the articles, how to link them to other existing articles (experimental design, etc.), etc. But I also wanted to ask if maybe there were already pages on these topics, and maybe I just failed to find them? If this is the wrong place to make this post, please let me know. Thanks a lot.

Armadilloa (talk) 06:14, 10 September 2010 (UTC)[reply]

Deletion proposal

A new article, Multifactor design of experiments software, has been proposed for deletion. See and contribute to the discussion at Wikipedia:Articles for deletion/Multifactor design of experiments software. Melcombe (talk) 13:35, 13 September 2010 (UTC)[reply]

Note that the result was "Keep". Melcombe (talk)

Statistics articles have been selected for the Wikipedia 0.8 release

Version 0.8 is a collection of Wikipedia articles selected by the Wikipedia 1.0 team for offline release on USB key, DVD and mobile phone. Articles were selected based on their assessed importance and quality, then article versions (revisionIDs) were chosen for trustworthiness (freedom from vandalism) using an adaptation of the WikiTrust algorithm.

We would like to ask you to review the Statistics articles and revisionIDs we have chosen. Selected articles are marked with a diamond symbol (♦) to the right of each article, and this symbol links to the selected version of each article. If you believe we have included or excluded articles inappropriately, please contact us at Wikipedia talk:Version 0.8 with the details. You may wish to look at your WikiProject's articles with cleanup tags and try to improve any that need work; if you do, please give us the new revisionID at Wikipedia talk:Version 0.8. We would like to complete this consultation period by midnight UTC on Monday, October 11th.

We have greatly streamlined the process since the Version 0.7 release, so we aim to have the collection ready for distribution by the end of October, 2010. As a result, we are planning to distribute the collection much more widely, while continuing to work with groups such as One Laptop per Child and Wikipedia for Schools to extend the reach of Wikipedia worldwide. Please help us with your WikiProject's feedback!

For the Wikipedia 1.0 editorial team, SelectionBot 23:40, 19 September 2010 (UTC)[reply]

Basic error in calculating Variance

On the page http://en.wikipedia.org/wiki/Computational_formula_for_the_variance the formula for variance is incorrect.

It is shown as Var(X) = E(X^2) - (E(X))^2 whereas it should of course be Var(X) = (E(X^2) - (E(X))^2/N) / N for the variance relative to the sample mean and Var(X) = (E(X^2) - (E(X))^2/N) / (N-1) for the variance relative to the population mean.

(The same error occurs in my son's school textbook. They omitted the first /N. Perhaps this error is being propagated through textbooks.)

I'm very new so I don't feel capable of editing pages yet myself! But I'll have a go if no-one else wants to. Can anyone tell me how to start?

For reference (and from memory), the proof is

Var(X) = u / (N-1)

where

u = E(X-m)^2 and m is the mean of X

So

u = E(X-m)^2
  = E(X^2 - 2mX + m^2)
  = E(X^2) - 2m(EX) + N(m^2)
  = E(X^2) - 2(EX)(EX)/N + N((EX)/N)^2
  = E(X^2) - 2(EX)(EX)/N + N(EX)(EX)/N^2
  = E(X^2) - 2(EX)(EX)/N + (EX)(EX)/N
  = E(X^2) - (EX)(EX)/N

So

Var(X) = (E(X^2) - (EX)(EX)/N) / (N-1)

And, by a similar proof

Cov(X,Y) = (E(XY) - (EX)(EY)/N) / (N-1)

Peterbalch (talk) 16:47, 20 September 2010 (UTC)[reply]

Peterbalch, generally, Wikipedia tries to follow what is in textbooks, even if the textbooks are wrong (read more in Wikipedia's verifiability policy). If you want to update the formula, generally you should be able to find a textbook that agrees with you.
In this case, in your proof you assume E(m^2) = Nm^2. But E(c) = c for any constant. This assumes you are treating the mean as a constant in the calculation. 018 (talk) 16:56, 20 September 2010 (UTC)[reply]


Yes, I am treating the mean as a constant in the calculation because it is a constant in the calculation.

If you wish verification, consider the following:

Assume that the samples are 3,3,3,3. The values are all the same so the variance is zero.

If we assume that

Var(X) = E(X^2) - (EX)^2

as the article states then

EX = 12

E(X^2) = 36

Var(X) = 36 - 12^2 = -108

A negative variance is clearly a ludicrous result.

You will find that the formula I gave gives the correct result.

Are you saying that if a formula can be mathematically demonstrated to be false then you will refuse to allow it to be corrected until you are shown a different textbook? Are we to have a battle of "well my textbook says ...". I can refer to the stats textbook I used in 1973; would that be sufficient?

I guess I don't understand the rules.

Peterbalch (talk) 20:12, 20 September 2010 (UTC)[reply]

Peterbalch, I think you would be better taking your questions to the math reference desk. 018 (talk) 20:40, 20 September 2010 (UTC)[reply]
In your case, E(X) is 3. E(X) is the expected value. Sum(X) is 12, but E(X) is Sum(X)/n. So, E(X^2) = 9, E(X)² = 9, so Var = 0, as it should be. -- Avi (talk) 20:43, 20 September 2010 (UTC)[reply]
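Avi's arithmetic above can be checked directly. This is a throwaway Python sketch, not part of the discussion; the function names are my own. Reading E as the mean Sum(X)/n rather than the sum, the computational formula gives zero for the constant data, never a negative value.

```python
# E(X) is the mean Sum(X)/n, not the sum -- the point made above.
# With that reading, Var(X) = E(X^2) - (E(X))^2 is zero for the
# constant data {3, 3, 3, 3}.
def expectation(xs):
    return sum(xs) / len(xs)

def variance(xs):
    return expectation([x * x for x in xs]) - expectation(xs) ** 2

xs = [3, 3, 3, 3]
print(expectation(xs))  # 3.0, not 12
print(variance(xs))     # 0.0, as it should be
```

Plugging in EX = 12 (the sum) instead of 3 (the mean) is exactly what produced the ludicrous −108 above.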
Be careful there, E(X) is a population parameter, it can only be estimated from a sample. This is why I sent him to the reference desk--there are many different levels of confusion. 018 (talk) 20:55, 20 September 2010 (UTC)[reply]
In his case, the population was explicitly defined as {3, 3, 3, 3}, so E(X) = 3, no? -- Avi (talk) 20:58, 20 September 2010 (UTC)[reply]
Moreover, the initial function in the article he references is discussing the variance and expectation of the random variable itself, not the estimates from data, which may be part of the confusion as well. As it says "In probability theory and statistics, the computational formula for the variance Var(X) of a random variable X is the formula…where E(X) is the expected value of X." -- Avi (talk) 21:00, 20 September 2010 (UTC)[reply]
This is actually an interesting question. The article talks about Var(X) as well as the s2. Which is it about? I think that is worth a discussion on the talk page. 018 (talk) 21:10, 20 September 2010 (UTC)[reply]
The next formula is a "closely related identity" which "can be used to calculate the sample variance." There are formulæ for both, but the two should not be confused. -- Avi (talk) 21:57, 20 September 2010 (UTC)[reply]
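The distinction being drawn here can be made concrete with an illustrative Python sketch (divisor conventions only; the names are my own): the variance of a fully specified population divides by n, while the sample variance s^2 estimated from data divides by n − 1 (Bessel's correction).

```python
def population_variance(xs):
    # Var(X) when xs is the whole population: divide by n.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sample_variance(xs):
    # s^2 when xs is a sample from a larger population: divide by n - 1.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [1, 2, 3, 4]
print(population_variance(data))  # 1.25
print(sample_variance(data))      # 1.666...
```

The two formulae should not be confused, which is precisely the ambiguity in the article under discussion.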
I think he confuses the symbols Σ and E, which may look similar to someone unfamiliar with the Greek alphabet.  // stpasha »  01:00, 21 September 2010 (UTC)[reply]
The confusion starts by having an article entitled "Computational formulae ..." when really it is about algebraic formulae ... algebra to use in finding a formula for a variance, rather than computational steps that are sensible to implement on a computer. What the OP is really concerned with may be better answered in the articles variance and/or Sample mean and sample covariance, and there may be others. Melcombe (talk) 08:42, 21 September 2010 (UTC)[reply]
Melcombe, go check out the article and what links there. I also started a discussion about what the page is supposed to be about (is it the sample variance or the variance of a random variable?). I think computation is meant to be "how you compute something," so differential calculus would qualify as a method of computing the rate of change in a function with respect to its argument. 018 (talk) 02:09, 24 September 2010 (UTC)[reply]
Go look up the meaning of "compute" and the origins of the word "computer". To compute something is not the same as to "find a formula for" it. Melcombe (talk) 08:37, 24 September 2010 (UTC)[reply]

Laplace distribution

How does one get the final formula for the mode of the Laplace distribution? —Preceding unsigned comment added by 41.254.3.118 (talk) 20:24, 21 September 2010 (UTC)[reply]

Deletion proposal

I have placed an AfD template on Gaussian minus exponential distribution. Please see Wikipedia:Articles for deletion/Gaussian minus exponential distribution and comment, etc. There was discussion previously on this page about this article, now archived, but really only to say that it is poor. Melcombe (talk) 15:49, 23 September 2010 (UTC)[reply]

Multivariate kernel density estimation

The article on multivariate kernel density estimation is very new but very good. Could someone who is more familiar than me with the rating of statistics articles take a look at it and give it a rating? I have a feeling it may be a B-class article, but I could be wrong. Yaris678 (talk) 07:44, 24 September 2010 (UTC)[reply]

The article seems pretty good; however, why oh why has it been split out from kernel density estimation? I suggest that the two articles be merged, since they use essentially the same method.  // stpasha »  15:58, 24 September 2010 (UTC)[reply]

rich get richer

We now have:

How should we organize links between these, and from other pages? Michael Hardy (talk) 15:43, 27 September 2010 (UTC)[reply]

I suggest merging The rich get richer (statistics) into Preferential attachment, since these are both about the same phenomenon in statistics. I take the blame for having created The rich get richer (statistics), since I didn't realize the preferential attachment article already existed. As for the Matthew effect, it seems to refer more to a phenomenon in sociology than statistics, so I'd keep it separate, possibly adding a sentence indicating that it's sometimes used to refer to preferential attachment in statistics. BTW, as far as the phenomenon of the "Matthew effect" in the history of science goes, I recently saw another article about someone or other's law that stated substantially the same thing, i.e. discoveries are rarely named after the person who discovered them (but usually someone else with high visibility and social standing). Benwing (talk) 09:37, 1 October 2010 (UTC)[reply]
It seems that The rich get richer (statistics) is rather different from Preferential attachment, since in the latter new observations tend to be close to previous ones because the probability model governing them mandates that, but in the former all the observations are independent under the primary model, and it is something to do with conditional distributions, marginalised over distributions for parameters, that shifts. While something like "Preferential attachment" might be going on here, the repeated marginalisation over unknown parameters that seems needed to present it as a "preferential attachment process" seems rather strongly different from the model described in Preferential attachment. After all from one point of view the observations just cluster in an iid way about an unknown point. Of course there are no references so no-one can check what is supposed to be going on in the limited context described by The rich get richer (statistics), and what is described there is rather different from what a statistician would typically mean by "the rich get richer"... who would presumably think of The rich get richer and the poor get poorer and they might be prepared to start formulating a probabilistic model for it. As it stands, what is in The rich get richer (statistics) might be better off (if it is worthwhile at all) as a sub-sub-section in some article on computational Bayesian statistics. Melcombe (talk) 13:28, 19 October 2010 (UTC)[reply]

another deletion proposal

I have placed a PROD template on Logmoment generating function, which seems to duplicate cumulant generating function. This is your chance to save it if anyone thinks it's worthwhile. Melcombe (talk) 09:17, 1 October 2010 (UTC)[reply]

Help for Piecewise regression analysis

If anyone is interested in progressing Piecewise regression analysis, please see that article's Talk at Talk:Piecewise regression analysis. Melcombe (talk) 09:16, 5 October 2010 (UTC)[reply]

Given no real progress with this article, I have placed an AfD on it, with a suggestion to "userfy" rather than delete. But please form your own opinions and contribute at Wikipedia:Articles for deletion/Piecewise regression analysis. Melcombe (talk) 12:54, 19 October 2010 (UTC)[reply]

Work on redoing intros of articles to make them clearer and less technical

Hello. Many of the statistics articles used to be very confusing, and far too technical. I have a personal interest in fixing this because my statistics knowledge has been hard-won, and too often in the past when I looked up a relevant Wikipedia article I found it impossible to make sense of. So I've tried to rewrite the intros of a number of articles to make them clearer and less technical. Among the statistics-related articles so far whose intros or description sections I've redone or significantly hacked on are:

If anyone sees any further work that needs to be done to the above articles, or notices any other statistics articles that are confusing and need work, please note this below. Thanks. Benwing (talk) 09:33, 5 October 2010 (UTC)[reply]

FYI, N = 1 fallacy has been nominated for deletion. 76.66.200.95 (talk) 05:27, 9 October 2010 (UTC)[reply]

Good riddance :)  // stpasha »  06:58, 9 October 2010 (UTC)[reply]
Discussion is at Wikipedia:Articles for deletion/N = 1 fallacy. Melcombe (talk) 09:07, 11 October 2010 (UTC)[reply]
Note the result was that a poor simple redirect to Pseudoreplication was done. Melcombe (talk) 12:57, 19 October 2010 (UTC)[reply]

Merge?

I have proposed merging these three articles:

Michael Hardy (talk) 02:11, 11 October 2010 (UTC)[reply]


In turn, I will propose to merge three other articles:

into the parent article Probability distribution.  // stpasha »  01:30, 12 October 2010 (UTC)[reply]

Linear Least Squares

FYI, the usage of Linear least squares is under debate, see Talk:Numerical methods for linear least squares.

76.66.198.128 (talk) 03:53, 21 October 2010 (UTC)[reply]

Layman

Would it be possible for more of this information to be written for the layman?

I.e., it seems that just searching for the two-tailed Mann-Whitney U-test requires a postgraduate qualification in statistics to determine whether it is more suitable than a Student's t-test in science articles that need reviewing.

I would like to search for general information, and find general information on topics I can normally get background information on by 'Wiki-ing'. I do not like to search for general information and find highly complicated information which is too verbose or too specific to jargon in the field.

The text is quite informative, and very detailed, but overly reliant on jargon. Please dumb it down for the basic science undergrad.

Jay-ace-n (talk) 11:13, 27 October 2010 (UTC)[reply]

A lot of statistics articles are overly technical; see section 17 just above, where I list articles that I've been working on and ask people to list other articles needing "dumbing down", but nobody did this.

Could you point to exactly which articles and sections you think are too technical, and say exactly what is too technical? Your comment above about the Mann-Whitney U-test is one example, and a good one -- which article are you referring to exactly? Thanks, Benwing (talk) 04:10, 28 October 2010 (UTC)[reply]

Skewness

The Wikipedia article on Skewness cites reference #14 (in Czech) concerning Cyhelsky's Skewness Coefficient. However, the formulation as [(the number of observations below the mean minus the number of observations above the mean)/total number of observations] yields a negative value for a right-sided skew, which is commonly described as a "positive" skewness. I don't read Czech. Perhaps someone who does read Czech could "check" to see whether the Wikipedia formulation should be revised to result in a negative coefficient when the data show a left-side skew. Thinners (talk) 22:20, 27 October 2010 (UTC)[reply]

That's what the reference says. It specifically says that when the coefficient is negative, there are more values above the average than below it. Svick (talk) 21:13, 13 November 2010 (UTC)[reply]
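For what it's worth, the sign can be checked numerically. This is an illustrative Python sketch of the formulation quoted above (function and variable names are my own, and this is not taken from the Czech reference): for a sample with a long right tail, most observations fall below the mean, so (below − above)/n comes out positive here, consistent with the usual sign convention.

```python
def cyhelsky_skew(xs):
    # (number of observations below the mean - number above the mean)
    # / total number of observations, as the formulation quoted above.
    m = sum(xs) / len(xs)
    below = sum(1 for x in xs if x < m)
    above = sum(1 for x in xs if x > m)
    return (below - above) / len(xs)

right_skewed = [1, 1, 1, 2, 10]  # long right tail; mean = 3
print(cyhelsky_skew(right_skewed))  # 0.6 (positive)
```

Mirroring the data (a long left tail) flips the sign, matching Svick's reading that a negative coefficient means more values above the average.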

WikiProject cleanup listing

I have created together with Smallman12q a toolserver tool that shows a weekly-updated list of cleanup categories for WikiProjects, that can be used as a replacement for WolterBot and this WikiProject is among those that are already included (because it is a member of Category:WolterBot cleanup listing subscriptions). See the tool's wiki page, this project's listing in one big table or by categories and the index of WikiProjects. Svick (talk) 20:54, 7 November 2010 (UTC)[reply]

yellow

Many people don't know why they're looking at this page, but as for this content, I am still sorry about those of you who are still checking. —Preceding unsigned comment added by 88.119.227.60 (talk) 14:48, 10 November 2010 (UTC)[reply]

Help for Levy's convergence theorem

Please see if you can help in the discussion at Talk:Lévy's convergence theorem. There are issues about citations using that name for what is presently in the article, about its difference from the Dominated convergence theorem, and about a different meaning at Lévy's continuity theorem, which is sometimes referred to as Lévy's convergence theorem. 17:35, 10 November 2010 (UTC)

Suggestion to the highly educated from a dilettante

I realize that this request, if generously granted, would mean more work for everyone, but it would be quite informative to see additional steps between simple and general cases for various formulas. If possible, could someone provide, for example, a three and/or four variate case in the article on joint probability density functions? It is sometimes difficult to really understand the patterns presented in many mathematical articles (particularly for those of us without formal maths training) for the general case of a particular theorem. Whether someone is willing to take the time to do that or not, I still greatly appreciate the efforts of contributors to Wikipedia since it's often my first stop at the base of a learning curve.

Chris —Preceding unsigned comment added by Chris Carleton (talk • contribs) 16:18, 18 November 2010 (UTC)[reply]
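For what it's worth, here is the kind of intermediate step Chris asks for, sketched for the trivariate case (an illustrative example of my own, not taken from the joint probability density article): with three jointly continuous random variables, the bivariate pattern simply gains one more integral.

```latex
% Trivariate joint density: probability of a region A, and the
% bivariate marginal obtained by integrating the third variable out.
P\big((X,Y,Z) \in A\big) = \iiint_A f_{X,Y,Z}(x,y,z)\,dx\,dy\,dz,
\qquad
f_{X,Y}(x,y) = \int_{-\infty}^{\infty} f_{X,Y,Z}(x,y,z)\,dz.
```

The four-variate case adds a fourth integral in exactly the same way, which is the pattern the general-case formulas compress.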

ƒ or f

Bkell has recently raised a question whether we should use symbol ƒ or f to denote functions. He points out that

It seems to me that since there is no special symbol for a function named g or h, it doesn't make sense to use some special symbol for a function named f. The symbol used in mathematical papers isn't a Latin small letter F with hook—it's just an italic f.—Bkell

It seems to me, however, that the HTML symbol ƒ (&fnof;) is specifically designed to denote the function of something, and thus is more appropriate. Besides, it better matches the default TeX rendering. // stpasha » 16:49, 18 November 2010 (UTC)[reply]

The symbol used in mathematical papers is just a plain old lower-case letter f, but it's rendered in an italic serif font, which traditionally extends the letter below the baseline. The Latin small letter F with hook looks similar in a sans-serif font, but it is a different character. Slides produced for mathematical talks often use sans-serif fonts, and it is clear in such slides that the letter is f, not ƒ. Yes, the HTML symbol is named &fnof;, and the Unicode character table for Latin Extended-B [1] describes the character ƒ as "LATIN SMALL LETTER F WITH HOOK = script f = Florin currency symbol (Netherlands) = function symbol". However, it just doesn't make sense to use a special character for functions named f, when functions named g, h, etc. just use the regular old character. Really what should be done for consistency is to write <math>f</math>, because then the character is properly rendered in an italic serif font. —Bkell (talk) 17:38, 18 November 2010 (UTC)[reply]

Using ƒ for functions is inconsistent and an abuse of notation. What is even worse is that for reasons which escape me, many people write the character unitalicised, ƒ, which is outright ugly, and it defeats the original purpose of having a fancier version of italic f.—Emil J. 18:06, 18 November 2010 (UTC)[reply]


I completely agree with Bkell and EmilJ. If we only used f or some variation thereof, there might be a case (if a weak one) for ƒ. But with many other symbols also used for functions, it makes no sense to use a special symbol for one and ordinary letters from the Latin alphabet for the others; it's not only illogical, but the typographical clash is jarring. And perhaps the way we pronounce it (“eff of”) is the strongest hint of all. Moreover, the instinctive search would be for f, and such a search would fail to find instances of ƒ. I also agree with Bkell that professionally published mathematical books are good sources for guidance, and they almost universally use an italic f, just as they use italic g, h, and so on.

I think the fascination with ƒ for some may simply be visual. Most mathematical texts, like most other texts, are set in a serif typeface, for which the oblique font is also cursive, and the italic f is nearly always a descender. For good or for ill, the default typeface for Wikipedia is sans-serif, and the oblique f is not a descender. If we resort to tricks to force a descender, the clash with the running typeface is obvious to anyone paying attention, resulting in hideous copy. We seem to have a similar issue with the symbol used to indicate aperture in photography. The vast majority of publications use either an italic or roman f, e.g. f/4 or f/4; a few publications, and a small number of web pages, use ƒ/4, but they're vastly in the minority. There is a template, {{f/}}, that forces a descender f, e.g., f/4, but the clash of typefaces is again glaring. And no one has produced a single example outside of Wikipedia that takes this approach. Once again, most books on photography are set in serif type, leading to passages such as “the lens should be set to f/4”. But there's nothing special about the f; it's simply that in a serif typeface, the “italic” font is truly italic, and the f is a descender. If we require that the f be a descender, Wikipedia must change the default typeface to serif (or to one of the few sans-serif faces with a descender f, such as that used in {{f/}}).

stpasha has a point with the better match of ƒ to the default TeX rendering. But we have the same issue with any quantity symbol—it's an unavoidable consequence of using a serif typeface in TeX and a sans-serif face for running text. One approach is to use <math> ... </math> constructions for quantity symbols in the running text, but we then get a rather ugly mismatch between the quantity symbols and the rest of the running text. Current practice in ISO standards seems to use this approach, leading to, at least to my eye, some of the most hideous typography I've ever seen.

Bottom line? I think we should follow logic and follow long-established practice among those who publish books professionally. JeffConrad (talk) 19:09, 18 November 2010 (UTC)[reply]
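The searchability point raised above is easy to verify (a quick Python check of my own, not part of the original discussion): the two characters are distinct Unicode code points, so a plain-text search for one never matches the other.

```python
# 'f' is U+0066 LATIN SMALL LETTER F; 'ƒ' is U+0192 LATIN SMALL
# LETTER F WITH HOOK. Distinct code points, distinct search results.
assert ord('f') == 0x0066
assert ord('ƒ') == 0x0192
assert 'the function ƒ(x)'.find('f') != -1  # matches the f in "function"
assert 'ƒ(x)'.find('f') == -1               # but ƒ itself is never found
```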

I too dislike ƒ. Just because ƒ is there doesn't mean we have to use it. In addition to not matching the surrounding typeface, it may cause accessibility issues for users using screen readers or on old software, as the software may not know how to interpret the symbol. I think it is much more consistent and elegant to use f, just as we use g, x, y, and so on. Ozob (talk) 23:23, 18 November 2010 (UTC)[reply]
I strongly disagree with you guys. ƒ is clearly better than f. Almost all mathematicians these days write in LaTeX, and their math-mode f looks almost identical to ƒ. Take a look (compare that to ƒ, g, h). Tradition dictates that the f's be fancier than the g's and h's, which I think outweighs the need for consistent fonts. Ideally, the math-mode feature on Wikipedia would be more consistent with the rest of the article, so we could write all mathematical symbols on Wikipedia in math mode, but that's not the case. As such, using the ƒ, g, and h convention is the best way to denote functions.--Dark Charles 19:58, 19 November 2010 (UTC)
Dark Charles, let me repeat that the "math-mode f" you refer to is just a plain old lower-case letter f in an italic serif font: for example, the italic serif font used here. Note how the f's extend below the baseline? That is how the letter f looks in most italic serif fonts. It isn't some special character. LaTeX does not use a special character for a "math-mode f".* Above I mentioned slides for mathematical talks. Often such slides are produced by a LaTeX package called Beamer, which by default uses a sans-serif font, and it is clear in these slides that functions called f are referred to with just a simple letter f; see [2] for example. Professionally typeset mathematics does not use a special character for functions called f. —Bkell (talk) 00:23, 20 November 2010 (UTC)[reply]
* This is a tiny lie. LaTeX actually does use italic letters from a special "math italic" font for math, which differs from the "text italic" font mainly in the kerning between characters and in the widths of some characters like the lower-case b. —Bkell (talk) 00:34, 20 November 2010 (UTC)[reply]
This conversation seems to have branched over to Wikipedia talk:WikiProject Mathematics#ƒ or f?, too. —Bkell (talk) 00:49, 20 November 2010 (UTC)[reply]
Okay, I checked my LaTeX and both the italics f and the math-mode f are the same. However, as you said, generally math mode and italics aren't the same. And so, I think ƒ's should be used for f's as functions in that same spirit. What's more, the only example of a math textbook written in an abnormal font is Rudin's Principles of Mathematical Analysis (which is written in Times New Roman), and Rudin uses ƒ, not f.--Dark Charles 03:14, 20 November 2010 (UTC)
I repeat: That's because it's a serif font. In nearly all serif fonts, the ordinary italic f extends below the baseline. (Here is the ordinary Times New Roman italic ff.) You are not seeing a special form of the letter f used for math—you are seeing the ordinary italic f for the typeface used in the book. Find an italic f in Rudin's book in some ordinary text, and it will be exactly the same character. The difference between f (an italic sans-serif f) and f (an italic serif f) is because they are different typefaces, not different characters. Go find some math written in a sans-serif font, as I've previously suggested, and you'll see that the f does not extend below the baseline. —Bkell (talk) 04:51, 20 November 2010 (UTC)
I second Bkell's comment in spades. As I indicated above, we had essentially the same discussion about the symbol to use when indicating an aperture in photography, e.g., f/4. Examination of many works revealed that the f was simply in the “italic” font of the running typeface; because the vast majority of such works are printed in a serif typeface, the f usually has a hooked descender; in most of the few works I looked at set in sans-serif type, the f indeed matches the running face—it's just set in the “italic” (properly, oblique) font. The {{f/}} template attempts to get around this by forcing a series of sans-serif typefaces, beginning with Trebuchet. There's an obvious clash even with the default running face, e.g., “the lens was set to f/4”, but if the reader has set preferences to use a serif typeface, the clash is glaring, e.g., “the lens was set to f/4”. Forcing a typeface switch is a slightly different issue than using a special character, but in the end, the two are of the same ilk. Failure to separate form and content is usually a road to disaster (speaking from experience ...), and diddling typefaces or characters to achieve a specific appearance is but one example. It is a practice against which Wikipedia should resolutely set its face. JeffConrad (talk) 09:03, 20 November 2010 (UTC)
Okay, I guess I concede this one.--Dark Charles 10:08, 20 November 2010 (UTC)

I think we can conclude this discussion by noting that there was a consensus that f is preferred to ƒ. // stpasha » 05:37, 23 November 2010 (UTC)

Request for addition

I would like to request addition of the content proposed here to the relevant article. As I don't have access to a source which provides information about the sample size issue, I hesitate to add it to the article myself. hujiTALK 00:07, 20 November 2010 (UTC)

Hi, there is a long and ongoing dispute between Kiefer.Wolfowitz (talk · contribs) and Edstat (talk · contribs) mainly centered around the apparent single purpose nature of Edstat's edits which are mainly related to Shlomo Sawilowsky or his work. I'm hoping that someone here may be able to shed some light on whether Edstat's edits are adding undue weight to Sawilowsky's work, or if this is merited. For someone with very little knowledge of statistics, this is difficult to determine. There is ongoing discussion on Kiefer.Wolfowitz's talk page but I think it may be best if this could be discussed here rather than there, where it may be more difficult to stick purely to content matters. SmartSE (talk) 11:48, 20 November 2010 (UTC)

I noticed that Sawilowsky's bio page at Wayne State University [3] is a redirect to Wikipedia. It seems to me an indication that Shlomo Sawilowsky himself is in fact one of the key editors of the Shlomo Sawilowsky wikipage. As for the Sawilowsky's paradox article, its notability has not been established yet, even though half a year has passed since that article's creation. Even more, Edstat (talk · contribs) seems to be unable or (more likely) unwilling to explain clearly what Sawilowsky's paradox is about (and Abelson's paradox as well). // stpasha » 15:18, 20 November 2010 (UTC)
Hmm, that is interesting and something that I've never seen on any other academic's personal biography. I've nominated Sawilowsky's paradox for deletion, due to a lack of secondary sources to demonstrate importance. Abelson's does appear notable based on hits like this, but if people here disagree I'm happy to nominate it for deletion too. SmartSE (talk) 15:51, 20 November 2010 (UTC)
The talk page of the Anova article shows the first interaction between Edstat and me (Kiefer.Wolfowitz). Kiefer.Wolfowitz (talk) 20:23, 20 November 2010 (UTC)
All this forensics is a charade. Let's see who mentioned the many invectives Kiefer.Wolfowitz has used against me (repeated use of bold, outrageous statements, etc.). Let's see who can find any edit I've made that Smartse has supported and not attacked. Let's see who can find any edit I've made that Stpasha supported. Please, the charade is so transparent - can't you find some new name from the cabal for this latest attack?
Go look at Kiefer.Wolfowitz's defense of outrageous entries he owns - filled with photographs and paragraphs and information not on the subject - and see how I earned his ire with an attempt to keep the material relevant to the entry. Go and see how Kiefer.Wolfowitz stalks many pages that I edit, to delete, revert, and contort. Which of you will uncover his explanation that if the author isn't in the "right university" or "right publication" according to his standards, it is trash? There is more to say, but what would be the point. Goodbye. Edstat (talk) 23:46, 20 November 2010 (UTC)
Look Edstat,
Please discuss statistical content here. Please continue to allege misbehavior against me on the talk page, where you left a "Shame on you" section a few days ago.
I never used such terms. When you criticized me for being a snob prejudiced against midwestern universities, I noted my previous (minor) editing of the articles of Boris Mordukhovich (at Wayne State, Sawilowsky's university) and of George Piranian (U. Michigan). You have repeatedly stated that another midwestern mathematician, Per Enflo (Kent State U., a secondary state university, after Ohio State), has nothing to do with Banach, Hilbert, and Grothendieck, and that the images were added to puff him up. You can look at other articles to which I've made contributions in the last year (most recently the Shapley-Folkman lemma) and see that I try to include images, when they are available, being a rather visual geometer myself. The harshest criticism I have made of you is that you made three edits with gross misuse of sources when you were edit warring with David Eppstein, since such misuse is regarded as a major academic transgression. You exhaust me, and at times I have wished that you would exhaust yourself! (I am sorry that this blow-up has occurred during Friday-Saturday.) Kiefer.Wolfowitz (talk) 00:01, 21 November 2010 (UTC)