Talk:Kolmogorov–Smirnov test

From Wikipedia, the free encyclopedia

Overly Complex


This article seems overly complex. I have coded implementations of a two-sample KS test and I understand how the KS test works, but I can't understand anything past the first three introductory paragraphs. I would love it if someone could make this article a little more accessible. Bilz0r (talk) 09:59, 21 March 2009 (UTC)[reply]

This looks like a matter of not understanding mathematical notation, rather than of complexity. The empirical distribution function is just the function that jumps upward by 1/n at each of the n observed values. The Kolmogorov–Smirnov test statistic is the maximum amount by which that differs from the hypothesized cumulative distribution function. That's what the material just after the introductory section says. Michael Hardy (talk) 11:32, 21 March 2009 (UTC)[reply]
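The explanation above can be turned into a short sketch. This is an illustrative implementation only (names like ks_statistic are mine, not from the article): the ECDF jumps by 1/n at each sorted observation, and D is the largest gap between the ECDF and the hypothesized CDF, checked just before and just after each jump.

```python
import numpy as np

def ks_statistic(sample, cdf):
    """One-sample K-S statistic: maximum gap between the ECDF
    and a hypothesized CDF, evaluated at the jump points."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F = cdf(x)                            # hypothesized CDF at the data points
    ecdf_hi = np.arange(1, n + 1) / n     # ECDF just after each jump
    ecdf_lo = np.arange(0, n) / n         # ECDF just before each jump
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

# Example: five points tested against the Uniform(0, 1) CDF
d = ks_statistic([0.1, 0.2, 0.5, 0.7, 0.9], lambda t: t)
print(d)  # → 0.2
```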
I see both points here - the explanation is actually quite concise and clear once the notation is understood. However, a person with limited statistical background would not be able to implement a K-S test solely from this article, and there are other websites I have come across that provide a clearer explanation of the procedure. I think this could be remedied with a simple example of a K-S test applied to a simple distribution. In my opinion most mathematical articles on Wikipedia would be much clearer with a simple demonstration of how the method could be used, but the large majority of them have none. Are such examples considered outside the bounds of Wikipedia? Otherwise I would be happy to throw an example or two on this page and others like it. PowerWill500 (talk) 08:02, 9 June 2009 (UTC)[reply]
I agree with the topmost point; the introductory paragraphs say virtually nothing intelligible about what this test is useful for or how it actually works. I'm a software engineer and someone referred me to this page. Damned if I know what they thought I would learn from it. If I could ask the page a question, it would be "So what?" and the page would be mute in response. I'd propose improvements, but first I'll have to learn about the KS test. — Preceding unsigned comment added by 76.123.2.3 (talk) 14:21, 24 November 2013 (UTC)[reply]

Graphical representation


I'm thinking of adding a graphic to this page. something like a quantile quantile plot, because as a non-mathematician, this speaks to me in much clearer terms than sigma notation. Tombadog (talk) 23:30, 31 December 2007 (UTC)[reply]

Sure, that'd be useful. It's fairly common in the literature to accompany uses of the K-S test with some sort of graphical plot (e.g. a percentile plot) to make the point more clearly as well. --Delirium (talk) 10:34, 14 January 2008 (UTC)[reply]
Also appropriate may be to add a static graph of the Kolmogorov density function, a link to the interactive Kolmogorov-distribution applet or the corresponding examples of using the Kolmogorov-Smirnov test statistics to assess similarity of two distributions. Iwaterpolo (talk) 05:42, 3 February 2010 (UTC)[reply]

Graphic added. Bscan (talk) 19:53, 21 March 2013 (UTC)[reply]

Vodka Test


Michael Hardy has deleted the name "vodka test" from the Kolmogorov–Smirnov article. But "vodka test" is a legitimate name that I've heard used. I think this name gets used because it's more memorable to English-speaking students than long Russian names. That's why it should be mentioned on the Wiki K-S page too.

Googling around, I found that this name isn't as common as I supposed, but here are three references: a presentation, mapping-software documentation, and a professor's class handouts. I could also cite a stats book where the term is used (in a joking way). Finally, from the Google results I found, I suspect this term is used in the SPSS manual (though I haven't checked).

I haven't heard this before, but I think probably if this is included in the article, it should say that it is a jocular mnemonic. Michael Hardy 01:54, 21 April 2006 (UTC)[reply]

I haven't been able to find how to conduct a two sample KS test within one population of data. Does anyone know how this is performed? DC

A handful of people or publications have mentioned the Smirnoff "joke" but this does not make it universal, interesting, or relevant. I've never heard the K-S test called the vodka test.

What is the value to compare against the one I have found? What is the p-value for a specific significance level? The test description seems incomplete.

I think the "Vodka test" should be included in the article. That was how I learned about the test when I took a course in biostatistics. mezzaninelounge (talk) 02:41, 23 July 2009 (UTC)[reply]

The Anderson-Darling test


Under "Miscellaneous", the Anderson-Darling test is recommended as it is just as sensitive at the tails of the distribution as at the median. I think that it needs to be explicit here, as it is in the opening paragraphs of the article, that the Anderson-Darling test is only for testing normality. Buzwad 12:58, 18 May 2007 (UTC)[reply]

Examples


It would be nice to have an example section that gives some detail on how to use the 2-sample K-S test. Dsspiegel 23:36, 18 October 2007 (UTC)[reply]

The SOCR resource provides several hands-on examples of how to use the Kolmogorov-Smirnov test including web-based java applets for computing the corresponding test-statistics. Iwaterpolo (talk) 05:38, 3 February 2010 (UTC)[reply]
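For readers without access to the applets, a two-sample test can also be sketched in a few lines with SciPy (this is my own illustration, not part of the SOCR material; the sample sizes and shift are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 300)   # sample from N(0, 1)
b = rng.normal(0.5, 1.0, 300)   # sample from a normal shifted by 0.5

# Two-sample K-S test: D is the largest gap between the two ECDFs
res = stats.ks_2samp(a, b)
print(res.statistic, res.pvalue)
```

With a shift of 0.5 and 300 points per sample, the p-value comes out far below 0.05, so the null hypothesis of a common distribution is rejected.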

False attribution


The author of the 1939 paper on the goodness of fit test was N. Smirnov, not V. I. Smirnov. GuidoGer (talk) 20:00, 21 March 2009 (UTC)[reply]

Funnily, the German and Russian versions of the article have the same mistake (Igny (talk) 17:37, 24 March 2009 (UTC))[reply]

Is it true that the 1933 paper was published in Italian (see footnote 3 in the main article)? Kolmogorov's wiki page Andrey_Kolmogorov cites a German paper from the same year; however, since the author was Russian, it would be surprising if he published in these European languages first. Could someone familiar with the references check them and perhaps add the missing Russian original publication, if any? Otto.hanninen (talk) 08:38, 8 March 2011 (UTC)[reply]

Broken formulas


All formulas have strange horizontal and vertical lines and commas. —Preceding unsigned comment added by 141.3.74.90 (talk) 10:56, 24 August 2009 (UTC)[reply]

Formulas look OK to me. Melcombe (talk) 11:13, 25 August 2009 (UTC)[reply]

The Kolmogorov-Smirnov statistic in more than one dimension


I’ve checked Peacock’s reference, and have to confess that I didn’t like it. The proposed multivariate extension is no longer distribution-free, the author doesn’t even bother to derive its distribution, and Monte Carlo simulations in the article show that the distribution of the test statistic is 1.1−1.5 times greater than the K-S distribution for the selected 2-dimensional distributions. How the statistic behaves in more than 2 dimensions is not analyzed, but the discrepancies will likely become even larger. In short, this “multidimensional Kolmogorov−Smirnov statistic” cannot be reliably used for inference. I vote for undoing the recent addition [1] ... stpasha » talk » 22:59, 1 September 2009 (UTC)[reply]

Maybe I'm missing something here, but wouldn't a multivariate KS test merely amount to testing whether the mahalanobis distances of the multivariate sample conform to a chi-square distribution with a given degrees of freedom? --Steerpike (talk) 00:07, 10 April 2010 (UTC)[reply]
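The idea above can be sketched as follows (my own illustration, with an arbitrary sample): for a multivariate normal with known mean and covariance, the squared Mahalanobis distances follow a chi-square distribution with d degrees of freedom, so one can run a one-sample K-S test on those distances. Note this only checks the radial profile, not the full multivariate distribution, which is part of why it is not a complete multivariate K-S test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d = 3
X = rng.standard_normal((500, d))   # sample from N(0, I_3)

# Squared Mahalanobis distances w.r.t. the known mean 0 and covariance I;
# for a multivariate normal these are chi-square with d degrees of freedom.
m2 = np.sum(X**2, axis=1)

# One-sample K-S test of the distances against chi2(d)
res = stats.kstest(m2, stats.chi2(df=d).cdf)
print(res.statistic, res.pvalue)
```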

Shouldn't that be orderings, not orderings? —Quantling (talk | contribs) 20:06, 4 April 2011 (UTC)[reply]

Kolmogorov–Smirnov test - assuming infinite data


It looks like the section "Kolmogorov-Smirnov test" is incorrect. It says to calculate the D statistic and then calculate a p value based on the asymptotic Kolmogorov distribution K. But that is only correct in the limit as the sample size goes to infinity. In reality, it should compare to the Kolmogorov distribution for finite sample size. The latter is much more difficult to calculate correctly in general. There is a good discussion of the issues involved at http://www.iro.umontreal.ca/~lecuyer/myftp/papers/ksdist.pdf. —Preceding unsigned comment added by 75.173.219.39 (talk) 18:22, 21 June 2010 (UTC)[reply]
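The gap the comment describes is easy to see numerically. SciPy happens to expose both distributions (the finite-n one as scipy.stats.kstwo, the asymptotic one via scipy.special.kolmogorov); this is my own illustration with an arbitrary n and observed statistic:

```python
import numpy as np
from scipy import stats
from scipy.special import kolmogorov

n, d = 20, 0.25     # small sample size and an observed K-S statistic

p_asymptotic = kolmogorov(np.sqrt(n) * d)   # limiting Kolmogorov distribution
p_exact = stats.kstwo.sf(d, n)              # finite-n distribution of D_n

print(p_asymptotic, p_exact)   # the two disagree noticeably at small n
```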

Misspecified formula for test criterion


The two-sample K-S test criterion is misspecified. It should appear as √((n+n′)/(n·n′))·D_{n,n′} > K_α. See for example Richard Colin Campbell, Statistics for Biologists (3rd ed.), ISBN 9780521369329, page 84. Gvoz (talk) 13:43, 22 July 2010 (UTC)[reply]

No, the test is correctly specified as is. I don't have that particular book in front of me, but see the following link.
http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture14.pdf
Also, you can check numerically which one is correct. If n' → ∞, then the two-sample test becomes the one-sample test, because one distribution is fully specified. Thus we want the two-sample scaling factor √((n+n')/(n·n')) to approach the one-sample factor 1/√n. This can be checked easily by plugging in numbers (e.g. n = 10 and n' = 1e7). This does not work for the modification you suggested.
Also, this comment about the convergence of the two-sample to the one-sample test should perhaps be included in the article. Bscan (talk) 16:04, 22 March 2013 (UTC)[reply]
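The plug-in check suggested above looks like this (a minimal sketch; the values of n and n' are arbitrary):

```python
import numpy as np

n = 10
one_sample_factor = 1 / np.sqrt(n)          # one-sample limit, 1/sqrt(n)
for n2 in (100, 10_000, 10_000_000):
    factor = np.sqrt((n + n2) / (n * n2))   # two-sample scaling factor
    print(n2, factor)                       # approaches 1/sqrt(10) ≈ 0.3162
```

At n' = 1e7 the two-sample factor agrees with 1/√n to about three decimal places, which is the convergence the comment relies on.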

Critical values on 2-sample test must be wrong


The critical values for the 2-sample K-S test are either backwards or completely wrong, since given a test value they suggest we are more likely to accept the null hypothesis at higher confidence levels. E.g., as currently written we could reject a null hypothesis at 95% confidence and accept it at 99% confidence! Dbooksta (talk) 16:19, 22 November 2013 (UTC)[reply]

The article is the correct way around. We are talking about the level of significance demanded before we reject the null hypothesis. e.g. if the probability of getting the observed distance between the two samples is 98% then we accept the null hypothesis if we are looking for 99% confidence but reject it if we are looking for 95% confidence.
This is what the null hypothesis means.
Yaris678 (talk) 19:41, 22 November 2013 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Kolmogorov–Smirnov test. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:44, 12 September 2016 (UTC)[reply]

Two-sample Kolmogorov–Smirnov test


Hi SoylentPurple, sorry for the perhaps trivial question, but in https://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test#Two-sample_Kolmogorov–Smirnov_test you introduced the formula that directly translates the confidence level alpha into the constant c(alpha) needed in the Kolmogorov-Smirnov test. I would really appreciate it if you could point me to a citation for this formula. — Preceding unsigned comment added by 92.194.42.241 (talk) 21:15, 29 August 2018 (UTC)[reply]

I did not add that formula. Maybe it comes from reference #5? SoylentPurple (talk) 00:24, 30 August 2018 (UTC)[reply]
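For what it's worth, the closed form in question is what you get by inverting the leading term of the asymptotic Kolmogorov survival function, 2·exp(−2c²) = α, which gives c(α) = √(−½·ln(α/2)). A quick check against SciPy's numerical inverse of the full series (this derivation is my reading, not sourced from the article):

```python
import numpy as np
from scipy.special import kolmogi

alpha = 0.05
# Invert the leading term 2*exp(-2*c**2) = alpha of the Kolmogorov sf:
closed_form = np.sqrt(-0.5 * np.log(alpha / 2))
exact = kolmogi(alpha)       # numerical inverse of the full Kolmogorov series
print(closed_form, exact)    # nearly identical for small alpha
```

At α = 0.05 both give approximately 1.358, the tabulated value, because the neglected higher-order terms of the series are tiny there.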

Relationship to the Wasserstein metric?


Has there been any work on the relationship of this test to that metric? — Preceding unsigned comment added by Frederic Y Bois (talkcontribs) 14:58, 21 June 2019 (UTC)[reply]

Self-Promotion


A user, Mikenaaman, appears to have edited this page to include his own work in the references section. The same letter (not a full journal article) (This is Mikenaaman and the article was a full journal article) is now cited in four duplicate references (1, 2, 7, and 29). Furthermore, he's managed to insert himself into the first paragraph.

Mikenaaman, me, is the first person to ever describe the KS test in more than one dimension. KS only did the test for one dimension, but I generalized the result to all dimensions. I think that merits inclusion. In fact, I had to prove an impossible inequality to generalize the test. You should read the paper. It is pretty exciting stuff.

This looks quite ham-handed, but I'm not a Wikipedian and I don't know how to purge it from the article.  — Preceding unsigned comment added by 160.72.85.212 (talk) 20:30, 30 April 2021 (UTC)[reply] 

Extending my earlier comment on this, it appears to be WP:SELFCITE or WP:CITESPAM or both. — Preceding unsigned comment added by 160.72.85.212 (talk) 20:35, 30 April 2021 (UTC)[reply]

It was a full article. I can put you in touch with the editor of the journal. I found the best bound, so I should be in the first paragraph. Mikenaaman (talk) 08:52, 27 May 2021 (UTC)[reply]

@Mikenaaman: Your paper might be a great breakthrough, but to make large changes to the Wikipedia article a secondary source (or, better, multiple secondary sources) explaining the importance of your findings is needed. To better understand the policy, a good starting point is WP:PSTS. Nuretok (talk) 09:37, 5 September 2021 (UTC)[reply]

Misleading summary in article introduction?


The K-S test is currently summarised with the following sentence: "What is the probability that these two sets of samples were drawn from the same (but unknown) probability distribution?"

But this is an incorrect way to characterise the results of a hypothesis test. A more correct summary would be: "How unlikely would the data be if the two sets of samples were drawn from the same probability distribution?"

This might seem overly pedantic given that the quoted description is only claimed to describe the K-S test "in essence". But I do think the current wording is a problem, because it reinforces a common and very problematic misconception about hypothesis tests: that they give you the probability of a hypothesis being true or false.

See e.g.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4877414/#:~:text=Common%20misinterpretations%20of%20single%20P,40%20%25%20chance%20of%20being%20true.

I would propose changing the current wording to something like my alternative suggestion. Tobycrisford (talk) 08:36, 9 January 2023 (UTC)[reply]


Given no reply, I have now made an update along these lines. I tried hard to make the wording as un-clumsy as I could, but it's inevitably a bit more confusing when trying to convey it in this way. — Preceding unsigned comment added by Tobycrisford (talkcontribs) 18:05, 16 January 2023 (UTC)[reply]
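The distinction being debated here can be demonstrated by simulation (my own sketch, not from the article): when the null hypothesis is true, the p-value is approximately uniform, so about 5% of repetitions fall below 0.05. The p-value measures how surprising the data would be under H0; it is not the probability that H0 is true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Repeatedly draw two samples from the SAME distribution (H0 is true)
# and record the two-sample K-S p-value each time.
pvals = []
for _ in range(200):
    a = rng.standard_normal(50)
    b = rng.standard_normal(50)
    pvals.append(stats.ks_2samp(a, b).pvalue)

frac_below_05 = np.mean(np.array(pvals) < 0.05)
print(frac_below_05)   # close to 0.05, as uniform p-values predict
```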

Outsource Kolmogorov distribution


Biggerj1 (talk) 17:15, 26 July 2023 (UTC)[reply]