Talk:Order statistic


I created the page on order statistics. Let us see if we can merge these pages together.


"However, we know from the preceding discussion that the probability that this interval actually contains the population median is..." I have absolutely no idea why the probability should be equal to the magic number shown below this sentence. There has been no preceding discussion that would prove (or at least show) a formula from which the magic number is derived. A result of this importance would definitely require some clarification. In many practical cases, you can get data sets from software benchmarks that fail normality tests. Computing confidence intervals based on order statistics is a very important topic, probably worth a separate article, not just one single magic number for one special case of six measurement values. — Andrej, 2010-11-30 —Preceding unsigned comment added by Andrejpodzimek (talk • contribs) 15:26, 30 November 2010 (UTC)
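For later readers: a sketch of the standard argument behind such coverage numbers (assuming the interval in question is (X_(1), X_(6)) from a sample of n = 6; the specific interval is my guess, not confirmed by the quoted article text). For a continuous distribution, each observation falls below the population median m with probability 1/2 independently, so the number of observations below m is Binomial(n, 1/2), and

P(X_{(j)} < m < X_{(k)}) = \sum_{i=j}^{k-1} \binom{n}{i} \left(\frac{1}{2}\right)^{n}.

With n = 6, j = 1, k = 6, the interval misses the median only when all six observations fall on the same side of it, giving coverage 1 - 2(1/2)^6 = 31/32 ≈ 0.97.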

No offence, but for how important this topic is in statistics and how elegant its theory is, the article does an atrocious job. — Miguel 14:07, 2005 May 1 (UTC)

Not done yet, but at least now the article does not just dump a pile of equations on the reader without explanation or context. — Miguel 15:04, 2005 May 1 (UTC)

Old derivation saved for reference

Let X_1, X_2, \ldots, X_n be iid continuously distributed random variables, and X_{(1)}, X_{(2)}, \ldots, X_{(n)} be the corresponding order statistics. Let f(x) be the probability density function and F(x) be the cumulative distribution function of X_i. Then the probability density of the kth order statistic can be found as follows:

f_{X_{(k)}}(x) = \frac{d}{dx} F_{X_{(k)}}(x) = \frac{d}{dx} \sum_{j=k}^{n} \binom{n}{j} F(x)^{j} [1-F(x)]^{n-j}
              = \sum_{j=k}^{n} n f(x) \left( \binom{n-1}{j-1} F(x)^{j-1} [1-F(x)]^{n-j} - \binom{n-1}{j} F(x)^{j} [1-F(x)]^{n-j-1} \right),

and the sum above telescopes, so that all terms cancel except the first and the last:

f_{X_{(k)}}(x) = n f(x) \left( \binom{n-1}{k-1} F(x)^{k-1} [1-F(x)]^{n-k} - \underbrace{\binom{n-1}{n} F(x)^{n} [1-F(x)]^{-1}}_{=0} \right),

and the term over the underbrace is zero, so:

f_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, F(x)^{k-1}\, [1-F(x)]^{n-k}\, f(x).
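A quick numerical sanity check of the final formula above (a minimal sketch in Python; the standard normal and the window width are arbitrary choices of mine):

    import numpy as np
    from math import comb
    from scipy import stats

    n, k = 5, 2          # sample size and order-statistic index (1-based)
    x = 0.3              # point at which to evaluate the density

    # Closed form: f_(k)(x) = n!/((k-1)!(n-k)!) * F(x)^(k-1) * (1-F(x))^(n-k) * f(x).
    # Note that n!/((k-1)!(n-k)!) == k * C(n, k).
    F, f = stats.norm.cdf(x), stats.norm.pdf(x)
    closed_form = k * comb(n, k) * F**(k - 1) * (1 - F)**(n - k) * f

    # Monte Carlo: sort each row, take the kth smallest, and estimate the
    # density with a small symmetric window around x.
    rng = np.random.default_rng(0)
    kth = np.sort(rng.standard_normal((200_000, n)), axis=1)[:, k - 1]
    h = 0.01
    mc = np.mean(np.abs(kth - x) < h) / (2 * h)

    print(closed_form, mc)  # the two numbers should agree closely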


COMMENT BY an actuary (knows prob + stat, not academic): The section "Distribution of each order statistic of an absolutely continuous distribution" would be clearer if it went like this:

1. Explain that you will derive the CDF and then take its derivative to get the pdf.
2. Derive the CDF.
3. Take the derivative.
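For concreteness, that route in the article's notation (a sketch): the CDF is

F_{X_{(k)}}(x) = P(\text{at least } k \text{ of the } X_i \le x) = \sum_{j=k}^{n} \binom{n}{j} F(x)^{j} [1-F(x)]^{n-j},

and differentiating it (e.g., via the telescoping shown in the saved derivation above) yields the pdf

f_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, F(x)^{k-1} [1-F(x)]^{n-k} f(x).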

The section "Probability distributions of order statistics" should (optimally) reference another article for why F(X) ~ Uniform(0,1). It is not obvious to newbies.

The interpolatory comments (such as the one about time series) should be distinguished somehow (e.g., with parentheses) so that the reader knows that they are not central to the argument of the article.

---End last guy's comment---

I disagree with your first point. It's easy to get an expression for the CDF, so it's the obvious starting point for a proof. But by far the easiest way to get the sum for the CDF into closed form is by differentiating it, fiddling with it until it's in closed form, then integrating it - which gives you the PDF along the way. It makes for a slightly confusing proof, but the alternatives - either trying to find an expression for the PDF from first principles, or trying to deal with that sum without differentiating it - seem deeply unpleasant. (Plus, at least according to Mathematica, the CDF's closed form seems to involve hypergeometric functions - making it a lot more complicated than the PDF.)

86.3.124.147 (talk) 22:53, 2 April 2008 (UTC)

Attention needed

I have placed an "expert needed" tag on this article partly because of the empty sections but mainly because of the relation between the parts that derive the distributions of the order statistics. The first part of what is there might be considered a direct approach and is probably OK for that. But a more advanced approach would be to start from the uniform distribution case, and to derive the more general case from this, which involves less complicated formulae. I am slightly unhappy about the 'du' approach taken for the uniform case. I do think the "uniform" part needs to be finished off by giving an explicit statement for the distribution function, possibly using an incomplete beta function, but certainly as an integral ... from which the density for the general case could then be derived. Melcombe (talk) 08:51, 11 April 2008 (UTC)
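For reference, the explicit statement being asked for (a sketch; I_u(a, b) denotes the regularized incomplete beta function):

F_{U_{(k)}}(u) = \sum_{j=k}^{n} \binom{n}{j} u^{j} (1-u)^{n-j} = I_u(k,\, n-k+1) = \frac{n!}{(k-1)!\,(n-k)!} \int_0^{u} t^{k-1} (1-t)^{n-k}\, dt,

and the density for the general case then follows by substituting u = F(x) and differentiating.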

Figure

The caption of the figure for the exponential example ("Probability distributions for the n = 5 order statistics of an exponential distribution with θ = 3") needs clarification.

  1. I assume the order statistics are for a sample of n = 5 exponential random variables.
  2. What is θ? Is it λ, the parameter used on the exponential distribution page? It looks more like 1/λ. —Preceding unsigned comment added by LachlanA (talk • contribs)
Perhaps the best way to avoid this question is to replace the plot with one referred to the standard exponential distribution (i.e., with unit scale). The plot should also use notation consistent with that used in the article. It's a rasterized picture, and so needs replacement anyway. I'll see what I can come up with. Lovibond (talk) 15:07, 21 May 2013 (UTC)
I've redrawn the pdfs, changing the distribution to have unit scale and hazard rate. The picture is now vector, as well. I've updated the caption to reflect the change of scale, as well as clarify things a little (I hope!), identifying the functions as pdfs, rather than simply distributions. Lovibond (talk) 20:53, 21 May 2013 (UTC)

Equation error?

I don't believe the last equation in the "Dealing with discrete variables" section is correct. If the equation above it in terms of p1, p2 and p3 is correct, then the last equation should have a (1 - F(x) + f(x))^j (i.e., (p2 + p3)^j) as the first element of the second term in the summation, rather than the existing (1 - f(x))^j. —Preceding unsigned comment added by 129.188.33.26 (talk) 16:22, 4 August 2010 (UTC)
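One way to adjudicate claims like this is numerically (a minimal sketch; the discrete uniform on {0, 1, 2, 3} is an arbitrary choice of mine). The binomial-tail identity P(X_(k) ≤ x) = Σ_{j=k}^{n} C(n,j) F(x)^j (1 - F(x))^{n-j} needs no continuity assumption, so it can serve as the reference value for the discrete case:

    import numpy as np
    from math import comb

    # P(X_(k) <= x) = P(at least k of the n samples are <= x):
    # a binomial tail in F(x), valid for discrete X as well.
    def order_stat_cdf(F_x, n, k):
        return sum(comb(n, j) * F_x**j * (1 - F_x)**(n - j) for j in range(k, n + 1))

    n, k, x = 5, 2, 1
    F_x = 0.5  # X uniform on {0, 1, 2, 3}, so F(1) = P(X <= 1) = 0.5

    rng = np.random.default_rng(1)
    kth = np.sort(rng.integers(0, 4, size=(500_000, n)), axis=1)[:, k - 1]
    print(order_stat_cdf(F_x, n, k), np.mean(kth <= x))  # should agree closely

Whichever closed form the article states should match these two numbers.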

Probabilistic Analysis

Before I say anything: I am fairly new to order statistics, so bear with me, as some of my comments are likely due to a lack of experience with them. However, I would consider myself an excellent representative of the kind of person who would come to this article in search of a better understanding of order statistics.

Basically, I think the "Probabilistic analysis" section is very confusing.

Firstly, the last subsection here is more measure-theoretic than probabilistic. Secondly, it makes a few fairly detailed claims about the substitution to be used, but makes no effort at describing how the formula of interest is derived. In my opinion, this formula,

f_{U_{(k)}}(u) = \frac{n!}{(k-1)!\,(n-k)!}\, u^{k-1} (1-u)^{n-k},

is what most readers seek a better understanding of, but that is left out. A proof, some intuition, or at least a reference to where those might be found would really be great. Moreover, the section about the uniform is not motivated. I see that someone else has commented that some property exists that "might not be obvious to newbies"; well, that's me! Please elaborate.

I propose that instead the section start with a simpler formula. One idea would be to reference, e.g., Wackerly, Mendenhall & Scheaffer (2008), Mathematical Statistics with Applications, 7th ed., Duxbury, Theorem 6.5, p. 336:

The pdf of the kth order statistic is given by

g_{(k)}(y) = \frac{n!}{(k-1)!\,(n-k)!}\, [F(y)]^{k-1}\, [1-F(y)]^{n-k}\, f(y).

To me, the formula makes a lot of sense intuitively: of the n observations, k - 1 must fall below y, one must fall in an infinitesimal interval around y, and n - k must fall above it, and the multinomial coefficient n!/((k-1)! 1! (n-k)!) counts the ways of assigning the observations to those three groups, so that heuristically

f_{X_{(k)}}(y)\, dy \approx \frac{n!}{(k-1)!\,1!\,(n-k)!}\, [F(y)]^{k-1}\, [f(y)\, dy]\, [1-F(y)]^{n-k}.

(I don't know if it's completely wrong, but) maybe a motivation like this would be more useful to most readers? At least some form of introduction to the whole thing about the uniform. And some form of conclusion: what does that section prove? What have we shown?

— Preceding unsigned comment added by Superpronker (talk • contribs) 13:53, 1 June 2011 (UTC)

About Order Statistic of Uniform Distribution

"why F(X) ~ uniform" please refer to "Probability Integral Transformation" — Preceding unsigned comment added by BChenyu (talkcontribs) 16:18, 5 March 2012 (UTC)[reply]

Expectation of Order Statistics

What is the expectation of the nth order statistic, E[X(n)]? Or, for that matter, E[X(1)], or that of any kth order statistic? — Preceding unsigned comment added by 199.119.232.221 (talk) 01:47, 26 November 2012 (UTC)

Useful to know, so I don't blame you for asking! Unfortunately, the answer depends upon the distribution. If your RV is continuous, obtain the density function of the kth order statistic. Once you have that, computing the expectation is in theory simple (though, in practice, it might not be!). Lovibond (talk) 21:10, 21 May 2013 (UTC)
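To give one concrete, standard example (not specific to any distribution discussed above): if the X_i are iid Uniform(0,1), then X_{(k)} follows a Beta(k, n-k+1) distribution, so

E[X_{(k)}] = \int_0^1 u\, \frac{n!}{(k-1)!\,(n-k)!}\, u^{k-1} (1-u)^{n-k}\, du = \frac{k}{n+1}.

For other distributions, one computes E[X_{(k)}] = \int x\, f_{X_{(k)}}(x)\, dx with the density of the kth order statistic.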

Error in section "The joint distribution of the order statistics of an absolutely continuous distribution"?

I don't believe the equation that gives the joint pdf of two order statistics k and j. In particular, for uniform [0,1] random variables and k = n, j = n-1, it doesn't seem to reduce to the equation given earlier (I get a 2 in the denominator which isn't present in the other equation). 128.151.210.203 (talk) 17:32, 8 July 2013 (UTC)
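(A quick check for later readers, using the standard joint density of X_{(j)} and X_{(k)} for j < k — a sketch, which may or may not match the exact form currently in the article:

f_{X_{(j)}, X_{(k)}}(u, v) = \frac{n!}{(j-1)!\,(k-j-1)!\,(n-k)!}\, F(u)^{j-1} f(u)\, [F(v)-F(u)]^{k-j-1} f(v)\, [1-F(v)]^{n-k}, \qquad u < v.

For Uniform(0,1) with j = n-1, k = n, this reduces to n(n-1) u^{n-2} on 0 < u < v < 1, which integrates to 1 and has no factor of 2 in the denominator; so if a 2 appears, the discrepancy is likely in how one of the article's two equations is stated.)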

I would like to add to this the following.

The joint density of all the order statistics of n independent and uniformly distributed random variables is given as n! over A = [0,1] × [0,1] × ... × [0,1]. The integral of n! over A is not 1. — Preceding unsigned comment added by 145.236.187.133 (talk) 21:53, 16 October 2015 (UTC)
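(Note for later readers — a sketch of the normalization, assuming the article states the density n! on the ordered region u_1 < u_2 < ... < u_n rather than on the whole cube:

\int_{0 < u_1 < \cdots < u_n < 1} n!\; du_1 \cdots du_n = n! \cdot \frac{1}{n!} = 1,

since the ordered simplex has volume 1/n!; over the full cube A the integral would indeed be n!, so the support matters here.)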


Extension to section: Probability distributions of order statistics

General formulas are known for the cumulative distributions of the smallest and largest samples, provided the samples are IID. The formulation for the maximum of n random variables (for both the continuous and discrete cases) is listed here (http://www.math.ucsd.edu/~gptesler/283/slides/longrep_f13-handout.pdf). A very similar formulation works for the minimum.

Allow X_1, X_2, \ldots, X_n to be i.i.d. random variables with common CDF F(x). The maximum is given by:

X_{(n)} = \max\{X_1, \ldots, X_n\},

meaning the CDF of X_{(n)} is given by

F_{X_{(n)}}(x) = P(X_1 \le x, \ldots, X_n \le x) = \prod_{i=1}^{n} F(x) = F(x)^{n}.


Similarly, the minimum is given by:

X_{(1)} = \min\{X_1, \ldots, X_n\},

meaning the CDF of X_{(1)} is given by

F_{X_{(1)}}(x) = 1 - P(X_1 > x, \ldots, X_n > x) = 1 - [1 - F(x)]^{n}.
A similar trick can be used to prove the general formulation, which works on the probability density/mass function. Mouse7mouse9 00:02, 14 December 2013 (UTC)
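A minimal Python sketch of the two CDF identities above (the standard exponential is an arbitrary choice of mine):

    import numpy as np
    from scipy import stats

    n, x = 5, 1.0
    F = stats.expon.cdf(x)  # F(x) for the standard exponential

    rng = np.random.default_rng(2)
    samples = rng.standard_exponential((300_000, n))

    # CDF of the maximum: F(x)^n; CDF of the minimum: 1 - (1 - F(x))^n.
    print(F**n, np.mean(samples.max(axis=1) <= x))
    print(1 - (1 - F)**n, np.mean(samples.min(axis=1) <= x))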

Flipped textual description in section "Dealing with discrete variables"?

I noticed that in the section "Dealing with discrete variables," two textual descriptions of functions seem to have a problem. The expression on the left side of the equals sign does not match the textual description on the right side in both cases (i.e., with respect to whether it is "greater than or equal to" or just "greater than"):

The original editor of this section should know how to properly fix this. I can't vouch mathematically for which expression belongs with "greater than" and which belongs with "greater than or equal to", so I probably shouldn't be the one to fix it. - PuercoFantastico (talk) 20:36, 30 October 2015 (UTC)

It was correct with the "flipped textual description", since "less than or equal to" is the negation of "greater than" (similarly, "less than" is the negation of "greater than or equal to"). I have undone the edit from 9/2018 that tried to "fix" it, and added one more line to each in order to clarify the logic and [hopefully] stop well-meaning people from thinking it's wrong again. 2603:9008:2102:49D8:89A2:8F44:8122:D3AA (talk) 19:19, 16 February 2020 (UTC)
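(In symbols, the identities at stake are just the complement rule:

P(X_{(k)} \le x) = 1 - P(X_{(k)} > x), \qquad P(X_{(k)} < x) = 1 - P(X_{(k)} \ge x),

so an expression involving "greater than" is correctly described in words by "less than or equal to" once the complement is taken.)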

Please be explicit re: the handling of independent but not identically distributed variables

In the introduction to "Probabilistic analysis" I read:

"Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem."

That is an awesome pointer for those of us who are looking to learn about exactly such a scenario. What I find unclear is the preceding text:

"When the random variables X1, X2..., Xn form a sample they are independent and identically distributed. This is the case treated below."

Well, yes it is, but it's rather dense material, and what I would like here is a clear indication of whether ONLY this case is treated below, or whether you also treat the Bapat–Beg case. I conclude from the reference that it is not, but suggest an explicit statement would be more pleasing, like:

"When the random variables X1, X2..., Xn form a sample they are independent and identically distributed. This is the only case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem which is not discussed further here."

I cannot propose an edit directly on the page for the very reason that I'm not actually sure that's a true claim. I am presuming it from the text I read. If the presumption is solid, the explicit text is more pleasing. If it is not, then I read it as an invitation to try and digest something further below, but would like to know what to look for.

For example the next sentence:

"From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous." is again undlear and tempting as it doesn't state clearly that from now on IID is assumed. Which I am presuming it is.

Anyhow, a simple polite call for a more explicit introduction here. --60.242.67.249 (talk) 11:00, 27 April 2021 (UTC)