A misleading limit

For each n \in \mathbb N let X_n \sim \mathrm {Po} (n). By considering the probability \mathbb P(X_n \le n) evaluate:

\displaystyle \lim_{n \to \infty} e^{-n} \left(1 + n + \frac {n^2} {2!} + \ldots + \frac {n^n} {n!}\right)

You might be surprised by the result.

You may use the central limit theorem without proof.
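(A quick numerical sanity check, my addition rather than part of the original question: the quantity inside the limit is exactly \mathbb P(X_n \le n), which can be evaluated in log space to avoid overflow in n^k and k!. A minimal Python sketch, with a made-up helper name:)

```python
import math

def poisson_cdf_at_n(n):
    # P(X_n <= n) for X_n ~ Po(n), i.e. e^{-n} * sum_{k=0}^{n} n^k / k!,
    # with each term computed in log space to avoid overflow in n^k and k!
    log_terms = (k * math.log(n) - math.lgamma(k + 1) - n for k in range(n + 1))
    return sum(math.exp(t) for t in log_terms)

for n in (10, 100, 1000, 10000):
    print(n, poisson_cdf_at_n(n))
```

(Watching the printed values as n grows gives away the answer, so run it only if you don't mind the surprise being spoiled.)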

The answer is correct but I’m not quite sure I understand this solution. I think it’s along the right lines though. It looks like you’re taking a limit as n \to \infty but getting a result in terms of n?


Yeah, I was trying to say that in the limit the cumulative distribution function of the Poisson R.V. would tend to the cumulative distribution function of the normal R.V. with parameters n and n. I can see how what I’ve written may be confusing though - what would be the correct notation to express this?


I’m not sure that’s true. Remember that the CLT says that if \langle X_i\rangle is a random sample of size n drawn from the distribution of X, then:

\displaystyle \frac {S_n - \mu} {\sigma / \sqrt n} \xrightarrow D N(0,1)

where \xrightarrow D means convergence in distribution (this means the limit of the cumulative distribution function of the R.V. on the LHS as n \to \infty is the N(0,1) cumulative distribution function), with \displaystyle S_n = \frac 1 n \sum_{i = 1}^n X_i, \mu = \mathrm E[X], \sigma = \sqrt {\mathrm {var}(X)}.
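To make the statement concrete, here’s a small simulation sketch (my own illustration, not from the thread; the choice X \sim \mathrm{Exp}(1), so \mu = \sigma = 1, is arbitrary) comparing the empirical CDF of the standardized sample mean with the N(0,1) CDF:

```python
import math
import random
from statistics import NormalDist

def standardized_mean(sample_size, rng):
    # Draw a sample of X ~ Exp(1) (so mu = sigma = 1) and return
    # (S_n - mu) / (sigma / sqrt(n)), where S_n is the sample mean
    xs = [rng.expovariate(1.0) for _ in range(sample_size)]
    s_n = sum(xs) / sample_size
    return (s_n - 1.0) / (1.0 / math.sqrt(sample_size))

rng = random.Random(0)
zs = [standardized_mean(500, rng) for _ in range(4000)]
phi = NormalDist()  # standard normal N(0, 1)
for t in (-1.0, 0.0, 1.0):
    empirical = sum(z <= t for z in zs) / len(zs)
    print(t, empirical, phi.cdf(t))  # the two columns should be close
```

Distributional convergence is exactly the statement that the second column approaches the third at every t, as the sample size grows.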

What could X be in this instance?

[I realised I reused notation confusingly in this - the X_n’s here should not be confused with those in the question…]


Yes, sorry. Here is a (hopefully) fixed version.


Can’t wait to learn how to do this in Further Stats!


Yes this works!

You don’t learn it “properly” in FS1 - mainly because you’re given no notion of “convergence” of a sequence of random variables. You might meet these ideas in a first- or second-year probability theory module, but you’ll probably only see the central limit theorem proved from scratch in the third year (at Warwick this happens in ST318 Probability Theory).

Basically you learn that for large n, the sample mean S_n approximately follows \displaystyle N \left(\mu, \frac {\sigma^2} n\right), whatever the distribution of X; then you plug that into a calculator to find some probability. That’s about as far as A-level goes. Since “approximately” is vacuous without at least quantifying the error, this isn’t really a proper statement of the CLT.
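Applied to this thread’s problem, that A-level recipe might look like the following sketch (my illustration: the function name is made up, and the continuity correction is the standard classroom refinement, using Python’s statistics module in place of the calculator):

```python
from statistics import NormalDist

def normal_approx_poisson_cdf(n, k):
    # Approximate P(Po(n) <= k) by N(n, n) with a continuity correction -
    # the "plug it into a calculator" step from the post above
    return NormalDist(mu=n, sigma=n ** 0.5).cdf(k + 0.5)

print(normal_approx_poisson_cdf(100, 90))  # approx P(Po(100) <= 90)
```

Nothing here says how good the approximation is for a given n, which is exactly the gap the proper statement of the CLT fills.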


Oh, I’ll have to wait 4 years then :sob:

To be fair - you might see a sketch proof in the first year, but that proof will assume Lévy’s continuity theorem, which is intuitively obvious but whose proof is very non-trivial. Hence I say “from scratch”.

It is frustrating when things are put off like this, but luckily once you get to university it’s not done too often.