# Chapter 1 Random experiments

Deterministic vs. *random* experiments.

An experiment is *random* if, although it is repeated in the same manner every time, it can result in different outcomes:

- The set of all possible outcomes is completely determined before carrying it out.
- Before we carry it out, we cannot predict its outcome.
- It can be repeated as many times as we want always under the same conditions (leading to different outcomes).

## 1.1 Events (sets)

The **sample space** is the set of all possible outcomes of a random experiment; we will denote it by \(S\).

An **event** is a subset of the sample space (any set of outcomes of the random experiment).

**Special events**

The **certain event**, \(S\), always occurs. The **null (impossible) event**, \(\emptyset\), never occurs.

**Example**

**Random experiment:** toss three coins.

**Sample space:** \(S=\{HHH,HHT,HTH,THH,HTT,THT,TTH,TTT\}\).

**Some events:** \(A=\text{``exactly two heads''}=\{HHT,HTH,THH\}\); \(B=\text{``at least two heads''}=\{HHT,HTH,THH,HHH\}\).

**Package prob**

`install.packages('prob',repos='http://cran.rediris.es/')`

```
library(prob)
tosscoin(3)
```

```
## toss1 toss2 toss3
## 1 H H H
## 2 T H H
## 3 H T H
## 4 T T H
## 5 H H T
## 6 T H T
## 7 H T T
## 8 T T T
```

**Event (set) operations**

- The **union** of two events \(A\) and \(B\), denoted by \(A\cup B\), occurs when either of the two events (or both of them simultaneously) occurs. It is represented by the grammatical expression \(A\) or \(B\). \[A\cup B\]
- The **intersection** of two events \(A\) and \(B\), denoted by \(A\cap B\), occurs when both of them occur simultaneously. It is represented by the grammatical expression \(A\) and \(B\). \[A\cap B\]
- The **complementary event** to \(A\), denoted by \(\overline{A}\) (read *not \(A\)*), occurs when \(A\) does not occur.
- The **set difference** of \(A\) and \(B\), denoted by \(A\setminus B\), occurs when \(A\) occurs but \(B\) does not. \[A\setminus B=A\cap{\overline B}\]

```
S=probspace(tosscoin(2))
A=subset(S,toss1=="H")
B=subset(S,toss2=="H")
intersect(A,B)
```

```
## toss1 toss2 probs
## 1 H H 0.25
```

`union(A,B)`

```
## toss1 toss2 probs
## 1 H H 0.25
## 2 T H 0.25
## 3 H T 0.25
```
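The complement and set difference were not shown above. As a sketch using base-R set operations on outcome labels (independent of the `prob` package), the identity \(A\setminus B=A\cap\overline{B}\) can be checked on the three-coin sample space:

```r
# Base-R sketch: verify A \ B == A "intersect" complement(B)
S <- c("HHH","HHT","HTH","THH","HTT","THT","TTH","TTT")  # sample space
A <- c("HHT","HTH","THH")             # exactly two heads
B <- c("HHH","HHT","HTH","HTT")       # first toss is a head
compB <- setdiff(S, B)                # complement of B within S
identical(sort(setdiff(A, B)), sort(intersect(A, compB)))  # TRUE
```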

**Algebra of events (some properties of the ops. with events)**

**General properties**

- **Commutative:** \(A\cup B=B\cup A\) and \(A\cap B=B\cap A\).
- **Associative:** \((A\cup B)\cup C=A\cup (B\cup C)\) and \((A\cap B)\cap C=A\cap (B\cap C)\).
- **Neutral element:** \(A\cup\emptyset=A=A\cap S\).
- **Complementation:** \(A\cup{\overline A}=S\) and \(A\cap{\overline A}=\emptyset\).
- **Idempotence:** \(A\cup A=A\) and \(A\cap A=A\).
- **Absorption:** \(A\cup S=S\) and \(A\cap\emptyset=\emptyset\).
- **Simplification:** \(A\cap(A\cup B)=A=A\cup(A\cap B)\).

**Distributive laws**

- **Union wrt intersection:** \((A\cap B)\cup C=(A\cup C)\cap(B\cup C)\).
- **Intersection wrt union:** \((A\cup B)\cap C=(A\cap C)\cup(B\cap C)\).

**De Morgan’s laws**

- \(\overline{A\cup B}=\overline{A}\cap\overline{B}\).
- \(\overline{A\cap B}=\overline{A}\cup\overline{B}\).
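Both laws, \(\overline{A\cup B}=\overline{A}\cap\overline{B}\) and \(\overline{A\cap B}=\overline{A}\cup\overline{B}\), can be checked by enumeration. A base-R sketch (set operations on outcome labels, not `prob` data frames) on the three-coin sample space:

```r
# Base-R sketch: verify De Morgan's laws on a small sample space
S <- c("HHH","HHT","HTH","THH","HTT","THT","TTH","TTT")
A <- c("HHT","HTH","THH")        # exactly two heads
B <- c("HHH","HHT","HTH","HTT")  # first toss is a head
comp <- function(E) setdiff(S, E)                 # complement within S
law1 <- setequal(comp(union(A, B)), intersect(comp(A), comp(B)))
law2 <- setequal(comp(intersect(A, B)), union(comp(A), comp(B)))
c(law1, law2)  # TRUE TRUE
```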

**Grammatical expressions and events**

Given three events \(A\), \(B\), \(C\), write the following as unions, intersections, and complements:

- \(A\) and \(B\) occur, but \(C\) does not.
- Exactly two of the three events occur.
- At most two of the three events occur.
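One possible way to write these (a sketch; other equivalent forms exist):

\[A\cap B\cap\overline{C}\,,\]
\[(A\cap B\cap\overline{C})\cup(A\cap\overline{B}\cap C)\cup(\overline{A}\cap B\cap C)\,,\]
\[\overline{A\cap B\cap C}=\overline{A}\cup\overline{B}\cup\overline{C}\,,\]

where the last equality (for "at most two occur", i.e. "not all three occur") follows from De Morgan's laws.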

## 1.2 Probability

A probability \(P\) assigns a number to every event \(A\) associated with the random experiment. It is interpreted as the likelihood (or chance) that \(A\) occurs.

**Probability axioms**

- **Nonnegativity:** \(P(A)\geq 0\) for every event \(A\).
- **Additivity:** If \(A_i\) with \(i\in I\) denumerable are events and \(A_i\cap A_j=\emptyset\) whenever \(i\neq j\), then \[P\left(\bigcup_{i\in I} A_i\right)=\sum_{i\in I} P(A_i)\,.\]
- **Normalization:** \(P(S)=1\).

(The structure of the family of events on which a probability is defined is known as a \(\sigma\)-algebra. A \(\sigma\)-algebra 1. contains the certain event \(S\), 2. is closed under complementation, and 3. is closed under countable unions.)

```
S=probspace(tosscoin(2))
A=subset(S,toss1=="H")
B=subset(S,toss2=="H")
Prob(A)
```

`## [1] 0.5`

`Prob(intersect(A,B))`

`## [1] 0.25`

`probspace(tosscoin(1),probs=c(1,2))`

```
## toss1 probs
## 1 H 0.3333333
## 2 T 0.6666667
```

```
S=probspace(tosscoin(1),probs=c(1,2))
set.seed(1)
empirical(sim(S,ntrials=1000))
```

```
## toss1 probs
## 1 H 0.334
## 2 T 0.666
```

**Properties of a probability**

- \(P(\overline{A})=1-P(A)\).
- \(P(\emptyset)=0\).
- If \(A\subset B\), then \(P(A)\leq P(B)\).
- \(P(A\setminus B)=P(A)-P(A\cap B)\).
- \(P(A\cup B)=P(A)+P(B)-P(A\cap B)\).
- \(P(A\cup B\cup C)=P(A)+P(B)+P(C)-P(A\cap B)-P(A\cap C)-P(B\cap C)+P(A\cap B\cap C)\).

**Uniform discrete probability, Laplace’s rule**

If a random experiment can result in a finite number of equally likely outcomes, the probability of some event \(A\) is \[P(A)=\frac{\text{number of outcomes favourable to }A}{\text{number of possible outcomes}}\,.\]

**Example** **Random experiment:** toss a six-sided die \[P(\text{'even outcome'})=\frac{3}{6}=\frac{1}{2}\,.\]
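The same computation as a base-R sketch, counting favourable over possible outcomes:

```r
# Laplace's rule for a fair six-sided die: P("even outcome")
outcomes   <- 1:6
favourable <- outcomes[outcomes %% 2 == 0]    # {2, 4, 6}
p <- length(favourable) / length(outcomes)
p  # 0.5
```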

**How to interpret a probability (LLN)**

```
set.seed(2)
S=probspace(tosscoin(1))
plot(cumsum(sim(S,ntrials=1000)=="H")/(1:1000),
type="l",ylab="H freq",xlab="n toss")
abline(h=0.5,col="red")
```

## 1.3 Conditional probability

The **conditional probability** of \(A\) given \(B\) is the probability that \(A\) occurs given that \(B\) has occurred \[P(A|B)=\frac{P(A\cap B)}{P(B)}\,.\]

```
S=probspace(tosscoin(3))
H1=subset(S,toss1=="H")
H2=subset(S,toss2=="H")
H3=subset(S,toss3=="H")
Prob(H1,given=intersect(H2,H3))
```

`## [1] 0.5`

**Example** An urn contains 1 white ball and 2 red balls. One ball is taken from the urn at random. If it is white, it is returned to the urn together with another white ball. If it is red, it is returned together with 2 more red balls. In a second stage, another ball is taken from the urn at random.

**Events**

- \(W_1\equiv\)’first ball is white’
- \(R_1\equiv\)’first ball is red’
- \(W_2\equiv\)’second ball is white’
- \(R_2\equiv\)’second ball is red’

**Probabilities**

- \(P(W_1)=1/3\); \(P(R_1)=2/3\)
- \(P(W_2|W_1)=1/2\); \(P(R_2|W_1)=1/2\)
- \(P(W_2|R_1)=1/5\); \(P(R_2|R_1)=4/5\)

**Properties of the conditional probability**

- **Nonnegativity:** \(P(A|B)\geq 0\) for every event \(A\).
- **Additivity:** If \(A_i\) with \(i\in I\) denumerable are events and \(A_i\cap A_j\cap B=\emptyset\) whenever \(i\neq j\), then \[P\left(\bigcup_{i\in I} A_i\Bigg|B\right)=\sum_{i\in I} P(A_i|B)\,.\]
- **Normalization:** \(P(S|B)=1\); actually \(P(B|B)=1\).

- and all other properties of a probability, e.g. \(P(\overline{A}|B)=1-P(A|B)\).

## 1.4 Bayes’ formula

**Multiplication rule** Given \(n\) events \(A_1,A_2,\ldots,A_n\), the probability that all of them occur simultaneously is \[P(A_1\cap A_2\cap\ldots\cap A_n)=P(A_1)P(A_2|A_1)\ldots P(A_n|A_1\cap\ldots\cap A_{n-1})\,.\]

**Example**


Two balls are extracted from the urn. What is the probability that both of them are red?

\[P(R_1\cap R_2)=P(R_1)P(R_2|R_1)=\frac{2}{3}\cdot\frac{4}{5}=\frac{8}{15}\,.\]

**Total probability rule** Let \(A_1,\ldots,A_n\) be mutually exclusive (disjoint) events whose union is the whole of the sample space (a partition) and assume \(P(A_i)>0\) for every \(i\). For every event \(B\) we have \[\begin{align*} P(B)&=P(A_1\cap B)+\cdots+P(A_n\cap B)\\ &=P(A_1)P(B|A_1)+\cdots+P(A_n)P(B|A_n)\,. \end{align*}\]

**Example**


What is the probability that the second ball extracted from the urn is red?

\[P(R_2)=P(W_1)P(R_2|W_1)+P(R_1)P(R_2|R_1)=\frac{1}{3}\cdot\frac{1}{2}+\frac{2}{3}\cdot\frac{4}{5}=0.7\,.\]

**Bayes’ rule** Let \(A_1,\ldots,A_n\) be mutually exclusive (disjoint) events whose union is the whole of the sample space (a partition) and assume \(P(A_i)>0\) for every \(i\). For every event \(B\) with \(P(B)>0\) we have \[\begin{align*} P(A_i|B)&=\frac{P(A_i)P(B|A_i)}{P(B)}\\ &=\frac{P(A_i)P(B|A_i)}{P(A_1)P(B|A_1)+\cdots+P(A_n)P(B|A_n)}\,. \end{align*}\]

**Example**


If the second ball extracted from the urn is red, what is the probability that the first one was also red?

\[P(R_1|R_2)=\frac{P(R_1)P(R_2|R_1)}{P(R_2)}=\frac{\frac{2}{3}\cdot\frac{4}{5}}{0.7}\approx 0.762\,.\]
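The three urn computations (multiplication rule, total probability, Bayes' rule) can be verified numerically in base R:

```r
# Urn example: verify multiplication rule, total probability and Bayes
p_W1 <- 1/3; p_R1 <- 2/3                 # first extraction
p_R2_given_W1 <- 1/2                     # urn then holds 2 white, 2 red
p_R2_given_R1 <- 4/5                     # urn then holds 1 white, 4 red
p_R1_and_R2   <- p_R1 * p_R2_given_R1                     # multiplication rule
p_R2          <- p_W1 * p_R2_given_W1 + p_R1 * p_R2_given_R1  # total probability
p_R1_given_R2 <- p_R1 * p_R2_given_R1 / p_R2                  # Bayes' rule
c(p_R1_and_R2, p_R2, p_R1_given_R2)  # 8/15, 0.7, approx. 0.762
```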

## 1.5 Independence

Two events \(A\) and \(B\) are **independent** if \[P(A\cap B)=P(A)P(B)\,.\] In plain words, the fact that \(A\) occurs does not contain any information about the possible occurrence of \(B\).

**Conditional independence** \(A\) and \(B\) are **conditionally independent** given \(C\) with \(P(C)>0\) if \[P(A\cap B|C)=P(A|C)P(B|C).\]

(equivalently, \(P(A|B\cap C)=P(A|C)\) whenever \(P(B\cap C)>0\)).

**Examples**

- **Random experiment:** Toss 2 fair coins (indep. but not cond. indep.). **Events:** \(H_1\equiv\) ’Head coin 1’; \(H_2\equiv\) ’Head coin 2’; \(E\equiv\) ’Equal outcomes’.
- **Random experiment:** Select at random a coin from a fair one and another with two heads; afterwards toss it twice (cond. indep. but not indep.). **Events:** \(H_1\equiv\) ’Head on toss 1’; \(H_2\equiv\) ’Head on toss 2’; \(F\equiv\) ’Fair coin’.
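For the first experiment, the claim can be checked by enumerating the four equally likely outcomes (a base-R sketch):

```r
# Two fair coins: H1, H2 independent but NOT conditionally independent given E
S  <- c("HH","HT","TH","TT")                 # equally likely outcomes
P  <- function(E) length(E) / length(S)      # uniform probability
H1 <- c("HH","HT"); H2 <- c("HH","TH"); E <- c("HH","TT")
Pc <- function(A, B) P(intersect(A, B)) / P(B)   # conditional probability
P(intersect(H1, H2)) == P(H1) * P(H2)            # TRUE: independent
PH1H2_E <- P(intersect(intersect(H1, H2), E)) / P(E)
PH1H2_E == Pc(H1, E) * Pc(H2, E)                 # FALSE: not cond. independent
```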

**Independence of a collection of events** The events \(A_1,A_2,\ldots,A_n\) are **independent** if for any subset of them, \(I\subset\{1,2,\ldots,n\}\) \[P\left(\bigcap_{i\in I}A_i\right)=\prod_{i\in I} P(A_i).\]

**Example**

**Random experiment:** Toss 2 fair coins. **Events:** \(H_1\equiv\) ’Head coin 1’; \(H_2\equiv\) ’Head coin 2’; \(E\equiv\) ’Equal outcomes’.

## 1.6 Combinatorics

**The basic principle of counting (multiplication rule)** Suppose two experiments are to be performed. If experiment 1 can result in any one of \(n_1\) possible outcomes and if, for each outcome of experiment 1, there are \(n_2\) possible outcomes of experiment 2, then together there are \(n_1\cdot n_2\) possible outcomes of the two experiments.

**Example** An exam consists of two questions and a single pool of eight possible answers. In how many ways can the exam be completed? (Each question must be linked to exactly one answer, and no two questions can be linked to the same answer.)

**The generalized principle of counting (multiplication rule)** Suppose \(k\) experiments are to be performed. If experiment 1 can result in any one of \(n_1\) possible outcomes; if, for each outcome of experiment 1, there are \(n_2\) possible outcomes of experiment 2; …; and if, for each outcome of the previous experiments, there are \(n_k\) possible outcomes of experiment \(k\), then together there are \(n_1\cdot n_2\cdots n_k\) possible outcomes of the sequence of experiments.

**Example** The first five digits of all phone numbers at uc3m are \(91\) \(624\).

- What is the probability that a randomly chosen phone number at uc3m has no repeated digits?
- What is the probability that none of the four remaining digits is repeated in a randomly chosen uc3m phone number?
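Under one reading of the exercise (the four remaining digits are uniform over \(0\)–\(9\), and the fixed prefix \(91\,624\) has five distinct digits), the counts follow from the generalized principle:

```r
# Phone numbers 91624-XXXX: counting sketch (one reading of the exercise)
total <- 10^4                       # possible 4-digit endings
# (a) no digit repeated anywhere: endings use the 5 digits not in 91624,
#     ordered and without repetition
p_a <- (5 * 4 * 3 * 2) / total      # 0.012
# (b) the four remaining digits are pairwise distinct
p_b <- (10 * 9 * 8 * 7) / total     # 0.504
c(p_a, p_b)
```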

**Permutations and \(k\)-permutations (sequences, order does matter)** Given \(n\) distinct objects:

- the number of possible sequences that can be formed with them is the number of **permutations** of \(n\) elements, \(n!=n(n-1)(n-2)\cdots 2\cdot 1\);
- the number of possible sequences of \(k\) of them (\(1\leq k\leq n\)) is the number of **\(k\)-permutations** of \(n\) elements, \(n(n-1)\cdots (n-k+1)=n!/(n-k)!\) (by convention, \(0!=1\)).

`factorial(n)`

**Example**

An exam consists of eight questions and there is a unique set of eight possible answers.

- In how many ways can the exam be completed?
- In how many ways can the first two questions be answered?
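Both counts reduce to permutations and can be computed with the base `factorial` function mentioned above:

```r
# 8 questions matched to 8 distinct answers
ways_full   <- factorial(8)                      # complete the exam: 8! = 40320
ways_first2 <- factorial(8) / factorial(8 - 2)   # answer first two: 8*7 = 56
c(ways_full, ways_first2)  # 40320 56
```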

**Combinations (sets, order does not matter)** Given \(n\) objects: the number of possible sets of \(1\leq k\leq n\) elements that can be formed with them is the number of **combinations** of \(k\) out of \(n\) objects \[{n\choose k}=\frac{n!}{k!(n-k)!}.\]

`choose(n,k)`

**Example** We toss a coin \(5\) times,

- what is the number of possible outcomes? (size of the sample space)
- how many outcomes contain exactly \(2\) heads?
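In R, using the base `choose` function shown above:

```r
# Toss a coin 5 times
n_outcomes  <- 2^5            # size of the sample space: 32
n_two_heads <- choose(5, 2)   # outcomes with exactly 2 heads: 10
c(n_outcomes, n_two_heads)  # 32 10
```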

**Partitions**

Given \(n\) objects falling into \(r\) groups of indistinguishable objects, with \(n_i\) objects in group \(i\) (\(n_1+\cdots+n_r=n\)): the number of distinguishable sequences that can be formed with the \(n\) objects is the number of **partitions** of \(n\) objects into \(r\) groups with the \(i\)-th group having \(n_i\) objects, \[{n\choose n_1,n_2,\ldots,n_r}=\frac{n!}{n_1!n_2!\cdots n_r!}.\]

**Example** We toss a four-sided die \(5\) times,

- what is the number of possible outcomes? (size of the sample space)
- how many outcomes contain \(2\) ones, \(2\) twos, \(1\) three (and \(0\) fours)?
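The second count is a multinomial coefficient; both can be computed in base R:

```r
# Toss a four-sided die 5 times
n_outcomes <- 4^5    # sample space size: 1024 (generalized counting principle)
# outcomes with 2 ones, 2 twos, 1 three, 0 fours: 5! / (2! 2! 1! 0!)
n_fav <- factorial(5) /
  (factorial(2) * factorial(2) * factorial(1) * factorial(0))
c(n_outcomes, n_fav)  # 1024 30
```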