
Lecture Notes for Stochastic Differential Equations --- A paraphrase from Professor Xue's Lectures (Lecture 2)

Introduction

In this lecture, we reviewed the definition of conditional probability with respect to a sub-$\sigma$-field, and then worked through several special examples of computing it. After introducing conditional probability with respect to a random variable and discussing the measurability of functions, we gave an easier way to verify a candidate conditional probability. Finally, we introduced the so-called $\pi$-$\lambda$ theorem and gave an application of it.

General Case

Consider the probability measure space $(\Omega, \mathcal F, \mathbb P)$, an event $A \in \mathcal F$, and a sub-$\sigma$-field $\mathcal G$. The conditional probability $\mathbb P\{A|\mathcal G\}$ satisfies

  1. $\mathbb P\{A|\mathcal G\}$ is $\mathcal G$-measurable
  2. $\int_G\mathbb P\{A|\mathcal G\}dP = \mathbb P\{A\cap G\}, \forall G \in \mathcal G$.

As a random variable, $\mathbb P\{A|\mathcal G\}$ is uniquely determined in the sense of almost-everywhere equality.

These two requirements may seem surprising at first sight, yet they have natural probabilistic interpretations.

The first condition tells us that $\mathbb P\{A|\mathcal G\}$ can be determined from the knowledge of $\mathcal G$ alone.

The second condition is understood by determining the fair price of a gamble that pays 1 if $A$ happens. If the expected gain is to be zero, the price of entering the gamble must be $\mathbb P\{A|\mathcal G\}$. To see this, note that if $A$ happens, the gain is $1 - \mathbb P\{A|\mathcal G\}$, and if $A$ does not happen, the gain is $-\mathbb P\{A|\mathcal G\}$. Together, the gain is

$$(1 - \mathbb P\{A|\mathcal G\})\mathbf 1_A - \mathbb P\{A|\mathcal G\} \mathbf 1_{A^c} = \mathbf 1_A - \mathbb P\{A|\mathcal G\}$$

which leads to the second condition once we integrate over $G$.
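Explicitly, requiring the expected gain to vanish on every $G \in \mathcal G$ gives

$$0 = \int_G \left(\mathbf 1_A - \mathbb P\{A|\mathcal G\}\right) dP = \mathbb P\{A\cap G\} - \int_G \mathbb P\{A|\mathcal G\}\, dP,$$

which is exactly the second condition.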

Ex. What is $\mathbb P\{A|\mathcal G\}$ if $A \in \mathcal G$?

Solution. This is Example 33.3 in Billingsley. The answer is $\mathbf 1_A$: it is $\mathcal G$-measurable because $A \in \mathcal G$, and $\int_G \mathbf 1_A\, dP = \mathbb P\{A \cap G\}$ for all $G \in \mathcal G$.

Ex. What is $\mathbb P\{A|\mathcal G\}$ if $\mathcal G = \{\emptyset, \Omega\}$?

Solution. This is Example 33.4 in Billingsley. The answer is the constant $\mathbb P\{A\}$, since the only $\{\emptyset, \Omega\}$-measurable functions are constants.

Ex. Suppose $A$ is independent of $\mathcal G$, i.e., $\mathbb P\{A\cap G\} = \mathbb P\{A\} \mathbb P\{G\}, \forall G\in\mathcal G$, then what is $\mathbb P\{A|\mathcal G\}$?

Solution. Intuitively, it is the constant $\mathbb P\{A\}$.
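Both defining properties are easy to check: the constant $\mathbb P\{A\}$ is $\mathcal G$-measurable, and independence gives

$$\int_G \mathbb P\{A\}\, dP = \mathbb P\{A\}\, \mathbb P\{G\} = \mathbb P\{A\cap G\}, \quad \forall G \in \mathcal G.$$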

Ex. Suppose $\Omega = \mathbb R^2$, $\omega = (x, y)$, $\mathcal F = \mathcal B(\mathbb R^2)$, and

$$\mathbb P\{A\} = \int_A f(x, y)\,dx dy, \quad \forall A \in \mathcal F.$$

Here, $f(x, y)$ is a Borel-measurable function, $f: \mathbb R^2 \to \mathbb R$, with $f(x, y) \ge 0$ almost everywhere and $\iint_{\mathbb R^2} f(x, y) \,dx\,dy =1$. Countable additivity is trivial to verify, so $\mathbb P$ is a probability measure. Suppose $\mathcal G$ is the sub-$\sigma$-field generated by sets of the form $E\times \mathbb R^1 = \{(x, y): x\in E\}$, $E\in \mathcal B(\mathbb R^1)$, and let $A = \mathbb R^1\times F = \{(x,y): y\in F\}$ for some fixed $F \in \mathcal B(\mathbb R^1)$. What is $\mathbb P\{A|\mathcal G\}$?

Solution. One claims that

$$\varphi(x, y) = \frac{\int_F f(x,t)\,dt}{\int_{\mathbb R^1}f(x,t)\,dt}$$

is a version of $\mathbb P\{A|\mathcal G\}$. To verify this, one should check the second property of the definition. Since a general element of $\mathcal G$ takes the form $E \times \mathbb R^1$, it suffices to prove that

$$\int_{E \times \mathbb R^1} \varphi(x, y)\,dP(x, y) = \mathbb P\{A\cap(E\times\mathbb R^1)\}.$$

Notice that the right-hand side of the equation is exactly $\mathbb P\{E \times F\}$. To evaluate the left-hand side, one resorts to Fubini's Theorem and a previously established theorem.
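Spelling out the left-hand side, using that integration against $dP$ is integration against $f(x,y)\,dx\,dy$ and applying Fubini's Theorem to integrate out $y$ first:

$$\begin{split}\int_{E\times\mathbb R^1}\varphi(x, y)\, dP(x, y) &= \int_E \frac{\int_F f(x,t)\,dt}{\int_{\mathbb R^1} f(x,t)\,dt} \left(\int_{\mathbb R^1} f(x, y)\, dy\right) dx \\ &= \int_E \int_F f(x, t)\, dt\, dx = \mathbb P\{E\times F\}.\end{split}$$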

It is also possible to define a conditional probability with respect to a random variable $X$. This is done by defining $\sigma(X) = \{X^{-1}(B) : B\in \mathcal B(\mathbb R^1)\}$ and setting $\mathbb P\{A|X\} = \mathbb P\{A|\sigma(X)\}$.

 


Lecture Notes for Stochastic Differential Equations --- A paraphrase from Professor Xue's Lectures

In this lecture note, we are going to discuss 7 topics, which are outlined as follows:

  1. Conditional Expectation
  2. Discrete-Time Martingale
  3. Continuous-Time Martingale
  4. Stochastic Integral
  5. Strong Solution of SDE
  6. Weak Solution of SDE (optional)
  7. Applications (optional)

References

  1. Ioannis Karatzas and Steven E. Shreve, Brownian Motion and Stochastic Calculus
  2. Rick Durrett, Probability: Theory and Examples
  3. Patrick Billingsley, Probability and Measure

Topic 1: Conditional Probability, Distribution and Expectations

Let $(\Omega, \mathcal F, \mathbb P)$ be a probability measure space, and let $X$ be a random variable. That is, $X$ is a function from $\Omega$ to $\overline{\mathbb R}$ satisfying, for all $B \in \mathcal B(\mathbb R^1)$,

$$X^{-1}(B) \in \mathcal F.$$

This means every pre-image of a Borel set of the real line is in the $\sigma$-algebra.

Mathematical expectation is defined by

$$\mathbb E X = \int_\Omega X \,dP = \int_{\mathbb R^1} x \,\mu(dx),$$

where $\mu(B)$ represents the probability of the preimage of Borel set $B \in \mathcal B(\mathbb R^1)$, i.e.,

 $$\mu(B) = \mathbb P \{X^{-1}(B)\}.$$

Ex: Prove that $\mu$ is a measure on $(\mathbb R^1, \mathcal B(\mathbb R^1))$.

Proof. Let us write down the definition of a measure; the following is adapted from Wikipedia, measure (mathematics). For $\mu$ to be a measure, three conditions must hold:

Let $\Sigma$ be a $\sigma$-algebra over a set; a function $\mu$ from $\Sigma$ to the extended real number line is called a measure if it satisfies the following properties:

  • Non-negativity
  • Null empty set
  • Countable additivity

First of all, $\mu$ is a set function from $\mathcal B(\mathbb R^1)$ to $[0, 1]$. Non-negativity follows from the non-negativity of $\mathbb P$. For the null empty set property, we check that $B = \emptyset$ implies $X^{-1}(B) = \emptyset$; by the null empty set property of $\mathbb P$, we arrive at the null empty set property of $\mu$. Finally, for countable additivity, suppose $\{B_i\}_{i=1}^\infty$ are disjoint sets in $\mathcal B(\mathbb R^1)$; then by the definition of $\mu$, we have

$$\mu\left(\bigcup_{i=1}^\infty B_i\right) = \mathbb P\left\{X^{-1}\left(\bigcup_{i=1}^\infty B_i \right)\right\}.$$

Since taking pre-images commutes with unions, we have

 $$\mathbb P\left\{X^{-1}\left(\bigcup_{i=1}^\infty B_i \right)\right\} = \mathbb P\left\{ \bigcup_{i=1}^\infty X^{-1}(B_i)\right\}.$$

Finally, since the pre-images $X^{-1}(B_i)$ inherit the disjointness of $\{B_i\}_{i=1}^\infty$, the countable additivity of $\mathbb P$ gives

$$\mathbb P\left\{ \bigcup_{i=1}^\infty X^{-1}(B_i)\right\} = \sum_{i=1}^\infty \mathbb P\{X^{-1}(B_i)\} = \sum_{i=1}^\infty \mu(B_i)$$

which establishes countable additivity and completes the proof.
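As a quick numerical illustration (a minimal sketch, not from the lecture; the choice of a standard normal $X$ is ours), the pushforward measure $\mu(B) = \mathbb P\{X^{-1}(B)\}$ of an interval $B = (a, b]$ can be estimated by Monte Carlo and compared against the known law of $X$:

```python
import numpy as np
from scipy.stats import norm

# Take X to be a standard normal random variable; its pushforward
# measure mu on (R, B(R)) is then the standard normal law.
rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)

# For B = (a, b], mu(B) = P{X in B} should match Phi(b) - Phi(a).
a, b = -1.0, 0.5
mu_empirical = np.mean((samples > a) & (samples <= b))
mu_exact = norm.cdf(b) - norm.cdf(a)
print(f"empirical mu(B) = {mu_empirical:.4f}, exact = {mu_exact:.4f}")
```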

1.1 Sub-$\sigma$-field and Information

In this subsection, a heuristic example is provided to explain the meaning of a sub-$\sigma$-field. In general, a sub-$\sigma$-field can be loosely understood as information.

Example: [Toss of a coin 3 times]

All the possible outcomes constitute the sample space $\Omega$, which takes the form

$$\Omega = \{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT\}.$$

After the first toss, the sample space can be divided into two parts, $\Omega = \Omega_1 \cup \Omega_2$, where

$\Omega_1 = \{HHH, HHT, HTH, HTT\}$ and $\Omega_2 = \Omega - \Omega_1$.

We can consider the corresponding $\sigma$-algebra $\mathcal G_1 = \{\emptyset, \Omega, \Omega_1, \Omega_2\}$, which stands for the ``information'' after the first toss. Given a sample $\omega$ whose first toss is a head, we can tell that $\omega$ is not in $\emptyset$, $\omega$ is in $\Omega$, $\omega$ is in $\Omega_1$, and $\omega$ is not in $\Omega_2$. Looked at from another angle, this $\sigma$-algebra contains all the events that can be decided by the first toss alone.

It is quite easy to generalize to the ``information'' $\sigma$-field after the second toss, $\mathcal G_2 = \sigma\left(\{\Omega_{11}, \Omega_{12}, \Omega_{21}, \Omega_{22}\}\right)$, where $\Omega_{ij}$ collects the outcomes whose first two tosses are fixed ($\Omega_{11} = \{HHH, HHT\}$, and so on).

Generally speaking, if $\mathcal G$ is a sub-$\sigma$-field of $\mathcal F$, having the information of $\mathcal G$ is understood as follows:

for all $A \in \mathcal G$, one knows whether $\omega \in A$ or not. In other words, the indicator function $\mathbf 1_A(\omega)$ is well-defined.
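To make the ``information'' picture concrete, here is a small sketch (ours, not from the lecture) for the three-toss example: it builds $\mathcal G_1$ as the unions of the partition blocks and checks that membership in every $A \in \mathcal G_1$ is decided by the first toss alone, i.e., that $\mathbf 1_A(\omega)$ is well-defined given only that information.

```python
from itertools import product

# Sample space of three coin tosses.
omega_space = ["".join(t) for t in product("HT", repeat=3)]

# Partition of Omega by the outcome of the first toss.
omega1 = frozenset(w for w in omega_space if w[0] == "H")
omega2 = frozenset(w for w in omega_space if w[0] == "T")

# G_1 = {emptyset, Omega_1, Omega_2, Omega}: all unions of blocks.
g1 = [frozenset(), omega1, omega2, omega1 | omega2]

# For each A in G_1, two samples that agree on the first toss must
# agree on membership in A, so 1_A is decided by the first toss.
for a in g1:
    assert all((w in a) == (v in a)
               for w in omega_space for v in omega_space if w[0] == v[0])
print("Every A in G_1 is decided by the first toss alone.")
```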

1.2 Conditional Probability

In this subsection, a theoretical treatment of conditional probability is given. As we know from elementary probability theory, the natural definition of conditional probability is governed by the following equation

$$\mathbb P \{ A \vert B\} = \frac{\mathbb P\{A\cap B\}}{\mathbb P\{B\}}.$$

Therefore, it is natural to ask: how should one define $\mathbb P\{A|\mathcal G\}$, where $\mathcal G$ is a sub-$\sigma$-field?

Note: These lecture notes follow mainly reference book 3, Patrick Billingsley's Probability and Measure, Section 33.

Sometimes $\mathcal G$ is quite complicated. Thus we first consider the simple case where $\mathcal G$ is generated by disjoint sets $\{B_i\}_{i=1}^\infty$ with $\Omega = \bigcup_{i=1}^\infty B_i$, so that $\mathcal G = \sigma(\{B_1, B_2, \ldots, B_n, \ldots\})$.

Then $\mathbb P\{A|\mathcal G\}$ can be defined pathwise, by

$$\mathbb P\{A|\mathcal G\}_\omega = \sum_{i=1}^\infty \mathbf 1_{B_i}(\omega)\, \frac{\mathbb P\{A\cap B_i\}}{\mathbb P\{B_i\}},$$

so that $\mathbb P\{A|\mathcal G\}_\omega = \mathbb P\{A|B_i\}$ for every sample $\omega \in B_i$.
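A minimal computational sketch of this pathwise formula (ours; the fair-coin space and the event $A$ are illustrative choices): on a finite sample space, $\mathbb P\{A|\mathcal G\}_\omega$ is constant on each block $B_i$ and equals $\mathbb P\{A\cap B_i\}/\mathbb P\{B_i\}$ there.

```python
from itertools import product

def conditional_probability(p, a, partition):
    """Pathwise P{A | sigma(partition)}: on each block B_i the value
    is P(A & B_i) / P(B_i), assigned to every omega in B_i."""
    cond = {}
    for block in partition:
        p_b = sum(p[w] for w in block)
        p_ab = sum(p[w] for w in block if w in a)
        for w in block:
            cond[w] = p_ab / p_b
    return cond

# Fair three-toss coin space, conditioned on the first toss.
omega_space = ["".join(t) for t in product("HT", repeat=3)]
p = {w: 1 / 8 for w in omega_space}
partition = [{w for w in omega_space if w[0] == "H"},
             {w for w in omega_space if w[0] == "T"}]
a = {w for w in omega_space if w.count("H") >= 2}  # at least two heads

cond = conditional_probability(p, a, partition)
print(cond["HTT"], cond["THH"])  # 0.75 on the H-block, 0.25 on the T-block
```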

Before we get to the formal definition, we examine an example, which comes from the problem of predicting the probability of telephone calls.

Example: [Apply Simple Case to Computing Conditional Probability]

Consider a Poisson process $\{N_t, t\ge 0\}$ with intensity $\alpha$ on a probability space $(\Omega, \mathcal F, \mathbb P)$. Fix $0 < s < t$, and let $\mathcal G = \sigma(N_t)$ and $A = \{\omega\in \Omega : N_s(\omega) = 0\}$. Compute $\mathbb P\{A|\mathcal G\}$.

Solution. Let us first recall some facts about the Poisson process. Following Wikipedia (Poisson process), it is characterized by

  • $N_0=0$
  • Independent increments
  • Stationary increments
  • No counted occurrences are simultaneous

A consequence of this definition is that $N_t \sim \mathrm{Pois}(\alpha t)$, where $\alpha$ is the intensity.

Note: If $\alpha$ is a constant, this is the case of the homogeneous Poisson process, which is an example of a Lévy process.

It follows that

$$ \mathbb P\left\{ N_t - N_s = n \right\} = e^{-\alpha(t-s)} \frac{(\alpha(t-s))^n}{n!}$$

for all  $n=0,1,2,\ldots$ and $0 \le s < t < +\infty$.

A word of explanation is also needed for $\mathcal G = \sigma(N_t)$: this is the $\sigma$-field generated by the single random variable $N_t$, i.e., $\sigma(N_t) = \{N_t^{-1}(B) : B\in \mathcal B(\mathbb R^1)\}$.

Now, let $B_i = \{\omega: N_t(\omega) = i\}$, $i=0, 1, 2, \ldots$. Obviously, the union of all these sets is the sample space $\Omega$, and they are disjoint. Then by the computation formula in the simple case, we have

$$\mathbb P\{A|\mathcal G\}_\omega = \mathbb P\{A|B_i\} = \frac{\mathbb P\{A\cap B_i\}}{\mathbb P\{B_i\}}, \quad \omega \in B_i.$$

 To compute $\mathbb P\{A\cap B_i\}$ and $\mathbb P\{B_i\}$, we have

$$\begin{split} \mathbb P\{A\cap B_i\} &= \mathbb P\{N_s=0, N_t=i\} \\ &= \mathbb P\{N_s=0\}\,\mathbb P\{N_t - N_s=i\}\\ &= e^{-\alpha s}\, e^{-\alpha (t-s)}\frac{(\alpha(t-s))^i}{i!} = e^{-\alpha t}\frac{(\alpha(t-s))^i}{i!}\end{split}$$

and

$$\mathbb P\{B_i\}=\mathbb P\{N_0=0, N_t = i\}=e^{-\alpha t} \frac{(\alpha t)^i}{i!}$$

which gives rise to the final result

$$\mathbb P\{A|\mathcal G\}_\omega=\left(1 - \frac{s}{t}\right)^{N_t(\omega)}.$$
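A Monte Carlo sanity check of this result (a sketch with illustrative parameter values, not from the lecture): conditionally on $N_t = i$, the $i$ arrival times of a homogeneous Poisson process are i.i.d. uniform on $[0, t]$, so the fraction of paths with no arrival before $s$ should approach $(1 - s/t)^i$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, s, t = 1.0, 0.4, 1.0      # intensity and time points, s < t
n_paths = 200_000

# Draw N_t for each path; given N_t = i, the arrival times are
# i.i.d. Uniform[0, t].
n_t = rng.poisson(alpha * t, size=n_paths)

for i in [1, 2, 3]:
    n_cond = int(np.sum(n_t == i))
    arrivals = rng.uniform(0.0, t, size=(n_cond, i))
    estimate = np.all(arrivals > s, axis=1).mean()
    print(f"i={i}: MC {estimate:.4f} vs (1 - s/t)^i = {(1 - s/t)**i:.4f}")
```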

Ex. Prove that $\mathbb P\{A|\mathcal G\}$ is $\mathcal G$-measurable.

Proof. Recall the definition of $\mathcal G$-measurability. A random variable $X:\Omega\to\overline{\mathbb R}$ is $\mathcal G$-measurable if for all $B \in \mathcal B(\mathbb R)$, its pre-image satisfies $X^{-1}(B) \in \mathcal G$, or equivalently, $\{\omega\in\Omega : X(\omega) \in B\} \in \mathcal G$. In this case, we are to prove that $\{\omega\in\Omega : \mathbb P\{A|\mathcal G\}_\omega \in B\} \in \mathcal G$, which reduces to showing that this set can be written as a union of some of the $B_i$.

This holds because $\mathbb P\{A|\mathcal G\}$ is constant on each $B_i$: the set in question is exactly the union of those $B_i$ whose constant value lies in $B$, and such a union belongs to $\mathcal G$.

Ex. Prove that $\int_{B_i}\mathbb P\{A|\mathcal G\} dP = \int_{B_i} \mathbf 1_A dP$ holds.

Proof. Note that the left-hand side equals $\int_{B_i}\frac{\mathbb P\{A\cap B_i\}}{\mathbb P\{B_i\}}\, dP = \mathbb P\{A\cap B_i\}$, which is identical to the right-hand side $\int_{B_i} \mathbf 1_A\, dP = \mathbb P\{A\cap B_i\}$.

Now we go further to the general case for $\mathcal G$. Suppose $(\Omega, \mathcal F, \mathbb P)$ is a probability measure space, $\mathcal G$ is a sub-$\sigma$-field, and $A\in \mathcal F$ is an event. Then we define the conditional probability of $A$ given $\mathcal G$ as a random variable satisfying (1) $\mathbb P\{A|\mathcal G\}$ is $\mathcal G$-measurable; (2) $\int_{G}\mathbb P\{A|\mathcal G\}\, dP = \int_{G} \mathbf 1_A\, dP$ for all $G \in \mathcal G$. Such a random variable exists and is unique up to a.e. equivalence, by the Radon-Nikodym Theorem.

Let $G \in \mathcal G$ and define $\nu(G) = \mathbb P\{A\cap G\}$.

Ex. Prove that $\nu$ is a measure on $(\Omega, \mathcal G)$.

Proof. $\nu$ is a set function $\nu: \mathcal G \to [0, 1]$. Non-negativity and the null empty set property are trivial. Countable additivity follows from the countable additivity of the probability measure $\mathbb P$.

Ex. Prove that $\nu$ is absolutely continuous with respect to $\mathbb P$ on $(\Omega, \mathcal G)$.

Proof. By Wikipedia (absolute continuity), $\nu$ being absolutely continuous with respect to $\mathbb P$ on $(\Omega, \mathcal G)$ means that for all $G\in \mathcal G$, $\mathbb P\{G\} = 0$ implies $\nu(G) = 0$. This is obvious, since $0\le \nu(G) = \mathbb P\{A \cap G\} \le \mathbb P\{G\} = 0$.
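Putting the two exercises together: $\nu$ is a finite measure on $(\Omega, \mathcal G)$ with $\nu \ll \mathbb P$, so the Radon-Nikodym Theorem provides a $\mathcal G$-measurable density, and this density is exactly the conditional probability:

$$\mathbb P\{A|\mathcal G\} = \frac{d\nu}{d\mathbb P}, \qquad \int_G \frac{d\nu}{d\mathbb P}\, dP = \nu(G) = \mathbb P\{A\cap G\}, \quad \forall G \in \mathcal G.$$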

Ex. If $f$ and $g$ are $\mathcal G$-measurable, and $\int_G f\,dP = \int_G g\,dP$ for all $G \in \mathcal G$, then $f=g$ almost surely.

Proof. For each $n$, let $G_n = \{f > g + 1/n\} \in \mathcal G$. Then $0 = \int_{G_n} (f - g)\,dP \ge \frac 1n \mathbb P\{G_n\}$, so $\mathbb P\{G_n\} = 0$ and hence $\mathbb P\{f > g\} = \mathbb P\{\bigcup_n G_n\} = 0$. By symmetry, $\mathbb P\{f < g\} = 0$, so $f = g$ almost surely.
