Appendix A
Probability, Random Variables and Random Processes
In this appendix, basic concepts from probability, random processes, and signal theory are reviewed.
1. Probability and Random Variables
Probability Space $(\Omega, \mathcal{F}, P)$

$\Omega$ is the sample space, the set of all possible outcomes.

$\mathcal{F}$ is a collection of events, which are subsets of $\Omega$, forming an algebra (field):
$$A \in \mathcal{F},\ B \in \mathcal{F} \Rightarrow A \cup B \in \mathcal{F}; \qquad \Omega \in \mathcal{F}; \qquad A \in \mathcal{F} \Rightarrow \bar{A} \in \mathcal{F}.$$

$P$ is a function from $\mathcal{F}$ to $[0,1]$ which satisfies
(i) $0 \le P(A) \le 1$ for all $A \in \mathcal{F}$,
(ii) $P(\Omega) = 1$,
(iii) if $A \cap B = \emptyset$ then $P(A \cup B) = P(A) + P(B)$ for all $A, B \in \mathcal{F}$.

A random variable $X(\omega)$ is a function from $\Omega$ to $\mathbb{R}$,
$$X : \Omega \to \mathbb{R},$$
that satisfies
$$\{\omega \in \Omega : X(\omega) \le x\} \in \mathcal{F} \quad \text{for all } x \in \mathbb{R}.$$

The distribution function $F_X(x)$ of a random variable is defined as
$$F_X(x) = P(X(\omega) \le x) = P(\{\omega \in \Omega : X(\omega) \le x\}).$$
Properties of distribution functions:
(i) $P(a < X(\omega) \le b) = F_X(b) - F_X(a)$
(ii) $F_X(x)$ is continuous at $x$ iff $P(X(\omega) = x) = 0$
(iii) $\lim_{y \downarrow x} F_X(y) = F_X(x)$ (right continuous)
(iv) $\lim_{x \to \infty} F_X(x) = 1$ and $\lim_{x \to -\infty} F_X(x) = 0$
If $F_X(x)$ is absolutely continuous then there exists a function $f_X(x)$ such that
$$F_X(x) = \int_{-\infty}^{x} f_X(u)\, du.$$
This function is called the density function. If a random variable has a density function we shall say the random variable is continuous.

Properties of density functions:
(i) $P(X(\omega) \in B) = \int_B f_X(u)\, du$ for $B \subset \mathbb{R}$
(ii) $f_X(x) = F_X'(x) = \dfrac{dF_X(x)}{dx}$
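As a quick numerical illustration of this relationship, the sketch below (assuming NumPy and SciPy are available; the point $x = 0.7$ and the standard Gaussian are arbitrary choices) checks that integrating the density recovers the distribution function.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

x = 0.7
# F_X(x) evaluated directly for a standard Gaussian
F_direct = stats.norm.cdf(x)
# F_X(x) as the integral of the density f_X(u) over (-inf, x]
F_integral, _ = quad(stats.norm.pdf, -np.inf, x)
print(F_direct, F_integral)  # both ~0.758
```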
If $F_X(x)$ is piecewise constant with a countable number of discontinuities then $X$ is said to be a discrete random variable. For discrete random variables we will use the probability mass function $p_X(x) \triangleq P(X = x)$.
Expectation of a Random Variable

The expectation of a continuous random variable is
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\, dx.$$
The expectation of a discrete random variable is
$$E[X] = \sum_{x : p_X(x) > 0} x\, p_X(x).$$
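Both formulas are easy to evaluate numerically. A minimal sketch (assuming NumPy/SciPy; the $N(2,1)$ variable and the fair die are arbitrary examples) computes one expectation of each kind:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Continuous: E[X] = integral of x f_X(x) dx, here for X ~ N(mu=2, sigma=1)
EX_cont, _ = quad(lambda x: x * stats.norm.pdf(x, loc=2, scale=1),
                  -np.inf, np.inf)

# Discrete: E[X] = sum of x p_X(x) over the support, here a fair die
xs = np.arange(1, 7)
EX_disc = np.sum(xs * (1 / 6))

print(EX_cont, EX_disc)  # ~2.0 and 3.5
```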
If $X_1, X_2, \ldots, X_n$ are random variables, the joint distribution $F_n(x_1, \ldots, x_n)$ is defined as
$$F_n(x_1, \ldots, x_n) = P(X_1 \le x_1, \ldots, X_n \le x_n).$$
If these random variables are (jointly) continuous then their joint density $f_n(x_1, \ldots, x_n)$ is defined as
$$f_n(x_1, \ldots, x_n) = \frac{\partial^n F_n(x_1, \ldots, x_n)}{\partial x_1 \cdots \partial x_n}.$$
If these random variables are discrete then the joint probability mass function is $p_n(x_1, \ldots, x_n) = P(X_1 = x_1, \ldots, X_n = x_n)$.
A complex random variable is a function from $\Omega$ to $\mathbb{C}$ (where $\mathbb{C}$ is the set of complex numbers),
$$X : \Omega \to \mathbb{C},$$
such that $\{\Re X \le x_r\} \in \mathcal{F}$ and $\{\Im X \le x_i\} \in \mathcal{F}$ for all $x_r, x_i \in \mathbb{R}$, where
$$X(\omega) = \Re X(\omega) + j\, \Im X(\omega).$$
Useful Bounds

1) Union Bound:
$$P(A \cup B) \le P(A) + P(B), \qquad P\left(\bigcup_{i=1}^{M} A_i\right) \le \sum_{i=1}^{M} P(A_i).$$
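A small Monte Carlo illustration of the union bound (a sketch assuming NumPy; the two overlapping events on a uniform sample are arbitrary choices) shows the inequality is strict when the events overlap:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
# Overlapping events A_1 = {u < 0.3}, A_2 = {0.2 < u < 0.5}
A1, A2 = u < 0.3, (u > 0.2) & (u < 0.5)
p_union = np.mean(A1 | A2)           # ~0.5, the true P(A_1 u A_2)
bound = np.mean(A1) + np.mean(A2)    # ~0.6, the union bound
print(p_union, bound)
```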
2) Chebyshev Bound: Let $m_X = E[X]$ and $\sigma_X^2 = E[(X - m_X)^2]$. Then
$$P(|X - m_X| \ge \delta) \le \frac{\sigma_X^2}{\delta^2}.$$
Proof:
$$\sigma_X^2 = \int_{-\infty}^{\infty} (x - m_X)^2 f_X(x)\, dx \ge \int_{|x - m_X| \ge \delta} (x - m_X)^2 f_X(x)\, dx \ge \delta^2 \int_{|x - m_X| \ge \delta} f_X(x)\, dx = \delta^2 P(|X - m_X| \ge \delta).$$
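The bound holds for any distribution with finite variance but can be quite loose. A minimal numerical comparison (assuming NumPy/SciPy; the standard Gaussian is an arbitrary test case):

```python
import numpy as np
from scipy import stats

m, sigma = 0.0, 1.0
for delta in [1.0, 2.0, 3.0]:
    exact = 2 * stats.norm.sf(delta)   # P(|X - m| >= delta) for X ~ N(0,1)
    bound = sigma**2 / delta**2        # Chebyshev bound
    print(delta, exact, bound)
# At delta = 3 the bound gives 0.111 while the exact tail is ~0.0027.
```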
3) Chernoff Bound: For any $s \ge 0$,
$$P(X \ge u) \le e^{-su} E[e^{sX}].$$
Proof: Let
$$g(x) = \begin{cases} 1, & x \ge u \\ 0, & x < u. \end{cases}$$
Since $s \ge 0$,
$$g(x) \le e^{s(x-u)}.$$
Thus
$$P(X \ge u) = \int_{u}^{\infty} f_X(x)\, dx = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx = E[g(X)] \le E\left[e^{s(X-u)}\right] = e^{-su} E[e^{sX}].$$
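Since the bound holds for every $s \ge 0$, it can be minimized over $s$. A sketch (assuming NumPy/SciPy) for a standard Gaussian, where $E[e^{sX}] = e^{s^2/2}$ so the bound is $\exp(-su + s^2/2)$, tightest at $s = u$:

```python
import numpy as np
from scipy import stats

u = 3.0
s = np.linspace(0.0, 6.0, 601)
bounds = np.exp(-s * u + s**2 / 2)       # e^{-su} E[e^{sX}] for X ~ N(0,1)
print(bounds.min(), np.exp(-u**2 / 2))   # both ~0.0111, attained near s = u
print(stats.norm.sf(u))                  # exact tail, ~0.00135
```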
Example: Let $X_1, \ldots, X_n$ be random variables. Let $H_0$ and $H_1$ be two events. Let $p_0(x_1, \ldots, x_n)$ be the conditional density function of $X_1, \ldots, X_n$ given $H_0$ and $p_1(x_1, \ldots, x_n)$ be the conditional density function of $X_1, \ldots, X_n$ given $H_1$. Find a bound on
$$P_e = P\left(p_1(X_1, \ldots, X_n) \ge p_0(X_1, \ldots, X_n) \mid H_0\right) = P\left(\frac{p_1(X_1, \ldots, X_n)}{p_0(X_1, \ldots, X_n)} \ge 1 \,\Big|\, H_0\right) = P\left(\ln \frac{p_1(X_1, \ldots, X_n)}{p_0(X_1, \ldots, X_n)} \ge 0 \,\Big|\, H_0\right).$$
Let $Y = \ln\big(p_1(X)/p_0(X)\big)$. Then by the Chernoff bound, for any $s \ge 0$,
$$P_e = P(Y \ge 0 \mid H_0) \le E\left[e^{sY} \mid H_0\right] = \int_{\mathbb{R}^n} \exp\left(s \ln \frac{p_1(x)}{p_0(x)}\right) p_0(x)\, dx = \int_{\mathbb{R}^n} p_1(x)^s\, p_0(x)^{1-s}\, dx.$$
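For concreteness, a sketch (assuming NumPy/SciPy; the unit-variance Gaussian hypotheses with means $\mu_0 = 0$, $\mu_1 = 2$ and the choice $s = 1/2$, which gives the Bhattacharyya version of the bound, are illustrative assumptions) compares the bound with the exact error probability of the threshold test:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu0, mu1, sigma, s = 0.0, 2.0, 1.0, 0.5
p0 = lambda x: stats.norm.pdf(x, mu0, sigma)
p1 = lambda x: stats.norm.pdf(x, mu1, sigma)

# Bound: integral of p1(x)^s p0(x)^(1-s) dx
bound, _ = quad(lambda x: p1(x)**s * p0(x)**(1 - s), -np.inf, np.inf)

# Exact error under H0: p1 >= p0 exactly when x >= (mu0 + mu1)/2
exact = stats.norm.sf((mu0 + mu1) / 2, mu0, sigma)
print(bound, exact)  # ~0.607 bound vs ~0.159 exact
```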
A random variable is Gaussian if the density function is
$$p_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right),$$
where $\mu$ is the mean and $\sigma^2$ is the variance. The characteristic function of a random variable $X$ is defined as $\phi_X(s) = E[e^{sX}]$. For a Gaussian random variable the characteristic function is
$$\phi_X(s) = e^{s^2\sigma^2/2 + \mu s}.$$
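A quick Monte Carlo check of this formula (a sketch assuming NumPy; the parameters $\mu = 1$, $\sigma = 2$, $s = 0.3$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, s = 1.0, 2.0, 0.3
x = rng.normal(mu, sigma, size=1_000_000)
print(np.mean(np.exp(s * x)))                 # empirical E[e^{sX}]
print(np.exp(s**2 * sigma**2 / 2 + mu * s))   # closed form, ~1.616
```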
Def: A function $g : \mathbb{R}^n \to \mathbb{R}$ is said to be concave (convex $\cap$) if for any $x_1 \in \mathbb{R}^n$, $x_2 \in \mathbb{R}^n$ and $0 \le \theta \le 1$,
$$\theta g(x_1) + (1 - \theta) g(x_2) \le g(\theta x_1 + (1 - \theta) x_2),$$
where the vector addition is component-wise addition. A function $g : \mathbb{R}^n \to \mathbb{R}$ is said to be convex (convex $\cup$) if
$$\theta g(x_1) + (1 - \theta) g(x_2) \ge g(\theta x_1 + (1 - \theta) x_2).$$

Jensen's Inequality: If $f(x)$ is a concave (convex $\cap$) function mapping $\mathbb{R}^n \to \mathbb{R}$ then
$$E[f(X)] \le f(E[X]).$$
If $f(x)$ is a convex (convex $\cup$) function mapping $\mathbb{R}^n \to \mathbb{R}$ then
$$E[f(X)] \ge f(E[X]).$$
Proof for discrete random variables (by induction): Let $X$ take on the values $x_1, x_2$ with nonzero probability. Then
$$E[f(X)] = p(x_1) f(x_1) + p(x_2) f(x_2) \le f\big(p(x_1) x_1 + p(x_2) x_2\big) = f(E[X]),$$
where the inequality is due to the definition of concavity.

Assume the result holds if $X$ is discrete taking values $x_1, \ldots, x_{n-1}$; that is, whenever $\sum_{i=1}^{n-1} p(x_i) = 1$,
$$\sum_{i=1}^{n-1} p(x_i) f(x_i) \le f\left(\sum_{i=1}^{n-1} p(x_i) x_i\right).$$
Now let $X$ take the values $x_1, x_2, \ldots, x_n$ and let $\alpha = \sum_{j=1}^{n-1} p(x_j)$. Then
$$E[f(X)] = \sum_{i=1}^{n} p(x_i) f(x_i) = \alpha \sum_{i=1}^{n-1} \frac{p(x_i)}{\alpha} f(x_i) + p(x_n) f(x_n)$$
$$\le \alpha f\left(\sum_{i=1}^{n-1} \frac{p(x_i)}{\alpha} x_i\right) + p(x_n) f(x_n) \qquad \text{(induction hypothesis, since } \textstyle\sum_{i=1}^{n-1} p(x_i)/\alpha = 1\text{)}$$
$$\le f\left(\alpha \sum_{i=1}^{n-1} \frac{p(x_i)}{\alpha} x_i + p(x_n) x_n\right) \qquad \text{(concavity, since } \alpha + p(x_n) = 1\text{)}$$
$$= f\left(\sum_{i=1}^{n} p(x_i) x_i\right) = f(E[X]).$$
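A one-line numerical illustration of Jensen's inequality (a sketch assuming NumPy; the concave function $f(x) = \ln x$ and a uniform variable on $[1, 5]$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 5.0, size=1_000_000)   # X uniform on [1, 5]
print(np.mean(np.log(x)))   # E[log X] ~ 1.012
print(np.log(np.mean(x)))   # log E[X] = log 3 ~ 1.099, the larger of the two
```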
Let $X_1, \ldots, X_n$ be a random vector. The covariance matrix of $X_1, \ldots, X_n$ is defined to be
$$K_X = \begin{pmatrix} K_{1,1} & K_{1,2} & \cdots & K_{1,n} \\ K_{2,1} & & & \vdots \\ \vdots & & \ddots & \\ K_{n,1} & \cdots & & K_{n,n} \end{pmatrix}$$
where
$$K_{i,j} = E[(X_i - \mu_i)(X_j - \mu_j)] \quad \text{and} \quad \mu_i = E[X_i].$$

Def: An $n \times n$ matrix is said to be nonnegative definite if for any vector $(a_1, \ldots, a_n)$,
$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_i K_{i,j} a_j \ge 0 \text{ and is real},$$
i.e.,
$$a K_X a^T \ge 0 \text{ and is real}$$
(positive definite if strict inequality holds).

Claim: The covariance matrix is always nonnegative definite.

Proof:
$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_i K_{i,j} a_j = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i E[(X_i - \mu_i)(X_j - \mu_j)]\, a_j = \sum_{i=1}^{n} \sum_{j=1}^{n} E[a_i (X_i - \mu_i)\, a_j (X_j - \mu_j)]$$
$$= E\left[\left(\sum_{i=1}^{n} a_i (X_i - \mu_i)\right)\left(\sum_{j=1}^{n} a_j (X_j - \mu_j)\right)\right] = E\left[\left|\sum_{i=1}^{n} a_i (X_i - \mu_i)\right|^2\right] \ge 0.$$
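This is easy to verify empirically. A sketch (assuming NumPy; the correlated four-dimensional sample is an arbitrary example) estimates a covariance matrix from data and checks the quadratic form and the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(10_000, 4))    # 10000 samples of a length-4 vector
X[:, 1] += 0.5 * X[:, 0]            # introduce some correlation
K = np.cov(X, rowvar=False)         # sample covariance matrix

a = rng.normal(size=4)
print(a @ K @ a)                    # quadratic form a K a^T, always >= 0
print(np.linalg.eigvalsh(K).min())  # smallest eigenvalue, also >= 0
```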
Let $X_1, \ldots, X_n$ be a real random vector. The characteristic function of $X_1, X_2, \ldots, X_n$ is defined as
$$\Psi_{X_1, \ldots, X_n}(\nu_1, \ldots, \nu_n) = E\left[\exp\left(j \sum_{i=1}^{n} \nu_i X_i\right)\right].$$
Def: The random vector $X_1, \ldots, X_n$ is said to be jointly Gaussian if the characteristic function of $X_1, \ldots, X_n$ is
$$\Psi_{X_1, \ldots, X_n}(\nu_1, \ldots, \nu_n) = \exp\left(j \nu^T \mu - \tfrac{1}{2} \nu^T K \nu\right),$$
where $\nu^T = (\nu_1, \ldots, \nu_n)$, $\mu^T = (\mu_1, \ldots, \mu_n)$ and $K$ is a real symmetric nonnegative definite $n \times n$ matrix. If $K$ is positive definite then the joint density of $X_1, \ldots, X_n$ is
$$p(x) = (2\pi)^{-n/2} (\det K)^{-1/2} \exp\left(-\tfrac{1}{2} (x - \mu)^T K^{-1} (x - \mu)\right).$$
Fact: Let $X$ be a random $n$-vector. Then $X$ is jointly Gaussian iff $X$ can be expressed as $X = WY + \mu$, where $\mu = (\mu_1, \ldots, \mu_n) \in \mathbb{R}^n$, $W$ is an $n \times n$ matrix and $Y_1, \ldots, Y_n$ are independent mean zero Gaussian random variables (the matrix $W$ can be taken to be orthogonal, i.e. the rows of $W$ are orthogonal). In this case
$$K_X = W K_Y W^T.$$
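This representation is also how jointly Gaussian vectors are commonly sampled: factor the target covariance with an eigendecomposition and scale independent Gaussians. A sketch (assuming NumPy; the 2-by-2 covariance $K$ and mean $\mu$ are arbitrary choices, and samples are kept as row vectors):

```python
import numpy as np

rng = np.random.default_rng(4)
K = np.array([[2.0, 0.8],
              [0.8, 1.0]])            # target covariance (symmetric, PD)
mu = np.array([1.0, -1.0])

lam, W = np.linalg.eigh(K)            # K = W diag(lam) W^T, W orthogonal
Y = rng.normal(scale=np.sqrt(lam), size=(100_000, 2))  # independent, var lam_i
X = Y @ W.T + mu                      # each row is W y + mu

print(np.cov(X, rowvar=False))        # ~K, i.e. K_X = W K_Y W^T
print(X.mean(axis=0))                 # ~mu
```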
Now let $X$ be a jointly Gaussian random vector (of length $n$) with mean $\mu$ and covariance matrix $K$, and let $F$ be an $n \times n$ matrix. Consider the random variable
$$Y = X F X^T.$$
We would like to be able to determine the density function of this random variable. Instead, we will determine its characteristic function, which is
$$\Psi_Y(\nu) = E[\exp(j\nu Y)] = \frac{\exp\left(j\nu\, \mu F (I - 2j\nu K F)^{-1} \mu^T\right)}{\det(I - 2j\nu K F)^{1/2}}.$$
For example, let $n = 1$; then $K = \sigma^2$ and
$$\Psi_Y(\nu) = \frac{\exp\left(\dfrac{j\nu \mu^2 F}{1 - 2j\nu\sigma^2 F}\right)}{(1 - 2j\nu\sigma^2 F)^{1/2}}.$$
Inverting this yields the Rician distributed random variable. For $\nu = js$, $F = 1$ the characteristic function becomes
$$E[\exp(-sY)] = E[\exp(-sX^2)] = \frac{\exp\left(\dfrac{-s\mu^2}{1 + 2s\sigma^2}\right)}{(1 + 2s\sigma^2)^{1/2}},$$
provided that $\Re(s) > -\dfrac{1}{2\sigma^2}$.
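The $n = 1$, $F = 1$ formula is easy to check by simulation. A sketch (assuming NumPy; the parameters $\mu = 1$, $\sigma = 1.5$, $s = 0.2$ are arbitrary, chosen to satisfy the condition on $s$):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, s = 1.0, 1.5, 0.2        # any s with Re(s) > -1/(2 sigma^2)
x = rng.normal(mu, sigma, size=1_000_000)
print(np.mean(np.exp(-s * x**2)))   # empirical E[exp(-s X^2)]
print(np.exp(-s * mu**2 / (1 + 2 * s * sigma**2))
      / np.sqrt(1 + 2 * s * sigma**2))   # closed form
```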
2. Random Processes

Def: A random process $\{X(t);\, t \in T\}$ is an indexed collection of random variables (i.e. for each $t \in T$, the index set, $X(t)$ is a random variable).

Def: The covariance function of a random process $\{X(t);\, t \in T\}$ is defined as
$$K(s, t) = E[(X(s) - \mu(s))(X(t) - \mu(t))],$$
where $\mu(t) = E[X(t)]$.

Def: A function $K(s, t) : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is said to be nonnegative definite if for any $n \ge 1$, time instants $t_1, \ldots, t_n$ and any function $a(t)$,
$$\sum_{i=1}^{n} \sum_{j=1}^{n} a(t_i) K(t_i, t_j) a(t_j) \ge 0 \text{ (and is real)}$$
(positive definite if strict inequality holds). Equivalently,
$$\int\!\!\int a(t) K(t, s) a(s)\, dt\, ds \ge 0 \text{ and is real}.$$

Claim: The covariance function is a nonnegative definite function.

Def: A random process is said to be Gaussian if for any $n$ and time instants $t_1, \ldots, t_n$, $(X(t_1), \ldots, X(t_n))$ is jointly Gaussian.
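As an illustration of the nonnegative definiteness condition, the sketch below (assuming NumPy; the kernel $K(s,t) = \min(s,t)$, the covariance function of the Wiener process, and the 20-point grid are illustrative choices) evaluates the double sum on a grid of time instants:

```python
import numpy as np

t = np.linspace(0.1, 2.0, 20)
K = np.minimum.outer(t, t)           # K(t_i, t_j) = min(t_i, t_j)
print(np.linalg.eigvalsh(K).min())   # smallest eigenvalue >= 0

a = np.random.default_rng(6).normal(size=20)   # arbitrary a(t_i)
print(a @ K @ a)                     # the double sum, always >= 0
```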