# Computer Vision: Models, Learning and Inference (CV201) Midterm


Computer Vision: Models, Learning and Inference (CV201)

Midterm Exam #1

Lecturer: Oren Freifeld
Teaching Assistant: Meitar Ronen
Department of Computer Science, Ben-Gurion University of the Negev

15/12/2019

- You can answer in either Hebrew or English.
- The midterm exam is with "Closed Material" (i.e., you are not allowed to bring your own formula pages, books, notebooks, etc.).
- Calculators are allowed.

There are 3 problems:

- Problem #1: (2D random vector) 33 points
- Problem #2: (MRF) 42 points
- Problem #3: (a twist on the Ising model) 25 points

Good luck.


Problem 1 (33 points). Let $X = \begin{bmatrix} X_1 & X_2 \end{bmatrix}^T$ be a two-dimensional random vector taking values in $S = \{0,1\}^2$. The probability mass function (pmf) of $X$ is given by

$$p(0,0) = 0.25; \quad p(0,1) = 0.25; \quad p(1,0) = 0.25; \quad p(1,1) = 0.25.$$

Part 1. Find the marginal pmf of $X_1$ (denoted by $p(x_1)$) and the marginal pmf of $X_2$ (denoted by $p(x_2)$).

Part 2. Find the 2D mean vector of $X$, denoted by $\mu_X \triangleq E(X)$.

Part 3. Find the 2-by-2 covariance matrix of $X$, denoted by $\Sigma_X \triangleq E\big((X - \mu_X)(X - \mu_X)^T\big)$.

Part 4. Are $X_1$ and $X_2$ independent?

Part 5. Are $X_1$ and $X_2$ uncorrelated?

Only for Parts 6–11, inclusive, let $Y_1 = X_1 + X_2$ and let $Y_2 = |X_1 - X_2|$.

Part 6. Find the pmf of $Y = \begin{bmatrix} Y_1 & Y_2 \end{bmatrix}^T$.

Part 7. Find the marginal pmf of $Y_1$ (denoted by $p(y_1)$) and the marginal pmf of $Y_2$ (denoted by $p(y_2)$).

Part 8. Find the 2D mean vector of $Y$, denoted by $\mu_Y \triangleq E(Y)$.

Part 9. Find the 2-by-2 covariance matrix of $Y$, denoted by $\Sigma_Y \triangleq E\big((Y - \mu_Y)(Y - \mu_Y)^T\big)$.

Part 10. Are $Y_1$ and $Y_2$ independent?

Part 11. Are $Y_1$ and $Y_2$ uncorrelated?
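Since $S$ has only four equally likely outcomes, all of Parts 1–11 can be sanity-checked by direct enumeration. A minimal NumPy sketch (variable names such as `p_x1` and `mu_x` are illustrative, not part of the exam):

```python
import numpy as np
from collections import Counter

# The four outcomes of X = [X1, X2]^T and their (uniform) probabilities.
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]
probs = [0.25, 0.25, 0.25, 0.25]

# Marginal pmfs of X1 and X2 (Part 1).
p_x1, p_x2 = Counter(), Counter()
for (x1, x2), p in zip(outcomes, probs):
    p_x1[x1] += p
    p_x2[x2] += p

# Mean vector and covariance matrix of X (Parts 2-3).
xs = np.array(outcomes, dtype=float)
w = np.array(probs)
mu_x = w @ xs                                          # E(X)
sigma_x = (xs - mu_x).T @ np.diag(w) @ (xs - mu_x)     # E((X-mu)(X-mu)^T)

# Pmf of Y = [X1 + X2, |X1 - X2|]^T (Part 6).
p_y = Counter()
for (x1, x2), p in zip(outcomes, probs):
    p_y[(x1 + x2, abs(x1 - x2))] += p
```

The same weighted-enumeration pattern answers the remaining parts for $Y$ as well.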


Problem 2 (42 points). Let $X = \begin{bmatrix} X_1 & X_2 & X_3 & X_4 \end{bmatrix}^T$ denote a discrete random vector, where all 4 random variables, $(X_i)_{i=1}^4$, take values in $S$, where $S$ is a finite set. Assume the probability mass function (pmf) of $X$ is given by

$$p(x) = p(x_1, x_2, x_3, x_4) \propto H_{12}(x_1, x_2)\, H_{23}(x_2, x_3)\, H_{34}(x_3, x_4)\, H_{14}(x_1, x_4) \tag{1}$$

such that $p(x) > 0$ for all $x = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T \in S^4$. It follows that $p$ is a Markov Random Field (MRF) w.r.t. a certain graph, $G_{x_{1234}}$. Let $Y = \begin{bmatrix} Y_1 & Y_2 & Y_3 & Y_4 \end{bmatrix}^T$ denote a 4-dimensional continuous random vector defined as follows: for $i = 1, 2, 3, 4$, let $y_i = x_i + n_i$, where $x \sim p(x)$ and $n_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, \sigma^2)$ for some $\sigma > 0$. It is further assumed that $(n_i)_{i=1}^4$ and $x$ are independent. Note $y = \begin{bmatrix} y_1 & y_2 & y_3 & y_4 \end{bmatrix}^T \in \mathbb{R}^4$.

Part 1. Draw $G_{x_{1234}}$. Let $C_{x_{1234}}$ denote the set of cliques in $G_{x_{1234}}$. Find $C_{x_{1234}}$.

Part 2. Write the likelihood, $p(y|x) = p(y_1, y_2, y_3, y_4 \,|\, x_1, x_2, x_3, x_4)$, in terms of $\{G(x_i, y_i)\}_{i=1}^4$, where

$$G(x_i, y_i) \triangleq \mathcal{N}(y_i;\, x_i, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\!\left(-\frac{1}{2}\,\frac{(y_i - x_i)^2}{\sigma^2}\right). \tag{2}$$
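Because the noise terms $(n_i)_{i=1}^4$ are independent, the likelihood factorizes over the coordinates. A small sketch evaluating Eq. (2) and the resulting product (function names are illustrative):

```python
import math

def gauss(y_i, x_i, sigma):
    """G(x_i, y_i) = N(y_i; x_i, sigma^2), i.e., Eq. (2)."""
    return math.exp(-0.5 * (y_i - x_i) ** 2 / sigma ** 2) / math.sqrt(2 * math.pi * sigma ** 2)

def likelihood(y, x, sigma):
    """p(y|x): with independent per-coordinate noise, the likelihood
    is the product of the four Gaussian factors."""
    p = 1.0
    for y_i, x_i in zip(y, x):
        p *= gauss(y_i, x_i, sigma)
    return p
```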

Part 3. Express $p(x_1, x_2, x_3, x_4, y_1, y_2, y_3, y_4)$ in terms of $H_{12}, H_{23}, H_{34}, H_{14}$ and $\{G(x_i, y_i)\}_{i=1}^4$.

Part 4. $p(x_1, x_2, x_3, x_4, y_1, y_2, y_3, y_4)$ is an MRF w.r.t. a certain graph, $G_{x_{1234}, y_{1234}}$. Draw that graph.

Part 5. Answer Yes/No/Maybe to each of the following questions:

- Are $x_1$ and $x_2$ conditionally independent given $x_3$?
- Are $x_1$ and $x_3$ conditionally independent given $y_1$, $x_2$, and $x_4$?
- Are $x_1$ and $y_3$ conditionally independent given $x_2$?

Part 6. Draw $G_{y_{1234}}$, the graph of the MRF associated with

$$p(y_1, y_2, y_3, y_4) = \sum_{x_1, x_2, x_3, x_4} p(x_1, x_2, x_3, x_4, y_1, y_2, y_3, y_4). \tag{3}$$


Part 7. Suppose now that $S = \{-42, 42\}$; thus, $x_i \in \{-42, 42\}$ for every $i$. Find the maximum-likelihood estimator,

$$\hat{x}_{\mathrm{ML}} = \arg\max_{x \in S^4} p(y|x). \tag{4}$$

To clarify, the maximization is done over all 4-length tuples, $x = (x_i)_{i=1}^4$, such that $x_i \in \{-42, 42\}$.
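Since the likelihood of Part 2 factorizes over coordinates, the maximization in (4) decouples into four independent per-coordinate problems, each of which amounts to picking the candidate in $S$ closest to $y_i$. A sketch of that decoupled search (the function name is illustrative):

```python
import numpy as np

def ml_estimate(y, s=(-42, 42)):
    """Per-coordinate ML estimate for y_i = x_i + n_i with Gaussian noise.

    Maximizing the factorized Gaussian likelihood is the same as
    minimizing (y_i - x_i)^2 for each coordinate separately.
    """
    y = np.asarray(y, dtype=float)
    candidates = np.array(s, dtype=float)
    # For each y_i, choose the candidate value that minimizes the squared error.
    idx = np.argmin((y[:, None] - candidates[None, :]) ** 2, axis=1)
    return candidates[idx]
```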


Problem 3 (25 points). Let $X = \begin{bmatrix} X_1 & X_2 & X_3 & X_4 \end{bmatrix}^T$ denote a discrete random vector where, for each $i \in \{1, 2, 3, 4\}$, $x_i \in \{-1, 1\}$. Let the probability mass function (pmf) of $X$ be given by

$$p(x) \propto \exp\big({-\beta\, (x_1 x_2 + x_3 x_4 + x_1 x_3 + x_2 x_4)}\big) \tag{5}$$

for some known $\beta > 0$, where $x = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T$. Think of $x$ as a 2-by-2 image,

$$\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}. \tag{6}$$

Please note that $p$ is not the Ising model. The difference between this model and the Ising model over a 2-by-2 lattice is that here we have a minus sign in the exponent before $\beta$.

Part 1. Find $E(X) = \sum_x x\, p(x)$.

Part 2. Explain the effect the value of $\beta$ has on this model.

Part 3. Find $\arg\max_x p(x)$, where the maximization is done over all 4-length vectors taking values in $\{-1, 1\}^4$. Is the argmax unique?

Part 4. Find (the most compact expression of)

$$p(x_2 \,|\, x_1, x_3) = \frac{p(x_1, x_2, x_3)}{p(x_1, x_3)} \tag{7}$$

according to this model.
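Because the state space has only $2^4 = 16$ configurations, Parts 1–3 can be sanity-checked by brute-force enumeration of Eq. (5). A sketch, with `pmf` a hypothetical helper and $\beta = 1.0$ chosen only for illustration:

```python
import math
from itertools import product

def pmf(beta):
    """Normalized pmf of Eq. (5) over all 16 configurations in {-1, 1}^4."""
    weights = {
        x: math.exp(-beta * (x[0]*x[1] + x[2]*x[3] + x[0]*x[2] + x[1]*x[3]))
        for x in product((-1, 1), repeat=4)
    }
    z = sum(weights.values())  # partition function
    return {x: v / z for x, v in weights.items()}

p = pmf(beta=1.0)

# E(X): by symmetry (x and -x receive the same probability),
# every coordinate has zero mean.
mean = [sum(x[i] * px for x, px in p.items()) for i in range(4)]

# argmax: the minus sign before beta rewards disagreement between
# lattice neighbors, so checkerboard patterns get the highest probability.
best = max(p, key=p.get)
```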


