
Let A be the 3 × 6 matrix

A = \begin{bmatrix} 1 & 3 & 0 & 1 & 6 & 2 \\ 0 & 0 & 1 & 4 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 & 3 \end{bmatrix}

Find the dimensions of the null space and the column space of A. Give your reasons! dim(null(A)) = ?, dim(col(A)) = ?

If the null space of a 2 × 5 matrix A has dimension 3, what is the dimension of the column space of A? Give your reasons! dim(col(A)) = ?

Let T: V → W be a linear transformation from a vector space V to a vector space W. Let R = range(T) = {w ∈ W | w = T(v) for some v ∈ V}. Prove: R is a vector subspace of W.

Solution

Answer:

A column array of p elements is called a vector of dimension p and is written as

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix}_{p \times 1}

The transpose of the column vector x is the row vector

x' = [x_1 \; x_2 \; \cdots \; x_p]

A vector can be represented in p-space as a directed line with components along the p axes.
Basic Matrix Concepts (cont'd)

Two vectors can be added if they have the same dimension. Addition is carried out elementwise:

x + y = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_p + y_p \end{bmatrix}

A vector can be contracted or expanded if multiplied by a constant c. Multiplication is also elementwise:

cx = c \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} = \begin{bmatrix} cx_1 \\ cx_2 \\ \vdots \\ cx_p \end{bmatrix}
Examples

Let x = [2, 1, -4]' and y = [5, -2, 0]'. Then

x' = [2 \; 1 \; -4]

6x = \begin{bmatrix} 6(2) \\ 6(1) \\ 6(-4) \end{bmatrix} = \begin{bmatrix} 12 \\ 6 \\ -24 \end{bmatrix}

x + y = \begin{bmatrix} 2 \\ 1 \\ -4 \end{bmatrix} + \begin{bmatrix} 5 \\ -2 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 + 5 \\ 1 - 2 \\ -4 + 0 \end{bmatrix} = \begin{bmatrix} 7 \\ -1 \\ -4 \end{bmatrix}
Basic Matrix Concepts (cont’d)
Multiplication by c > 0 does not change the direction of x. Direction is
reversed if c < 0.
Basic Matrix Concepts (cont'd)

The length of a vector x is the Euclidean distance from the origin:

L_x = \sqrt{\sum_{j=1}^{p} x_j^2}

Multiplication of a vector x by a constant c changes the length:

L_{cx} = \sqrt{\sum_{j=1}^{p} c^2 x_j^2} = |c| \sqrt{\sum_{j=1}^{p} x_j^2} = |c| L_x

If c = L_x^{-1}, then cx is a vector of unit length.
Examples

The length of x = [2, 1, 4, 2]' is

L_x = \sqrt{(2)^2 + (1)^2 + (4)^2 + (2)^2} = \sqrt{25} = 5

Then

z = \frac{1}{5} \begin{bmatrix} 2 \\ 1 \\ 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 0.4 \\ 0.2 \\ 0.8 \\ 0.4 \end{bmatrix}

is a vector of unit length.
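As a quick numerical illustration (not part of the original notes), the length and unit-vector calculation above can be checked with NumPy; the vector is the one from the example:

```python
import numpy as np

x = np.array([2.0, 1.0, 4.0, 2.0])

L_x = np.sqrt(np.sum(x ** 2))   # Euclidean length of x
z = x / L_x                     # rescaling by 1/L_x gives a unit-length vector

print(L_x)                      # 5.0
print(z)                        # [0.4 0.2 0.8 0.4]
print(np.linalg.norm(z))        # 1.0
```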
Angle Between Vectors

Consider two vectors x and y in two dimensions. If \theta_1 is the angle between x and the horizontal axis and \theta_2 > \theta_1 is the angle between y and the horizontal axis, then

\cos(\theta_1) = \frac{x_1}{L_x}, \quad \sin(\theta_1) = \frac{x_2}{L_x}, \quad \cos(\theta_2) = \frac{y_1}{L_y}, \quad \sin(\theta_2) = \frac{y_2}{L_y}

If \theta is the angle between x and y, then

\cos(\theta) = \cos(\theta_2 - \theta_1) = \cos(\theta_2)\cos(\theta_1) + \sin(\theta_2)\sin(\theta_1).

Then

\cos(\theta) = \frac{x_1 y_1 + x_2 y_2}{L_x L_y}.
Inner Product

The inner product between two vectors x and y is

x'y = \sum_{j=1}^{p} x_j y_j

Then L_x = \sqrt{x'x}, L_y = \sqrt{y'y}, and

\cos(\theta) = \frac{x'y}{\sqrt{x'x}\,\sqrt{y'y}}.

Since \cos(\theta) = 0 when x'y = 0, and \cos(\theta) = 0 for \theta = 90° or \theta = 270°, the vectors are perpendicular (orthogonal) when x'y = 0.
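A minimal numerical sketch of the inner-product and angle formulas (not from the notes; the vectors are arbitrary examples chosen for illustration):

```python
import numpy as np

x = np.array([2.0, 1.0, -4.0])
y = np.array([5.0, -2.0, 0.0])

inner = x @ y                     # x'y = sum_j x_j * y_j
L_x = np.sqrt(x @ x)              # length of x
L_y = np.sqrt(y @ y)              # length of y
cos_theta = inner / (L_x * L_y)   # cosine of the angle between x and y
print(inner, L_x, L_y, cos_theta)

# Orthogonality check: x'y = 0 means the vectors are perpendicular.
print(np.isclose(np.array([1.0, 0.0]) @ np.array([0.0, 3.0]), 0.0))  # True
```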
Linear Dependence

Two vectors, x and y, are linearly dependent if there exist two constants c_1 and c_2, not both zero, such that

c_1 x + c_2 y = 0

If two vectors are linearly dependent, then one can be written as a linear combination of the other. From above:

x = -(c_2 / c_1)\, y

k vectors x_1, x_2, \ldots, x_k are linearly dependent if there exist constants (c_1, c_2, \ldots, c_k), not all zero, such that

\sum_{j=1}^{k} c_j x_j = 0
Vectors of the same dimension that are not linearly dependent are said to be linearly independent.

Linear Independence - Example

Let

x_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}

Then c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 if

c_1 + c_2 + c_3 = 0
2c_1 + 0 - 2c_3 = 0
c_1 - c_2 + c_3 = 0

The unique solution is c_1 = c_2 = c_3 = 0, so the vectors are linearly independent.
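A small check of this example with NumPy (an added illustration, not part of the original notes): the vectors are stacked as columns, and independence is equivalent to the matrix having full column rank.

```python
import numpy as np

# Columns are the three vectors x_1, x_2, x_3 from the example above.
X = np.array([[1,  1,  1],
              [2,  0, -2],
              [1, -1,  1]], dtype=float)

# Linearly independent iff the only solution of X c = 0 is c = 0,
# i.e. iff X has full column rank.
print(np.linalg.matrix_rank(X))          # 3 -> linearly independent
print(np.linalg.solve(X, np.zeros(3)))   # [0. 0. 0.], the unique solution
```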
Projections

The projection of x on y is defined by

\text{Projection of } x \text{ on } y = \frac{x'y}{y'y}\, y = \frac{x'y}{L_y}\, \frac{1}{L_y}\, y.

The length of the projection is

\text{Length of projection} = \frac{|x'y|}{L_y} = L_x\, \frac{|x'y|}{L_x L_y} = L_x\, |\cos(\theta)|,

where \theta is the angle between x and y.
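For illustration only (the vectors are arbitrary examples), the projection formula and its length can be verified numerically:

```python
import numpy as np

def project(x, y):
    """Projection of x on y: (x'y / y'y) * y."""
    return (x @ y) / (y @ y) * y

x = np.array([2.0, 1.0, -4.0])
y = np.array([5.0, -2.0, 0.0])

p = project(x, y)
length = abs(x @ y) / np.sqrt(y @ y)   # |x'y| / L_y
print(p)
print(np.isclose(np.linalg.norm(p), length))   # the two length formulas agree
```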
Matrices

A matrix A is an array of elements a_{ij} with n rows and p columns:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}

The transpose A' has p rows and n columns. The j-th row of A' is the j-th column of A:

A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1p} & a_{2p} & \cdots & a_{np} \end{bmatrix}
Matrix Algebra

Multiplication of A by a constant c is carried out element by element:

cA = \begin{bmatrix} ca_{11} & ca_{12} & \cdots & ca_{1p} \\ ca_{21} & ca_{22} & \cdots & ca_{2p} \\ \vdots & \vdots & & \vdots \\ ca_{n1} & ca_{n2} & \cdots & ca_{np} \end{bmatrix}
Matrix Addition

Two matrices A_{n×p} = \{a_{ij}\} and B_{n×p} = \{b_{ij}\} of the same dimensions can be added element by element. The resulting matrix is C_{n×p} = \{c_{ij}\} = \{a_{ij} + b_{ij}\}:

C = A + B = \begin{bmatrix} a_{11} & \cdots & a_{1p} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{np} \end{bmatrix} + \begin{bmatrix} b_{11} & \cdots & b_{1p} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{np} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & \cdots & a_{1p}+b_{1p} \\ \vdots & & \vdots \\ a_{n1}+b_{n1} & \cdots & a_{np}+b_{np} \end{bmatrix}
Examples

\begin{bmatrix} 2 & 1 & 4 \\ 5 & 7 & 0 \end{bmatrix}' = \begin{bmatrix} 2 & 5 \\ 1 & 7 \\ 4 & 0 \end{bmatrix}

6 \begin{bmatrix} 2 & 1 & 4 \\ 5 & 7 & 0 \end{bmatrix} = \begin{bmatrix} 12 & 6 & 24 \\ 30 & 42 & 0 \end{bmatrix}

\begin{bmatrix} 2 & 1 \\ 5 & 7 \end{bmatrix} + \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ 5 & 10 \end{bmatrix}
Matrix Multiplication

Multiplication of two matrices A_{n×p} and B_{m×q} can be carried out only if the matrices are compatible for multiplication:

– A_{n×p} B_{m×q}: compatible if p = m.
– B_{m×q} A_{n×p}: compatible if q = n.

The element in the i-th row and the j-th column of AB is the inner product of the i-th row of A with the j-th column of B.
\"
Multiplication Examples
2 0 1
5 1 3
\"
\"
2 1
5 3
1 4
1 3
#
#


#
\"

2
6
4
1 4
1 3
0 2
1 4
1 3
\"
2 1
5 3
#
#
3
7
5
=
=
=
\"
\"
\"
2 10
4 29
1 11
2 29
22 13
13 8
#
#
#
55
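These products can be verified with NumPy (added for illustration); the last two show that matrix multiplication does not commute:

```python
import numpy as np

A = np.array([[2, 0, 1],
              [5, 1, 3]])
B = np.array([[ 1, 4],
              [-1, 3],
              [ 0, 2]])
print(A @ B)        # 2x3 times 3x2 -> [[ 2 10], [ 4 29]]

C = np.array([[2, 1],
              [5, 3]])
D = np.array([[ 1, 4],
              [-1, 3]])
print(C @ D)        # [[ 1 11], [ 2 29]]
print(D @ C)        # [[22 13], [13  8]]  -- CD != DC
```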
Identity Matrix

An identity matrix, denoted by I, is a square matrix with 1's along the main diagonal and 0's everywhere else. For example,

I_{2×2} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad I_{3×3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

If A is a square matrix, then AI = IA = A.

I_{n×n} A_{n×p} = A_{n×p}, but A_{n×p} I_{n×n} is not defined for p ≠ n.
Symmetric Matrices

A square matrix is symmetric if A = A'. If a square matrix A has elements \{a_{ij}\}, then A is symmetric if a_{ij} = a_{ji}.

Examples:

\begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix} \quad \begin{bmatrix} 5 & 1 & 3 \\ 1 & 12 & 5 \\ 3 & 5 & 9 \end{bmatrix}
Inverse Matrix

Consider two square matrices A_{k×k} and B_{k×k}. If

AB = BA = I,

then B is the inverse of A, denoted A^{-1}.

The inverse of A exists only if the columns of A are linearly independent.

If A = diag\{a_{ii}\}, then A^{-1} = diag\{1/a_{ii}\}.
Inverse Matrix

For a 2×2 matrix A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, the inverse is

A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix},

where \det(A) = a_{11} a_{22} - a_{12} a_{21} denotes the determinant of A.
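A minimal sketch of the 2×2 inverse formula, checked against NumPy's general-purpose inverse (the test matrix is an arbitrary example):

```python
import numpy as np

def inv2x2(A):
    """Inverse of a 2x2 matrix from the determinant formula above."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
print(inv2x2(A))
print(np.linalg.inv(A))                         # matches the formula
print(np.allclose(A @ inv2x2(A), np.eye(2)))    # A A^{-1} = I
```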
Orthogonal Matrices

A square matrix Q is orthogonal if

QQ' = Q'Q = I,

or Q' = Q^{-1}.

If Q is orthogonal, its rows and columns have unit length (q_j' q_j = 1) and are mutually perpendicular (q_j' q_k = 0 for any j ≠ k).
Eigenvalues and Eigenvectors

A square matrix A has an eigenvalue \lambda with corresponding eigenvector z ≠ 0 if

Az = \lambda z

The eigenvalues of A are the solutions to |A - \lambda I| = 0.

A normalized eigenvector (of unit length) is denoted by e.

A k×k matrix A has k pairs of eigenvalues and eigenvectors

\lambda_1, e_1; \quad \lambda_2, e_2; \quad \ldots; \quad \lambda_k, e_k,

where e_i' e_i = 1, e_i' e_j = 0 (for i ≠ j), and the eigenvectors are unique up to a change in sign unless two or more eigenvalues are equal.
Spectral Decomposition

Eigenvalues and eigenvectors will play an important role in this course. For example, principal components are based on the eigenvalues and eigenvectors of sample covariance matrices.

The spectral decomposition of a k×k symmetric matrix A is

A = \lambda_1 e_1 e_1' + \lambda_2 e_2 e_2' + \cdots + \lambda_k e_k e_k'
  = [e_1 \; e_2 \; \cdots \; e_k] \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix} [e_1 \; e_2 \; \cdots \; e_k]'
  = P \Lambda P'
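A short numerical illustration of the spectral decomposition (the symmetric matrix is an arbitrary example, not from the notes):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 4.0]])             # symmetric example matrix

lam, P = np.linalg.eigh(A)             # eigenvalues (ascending) and orthonormal eigenvectors
Lambda = np.diag(lam)

# Reconstruct A from A = P Lambda P'
print(np.allclose(P @ Lambda @ P.T, A))          # True

# Equivalent rank-one expansion: A = sum_i lambda_i e_i e_i'
A_rebuilt = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(len(lam)))
print(np.allclose(A_rebuilt, A))                 # True
```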
Determinant and Trace

The trace of a k×k matrix A is the sum of the diagonal elements, i.e., trace(A) = \sum_{i=1}^{k} a_{ii}.

The trace of a square, symmetric matrix A is the sum of the eigenvalues, i.e., trace(A) = \sum_{i=1}^{k} a_{ii} = \sum_{i=1}^{k} \lambda_i.

The determinant of a square, symmetric matrix A is the product of the eigenvalues, i.e., |A| = \prod_{i=1}^{k} \lambda_i.
Rank of a Matrix

The rank of a square matrix A is
– the number of linearly independent rows,
– the number of linearly independent columns,
– the number of non-zero eigenvalues.

The inverse of a k×k matrix A exists if and only if rank(A) = k, i.e., there are no zero eigenvalues.
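As an added illustration (not part of the notes), rank can be computed numerically. Here the matrix from the question at the top is used, under the assumption that its 18 entries are read row-wise as a 3×6 matrix; by the rank-nullity theorem, rank plus the dimension of the null space equals the number of columns.

```python
import numpy as np

# 3x6 matrix from the question (row-wise layout is an assumption).
A = np.array([[1, 3, 0, 1, 6, 2],
              [0, 0, 1, 4, 2, 1],
              [0, 0, 0, 0, 1, 3]], dtype=float)

rank = np.linalg.matrix_rank(A)    # dim(col(A)) = number of independent columns
nullity = A.shape[1] - rank        # rank-nullity: rank + dim(null(A)) = number of columns
print(rank, nullity)               # 3 3
```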
Positive Definite Matrix

For a k×k symmetric matrix A and a vector x = [x_1, x_2, \ldots, x_k]', the quantity x'Ax is called a quadratic form.

Note that x'Ax = \sum_{i=1}^{k} \sum_{j=1}^{k} a_{ij} x_i x_j.

If x'Ax ≥ 0 for any vector x, both A and the quadratic form are said to be non-negative definite.

If x'Ax > 0 for any vector x ≠ 0, both A and the quadratic form are said to be positive definite.
Example 2.11

Show that the matrix of the quadratic form 3x_1^2 + 2x_2^2 - 2\sqrt{2}\, x_1 x_2 is positive definite.

For

A = \begin{bmatrix} 3 & -\sqrt{2} \\ -\sqrt{2} & 2 \end{bmatrix},

the eigenvalues are \lambda_1 = 4 and \lambda_2 = 1. Then A = 4 e_1 e_1' + e_2 e_2'. Write y_1 = e_1' x and y_2 = e_2' x; then

x'Ax = 4 x' e_1 e_1' x + x' e_2 e_2' x = 4y_1^2 + y_2^2 \ge 0,

and it is zero only for y_1 = y_2 = 0.
Example 2.11 (cont'd)

y = [y_1, y_2]' cannot be zero because

\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} e_1' \\ e_2' \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = P'_{2×2}\, x_{2×1},

with P = [e_1 \; e_2] orthonormal, so that (P')^{-1} = P. Then x = Py, and since x ≠ 0 it follows that y ≠ 0.

Using the spectral decomposition, we can show that:
– A is positive definite if all of its eigenvalues are positive.
– A is non-negative definite if all of its eigenvalues are ≥ 0.
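The eigenvalue criterion from Example 2.11 can be checked numerically (an added sketch; the random spot-check is only illustrative):

```python
import numpy as np

A = np.array([[3.0,         -np.sqrt(2)],
              [-np.sqrt(2),  2.0       ]])

lam = np.linalg.eigvalsh(A)
print(lam)                    # approximately [1. 4.] -- all positive
print(np.all(lam > 0))        # True: A is positive definite

# Spot-check the quadratic form x'Ax at a few nonzero points.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    print(x @ A @ x > 0)      # True each time
```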
Distance and Quadratic Forms

For x = [x_1, x_2, \ldots, x_p]' and a p×p positive definite matrix A,

d^2 = x'Ax > 0

when x ≠ 0. Thus, a positive definite quadratic form can be interpreted as a squared distance of x from the origin, and vice versa.

The squared distance from x to a fixed point \mu is given by the quadratic form

(x - \mu)' A (x - \mu).
Distance and Quadratic Forms (cont'd)

We can interpret distance in terms of the eigenvalues and eigenvectors of A as well. Any point x at constant distance c from the origin satisfies

x'Ax = x' \left( \sum_{j=1}^{p} \lambda_j e_j e_j' \right) x = \sum_{j=1}^{p} \lambda_j (x'e_j)^2 = c^2,

the expression for an ellipsoid in p dimensions.

Note that the point x = c\, \lambda_1^{-1/2} e_1 is at a distance c (in the direction of e_1) from the origin because it satisfies x'Ax = c^2. The same is true for the points x = c\, \lambda_j^{-1/2} e_j, j = 1, \ldots, p. Thus, all points at distance c lie on an ellipsoid with axes in the directions of the eigenvectors and with lengths proportional to \lambda_j^{-1/2}.
Square-Root Matrices

Spectral decomposition of a positive definite matrix A yields

A = \sum_{j=1}^{p} \lambda_j e_j e_j' = P \Lambda P',

with \Lambda = diag\{\lambda_j\}, all \lambda_j > 0, and P = [e_1 \; e_2 \; \cdots \; e_p] an orthonormal matrix of eigenvectors. Then

A^{-1} = P \Lambda^{-1} P' = \sum_{j=1}^{p} \frac{1}{\lambda_j}\, e_j e_j'.

With \Lambda^{1/2} = diag\{\lambda_j^{1/2}\}, a square-root matrix is

A^{1/2} = P \Lambda^{1/2} P' = \sum_{j=1}^{p} \sqrt{\lambda_j}\, e_j e_j'.
Square-Root Matrices

The square root of a positive definite matrix A has the following properties:

1. Symmetry: (A^{1/2})' = A^{1/2}
2. A^{1/2} A^{1/2} = A
3. A^{-1/2} = \sum_{j=1}^{p} \lambda_j^{-1/2} e_j e_j' = P \Lambda^{-1/2} P'
4. A^{1/2} A^{-1/2} = A^{-1/2} A^{1/2} = I
5. A^{-1/2} A^{-1/2} = A^{-1}

Note that there are other ways of defining the square root of a positive definite matrix: in the Cholesky decomposition A = LL', with L a matrix of lower triangular form, L is also called a square root of A.
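A minimal sketch (added for illustration, using an arbitrary positive definite matrix) that builds A^{1/2} from the spectral decomposition and verifies properties 2, 4, and 5, plus the Cholesky alternative:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 4.0]])                          # positive definite example

lam, P = np.linalg.eigh(A)
A_half = P @ np.diag(np.sqrt(lam)) @ P.T            # A^{1/2} = P Lambda^{1/2} P'
A_minus_half = P @ np.diag(1 / np.sqrt(lam)) @ P.T  # A^{-1/2}

print(np.allclose(A_half @ A_half, A))                              # property 2
print(np.allclose(A_half @ A_minus_half, np.eye(2)))                # property 4
print(np.allclose(A_minus_half @ A_minus_half, np.linalg.inv(A)))   # property 5

# The Cholesky factor L is a different (lower-triangular) square root: A = L L'
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))
```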
Random Vectors and Matrices

A random matrix (vector) is a matrix (vector) whose elements are random variables.

If X_{n×p} is a random matrix, the expected value of X is the n×p matrix

E(X) = \begin{bmatrix} E(X_{11}) & E(X_{12}) & \cdots & E(X_{1p}) \\ E(X_{21}) & E(X_{22}) & \cdots & E(X_{2p}) \\ \vdots & \vdots & & \vdots \\ E(X_{n1}) & E(X_{n2}) & \cdots & E(X_{np}) \end{bmatrix},

where

E(X_{ij}) = \int_{-\infty}^{\infty} x_{ij}\, f_{ij}(x_{ij})\, dx_{ij},

with f_{ij}(x_{ij}) the density function of the continuous random variable X_{ij}. If X is a discrete random variable, we compute its expectation as a sum rather than an integral.
Linear Combinations

The usual rules for expectations apply. If X and Y are two random matrices and A and B are two constant matrices of the appropriate dimensions, then

E(X + Y) = E(X) + E(Y)
E(AX) = A E(X)
E(AXB) = A E(X) B
E(AX + BY) = A E(X) + B E(Y)

Further, if c is a scalar-valued constant, then E(cX) = c E(X).
Mean Vectors and Covariance Matrices

Suppose that X is a p×1 (continuous) random vector drawn from some p-dimensional distribution.

Each element of X, say X_j, has its own marginal distribution with marginal mean \mu_j and variance \sigma_{jj} defined in the usual way:

\mu_j = \int_{-\infty}^{\infty} x_j\, f_j(x_j)\, dx_j

\sigma_{jj} = \int_{-\infty}^{\infty} (x_j - \mu_j)^2\, f_j(x_j)\, dx_j
Mean Vectors and Covariance Matrices (cont'd)

To examine the association between a pair of random variables we need to consider their joint distribution.

A measure of the linear association between pairs of variables is given by the covariance

\sigma_{jk} = E[(X_j - \mu_j)(X_k - \mu_k)]
            = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x_j - \mu_j)(x_k - \mu_k)\, f_{jk}(x_j, x_k)\, dx_j\, dx_k.
Mean Vectors and Covariance Matrices (cont'd)

If the joint density function f_{jk}(x_j, x_k) can be written as the product of the two marginal densities, i.e.,

f_{jk}(x_j, x_k) = f_j(x_j)\, f_k(x_k),

then X_j and X_k are independent.

More generally, the p-dimensional random vector X has mutually independent elements if the p-dimensional joint density function can be written as the product of the p univariate marginal densities.

If two random variables X_j and X_k are independent, then their covariance is equal to 0. [The converse is not always true.]
Mean Vectors and Covariance Matrices (cont'd)

We use \mu to denote the p×1 vector of marginal population means and use \Sigma to denote the p×p population variance-covariance matrix:

\Sigma = E\left[(X - \mu)(X - \mu)'\right].

If we carry out the multiplication (outer product), then \Sigma is equal to:

E \begin{bmatrix} (X_1-\mu_1)^2 & (X_1-\mu_1)(X_2-\mu_2) & \cdots & (X_1-\mu_1)(X_p-\mu_p) \\ (X_2-\mu_2)(X_1-\mu_1) & (X_2-\mu_2)^2 & \cdots & (X_2-\mu_2)(X_p-\mu_p) \\ \vdots & \vdots & & \vdots \\ (X_p-\mu_p)(X_1-\mu_1) & (X_p-\mu_p)(X_2-\mu_2) & \cdots & (X_p-\mu_p)^2 \end{bmatrix}.
Mean Vectors and Covariance Matrices (cont'd)

By taking expectations element-wise we find that

\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\ \vdots & \vdots & & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{bmatrix}.

Since \sigma_{jk} = \sigma_{kj} for all j ≠ k, we note that \Sigma is symmetric. \Sigma is also non-negative definite.
Correlation Matrix

The population correlation matrix is the p×p matrix \rho with off-diagonal elements equal to \rho_{jk} and diagonal elements equal to 1:

\rho = \begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1p} \\ \rho_{21} & 1 & \cdots & \rho_{2p} \\ \vdots & \vdots & & \vdots \\ \rho_{p1} & \rho_{p2} & \cdots & 1 \end{bmatrix}.

Since \rho_{ij} = \rho_{ji}, the correlation matrix is symmetric. The correlation matrix is also non-negative definite.
Correlation Matrix (cont'd)

The p×p population standard deviation matrix V^{1/2} is a diagonal matrix with \sqrt{\sigma_{jj}} along the diagonal and zeros in all off-diagonal positions. Then

\Sigma = V^{1/2}\, \rho\, V^{1/2},

and the population correlation matrix is

\rho = (V^{1/2})^{-1}\, \Sigma\, (V^{1/2})^{-1}.

Given \Sigma, we can easily obtain the correlation matrix \rho.
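A short sketch of this conversion (the covariance matrix is an arbitrary example, not from the notes):

```python
import numpy as np

Sigma = np.array([[ 4.0,  1.0,  2.0],
                  [ 1.0,  9.0, -3.0],
                  [ 2.0, -3.0, 25.0]])     # example covariance matrix

V_half = np.diag(np.sqrt(np.diag(Sigma)))  # standard deviation matrix V^{1/2}
V_half_inv = np.linalg.inv(V_half)

rho = V_half_inv @ Sigma @ V_half_inv      # rho = (V^{1/2})^{-1} Sigma (V^{1/2})^{-1}
print(rho)                                 # unit diagonal, correlations off-diagonal
print(np.allclose(V_half @ rho @ V_half, Sigma))   # Sigma = V^{1/2} rho V^{1/2}
```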
Partitioning Random Vectors

If we partition the random p×1 vector X into two components X_1 and X_2 of dimensions q×1 and (p-q)×1 respectively, then the mean vector and the variance-covariance matrix need to be partitioned accordingly.

Partitioned mean vector:

E(X) = E\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} E(X_1) \\ E(X_2) \end{bmatrix} = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}

Partitioned variance-covariance matrix:

\Sigma = \begin{bmatrix} Var(X_1) & Cov(X_1, X_2) \\ Cov(X_2, X_1) & Var(X_2) \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}' & \Sigma_{22} \end{bmatrix},

where \Sigma_{11} is q×q, \Sigma_{12} is q×(p-q), and \Sigma_{22} is (p-q)×(p-q).
Partitioning Covariance Matrices (cont'd)

\Sigma_{11} and \Sigma_{22} are the variance-covariance matrices of the sub-vectors X_1 and X_2, respectively. The off-diagonal elements in those two matrices reflect linear associations among elements within each sub-vector.

There are no variances in \Sigma_{12}, only covariances. These covariances reflect linear associations between elements in the two different sub-vectors.
Linear Combinations of Random Variables

Let X be a p×1 vector with mean \mu and variance-covariance matrix \Sigma, and let c be a p×1 vector of constants. Then the linear combination c'X has mean and variance

E(c'X) = c'\mu \quad \text{and} \quad Var(c'X) = c'\Sigma c.

In general, the mean and variance of a q×1 vector of linear combinations Z = C_{q×p} X_{p×1} are

\mu_Z = C \mu_X \quad \text{and} \quad \Sigma_Z = C \Sigma_X C'.
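A minimal illustration of these formulas (added; the mean vector, covariance matrix, and coefficient vectors are arbitrary examples):

```python
import numpy as np

mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[ 4.0,  1.0,  2.0],
                  [ 1.0,  9.0, -3.0],
                  [ 2.0, -3.0, 25.0]])
c = np.array([1.0, -1.0, 2.0])

print(c @ mu)              # E(c'X)   = c'mu
print(c @ Sigma @ c)       # Var(c'X) = c'Sigma c

# Several combinations at once: Z = C X with C a q x p matrix of constants.
C = np.array([[1.0, -1.0, 2.0],
              [0.0,  1.0, 1.0]])
print(C @ mu)              # mu_Z    = C mu_X
print(C @ Sigma @ C.T)     # Sigma_Z = C Sigma_X C'
```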
Cauchy-Schwarz Inequality

We will need some of the results below to derive some maximization results later in the course.

Cauchy-Schwarz inequality: Let b and d be any two p×1 vectors. Then

(b'd)^2 \le (b'b)(d'd),

with equality only if b = cd for some scalar constant c.

Proof: The equality is obvious for b = 0 or d = 0. For other cases, consider b - cd for any constant c ≠ 0. Then if b - cd ≠ 0, we have

0 < (b - cd)'(b - cd) = b'b - 2c(b'd) + c^2 d'd,

since b - cd must have positive length.
Cauchy-Schwarz Inequality (cont'd)

We can add and subtract (b'd)^2/(d'd) to obtain

0 < b'b - 2c(b'd) + c^2 d'd - \frac{(b'd)^2}{d'd} + \frac{(b'd)^2}{d'd}
  = b'b - \frac{(b'd)^2}{d'd} + (d'd)\left(c - \frac{b'd}{d'd}\right)^2.

Since c can be anything, we can choose c = b'd/(d'd). Then

0 < b'b - \frac{(b'd)^2}{d'd}, \quad \text{i.e.,} \quad (b'd)^2 < (b'b)(d'd)

for b ≠ cd (otherwise, we have equality).
Extended Cauchy-Schwarz Inequality

If b and d are any two p×1 vectors and B is a p×p positive definite matrix, then

(b'd)^2 \le (b'Bb)(d'B^{-1}d),

with equality if and only if b = cB^{-1}d or d = cBb for some constant c.

Proof: Consider

B^{1/2} = \sum_{i=1}^{p} \sqrt{\lambda_i}\, e_i e_i' \quad \text{and} \quad B^{-1/2} = \sum_{i=1}^{p} \frac{1}{\sqrt{\lambda_i}}\, e_i e_i'.

Then we can write

b'd = b'Id = b'B^{1/2}B^{-1/2}d = (B^{1/2}b)'(B^{-1/2}d) = b^{*\prime} d^{*}.

To complete the proof, simply apply the Cauchy-Schwarz inequality to the vectors b^* and d^*.
Optimization

Let B be positive definite and let d be any p×1 vector. Then

\max_{x \neq 0} \frac{(x'd)^2}{x'Bx} = d'B^{-1}d

is attained when x = cB^{-1}d for any constant c ≠ 0.

Proof: By the extended Cauchy-Schwarz inequality, (x'd)^2 \le (x'Bx)(d'B^{-1}d). Since x ≠ 0 and B is positive definite, x'Bx > 0, and we can divide both sides by x'Bx to get the upper bound

\frac{(x'd)^2}{x'Bx} \le d'B^{-1}d.

Differentiating the left side with respect to x shows that the maximum is attained at x = cB^{-1}d.
Maximization of a Quadratic Form on a Unit Sphere

Let B be positive definite with eigenvalues \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0 and associated normalized eigenvectors e_1, e_2, \ldots, e_p. Then

\max_{x \neq 0} \frac{x'Bx}{x'x} = \lambda_1, attained when x = e_1,

\min_{x \neq 0} \frac{x'Bx}{x'x} = \lambda_p, attained when x = e_p.

Furthermore, for k = 1, 2, \ldots, p-1,

\max_{x \perp e_1, \ldots, e_k} \frac{x'Bx}{x'x} = \lambda_{k+1}, attained when x = e_{k+1}.

See the proof at the end of Chapter 2 in the textbook (pages 80-81).
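An added numerical sketch of this result (the matrix and the random sampling are purely illustrative): the Rayleigh quotient x'Bx / x'x stays between the smallest and largest eigenvalues and attains the maximum at the leading eigenvector.

```python
import numpy as np

B = np.array([[4.0, 2.0],
              [2.0, 4.0]])            # positive definite example

lam, P = np.linalg.eigh(B)            # eigenvalues in ascending order
rng = np.random.default_rng(1)

quotients = []
for _ in range(1000):
    x = rng.normal(size=2)
    quotients.append(x @ B @ x / (x @ x))

# All quotients lie in [lambda_min, lambda_max] (small tolerance for rounding).
print(min(quotients) >= lam[0] - 1e-9, max(quotients) <= lam[-1] + 1e-9)

e1 = P[:, -1]                         # eigenvector of the largest eigenvalue
print(np.isclose(e1 @ B @ e1 / (e1 @ e1), lam[-1]))   # maximum attained at e_1
```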
