1
LINEAR EQUATIONS

Unit Structure :
1.0 Introduction
1.1 Objectives
1.2 System of Linear Equations
1.3 Solution of the system of Linear Equations by Gaussian Elimination method

1.0 INTRODUCTION

The word "linear" comes from "line". You know that the equation of a straight line in two dimensions has the form ax + by = c. This is a linear equation in the two variables x and y. Solving this equation means finding the pairs (x, y) which satisfy ax + by = c. The geometric interpretation is that the set of all points satisfying the equation forms a straight line in the plane, passing through the point (0, c/b) with slope -a/b (when b is not 0). In this chapter, we shall review the theory of such equations in n variables and interpret the solutions geometrically.

1.1 OBJECTIVES

After going through this chapter, you will be able to :
Understand the characteristics of the solutions.
Solve the equations when they are solvable.
Interpret the system geometrically.

1.2 SYSTEMS OF LINEAR EQUATIONS

The collection of linear equations :

a_11 x_1 + ... + a_1n x_n = b_1
a_21 x_1 + ... + a_2n x_n = b_2
...
a_m1 x_1 + ... + a_mn x_n = b_m
is called a system of m linear equations in n unknowns x_1, ..., x_n. Here the a_ij, b_i ∈ R are given. We shall write this in short form as

Σ_{j=1}^{n} a_ij x_j = b_i ,  1 ≤ i ≤ m  .......... (1.2.1)

Solving this system means to find real numbers x_1, ..., x_n which satisfy the system. Any n-tuple (x_1, ..., x_n) which satisfies the system is called a solution of the system. If b_1 = b_2 = ... = b_m = 0, we say that the system is homogeneous, which can be written in short form as

Σ_{j=1}^{n} a_ij x_j = 0 ,  1 ≤ i ≤ m  .......... (1.2.2)

Note that 0 = (0, ..., 0) always satisfies (1.2.2). This solution is called the trivial solution. We say (x_1, ..., x_n) is nontrivial if (x_1, ..., x_n) ≠ (0, ..., 0), that is, if there exists at least one i such that x_i ≠ 0.

Perhaps the most fundamental technique for finding the solutions of a system of linear equations is the technique of elimination. We can illustrate this technique on the homogeneous system

2x_1 - x_2 + x_3 = 0
x_1 + 3x_2 + 4x_3 = 0.

If we add (-2) times the second equation to the first equation we obtain -7x_2 - 7x_3 = 0, i.e. x_2 = -x_3. Similarly, eliminating x_2 from the above equations, we obtain 7x_1 + 7x_3 = 0, i.e. x_1 = -x_3. So we conclude that if (x_1, x_2, x_3) is a solution then x_1 = x_2 = -x_3. Thus the set of solutions consists of all triples (a, a, -a).

Let (a_1, ..., a_n) be a solution of the homogeneous system (1.2.2). Then we see that (αa_1, ..., αa_n) is again a solution of (1.2.2) for any α ∈ R. This has the following geometric interpretation in the case of the three dimensional space R³. Let (r, s, t) be a solution of the system ax + by + cz = 0, that is, ar + bs + ct = 0. The solution set is a plane through the origin, so the plane contains (0, 0, 0) and (r, s, t). The line joining these two points is

(x - 0)/(r - 0) = (y - 0)/(s - 0) = (z - 0)/(t - 0) = α (say),

i.e. x = αr, y = αs, z = αt, which is again a solution of the system ax + by + cz = 0.
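The elimination just carried out can be checked mechanically. The short sketch below (plain Python, written for this illustration; the function names are ours) verifies that every triple of the form (a, a, -a) satisfies both equations, and that a scalar multiple of a solution is again a solution.

```python
# The homogeneous system from the elimination example:
#   2x1 - x2 + x3 = 0
#   x1 + 3x2 + 4x3 = 0

def eq1(x1, x2, x3):
    return 2 * x1 - x2 + x3        # left-hand side of the first equation

def eq2(x1, x2, x3):
    return x1 + 3 * x2 + 4 * x3    # left-hand side of the second equation

# every triple (a, a, -a) is a solution
for a in (0, 1, -2, 5):
    assert eq1(a, a, -a) == 0
    assert eq2(a, a, -a) == 0

# closure under scalar multiplication: 10 * (1, 1, -1) is again a solution
assert eq1(10, 10, -10) == 0 and eq2(10, 10, -10) == 0
```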
Also if (b_1, ..., b_n) is another solution of (1.2.2), then (a_1 + b_1, ..., a_n + b_n) is again a solution of (1.2.2). These two facts together can be described by saying that the set of solutions of a homogeneous system of linear equations is closed under addition and scalar multiplication.

However, the set of solutions of a non-homogeneous system of linear equations need not be closed under addition and scalar multiplication. For example, consider the equation 4x - 3y = 1. Here a = (1, 1) is a solution, but αa = (α, α) is not a solution if α ≠ 1. Also, b = (1/4, 0) is another solution, but a + b = (5/4, 1) is not a solution.

The homogeneous system given by (1.2.2) is called the associated homogeneous system of (1.2.1).

Let S be the set of solutions of the non-homogeneous system and S_h be the set of solutions of the associated homogeneous system of equations. Assume S ≠ ∅. S_h is always non-empty, as the trivial solution (0, ..., 0) ∈ S_h. Let x ∈ S and y ∈ S_h. We will show that x + αy ∈ S for any α ∈ R. Since x ∈ S we have Σ_{j=1}^{n} a_ij x_j = b_i; similarly, Σ_{j=1}^{n} a_ij y_j = 0 for 1 ≤ i ≤ m. For α ∈ R and 1 ≤ i ≤ m, we have

Σ_{j=1}^{n} a_ij (x_j + αy_j) = Σ_{j=1}^{n} a_ij x_j + α Σ_{j=1}^{n} a_ij y_j = b_i + 0 = b_i.

So x + αy is also a solution of (1.2.1).

Now if z = (z_1, ..., z_n) ∈ S and x = (x_1, ..., x_n) ∈ S, then Σ_{j=1}^{n} a_ij z_j = b_i and Σ_{j=1}^{n} a_ij x_j = b_i. Therefore

Σ_{j=1}^{n} a_ij (x_j - z_j) = Σ_{j=1}^{n} a_ij x_j - Σ_{j=1}^{n} a_ij z_j = b_i - b_i = 0.
That is, if x and z are any two solutions of the non-homogeneous system then x - z is a solution of the homogeneous system; that is, x - z ∈ S_h. From these two observations we can conclude a single fact, as follows.

Let us fix x ∈ S and define x + S_h = { x + y : y ∈ S_h }. The first observation, that x + αy is also a solution of (1.2.1), implies x + S_h ⊆ S. Also, for all z ∈ S, z = x + (z - x) ∈ x + S_h. This implies S ⊆ x + S_h. So S = x + S_h. This x is called a particular solution of (1.2.1). So we have the fact :

To find all the solutions of (1.2.1) it is enough to find all the solutions of the associated homogeneous system and any particular solution of (1.2.1).

These remarks are mainly for the purpose of reviewing the so-called Gaussian elimination method of solving linear equations. Here we eliminate one variable and reduce the system to another system of linear equations with a fewer number of variables. We repeat this process with the system so obtained, deleting one equation each time, till we are finally left with a single equation. In this last equation, all the variables except the leading one are treated as "free" and assigned arbitrary real numbers. Let us clarify this by the following examples.

Example 1.2.1 :

E_1 : x + y + z = 1
E_2 : 2x - y + z = 2

To eliminate y we form E_1 + E_2 and get the equation 3x + 2z = 3. We treat z as the free variable and assign the value t to z, i.e. z = t, so x = 1 - (2/3)t. Substituting x and z in E_1 we get y = -(1/3)t. Thus the solution set S is given by

S = { (1 - (2/3)t, -(1/3)t, t) : t ∈ R }
  = { (1, 0, 0) + t(-2/3, -1/3, 1) : t ∈ R }.

So (1, 0, 0) is a particular solution of the given system which satisfies both E_1 and E_2, hence lies on both the planes defined by the
equations E_1 and E_2. And (-2/3, -1/3, 1) is a point of R³ which lies on the plane through the origin corresponding to the associated homogeneous system. Hence all the points on the line joining (-2/3, -1/3, 1) and the origin also lie on the plane through the origin.

Example 1.2.2 :
Consider the system

E_1 : x_1 + x_2 + x_3 + x_4 = 1
E_2 : x_1 + x_2 + x_3 - x_4 = -1
E_3 : x_1 + x_2 + x_3 + 5x_4 = 5

Here E_1 - E_2 gives us 2x_4 = 2, so x_4 = 1. Substituting this value in the above equations we get x_1 + x_2 + x_3 = 0. This is a linear equation in three variables and we can take x_2 and x_3 as free variables. So we let x_2 = s and x_3 = t, so that x_1 = -s - t. Hence the solution set is

S = { (-s - t, s, t, 1) : s, t ∈ R }
  = { s(-1, 1, 0, 0) + t(-1, 0, 1, 0) + (0, 0, 0, 1) : s, t ∈ R }
  = { (0, 0, 0, 1) + s(-1, 1, 0, 0) + t(-1, 0, 1, 0) : s, t ∈ R }.

Check your Progress :
Solve the following systems :

1) 3x + 4y + z = 0
   x + y + z = 0
Ans. { t(-3, 2, 1) : t ∈ R }

2) x - y + 4z = 4
   2x + 6z = -2
Ans. { (-1, -5, 0) + t(-3, 1, 1) : t ∈ R }

3) 3x + 4y = 0
   x + y = 0
Ans. (0, 0)

Observation :
By the above discussion we see that a homogeneous system need not always have non-trivial solutions. We also observe that if the number of unknowns is more than the number of equations then the system always has a non-trivial solution.
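The decomposition S = x + S_h can be verified numerically for Example 1.2.1. The sketch below (exact rational arithmetic via the standard library; the names E1, E2, particular, direction are ours) checks that (1, 0, 0) + t(-2/3, -1/3, 1) satisfies both equations for several values of t.

```python
from fractions import Fraction as F

def E1(x, y, z):
    return x + y + z            # should equal 1 on every solution

def E2(x, y, z):
    return 2 * x - y + z        # should equal 2 on every solution

particular = (F(1), F(0), F(0))             # a particular solution
direction = (F(-2, 3), F(-1, 3), F(1))      # solves the associated homogeneous system

for t in (F(0), F(1), F(-3), F(7, 2)):
    x, y, z = (p + t * d for p, d in zip(particular, direction))
    assert E1(x, y, z) == 1 and E2(x, y, z) == 2
```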
This can be geometrically interpreted as follows :

A single equation ax + by = 0 has two variables, and its solutions are all points lying on the line given by the equation. Again, the system

a_1 x + b_1 y + c_1 z = 0
a_2 x + b_2 y + c_2 z = 0

always has non-trivial solutions, which lie on the line of intersection of the above two planes.

Theorem 1.2.1 :
The system Σ_{j=1}^{n} a_ij x_j = 0 for 1 ≤ i ≤ m always has a non-trivial solution if m < n.

Proof :
First let m = 1, so the system is the single equation

a_11 x_1 + ... + a_1n x_n = 0.

If each a_1i = 0, then any value of the variables is a solution, and a non-trivial solution certainly exists. Suppose some coefficient, say a_1j, is non-zero. Then we can write

x_j = -a_1j^{-1} (a_11 x_1 + ... + a_1,j-1 x_{j-1} + a_1,j+1 x_{j+1} + ... + a_1n x_n).

Hence if we choose α_i arbitrarily for all i ≠ j, not all zero, and take

α_j = -a_1j^{-1} (a_11 α_1 + ... + a_1,j-1 α_{j-1} + a_1,j+1 α_{j+1} + ... + a_1n α_n),

then (α_1, ..., α_n) is a non-trivial solution of Σ_{j=1}^{n} a_1j x_j = 0. Thus for m = 1 and n > 1 we get a non-trivial solution.

We prove the general result by induction on m. As induction hypothesis, assume that every homogeneous system of (m - 1) equations in k variables, where (m - 1) < k, has a non-trivial solution. Now consider the system
E_1 : a_11 x_1 + ... + a_1n x_n = 0
E_2 : a_21 x_1 + ... + a_2n x_n = 0
...
E_i : a_i1 x_1 + ... + a_in x_n = 0
...
E_m : a_m1 x_1 + ... + a_mn x_n = 0

with m < n. If every coefficient is zero, any choice of the variables is a solution. Otherwise some a_ij ≠ 0, and from E_i we have

x_j = -a_ij^{-1} (a_i1 x_1 + ... + a_i,j-1 x_{j-1} + a_i,j+1 x_{j+1} + ... + a_in x_n).

If we substitute this value of x_j in the other equations, we get a new system of (m - 1) equations in the (n - 1) variables x_1, ..., x_{j-1}, x_{j+1}, ..., x_n, as follows. For 1 ≤ k ≤ m, k ≠ i,

E'_k : Σ_{r ≠ j} (a_kr - a_kj a_ij^{-1} a_ir) x_r = 0,

because, substituting the expression for x_j into E_k,

a_k1 x_1 + ... + a_kj ( -a_ij^{-1} Σ_{r ≠ j} a_ir x_r ) + ... + a_kn x_n = Σ_{r ≠ j} (a_kr - a_kj a_ij^{-1} a_ir) x_r = 0.

Since (m - 1) < (n - 1), by the induction hypothesis the system of (m - 1) equations E'_k has a non-trivial solution (α_1, ..., α_{j-1}, α_{j+1}, ..., α_n). In particular, α_k ≠ 0 for some k ≠ j. We take

α_j = -a_ij^{-1} Σ_{r ≠ j} a_ir α_r.

We claim that (α_1, ..., α_{j-1}, α_j, α_{j+1}, ..., α_n) is a non-trivial solution. For 1 ≤ k ≤ m, k ≠ i,

E_k(α) = Σ_{r ≠ j} a_kr α_r + a_kj α_j
       = Σ_{r ≠ j} a_kr α_r - a_kj a_ij^{-1} Σ_{r ≠ j} a_ir α_r
       = E'_k(α) = 0.
Also, by the choice of α_j, the equation E_i is satisfied:

E_i(α) = Σ_{r ≠ j} a_ir α_r + a_ij α_j = Σ_{r ≠ j} a_ir α_r - a_ij a_ij^{-1} Σ_{r ≠ j} a_ir α_r = 0.

The solution (α_1, ..., α_n) is non-trivial, since α_k ≠ 0 for some k ≠ j by the induction hypothesis. Thus (α_1, ..., α_n) is a non-trivial solution of the original system.

Exercise 1.1 :
1) Find one non-trivial solution for each of the following systems of equations.
a) x + 2y + z = 0
b) 3x + y + z = 0
   x + y + z = 0
c) 2x - 3y + 4z = 0
   3x + y + z = 0
d) 2x + y + 4z + 19 = 0
   -3x + 2y - 3z + 19 = 0
   x + y + z = 0

2) Show that the only solutions of the following systems of equations are trivial.
a) 2x + 3y = 0
   x - y = 0
b) 4x + 5y = 0
   -6x + 7y = 0
c) 3x + 4y - 2z = 0
   x + y + z = 0
   -x - 3y + 5z = 0
d) 4x - 7y + 3z = 0
   x + y = 0
   y - 6z = 0
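The recipe in the base case of the proof (solve for one variable with a non-zero coefficient, choose the rest freely) is directly executable. The sketch below (our own illustration, using exact rationals) applies it to a single homogeneous equation, such as Exercise 1.1 (a).

```python
from fractions import Fraction as F

def nontrivial_solution(coeffs):
    """One nontrivial solution of a single homogeneous equation
    a1*x1 + ... + an*xn = 0 with n > 1, following the proof's recipe."""
    n = len(coeffs)
    j = next((i for i, a in enumerate(coeffs) if a != 0), None)
    if j is None:
        return [F(1)] * n            # all coefficients zero: anything works
    x = [F(1)] * n                   # choose the free variables arbitrarily
    x[j] = -sum(F(a) * x[r] for r, a in enumerate(coeffs) if r != j) / coeffs[j]
    return x

# Exercise 1.1 (a): x + 2y + z = 0 gives (x, y, z) = (-3, 1, 1)
sol = nontrivial_solution([1, 2, 1])
assert sum(c * v for c, v in zip([1, 2, 1], sol)) == 0
assert any(v != 0 for v in sol)
```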
1.3 SOLUTION OF THE SYSTEM OF LINEAR EQUATIONS BY GAUSS ELIMINATION METHOD

Let the system of m linear equations in n unknowns be Σ_{j=1}^{n} a_ij x_j = b_i, 1 ≤ i ≤ m. Then the coefficient matrix is

[ a_11  a_12  ...  a_1n ]
[  .     .          .   ]
[ a_m1  a_m2  ...  a_mn ]

Also we define the augmented matrix by

[ a_11  a_12  ...  a_1n  b_1 ]
[  .     .          .     .  ]
[ a_m1  a_m2  ...  a_mn  b_m ]

We will perform the following operations on the system of linear equations, called elementary row operations :
Multiply one equation by a non-zero number.
Add one equation to another.
Interchange two equations.

These operations are reflected in operations on the augmented matrix, which are also called elementary row operations.

Suppose that a system of linear equations is changed by an elementary row operation. Then the solutions of the new system are exactly the same as the solutions of the old system. By making row operations, we will try to simplify the shape of the system so that it is easier to find the solutions.

Let us define two matrices to be row equivalent if one can be obtained from the other by a succession of elementary row operations. If A is the matrix of coefficients of a system of linear equations, and B the column vector of right-hand sides as above, so that (A, B) is the augmented matrix, and if (A_1, B_1) is row-equivalent to (A, B), then the solutions of the system AX = B are the same as the solutions of the system A_1 X = B_1.
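The three elementary row operations are easy to express on an augmented matrix held as a list of rows. The sketch below (our own helper names, exact rationals) replays the first few steps of the worked example that follows.

```python
from fractions import Fraction as F

def scale(M, i, c):                 # multiply row i by a non-zero number c
    M[i] = [c * a for a in M[i]]

def add_multiple(M, i, j, c):       # add c times row j to row i
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

def swap(M, i, j):                  # interchange rows i and j
    M[i], M[j] = M[j], M[i]

# augmented matrix of Example 1.3.1 below
M = [[F(v) for v in row] for row in
     [[3, -2, 1, 2, 1],
      [1, 1, -1, -1, -2],
      [2, -1, 3, 0, 4]]]

add_multiple(M, 0, 1, F(-3))        # subtract 3 times the 2nd row from the 1st
add_multiple(M, 2, 1, F(-2))        # subtract 2 times the 2nd row from the 3rd
swap(M, 0, 1)                       # interchange the first two rows
scale(M, 1, F(-1))                  # multiply the (new) 2nd row by -1

assert M[0] == [F(1), F(1), F(-1), F(-1), F(-2)]
assert M[1] == [F(0), F(5), F(-4), F(-5), F(-7)]
assert M[2] == [F(0), F(-3), F(5), F(2), F(8)]
```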
Example 1.3.1 :
Consider the system of linear equations

3x - 2y + z + 2t = 1
x + y - z - t = -2
2x - y + 3z = 4

The augmented matrix is :

[ 3  -2   1   2 |  1 ]
[ 1   1  -1  -1 | -2 ]
[ 2  -1   3   0 |  4 ]

Subtract 3 times the second row from the first row :

[ 0  -5   4   5 |  7 ]
[ 1   1  -1  -1 | -2 ]
[ 2  -1   3   0 |  4 ]

Subtract 2 times the second row from the third row :

[ 0  -5   4   5 |  7 ]
[ 1   1  -1  -1 | -2 ]
[ 0  -3   5   2 |  8 ]

Interchange the first and second rows :

[ 1   1  -1  -1 | -2 ]
[ 0  -5   4   5 |  7 ]
[ 0  -3   5   2 |  8 ]

Multiply the second row by -1 :

[ 1   1  -1  -1 | -2 ]
[ 0   5  -4  -5 | -7 ]
[ 0  -3   5   2 |  8 ]
Multiply the second row by 3 and the third row by 5 :

[ 1    1   -1   -1 |  -2 ]
[ 0   15  -12  -15 | -21 ]
[ 0  -15   25   10 |  40 ]

Add the second row to the third row :

[ 1    1   -1   -1 |  -2 ]
[ 0   15  -12  -15 | -21 ]
[ 0    0   13   -5 |  19 ]

The new system, whose augmented matrix is the last matrix, can be written as :

x + y - z - t = -2
15y - 12z - 15t = -21
13z - 5t = 19

Now if we take t as the free variable, then

z = (19 + 5t)/13,
15y = -21 + 12z + 15t = (-45 + 255t)/13,  so  y = (17t - 3)/13,
x = -2 - y + z + t = (t - 4)/13.

This method is known as the Gauss elimination method.

Example 1.3.2 :
Consider the system of linear equations

x_1 + x_2 + x_3 + x_4 + x_5 = 1
-x_1 - x_2 + x_5 = -1
-2x_1 - 2x_2 + x_5 = 1
x_3 + x_4 + x_5 = -1
x_1 + x_2 + 2x_3 + 2x_4 + 2x_5 = 1
The augmented matrix is :

[  1   1  1  1  1 |  1 ]
[ -1  -1  0  0  1 | -1 ]
[ -2  -2  0  0  1 |  1 ]
[  0   0  1  1  1 | -1 ]
[  1   1  2  2  2 |  1 ]

Adding the first row to the 2nd row, two times the first row to the 3rd row, and subtracting the first row from the last row, we get

[ 1  1  1  1  1 |  1 ]
[ 0  0  1  1  2 |  0 ]
[ 0  0  2  2  3 |  3 ]
[ 0  0  1  1  1 | -1 ]
[ 0  0  1  1  1 |  0 ]

Subtracting twice the 2nd row from the 3rd row, and then subtracting the 2nd row and the new 3rd row from each of the 4th and 5th rows, we get

[ 1  1  1  1  1 |  1 ]
[ 0  0  1  1  2 |  0 ]
[ 0  0  0  0 -1 |  3 ]
[ 0  0  0  0  0 | -4 ]
[ 0  0  0  0  0 | -3 ]

The equations represented by the last two rows are :

0x_1 + 0x_2 + 0x_3 + 0x_4 + 0x_5 = -4
0x_1 + 0x_2 + 0x_3 + 0x_4 + 0x_5 = -3

which implies that the system is inconsistent.

Exercise 1.2 :
For each of the following systems of equations, use Gaussian elimination to solve them.

i) 12 332 4xx x
   12 312 322111 2 14xx xxx x
ii) 12 324xx x
    12 312 323 1734 7xx xxx x

iii) x_1 + x_2 + x_3 + x_4 = 0
     12 3 412 3 412342322232522 4xx x xxx x xxxxx

iv) 12 3 433xx x x
    12 3 412 3422 2 832 1xx x xxx xx

Answers
Exercise 1.2 :
i) 51 7,, :48xxxx real
ii) inconsistent
iii) inconsistent
iv) 15 5 1 1,, , ,48 4 8xxx
2
VECTOR SPACE

Unit Structure :
2.0 Introduction
2.1 Objectives
2.2 Definition and examples
2.2.1 Vector Space
2.2.2 Subspace
2.2.3 Basis and Dimension

2.0 INTRODUCTION

The concept of a vector is basic for the study of functions of several variables. It provides geometric motivation for everything that follows. We know that a number can be used to represent a point on a line, once a unit length is selected. A pair of numbers (x, y) can be used to represent a point in the plane, whereas a triple of numbers (x, y, z) can be used to represent a point in 3-dimensional space, denoted by R³. The line can be called 1-dimensional space, denoted by R, and the plane 2-dimensional space, denoted by R².

Continuing this way, we can define a point in n-space as (x_1, x_2, ..., x_n). Here R is the set of real numbers, and x being an element of R is written x ∈ R. R² is the set of ordered pairs, and (x, y) ∈ R². Thus X being an element of R^n, or X ∈ R^n, means X = (x_1, x_2, ..., x_n). These elements, as a special case, are called vectors from the respective spaces. Vectors from the same set or space can be added and multiplied by a number. It is now convenient to define in general a notion which includes these as special cases.

2.1 OBJECTIVES

After going through this chapter you will be able to :
Verify whether a given set is a vector space over a field.
Get the concept of a vector subspace.
Get the concept of basis and dimension of a vector space.
2.2 DEFINITION AND EXAMPLES

We define a vector space to be a set on which "addition" and "scalar multiplication" are defined. More precisely :

2.2.1 Definition : Vector Space
Let (F, +, ·) be a field. The elements of F will be called scalars. Let V be a non-empty set whose elements will be called vectors. Then V is a vector space over the field F if :

1. There is defined an internal composition in V, called addition of vectors and denoted by '+', in such a way that :
i) α + β ∈ V for all α, β ∈ V (closure property)
ii) α + β = β + α for all α, β ∈ V (commutative property)
iii) (α + β) + γ = α + (β + γ) for all α, β, γ ∈ V (associative property)
iv) There exists an element O ∈ V such that α + O = α for all α ∈ V (existence of identity)
v) To every vector α ∈ V there exists a vector -α ∈ V such that α + (-α) = O (existence of inverse)

2. There is an external composition in V over F, called scalar multiplication and denoted multiplicatively, in such a way that :
i) aα ∈ V for all a ∈ F and α ∈ V (closure property)
ii) a(α + β) = aα + aβ for all a ∈ F and α, β ∈ V (distributive property)
iii) (a + b)α = aα + bα for all a, b ∈ F and α ∈ V (distributive property)
iv) (ab)α = a(bα) for all a, b ∈ F and α ∈ V
v) 1α = α for all α ∈ V, where 1 is the unity element of F.

When V is a vector space over the field F, we shall say that V(F) is a vector space, or sometimes simply that V is a vector space. If F is the field R of real numbers, V is called a real vector space; similarly, if F is Q or C, we call V a rational vector space or a complex vector space.

Note 1 : There should not be any confusion about the use of the word vector. Here by vector we do not mean the vector quantity which we have defined in vector algebra as a directed line segment. Here we shall call the elements of the set V vectors.
Note 2 : The symbol '+' is used for addition of vectors, and is also used to denote the addition of two scalars in F. There should be no confusion between the two compositions. Similarly, by scalar multiplication we mean multiplication of an element of V by an element of F.

Note 3 : In a vector space we shall be dealing with two types of zero elements. One is the zero element of F, which is our well-known 0. The other is the zero vector in V; e.g. if V = R³, O = (0, 0, 0).

Example 1 : The n-tuple space, F^n.
Let F be any field and let V be the set of all n-tuples α = (x_1, x_2, ..., x_n) of scalars x_i in F. If β = (y_1, y_2, ..., y_n) with y_i in F, the sum of α and β is defined by α + β = (x_1 + y_1, ..., x_n + y_n). The product of a scalar c and the vector α is defined by cα = (cx_1, cx_2, ..., cx_n). This vector addition and scalar multiplication satisfy all the conditions of a vector space. (Verification is left to the students.)

For n = 1, 2 or 3 and F = R, the spaces R, R² and R³ are basic examples of vector spaces.

Example 2 : The space of m×n matrices, M_mxn(F).
Let F be any field and let m and n be positive integers. Let M_mxn(F) be the set of all m×n matrices over the field F. The sum of two vectors A and B in M_mxn(F) is defined by (A + B)_ij = A_ij + B_ij. The product of a scalar c and the matrix A is defined by (cA)_ij = cA_ij.

Example 3 : The space of functions from a set to a field.
Let F be any field and let S be any non-empty set. Let V be the set of all functions from the set S into F. The sum of two vectors f and g in V is the vector f + g, i.e. the function from S into F defined by (f + g)(s) = f(s) + g(s). The product of the scalar c and the function f is the function cf defined by (cf)(s) = c f(s).

For this example we shall indicate how one verifies that V is a vector space over F. Here V = { f : f is a function from S into F }. We have,
(f + g)(s) = f(s) + g(s) for all s ∈ S. Since f(s) and g(s) are in F and F is a field, f(s) + g(s) is also in F. Thus f + g is also a function from S to F. Therefore f, g ∈ V ⟹ f + g ∈ V, so V is closed under addition.

Associativity of addition :
We have
[(f + g) + h](x) = (f + g)(x) + h(x)   (by definition)
                 = [f(x) + g(x)] + h(x)   (by definition)
                 = f(x) + [g(x) + h(x)]
[since f(x), g(x), h(x) are elements of F and addition in F is associative]
                 = f(x) + (g + h)(x)
                 = [f + (g + h)](x)
⟹ (f + g) + h = f + (g + h).

Commutativity of addition :
We have
(f + g)(x) = f(x) + g(x)
           = g(x) + f(x)   [since addition is commutative in F]
           = (g + f)(x)
⟹ f + g = g + f.

Existence of additive identity :
Let us define a function O : S → F such that O(x) = 0 for all x ∈ S. Then O ∈ V, and it is called the zero function. We have
(f + O)(x) = f(x) + O(x) = f(x) + 0 = f(x)
⟹ f + O = f.
The function O is the additive identity.

Existence of additive inverse :
Let f ∈ V. Let us define a function -f : S → F by (-f)(x) = -[f(x)] for all x ∈ S. Then -f ∈ V, and we have
[f + (-f)](x) = f(x) + (-f)(x)
= f(x) - f(x) = 0 = O(x)
⟹ f + (-f) = O
⟹ the function -f is the additive inverse of f.

Now for scalar multiplication: if c ∈ F and f ∈ V, then for x ∈ S, (cf)(x) = c f(x). Now f(x) ∈ F and c ∈ F, therefore c f(x) is in F. Thus V is closed with respect to scalar multiplication.

Next we observe that :
i) If c ∈ F and f, g ∈ V, then
[c(f + g)](x) = c[(f + g)(x)] = c[f(x) + g(x)] = c f(x) + c g(x) = (cf)(x) + (cg)(x)
⟹ c(f + g) = cf + cg.
ii) If c_1, c_2 ∈ F and f ∈ V, then
[(c_1 + c_2)f](x) = (c_1 + c_2) f(x) = c_1 f(x) + c_2 f(x) = (c_1 f)(x) + (c_2 f)(x)
⟹ (c_1 + c_2)f = c_1 f + c_2 f.
iii) If c_1, c_2 ∈ F and f ∈ V, then
[(c_1 c_2)f](x) = (c_1 c_2) f(x) = c_1 [c_2 f(x)] = c_1 [(c_2 f)(x)]
⟹ (c_1 c_2)f = c_1 (c_2 f).
iv) If 1 is the unity element of F and f ∈ V, then
(1f)(x) = 1 · f(x) = f(x) ⟹ 1f = f.

Hence V is a vector space over F.
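The pointwise operations of Example 3 can be modelled directly, with functions represented as callables. The sketch below (our own helper names; S = R and F = R for concreteness) builds f + g and cf as new callables and spot-checks the identity axioms at sample points.

```python
# Functions from a set S into R, with pointwise operations as in Example 3.

def add(f, g):
    return lambda s: f(s) + g(s)     # (f + g)(s) = f(s) + g(s)

def smul(c, f):
    return lambda s: c * f(s)        # (c f)(s) = c * f(s)

zero = lambda s: 0                   # the zero function, the additive identity

f = lambda s: s * s
g = lambda s: 3 * s

h = add(f, smul(-2, g))              # h = f - 2g
assert h(4) == 16 - 24               # pointwise evaluation

assert add(f, zero)(5) == f(5)       # f + O = f at a sample point
assert add(f, smul(-1, f))(7) == 0   # f + (-f) = O at a sample point
```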
Example 4 : The set of all convergent sequences over the field of real numbers.
Let V denote the set of all convergent sequences of real numbers. Let

α = {α_n} = {α_1, α_2, ..., α_n, ...},
β = {β_n} = {β_1, β_2, ..., β_n, ...} and
γ = {γ_n} = {γ_1, γ_2, ..., γ_n, ...}

be any three convergent sequences.

1. Properties of vector addition :
i) We have α + β = {α_n} + {β_n} = {α_n + β_n}, which is also a convergent sequence. Therefore V is closed under addition of sequences.
ii) Commutativity of addition : We have
α + β = {α_n + β_n} = {β_n + α_n} = {β_n} + {α_n} = β + α.
iii) Associativity of addition : We have
(α + β) + γ = {(α_n + β_n) + γ_n} = {α_n + (β_n + γ_n)} = α + (β + γ).
iv) Existence of additive identity : The zero sequence {0} = {0, 0, ..., 0, ...} is the additive identity.
v) Existence of additive inverse : For every sequence {α_n} there is a sequence {-α_n} such that {α_n} + {-α_n} = {α_n - α_n} = {0}.

2. Properties of scalar multiplication :
i) Let a ∈ R. Then aα = a{α_n} = {aα_n}, which is also a convergent sequence, because lim_{n→∞} aα_n = a lim_{n→∞} α_n. Thus V is closed under scalar multiplication.
ii) Let a ∈ R and α, β ∈ V. Then we have
a(α + β) = a{α_n + β_n} = {a(α_n + β_n)} = {aα_n + aβ_n} = {aα_n} + {aβ_n} = a{α_n} + a{β_n} = aα + aβ.
iii) Let a, b ∈ R and α ∈ V. Then
(a + b)α = {(a + b)α_n} = {aα_n + bα_n} = {aα_n} + {bα_n} = aα + bα.
iv) (ab)α = {(ab)α_n} = {a(bα_n)} = a{bα_n} = a(bα).
v) 1α = {1·α_n} = {α_n} = α.

Thus all the postulates of a vector space are satisfied. Hence V is a vector space over the field of real numbers.

Check your Progress :
1. Show that the following are vector spaces over the field R :
i) The set of all real valued functions defined on some interval [0, 1].
ii) The set of all polynomials of degree at most n.

Example 5 :
Let V be the set of all pairs (x, y) of real numbers and let F be the field of real numbers. Let us define

(x, y) + (x_1, y_1) = (x + x_1, 0),
c(x, y) = (cx, 0).

Is V, with these operations, a vector space over the field R ?

Solution :
If any of the postulates of a vector space is not satisfied, then V will not be a vector space. We shall show that for the operation of addition of vectors as defined in this problem, the identity element does not exist. Suppose (x_1, y_1) is an additive identity element.
Then we must have (x, y) + (x_1, y_1) = (x, y), i.e. (x + x_1, 0) = (x, y), which is not possible unless y = 0. Thus there is no element (x_1, y_1) of V such that (x, y) + (x_1, y_1) = (x, y) for all (x, y) ∈ V. As the additive identity element does not exist in V, it is not a vector space.

Exercise 2.1 :
1) What is the zero vector in the vector space R⁴ ?
2) Is the set of all polynomials in x of degree ≤ 2 a vector space ?
3) Show that the complex field C is a vector space over the real field R.
4) Prove that the set V = { (a, b) : a, b ∈ R } is a vector space over the field R for the compositions of addition and scalar multiplication defined as
(a, b) + (c, d) = (a + c, b + d),  k(a, b) = (ka, kb).
5) Let V be the set of all pairs (x, y) of real numbers and let F be the field of real numbers. Define
(x, y) + (x_1, y_1) = (x + x_1, y + y_1),  c(x, y) = (cx, y).
Show that with these operations V is not a vector space over R.

2.2.2 Definition : Vector subspace
Let V be a vector space over the field F and let W ⊆ V. Then W is called a subspace of V if W itself is a vector space over F with respect to the operations of vector addition and scalar multiplication in V.

Theorem 1 :
The necessary and sufficient condition for a non-empty subset W of a vector space V(F) to be a subspace of V is :
a, b ∈ F and α, β ∈ W ⟹ aα + bβ ∈ W.

Proof :
The condition is necessary :
If W is a subspace of V, then by the definition it is also a vector space, and hence it must be closed under scalar multiplication and vector addition. Therefore a ∈ F, α ∈ W ⟹ aα ∈ W and b ∈ F, β ∈ W ⟹ bβ ∈ W, and then aα ∈ W, bβ ∈ W ⟹ aα + bβ ∈ W. Hence the condition is necessary.
The condition is sufficient :
Now W is a non-empty subset of V satisfying the given condition, i.e. a, b ∈ F and α, β ∈ W ⟹ aα + bβ ∈ W. Taking a = 1, b = 1 we have α, β ∈ W ⟹ α + β ∈ W; thus W is closed under addition. Taking a = -1, b = 0 we have α ∈ W ⟹ -α ∈ W; thus the additive inverse of each element of W is also in W.

Taking a = 0, b = 0, we see that if α ∈ W then 0α + 0α ∈ W, so 0 ∈ W. Thus the zero vector of V belongs to W, and it is also the zero vector in W.

Since the elements of W are also elements of V, vector addition is associative as well as commutative in W. Now taking β = 0, we see that if a ∈ F and α ∈ W, then aα + 0 ∈ W, i.e. aα ∈ W. So W is closed under scalar multiplication.

The remaining postulates of a vector space hold in W since they hold in V, of which W is a subset. Thus W(F) is a vector space. Hence W(F) is a subspace of V(F).

Example 6 :
a) If V is any vector space, V is a subspace of V. The subset consisting of the zero vector alone is a subspace of V, called the zero subspace of V.
b) The space of polynomial functions over the field F is a subspace of the space of all functions from F into F.
c) The symmetric matrices form a subspace of the space of all n×n matrices over F.
d) An n×n matrix A over the field C of complex numbers is Hermitian if A_kj = conjugate(A_jk) for each j, k, the bar denoting complex conjugation. A 2×2 matrix is Hermitian if and only if it has the form

[ z        x + iy ]
[ x - iy   w      ]

where x, y, z and w are real numbers.
The set of all Hermitian matrices is not a subspace of the space of all n×n matrices over C. For if A is Hermitian, its diagonal entries A_11, A_22, ... are all real numbers, but the diagonal entries of iA are in general not real. On the other hand, it is easily verified that the set of n×n complex Hermitian matrices is a vector space over the field R with the usual operations.

Theorem 2 :
Let V be a vector space over the field F. The intersection of any collection of subspaces of V is a subspace of V.

Proof :
Let {W_a} be a collection of subspaces of V and let W = ∩_a W_a be their intersection. By definition of W, it is the set of all elements belonging to every W_a. Since each W_a is a subspace, each contains the zero vector. Thus the zero vector is in the intersection W, and so W is non-empty. Let α, β ∈ W and a, b ∈ F. Then both α and β belong to each W_a. But W_a is a subspace of V, and hence aα + bβ ∈ W_a. Thus aα + bβ ∈ W. So W is a subspace of V.

From the above theorem it follows that if S is any collection of vectors in V, then there is a smallest subspace of V which contains S, that is, a subspace which contains S and which is contained in every other subspace containing S.

Definition : Let S be a set of vectors in a vector space V. The subspace spanned by S is defined to be the intersection W of all subspaces of V which contain S. When S is a finite set of vectors, S = {α_1, α_2, ..., α_n}, we shall simply call W the subspace spanned by the vectors α_1, α_2, ..., α_n.

Definition : Linear Combination :
Let V(F) be a vector space. If α_1, α_2, ..., α_n ∈ V, then any vector α = a_1 α_1 + ... + a_n α_n, where a_1, a_2, ..., a_n ∈ F, is called a linear combination of the vectors α_1, α_2, ..., α_n.

Definition : Linear Span :
Let V(F) be a vector space and S be any non-empty subset of V. Then the linear span of S is the set of all linear combinations of finite sets of elements of S, and is denoted by L(S). Thus we have

L(S) = { a_1 α_1 + a_2 α_2 + ... + a_n α_n : α_1, α_2, ..., α_n ∈ S and a_1, a_2, ..., a_n ∈ F }.
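The Hermitian example is easy to probe in code. The sketch below (our own `is_hermitian` helper, using Python's built-in complex numbers) checks a concrete 2×2 Hermitian matrix, shows that iA fails to be Hermitian, and that a real linear combination of Hermitian matrices stays Hermitian.

```python
# Hermitian means A[j][k] == conjugate(A[k][j]) for all j, k.

def is_hermitian(A):
    n = len(A)
    return all(A[j][k] == A[k][j].conjugate()
               for j in range(n) for k in range(n))

A = [[2 + 0j, 1 + 3j],
     [1 - 3j, -5 + 0j]]             # the form [z, x+iy; x-iy, w] with x,y,z,w real
assert is_hermitian(A)

iA = [[1j * a for a in row] for row in A]
assert not is_hermitian(iA)          # the diagonal entries 2i, -5i are not real

# a real linear combination 2A + 3B of Hermitian matrices is again Hermitian
B = [[0 + 0j, 2 - 1j],
     [2 + 1j, 7 + 0j]]
C = [[2 * a + 3 * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
assert is_hermitian(C)
```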
Theorem 3 :
The linear span L(S) of any non-empty subset S of a vector space V(F) is a subspace of V generated by S, i.e. L(S) = {S}.

Proof :
Let α, β be any two elements of L(S). Then

α = a_1 α_1 + ... + a_m α_m  and  β = b_1 β_1 + ... + b_n β_n,

where a_i, b_j ∈ F and α_i, β_j ∈ S for i = 1, ..., m and j = 1, ..., n. If a, b are any two elements of F, then

aα + bβ = a(a_1 α_1 + ... + a_m α_m) + b(b_1 β_1 + ... + b_n β_n)
        = (aa_1)α_1 + ... + (aa_m)α_m + (bb_1)β_1 + ... + (bb_n)β_n.

Thus aα + bβ has been expressed as a linear combination of a finite set α_1, ..., α_m, β_1, ..., β_n of elements of S. Consequently aα + bβ ∈ L(S). Thus a, b ∈ F and α, β ∈ L(S) ⟹ aα + bβ ∈ L(S). Hence L(S) is a subspace of V(F).

Also each element of S belongs to L(S); for if α_r ∈ S, then α_r = 1·α_r, which implies α_r ∈ L(S). Thus L(S) is a subspace of V and S is contained in L(S).

Now if W is any subspace of V containing S, then each element of L(S) must be in W, because W is closed under vector addition and scalar multiplication. Therefore L(S) is contained in W. Hence L(S) = {S}, i.e. L(S) is the smallest subspace of V containing S.

Check your progress :
1) Let W = { (a_1, a_2, 0) : a_1, a_2 ∈ R }. Show that W is a subspace of R³.
2) Show that the set W of the elements of the vector space R³ of the form (x + 2y, y, -x + 3y), where x, y ∈ R, is a subspace of R³.
3) Which of the following are subspaces of R³ ?
i) { (x, 2y, 3z) : x, y, z ∈ R }
ii) { (x, x, x) : x ∈ R }
iii) { (x, y, z) : x, y, z are rational numbers }
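The subspace criterion of Theorem 1 can be spot-checked numerically. The sketch below (our own illustration, with deterministic sample values rather than a proof) tests aα + bβ ∈ W over a grid of scalars and vectors for W = { (a_1, a_2, 0) } from problem 1 above.

```python
from fractions import Fraction as F

def in_W(v):
    # W = {(a1, a2, 0)}: membership just means the third coordinate is 0
    return len(v) == 3 and v[2] == 0

samples = [(F(1), F(2), F(0)), (F(-3), F(5), F(0)), (F(7), F(0), F(0))]
scalars = [F(0), F(1), F(-2), F(3, 4)]

for alpha in samples:
    for beta in samples:
        for a in scalars:
            for b in scalars:
                combo = tuple(a * x + b * y for x, y in zip(alpha, beta))
                assert in_W(combo)   # a*alpha + b*beta stays in W
```

Of course, finitely many checks do not prove the criterion; they only illustrate it. The proof closes the gap by working with arbitrary a, b, α, β.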
Definition : Linear Dependence :
Let V(F) be a vector space. A finite set {α_1, ..., α_n} of vectors of V is said to be linearly dependent if there exist scalars a_1, ..., a_n ∈ F, not all of them 0, such that a_1 α_1 + ... + a_n α_n = 0.

Definition : Linear Independence :
Let V(F) be a vector space. A finite set {α_1, ..., α_n} of vectors of V is said to be linearly independent if every relation of the form a_1 α_1 + ... + a_n α_n = 0, with a_i ∈ F, 1 ≤ i ≤ n, implies a_i = 0 for each 1 ≤ i ≤ n.

Any infinite set of vectors of V is said to be linearly independent if its every finite subset is linearly independent; otherwise it is linearly dependent.

Exercises 2.2 :
1) Which of the following sets of vectors α = (a_1, ..., a_n) in R^n are subspaces of R^n ? (n ≥ 3)
i) all α such that a_1 ≥ 0;
ii) all α such that a_1 + 3a_2 = a_3;
iii) all α such that a_2 = a_1²;
iv) all α such that a_1 a_2 = 0;
v) all α such that a_2 is rational.

2) State whether the following statements are true or false.
i) A subspace of R³ must always contain the origin.
ii) The set of vectors (x, y) ∈ R² for which x² = y² is a subspace of R².
iii) The set of ordered triples (x, y, z) of real numbers with x > 0 is a subspace of R³.
iv) The set of ordered triples (x, y, z) of real numbers with x + y = 0 is a subspace of R³.

3) In R³, examine each of the following sets of vectors for linear dependence.
i) { (2, 1, 2), (8, 4, 8) }
ii) { (1, 2, 0), (0, 3, 1), (1, 0, 1) }
iii) { (2, 3, 5), (4, 9, 25) }
iv) { (1, 2, 1), (3, 1, 5), (3, 4, 7) }
4) Is the vector (2, -5, 3) in the subspace of R³ spanned by the vectors (1, -3, 2), (2, -4, -1), (1, -5, 7) ?
5) Show that the set {1, x, x(1 - x)} is a linearly independent set of vectors in the space of all polynomials over R.

2.2.3 Basis and dimension :
In this section we take up the task of assigning a dimension to certain vector spaces. We usually associate 'dimension' with something geometrical. But after developing the concept of a basis for a vector space we can give a suitable algebraic definition of the dimension of a vector space.

Definition : Basis of a vector space
A subset S of a vector space V(F) is said to be a basis of V(F) if :
i) S consists of linearly independent vectors, and
ii) S generates V(F), i.e. L(S) = V, i.e. each vector in V is a linear combination of a finite number of elements of S.

Example 1 :
Let V = R^n. If x = (x_1, ..., x_n) ∈ R^n, we call x_i the i-th coordinate of x. Let e_i = (0, ..., 0, 1, 0, ..., 0) be the vector whose i-th coordinate is 1 and whose other coordinates are 0. It is easy to show that { e_i : 1 ≤ i ≤ n } is a basis of V. It is called the standard basis of R^n.

Example 2 :
The infinite set S = {1, x, x², ..., x^n, ...} is a basis of the vector space F[x] of polynomials over the field F.

Definition : Finite Dimensional Vector Spaces. The vector space V(F) is said to be finite dimensional, or finitely generated, if there exists a finite subset S of V such that V = L(S).
A vector space which is not finitely generated may be referred to as an infinite dimensional space.

Theorem 1 :
Let V be a vector space spanned by a finite set of vectors β_1, β_2, ..., β_m. Then any independent set of vectors in V is finite and contains no more than m elements.
Proof :
To prove the theorem it suffices to show that every subset S of V which contains more than m vectors is linearly dependent. Let S be such a set. In S there are distinct vectors α_1, α_2, ..., α_n, where n > m. Since β_1, ..., β_m span V, there exist scalars A_ij in F such that

α_j = Σ_{i=1}^{m} A_ij β_i.

For any n scalars x_1, x_2, ..., x_n we have

x_1 α_1 + ... + x_n α_n = Σ_{j=1}^{n} x_j α_j = Σ_{j=1}^{n} x_j Σ_{i=1}^{m} A_ij β_i = Σ_{i=1}^{m} ( Σ_{j=1}^{n} A_ij x_j ) β_i.

Since n > m, the homogeneous system of linear equations Σ_{j=1}^{n} A_ij x_j = 0, 1 ≤ i ≤ m, has a non-trivial solution; i.e. there are x_1, x_2, ..., x_n, not all 0, with x_1 α_1 + ... + x_n α_n = 0. Hence S is a linearly dependent set.

Corollary 1 :
If V is a finite-dimensional vector space, then any two bases of V have the same number of elements.

Proof :
Since V is finite dimensional, it has a finite basis {β_1, β_2, ..., β_m}. By the above theorem every basis of V is finite and contains no more than m elements. Thus if {α_1, α_2, ..., α_n} is a basis, n ≤ m. By the same argument, m ≤ n. Hence m = n.

This corollary allows us to define the dimension of a finite dimensional vector space V as the number of elements in a basis of V, denoted dim V. This leads us to reformulate Theorem 1 as follows :

Corollary 2 :
Let V be a finite-dimensional vector space and let n = dim V. Then (a) any subset of V which contains more than n vectors is linearly dependent; (b) no subset of V which contains fewer than n vectors can span V.
Lemma :
Let S be a linearly independent subset of a vector space V. Suppose β is a vector in V which is not in the subspace spanned by S. Then the set obtained by adjoining β to S is linearly independent.

Proof :
Suppose α_1, ..., α_m are distinct vectors in S and that c_1 α_1 + ... + c_m α_m + bβ = 0. Then b = 0; for otherwise

β = (-c_1/b) α_1 + ... + (-c_m/b) α_m

and β is in the subspace spanned by S. Thus c_1 α_1 + ... + c_m α_m = 0, and since S is a linearly independent set, each c_i = 0.

Theorem 2 :
If W is a subspace of a finite dimensional vector space V, every linearly independent subset of W is finite and is part of a (finite) basis for W.

Proof :
Suppose S_0 is a linearly independent subset of W, and let S be a linearly independent subset of W containing S_0. Then S is also a linearly independent subset of V. Since V is finite dimensional, S contains no more than dim V elements.

We extend S_0 to a basis for W as follows. If S_0 spans W, then S_0 is a basis for W and our job is done. If S_0 does not span W, we use the preceding lemma to find a vector β_1 in W such that the set S_1 = S_0 ∪ {β_1} is independent. If S_1 spans W, our work is over. If not, apply the lemma to obtain a vector β_2 in W such that S_2 = S_1 ∪ {β_2} is independent. If we continue in this way, then in at most dim V steps we reach a set S_m = S_0 ∪ {β_1, ..., β_m} which is a basis for W.

Corollary 1 :
If W is a proper subspace of a finite-dimensional vector space V, then W is finite dimensional and dim W < dim V.

Proof :
We may suppose W contains a vector α ≠ 0. By Theorem 2 there is a basis of W containing α, and it contains no more than dim V elements. Hence W is finite-dimensional and dim W ≤ dim V. Since W is a proper subspace, there is a vector β in V which is not in W.
Adjoining β to any basis of W, we obtain a linearly independent subset of V. Thus dim W < dim V.

Theorem 3 : If W_1 and W_2 are finite-dimensional subspaces of a vector space V, then W_1 + W_2 is finite-dimensional and

dim W_1 + dim W_2 = dim (W_1 ∩ W_2) + dim (W_1 + W_2).

Proof : W_1 ∩ W_2 has a finite basis {α_1, ..., α_k} which is part of a basis {α_1, ..., α_k, β_1, ..., β_m} for W_1 and part of a basis {α_1, ..., α_k, γ_1, ..., γ_n} for W_2. The subspace W_1 + W_2 is spanned by the vectors

{α_1, ..., α_k, β_1, ..., β_m, γ_1, ..., γ_n},

and these vectors form an independent set. For suppose

Σ_i x_i α_i + Σ_j y_j β_j + Σ_r z_r γ_r = 0.

Then -Σ_r z_r γ_r = Σ_i x_i α_i + Σ_j y_j β_j, which shows that Σ_r z_r γ_r belongs to W_1. As Σ_r z_r γ_r also belongs to W_2, it follows that

Σ_r z_r γ_r = Σ_i c_i α_i

for certain scalars c_1, ..., c_k. Because the set {α_1, ..., α_k, γ_1, ..., γ_n} is independent, each of the scalars z_r = 0. Thus

Σ_i x_i α_i + Σ_j y_j β_j = 0,

and since {α_1, ..., α_k, β_1, ..., β_m} is also an independent set, each x_i = 0 and each y_j = 0. Thus

{α_1, ..., α_k, β_1, ..., β_m, γ_1, ..., γ_n}

is a basis for W_1 + W_2. Finally,

dim W_1 + dim W_2 = (k + m) + (k + n) = k + (k + m + n) = dim (W_1 ∩ W_2) + dim (W_1 + W_2).

Example 1 : A basis of the vector space of all 2 x 2 matrices over the field F is

[ 1 0 ]   [ 0 1 ]   [ 0 0 ]   [ 0 0 ]
[ 0 0 ] , [ 0 0 ] , [ 1 0 ] , [ 0 1 ]

So the dimension of that vector space is 4.
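The dimension identity of Theorem 3 can be spot-checked numerically. A hedged sketch: the two subspaces of R^3 below are arbitrary illustrations, and dimensions are computed as ranks of generating sets by Gaussian elimination.

```python
def rank(rows):
    """Rank of a list of numeric row vectors, by Gaussian elimination."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

W1 = [(1, 0, 0), (0, 1, 0)]    # arbitrary example subspace of R^3
W2 = [(0, 1, 0), (0, 0, 1)]    # another example subspace

d1, d2, dsum = rank(W1), rank(W2), rank(W1 + W2)
d_int = d1 + d2 - dsum          # dim of the intersection, by Theorem 3
print(d1, d2, dsum, d_int)      # 2 2 3 1

# (0,1,0) generates the intersection: adding it changes neither rank
assert rank(W1 + [(0, 1, 0)]) == d1 and rank(W2 + [(0, 1, 0)]) == d2
```

Here dim W_1 = dim W_2 = 2 and dim (W_1 + W_2) = 3, so the theorem forces dim (W_1 ∩ W_2) = 1, in agreement with the fact that the intersection is spanned by (0, 1, 0).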
Exercises : 2.3

1) Show that the vectors α_1 = (1, 0, -1), α_2 = (1, 2, 1), α_3 = (0, -3, 2) form a basis for R^3.

2) Tell, with reason, whether or not the vectors (2, 1, 0), (1, 1, 0) and (4, 2, 0) form a basis of R^3.

3) Show that the vectors β_1 = (1, 1, 0) and β_2 = (1, i, 1 + i) are in the subspace W of C^3 spanned by (1, 0, i) and (1 + i, 1, -1), and that β_1 and β_2 form a basis of W.

4) Prove that the space of all m x n matrices over the field F has dimension mn, by exhibiting a basis for this space.

5) If {α_1, α_2, α_3} is a basis of V_3(R), show that {α_1 + α_2, α_2 + α_3, α_3 + α_1} is also a basis of V_3(R).

Answers

Exercises 2.1
1. (0, 0, 0, 0)   2. Yes

Exercises 2.2
1. (i) not a subspace (ii) subspace (iii) not a subspace (iv) not a subspace (v) not a subspace
2. (i) true (ii) false (iii) false (iv) true
3. (i) dependent (ii) independent (iii) dependent (iv) independent
4. no
3

LINEAR TRANSFORMATION

Unit Structure :
3.0 Introduction
3.1 Objectives
3.2 Definition and Examples
3.3 Image and Kernel
3.4 Linear Algebra
3.5 Invertible linear transformations
3.6 Matrix of a linear transformation

3.0 INTRODUCTION

If X and Y are any two arbitrary sets, there is no obvious restriction on the kind of maps between X and Y, except that a map may be one-one or onto. However, if X and Y have some additional structure, we wish to consider those maps which, in some sense, 'preserve' the extra structure on the sets X and Y. A 'linear transformation' preserves the algebraic operations: the sum of two vectors is mapped to the sum of their images, and a scalar multiple of a vector is mapped to the same scalar multiple of its image.

3.1 OBJECTIVES

This chapter will help you to understand :
What a linear transformation is.
The kernel and image of a linear transformation.
The representation of a linear transformation by a matrix.

3.2 DEFINITION AND EXAMPLES

Let U(F) and V(F) be two vector spaces over the same field F. A linear transformation from U into V is a function T from U into V such that

T(aα + bβ) = a T(α) + b T(β)

for all α, β in U and for all a, b in F.
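The defining identity can be spot-checked numerically for a candidate map. A minimal sketch (the sample maps and sample vectors below are hypothetical illustrations; a finite check can only refute linearity, never prove it):

```python
def looks_linear(T, samples, scalars=(2, -3)):
    """Return False if T visibly violates T(a*u + b*v) = a*T(u) + b*T(v)."""
    add = lambda u, v: tuple(x + y for x, y in zip(u, v))
    mul = lambda a, u: tuple(a * x for x in u)
    for u in samples:
        for v in samples:
            for a in scalars:
                for b in scalars:
                    lhs = T(add(mul(a, u), mul(b, v)))
                    rhs = add(mul(a, T(u)), mul(b, T(v)))
                    if lhs != rhs:
                        return False
    return True

samples = [(1, 0, 2), (0, 1, -1), (3, 2, 1)]
print(looks_linear(lambda v: (v[0] + v[1], v[2]), samples))   # True
print(looks_linear(lambda v: (v[0] ** 2, v[2]), samples))     # False
```

The first map satisfies the identity on every tested combination; the squaring map fails it immediately.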
Example 1 : The function T : V_3 → V_2 defined by T(a, b, c) = (a + b, c) is a linear transformation.

Let α = (a_1, b_1, c_1), β = (a_2, b_2, c_2) ∈ V_3. If a, b ∈ F, then

T(aα + bβ) = T(aa_1 + ba_2, ab_1 + bb_2, ac_1 + bc_2)
           = (aa_1 + ba_2 + ab_1 + bb_2, ac_1 + bc_2)
           = (a(a_1 + b_1) + b(a_2 + b_2), ac_1 + bc_2)
           = a(a_1 + b_1, c_1) + b(a_2 + b_2, c_2)
           = a T(a_1, b_1, c_1) + b T(a_2, b_2, c_2)
           = a T(α) + b T(β).

Hence T is a linear transformation from V_3 to V_2.

Example 2 : The most basic example of a linear transformation is T : F^n → F^m defined by

T(x) = Ax,

where x = (x_1, ..., x_n)^t is a column vector and A is a fixed m x n matrix.

Example 3 : Let V(F) be the vector space of all polynomials over F in the indeterminate x, and let f(x) = a_0 + a_1 x + ... + a_n x^n ∈ V. Let us define

D f(x) = a_1 + 2a_2 x + ... + n a_n x^(n-1) if n ≥ 1,

and D f(x) = 0 if f(x) is a constant polynomial. Then the corresponding map D from V into V is a linear transformation on V.

Example 4 : Let V be the vector space of all continuous functions from R into R. If f ∈ V and we define T by

(T f)(x) = ∫_0^x f(t) dt,

then T is a linear transformation from V into V.

Some particular transformations :

1) Zero transformation : Let U(F) and V(F) be two vector spaces. The function T from U into V defined by
T(α) = 0 (the zero vector of V) for all α ∈ U is a linear transformation from U into V. It is called the zero transformation and is denoted by Ô.

2) Identity operator : Let V(F) be a vector space. The function I from V into V defined by I(α) = α for all α ∈ V is a linear transformation from V into V; I is known as the identity operator on V.

3) Negative of a linear transformation : Let U(F) and V(F) be two vector spaces and let T be a linear transformation from U into V. The corresponding map -T defined by

(-T)(α) = -[T(α)] for all α ∈ U

is a linear transformation from U into V. -T is called the negative of the linear transformation T.
Some properties of linear transformations :

Let T be a linear transformation from a vector space U(F) into a vector space V(F). Then
i) T(O) = O, where O on the left-hand side is the zero vector of U and O on the right-hand side is the zero vector of V;
ii) T(-α) = -T(α) for all α ∈ U;
iii) T(α - β) = T(α) - T(β) for all α, β ∈ U;
iv) T(a_1 α_1 + a_2 α_2 + ... + a_n α_n) = a_1 T(α_1) + a_2 T(α_2) + ... + a_n T(α_n), where α_1, α_2, ..., α_n ∈ U and a_1, a_2, ..., a_n ∈ F.

3.3 IMAGE AND KERNEL OF A LINEAR TRANSFORMATION

Definition : Let U(F) and V(F) be two vector spaces and let T be a linear transformation from U into V. Then the range of T is the set of all vectors β in V such that β = T(α) for some α in U. This is called the image set of U under T and is denoted by Im T, i.e.

Im T = {T(α) : α ∈ U}.

Definition : Let U(F) and V(F) be two vector spaces and let T be a linear transformation from U into V. Then the kernel of T, written ker T, is the set of all vectors α in U such that T(α) = O (the zero vector of V). Thus

ker T = {α ∈ U : T(α) = O}.

ker T is also called the null space of T.
Theorem 1 : If U(F) and V(F) are two vector spaces and T is a linear transformation from U into V, then (i) ker T is a subspace of U; (ii) Im T is a subspace of V.

Proof :
i) ker T = {α ∈ U : T(α) = O}. Since T(O) = O, at least O ∈ ker T, so ker T is a non-empty subset of U. Let α_1, α_2 ∈ ker T; then T(α_1) = O, T(α_2) = O. Let a, b ∈ F. Then aα_1 + bα_2 ∈ U and

T(aα_1 + bα_2) = a T(α_1) + b T(α_2) = aO + bO = O.

Hence aα_1 + bα_2 ∈ ker T. Thus ker T is a subspace of U.

ii) Since T(O) = O, Im T is a non-empty subset of V. Let β_1, β_2 ∈ Im T. Then there exist α_1, α_2 ∈ U such that T(α_1) = β_1, T(α_2) = β_2. For a, b ∈ F,

a β_1 + b β_2 = a T(α_1) + b T(α_2) = T(aα_1 + bα_2).

Now U is a vector space, so α_1, α_2 ∈ U and a, b ∈ F imply aα_1 + bα_2 ∈ U. Consequently aβ_1 + bβ_2 = T(aα_1 + bα_2) ∈ Im T. Thus Im T is a subspace of V.

Theorem 2 : (Rank-nullity theorem) Let U(F) and V(F) be two vector spaces and T a linear transformation from U into V. Suppose U is finite-dimensional. Then

dim U = dim ker T + dim Im T.

Proof : If Im T = {0}, then ker T = U and the theorem holds in this trivial case.

Otherwise, let {v_1, v_2, ..., v_r} be a basis of Im T, r ≥ 1, and choose u_1, u_2, ..., u_r ∈ U such that T(u_i) = v_i. Let {w_1, w_2, ..., w_q} be a basis of ker T. We have to show that

{u_1, ..., u_r, w_1, ..., w_q}

forms a basis of U.

Let u ∈ U. Then T(u) ∈ Im T, hence there are scalars c_1, ..., c_r such that

T(u) = c_1 v_1 + c_2 v_2 + ... + c_r v_r = c_1 T(u_1) + ... + c_r T(u_r) = T(c_1 u_1 + ... + c_r u_r).

This means T(u - c_1 u_1 - ... - c_r u_r) = 0, i.e. u - c_1 u_1 - ... - c_r u_r ∈ ker T. Hence there are scalars a_1, ..., a_q such that

u - c_1 u_1 - ... - c_r u_r = a_1 w_1 + ... + a_q w_q,

i.e. u = c_1 u_1 + ... + c_r u_r + a_1 w_1 + ... + a_q w_q. So u is generated by u_1, ..., u_r, w_1, ..., w_q.

Next, to show that these vectors are linearly independent, let x_1, ..., x_r, y_1, ..., y_q be scalars such that

x_1 u_1 + ... + x_r u_r + y_1 w_1 + ... + y_q w_q = 0.

Applying T, and using T(w_j) = 0, we get

x_1 v_1 + ... + x_r v_r = 0.

But v_1, ..., v_r, being a basis of Im T, are linearly independent. So x_1 = x_2 = ... = x_r = 0. Then y_1 w_1 + ... + y_q w_q = 0, and by the same argument y_1 = y_2 = ... = y_q = 0.

So u_1, ..., u_r, w_1, ..., w_q are linearly independent. Thus

dim U = r + q = dim Im T + dim ker T.
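For a linear map x ↦ Ax given by a matrix, dim Im T is the rank of A and dim ker T is n minus the rank, so the theorem can be checked numerically. A minimal sketch (the matrix A below is an arbitrary illustration; rank is computed by Gaussian elimination over exact rationals):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0                      # number of pivots found so far
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):   # eliminate below the pivot
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, -1], [0, 1, 1], [1, 1, -2]]   # arbitrary 3x3 example
n = len(A[0])
print(rank(A), n - rank(A))                # 2 1  ->  2 + 1 = 3 = dim U
```

The printed pair (dim Im T, dim ker T) always sums to the number of columns n, which is exactly the rank-nullity statement in matrix form.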
Example 1 : Let T : R^3 → R^3 be defined by

T(x, y, z) = (x + 2y - z, y + z, x + y - 2z).

Let us show that T is a linear transformation. Let us also find ker T and Im T, their bases and dimensions.

To check linearity, let a, b ∈ R and (x_1, y_1, z_1), (x_2, y_2, z_2) ∈ R^3. Then

T(a(x_1, y_1, z_1) + b(x_2, y_2, z_2))
= T(ax_1 + bx_2, ay_1 + by_2, az_1 + bz_2)
= ((ax_1 + bx_2) + 2(ay_1 + by_2) - (az_1 + bz_2), (ay_1 + by_2) + (az_1 + bz_2), (ax_1 + bx_2) + (ay_1 + by_2) - 2(az_1 + bz_2))
= a(x_1 + 2y_1 - z_1, y_1 + z_1, x_1 + y_1 - 2z_1) + b(x_2 + 2y_2 - z_2, y_2 + z_2, x_2 + y_2 - 2z_2)
= a T(x_1, y_1, z_1) + b T(x_2, y_2, z_2).

Hence T is a linear transformation.

Now, (x, y, z) ∈ ker T iff T(x, y, z) = (0, 0, 0), i.e. iff

(x + 2y - z, y + z, x + y - 2z) = (0, 0, 0).

This gives us

x + 2y - z = 0
y + z = 0
x + y - 2z = 0.

By the second equation y = -z; substituting in the third equation, x - 3z = 0, i.e. x = 3z. Thus (x, y, z) = z(3, -1, 1), and

ker T = {z(3, -1, 1) : z ∈ R}.

So ker T is generated by (3, -1, 1). Hence its basis is {(3, -1, 1)} and its dimension is 1.

Now,

T(x, y, z) = (x + 2y - z, y + z, x + y - 2z) = x(1, 0, 1) + y(2, 1, 1) + z(-1, 1, -2).

But (-1, 1, -2) = -3(1, 0, 1) + 1(2, 1, 1). Hence

T(x, y, z) = x(1, 0, 1) + y(2, 1, 1) + z[-3(1, 0, 1) + (2, 1, 1)] = (x - 3z)(1, 0, 1) + (y + z)(2, 1, 1).

So a basis of Im T is {(1, 0, 1), (2, 1, 1)} and its dimension is 2.

Exercise : 3.1

1. Let F be the field of complex numbers and let T be the function from F^3 into F^3 defined by

T(x_1, x_2, x_3) = (x_1 - x_2 + 2x_3, 2x_1 + x_2 - x_3, -x_1 - 2x_2).

Verify that T is a linear transformation.

2. Show that the following maps are not linear :
i) T : R^3 → R^3; T(x, y, z) = (xy, z, 0)
ii) T : R^2 → R^2; T(x, y) = (x^2, y^2)
iii) T : R^3 → R^2; T(x, y, z) = (|x|, 0)
iv) S : R^2 → R; S(x, y) = xy

3. In each of the following find T(1, 0) and T(0, 1), where T : R^2 → R^2 is a linear transformation.
i) T(3, 1) = (1, 2), T(-1, 0) = (1, 1)
ii) T(4, 1) = (1, 1), T(1, 1) = (3, 2)
iii) T(1, 1) = (2, 1), T(-1, 1) = (6, 3)

4. Let T : R^3 → R^3 be the linear transformation defined by T(x, y, z) = (x + 2y - z, y + z, x + y - 2z). Find a basis and the dimension of Im T and of ker T.

3.4 LINEAR ALGEBRA

Definition : Let F be a field. A vector space V over F is called a linear algebra over F if there is defined an additional operation in V, called multiplication of vectors, satisfying the following postulates :

1. αβ ∈ V for all α, β ∈ V
2. α(βγ) = (αβ)γ for all α, β, γ ∈ V
3. α(β + γ) = αβ + αγ and (α + β)γ = αγ + βγ for all α, β, γ ∈ V
4. c(αβ) = (cα)β = α(cβ) for all α, β ∈ V and c ∈ F.

If there is an element 1 in V such that 1α = α1 = α for all α ∈ V, then we call V a linear algebra with identity over F; 1 is then called the identity of V. The algebra V is commutative if αβ = βα for all α, β ∈ V.

Polynomials : Let T be a linear transformation on a vector space V(F). Then TT is also a linear transformation on V; we shall write T^1 = T and T^2 = TT. Since the product of linear transformations is an associative operation, if m is a positive integer we define T^m = TT...T (m times). Obviously T^m is a linear transformation on V. Also we define T^0 = I (the identity transformation). If m and n are non-negative integers, it is easily seen that T^m T^n = T^(m+n) and (T^m)^n = T^(mn). The set L(V, V) of all linear transformations on V is a vector space over the field F. If a_0, a_1, ..., a_n ∈ F, then

P(T) = a_0 I + a_1 T + a_2 T^2 + ... + a_n T^n

is also a linear transformation on V, because it is a linear combination over F of elements of L(V, V). We call P(T) a polynomial in the linear transformation T. Polynomials in a linear transformation behave like ordinary polynomials.

3.5 INVERTIBLE LINEAR TRANSFORMATIONS

Definition : Let U and V be vector spaces over the field F, and let T be a linear transformation from U into V such that T is one-one and onto. Then T is called invertible.

If T is one-one and onto, we define a function from V into U, called the inverse of T and denoted by T^(-1), as follows :

Let β be any vector in V. Since T is onto, there exists α ∈ U such that
T(α) = β. Also, the α determined in this way is a unique element of U, because T is one-one: if α ≠ α′ in U, then T(α) ≠ T(α′). We define T^(-1)(β) to be α. Then T^(-1) : V → U is the function such that

T^(-1)(β) = α  if and only if  T(α) = β.

The function T^(-1) is itself one-one and onto.
Properties :
1. T^(-1) is also a linear transformation from V into U.
2. Let T be an invertible linear transformation on a vector space V(F). Then T^(-1) T = I = T T^(-1).
3. If A, B and C are linear transformations on a vector space V(F) such that AB = CA = I, then A is invertible and A^(-1) = B = C.
4. Let A be an invertible linear transformation on a vector space V(F). Then A possesses a unique inverse.
(The proofs of the above properties are left to the students.)

Example : If A is a linear transformation on a vector space V such that A^2 - A + I = 0, then A is invertible.

Since A^2 - A + I = 0, we have A^2 - A = -I. First we shall prove that A is one-one. Let α_1, α_2 ∈ V with A(α_1) = A(α_2). Then

A^2(α_1) = A^2(α_2)
⟹ (A^2 - A)(α_1) = (A^2 - A)(α_2)
⟹ -I(α_1) = -I(α_2)
⟹ α_1 = α_2.

So A is one-one.

Now to prove that A is onto, let β ∈ V. Then α = β - A(β) ∈ V, and, using A^2 = A - I,

A(α) = A(β - A(β)) = A(β) - A^2(β) = A(β) - [A(β) - β] = β.

Thus for every β ∈ V there exists α ∈ V such that A(α) = β, so A is onto.

Hence A is invertible.
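This can be illustrated concretely. A hedged sketch, taking for A the 2 x 2 matrix [[0, -1], [1, 1]] (a hypothetical example, chosen only because it satisfies A^2 - A + I = 0; the argument above shows that the inverse is then I - A):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[0, -1], [1, 1]]          # satisfies A^2 - A + I = 0

A2 = matmul(A, A)
assert all(A2[i][j] - A[i][j] + I[i][j] == 0 for i in range(2) for j in range(2))

# The onto-proof shows A(beta - A(beta)) = beta, i.e. A(I - A) = I,
# so I - A is the inverse of A.
inv = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
print(matmul(A, inv))          # [[1, 0], [0, 1]]
```

The design point: no matrix inversion routine is needed, since the operator identity itself names the inverse.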
Check your progress :
1. Show that the identity operator on a vector space is always invertible.
2. Describe a T : R^3 → R^4 which has as its range the subspace spanned by the vectors (1, 2, 0, -4) and (2, 0, -1, 3).
3. Let T and U be the linear operators on R^2 defined by T(a, b) = (b, a) and U(a, b) = (a, 0). Give rules like the ones defining T and U for each of the transformations U + T, UT, TU, T^2, U^2.
4. Show that the operator T on R^3 defined by T(x, y, z) = (x + z, x - z, y) is invertible.

3.6 REPRESENTATION OF TRANSFORMATIONS BY MATRICES

Matrix of a linear transformation : Let U be an n-dimensional vector space over the field F and let V be an m-dimensional vector space over F. Let B_1 = {α_1, ..., α_n} and B_2 = {β_1, ..., β_m} be ordered bases for U and V respectively, and suppose T is a linear transformation from U into V.

Now for each α_j ∈ U, T(α_j) ∈ V, and so

T(α_j) = a_1j β_1 + a_2j β_2 + ... + a_mj β_m = Σ_i a_ij β_i.

This gives rise to an m x n matrix [a_ij] whose j-th column consists of the coefficients that appear in the representation of T(α_j) as a combination of elements of B_2. Thus the first column is (a_11, a_21, ..., a_m1)^t, the second column is (a_12, ..., a_m2)^t, and so on. We call [a_ij] the matrix of T with respect to the ordered bases B_1, B_2 of U, V respectively, and we denote the matrix so induced by m(T)_B1^B2.
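In practice each column is found by solving a small linear system that expresses T(α_j) in the codomain basis. A hedged sketch, using an assumed illustrative map T : R^4 → P_1(R), T(x_1, x_2, x_3, x_4) = (x_1 + x_3) + (x_2 + x_4)x, with codomain basis {1 + x, 1 - x}; polynomials are stored as coefficient pairs (constant term, coefficient of x):

```python
from fractions import Fraction

def T(x1, x2, x3, x4):
    # image polynomial as (constant term, coefficient of x)
    return (x1 + x3, x2 + x4)

def coords(p):
    """Coordinates of p = (c0, c1) in the basis {1 + x, 1 - x}:
    solve a(1+x) + b(1-x) = p, i.e. a + b = c0 and a - b = c1."""
    c0, c1 = p
    return (Fraction(c0 + c1, 2), Fraction(c0 - c1, 2))

domain_basis = [(1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 0, 0), (1, 0, 0, 0)]
columns = [coords(T(*v)) for v in domain_basis]
matrix = [[col[i] for col in columns] for i in range(2)]   # the 2 x 4 matrix of T
print(matrix)
```

Each coordinate pair becomes one column, so the matrix has as many columns as the domain basis has vectors.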
Example : Let T : R^4 → P_1(R) be given by

T(x_1, x_2, x_3, x_4) = (x_1 + x_3) + (x_2 + x_4)x.

Let the basis of R^4 be B_1 = {(1,1,1,1), (1,1,1,0), (1,1,0,0), (1,0,0,0)} and that of P_1(R) be B_2 = {1 + x, 1 - x}. Then

T(1, 1, 1, 1) = 2 + 2x = 2(1 + x) + 0(1 - x)
T(1, 1, 1, 0) = 2 + x = (3/2)(1 + x) + (1/2)(1 - x)
T(1, 1, 0, 0) = 1 + x = 1(1 + x) + 0(1 - x)
T(1, 0, 0, 0) = 1 = (1/2)(1 + x) + (1/2)(1 - x)

Then

m(T)_B1^B2 = [ 2  3/2  1  1/2 ]
             [ 0  1/2  0  1/2 ]

Matrix of the sum of linear transformations :

Theorem : Let T_1 : V → W and T_2 : V → W be two linear transformations, and let B_1 = {v_1, v_2, ..., v_m} and B_2 = {w_1, w_2, ..., w_n} be bases of V and W respectively. Then
m(T_1 + T_2)_B1^B2 = m(T_1)_B1^B2 + m(T_2)_B1^B2.

Proof : For v_i ∈ V, T_1(v_i) ∈ W and T_2(v_i) ∈ W. Since B_2 = {w_1, w_2, ..., w_n} is a basis of W,

T_1(v_i) = Σ_j a_ij w_j and T_2(v_i) = Σ_j b_ij w_j,  i = 1, ..., m.

Now

(T_1 + T_2)(v_i) = T_1(v_i) + T_2(v_i) = Σ_j a_ij w_j + Σ_j b_ij w_j = Σ_j (a_ij + b_ij) w_j = Σ_j c_ij w_j,

where c_ij = a_ij + b_ij. Hence

[c_ij]^t = [a_ij]^t + [b_ij]^t, i.e. m(T_1 + T_2)_B1^B2 = m(T_1)_B1^B2 + m(T_2)_B1^B2.

Matrix of a scalar multiple of a linear transformation :

Theorem : Let T : V → W be a linear transformation, and let B_1 = {v_1, ..., v_m} and B_2 = {w_1, w_2, ..., w_n} be bases of V and W respectively. Then
m(kT)_B1^B2 = k · m(T)_B1^B2.

Proof : For v_i ∈ V, T(v_i) ∈ W, and B_2 is a basis of W. So

T(v_i) = Σ_j a_ij w_j,  1 ≤ i ≤ m.

Now

(kT)(v_i) = k T(v_i) = k Σ_j a_ij w_j = Σ_j (k a_ij) w_j = Σ_j b_ij w_j,

where b_ij = k a_ij. Hence

[b_ij]^t = k [a_ij]^t, i.e. m(kT)_B1^B2 = k · m(T)_B1^B2.

Matrix of a composite linear transformation :

Theorem : Let T : V → W and S : W → U be two linear transformations, and let B_1 = {v_1, ..., v_m}, B_2 = {w_1, ..., w_n} and B_3 = {u_1, ..., u_k} be bases of V, W and U respectively. Then
m(S∘T)_B1^B3 = m(S)_B2^B3 · m(T)_B1^B2.

Proof : For v_i ∈ V and w_j ∈ W, T(v_i) ∈ W and S(w_j) ∈ U, with

T(v_i) = Σ_j a_ij w_j and S(w_j) = Σ_r b_jr u_r.

Then

(S∘T)(v_i) = S(T(v_i)) = S(Σ_j a_ij w_j) = Σ_j a_ij S(w_j) = Σ_j a_ij Σ_r b_jr u_r = Σ_r (Σ_j a_ij b_jr) u_r = Σ_r c_ir u_r,

where c_ir = Σ_j a_ij b_jr, which is the (i, r)-th element of a matrix product. In matrix terms,

[c_ir]^t = [b_jr]^t [a_ij]^t, i.e. m(S∘T)_B1^B3 = m(S)_B2^B3 · m(T)_B1^B2.

Example : Let T : R^2 → R^2 and S : R^2 → R^2 be two linear transformations defined by

T(x, y) = (x + y, x - y) and S(x, y) = (2x + y, x + 2y).

Let the basis be B = {(1, 2), (0, 1)}.
Then T(1, 2) = (3, -1) = a(1, 2) + b(0, 1), which implies a = 3, b = -7, and
T(0, 1) = (1, -1) = p(1, 2) + q(0, 1), which implies p = 1, q = -3.

So

m(T)_B^B = [  3   1 ]
           [ -7  -3 ]

Similarly,

m(S)_B^B = [  4   1 ]
           [ -3   0 ]

and

m(S∘T)_B^B = m(S)_B^B · m(T)_B^B = [  4  1 ] [  3   1 ] = [  5   1 ]
                                   [ -3  0 ] [ -7  -3 ]   [ -9  -3 ]

As a check, (S∘T)(x, y) = S(T(x, y)) = S(x + y, x - y) = (3x + y, 3x - y). Computing m(S∘T)_B^B directly from this rule in the same way verifies the result.

Exercise 3.2

1. Let T : R^3 → R^3 and S : R^3 → R^3 be two linear transformations defined by T(x, y, z) = (x, 2y, 3z) and S(x, y, z) = (x + y, y + z, z + x). Let B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Verify that

m(S∘T)_B^B = m(S)_B^B · m(T)_B^B.

2. Let T_1 : R^2 → R^2 and T_2 : R^2 → R^2 be two linear transformations defined by T_1(x, y) = (2x + 3y, x + 2y) and T_2(x, y) = (x + y, x - y), and let B = {(1, 0), (0, 1)} be the basis of R^2. Show that

m(T_1 + T_2)_B^B = m(T_1)_B^B + m(T_2)_B^B.
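Exercise-style checks like these are easy to automate. A sketch for the worked example above, in the basis {(1, 2), (0, 1)}: coordinates are found by solving a(1,2) + b(0,1) = v, which gives a = v_1 and b = v_2 - 2v_1.

```python
def T(v):  # T(x, y) = (x + y, x - y)
    x, y = v
    return (x + y, x - y)

def S(v):  # S(x, y) = (2x + y, x + 2y)
    x, y = v
    return (2 * x + y, x + 2 * y)

def coords(v):
    """Coordinates of v in the basis {(1, 2), (0, 1)}."""
    return (v[0], v[1] - 2 * v[0])

def matrix(f):
    """Matrix of f with respect to the basis {(1, 2), (0, 1)}."""
    cols = [coords(f(b)) for b in [(1, 2), (0, 1)]]
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matrix(T))                       # [[3, 1], [-7, -3]]
print(matrix(S))                       # [[4, 1], [-3, 0]]
assert matrix(lambda v: S(T(v))) == matmul(matrix(S), matrix(T))
```

The final assertion is the composite-matrix theorem for this particular pair of maps and this basis.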
RANK OF A MATRIX AND LINEAR TRANSFORMATIONS :

Row space and column space : Let us consider a matrix A as follows :

A = [ 4  6  9  2 ]
    [ 3  0  4  5 ]

We can consider A as a matrix of two vectors in R^4, or as a matrix of four vectors in R^2. The linear span of the two row vectors,

W_r = L{(4, 6, 9, 2), (3, 0, 4, 5)},

is called the row space of the matrix A. Similarly the column space of A is

W_c = L{(4, 3), (6, 0), (9, 4), (2, 5)}.

Definition : Row space and column space : Let A be a matrix of order m x n. Then the subspace of R^n generated by the row vectors of A is called the row space, and the subspace of R^m generated by the column vectors of A is called the column space of A.

Example :

A = [ 1  0  1  0 ]
    [ 0  1  0  1 ]
    [ 1  1  1  0 ]

Here the row space is L{R_1, R_2, R_3}, where R_1 = (1, 0, 1, 0), R_2 = (0, 1, 0, 1), R_3 = (1, 1, 1, 0). The set {R_1, R_2, R_3} is linearly independent, so the row space is L{R_1, R_2, R_3} and dim (row space) = 3.

For the columns we have the four vectors C_1 = (1, 0, 1), C_2 = (0, 1, 1), C_3 = (1, 0, 1), C_4 = (0, 1, 0). Here C_3 = C_1, and {C_1, C_2, C_4} is linearly independent, so the column space is L{C_1, C_2, C_4} and dim (column space) = 3.

The dimension of the row space of a matrix A is called the row rank of A, and the dimension of the column space of A is called the column rank. In every matrix dim (row space) = dim (column space), i.e. row rank = column rank; this common value is called the rank of A.

The rank of a zero matrix is zero.
The rank of the identity matrix of order n is n.
Rank of A^t = rank of A, where A^t is the transpose of A.
For an m x n matrix A, the row space is a subspace of R^n, so row rank ≤ n. Similarly, column rank ≤ m. Hence rank of A ≤ min{m, n}.

Example :

A = [ 1  2  3 ]
    [ 4  5  6 ]
    [ 7  8  9 ]

Here R_1 = (1, 2, 3), R_2 = (4, 5, 6), R_3 = (7, 8, 9), and R_3 = 2R_2 - R_1, while {R_1, R_2} is linearly independent. So row rank = 2, and rank of A = 2.

Change of Basis :

Sometimes it is imperative to change the basis used in representing a linear transformation T, because relative to a new basis the representation of T may become very much simplified. We shall therefore turn our attention to establishing an important result concerning the matrix representations of a linear transformation when the basis is changed.

Theorem : If the two sets of vectors X = {x_1, x_2, ..., x_n} and X̄ = {x̄_1, x̄_2, ..., x̄_n} are bases of a vector space V_n, then there exists a nonsingular matrix B = [b_ij] such that

x̄_i = b_1i x_1 + b_2i x_2 + ... + b_ni x_n,  i = 1, 2, ..., n.

The nonsingular matrix B is called the transformation (change-of-basis) matrix in V_n.
Proof : Suppose that X and X̄ are two bases in V_n. Then each x̄_i (i = 1, 2, ..., n) can be expressed in terms of x_1, x_2, ..., x_n, i.e.

x̄_i = b_1i x_1 + b_2i x_2 + ... + b_ni x_n,  i = 1, 2, ..., n,

where the b_ji are scalars. Let us define the matrix

B = [b_1 b_2 ... b_n], where b_i = (b_1i, b_2i, ..., b_ni)^t

is the coordinate (column) vector of x̄_i. We have to show that B is nonsingular, i.e. that its columns b_1, ..., b_n are linearly independent. Suppose that for scalars c_1, ..., c_n we have

c_1 b_1 + ... + c_n b_n = 0.

Comparing the j-th entries, Σ_i b_ji c_i = 0 for each j, and hence

c_1 x̄_1 + ... + c_n x̄_n = Σ_i c_i Σ_j b_ji x_j = Σ_j (Σ_i b_ji c_i) x_j = 0.

Since x̄_1, ..., x̄_n are linearly independent, it follows that c_1 = ... = c_n = 0, and hence b_1, ..., b_n are linearly independent. Therefore B is nonsingular.

Theorem : Suppose that A is the m x n matrix of the linear transformation T : V_n → W_m with respect to the bases X = {x_1, ..., x_n} and Y = {y_1, ..., y_m}. If Ā is the m x n matrix of T with respect to different bases X̄ = {x̄_1, ..., x̄_n} and Ȳ = {ȳ_1, ..., ȳ_m}, then there exist nonsingular matrices B and C, of orders n and m respectively, such that

Ā = C A B.

Proof : If A = [a_ij] is the matrix of T with respect to the bases X and Y, we have the relation

T(x_i) = a_1i y_1 + a_2i y_2 + ... + a_mi y_m,

and similarly, for Ā = [ā_ij],

T(x̄_i) = ā_1i ȳ_1 + ā_2i ȳ_2 + ... + ā_mi ȳ_m.

By the previous theorem there exist nonsingular change-of-basis matrices B = [b_ij], expressing the x̄'s in terms of the x's, and C = [c_ij], expressing the y's in terms of the ȳ's :

x̄_i = Σ_j b_ji x_j,  y_j = Σ_k c_kj ȳ_k.

Since T is linear,

T(x̄_i) = Σ_j b_ji T(x_j) = Σ_j b_ji Σ_k a_kj y_k = Σ_k (Σ_j a_kj b_ji) y_k = Σ_r (Σ_k c_rk Σ_j a_kj b_ji) ȳ_r.

Comparing this with T(x̄_i) = Σ_r ā_ri ȳ_r, we get Ā = C A B.

Example : A linear transformation T : R^3 → R^2 is defined by

T(x_1, x_2, x_3) = (x_1 + x_2 + x_3, 2x_2 + x_3).

The bases in R^3 are

V : (x_1, x_2, x_3) = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
V̄ : (x̄_1, x̄_2, x̄_3) = {(2, 2, 1), (0, 1, 0), (1, 0, 1)}

and the bases in R^2 are

W : (y_1, y_2) = {(2, 1), (1, 1)}
W̄ : (ȳ_1, ȳ_2) = {(1, 0), (1, 1)}.

Here we will find the matrix A of T with respect to the bases V and W, and Ā with respect to V̄ and W̄. We will also determine the nonsingular matrices B and C such that Ā = C A B.

Here

T(1, 0, 0) = (1, 0) = 1(2, 1) - 1(1, 1)
T(0, 1, 0) = (1, 2) = -1(2, 1) + 3(1, 1)
T(0, 0, 1) = (1, 1) = 0(2, 1) + 1(1, 1)

∴ m(T)_V^W = A = [  1  -1  0 ]
                 [ -1   3  1 ]

Similarly, considering the bases V̄ of R^3 and W̄ of R^2,

T(2, 2, 1) = (5, 5) = 0(1, 0) + 5(1, 1)
T(0, 1, 0) = (1, 2) = -1(1, 0) + 2(1, 1)
T(1, 0, 1) = (2, 1) = 1(1, 0) + 1(1, 1)

∴ m(T)_V̄^W̄ = Ā = [ 0  -1  1 ]
                  [ 5   2  1 ]

The matrices B and C are determined by the change-of-basis relationships. B expresses the new basis V̄ in terms of V; since V is the standard basis, the columns of B are just the vectors of V̄ :

B = [ 2  0  1 ]
    [ 2  1  0 ]
    [ 1  0  1 ]

C expresses the old basis W in terms of W̄ :

(2, 1) = 1(1, 0) + 1(1, 1)
(1, 1) = 0(1, 0) + 1(1, 1)

∴ C = [ 1  0 ]
      [ 1  1 ]

One can now check the result Ā = C A B directly.

LINEAR FUNCTIONALS : DUAL SPACE

Definition : A linear transformation f from a vector space V over a field into that field of scalars is called a linear functional on the space V. The set of all linear functionals on V is a vector space, called the dual space of V, and is denoted by V*.

Example : Let V be the vector space of all real-valued continuous functions on the interval a ≤ t ≤ b. Then the transformation f : V → R defined by

f(x) = ∫_a^b x(t) dt

is a linear functional on V. This mapping f assigns to each continuous function x(t) a real number.

Example : Let {x_1, x_2, ..., x_n} be a basis of the n-dimensional vector space V over R. Any vector x in V can be represented by

x = x̃_1 x_1 + x̃_2 x_2 + ... + x̃_n x_n,

where the x̃_i are scalars in R. We now consider a fixed vector z in V and represent it by z = c_1 x_1 + c_2 x_2 + ... + c_n x_n, where the c_i are scalars in R. Denote by w = (c_1, ..., c_n)^t and u = (x̃_1, ..., x̃_n)^t the coordinate vectors of z and x respectively. Then the transformation f_z : V → R defined by

f_z(x) = c_1 x̃_1 + c_2 x̃_2 + ... + c_n x̃_n = w^t u

is a linear functional on V; in fact, f_z(x) is obtained as an inner product of the coordinate vectors of z and x. Restricting z to be one of the basis vectors x_i, we get n linear functionals f_i (i = 1, 2, ..., n) on V given by

f_i(x) = x̃_i,

because the coordinate vector of x_i with respect to the basis {x_1, x_2, ..., x_n} is e_i = (0, ..., 1, ..., 0)^t, with 1 in the i-th place. Each such functional may thus be regarded as the linear transformation on V that maps each vector of V onto its i-th coordinate relative to the chosen basis. A noteworthy property of these functionals is

f_i(x_j) = δ_ij,  i, j = 1, ..., n,

where δ_ij is the Kronecker delta, defined by δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j. Moreover, if x is the zero vector of V, then f_i(x) = 0, the scalar zero in R.

Now we shall state and prove a very important theorem on the dual basis.

Theorem : Let {x_1, x_2, ..., x_n} be a basis of the n-dimensional vector space V over a field. Then the unique linear functionals f_1, f_2, ..., f_n defined by

f_i(x_j) = δ_ij,  i, j = 1, 2, ..., n,

form a basis of the dual space V*, called the dual basis of {x_1, x_2, ..., x_n}; any element f in V* can be expressed as

f = f(x_1) f_1 + ... + f(x_n) f_n,

and for each vector x in V we have x = f_1(x) x_1 + f_2(x) x_2 + ... + f_n(x) x_n.

Proof : We have to prove that (a) the f_i are linearly independent in V*, and (b) any f in V* can be expressed as a linear combination of the f_i.

Suppose that for scalars α_1, α_2, ..., α_n we have

α_1 f_1 + α_2 f_2 + ... + α_n f_n = 0.

Then for each i,

0 = (α_1 f_1 + ... + α_n f_n)(x_i),

which, taking into account that f_j(x_i) = δ_ji, gives α_i = 0, i = 1, 2, ..., n. Hence the f_i are linearly independent in V*.

Suppose now that f is any linear functional in V*. Then we can find n scalars a_1, a_2, ..., a_n satisfying f(x_i) = a_i, i = 1, 2, ..., n, because f is a known linear functional. For any vector x = x̃_1 x_1 + x̃_2 x_2 + ... + x̃_n x_n in V, by the linearity of f we get

f(x) = x̃_1 f(x_1) + x̃_2 f(x_2) + ... + x̃_n f(x_n)
     = x̃_1 a_1 + x̃_2 a_2 + ... + x̃_n a_n
     = a_1 f_1(x) + a_2 f_2(x) + ... + a_n f_n(x)
     = (a_1 f_1 + a_2 f_2 + ... + a_n f_n)(x).

Since x is arbitrary and a linear combination of linear functionals is again a linear functional, we have the relation

f = a_1 f_1 + a_2 f_2 + ... + a_n f_n.

Definition : Dual basis : For any basis {x_1, x_2, ..., x_n} of a vector space V over a field, there exists a unique basis {f_1, f_2, ..., f_n} of V* such that f_i(x_j) = δ_ij, i, j = 1, 2, ..., n. The basis {f_1, f_2, ..., f_n} of V* is said to be dual to the given basis {x_1, x_2, ..., x_n} of V.

Example : Let B = {x_1, x_2} = {(3, 2), (1, 1)} be a basis in V = R^2. We will find the dual basis {f_1, f_2} in V*. Let x = x̃_1 x_1 + x̃_2 x_2, i.e., writing x = (a, b),

[ a ]   [ 3  1 ] [ x̃_1 ]
[ b ] = [ 2  1 ] [ x̃_2 ]

Solving,

x̃_1 = a - b,  x̃_2 = -2a + 3b,

i.e. f_1(a, b) = a - b and f_2(a, b) = -2a + 3b.

Example : Let B = {1, 1 + t, t + t^2} be a basis of the space P_2(t) of polynomials of degree ≤ 2 over R. Let

x(t) = x̃_1(1) + x̃_2(1 + t) + x̃_3(t + t^2) = (x̃_1 + x̃_2) + (x̃_2 + x̃_3)t + x̃_3 t^2.

If x(t) = a_0 + a_1 t + a_2 t^2, then

a_0 = x̃_1 + x̃_2,  a_1 = x̃_2 + x̃_3,  a_2 = x̃_3,

so that x̃_1 = a_0 - a_1 + a_2, x̃_2 = a_1 - a_2, x̃_3 = a_2. Hence we obtain the dual basis as

f_1(x) = a_0 - a_1 + a_2,  f_2(x) = a_1 - a_2,  f_3(x) = a_2.
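In coordinates, finding a dual basis amounts to inverting the matrix whose columns are the basis vectors: the rows of the inverse give the coefficient vectors of f_1, ..., f_n. A sketch for the basis {(3, 2), (1, 1)} above:

```python
def inverse_2x2(M):
    """Inverse of a 2x2 integer matrix whose determinant is +1 or -1."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d // det, -b // det], [-c // det, a // det]]

basis = [(3, 2), (1, 1)]
M = [[3, 1], [2, 1]]           # columns are the basis vectors
rows = inverse_2x2(M)          # row i is the coefficient vector of f_{i+1}

def f(i, v):
    """Value of the i-th dual functional on v = (a, b)."""
    return rows[i][0] * v[0] + rows[i][1] * v[1]

# f_i(x_j) should be the Kronecker delta
print([[f(i, basis[j]) for j in range(2)] for i in range(2)])  # [[1, 0], [0, 1]]
```

The computed rows are (1, -1) and (-2, 3), i.e. exactly f_1(a, b) = a - b and f_2(a, b) = -2a + 3b as found above.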
Answers

Exercise 3.1

3. (i) T(1, 0) = (-1, -1), T(0, 1) = (4, 5)
   (ii) T(1, 0) = (-2/3, -1/3), T(0, 1) = (11/3, 7/3)
   (iii) T(1, 0) = (-2, -1), T(0, 1) = (4, 2)

4. {(1, 0, 1), (2, 1, 1)} is a basis of Im T and dim (Im T) = 2.
   {(3, -1, 1)} is a basis of ker T and dim (ker T) = 1.
4

DETERMINANT

Unit Structure :
4.0 Introduction
4.1 Objectives
4.2 Determinant as an n-form
4.3 Expansion of determinants
4.4 Some properties
4.5 Some basic results
4.6 Laplace expansion
4.7 The rank of a matrix
4.8 Cramer's rule

4.0 INTRODUCTION

In the previous three chapters we have discussed vectors, linear equations and linear transformations. Time and again we have seen that we need to check whether a given set of vectors is linearly independent or not. In this chapter we will develop a computational technique for doing that, using determinants.

4.1 OBJECTIVES

This chapter will help you to know about :
Determinants and their properties.
Expansion of determinants by various methods.
Calculation of the rank of a matrix using determinants.
Existence and uniqueness of solutions of a system of equations.

4.2 DETERMINANT AS AN N-FORM

To discuss determinants, we always consider a square matrix. The determinant of a square matrix is a value associated with the matrix.
Definition : Define a determinant function det : M(n, R) → R, where M(n, R) is the collection of square matrices of order n, such that :

(i) the value of the determinant remains the same on adding any multiple of the j-th row to the i-th row, i.e.

det (R_1, R_2, ..., R_i + kR_j, ..., R_n) = det (R_1, R_2, ..., R_i, ..., R_n) for i ≠ j;

(ii) the value of the determinant changes sign on swapping any two rows, i.e.

det (R_1, ..., R_i, ..., R_j, ..., R_n) = - det (R_1, ..., R_j, ..., R_i, ..., R_n);

(iii) if the elements of any one row are multiplied by k, then the value of the determinant is k times its original value, i.e.

det (R_1, ..., kR_i, ..., R_n) = k det (R_1, ..., R_i, ..., R_n) for k ≠ 0;

(iv) det (I) = 1, where I is the identity matrix.

The determinant is an n-linear, skew-symmetric (alternating) function on R^n x R^n x ... x R^n. For A ∈ M(n, R),

det A = det (R_1, R_2, ..., R_n) or det A = det (C_1, C_2, ..., C_n),

where R_i denotes the i-th row and C_i the i-th column of the matrix A; each R_i, C_i ∈ R^n.

For example, in the 2 x 2 case the function

[ a  b ]
[ c  d ]  ↦  ad - bc,

where (a, b) and (c, d) are ordered pairs from R^2, is a bilinear, skew-symmetric, alternating function.
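The 2 x 2 formula ad - bc already exhibits all the defining properties, and a quick numeric spot-check makes them concrete (an illustrative sketch with arbitrary sample rows):

```python
def det2(r1, r2):
    """Determinant of the 2x2 matrix with rows r1, r2: ad - bc."""
    (a, b), (c, d) = r1, r2
    return a * d - b * c

r1, r2 = (3, 5), (2, 7)        # arbitrary sample rows
k = 4

assert det2(r2, r1) == -det2(r1, r2)                 # swapping rows flips the sign
assert det2((k * 3, k * 5), r2) == k * det2(r1, r2)  # scaling a row scales the value
assert det2((3 + 2, 5 + 7), r2) == det2(r1, r2)      # adding the other row: unchanged
assert det2((1, 0), (0, 1)) == 1                     # det(I) = 1
print(det2(r1, r2))                                  # 11
```

A finite check of course proves nothing in general, but it is a useful sanity test for any determinant routine one writes.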
4.3 EXPANSION OF DETERMINANTS

Let A = (a_ij) be an arbitrary n x n matrix,

A = [ a_11 ... a_1j ... a_1n ]
    [  .         .        .  ]
    [ a_i1 ... a_ij ... a_in ]
    [  .         .        .  ]
    [ a_n1 ... a_nj ... a_nn ]

Let A_ij be the (n - 1) x (n - 1) matrix obtained by deleting the i-th row and the j-th column from A.

We will give an expression for the determinant of an n x n matrix in terms of determinants of (n - 1) x (n - 1) matrices. We define

det A = Σ_j (-1)^(i+j) a_ij det (A_ij).

This sum is called the expansion of the determinant according to the i-th row. To expand det A according to the first row of a 4 x 4 matrix A,

det A = a_11 det (A_11) - a_12 det (A_12) + a_13 det (A_13) - a_14 det (A_14).

For example, let

A = [ 3  2  -1 ]
    [ 5  0   2 ]
    [ 1  3  -4 ]

Then

det A = 3 | 0  2 | - 2 | 5  2 | + (-1) | 5  0 |
          | 3 -4 |     | 1 -4 |        | 1  3 |

      = 3(-6) - 2(-20 - 2) - 1(15) = -18 + 44 - 15 = 11.

4.4 SOME PROPERTIES

The determinant satisfies the following properties :

1. As a function of each column vector the determinant is linear, i.e. if the j-th column c_j is equal to a sum of two column vectors, say c_j = c' + c'', then
D(c_1, ..., c' + c'', ..., c_n) = D(c_1, ..., c', ..., c_n) + D(c_1, ..., c'', ..., c_n).

2. If two columns are equal, i.e. if c_j = c_k where j ≠ k, then det (A) = 0.

3. If one adds a scalar multiple of one column to another, then the value of the determinant does not change, i.e.

D(c_1, ..., c_j + x c_k, ..., c_n) = D(c_1, ..., c_j, ..., c_n).

We can prove this property as follows :

D(c_1, ..., c_j + x c_k, ..., c_n) = D(c_1, ..., c_j, ..., c_n) + x D(c_1, ..., c_k, ..., c_n),

where in the second determinant on the right the vector c_k occurs in the j-th place. As c_k also occurs in the k-th place, that determinant has two equal columns and hence its value is zero.

All the properties stated above are valid for both row and column operations.

Using the above properties we can compute determinants very efficiently. Since adding a scalar multiple of one row (or column) to another does not change the value of the determinant, we try to make as many entries of the matrix equal to 0 as possible and then expand.

Example :

D = | 2  1  2 |
    | 0  3  1 |
    | 4  1  1 |

First we interchange the first two columns (which changes the sign) so as to have 1 as the leading entry :

D = - | 1  2  2 |
      | 3  0  1 |
      | 1  4  1 |
Next we make the remaining entries of the first row zero by subtracting twice the first column from the second and from the third column :

D = - | 1   0   0 |
      | 3  -6  -5 |
      | 1   2  -1 |

So if we expand along the first row, we are left with only a 2 x 2 determinant :

D = - | -6  -5 | = -(6 + 10) = -16.
      |  2  -1 |

Exercises 4.1

1. Compute the following determinants.

(i) | 3  0  1 |  (ii) | 2  0   4 |  (iii) | 3  1  2 |  (iv) | 2  4  3 |
    | 1  2  5 |       | 1  3   5 |        | 4  5  1 |       | 1  3  0 |
    | 1  4  2 |       | 1  0  10 |        | 1  2  3 |       | 0  2  1 |

2. Compute the following determinants.

(i) | 1  1  2  4 |  (ii) | 1  1  2  0 |  (iii) | 1  1  1  1 |  (iv) | 1  1  1   1 |
    | 0  1  1  3 |       | 0  3  2  1 |        | 1  1 -1 -1 |       | 2  2  1   3 |
    | 2  1  1  0 |       | 0  4  1  2 |        | 1 -1  1 -1 |       | 4  4  1   9 |
    | 3  1  2  5 |       | 3  1  5  7 |        | 1 -1 -1  1 |       | 8  8  1  27 |

4.5 SOME BASIC RESULTS

(1) If k is a constant and A is an n x n matrix, then |kA| = k^n |A|, where kA is obtained by multiplying each element of A by k.
Proof : Each row of kA contributes a factor k to the determinant; this k can be taken out of each of the n rows, i.e.

|kA| = k x k x ... x k (n times) x |A| = k^n |A|.

(2) If A and B are two n x n matrices, then

|AB| = |A||B|, but |A + B| ≠ |A| + |B| in general.

(This can be seen from simple examples.)

(3) |A^(-1)| = 1/|A|.

Proof : Since |AB| = |A||B|, considering B = A^(-1) we get

|A A^(-1)| = |I| = 1, so |A||A^(-1)| = 1, i.e. |A^(-1)| = 1/|A|.

(4) |A^t| = |A|, where A^t is the transpose of the matrix A.

Proof : The determinant can be expanded by any row or column. Let |A| be expanded using the i-th row. This i-th row is the i-th column of A^t, so the expansion, and hence the value of the determinant, remains the same. Therefore |A^t| = |A|.

(5) If A has a row (or column) of zeros, then |A| = 0.
Proof : By expanding along the zero row, every term of the expansion contains a factor 0, so the value of the determinant is zero.

(6) If A and B are square matrices with B invertible, then |B^(-1) A B| = |A|.

Proof : |B^(-1) A B| = |B^(-1)||A||B| = (1/|B|)|A||B| = |A|.

(7) The determinant is linear in each row (column) if the other rows (columns) are fixed :

| a_1   a_2   a_3 |       | a_1  a_2  a_3 |
| k b_1 k b_2 k b_3 | = k | b_1  b_2  b_3 |
| c_1   c_2   c_3 |       | c_1  c_2  c_3 |

and

| a_1 + a_1'  a_2 + a_2'  a_3 + a_3' |   | a_1  a_2  a_3 |   | a_1'  a_2'  a_3' |
| b_1         b_2         b_3        | = | b_1  b_2  b_3 | + | b_1   b_2   b_3  |
| c_1         c_2         c_3        |   | c_1  c_2  c_3 |   | c_1   c_2   c_3  |

4.6 LAPLACE EXPANSION

Definition : Minor : The minor of an element of a square matrix is the determinant obtained by deleting the row and the column which intersect in that element.

So the minor of the element a_ij is obtained by deleting the i-th row and the j-th column; it is denoted by M_ij.
Page 61
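Results (1)–(4) and (6) can be verified on concrete matrices. A sketch in pure Python using exact rational arithmetic, so the identities hold without rounding error (the matrices and all helper names here are our own choices):

```python
# Numerical check of basic results (1), (2), (3), (4), (6) on 3x3 examples.
from fractions import Fraction

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(m):
    return [list(r) for r in zip(*m)]

def minor3(m, i, j):
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def inv3(m):
    # inverse = adjugate / determinant; entry (i, j) is cofactor C_ji / det
    d = Fraction(det3(m))
    return [[Fraction((-1) ** (i + j)) * det2(minor3(m, j, i)) / d
             for j in range(3)] for i in range(3)]

A = [[2, 1, 2], [0, 3, 1], [4, 1, 1]]
B = [[1, 0, 2], [2, 1, 0], [1, 1, 1]]   # det(B) = 3, so B is invertible

k = 5
kA = [[k * x for x in row] for row in A]
assert det3(kA) == k ** 3 * det3(A)                 # (1) |kA| = k^n |A|, n = 3
assert det3(matmul(A, B)) == det3(A) * det3(B)      # (2) |AB| = |A||B|
Binv = inv3(B)
assert det3(Binv) == Fraction(1, det3(B))           # (3) |B^-1| = 1/|B|
assert det3(transpose(A)) == det3(A)                # (4) |A^t| = |A|
assert det3(matmul(matmul(Binv, A), B)) == det3(A)  # (6) |B^-1 A B| = |A|
```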
4.6 LAPLACE EXPANSION

Definition :

Minor – The minor of an element of a square matrix is the determinant obtained by deleting the row and the column which intersect in that element.

So the minor of the element a_ij is obtained by deleting the i-th row and the j-th column; it is denoted by M_ij.

For example, let

A = | 1 2 3 |
    | 4 5 6 |
    | 7 8 9 |

Then the minor of 1 is M₁₁ = |5 6; 8 9|, the minor of 8 is M₃₂ = |1 3; 4 6|, etc.

Laplace expansion – Expanding along the i-th row,

|A| = Σ_{j=1}^{n} (−1)^{i+j} a_ij M_ij.

If we consider the previous example, the minors are

M₁₁ = |5 6; 8 9| = −3,  M₁₂ = |4 6; 7 9| = −6,  M₁₃ = |4 5; 7 8| = −3,
M₂₁ = |2 3; 8 9| = −6,  M₂₂ = |1 3; 7 9| = −12, M₂₃ = |1 2; 7 8| = −6,
M₃₁ = |2 3; 5 6| = −3,  M₃₂ = |1 3; 4 6| = −6,  M₃₃ = |1 2; 4 5| = −3.

By Laplace expansion along the first row,

|A| = a₁₁M₁₁ − a₁₂M₁₂ + a₁₃M₁₃ = 1(−3) − 2(−6) + 3(−3) = −3 + 12 − 9 = 0.
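The Laplace expansion along the first row lends itself to a direct recursive implementation. A short sketch (the function names are ours) that reproduces the minors and the value 0 computed above:

```python
# Determinant by Laplace expansion along the first row, applied recursively.

def minor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    # |A| = sum over j of (-1)^j * a_0j * M_0j   (0-based indices)
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

assert det(minor(A, 0, 0)) == -3   # M11 = |5 6; 8 9|
assert det(minor(A, 2, 1)) == -6   # M32 = |1 3; 4 6|
assert det(A) == 0                 # the expansion computed in the text
assert det([[2, 1, 2], [0, 3, 1], [4, 1, 1]]) == -16   # example of Section 4.4
```

Recursive expansion costs O(n!) operations, which is why the row/column-reduction technique of the previous section is preferred for anything beyond small matrices.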
4.7 THE RANK OF A MATRIX

Theorem :

Let c₁, c₂, …, cₙ be column vectors of dimension n. They are linearly dependent if and only if det(c₁, c₂, …, cₙ) = 0.

Proof :

Suppose c₁, c₂, …, cₙ are linearly dependent. Then there exists a solution of

x₁c₁ + x₂c₂ + … + xₙcₙ = 0

with numbers x₁, …, xₙ not all 0. Let xⱼ ≠ 0. Then

cⱼ = −(1/xⱼ) Σ_{k≠j} xₖcₖ = Σ_{k≠j} aₖcₖ,  where aₖ = −xₖ/xⱼ.

Thus, by linearity in the j-th column,

det(A) = D(c₁, …, cⱼ, …, cₙ) = Σ_{k≠j} aₖ D(c₁, …, cₖ, …, cₙ),

where in each term cₖ occurs in the j-th place. But cₖ also occurs in the k-th place and k ≠ j. Hence two columns of each determinant are equal, so the value of each term is 0, and det(A) = 0.

Conversely :

If all the columns of the matrix A are linearly independent, then A is row equivalent to a triangular matrix B,

B = | b₁₁ b₁₂ … b₁ₙ |
    | 0   b₂₂ … b₂ₙ |
    | …            |
    | 0   0   … bₙₙ |

in which all the diagonal elements b₁₁, b₂₂, …, bₙₙ ≠ 0. By the rule of expansion, det(B) = b₁₁b₂₂…bₙₙ ≠ 0. Now, B is obtained from A by operations such as multiplying a row by a nonzero scalar (which multiplies the determinant by this scalar), interchanging rows (which multiplies the determinant by −1), or adding a multiple of one row to another (which does not change the value of the determinant). Since det(B) ≠ 0, it follows that det(A) ≠ 0. Hence the proof.

Corollary :

If c₁, c₂, …, cₙ are column vectors of ℝⁿ such that D(c₁, c₂, …, cₙ) ≠ 0, and if B is a column vector, then there exist numbers x₁, …, xₙ such that x₁c₁ + … + xₙcₙ = B. These numbers are uniquely determined by B.

Proof :

D(c₁, c₂, …, cₙ) ≠ 0 ⟹ c₁, c₂, …, cₙ are linearly independent and hence form a basis of ℝⁿ. So any B ∈ ℝⁿ can be written as a linear combination x₁c₁ + x₂c₂ + … + xₙcₙ = B for some unique numbers x₁, x₂, …, xₙ.

The above corollary expresses an important feature of systems of linear equations:

If a system of n linear equations in n unknowns has a matrix of coefficients whose determinant is not zero, then the system has a unique solution.

Now recall that the rank of a matrix is the dimension of its row space or column space, i.e. the number of independent row or column vectors of the matrix.

Suppose a 3 × 4 matrix is given and we have to find its rank. Its rank is at most 3. If we can show that at least one 3 × 3 determinant formed from the matrix is non-zero, we conclude that the rank of the matrix is 3. If all 3 × 3 determinants are zero, we check 2 × 2 determinants. If at least one of them is non-zero, we conclude that the rank of the matrix is 2.

For example, let

A = | 3  5 1 4 |
    | 2 −1 1 1 |
    | 5  4 2 5 |

Then

|3 5 1; 2 −1 1; 5 4 2| = 0,  |5 1 4; −1 1 1; 4 2 5| = 0.

One can check that every such 3 × 3 determinant from A is zero. So rank(A) ≠ 3. Now

|3 5; 2 −1| = −13 ≠ 0.

There is no need to check the other 2 × 2 determinants. We can conclude that rank(A) = 2.

Exercise 4.2

1. Find the rank of the following matrices.

(i) |3 1 2 5; 1 2 −1 2; 1 1 0 1|   (ii) |1 1 −1 2; 2 −2 0 2; 2 −8 3 −1|
(iii) |1 1 1 1; 1 1 −1 1; 1 −1 1 −1; 1 −1 −1 2|   (iv) |3 1 1 −1; −2 4 3 2; −1 9 7 3; 7 4 2 1|

2. Find the values of the determinants using Laplace expansion.

(i) |2 4 6; 7 8 3; 5 9 2|   (ii) |2 0 3; 5 1 0; 0 4 6|
(iii) |1 −2 −3; −2 3 4; 3 −4 5|   (iv) |3 2 0; 1 1 3; 2 −1 2|

3. Check the uniqueness of the solution for the following systems of equations.

(i) 2x − y + 3z = 9; x + 3y − z = 4; 3x + 2y + z = 10
(ii) 2x + y − z = 5; x − y + 2z = 3; −x + 2y + z = 1
(iii) 4x + y + z + w = 1; x − y + 2z − 3w = 0; 2x + y + 3z + 5w = 0; x + y − z − w = 2
(iv) x + 2y − 3z + 5w = 0; 2x + y − 4z − w = 1; x + y + z + w = 0; −x − y − z + w = 4

[Hint : Just check that the determinant of the coefficient matrix is non-zero for uniqueness.]
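The minor-checking procedure for the rank, as used in the 3 × 4 example above, can be written out directly: test the r × r minors from the largest r downward and return the first size at which a non-zero minor appears. A pure-Python sketch (function names are ours):

```python
# Rank by checking minors: the largest r for which some r x r minor is non-zero.
from itertools import combinations

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def rank(m):
    rows, cols = len(m), len(m[0])
    for r in range(min(rows, cols), 0, -1):
        for ri in combinations(range(rows), r):
            for ci in combinations(range(cols), r):
                if det([[m[i][j] for j in ci] for i in ri]) != 0:
                    return r
    return 0

A = [[3, 5, 1, 4],
     [2, -1, 1, 1],
     [5, 4, 2, 5]]   # third row = first row + second row

assert rank(A) == 2   # all 3x3 minors vanish, but |3 5; 2 -1| = -13 != 0
```

This brute-force search is only practical for small matrices; in the example every 3 × 3 minor vanishes because the third row is the sum of the first two.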
4.8 CRAMER'S RULE

Determinants can be used to solve a system of linear equations.

Theorem :

Let c₁, c₂, …, cₙ be column vectors such that D(c₁, c₂, …, cₙ) ≠ 0. Let B be a column vector and x₁, x₂, …, xₙ numbers such that x₁c₁ + x₂c₂ + … + xₙcₙ = B. Then for each j = 1, 2, …, n,

xⱼ = D(c₁, …, B, …, cₙ) / D(c₁, …, cⱼ, …, cₙ),

where in the numerator the column vector B replaces the column cⱼ.

Proof :

D(c₁, …, B, …, cₙ) = D(c₁, …, x₁c₁ + x₂c₂ + … + xₙcₙ, …, cₙ)
                    = Σₖ xₖ D(c₁, …, cₖ, …, cₙ),

where cₖ occurs in the j-th place. In every term of this sum except the j-th term, two column vectors are equal. Hence every term except the j-th term is equal to 0, so we get

D(c₁, …, B, …, cₙ) = xⱼ D(c₁, …, cⱼ, …, cₙ),

i.e.

xⱼ = D(c₁, …, B, …, cₙ) / D(c₁, …, cⱼ, …, cₙ).

So we can solve a system of equations using the above rule. This rule is known as Cramer's rule.

Example :

3x + 2y + 4z = 1
2x − y + z = 0
x + 2y + 3z = 1

By Cramer's rule,

x = |1 2 4; 0 −1 1; 1 2 3| / |3 2 4; 2 −1 1; 1 2 3|

y = |3 1 4; 2 0 1; 1 1 3| / |3 2 4; 2 −1 1; 1 2 3|

z = |3 2 1; 2 −1 0; 1 2 1| / |3 2 4; 2 −1 1; 1 2 3|

x = −1/5, y = 0, z = 2/5.

Exercise 4.3

1. Solve the following equations by Cramer's rule.

(i) x + y + z = 6; x − y + z = 2; x + 2y − z = 2
(ii) x + y − 2z = −10; 2x − y + 3z = −1; 4x + 6y + z = 2
(iii) −2x − y − 3z = 3; 3x − y + z = −13; 2x − 3z = −11
(iv) 4x − y + 3z = 2; 3x + 5y − 2z = 3; 3x + 2y + 4z = 6
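The worked example can be checked numerically. A minimal sketch of Cramer's rule in pure Python, with exact arithmetic via Fraction so the values −1/5, 0 and 2/5 come out exactly (the helper names are ours):

```python
# Cramer's rule for a 3x3 system: x_j is the ratio of two determinants,
# the numerator having column j replaced by the right-hand side.
from fractions import Fraction

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer(A, b):
    d = det3(A)                        # must be non-zero for a unique solution
    sol = []
    for j in range(3):
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        sol.append(Fraction(det3(Aj), d))
    return sol

A = [[3, 2, 4],
     [2, -1, 1],
     [1, 2, 3]]
b = [1, 0, 1]

assert det3(A) == -5
assert cramer(A, b) == [Fraction(-1, 5), Fraction(0), Fraction(2, 5)]
```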
Answers

Exercise 4.1
1. (i) −42  (ii) −114  (iii) 14  (iv) −9
2. (i) −18  (ii) −45  (iii) 4  (iv) 192

Exercise 4.2
1. (i) 3  (ii) 2  (iii) 4  (iv) 2
2. (i) −204  (ii) 72  (iii) −6  (iv) 23
3. (i) unique  (ii) unique  (iii) unique  (iv) unique

Exercise 4.3
1. (i) (1, 2, 3)  (ii) (5, 3, 4)  (iii) (−4, 2, 1)  (iv) (0, 1, 1)
Chapter 6

Characteristic Polynomial

Chapter Structure
6.1 Introduction
6.2 Objectives
6.3 Diagonalizable Linear Transformations
6.4 Triangulable Linear Transformations
6.5 Nilpotent Linear Transformations
6.6 Chapter End Exercises

6.1 Introduction

In the following chapter we will attempt to understand the concept of the characteristic polynomial. In the earlier two units of this course we have seen that studying matrices and studying linear transformations on a finite dimensional vector space is one and the same thing. As such, it is important to note that we get different matrices if we change the basis of the underlying vector space. We also saw that although we get different matrices for different bases, the corresponding matrices are similar. Since being similar is an equivalence relation on the space of n × n matrices, it is interesting to seek bases of the underlying vector space in which, for a given linear transformation, the corresponding matrix is simplest in appearance; and because similarity is an equivalence relation, we do not lose anything important as far as the linear transformation is concerned. Studying the so-called eigenvalues of a linear transformation addresses the issue of finding such bases, in which a given linear transformation has a matrix in simplest form. During the quest of finding such bases we come to know various beautiful properties of a linear transformation and its relation to the linear structure on the vector space.
6.2 Objectives

After going through this chapter you will be able to:
• find eigenvalues of a given linear transformation
• find a basis of a vector space in which the matrix of a linear transformation has diagonal or at least triangular form
• understand properties of eigenvalues that characterize properties of a linear transformation

Let V be a finite dimensional vector space over a field of scalars F. Let T be a linear transformation on V.

Definition 16. An eigenvalue (also known as characteristic value) of a linear transformation T is a scalar α in F such that there exists a nonzero vector v ∈ V with T(v) = αv. Any such v is known as an eigenvector corresponding to the eigenvalue α. The collection of all such v ∈ V for a particular eigenvalue, together with 0, is a subspace of V known as the eigenspace or characteristic space associated with α.

Theorem 6.2.1. Let T be a linear transformation on a finite dimensional space V. Then α is a characteristic value of T if and only if the operator T − αI is singular.

Proof. α is an eigenvalue of T if and only if there is a nonzero vector v with T(v) = αv, i.e. (T − αI)v = 0. Thus α is an eigenvalue if and only if T − αI has a nonzero kernel, which on a finite dimensional space is exactly the condition that T − αI is singular.

Remark 6.2.1. A linear transformation is singular if and only if the determinant of its matrix is zero. Thus α is an eigenvalue of T if and only if the determinant of the matrix of T − αI is zero. We see that this determinant is a polynomial in α, and hence the roots of the polynomial det(T − xI) are the eigenvalues of T.

Definition 17. det(T − xI) is known as the characteristic polynomial of T.

Example 7. Find the eigenvalues and eigenvectors of the following matrix

A = | 1 0 1 |
    | 2 3 1 |
    | 1 1 1 |

Solution:
Step 1: Consider det(A − λI) = 0.
det(A − λI) = det | 1−λ 0   1   |  = 0
                  | 2   3−λ 1   |
                  | 1   1   1−λ |

This gives the characteristic polynomial of A, and its roots are the eigenvalues of A. An eigenvector is a solution vector of the homogeneous system of linear equations (A − λI)x = 0, where λ is an eigenvalue of A.

Thus the characteristic polynomial of A is obtained by evaluating the determinant in the above equation, and it is found to be the following:

p(λ) = −λ³ + 5λ² − 5λ + 1

Step 2: The roots of p(λ) are 2 + √3, 1, and 2 − √3. These are the eigenvalues of A.

Step 3: Solve the system of linear equations (A − (2 + √3)I)x = 0 for the first eigenvalue 2 + √3. Solving, we get

X₁ = ( −3/2 + ½(2 + √3), ½ + ½(2 + √3), 1 )

Step 4: Similarly, for the other two eigenvalues we get the following eigenvectors:

X₂ = ( −1, 1, 0 )  (for the eigenvalue 1)
X₃ = ( −3/2 + ½(2 − √3), ½ + ½(2 − √3), 1 )

Remark 6.2.2. Roots of the characteristic polynomial may repeat, and the behaviour of a linear transformation (or its corresponding matrix) depends crucially on the multiplicities of the eigenvalues and the dimensions of the corresponding eigenspaces. One simple example of a matrix with repeated eigenvalues is the following matrix

| 1 0 1 |
| 0 1 1 |
| 0 0 3 |

Definition 18. Algebraic multiplicity of an eigenvalue. The multiplicity of an eigenvalue as a root of the characteristic polynomial is known as the algebraic multiplicity of the corresponding eigenvalue.

Definition 19. Geometric multiplicity of an eigenvalue. The dimension of the eigenspace or characteristic space of an eigenvalue is known as the geometric multiplicity of that eigenvalue.
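The eigenpairs claimed in Example 7 can be checked directly: each eigenvalue is a root of p(λ) and each eigenvector satisfies Av = λv. A pure-Python sketch (helper names ours):

```python
# Verify Example 7: eigenvalues 2 + sqrt(3), 1, 2 - sqrt(3) of A,
# with the eigenvectors written with third component 1 as in the text.
import math

A = [[1, 0, 1],
     [2, 3, 1],
     [1, 1, 1]]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def p(l):
    # characteristic polynomial det(A - l*I)
    return -l**3 + 5 * l**2 - 5 * l + 1

r3 = math.sqrt(3)
pairs = [
    (2 + r3, [-1.5 + (2 + r3) / 2, 0.5 + (2 + r3) / 2, 1.0]),
    (1.0,    [-1.0, 1.0, 0.0]),
    (2 - r3, [-1.5 + (2 - r3) / 2, 0.5 + (2 - r3) / 2, 1.0]),
]

for lam, v in pairs:
    assert abs(p(lam)) < 1e-9                       # root of the characteristic polynomial
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-9 for i in range(3))   # A v = lam v
```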
Theorem 6.2.2. Let α be an eigenvalue of a linear transformation T, with eigenvector v, and let f(x) be a polynomial in the indeterminate x. Then f(T)v = f(α)v.

Proof. First consider the case of f(x) being a monomial. Let f(x) = xᵏ, and apply induction on k. Let k = 1. In this case f(x) = x, i.e. f(T) = T, and it follows that f(T)v = T(v) = αv = f(α)v. Assume the lemma for k = r; thus we assume that Tʳ(v) = αʳv. Consider k = r + 1. Then Tʳ⁺¹(v) = Tʳ(Tv) = Tʳ(αv) = αTʳ(v). Using the induction hypothesis we get that Tʳ⁺¹(v) = αʳ⁺¹v. Thus the lemma is established for all monomials.
Now let f(x) = a₀ + a₁x + … + aₖxᵏ. Then f(T)v = a₀v + a₁Tv + … + aₖTᵏv. Using the lemma for monomials we get that f(T)v = (a₀ + a₁α + … + aₖαᵏ)v = f(α)v, and the result is established for all polynomials.

Remark 6.2.3. This is a very important lemma and we will use it on various occasions in what follows.

Theorem 6.2.3. Let α₁ and α₂ be two distinct eigenvalues of a linear transformation T on a finite dimensional vector space V. Let v₁ and v₂ be respective eigenvectors. Then v₁ and v₂ are linearly independent.

Proof. Suppose otherwise, that v₁ and v₂ are linearly dependent. Then there exists a nonzero constant c such that v₂ = cv₁. Therefore T(v₂) = cT(v₁), i.e. α₂v₂ = cα₁v₁ = α₁v₂. Since v₂ ≠ 0, this gives α₂ = α₁, a contradiction to α₁ and α₂ being distinct. Therefore v₁ and v₂ are linearly independent.

Now recall that the geometric multiplicity of an eigenvalue is the number of linearly independent eigenvectors corresponding to that eigenvalue. In other words, the geometric multiplicity is nothing but the dimension of the eigenspace (i.e. characteristic space) of the eigenvalue.
Note however that the direct sum of all eigenspaces of a linear operator need not exhaust the entire vector space on which the linear transformation is defined, and from now on our attempt will be to see what best can be done in case we fail to recover the vector space V as the direct sum of the eigenspaces corresponding to a given linear transformation. The question we want to address is: under which circumstances does the direct sum of the eigenspaces exhaust the entire vector space? We will see that these linear transformations are precisely the ones which are diagonalizable. In the following sections we will make these ideas precise.

Theorem 6.2.4. Let T be a linear transformation on a finite dimensional vector space V. Let α₁, α₂, …, αₖ be the distinct eigenvalues of T and let Wᵢ be the eigenspace corresponding to the eigenvalue αᵢ. Let Bᵢ be an ordered basis for Wᵢ. Let W = W₁ + W₂ + … + Wₖ. Then dim W = dim W₁ + dim W₂ + … + dim Wₖ. Also B = (B₁, B₂, …, Bₖ) is an ordered basis for W.

Proof. Vectors in Bᵢ for 1 ≤ i ≤ k are linearly independent eigenvectors of T corresponding to the eigenvalue αᵢ. Also, vectors in Bᵢ are linearly independent of those in Bⱼ for i ≠ j, because they are eigenvectors corresponding to different eigenvalues. Thus the vectors in B are all linearly independent. Note that the vectors in B span W; this is because of how W is defined.

Theorem 6.2.5. The set of all linear transformations on a finite dimensional vector space forms a vector space over the field F; here, in addition, there is a natural multiplication given by composition of functions. This vector space is isomorphic to the space of n × n matrices and hence has dimension n². Let us denote this space by L(V, V).

Let T be a linear transformation on the finite dimensional vector space V; thus T ∈ L(V, V). Consider the first (n² + 1) powers of T in L(V, V):

I, T, T², …, T^{n²}

Since the dimension of L(V, V) is n² and above we have n² + 1 elements, these must be linearly dependent, i.e. there exist n² + 1 scalars, not all zero, such that

c₀I + c₁T + c₂T² + … + c_{n²}T^{n²} = 0    (6.1)

which is the same thing as saying that T satisfies a polynomial of degree at most n². We now have the definition:

Definition. Any polynomial f(x) such that f(T) = 0 is known as an annihilating polynomial of the linear transformation T.

The polynomial given in (6.1) is one such polynomial. Thus the set of annihilating polynomials is nonempty, and we can think of an annihilating polynomial of least degree which is monic.

Definition. The annihilating polynomial of least degree which is monic is known as the minimal polynomial of the linear transformation T.

Example 8. Find the minimal polynomial of the following matrix

A = | 2 1 0 0 |
    | 0 2 0 0 |
    | 0 0 2 0 |
    | 0 0 0 5 |
Solution:
Step 1: Find the characteristic polynomial of A. The characteristic polynomial of A is the following:

p(λ) = (λ − 2)³(λ − 5)

Step 2: By the definition of the minimal polynomial, the minimal polynomial m(λ) must divide the characteristic polynomial; hence it must be one of the following:

(λ − 2)³(λ − 5)
(λ − 2)²(λ − 5)
(λ − 2)(λ − 5)

Step 3: Note that the minimal polynomial is the polynomial of least degree which is satisfied by the matrix A. The polynomial of least degree amongst the above polynomials which A satisfies is the second one; hence the minimal polynomial is

(λ − 2)²(λ − 5)

Remark 6.2.4. 1. The set of all annihilating polynomials of a linear transformation T is an ideal in F[x]. Since F is a field, this ideal is a principal ideal, and the monic generator of this ideal is nothing but the minimal polynomial of T.
2. Since the minimal polynomial is monic, it is unique.
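Step 3 can be checked by direct matrix arithmetic: (A − 2I)²(A − 5I) vanishes, while the lower-degree candidate (A − 2I)(A − 5I) does not. A pure-Python sketch (helper names ours):

```python
# Verify the minimal polynomial of Example 8: (x-2)^2 (x-5) annihilates A,
# but (x-2)(x-5) does not.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def shift(a, s):
    # a - s*I
    n = len(a)
    return [[a[i][j] - (s if i == j else 0) for j in range(n)] for i in range(n)]

A = [[2, 1, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 5]]

B2 = shift(A, 2)
B5 = shift(A, 5)
zero = [[0] * 4 for _ in range(4)]

assert matmul(matmul(B2, B2), B5) == zero   # (A-2I)^2 (A-5I) = 0
assert matmul(B2, B5) != zero               # degree-2 candidate fails
```

The failure of (A − 2I)(A − 5I) comes from the 2 × 2 Jordan-type block in the upper-left corner, which forces the factor (x − 2) to appear squared.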
Theorem 6.2.6. Let T be a linear transformation on an n-dimensional vector space V. The characteristic and minimal polynomials of T have the same roots, except for multiplicities.

Proof. Let p be the minimal polynomial for T. Let α be a scalar. We want to show that p(α) = 0 if and only if α is an eigenvalue of T.
First suppose that p(α) = 0. Then by the remainder theorem for polynomials,

p = (x − α)q    (6.2)

where q is a polynomial. Since deg q < deg p, the definition of the minimal polynomial p tells us that q(T) ≠ 0. Choose a vector v such that q(T)v ≠ 0. Let q(T)v = w. Then

0 = p(T)v = (T − αI)q(T)v = (T − αI)w

and thus α is an eigenvalue.
Conversely, let α be an eigenvalue of T, say T(w) = αw with w ≠ 0. Since p is a polynomial, we have seen that

p(T)w = p(α)w

Since p(T) = 0 and w ≠ 0, we have that p(α) = 0. Thus the eigenvalue α is a root of the minimal polynomial p.

Remark 6.2.5. 1. Every root of the minimal polynomial is also a root of the characteristic polynomial; in fact the minimal polynomial divides the characteristic polynomial. This is the famous Cayley–Hamilton theorem, which states that the linear transformation T satisfies its characteristic polynomial, in the sense that if f(x) is the characteristic polynomial then f(T) = 0.
2. Similar matrices have the same minimal polynomial.

Check Your Progress
1. Let

A := |  4  4  4 |
     | −2 −3 −6 |
     |  1  3  6 |

Compute (a) the characteristic polynomial, (b) the eigenvalues, (c) all eigenvectors, (d) identify the algebraic and geometric multiplicities of each of the eigenvalues.
2. Let A be the real 3 × 3 matrix

| 3 1 −1 |
| 2 2 −1 |
| 2 2  0 |

Find the minimal polynomial of A.

6.3 Diagonalizable Linear Transformations
Definition 20. Let V be a vector space of dimension n, and T be a linear transformation on V. Let W be a subspace of V. We say that W is invariant under T if for each vector v ∈ W the vector T(v) is in W, i.e. if T(W) ⊆ W.

Definition 21. Let T be a linear transformation on a finite dimensional vector space V. We say that T is diagonalizable if there exists a basis of V consisting entirely of eigenvectors of T.

Remark 6.3.1. The matrix of a diagonalizable T in a basis of V consisting of eigenvectors of T is a diagonal matrix with the eigenvalues along the diagonal of the matrix.

Example 9. Find a basis in which the following matrix is in diagonal form

A = | 1 0 1 |
    | 2 3 1 |
    | 3 3 3 |

Solution:
Since the characteristic polynomial of the matrix is p(λ) = −λ³ + 7λ² − 9λ + 3, we get the following eigenvalues for A: 3 + √6, 1, and 3 − √6. Since all eigenvalues are distinct, the given matrix is diagonalizable, and in the basis formed by three independent eigenvectors the matrix becomes diagonal.

X₁ = ( −5/2 + ½(3 + √6), 3/2 − (1/6)(3 + √6), 1 )
X₂ = ( −1, 1, 0 )
X₃ = ( −5/2 + ½(3 − √6), 3/2 − (1/6)(3 − √6), 1 )

The required diagonal matrix is the matrix whose diagonal is formed by the three eigenvalues respectively.
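In the basis (X₁, X₂, X₃) the matrix of A is diag(3 + √6, 1, 3 − √6); equivalently, AXᵢ = λᵢXᵢ for each eigenpair. A short pure-Python check of Example 9 (helper names ours):

```python
# Verify the eigenpairs of Example 9: A X_i = lambda_i X_i for
# lambda = 3 + sqrt(6), 1, 3 - sqrt(6).
import math

A = [[1, 0, 1],
     [2, 3, 1],
     [3, 3, 3]]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

r6 = math.sqrt(6)
pairs = [
    (3 + r6, [-2.5 + (3 + r6) / 2, 1.5 - (3 + r6) / 6, 1.0]),
    (1.0,    [-1.0, 1.0, 0.0]),
    (3 - r6, [-2.5 + (3 - r6) / 2, 1.5 - (3 - r6) / 6, 1.0]),
]

for lam, v in pairs:
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-9 for i in range(3))
```

Since the three eigenvalues are distinct, the three eigenvectors are automatically independent (Theorem 6.2.3 extended to several eigenvalues), so they do form a basis.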
Check Your Progress
Let A be a matrix over a field F. Let χ_A be the characteristic polynomial of A and p(t) = t⁴ + 1 ∈ F[t]. State with reasons whether the following are true or false.
1. If χ_A = p, then A is invertible.
2. If χ_A = p, then A is diagonalizable over F.
3. If p(B) = 0 for some 8 × 8 matrix B, then p is the characteristic polynomial of B.
4. There is a unique monic polynomial q ∈ F[t] of degree 4 such that q(A) = 0.

Theorem 6.3.1. Let T be a diagonalizable linear transformation on an n-dimensional space V. Let α₁, α₂, …, αₖ be the distinct eigenvalues of T. Let d₁, d₂, …, dₖ be the respective multiplicities with which these eigenvalues are repeated. Then the characteristic polynomial of T is

f = (x − α₁)^{d₁}(x − α₂)^{d₂} … (x − αₖ)^{dₖ},  d₁ + d₂ + … + dₖ = n

Proof. If T is diagonalizable, then in the basis consisting of eigenvectors the matrix of T is a diagonal matrix with all the eigenvalues lying along the diagonal. We know that the characteristic polynomial of a diagonal matrix is a product of linear factors of the form

f = (x − α₁)^{d₁}(x − α₂)^{d₂} … (x − αₖ)^{dₖ}

Check Your Progress
1. Let T be the linear operator on R⁴ which is represented in the standard ordered basis by the following matrix

| 0 0 0 0 |
| a 0 0 0 |
| 0 b 0 0 |
| 0 0 c 0 |

Under what conditions on a, b, and c is T diagonalizable?
2. Let N be a 2 × 2 complex matrix such that N² = 0. Prove that either N = 0 or N is similar over C to

| 0 0 |
| 1 0 |

Lemma 6.3.1. Let W be an invariant subspace for T. The characteristic polynomial of the restriction operator T_W divides the characteristic polynomial of T. The minimal polynomial of T_W divides the minimal polynomial of T.
Proof. We have

A = | B C |    (6.3)
    | 0 D |

where A = [T]_B and B = [T_W]_{B′}, for an ordered basis B of V extending an ordered basis B′ of W. Because of the block form of the matrix,

det(A − xI) = det(B − xI) det(D − xI)    (6.4)

That proves the statement about characteristic polynomials. Note that we have used the notation I for identity matrices of three different sizes.
Note that the k-th power of the matrix A has the block form

Aᵏ = | Bᵏ Cₖ |    (6.5)
     | 0  Dᵏ |

where Cₖ is some r × (n − r) matrix. Therefore, any polynomial which annihilates A also annihilates B (and D too). So the minimal polynomial of B divides the minimal polynomial of A.

6.4 Triangulable Linear Transformations

Definition 22. Triangulable Linear Transformation. The linear transformation T is called triangulable if there is an ordered basis of V in which T is represented by a triangular matrix.

Lemma 6.4.1. Let V be a finite dimensional vector space and let T be a linear transformation on V such that the minimal polynomial of T is a product of linear factors

p = (x − α₁)^{r₁} … (x − αₖ)^{rₖ}    (6.6)

where αᵢ ∈ F. Let W be a proper subspace of V which is invariant under T. Then there exists a vector v ∈ V such that
1. v is not in W;
2. (T − αI)v is in W, for some characteristic value α of the transformation T.

Proof. Let u be any vector in V which is not in W. Then there exists a monic polynomial g of least degree such that g(T)u ∈ W, and g divides the minimal polynomial p of T. Since u is not in W, the polynomial g is not constant. Therefore,

g = (x − α₁)^{l₁}(x − α₂)^{l₂} … (x − αₖ)^{lₖ}

where at least one of the integers lᵢ is positive. We choose j such that lⱼ > 0; then (x − αⱼ) divides g. Hence

g = (x − αⱼ)h    (6.7)

By the definition of g, the vector v = h(T)u cannot be in W. But

(T − αⱼI)v = (T − αⱼI)h(T)u = g(T)u    (6.8)

is in W.

We obtain a triangular matrix representation of a linear transformation by applying the following procedure:
1. Apply the above lemma to the trivial subspace W = 0 to get a vector v₁.
2. Once v₁, v₂, …, v_{l−1} are determined, form the subspace W spanned by these vectors and apply the above lemma to this W to obtain v_l, in the following way. Note that the subspace W spanned by v₁, v₂, …, v_{l−1} is invariant under T. Therefore, by the above lemma, there exists a vector v_l in V which is not in W such that (T − α_lI)v_l is in W for a certain eigenvalue α_l of T. This can be done because the minimal polynomial of T is factored into linear factors, so the above lemma is applicable.
We will illustrate this procedure with the help of an example.

Theorem 6.4.1. In the basis obtained by the above procedure, the matrix of T is triangular.

Proof. By the above procedure we get an ordered basis {v₁, v₂, …, vₙ}. This basis is such that T(vⱼ) lies in the space spanned by v₁, v₂, …, vⱼ, and we have the following form:

T(vⱼ) = a₁ⱼv₁ + a₂ⱼv₂ + … + aⱼⱼvⱼ,  1 ≤ j ≤ n    (6.9)

With this type of representation, the matrix of T is triangular.

Check Your Progress
Let

A = |  0  1  0 |
    |  2  2  2 |
    | −2 −3 −2 |

Check whether the above matrix is similar over the field of real numbers to a triangular matrix. If so, find such a triangular matrix.
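For the Check Your Progress matrix one can at least test the key hypothesis numerically: the matrix turns out to be nilpotent (A³ = 0), so its minimal polynomial is a power of x, a product of linear factors, and the procedure above applies. A small pure-Python check (helper names ours; this is only the nilpotency check, not the full triangularization):

```python
# Check that the Check Your Progress matrix is nilpotent: A^3 = 0.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

A = [[0, 1, 0],
     [2, 2, 2],
     [-2, -3, -2]]

A2 = matmul(A, A)
A3 = matmul(A2, A)

assert A2 == [[2, 2, 2], [0, 0, 0], [-2, -2, -2]]
assert A3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Since A³ = 0, every eigenvalue is 0, and any triangular form of A has zeros along the diagonal.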
Theorem 6.4.2. Primary Decomposition Theorem
Let T be a linear transformation on a finite dimensional vector space V over the field F. Let p be the minimal polynomial of T,

p = p₁^{r₁} … pₖ^{rₖ}    (6.10)

where the pᵢ are distinct irreducible monic polynomials over F and the rᵢ are positive integers. Let Wᵢ be the null space of pᵢ(T)^{rᵢ}, i = 1, 2, …, k. Then
1. V = W₁ ⊕ … ⊕ Wₖ;
2. each Wᵢ is invariant under T;
3. if Tᵢ is the transformation induced on Wᵢ by T, then the minimal polynomial of Tᵢ is pᵢ^{rᵢ}.

Proof. Before proceeding to a proof of the above theorem, we note that the real point is in obtaining the primary decomposition stated in the theorem explicitly for a given linear transformation. Thus we present the proof in the form of an algorithm which, for given T, will produce the primary decomposition of T. The following steps describe the method:
1. For given T, obtain the minimal polynomial of T. Let it be of the form

p = p₁^{r₁} … pₖ^{rₖ}    (6.11)

2. For each i, let

fᵢ = p / pᵢ^{rᵢ} = ∏_{j≠i} pⱼ^{rⱼ}    (6.12)

Note that the fᵢ are distinct and are relatively prime.
3. Find polynomials gᵢ such that

Σ_{i=1}^{k} fᵢgᵢ = 1    (6.13)

4. Let Eᵢ = hᵢ(T) = fᵢ(T)gᵢ(T).
5. Then

E₁ + … + Eₖ = I    (6.14)
EᵢEⱼ = 0,  i ≠ j    (6.15)

6. These Eᵢ serve the purpose of obtaining the invariant subspaces Wᵢ which decompose V into a direct sum; each Eᵢ is a projection operator whose range is Wᵢ.
7. It can be verified that the minimal polynomial of Tᵢ, which is the restriction of T to Wᵢ, is pᵢ^{rᵢ}.

6.5 Nilpotent Linear Transformations

Definition 23. Nilpotent Transformation. Let N be a linear transformation on the vector space V. N is said to be nilpotent if there exists some positive integer r such that Nʳ = 0.

Theorem 6.5.1. Let T be a linear transformation on the finite dimensional vector space V over the field F. Suppose that the minimal polynomial of T decomposes over F into a product of linear polynomials. Then there is a diagonalizable transformation D on V and a nilpotent transformation N on V such that

T = D + N;    (6.16)
DN = ND.    (6.17)

The transformations D and N are uniquely determined, and each of them is a polynomial in T.

We will now see the process to find D and N for a given linear transformation T.
1. Calculate the minimal polynomial of T and factor it into linear polynomials pᵢ = x − αᵢ.
2. In the notation of the above theorem, calculate the Eᵢ and note that the range of Eᵢ is the null space Wᵢ of (T − αᵢI)^{rᵢ}.
3. Let D = α₁E₁ + … + αₖEₖ and observe that D is a diagonalizable transformation. We call D the diagonalizable part of T.
4. Let N = T − D. We prove below that N so defined is a nilpotent transformation.
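For the matrix of Example 8 (minimal polynomial (x − 2)²(x − 5)) the recipe can be carried out by hand: f₁ = (x − 5), f₂ = (x − 2)², and one checks that g₁ = −(x + 1)/9, g₂ = 1/9 satisfy f₁g₁ + f₂g₂ = 1. The resulting E₁, E₂ are projections summing to I, and D = 2E₁ + 5E₂, N = A − D give the decomposition T = D + N. A pure-Python sketch with exact arithmetic (the choice of gᵢ and all helper names are ours):

```python
# Primary decomposition projections and the D + N decomposition for the
# matrix of Example 8, with minimal polynomial (x-2)^2 (x-5).
from fractions import Fraction

n = 4

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(n)] for i in range(n)]

def scal(s, a):
    return [[s * a[i][j] for j in range(n)] for i in range(n)]

def shift(a, s):  # a - s*I
    return [[a[i][j] - (s if i == j else 0) for j in range(n)] for i in range(n)]

A = [[2, 1, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 5]]

# E1 = f1(A) g1(A) = (A - 5I) * (-(A + I)/9),  E2 = f2(A) g2(A) = (A - 2I)^2 / 9
E1 = scal(Fraction(-1, 9), matmul(shift(A, 5), shift(A, -1)))
B = shift(A, 2)
E2 = scal(Fraction(1, 9), matmul(B, B))

I4 = [[int(i == j) for j in range(n)] for i in range(n)]
zero = [[0] * n for _ in range(n)]

assert add(E1, E2) == I4            # (6.14)
assert matmul(E1, E2) == zero       # (6.15)
assert matmul(E1, E1) == E1         # E1 is a projection

D = add(scal(Fraction(2), E1), scal(Fraction(5), E2))
N = add(A, scal(Fraction(-1), D))

assert matmul(N, N) == zero                 # N is nilpotent
assert matmul(D, N) == matmul(N, D)         # D and N commute
```

Here E₁ is the projection onto the generalized eigenspace of 2 (the first three coordinates) and E₂ projects onto the eigenspace of 5, so D = diag(2, 2, 2, 5) and N carries the single off-diagonal 1.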
Proof that N defined as above is a nilpotent transformation.
Note that the range space of Eᵢ is the null space Wᵢ of (T − αᵢI)^{rᵢ}.

I = E₁ + E₂ + … + Eₖ    (6.18)
⟹ T = TE₁ + … + TEₖ    (6.19)
D = α₁E₁ + … + αₖEₖ    (6.20)

Therefore N = T − D becomes

N = (T − α₁I)E₁ + … + (T − αₖI)Eₖ    (6.21)
N² = (T − α₁I)²E₁ + … + (T − αₖI)²Eₖ    (6.22)
Nʳ = (T − α₁I)ʳE₁ + … + (T − αₖI)ʳEₖ    (6.23)

When r ≥ rᵢ for every i, then Nʳ = 0, because each transformation (T − αᵢI)ʳ is then a null transformation on the range of Eᵢ. Therefore N is a nilpotent transformation.

Example 10. Find a basis in which the following matrix has triangular form, and find that triangular form.

A = |  0  1  0 |
    |  2  2  2 |
    | −2 −3 −2 |

Solution:
The process to find the triangular form of a matrix is as follows.
Step 1: Find at least one eigenvalue and a corresponding eigenvector of A. For the above matrix the characteristic polynomial is f(λ) = −λ³. Hence 0 is an eigenvalue, repeated thrice, and (up to scalars) there is only one independent eigenvector, u₁ = (1, 0, −1).
Step 2: Now note that u₁ = (1, 0, −1) ∈ ker A and ker A ⊂ ker A² ⊂ ker A³. If u₂ ∈ ker A² \ ker A, then Au₂ ∈ ker A, so Au₂ = αu₁ for some scalar α. Here

A² = |  2  2  2 |
     |  0  0  0 |
     | −2 −2 −2 |

and ker A² = ⟨