Paper-I-Algebra-I-munotes

Page 1

11LINEAR EQUATIONSUnit Structure :1.0Introduction1.1Objectives1.2System of Linear Equation1.3Solution of the system of Linear Equations by GaussianElimination method1.0 INTRODUCTIONLinear word comes from line. You know the equationof astraight line in two dimensions has the formax by c.T h i si salinear equation in two variables x any y. solving this equation meansto findxandyin. which satisfiedax by c. The geometricinterpretation of the equation is that the set of all points satisfyingthe equation forms a straight line in the plane through the point/,cc oand with slope/ab c. In this chapter, we shall review thetheory of such equations in n variables and interpret the solutiongeometrically.1.1 OBJECTIVESAfter going through this chapter, you will be able to :Understand the characteristic of the solutions.Solve if the equations are solvable.Interpret the system geometrically.1.2 SYSTEMS OF LINEAR EQUATIONSThe collection of linear equations :11 11n n 121 12n n 2a x + ...+ a x = ba x + ...+ a x = b11nn mam x + ...+ am x = bmunotes.in

Page 2

2is called a system of m linear equations in nunknowns1nx ,..., x. Hereij ia, b∈R are given. We shall write this ina short form asnij j ij= 1ax = b , 1≤i≤m………………….. (1.2.1)Solving this system means to find real numbers1nx ,..., xwhich satisfy the system. Anyn-tuple (1nx ,..., x)w h i c hsatisfies the system is called a solution of the system. If12mb= b = - = b = 0,w es a yt h a tt h es y s t e mi sh o m o g e n e o u sw h i c hcan be written is short form, innij jj= 1ax = 0 , 1≤i≤m…………………… (1.2.2)Note that0= (0, … ,0)always satisfies (1.2.2). This solutionis called the trivial solution. We say (1nx ,..., x) is a nontrivial if(1nx ,..., x)≠0 , . . , 0. That is if there exits it least one suchthatix≠0.Perhaps the most fundamental technique for finding thesolutions of a system of linear equations is the technique ofelimination. We can illustrate this technique on the homogenoussystem12 31232x - x + x = 0x+ 3 x + 4 x = 0.If weadded (-2) times the second equation to the firstequation we obtain23-7x - 7x = 0i.e.23x= - x. Similarly eliminatingx2from the above equation, we obtain137x + 7x = 0i.e.13x= - x.S owe conclude that if123x, x , xis a solution then12 3x= x = - x.T h u sthe set of solutions consists of all triples(a, a,-a).Let1na, . . , abe asolution of the homogeneous system(1.2.2).Then we see that1nαa, . . . ,αais again a solution of(1.2.2)for any.T h i sh a st h efollowing geometric interpretation in the case of the threedimensional space3.Let(r, s, t)be a solution of the systemax + by + cz = 0.T h a tis,ar + bs + ct = 0. Then the solution set is a plane through theorigin. So the plane contains(0, 0, 0)and(r, s, t).The line joiningthese two points isx-0 t-0 z-0=== -α0-r 0-s 0-t(say) i.e.munotes.in

Page 3

3x=αr, y =αs,z =αtis again a solution of the systemax + by +cz = 0.Also if1,....,nbbis another solution of(1.2.2),then11 n na+ b, . . , a + bis again a solution of(1.2.2).These two togethercan be described as the set of solutions of a homogeneous system oflinear equations closed under addition and scalar multiplication.However, the set of solutions of a non-homogeneous systemof linear equations need not be closed under addition and scalarmultiplication. For example, consider the equation4x-3y = 1,a=(1, 1)is a solution butα1, 1 =α,αis not a solution ifα≠1.A l s o ,1b= , 04is another solution but5a+b= , 14is not a solution.The homogeneous system given by(1.2.2)is called theassociated homogeneous system of(1.2.1).Let S be the set of solutions of the non-homogenous systemandhSbe the set of solution of the associated homogenous systemof equations. AssumeS.hSis always non-empty, as the privialsolution0,...,0hS.L e txSandnyS.W ew i l lshow that forany,xy S .SincexSwe have1nij j ijax bsimilarly,10nij jjayfor1im.F o rand1im,w eh a v e1nij j jjax y=11nnij jij jjjax ay1nij jjax1ib for i m Soxyis also a solution of (1.2.1).Now if1,...,snzzzand1,...,snxxthus1nij i ijaz band1nij i ijax b.Therefore111nnnij j jij jij jjjjax z a x a z  0iibb.munotes.in

Page 4

4That is, ifxandzare any two solutions of the non-homogeneous system thenx-zis a solution of the homogeneoussystem. That is,hSxz.S obythe above two observations we canconclude a single fact byfollowing way.Let us fixxS. Then if we definehhx+S = x+y y∈S .The first observation thatx+αyis also a solution if (1.2.1)impliesnxSS. Also for allz∈S ,z=x+ z-x ∈x+S.T h i simpliesS⊂x+S.S ohS=x+S.T h i sxis calledaparticularsolution of (1.2.1). So we have the fact :To find all the solution of (1.2.1) it is enough to find all thesolutions of the associated homogeneous system and any particularsolution (1.2.1).These are mainly for the purpose of reviewing the so-calledGaussian elimination method of solving linear equations. Here weeliminate one variable and reduce the system to another set of linearequations with fewero number of variables. We repeat the aboveprocess with the system so obtained by deleting again one equationtill we are finally left with a single equation. In this last equation,except for the first xi terms, the rest of the variables are treated as“free” and assigned arbitrary peal numbers.Let us clarify this by thefollowing examples.Example 1.2.1 :12:122Ex y zEx y z  To eliminate y we do12EEand get the equation323xz.We treat z as the free variable and assign the value t to z. i.e. z = t. so213xt.S u b s t i t u t i n gya n dzi nE1we get/3xt.T h u st h esolution set S in given by21S = 1 - t,- t,t / t∈R3321= 1,0,0 + t - ,- , 1 / t∈R33.So (1,0,0) is a particular solution of the given system whichsatisfies both E1and E2, hence lies on both the planes defined by themunotes.in

Page 5

5equations E1and E2.A n d21-, -, 133is a point of3which lies onthe plane through the origin corresponding to the associatedhomogeneous system. Hence all the point on the line joining21-, - , 133and the origin also lie on the plane through the origin.Example 1.2.2 :Consider the system-11 2 3 421 2 3 431 2 3 4E: = x+ x+ x+ x= 1E: = x + x+ x - x= - 1E: = x + x+ x+ 5 x= 5Here E1-E2gives us 2x4=2 .S ox4=1 substituting this valuein above equation we get x1+x2+x3=0 .T h i si sal i n e a re q u a t i o ni nthree variables and we can think x2and x3as free variables. So welet x2=s and x3=ts ot h a tx1=-s-t. Hence the solution set isS:=- s - t , s , t , 1 s , t∈R= s -1, 1,0,0 + t -1,0, 1,0 + 0,0,0, 1 s,t∈R= 0,0,0, 1 + s -1, 1,0,0 + t -1,0, 1,0 s,t∈RCheck your Progress :Solve the following systems :1)3x + 4y + z = 0x+y+z=0Ans.t( -3,2, 1) t∈R2)x-y+4 z=42x + 6z = -2Ans. s( -1,-5,0) + t( -3, 1, 1) s t∈R3)3x + 4y = 0x+y=0Ans( . 0,0)Observation :By the above discussion we see that a homogeneous, systemneed not always have non-trivial solutions. We also observe that ifthe number of unknowns is more than the number of equations thenthe system always has a non-trivial solution.munotes.in

Page 6

6This can be geometrically interpreted as follows :Letax + b = 0, this single equation has two variables and itssolutions are all points lying on a line given by the equation.Again111222ax+ by+ cz=0ax + by + cz = 0always has non-trivial solutions whichlie on the line ofintersection of the above two planes.Theorem 1.2.1 :The systemnij jj= 1ax = 0for1≤i≤malways has non-trivialsolution ifm1.11 11n n∴a x +...+a x = 0.If each0iathen any value of the variables will be a solution andan o n-trivial solution ceptainly exists. Suppose some co-efficient,sayija≠0. Then we write-1j1 j 1 1 11j -1 j -1 ij +1 j +11n nx= - a ax + . . 1 + a x + a x + . . . . . + ax.Hence if we chooseiarbitrarily for allijand take-1j1 j 1 1 11j -1 j -1 1j +1 j +11n nα=- a aα+. . +aα+aα+. . +aα.t h e n1,..,nis asolution ofnxjijj= 1a= 0. Thus for m = 1 and n > 1 we get a non trivialsolution.We prove theresult by induction on m. As inductionhypothesis, let the system (m-1) equation in k variables where (m-1)

Page 7

711 1 11n n22 1 12n nE: = ax+ . . . . + ax= 0E: = ax + . . . . + ax= 0ii 1 1ih nE := a x + ....+ a x = 0mm 1 1mn nE := a x + ....+ a x = 0Since0ija, from Eiwe have-1jj j i j 1i, j -1 j -1x= - a a x + . . + a x +i, j +1 j +1in nax+ . . + a x.If we substitute this value ofxjin other equations we will getan e ws y s t e mo f(m-1)equations in(n-1)variables1j - 1 j + 1 nx, . . x , x , . . xas follows :For1≤k≤m , k≠i-1kkr kj i j ir rr≠jE: = a+ a (- a )a x= 0because by E1we get-111 1 1, j -1 j -1 1j i j i1 1i, j -1 j -1 i, j +1 j +1in nax + . . t a x + a - a ax + . . + a x t a x + . . + ax +1, 1 1 1.. 0jjnnax a x which implies.-111 1j ij i1 1a+ a - a a x + - - - - - +-11j -1 1j ij i, j -1 ja+ a - a a x + - - - - - +-11n 1j ij in na+ a - a a x= 0i.e.r≠jn-1i1r ij ij ir rr= 1E: = a+ a - a a x= 0so byinductionhypothesis(m-1)equationEkhas a non-trivial solution.1j - 1 j + 1 nx, . . . , x , x , . . , xasm- 1< n- 1. In particular,kx≠0forsomek≠j.W et a k ejjx=αso,-1ji j i r pr≠jα=- a aα.W ec l a i m1j - 1 jnα,..,α,α,..,αis a non-trivial solution.For1km,kkr r kj jrjEa x a 1kr r kj ijis srjrjaa a a   1kr kj ij ir prjaaaa  kEforkimunotes.in

Page 8

8As1,...nis a solution ofiE,ir pir r ijraa a 1ij ir rrjaa0ir ir rrja   1,,n is non-trivial since0kfor somekjbythe inductionhypothesis.Thus1,....,nis a non-trivial solution of the original system.Exercise 1.1 :1)Find one non-trivial solution for each one of the followingsystems of equations.a)x+2 y+z=0b)3x + y + z = 0x+y+z=0c)2x-3y + 4z = 03x + y + z = 0d)2x + y + 4z + 19 = 0-3x + 2y-3z + 19 = 0x+y+z=02)Show that the only solution of the following systems ofequations are trivial.a)2x+3 y=0x-y=0b)4x + 5y = 0-6x + 7y = 0c)3x + 4y-2z = 0x+y+z=0-x-3y + 5z = 0d)4x-7y + 3z = 0x+y=0y-6z = 0munotes.in

Page 9

91.3 SOLUTION OF THE SYSTEM OF LINEAREQUATIONS BY GAUSS ELIMINATION METHODLet the system of m linear equations in n unknown be1,1nij j ijax b i m .Then the co-efficient matrix is-11 12 112............nmm m naa aaa a Also we define the augmented matrix by11 12 1 112.... b........ bnmm m n maa aaa a We will perform the followingoperations on the system oflinear equations, called elementary row operations :Multiply one equation by a non-row number.Add are equation to another.Interchange two equations.These operations are reflected in operations on the augmentedmatrix, which are also called elementary row operations.Suppose that a system of linear equations is changed by anelementary row operation. Then the solutions of new system areexactly the same as the solutions of the old systems. By making rowoperations, we will try to simplify the shape of the system so that itis earier to find the solutions.Let us define two matrices to be row equivalent if one can beobtained from the other by a succession of elementary rowoperations. If A is the matrix of co-efficients of a system of linearequations, and B the column vector as above, so that (A, B) in theaugmented matrix and if (A1,B1) in row-equivalent of the system.AX = B are the same as the solutions of the system A1X=B1.munotes.in

Page 10

10Example 1.3.1 :Considerthe system of linear equations32 2 1223 4xy zxyzxy z       The augmented matrix is :3- 212111 - 1 - 1 - 22- 1304Subtract 3 times second row from first row :0- 545711 - 1 - 1 - 22- 1304Subtract 2 times second row from third row :0- 545711 - 1 - 1 - 20- 3528Interchange first and second row.11 - 1 - 1 20- 54 570- 35 28Multiply second row by-111 - 1 - 1 - 205 - 4 - 5 - 70- 3528munotes.in

Page 11

11Multiply second row by 3 and third row by 511- 1- 1- 201 51 21 5 - 2 10- 1 52 51 0 4 0   Add second row to third row.11 - 1 - 1- 20 15 -12 -15 -2100 1 3- 51 7The new system whose augmented matrix is the last matrixcoin be written as :215 12 15 2113 5 19xyzyzz       Now it we consider,19 513ttz1215 19 5 15 2113255 51195255 51 19 52195 1315 54194ytttyttxtt     This method is known as Gauss elimination method.Example 1.3.2 :Consider the system of linear equations.1234512512534512 3 4 51122112221xxxxxxxxxx xxxxxx x x x        munotes.in

Page 12

12The augmented matrix is :111111-1 -1 0 0 1 -1-2 -2 0 0 1 10011 1 - 11122 21Adding 2ndrow to 1strow, two times 3rdrow to 1strow andsubtracting last row from 1strow we get.1111110011 200022 330011 1 - 10011 10Subtracting twice the 2ndrow from 3rdrow, 4throw from 2ndrow and 5throw from 2ndrow we get1111110011 200000 - 130000 0 - 40000 0- 3The equations represented by the last two rows are :123451234543ox ox ox ox oxox ox ox ox ox  Which implies that the system is inconsistent.Exercise 1.2 :For each of the following system of equations, use Gaussianelimination to solve them.i)12 332 4xx x 12 312 322111 2 14xx xxx x munotes.in

Page 13

13ii)12 324xx x 12 312 323 1734 7xx xxx x iii)12340xxxx12 3 412 3 412342322232522 4xx x xxx x xxxxx    iv)12 3 433xx x x  12 3 412 3422 2 832 1xx x xxx xx   AnswerExercise 1.2 :i)51 7,, :48xxxx r u a l   ii)inconsistentiii)inconsistentiv)15 5 1 1,, , ,48 4 8xxx     ™™™™munotes.in

Page 14

142VECTOR SPACEUnit Structure :2.0Introduction2.1Objectives2.2Definition and examples2.2.1Vector Space2.2.2Sub space2.2.3Basis and Dimension2.0 INTRODUCTIONThe concept of a vector is basic for the study of functions ofseveral variables. It provides geometric motivation for everythingthat follows. We know that a number can be used to represent apoint on a line, once a unit length is selected. A pair of numbers (x,y)can be used to represent a point in the plane where as a triple ofnumbers (x,y,z) can be used to represent a point in 3 dimensionalspace denoted by3.T h el i n ec a nb ec a l l e d1-dimensional spacedenoted byor plane. 2-) dimensional space denoted by2Continuing this way we can define a point in n-space as (x1,x2,-z, xn). Hereis a set of real numbers and x is an element inwhich we write asx‰.2is a set of ordered pair and(x,y)2‰.T h u sXi sa ne l e m e n to fnornX‰means X = (x1,x2,…, xn). These elements as a special case are called vectors fromrespective spaces. The vectors from a same set or space can be addedand multiplied by a number. It is now convenient to define in generalan o t i o nw h i c hi n c l u d e st h e s ea sas p e c i a lc a s e .2.1 OBJECTIVESAfter going through this chapter you will be able to :Verify that a given set is a vector space or not over a field.Get concept of vector subspace.Get concept of basis and dimension of vector space.munotes.in

Page 15

152.2 DEFINITION AND EXAMPLESWe define a vector space to be a set on which “addition” and“readar multiplication” are defined. More precisely, we can tell.2.2.1 Definition : Vector SpaceLet (F, + , . ) be a field. The elements of F will be calledscalars. Let V be a non-empty set whose elements will be calledvectors. Then V is a vector space over the field F, if1.There is defined an internal composition in V called addition ofvectors and denoted by ‘+’ in such a way that :i)VB C‰for all,VBC‰(closer property)ii)B CC Bfor all,VBC‰(commutative property)iii)

B C H  B C Hfor all,,VBCH‰(Associate property)iv)an elementOV‰such thatOB Bfor allVB‰(Existence ofIdentity)v)To every vectorVB‰ av e c t o rVB ‰such that
B B (Existence of inverse)2.There is an external composition in V over F called scalarmultiplication and denoted multiplicatively in such a way that :i)aVB‰for allaF‰andVB‰(Closer property)ii)
aa aB C  B Cfor allaF‰and,VBC‰(Distributiveproperty)iii)
ab a b B  B Bfor all,,ab F‰VB‰(distributive property)iv)

,ab a b a b FB B  ‰andVB‰.v)1BBfor allVB‰and 1 in the unity element of F.When V is a vector space over the field F, we shall say thatV(F) is a vector space or sometimes simply V is a vector space. If Fis the fieldor real numbers, V is called a real vector space;similarly if F is Q or C, we call V as a rational vector space orcomplex vector space.Note 1 :There should not be any confusion about the use of theword vector. Here by vector we do not mean the vector quantitywhich we have defined in vector algebra as a directed line segment.Here weshall call the elements of the set V as vectors.munotes.in

Page 16

16Note 2 :The symbol ‘+’ is used for addition of vectors which is alsoused to denote the addition of two scalars in F. There should be noconfusion about the two compositions. Similarly for scalarmultiplication, we mean multiplication of an element of V by anelement of F.Note 3 :In a vector space we shall be dealing with two types of zeroelements. One is the zero elemen of F which is owl well known 0.Another is the zero vector in V i.e. if3,( 0 , 0 , 0 )VO.Example 1 : The n-triple space, Fn.Let F be any field and let V be the set of all n-tuples
12,, . . . ,nxx xBof scalars xiin F. If
12,, . . . ,nyy yCwith yiin F,the sum ofBandCis defined by
11,...nnxyxyB C .T h eproduct of scalar c and vectorBis defined by
12,, . . . ,ncc x c x c xB.This vector addition and scalar multiplication satisfy all conditionsof vector space.(Verfication is left for the students).For n = 1, 2 or 3, F =,2or3are basic examples ofvector space.Example 2 : The space of mxn matrices, Mmxn(F).Let F be any field and let m and n be positive integers. LetMmxn(F) be the set of all mxn matrices over the field F. The sum oftwo vectors A and B in Mmxn(F) is defined by
ij ijijAB A B  .The product of a scalar C and the matrix A is definedby
ijijCA CA.Example 3 : The space of functions from a set to a field.Let F be any field and let S be any non-empty set. Let V bethe set of all function from the set S into F. The sum of two vectors¦and g in V is the vector¦+ g i.e. the function from S into F,defined by



.nnnfg f f f g f  The product of the scalar c and the function¦is the functione¦defined by


nncf f cf f.For this example we shall indicate how one verifies that V isav e c t o rspace over F. Here\^::Vf f S Fl.W eh a v e ,munotes.in

Page 17

17



.fg s f sg s s S   ‰Since¦(s) and g(s) are in F and F isa field, therefore¦(x) +g(x) in also in F. Thus¦+g is also afunction from S to F. Therefore,fg V f g V ‰ ‰. Therefore V isclosed under addition.Associatvity of addition :We have




fg h x fg xh x ¯  ¢±(by def.)
() ()fx g x h x ¯ ¢±(by def)

()fx g x h x ¯ ¢±[
,( ) ,( )fxg xh x'are elements of F and addition in F isassociative]




fx g hxfg h x  ¯ ¢±

fghf g h=  Commutativity of addition :We have



fg x f xg x 

gx f x ['addition is commutative in F]

¦gx fggf=  Existence of Additive identity :Let us define a functionl:OS Flsuch thatl
Ox Ox S ‰.ThenlOV‰and it is called zero function.We havel


l
() ()fO x f xO x f xOf x   lfOf= =The functionlOis the additive identity.Existence of additive inverse :Let.fV‰Let us define a function:fS Flby

fx fx x S ¯   ‰¢±.T h e nfV‰and we have




ff x f x f x ¯ ¯   ¢±¢±munotes.in

Page 18

18



l
fx fxfx fxOO x ¯ ¢±
lff O= =the function-¦in the additive inverse of f.Now for scalar multiplicationif F‰andfV‰,t h e n


,xScf x cf x‰.Now
fx F‰andcF‰. Therefore c¦(x) is in F. Thus V isclosed with respect to scalar multiplication.Next we observe thati)IfcF‰and,fg V‰then





cf g x c f g x cfx gx ¯   ¯  ¯   ¢± ¢ ± ¢±() ()() ( )() ( )cF x cg xcf x cg x  ()cf g c f c g=  ii)If c1,c2‰Fa n dfV‰,t h e n


1212¦()cc xc c f x ¯  ¢±1212() ()() ( ) () ( )cf x cf xcf x cf x 
12 1 2cc fc fc f=  iii)If12,cc F‰andfV‰then




121212cc f x cc f x c c x ¯ ¯ ¢±¢±



1212¦cc fxcc x ¯¢± ¯¢±

12 1 2¦cc f c c=iv)If 1 is the unity element of F andfV‰,t h e n



11¦1fx x f xff=Hence V is a vector space over F.munotes.in

Page 19

19Example 4 : The set of all convergent sequences over the field ofreal numbers.Let V denote the set of all convergent sequences over thefield of real numbers.Let\^\^12 ,,, . . . , . . . . . ,nnB B B B  B\^\^12 ,,, . . . , . . . . .nnC C C C  Cand\^\^12 ,, ,..., .....nnH H H H  Hbe any three convergent sequence.1.Properties of vector addition.i)We have\^\^nnB C B C\^nnB Cwhich is also aconvergent sequence. Therefore V is closed for addition ofsequences.ii)Commutativity of addition : We have\^\^nnB C B C \^\^\^\^nn n n n nB C C B C B  C Biii)Associativity of addition : We have
\^\^\^nn n ¯B C H  B C H¢±\^\^
\^
\^\^\^\^
nn nnn nnn nnnnB C HB C HB C H ¯B C H¢±B C Hiv)Existence of additive identity : The zero sequence\^\^00 , 0 , . . , 0 , . .is the additive identity.v)Existence of additive inverse : for every sequence\^nBasequence\^nBsuch that\^\^\^\^0nnn nB  B B  B.2)Properties of scalar multiplication :i)Leta‰.T h e n\^\^nnaa aB B  Bwhich is also aconvergent sequence becauselim limnnhhaakBkBB B.Thus V is closed for scalar multiplication.munotes.in

Page 20

20ii)Leta‰and,VBC‰,t h e nw eh a v e
\^\^\^nnnnaaa ¯B C  B C  B C¢±
\^\^\^\^\^\^nnnnnnnnaaa a aaaa aB CB CB CB C B Ciii)Let,ab‰andVB‰,

\^
\^nnab ab ab B  B B\^\^\^\^\^nnnnnnab a bab a bB BB BB B B Biv)

\^
\^
\^nnnab ab ab a bB B  B  B\^\^
nnab a b ab ¯B  B B¢±v)\^\^\^11 1nnnB B  B  B BThus all the postulates of a vector space are satisfied. HenceV in a vector space over the field of real numbers.Check your Progress :1.Show that the following are vector spaces over the field.i)The set of all real valued functions defined in some interval[0,1].ii)The set of all polynominals of degree at most n.Example 5 :Let V be the set of allpairs (x, y) of real numbers and let F bethe field of real numbers. Let us define


111,, , 0xy x y x x  (, )cx y(, 0 )cx.Is V with these operations, a vector space over the field?Solution :If any of the postulates of the vector space in not satisfied,then V will not be a vector space. We shall show that for theoperation of addition of vectors as defined in this problem theidentity element does not exist. Suppose
11,xyis additive identityelement.munotes.in

Page 21

21Then we must have


11,,, ,xy x y xy xy   ‰
1,0xxº
,xywhich is notpossible unless y = 0. Thusno element
11,xyof V s.t.



11,,, ,xy x y xy xy V   ‰.As the additive identity element does not exist in V, it is not avector space.Exercise : 2.11)What is the zero vector in the vector space4?2)Is the set of all polynomials inxof degree2bav e c t o rs p a c e ? .3)Show that the complex field¢is a vector space over the realfield.4)Prove that the set
\^,: ,Va b a b‰is a vector space over thefieldfor the compositions of addition and scalar multiplicationdefined as




,, , , ,ab cd a cb d k ab k ak b  .5)Let V be the set of all pairs(x, y)of real numbers and let F be thefield of real numbers. Define



11 1 1 1,,,,xy x y x x y y cxy  
,ex y.S h o wt h a tw i t hthese operations V is not a vector space over.2.2.2 Definition : Vector subspaceLet V be a vector space over the field F and letWC V.T h e nWi sc a l l e das u b s p a c eo fVi fWi t s e l fi sav e c t o rs p a c eo v e rFw i t hrespect to the operations of vector addition and scalar multiplicationin V.Theorem 1 :The necessary and sufficient condition for a non-empty subsetWo fav e c t o rs p a c eV(F) to be a subspace of V is,ab F‰and,Wa b WBC ‰ º B C‰Proof :The condition necessary :If W is a subspace of V, by the definition it is also a vectorspace and hence it must be closed under scalar multiplication andvector addition. Therefore,aF W a W‰B ‰º B ‰and,,bF W b W‰C ‰ºC ‰and,aW b Wab WB‰ C‰ º B C‰.H e n c ethe condition is necessary.munotes.in

Page 22

22The condition sufficient :Now W is a non-empty subset of V satisfying the givencondition i.e.,ab F‰and,Wa b WBC‰ º B C‰.T a k i n g1, 1abwe have,WWB C‰ BC‰.T h u sWi sc l o s e dw i d e ra d d i t i o nt a k i n g1, 0ab we haveWWB ‰ B ‰. Thus additive inverse of eachelement of W is also in W.Taking,ab00,w eh a v et h a ti fWB‰,WB B‰00Wº ‰00Wº‰0.Thus the zero vector of V belongs to W which is also the zerovector in W.Since the elements of W are also the elements of V, thereforevector addition will be associatine as well as commutative in V.Now takingC0,w es e ethat if,ab F‰andWB‰,t h e nal WB ‰0i.e.aWB ‰0i.e.WB‰0So W is closed under scalarmultiplication.The remaining postulates of a vector space willhold in Wsince they hold in V of which W is a subset. Thus W(F) is a vectorspace. Hence W(F) is a subspace of V(F).Example 5 :a)If V is any vector space V is a subspace of V. The subsetconsisting of the zero vector alone is a subspace of V, called the zerosubspace of V.b)The space of polynomial functions over the field F is asubspace of the space of all functions from F intoF .c)The symmetric matrices form a subspace of the space of allnxnmatrices over F.d)An nxn matrix A over the field¢of complex numbers isHermitian ifkjjkAAfor each j, k, the bar denoting complexconujugation. Azxzmatrix is Hermitian if and only if it has the formz x iyx iy ¯ ¡°¡°X¢±wherex, y, zandware real numbers.munotes.in

Page 23

23The set of all Hermitian matrices is not a subspace of thespace of allnxnmatrices over¢.F o ri fAi sH e r m i t i a n ,i t sd i a g o n a lentries A11,A22are all real number but the diagonal entries of iA arein general not real. On the other hand, it is easily verified that the setofnxncomplex Hermitian matrices is a vector space over the fieldwith the usual operations.Theorem 2 :Let V be a vector space over the field F. The intersection ofany collection of subspaces of V is a subspace of V.Proof :Let\^Wabe a collection of subspaces of V and letWWabe theirintersection. By definition of W, it is the set of allelements belonging to every Wa.S i n c ee a c hWais a subspace, eachcontains the zero vector. Thus the zero vector is in the intersectionWa n ds oWi sn o n-empty. LetbWC‰and,ab F‰.S o ,b o t h,BCbelong to each Wa.B u tWais a subspace of V and henceWab aB C‰.T h u sWabB C‰.S oWi sasubspace of V.The above theorem follows that if S is any collection ofvectors in V, then there is a smallest subspace of V which containsS,that is, a subspace which contains S and which is continued inevery other subspace containing S.Definition :Let S be a set of vectors in a vector space V.Thesubspace spannedby S is defined to be the intersection W of allsubspace of V which contain S when S is a finite set of vectors,\^12,, . . . .nSBB B,w es h a l ls i m p l yc a l lWt h es u b s p a c es p a n n e db ythe vectors12,, . . . ,nBB B.Definition:L i n e a rCombination :Let V(F) be a vector space. If12,, . . . ,nVBB B‰,t h e na n yv e c t o r11...nnaaB B Bwhen12,, . . . .naa a F‰is called a linearcombination of the vectors12,, . . .nBB BDefinition:L i n e a rS p a n:Let V(F) be a vector space and S be any non-empty subset ofV. Then the linear span of S is the set of all linear combinations offinite sets of elements of S and is denoted by L(S). Thus we have\11 2 212()... : , ,..,nnnLS a a aSB B B B B B ‰and^12,, . . .naa a F‰.munotes.in

Page 24

24Theorem 3 :The linear span L(S) of any subset S of a vector space V(F) isas u b s p a c eo fVg e n e r a t e db ySi.e. L(S) = {S}.Proof :Let,BCbe any two elements of L(S).Then11....nnaaB B Band11....nnbbC C Cwhere,, ,, 1 , . . . , 1 , . . . .iiiiab F S i mj n‰B C ‰  .Ifa, lbe any two elements of F, thenabB C
11..mmaa aB B
11...nnbb b C C=
11..aaB
mmaa B
11bb C
..nnbb C=

11..maa aa mB B

11...nnbb bb C C.ThusabB Chas been expressed as a linear combination of afinite set11,..., , ,...,mnBB CCof the element of S. Consequently()ab L SB C‰.T h u s,ab F‰and,( )()bL S a b L SCº B C ‰.Hence L(S) is a subspace of V(F). Also each element of Sbelongs to L(S) as ifrSB‰,t h e n1rrBBand this implies that()rLSB‰. Thus L(S) in a subspace ov V and S in contained in L(S).NowifWi sa n ys u b s p a c eo fVcontainingS, then eachelement of L(S) must be in W because W is to be closed undervector addition and scalar multiplication. Therefore L(S) will becontained in W. Hence L(S) = {S} i.e. L(S) in the smallest subspaceof V containing S.Check your progress :1)Let
\^21 2,, 0 :,Wa aa a‰.S h o wt h a tWi sas u b s p a c e so f3.2)Show that the set W of the elements of the vector space3of theform (x + 2y, y,-x + 3y) where,xy‰is a subspaces of3.3)Which of the following are subspaces of3.i)
\^,2 ,3 : , ,xyzx y z‰ii)
\^,, :xxx x‰iii)
\^,, :,, a r e r a t i o n a l n u m b e r sxyz xyzmunotes.in

Page 25

25Definition:L i n e a rD e p e n d e n c e:Let V(F) be a vector space,A finite set\^1,...,nBBof vectorsof V is said to be linearly dependent ifthere exist scalars1,..., Fnaa‰not all of them O such that11.. 0nnaaB B.Definition:L i n e a ri n d e p e n d e n c e:Let V(F) be a vector space. A finite set\^1,...nBBof vectors ofV is said to be linearly independent if every relation of the form11..aB 0, ,10nn iiaa F i n aB ‰ bbº for each1inbb.Any infinite set of vectors of V is said to be linearlyindependent if its every finite subsetislinearly independent,otherwise it is linearly dependent.Exercises :2.21)Which of the following sets of vectors
1,..naaBinnaresubspaces ofn?
3np.i)allBsuch that10ap.ii)allBsuch that12 33aa a .iii)allBsuch that221aa.iv)allBsuch that120aa.v)allBsuch that2ais national..2)State whether the following statements are true or false.i)As u b s p a c eo f3must always contain the origin.ii)The set of vectors
2,xyB ‰for which22xyis asubspace of2iii)The set of ordered triads (x,y,z) of real numbers withx>0isas u b s p a c eo f3.iv)The set of ordered trials (x, y, z) of real numbers withx+y=0is a subspaces of3.3)In3,examine each of the following sets of vectors for lineardependence.i)

\^2,1, 2 , 8, 4,8ii)


\^1, 2, 0 , 0, 3,1 , 1, 0,1iii)

\^2,3,5 , 4,9, 25iv)


\^1, 2,1 , 3,1, 5 , 3, 4, 7munotes.in

Page 26

264)Is the vector (2,-5,3) in the subspace of3spanned by thevectors (1,-3,2), (2,-4,-1), (1,-5, 7)?5)Show that the set {1, x, x(1-x)} is a linearly independent set ofvectors in the space of all polynomials over.2.2.3 Basis and dimension :In this section we will assign a task to give dimension tocertain vector spaces. We usually associate ‘dimension’ withsomething geometrical. But after developing the concept of a basisfor a vector space we can give a suitablealgebraicdefinition of thedimension of a vector space.Definition: Basis of a vector spaceA subset S of a vector space V(F) issaidto be a basis ofV(F), ifi)S consists of linearly independent vectors.ii)Sgenerates V(F) i.e. L(S) = V i.e. each vector in V is a linearcombination of a finite number of elements of S.Example 1 :LetnV, If
1,.....,nnxx x‰we callxj,t h ei t hc o-ordinate ofx.. Let ei:(0, …, 0, 1, 0, .. 0) be the vector whoseit isco-ordinate in 1 and others are 0. It is easy to show that\^1iei nbbis a basis of V. This called the standard basis ofn.Example 2 :The infinite set\^21, , , ..., , ..nSx x xis a basis of the vectorspace F[x] of polynomials over the field F.Definition :FiniteDimensionalVectors Spaces. The vector spaceV(F) is said to be finite dimensional or finitely generatedif thereexists a finite subset Sof V such that V = L(S).The vector space which is not finitely generated may bereferred to as an infinite dimensional space.Theorem 1 :Let V be a vector space spanned by a finite set of vectors12,, . . . .mCC C.T h e na n yi n d e p e n d e n ts eto fv e c t o r si nVisfinite andcontains no more thanmelements.munotes.in

Page 27

27Proof :To prove the theorem it suffices to show thateverysubset Sof V which contains more thanmvectors is linearly dependent. Let Sbe such a set. In S there are distinct vectors12,, . . .nBB Bwhere n > m.Since1,...mCCspanV, there exists scalarsAijin F such that1mjij iiAB Cœ. For any n scalars12,, . . . ,nxx xwe have111....nnnjjjxx xB B Bœ
111111nmji j ijjnmji j j ijimnij j iijxAxA xAxCC¬­ž­Cž­ž­žŸ®œœœœœœSince n > m, the homogeneous system of linear equation10,nij jjAxœ1imbbhas non trivial solution i.e.12,, . . .nxx xare not all0. So for11 2 212.... 0, , ,....,nnnxa x x x x x B B are not all 0. Hence Sin a linearly dependent set.Corollony 1 :If V is a finite-dimensional vector space, thenanytwo baresof V have the same number ofelements.Proof :Since V is finite dimensional, it has a finiteb a s i s\^12,, . . .mCC C.By above theorem every basis of V is finite and contains no morethan m elements. Thus if\^12,, . . . . ,nBB Bis a basis,nmb.B yt h esame argument.mnbHence m = n.This corollany allows us to define the dimension of a finitedimensional space V by dimV. This leads us to reformulate,Theorem 1 as fallows :Corollany 2 :Let V be a finite-dimensional vector space and letn= dim V. Then (a) any subset of V which contains more thannvectors in linearly dependent (b) no subset of V which containsfewerthan n vectors canspanV.munotes.in

Page 28

28Lemma.Let S be a linearly independent subset of a vector space V.supposeCis a vector in V which in not in the subspace spanned byS. then the set obtained by adjoiningCto S in linearly independent.Proof :Suppose1,....,mBBare distinct vectors in S and that11..0mmex e l B C .T h e n0lefor otherwise111..meell¬ ¬­­žžC  B  B­­žž­­žžŸ® Ÿ®andCis in the subspace spanned by S.thus11.. 0mmeeB B and since S is a linearly independent set eachei=0.Theorem 2 :If W is a subspace of a finite dimensional vector space Vevery linearly independent subset of W is finite and is part of a basisfor W.Proof :Suppose0Sis a linearly independent subset of W. if S is alinearly independentsubset of W containing0S.H e n c eSi sa l s oalinearly independent subset of V. Since V in finite dimensional, Scontains no more than dim V elements.We extend0Sto a basis for W, as follows. If0SspansW,then0Sis a basis for W and our job isdone. If0Sdoesnot span W,we use the preceding lemma to find a vectorC,i nWs u c ht h a tt h eset\^10 1SSC*is independent. If S1spans W, our work is over. Ifnot, apply the lemma to obtain a vector2Cin n such that\^21 2SSC*is independent. If we continue in this way them by atmost dim V steps.wer e a c has e t\^01,....mmSSC C*which is a basisfor W.Carollony 1 :If W is a proper subspace of finite-dimensionalvector space V, then W is finite dimensional and dim W < dim V.Proof :Let us consider W contains a vector0Bv. So there is a basisof W containingBwhich contains nomorethan dim V elements.Hence W is finite-dimensional and dim Wbdim V.Since W is aproper subspace, there is a vectorCin V which is not in W.munotes.in

Page 29

29AdjoiningCto any basis of W, we obtain a linearly independentsubset of V.Thus dim W < dim V.Theorem 3 :If W1,a n dW2are finite-dimensional subspaces of a vectorspaces V, then W1+W2is finite-dimensional and dim W1+d i m W2=dim
1212dim( )WW WW .Proof :12WWhas a finite basis\^11,..., , ,...,nmBB CCwhich is part of abasis\^11,..., , ,...,nmBB CCfor W, and part of a basis\^11,..., , ,...,nnBB HH.The subspaces W1+W2is spanned by the vectors\^111,..., , ,..., , ,....nmnBB CC HHand these vectors form an independentset. For suppose0iijjrrxyzB C Hœœœ.Thenrriijjzx yH B Cœœœwhich shows thatrrzHœbelongto W1.A srrzHœalso belongs to W2it follows thatppiizeH Bœœfor certain sectors1,...., .kccBecause the set\^11,..., , ,...,nnBB HHis independent, each of the scalars0rz.Thus0iijjxyB Cœœand since\^11,..., , ,...,nmBB CCin also anindependent set, each0ixand eachTj=0.T h u s\^111,..., , ,..., , ,....nmnBB CC HHis a basisfor W1+W2.F i n a l l y d i mW1+dimW2

km kn


1212dim dimkm k nWW WW  Example 1 :The basis set of the vector space of all 2X2 matrices over thefield F is10010010,,,00001001£² ¯  ¯  ¯  ¯¦¦¦¦¡° ¡° ¡° ¡°¤»¡° ¡° ¡° ¡°¦¦¢± ¢± ¢± ¢±¦¦¥¼Sot h ed i m e n s i o no fthat vectorspace is 4.munotes.in

Page 30

30Exercises :2.31)Show that the vectors


1231, 0, 1 , 1, 2,1 , 0, 3, 2B  B B form a basis for3.2)Tell with reason whether or not the vectors (2,1,0), (1,1,0) and(4,2,0) form a basis of3.3)Show that the vectors1(1,1, 0)Cand2(1, ,1 )iiC are in thesubspaceWo fc3spanned by (1,0,i) and (1+I, 1,-1), and that1Cand2Cform a basis of W.4)Prove that the space of allmxnmatrices over the field F hasdimensionmnbyexhibitingab a s i sf o rt h i ss p a c e .5)If\^123,,BBBis a basis of3()Vshow that\^12 23 31,,B B B BB Bis also a basis of3()V.AnswerExercises 2.11.(0,0,0,0)2. YesExercises 2.21.(i) not a subspace(ii) Subspace(iii) not a subspace(iv) not a subspace(v) not a subspace.2.(i) tpul(ii) false(iii) false(iv) tpul3.(i) dependent(ii) independent(iii) dependent(iv) independent4.no™™™™munotes.in

Page 31

313LINEAR TRANSFORMATIONUnit Structure :3.0Introduction3.1Objective3.2Definition and Examples3.3Image and Kervel3.4Linear Algebra3.5Invertible linear transformation3.6Matrix of linear transformation3.0 INTRODUCTIONIf X and Y are any two arbitrary sets, there is no obviousrestriction on the kind of maps between X and Y, except that it isone-one or onto. However if X and Y have some additionalstructure, we wish to consider those maps which in, some sense‘pruserve’the extra structure on the sets X and Y. A ‘lineartransformation’ pruserves algebraic operations. The sum of twovectors is mapped to the sum of their images and the scalar multipleof a vector is mapped to the same scalar multiple of its image.3.1OBJECTIVEThis chapter will help you to understandWhat is linear transformation.Zero and image of it.Application of linear transformation in matrix.3.2DEFINITION AND EXAMPLESLet U(F) and V(F) be two vector spaces over the some fieldF. A linear transformation from U into V is a function T from U intoVs u c ht h a t
TT( ) T( )al a lB C  B Cfor all,BCin U and for all a, bin F.munotes.in

Page 32

32Example 1 :The function

32T:VVldefined by

T, , ,abc ab,,abc‰.L e t


111222 3,, , ,,abc abc VBb Cx ‰. If,ab‰then


111222TT, , , ,ab a a b cb a b c ¯B C  ¢±









12 12212 1211 2211 2 2111222T, , ,,,,,,T, , T, ,TTaa ba ab bb ac bcaa ba ab bbaa ab ba bbaab ba baa b c ba b cab     B C=T is a linear transformation from
3Vto
2V.Example 2 : The most basic example of linear transformation is:nmTF Fldefined by1122..T...nnAa¬ ¬MM­­žž­­žž­­žž­­MMžž­­žž­­žž­­žž­­­­žž­­žž­­žž­­žž­­žž­­žž­­žž­­­­žž­­žž­­žž­­­­žžMMŸ® Ÿ®where A isaf i x e dm x nmatrix.Example3:Let V(F) be the vector space of all palimonies over ditf(x) =01aa x be a polymonial of...nnaxE V ni nt h einderminate F. Let us define
1112. .nnDf x a a x na x if n > 1and Df(x)= 0 if f(x) is a constant polynomial. Then thecorresponding D from V into V is a linear transformation on V.Example 5 :Let
Vbe the vector space of all continuous functions frominto. IffV‰and we define T by


Txofx f t¨dt x‰,then T is a linear transformation from V into V.Some particular transformation :1)Zero Transformation : Let U(F) and V(F) be two vector spaces.The functions T, from U into V defined by
0TB(zero vector ofmunotes.in

Page 33

33V)UB‰in a linear transformation from U into V. it is called zerotransformation and isdenoted iflO.2)Identity operator : Let V(F) be a vector space. The function Iform V into V defined by
IB B VB‰is a linear transformationfrom V into V, I is known as identity operator on V.3)Negative of alinear transformation : Let U(F) and V(F) be twovector spaces. Let T be a linear transformation from U into V. Thecorresponding-Td e f i n e db y


TTU ¯B  B  B ‰¢±is a lineartransformation from U into V.-Ti sc a l l e dt h en e g a t i v eo ft h el i n e a rtransformation of T.Some properties of linear transformation :Let T be a linear transformation from a vector space U(F) intoa vector space V(F). Theni)T (O) = O where O on the left hand side is zero vector of U andO on the right hand sideii)

TTUB  B B ‰iii)


TT T ,UBC  B  C B C‰iv)



11 2 211 2 2T...TT . . . Tnnnnaa a a aaB B B  B B Bwhere12,, . . . .nUBB B‰and12,, . . .naa a F‰3.2 IMAGE AND KERNEL OF A LINEARTRANSFORMATIONDefinition :Let U(F) and V(F) be two vector spaces and let T be alinear transformation from U into V. Then the range of T is the set ofall vectors in V such that
TC Bfor someBin U. This is called theimage set of U under T and by ImT, i.e.
\^TT:mIUB B ‰.Definition :Let U(F) and V(F) be two vector spaces and let T be alinear transformation from U into V. Then the kernel of T written askept T is the set of all vectorsBin U such thatT( )OB(zero vectorof V). Thus ker T =
\^:UT O VB‰ B  ‰,k e rTi sa l s oc a l l e dn u l lspace of T.munotes.in

Page 34

34Theorem 1:If U(F) and V(F) are two vector spaces and T is a lineartransformation from U into V then i) Ker T is a subspace of U (ii)ImTi sas u b s pace of V.Proof :i)ker
\^T: TUO VB ‰ B ‰SinceT( ) ,OO V‰therefore atleast0‰ker T. Thus ker T is a non-empty subset of U. Let12,BB‰ker T, Then

12,TO TOB B.Let,ab F‰.T h e n12abUB B‰and


1212TTlab a TB B  B B12ker TaO bO O O O V a b   ‰ = B B ‰.Thus Ker T is a subspaceii)Obviously ImTi san o n-empty subset of V.Let12,TmICC‰.T h e12,B BEU such that

11 2 2T, TB C B CThen

1212.T . Tab a bC C   B  B
12T.xb   BNow, U is a vector space.=12,UB B‰and,ab F‰12ab Uº B B ‰Consequently
12 1 2T.Im Tab d a bB   C C ‰Thus,ImTis s subspace of V.Theorem 2 : Rank nullify theorem.Let U(F) and V(F) be two vector spaces and T be a lineartransformation. Suppose U is finite dimensional. Then,dim dim dim ImUK e r T T   Proof :If\^0ImT,t h e nkerT Vand theorem is proved for thetrivial case.Let\^12,, . . . ,rvv vbe a basis ofImTfor1rp.Let12,, . . . ,rvv v U‰such that
iivT vLet\^12, , ...,qvv vbe the basis ofkerT.We have to show that\^1212, ,..., , , , ...,rqvv vvv vforms a basis of U.munotes.in

Page 35

35LetuU‰.T h e n
Tu I m T‰.H e n c e t here are real numbers12,, . . . ,rcc csuch that
11 2 2...rrTu vv vv vv  


11 2 2...rrvT u vT u vT u
11 1 2 2...rrTv u vu v u  =\^
11 2 2... 0rrTu vu vu vu   =\^11 2 2...rruv uv u v u k e r T  ‰This would again mean that there are numbers12, ,...,qaa asuch that\^11 2 2...rruv uv u v u  =11 2 2...qqau au a u  i.v.11 2 211 2 2......rrqquv u v u v u a u a u a u     So, u is generated by1212,, . . . ,,,, . . . ,rquu uu u u.Next to show that these vectors arelinearly independent, let1212,, . . . ,,,, . . . ,rqxx xyy ybe the real numbers, such that11 2 211 2 2...... 0rrqqxu xu xu yu yu y u Then
00T
1111......rrqqTx u xu y u yu     


1111......rrqqxT u xT u T y u y u  11... 0rrxv xv But1,...,rvvbeing basis ofImTare linearly independent.So,12,..., 0rxx x  .=,1... 0qqyu y u By the same argument12... 0qyy y   So,121,, . . . ,,, . . . ,rquu uu uare linearly independent.Thus,dimUr q dim Im dim kerTT  Example 1 :Let33:Tlis defined by

,, 2 , , 2Txy z x y z y z x y z     .munotes.in

Page 36

36Let us show that T is a linear transformation. Let us also findker , ImTT,t h e i rb a s e sa n dd i m e n s i o n s .To check the linearity let,ab‰and

11 1 2 22,, , ,,xyz xyz    ‰


11 1 2 22,, ,,Tax y z bx y z 
12 12 12,,Ta x b x a y b y a z b z   12 1 2 1 2 1212 1222,,ax bx ay by az bz ay by az bz ax bx    
12 1 222ay by az bz  



11 122 2 1 12222,,ax y z bx y z ay z by z  


11 1 2 2 222ax y z bx y z  
11 1 1 1 1 1 1 2 2 2 2 2 22, , 2 2 , ,ax y z y z x y z bx y z y z x       
222yz

11 122 2,, ,,aT x y z bT x y z   .Hence, T is a linear transformation.Now,
,, k e rxyz T‰iff

,, 0 , 0 , 0Txy z i.e. iff

2, , 2 0 , 0 , 0xy z y z x yz        This gives us20020xy zyzxy z     By second equationyzsubstituting in third equation20 3xz z x z    º   31 1xyz=  

,, 3 , 1 , 1xyz z  
\^ker 3, 1,1 :Tz z=    ‰So,kerTis generated by
3, 1,1 .H e n c e i t s b a s i s i s
\^3, 1,1 anddimension is 1.Now,

,, 2 , , 2Txyz x y z y z x y z     


1, 0, 1 2, 1, 1 1, 1, 1xyz         But


1, 1, 2 3 1, 0, 1 1 2, 1, 1       ,munotes.in

Page 37

37Hence,




\^,, 1 , 0 , 1 2 , 1 , 1 3 1 , 0 , 1 1 2 , 1 , 1Txyz x y z        



31 , 0 , 1 2 2 , 1 , 1xz y    =Basis of

\^Im 1, 0,1 , 2,1,1T    and its dimension is 2.Exercise : 3.11.Let F be field of complex numbers and let T be the function from3Finto3Fdefined by

123 1 2 3 1 2 31 2,,2, 2 , 2Txx x x x x x x x x x    . Verify that T is alinear transformation.2.Show that the following maps are not linear.i)

33:; , , , , 0TTxy z xyl      ii)

2222:; , ,TTxy x yl    iii)

32:; , ,, 0FFxyz xl        iv)
2:; ,SS x y x yl     3.In each of the following find
1, 0Tand
0,1Twhere22:Tlis a linear transformation.i)



3, 1 1, 2 , 1, 0 1, 1TT  ii)



4,1 1,1 , 1,1 3, 2TT    iii)



1, 1 2, 1 , 1, 1 6, 3TT   4.Let33:Tlbe the linear transformation defined by

,, 2 , , 2Txy z x y z y z x y z     .F i n d a b a s i s a n d t h edimensionofImTandkerT.3.3 ALGEBRA ON LINEAR ALGEBRADefinition :Let F be a field. A vector space V over F is called on linearalgebra over F if there is defined an additional operation in V calledmultiplication of vectors and satisfying the following postulates.1.V, VBC‰ B C‰2.

,, VBC HB CH  BCH ‰3.
BC H B C B Hand
,, VB C HBH CHB C H‰4.


,Vec eBC  H CB C B C‰andeF‰munotes.in

Page 38

38If there is an element 1 in V such that11 V ,BBBB‰thenwe call V a linear algebra with identity over F. Also 1 is then calledthe identify of V. The algebra V is commutative if,VBCCBB C‰Polynomials :Let T be a linear transformation on a vector spaceV(F). Then TT is also a linear transformation on V, we shall write T1=Ta n d T2= TT. Since the product of linear transformation is anassociative operation, therefore ifmis a positive integer, we shalldefine Tm=T T T…u p t omtimes. Obviously Tmis a lineartransformation on V. Also wedefine T0= I (identity transformation).Ifmandnare non-negative integers, it can be easily seen thatmn m nTT T and
nmm nTT,T h es e tL ( V , V )o fa l ll i n e a rtransformation on V is a vector space over the field F. If20112, ,....,.. ( , )nnaa a n e F a T a T a T L V V  ‰i.e. P(T) is also a lineartransformation on V because it is a linear combination over F ofelements of L(V,V). We call P(T) as a polynomialin lineartransformation T. The polynomials in a linear transformation behavelike ordinary polynomials.3.4 INVERTIBLE LINEAR TRANSFORMATIONDefinition :Let U and V be vector spaces over the field F. Let T bea linear transformation from U into V such that T is one-one onto.Then T is called invertible.If T is one-one and onto then we define a function from Vinto U, called the inverse of T and denoted by T-1as follows :LetCbe any vector in V. Since T is onto, thereforeVUC‰ ºB‰such that
TB C.AlsoBdetermined in this way is a unique element of U becauseTi so n e-one and therefore0,UBB ‰and

00TTBv B º C  Bv Bwe define
1TCto beB.T h e n1:TVVlsuch that

1TTC B” B C. The function1Tisitself one-one and onto.munotes.in

Page 39

39Properties :1.1Tis also a linear transformation from V into U.2.Let T be an invertible linear transformation on a vector spaceV(F). Then1TT=I=T1T.3.If A, B and C are linear transformations on a vector space V(F)such that AB = CA = I, then A is invertible and A-1=B=C .4.Let A be an invertible linear transformation on a vector spaceV(F). The A possesses unique inverse. (The proof of the aboveproperties are left for the students)Example :If A is a linear transformation on a vector space V such that20AA I , then A is invertible.20AA I ,t h e nA2-A=-I. first we shall prove that A isone-one.Let12,.VBB‰ Then

12AAB B

















1222122211222212121212 1 2AA AAAAAA AAAA AAIIII ¯  ¯ºB B¢± ¢±ºB BºB v B B BºB Bº B  B ¯  ¯º B  B¢± ¢±ºB B ºB B=Ai so n e-one.Now to prove that A is onto LetVB‰.T h e n
AVB B ‰.We have


2AA A A ¯B B  B  B¢±


2AAI BB  BThus
VA VB‰ ºB B ‰such that
AA ¯B B B¢±=Ai so n t o .Hence A is invertible.munotes.in

Page 40

40Check your progress :1.Show that the identity operator on a vector space is alwaysinvertible.2.Describe34:Tlwhich has its range the subspace spannedby the vectors (1,2,0,-4), (2,0,-1,3)3.Let T and U be the linear operators on2defined by
,( , )Ta b b aand(,) (,)Ua b a b.G i v er u l e sl i k ethe one definingT and U for each of the transformations U + T, UT, TU, T2,U24.Show that the operator T on3defined by T (x,y,z) = (x+z, x-z,y) is invertible.3.5 REPRESENTATIONOF TRANSFORMATION BYMATRICESMatrix of a linear transformation :Let U be ann-dimensional vector space over the field F and LetVb ea nm-dimensional vector space over F. Let\^1,....,nBa a  and\^1,....,nBC Cbe ordered bases for U and V respective. Suppose T isa linear transformation from U into V is a basis of into V.Now for
,jjUT VB‰ B ‰and .
11 2 2....jj jmj mTa a a=B C C C1nij iiaCœThis gives rise to anmxnmatrix[]jjawhose jth columnrepresents the numbers that appear in the presentation of
jTBas acombination of elements of B. Thus the first column is<>112111 21 11,, . . . . ,Tmmaaaa aa¬­ž­ž­ž­ž­ž­ž­ž­­ž­ž­ž­žŸ®,t h esecond column is (a12,… . ,am2)Tand soon. We call [aij] the matrix of T with respect to the ordered basis B,B of U, V respectively. We will denote the matrix so induced by
1.BBmT ¯¢±.munotes.in

Page 41

41Example :Let41:( )TP lgiven by

1234 1 3 2 4,,,.Txx xx x x x x x The basis of4be



\^11,1,1,1 , 1,1,1, 0 , 1,1, 0, 0 , 1, 0, 0, 0Band thatof1()Pbe\^21, 1 .Bx x 
(1,1,1,1) 2 2 2(1 ) 0(1 )31(1,1,1, 0) 2 (1 ) 122(1,1, 0, 0) 1 1(1 ) 0(1 )11(1, 0, 0, 0) 1 (1 ) (1 )22Txx xTxx xTxx xTxx       Then
21312122110022BBmT¬­ž­ž­ž­ž ¯­ž­¢±ž­ž­­ž­žŸ®3.6MATRIX OF SUM OF LINEARTRANSFORMATION :Theorem :Let1:TV Wland2:TV Wlbe two lineartransformation. Let\^11 2,, . . . ,mBVV Vand\^21 2,, . . . ,mBXX Xbe thebases of V and W respectively.Then


1112221212BBBBBBmT T mT mT ¯   ¯   ¯  ¢± ¢ ± ¢ ±Proof :For
1,iiVT WV‰ V ‰and
2iTWV‰.S i n c e\^21 2,, . . . ,nBXX Xis the basis of W,
11niij jjTa wV  œand
21niij jjTb wV  œ;1,...., .imNow



1212jjjTT T T V  V V
nNij j ij jnijinij ij jjiwij jjiaW bwab wcw  œœœœmunotes.in

Page 42

42Whereij ij ijca b 


2221111212ij ij ijTT Tij ij ijBBBBBBcacca bmT T mT mT ¯  ¯ ¯= ¡° ¡°¡°¢± ¢±¢± ¯  ¯  ¯º ¡° ¡° ¡°¢± ¢± ¢± ¯   ¯   ¯=  ¢± ¢ ± ¢ ±Matrix of scalar multiplication of linear transformation :Theorem :Let:TV Wlbe a linear transformation. Let\^11,...,mBV Vand\^21 2,, . . . ,mBXX Xbe the bases of V and Wrespectively.Then

2211,.BBBBmk T kmT k ¯  ¯‰¢±¢ ±For
,iiVT WV‰ V ‰and B2is the basis of W.So,
1;1niijjTa i mV X  bbœNow

1;niiijjTk k T k aV V Xœ
11;;nijjnijjkabXXœœWhereij ijbK a

2211.TTijijBBBBbk amk T kmT ¯  ¯º¡° ¡°¢± ¢± ¯  ¯º¢±¢ ±Matrix of composite linear transformation :Theorem :Let:TV Wland:SW Ulbe two lineartransformations. Let\^11,...,mBV V,\^21,...,nBX Xand\^31,...,kBu ube the bases of V, W and U respectively.Then


222111BBBBBBmS T mS mT ¯   ¯   ¯¢± ¢ ± ¢ ±munotes.in

Page 43

43Proof :ForiVV‰andjwW‰,
iTWV‰and
jSw U‰
1niij jjTa w=V œ
1kiir rrSw b uœ
1.( ( ) ) ;niiij jjST S T S a w¬­ž­V V ž­ž­žŸ®œ1()nij jjaSwœ

1111111nkij jrjjpnkij jr rjjknij jr rrjnir rrab uab uab ucu¬­ž­ž­ž­ž­žŸ®œœœœœœœ
Where1nirij jrjca bœWhich is the(i, p)thelement of the matrix.<><>ir ij jrTTTir jr ijca bcb a ¯  ¯¡° ¡°¢± ¢± ¯  ¯¡° ¡°¢± ¢±


222111.BBBBBBmS T mS mT ¯   ¯   ¯=¢± ¢ ± ¢ ±Example :22:Tland22:Slbe two liner raw formations definedbyT( X ,Y )=( X+Y ,X–Y)andS(X, Y) = (ZX + Y, X + 2Y)Let in basis is{(1, 2) , (0, 1)}.munotes.in

Page 44

44ThenT(1, 2) = (3,-1) = a(1, 2) + C(0, 1)which impliesa=3 ,b=-7T(0, 1) = (1,-1) =(1, 2) (0,1)pq  which implies1, 3pq So<>()BBmT ¯¡°¡°¢±31-7 -3Similarly<>()BBmS ¯¡°¡°¢±41-3 0<><><>(.) () ()BBBBBBmS T mS mT  ¯   ¯¡° ¡ °¡° ¡ °¢± ¢ ±31 4 1-7 -3 -3 0 ¯¡°¡°¢±s1-q -3(SOT) (X, Y) = S(T(X,Y) = S(n+y, n-q)=( 3x+y,3x–y)By above way are can find<>()BBmS O Tand verify the result.Exercise 3.21. Let33:Tland33:Slbe two lincer transformationsdifined by(, , ) (, 2, 3)Txyz x y z   and(, , ) ( , , ) .Sxyz x yy zz x  Let{}.B(1 , 0 , 0 ) ( ,0 , 1 , 0 )( ,0 , 0 , 1 )Verify that<><><>() ( )( )BBBBBBmS O T ms mT 2. Let221:Tland222:TlBe two linear transformations defined by1(, ) ( 2 3, 2)Tx y x y x y      and2(, ) ( , ) ,Tx y x y x y    {)}B(1 , 0 )( ,0 , 1be the basis of2.S h o wt h a tB<><><>1212()( )( )BBBBBBmT T mT mT    munotes.in

Page 45

45RANK OF MATRIX andlinear of transformation :Row Space and column space Let is consider a matrixAas follows463045azA ¯¡°¡°¢±We can considerAas a matrix of two vectans in4an as amatrix of four rentors in2.We will consider linear span of two rectons i.e.,pWL{(4 , 6 , 9 , 2 ) ( ,3 , 0 , 4 , 5 )}.It is called naw space of matrixA.Similany colum space of Ais represented bycWL{(4 , 3 )( ,6 , 0 )( ,9 , 4 )( ,2 , 5 )}.Definition :Row space and column space : Let A be a matrix of ordermxn. Then the subspace ofngenerated by the now vectors of A iscalled the now space and the subspace ofmgenerated by thecalumn vectors of A is called the Column space of A.Example :A ¯¡°¡°¡°¡°¡°¢±110000111110Here now space is123{, , }RR RWhere1R23(1 , 0 , 1 , 0 ), R= (0 , 1 , 0 , 1 ), R= (1 , 1 ,1,0).T h es e t123{, , }RR Ris line on by independent.So now space = L {12 3}.Hence dim (now space) = 3.Now calumniation we have two vectans1234C= {1 , 0 , 1 }, C = {0 , 1 , 1 }, C = {1 , 0 ,1} ,C = { 0,1,0} .Here13CC and124{C , C, C}are line an by independent,=column space124{, , }LC C C  =dim (column space) = 3.munotes.in

Page 46

46The dim of the now space of a matrix A is called now rank ofA and the dim of the column space of A in called calumn rank.In enery matrix dim (now space) = dim (calumn space) if nowrank = calumn rank.Rank of zero matrix is new.Rank of identify matrix of order n is n.Rank oftARank of A wheretAis transpore of A.or mxn matrix A, now space is subspace ofn.=now rankbn.Similanery column rankbm=rank ofAbmin (m, n}.Example :A ¯¡°¡°¡°¡°¡°¢±231564789Here123(1, 2, 3), (4, 5, 6), (7, 8, 9)RR R    32 1RZ RR 12{, }RRare lance by independent=now rank = Z=rank of A = ZChange of Basis.Sometimes it is imperative to change the basis in representinga linear transformation T, because relative to this new basis, therepresentation of T may become very much simplified. We shall thrufare turn own attention to established are important resultconcerningthe matrix represecutations of a liner transformation when the basisin changed.Theaum :If two sets of vectors12 nX= { x, x , . . x }andxx x x  12 n={ , , . . . }are the bases of a rectum space,nVthese thereexists a nonsingular matrix B = [bij] such thati1 2 i 2nx= b i j x+ b x + . . + b n i x,i= 1 , 2 , . . . n .munotes.in

Page 47

47The nonsingular matrix B is defined as a transformation matrixin.nVProof :Suppose that X andxare two bases in.nVThenxi s (I = 1,2, .. n)can be expressed12,, . . . , . .nxn xi e  12.. ,ij zini nixb xb x b x    1, 2, , , ,inWhereijbs are realer.Let is define its matrix12[. . . ]nBb bb Where[. . . ]Ti ij zi nibb bb   Is an vector.We have to show that B is non singular.For realer12...xx we write,11....nnxx x x 11 1 1111(. . .) . . . (. .)nnnnnn nxbx b x x bx bx     111 1111(. . ) . . . ( . . )xxnnnnnn nxb x bxb xb              Now11... 0nnxb x b Implies11.. 0, 1, 2, ...,ini nxb x b i n      Substituting this in the aboveequation we have11... 0.nnxx x x  Since,1,..,uxxare lines only independent, it follows that1.... 0nxxand hence1,.. .nbbare linearly independent.=Bi sn o ns i n g u l a r .Theorem:Suppose that A is an mxn matrix of the linear transformation:nmTV Wl   with respect to the bases1{, . . . , }nXn x  andmunotes.in

Page 48

481{, . . . , } .myy y   IfÂin an mxn matrix of T with respect to diffluentbases1{, . . .}nxx x    and1,... }myy y   than there exist nonsingular matrix B and C of ordern and m respectively, such thatl1.AC A B Proof :If[]kiAain the matrix of T with aspects to the bases X and Y, wehave the lunation,11 2 2()...ii imi mTx a y a y a y      similarly,forl[] ,ijAa we can write,mmmmm12 2() , , , . . ,mmjTx a j a y a y      .By the previous formula thus exist non singular coordinatetransformation matrix[]ijBband[]kiCesatisfying.l12..ii j z jnj njb xb x b x  l12..ii j z imi myc yc y c y      Hencellll111()mmmji i i k i ncikTx a y a cyœœ œ  m11mmki ij kkccx y¬­ž­ž­ž­žŸ®œœAlternatively sinner T in liner, we have,l1() ( )wji j iiTx bTxœ T11nmij ki ninba yœœ11mnki ij nkiab y¬­ž­ž­ž­žŸ®œœ1..i e c a ABacA Bº  Ex. A Linear transformation32:Tlin defined by123 1 2 3 2 3(, , ) ( , )T x x x x x x zx x  Of the bares in3are1, 2 3v( x x ,x ) = { ( 1,0,0) (, 0, 1,0) (, 0,0, 1) }munotes.in

Page 49

49lmm123v( x , x , x ) ={ ( 2 , 2 ,1 ) (, 0 ,1 , 0 ) (, 1 , 0 ,1 ) }and these in2are12w(y , y )={(2,1),(1,1)}lmm12w( y , y ) ={ ( 1 , 0 ) (, 1 ,1 ) }Here we will find the matrix A w.p.t. the bases defined by Vand W, andlAw.p.t.vandlwof the linear transformation T. Alsowe have to determine non singhan matrixes B and C such thatl1.AC A B HereT( 1,0,0) = ( 1,0) =1( 2, 1) -1 ( 1, 1)T( 0, 1,0) = ( 1,2) = -1( 2, 1) + 3 ( 1, 1)ˑ 0 , 0 , 1 = 1 , 1 = 0  , 1 + 1 , 1 <> ¯¡°¡°¢±w1- 1 0∴ m 7 = -1 3 1i.e. T(V) = WA.Similarly considering the baseslVof3andlwof2we find<>ll ¯¡°¡°¢±wv0- 1 1m( T) = = A52 1ll.. ( )st T v wA  The matrix B and C are determined by the change ofrelationship aslandvV B ww c   ¯  ¯¡° ¡°¡° ¡°¡° ¡°¡° ¡°¢± ¢±201 100∴ Y = 9% ය  1 0 = 0 1 010 1 00 1B=201210101 ¯¡°¡°¡°¡°¢±l.ww c ¯ ¯  ¯¡°¡° ¡°¡°¡° ¡°¢± ¢±¢±11 1221 22aa11 21i∴ = aa01 11munotes.in

Page 50

50 ¯¡°¡°¢±10c=-1 -1We can such the resultlcA A BLINEAR FUNCTIONALS : DUAL SPACEDefinitionA linear transformation T from a vector space V one a fieldinto this fieldof scalars in called a linear functional on thespace V. The set of all function on V in a victor space defined as thedual space of V and in denoted by V.Example :Let V be the vector space of all mal valued function internalsover the intervalatbbb . Then the transformation:fvldefined by() 1 ()taxt x t d tl¨  is a linear functional onV.This mapping f assigns to each internals functionx(e)a real numberon the internal.atcbb Example :Let\^12,, . . . ,nxx x    be a basic of the n dimensional vectorspace v one.A n yv e c t o rxi nVcan be reprinted by11 2 2... ,nnxx x x x x x  whenixsone realans inwe nowconsider a fined vectorZinVand spurned it by11 2 2....nnzc n c x c x when'icsare sealers in.Denote by1212[. . . ] [ . . . ] ,TTnnwe ee a n d n x x x    the coordinatevectors of z and x respective by. Thus the linear transformation2:fvl  defined byt21 1 2 2nnf( x )= c x + c x +. . .+ c x =w uina linear functional on V. In fact,()fxlis abstained as an innerproduct of coadunate vectors of z and x. Restricting Z to be the basisvector,ixswe getZnumber of linear functionalnif( i = 1 , 2 ,ൺ, Q inVgiven by()xi ifn xbecause now theeliminating vectorTie= [ 0 , - 1 , . . . o ]is the co ordinate vector ofixwith respect to the basis vectors\^12,, . . . ,nxx x  and shouldreplaes W. This functional may be reconcile as a linearmunotes.in

Page 51

51transformation as V that maps each vector in V onto its i-th coordinates. Sulative to the respected basis. A note worthy property ofthus function is,() , , 1 , . . .xi j ijifx f i j n x          whenijxin thekpronceluers delta defined by,1,ijxi jijv      becausejeis the coordinate vector ofjxwith uspect to the basis\^12,, . . . ,nxx x  and.jiue a n d we Mover if x in a zero vecor in V, then0() 0fx the sealer zero in.Now we shall state and prove a very email they are on dualbasic.Theaum :Let\^12,, . . . ,nxx x  be a basis of the n-delusional vector space V onea field. Then the unique linear functional12,,xxx nfff  defined by() , , 1 , 2 , . . . ,xi j ijfx f i j n      farm a basis of the dual space V*defined as the dual basis of\^12,, . . . ,nxx xand any element f in V*can be expressed as11() . . . ( )xnx nff x f f x f    and for eachvector X in V, we have12 24,( ) ( ) . . . ( )xxxnxfx x f x x fx x      .Proof :We have to prove that (a)'xifsare linear by independent in V*.(i) Any vector f in v* can be expressed as a liner combination of'.xifsSuppose that far realer12,, . . .nxx x i n  we have11 2 2...nxnx n qxf x f x f f        Then11 2 20(... ) ( )xxnx n i ixf xf x f n f x          which, by taking intoaccount that0() , , 1 , 2 , . . . , ( )0ni j ijtn f i j n a n d f x              we get.0, 1, 2, ... .ixi n Hence,'nifsare linearly independent in V*.Suppose now that f is any linear functional in V*. Than we can findn sealers12,, ,naa ai n  satisfying() , 1 , 2 , . . . ,fx i a ii n   because f n pursuable a known liner functioned. For any vector11 2 2...nnxx n x x x x in V, because of the inanity of f, weget11 2 2() ( ) ( ) . . . ( )nnfx xfx x x x fx   munotes.in

Page 52

5211 2 2...nnxa xa x a 11 2 2() ()()nnnx uaf n af x f x          D  12 2(, . . ) ( ) .nnnn uaf af a f n      Since x is arbitrary and a combination of linear functional is again alinear functional, we have the rulation.11 2 2...xnnn nfa f a f a f   .Definition :Dual Basis :For any basis12{, , . . . }nxx x of a vector space V one a field,t h u sexists a unique basis12{, , . . . , }nff f  of V* such thatij i jf( x )= f ,i, j =1 ,2,..,nof the basis12 n{f , f, . . . , f}of V* is saidto be dual to the given basis12 n{x , x, . . . x} o f V .Suppose :2{(3, 2) , (1,1)}.BV   be a basis inWe will find a dual basisx1 x212{f , f } i n V * w h e n B = {x , x }.L e t ,11 2 2x=x x +x x i . e . ¯  ¯¡° ¡°¡° ¡°¢± ¢± ¯ ¯¡°¡°¡°¡°¢±¢±11122212xxi.e. = [x x ]xxx31=x21Which gives11 2 2 1 2x= x- x , x = - 2 x+ 3 xi. e.n1 1 2 n212f(x ) = x- x, f(x ) = - 2 n+ 3 xExample :2123{1 , 1 , }Let B n n t x t t=             in a basis in2()t3one.11 2 2 33()xt x x x x x x 2123(1) (1 ) ( )xx t x t t 
12 2 3 3() 2xx x x t x      212()oxt a at a t=        012 123 23,,xx axx axº  D            1012 212 32,,xaaa xaa xaº    =me obtain the dual basis as,0122 1 2 3 2,( ),( ) ( ) .nnfn a a afx a a f n a               munotes.in

Page 53

53AnswerExercise :3.13. (i)¬­ž­ž­žŸ®2,1 ( , -1, -1)3(ii)¬  ¬­­žž­­žž­­žžŸ® Ÿ ®21 1-, 1 , , - 333(iii)(1 , 0 , 1 )( ,4 , 2 )4.{(1, 0,1) , (2,1,1)}  in a basis of Inc(T) din(Im T) = 2.{(3, 1,1)} is a basis of kur T and dim (ku T) = 1™™™™munotes.in

Page 54

544DETERMINANTUnit Structure :4.0Introduction4.1Objective4.2Determinant as n-farm4.3Expansion of determinants4.4Some properties4.5Some Basic Results.4.6Laplaer Expansion4.7 The Rank of a Matrix4.8Gramer’s Rule4.0 INTRODUCTIONIn previous three chapters we have discarded about vcetansline on equations and lincer transformations. Every when we see thatwe need to check wheather the vectors are linearly independent ornot. In this chapter we mill develop a computational technique tofind that by using determinants.4.1 OBJECTIVEThis chapter will help you to know about determinantsand itsproperties.Expiring of determinants by various methods.Calculation of rank of a matrix using determinants.Existence and uniqueness of a system of equations.4.2 DETERMINANT AS N-FARMTo discuses determinants, we always consider a square metersDeterminant of a square meters in a value associated withthematrix.munotes.in

Page 55

55Definition :Define a determinant function as dut :(, )Mnlwhere(, )Mnis a collection of square meters of order n, such that(i)the value of determinant remain same by adding any multipleof 1stnow to i th now i.c.Det12(, , , , 1 ,ij i nRR R K R R R      12det ( , , .., ,.., )jnRR R Rfan i jv       (ii)the value of the determinant changes by sign by swapping anytwo rows i.e. det12(, , . . , , . . , , . . , )jj nRR R R R      12det ( , ,.. ,.., , .., )ji nRR R R R         (iii)if element of any now in multiplied by k then am value ofdeterminant in is k tines its original value i.c.Det12(, , . . , , . . , )inRR R R   N  12det ( , ,.., , .., )0inkR R R Rfar kv      (iv)det (I) = 1, where In an identify matrix.If is n-linear skin symmetries function on.. .(, )nn nxx xAM nl=‰       1212det det ( , ,.., )det det ( , ,.., )nnAR R Ror A C C Cº          WhichiRdenote i-th row oriCdenote i-th column of meters A.Eachniinn nRo r CAx x x‰=‰     =Determinant in an n-linear skew symmetric function from..nn nxx xt o    e.g. the function22x in give by,abad becd¬¬¬­ž­ ­žž­l­­žžž­­­žžž­­­žŸ®Ÿ®Ÿ®    there&abcd¬ ¬­­žž­­žž­­žž­­Ÿ® Ÿ®are ordered pairs from2each.Also,abab leilinearcd ¯¡°¡°¢±Skeen symmetric and alternating function.munotes.in

Page 56

564.3 EXPANSION OF DETERMINANTLet()ijAabe an arbitrary nxn matrix as follows11 12 11212ji nii i j i nnn n j n naaaaAa a a aaa aa¬­ž­ž­ž­ž­ž­ž­ž­­ž­žŸ®LetijAbe the(n-1) x (n-2) matrix obtained by deleting the i-throw and j-th column from A.11 1111 1211 12 1 1 1 1 111 21 1 1 1 112 1 1aa a a aij j naa a a aii j i i nAijaa a a aii i j i j i naa a a ann n j n j n n¬ ­ž­ž­ž­ž­ž   ­ž­­ž­ž­ž  ­ž­ž­ž­ž­­ž Ÿ®We will give an expressian for the determinant of an nxnmatrix in terms of determinants of (n-1) x (n-1) matrix. We define,211detdet ( ) ( 1) det ( )iiniiin inaaaA  $            This sum is called the expansion of the determinant accordingto the i-th expand det A according to the first now,11 11 12 1213 13 14 14det det ( ) det ( )det ( ) det ( )Aa A a AaA aA        Where A in an 4 x 4 matrix.For example det¬­ž­ž­ž­ž­ž­ž­­žŸ®32- 1A= 5 0 213- 4Then02 52 5 0det 3 2 ( 1)34 14 1 3A   = 3(-6)-2 (-20-2)-1 (15)=-18 + 44–15 = 44–33 = 114.4 SOME PROPERTIESThe determinant satisfies the following properties :1.As a function of each column vector, the determinant inlinear, i.e. if the j-th columnjcis equal to a sum of two, columnvectors, sayjcc c thenmunotes.in

Page 57

571111(, . . , , , , . . )jjnDc c c c c c     11111(, . . ,, . . , )(, . . , , , . . , )jnjnDc c cDc c c c      F         2. If two columns are equal i.e. if,jkc c where j kvhun det(A) = 03. If one adds a sealer multiple of one column to another three thevalue of the determinant does not change i.e.1(, . . , . . . , ) ( 1 , . . , )kinnDc c x c c Dc c       We can prove this profile as follows.11(, . . , , . . )( , .. , .. , .., )kxi njk nDc c c cDc c c c         11(, . . , . . , , . . , )(, . . , . . , , . . )jjnjn nxd c c c cDc c c c              As the second determinant on right has same columns andhence its value is zero.All the properties stated above are valid for both row andcolumn operations.Using the above properties we can computer the determinantsvery efficiently. By using the property if asealer multiple of are now(or column) in added to another now then the value of thedeterminant does not change, we will by to make as many enemiesin the meters square to 0 and then expand.Example :21 203 141 1DFirst we willinterchange first two columns to keep first entryas 1.12 230 114 1D=  munotes.in

Page 58

58Next we will make first two entries of 2ndand 3rdcolumns aszero. So we subtract trick the first now from 2ndand 3rdcolumns.10 036712 1D  So if nc expand it along the first now nc get only a 2 x 2 determinantas.67(6 14) 2021D     Exercises 4.11. Computer the following determinates.30 1 2 04() 1 2 5( ) 1 3 5142 1 0 10iii 312 24 3() 4 5 1 () 13012 3 0 21iiiiv 2. Computer the following determinants.11 2 4 1 1 2 001 13 03 2 1()()21 1 0 0 4 1 231 25 31 5 7iii11 1 1 11 1 111 1 1 2 2 13()()11 1 1 4 419111 2 881 2 7iiiiv 4.5 SOME BASIC RESULTS(1)If k is constant and A is n x n matrix thusnKA k A whereKA is obtained by multiplying each element of A by K.munotes.in

Page 59

59Proof :Each term of the determinantsKAhas a factor k. This k can betaken out column from each now i.e... ( )nKA k x k x x k n thus AKA k A=   (2)If A and B are two n x n matrixAB A B ButAB A B v  This can be proved by simple examples.(3)11AAProof :,AB A B considering=     111,BAw e g e tAA A A   11111IA AAAAAº   (4)ttAA w h e r e A  is transfer of matrix A.Proof :The determinant can be expanded by any now or column. LetAisexpanded using i-th now. This i-th now in i-th column intA.S ot h eexpansion remain same i.e. the value of the determinant remainsame.tAA= (5)If A has a now (or column) of zeros then0.Amunotes.in

Page 60

60Proof :By expanding along the new now, the value of the determinantbecomes zero.(6)If A and B are square meters then1BAB AProof :111BAB B A B B ABA     (7)The determinants is linear in each now (column) if the otherrows (column) are fined.123123123ak a k akbb bcc c 123123 223123 123aa a kkkbbb bbbcc c ccc  12123123 2 2 3123 1 2 3aaa a a akb kb kb k b b bcee c c c4.6 LAPLAER EXPANSIONDefinition :Mainor–The minor of an element of a square matrix in thedeterminant obtained by deleting the now and column which interestin that element.So minor of the elementijais obtained by deleting i-th nowand j-th column, deleted by.ijMmunotes.in

Page 61

61For example, let123456789A ¯¡°¡°¡°¡°¢±Then,Minor of 1 is1156389M  Minor of 8 is32136.46Metc   Laplaer expansion–1(1 )nijij ijJAaM œ  If wc consider i-th previous example.111256463,68979MM       132145237,67889MM     2223131212,67978MM     313223133,65646MM       3312345M  =BY Laplaer Expansion11 1 12 12 13 1321 21 22 22 23 23Aa M a M a MMa Ma M    D       31 1 2 32 33 331 ( 3) 2 ( 6) 3 7 4 (6)aM a M aMxxx x             5(1 2 ) 6(6 ) 7 (3 ) 8 (6 )(3 )xxax            31 2 2 12 46 03 62 14 82 730                 munotes.in

Page 62

624.7 THE RANK OF A MATRIXTheorem :Let12,, . . . ,ncc c  be column vectors of disunion n. They are linear bydependent if and only it.12det ( , ,... ) 0ncc c  Proof :Let12,, . . . ,ncc c  are linear by dependent. So there exists a solution.11 2 2... 0nnxc x e x c with numbers1,...,nxx not all 0.Let0jxv11 1 1... ...jjjj n nxc x c x c xc=          111......njnjjnkkkxxipc c cxxackj vœ     ThusDet1() d e t (, . . . , , . . . , )inAc c e    11det ( ,..., ,... )nkk nkca c cœ    11det ( ,..., ,..., )nkk nkca c ckjvœ         Wherekcoccurs in the j-th place. Butkcalso occurs in the k-thplace andkjv.H e n c et w oc o l u m no ft h ed e t e r m i n a n ti ne q u a l .S othe value of the determinant in 0.Conversely :If all the columns of a matrix A in linear by independent, thenthe matrix A is now equivalent to a triangular matrix B as11 1222000inznnnbb bBb bb¬­ž­ž­ž­ž­ž­ž­­žŸ®munotes.in

Page 63

63When all the diagonal elements11 22,, . . . , 0 .nnbb bv   But by the rule of expiations. Det11 22() . . . , 0 .nnBb b bv    Now, B is obtained by some operation like multiplying a rowby non zero scalar.Which multiplies the determinant by this scalar; orinterchanging rows, which multiplies the determinant by-1,o radding a multiple of are now to another, which does not charge thevalue of the determinant sincedet ( ) 0Bv  it follows that det() 0 .AvHence the proof.Corollary :If12,,ncc c are column vectors ofnsuch that12(, , . . . ) 0 ,nDc c cv  and if B in a column vector, then thus existnumber1,. . .,nxx  such that11... .nnxc x c B Thesenumbers are uniquely determined by B.Proof :12(, , . . . , ) 0nDe c c=v     12,, . . .ncc c are linear by independent and hens from a basis of.nSo,nBEcan be written as a linear combination of12,, . . . ,ncc c  forsome unique numbers12,, . . . ,.nxx x  11 2 2...nnxc x c x c B= for unique12,, . . . , .nxx x  The above corollary can be placed an important feature ofsystem of linear equations like :If a system of nlinear equations in n unknowns has a matrixof coefficients whose determinant in not zero, then the system has aunique solution.Now recall that the rank of a matrix in dimension of rowspace on column space i.e. number of independent now of columnvectors in the matix.Let a 3 x 4 matrix is given and we have to find out its rank. Itsrank is at most 3. If we can show at last one 3 x 3 determinant frommunotes.in

Page 64

64the co-efficient matrix is non-new, we consider the rank of thematrix in 3. If all 3 x 3 determinants are new, we have to check 2 x 2determinants. If at last one of then is non-zero, we can conclude therank of the matrix in 2.For example let¬­ž­ž­ž­ž­ž­ž­­žŸ®3514A= 2 - 1 1 1542 5351 5142- 11 = 0 , - 111 = 0542 42 5One can check that any such 3 x 3 determinant from Ain zero.So, rank() 3 .AvNow35=- 1 3ำ2- 1No need to check other 2 x 2 determinant. We can concludethat rank (A) = 2.Exercise 4.21.Find the rank of the following matrix.¬¬­­žž­­žž­­žž­­žž­­žž­­žž­­­­žžŸ®Ÿ®31 2 5 1 1 - 12(i ) 1 2 - 1 2 (i i )2 - 2 0 211 01 2- 83- 1¬¬­­žž­­žž­­žž­­žž­­žž­­žž­­ž­ž­­­žž­­žž­­žž­­žžŸ®Ÿ®11 1 1 311- 111 - 11 - 243 2(i i i )(i v )1 -1 1 -1 -1 9 7 31 -1 -1 2 7 4 2 12. Find the values of the determinants using Laplaec expansion.¬ ¬­­žž­­žž­­žž­­žž­­žž­­žž­­­­žžŸ® Ÿ®246 203(i )7 8 3 (i i )5 1 0592 046munotes.in

Page 65

65¬  ¬­­žž­­žž­­žž­­žž­­žž­­žž­­­­žžŸ® Ÿ ®1- 2 - 3 320(i i i )- 2 3 4 (i v )1 1 33- 45 2 - 1 23. Check only uniqueness of the solution for the following systemsof equations.nzyny(i ) 2- y + 3= 9n+3 -z=43+ 2+ z = 1 0nzy(i i ) 2+ y - z = 5x-y+2 =3-x+2 +z= 1nzwnzw(i i i ) 4+ y + z + w = 1n-y+2 -3 =02+ y + 3+ 5 = 0n+ y-z-w=2yz wnz(i v ) x + 2 - 3+ 5 = 02+ y - 4- w = 1n+x+z+w =0-n- y-z+w =4[Hint : Just check that determinant of the co-efficient matrix is nonfew far the uniqueness ]4.8 GRAMER’S RULEDeterminants can be used to solve a system of linearequations.Thearim :Let12,, . . . ,ncc c  be column vectors such that12(, , . . . , ) 0 .nDc e ev   Let B be a column vector and12,, . . . ,nxx x  arenumbers such that11 2 2... ,nnxc x c x c B then for each J = 1,2, … , n.munotes.in

Page 66

661212(, ,,, )(, ,, )nnDc e B enjDe e e       Where B column vector replaces the columnjcin numerator ofjx.Proof :1212 1 1 2 2( , ,..., ,... )( , ,...,... ),nnnDc c B cDc c xc n c x c              12 1 112 2 2( , ,... , ..., )( , ,..., ,..., )nnDc c xc cDc c xc c               12(, . . . , , . . . )jj nDc c xc c    12(, . . . , , . . . )nn ndc c x c c       11 2 1(, , . . . , , . . . , ).........nxD c c c c      1212(, , . . . , . . . )...........(, , . . . . , , . . . , )jjnnnnxD cc c cxD cc c c        In every term of this sam except the j-th term, two column vectorsare equal. Hence every term except the j-th term isequal to 0. so weget.1212(, , . . . ,, . . . )(, , . . . , , . . . )njjnDc c B cxD cc c c       1212(, , . . , , . . . )(, , . . . , , . . . )njjnDc c B cxDc c c c=         So we can salve a system of equations using above rule. Thisrule in known as Gramers rule.Example :xy znyz3+ 2+ 4= 12- y + z = 0x+2 +3 = 1By Gramer’s rulemunotes.in

Page 67

671240- 11123x=3242- 1 112 3314201113y=3242- 1412 332 12- 1012 1z=3242- 1112 312x=- ,y=0,z=55Exercise 4.31. Solne the following equation by Gramer’s rule.znzyxyi) x + y + z = 6 ii) x + y - 2 = -10x-y+z=2 2 -y+3 =- 1x+2 -z=2 4 +6 +z=2xzxzxyyzxzxyziii) - 2 - y - 3 = 3 iv) 4 - y + 3 = 2z- 3+ z = - 1 3 x + 5- 2= 32- 3= - 1 1 3+ 2+ 4=6munotes.in

Page 68

68AnswerExercise 4.11. (i)-42(ii)-114(iii)14(iv)-92. (i)-18(ii)-45(iii) 4(iv) 192Exercise 4.21. (i) 3(ii) 2(iii) 4(iv) 22. (i)-204(ii) 72(iii)-6(iv) 233. (i) unique(ii) unique(iii) unique(iv) unique.Exercise 4.31. (i) (1,2,3)(ii) (5,3,4)(iii) (-4,2, 1)(iv) (0,1,1)™™™™munotes.in

Page 69

Chapter 6Characteristic PolynomialChapter Structure6.1 Introduction6.2 Objectives6.3 Diagonaizable Linear Transformations6.4 Triangulable Linear Transformations6.5 Nilpotent Linear Transformations6.6 Chapter End Exercises6.1 IntroductionIn the following chapter we will attempt to understand the conceptof characteristic polynomial. In earlier two units of this course we haveseen that studying matrices and studying linear transformations on a fi-nite dimensional vector space is one and the same thing. Although thisac a s ei ti si m p o r t a n tt on o t et h a tw eg e td i ↵ e r e n tm a t r i c e si fw ec h a n g ethe basis of a underlying vector space. We also saw that although weget di↵erent matrices for di↵erent bases, corresponding matrices aresimilar. Being similar is an equivalence relation on the space ofn⇥nmatrices, it is interesting to seek bases of underlying vector space inwhich for a given linear transformation corresponding matrix is sim-plest in appearance and because of similarity being equivalence relationwe do not loose anything important as far as linear transformation isconcerned. Studying so called eigenvalues of a linear transformationaddresses the issue of such bases in which given linear transformationhas a matrix in simplest form. During the quest of finding such basesmentioned above we come to know various beautiful properties of lineartransformation and its relation to linear structure on the vector space.85munotes.in

Page 70

6.2 ObjectivesAfter going through this chapter you will be able to:•find eigenvalues of a given linear transformation•find basis of a vector space in which matrix of a linear transformationhas diagonal or at least triangular form•Properties of eigenvalues that characterize properties of linear trans-formationLetVbe a finite dimensional vector space over a field of scalarsF.LetTbe a linear transformation onV.Definition 16.Eigenvalue (also known as characteristic value) of a lin-ear transformationTis a scalar↵inFsuch that there exist a nonzerovectorv2VwithT(v)=↵v.a n y s u c hvis known as eigenvector corre-sponding to eigenvalue↵.C o l l e c t i o no fa l ls u c hv2Vfor a particulareigenvalue is a subspace ofVknown as eigen space or characteristicspace associated with↵.Theorem 6.2.1.LetTbe a linear transformation on a finite dimen-sional spaceV. Then↵is characteristic value ofTif and only if theoperatorT↵Iis singular.Proof.this is the proof of the theorem.Remark 6.2.1.Al i n e a rt r a n s f o r m a t i o ni ss i n g u l a ri fd e t e r m i n a n to fits matrix is zero.Thus↵is eigenvalue ofTif and only if determinant of matrix ofT↵Iis zero. We see this determinant is a polynomial in↵and hence rootsof the polynomialdet(TxI)a r ee i g e n v a l u e so fT.Definition 17. det(TxI)i sk n o w na sc h a r a c t e r i s t i cp o l y n o m i a lo fT.Example 7.Find eigenvalues and eigenvectors of the following matrixA=0@1012311111ASolution:Step 1: Consider det(AI)=086munotes.in

Page 71

detA=det0@10123111 11A=0This gives characteristic polynomial ofAand its roots are eigen-values ofA.E i g e n v e c t o r i s a s o l u t i o n v e c t o r o f h o m o g e n e o u ssystem of linear equations (AI)x=0w h e r eis an eigenvalueofA.Thus characteristic polynomial ofAis obtained by finding deter-minant in above equation and it is found to be the followingp()p()=352+51Step 2: Roots ofp()a r e2+p(3),1a n d2p3. These are eigenvaluesofA.Step 3: Solve following system of linear equations and we consider firsteigenvalue 2 +p3(A(2 +p3)I)x=0Solving we getX1=32+12(2 +p3),12+12(2 +p3,1Step 4: similarly for other two eigenvalues we get following eigenvectorsX3=12+12(2 +p3),12+12(2 +p3,1X2=1,1,0Remark 6.2.2.Roots of characteristic polynomial may repeat and be-havior of a linear transformation (or its corresponding matrix) dependscrucially on multiplicity of eigenvalues and dimension of correspondingeigen spaces. One simple example of matrix with repeated eigenvaluesis the following matrix0@1010110031ADefinition 18. Algebraic multiplicity of an eigenvalueMultiplic-ity of an eigenvalue as a root of characteristic polynomial is known asalgebraic multiplicity of corresponding eigenvalue.Definition 19. Geometric multiplicity of an eigenvalueDimen-sion of the eigenspace or characteristic space of an eigenvalue is knownas the geometric multiplicity of a that eigenvalue.87munotes.in

Page 72

Theorem 6.2.2.Let↵be an eigenvalue of a linear transformationT.Iff(x)is a polynomial in indeterminatex. Thenf(T)v=f(↵)vProof.First consider the case off(x) being a monomial. Letf(x)=xk.Let us apply induction onk.Letk=1 . I nt h i sc a s ef(x)=x.i . e .f(T)=T. Here it follows thatf(T)v=f(↵)v.Let us assume the lemma fork=r.T h u s w e a s s u m e t h a tTr(v)=(↵r)v.C o n s i d e rk=r+1.Tr+1(v)=TrTv=(Tr)↵v=↵(Tr)v.Using induction hypothesis we get thatTr+1(v)=↵r+1v.T h u st h el e m m ai se s t a b l i s h e df o ra l lm o n o m i a l s .Letf(x)=a0+a1x+...+akxk.T h u sf(T)v=a0v+a1Tv+...+akTkv.Using the lemma for monomials we get thatf(T)v=a0+a1↵v+...+ak↵kv=f(↵)v. And the result is established for all polynomials.Remark 6.2.3.This is very important lemma and in future we willuse it at various occasions.Theorem 6.2.3.Let↵1and↵2be two distinct eigenvalues of a lineartransformationTon a finite dimensional vector spaceV. Letv1andv2be respective eigenvectors. Thenv1andv2are linearly independent.Proof.Suppose otherwise thatv1andv2are linearly dependent. Thenthere exist a nonzero constantcsuch thatv2=cv1.ThereforeT(v2)=cT(v1). Thus we get that↵2v2=↵1cv1. Which is same asv2=↵1c↵2v1.This leads to the conclusion that↵1c↵2=c.S i n c ev1andv2are distinct(as↵1and↵2are distinct) we havec6=1l e a d i n gt o↵1↵2= 1. This issame as↵1=↵2.T h i s i s a c o n t r a d i c t i o n t o↵1and↵2being distinct.Thereforev1andv2are linearly independent.Now recall that geometric multiplicity of an eigenvalue is a numberof linearly independent eigenvectors corresponding to that eigenvalue.In other words geometric multiplicity is nothing but dimension of eigenspace (ie.characteristic space) of an eigenvalue. Note however that di-rect sum of all eigenspaces of a linear operator need not exhaust entirevector space on which linear transformation is defined and now onwardsour attempt will be to see what best next can be done in case we fail torecover vector space V from direct sum of eigen spaces corresponding toa particularly given linear transformation. Question we want to addressis that in which circumstances does direct sum of eigen spaces exhaustentire vector space. We will see that these linear transformations areprecisely the one which are diagonalizable. In the following section wewill make these ideas precise.Theorem 6.2.4.LetTbe a linear transformation on a finite dimen-sional vector spaceV. Let↵1,↵2,. . . ,↵kbe the distinct eigenvalues of88munotes.in

Page 73

Tand letWibe the eigenspace corresponding to the eigenvalue↵i.Bibe an ordered basis forWi. LetW=W1+W2+...+Wk. ThendimW=dimW1+dimW2+...+dimWk. AlsoB=(B1,B2,. . . Bn)is aordered basis forW.proofVectors inBifor 1ikare linearly independent eigenvec-tors ofTcorresponding to eigenvalue↵i. Also vectors inBiare linearlyindependent to those inBjfori6=j.Because they are eigenvectors cor-responding to di↵erent eigenvalues. Thus vectors inBare all linearlyindependent. Note that vectors inBspanW.T h i s i s b e c a u s e t h a t i showWis defined.Theorem 6.2.5.Set of all linear transformations on a finite dimen-sional vector space forms a vector space over the field F. Here naturallythe binary operation is composition of functions. This new vector spaceis isomorphic to space ofn⇥nmatrices and hence has dimensionn2.Let us denote this space byL(V,V).LetTbe a linear transformation on finite dimensional vector spaceV.ThusT2L(V,V). Consider the first (n2+1) powers ofTinL(V,V).:I,T,T2,. . . ,Tn2Since dimension ofL(V,V)i sn2and above we haven2+1e l e m e n t s ,these must be linearly dependent.ie. there existn2+1s c a l a r s ,n o ta l lzero, such that we have:c0+c1T+c2T2+...+cn2Tn2=0 ( 6 . 1 )Or which is same thing as saying thatTsatisfies polynomial of degreen2.W eh a v en o wd e fi n i t i o n :DefinitionAny polynomialf(x)s u c ht h a tf(T)=0i sk n o w na sa n -nihilating polynomial of a linear transformationT.Polynomial given in 1 is one such polynomial. Thus set of annihilatingpolynomials is nonempty and we can think of annihilating polynomialof least degree which is monic.DefinitionAnnihilating polynomial of least degree which is monicis known as minimal polynomial of a linear transformationT.Example 8.Find minimal polynomial of the following matrixA=0BB@21000200002000051CCA89munotes.in

Page 74

Solution:Step 1: Find characteristic polynomial ofA.Characteristic polynomial ofAis the following:p()=(2)3(5)Step 2: By definition of minimal polynomial, minimal polynomialm()must divide characteristic polynomial hence it must be one of thefollowing:(2)3(5)(2)2(5)(2)(5)Step 3: Note that minimal polynomial is a polynomial of least degreewhich is satisfied by the matrixA. Only polynomial amongstabove polynomial is the second one hence minimal polynomial is(2)2(5)Remark 6.2.4.1. Set of all annihilating polynomials of a lineartransformationTis an ideal inF[x]. SinceFis a field, this idealis a principal ideal and monic generator of this ideal is nothingbut minimal polynomial ofT.2. Since minimal polynomial is monic it is unique.Theorem 6.2.6.LetTbe a linear transformation on a n-dimensionalvector spaceV. The characteristic and minimal polynomial ofThavethe same roots except for multiplicities.Proof.Letpbe the minimal polynomial forT.L e t↵be a scalar. Wewant to show thatp(↵)=0i fa n do n l yi f↵is an eiegnvalue ofT.First suppose thatp(↵) = 0. Then by remainder theorem of polynomi-als,p=(x↵)q(6.2)whereqis a polynomial. Sincededq < degp,t h ed e fi n i t i o no fm i n i -mal polynomialptells us thatq(T)6=0 . C h o o s eav e c t o rvsuch that90munotes.in

Page 75

q(T)v6=0 . L e tq(T)v=w.T h e n0=p(T)v=(T↵I)q(T)v=(T↵I)wand thus↵is an eigenvalue.Conversely, let↵be an eigenvalue ofT,s a yT(w)=↵wwithw6=0 .Sincepis a polynomial we have seen thatp(T)w=p(↵)wSincep(T)=0a n dw6=0 ,w eh a v et h a tp(↵)=0 . T h u se i g e n v a l u e↵is a root of minimal polynomialp.Remark 6.2.5.1. Since every root of the minimal polynomial isalso a root of characteristic polynomial we see that minimal poly-nomial divides characteristic polynomial. This is famous CaleyHamilton theorem and it states that linear transformationTsat-isfies characteristic polynomial in the sense that iff(x)i sac h a r -acteristic polynomial thenf(T)=0 .2. Similar matrices have the same minimal polynomialCheck Your Progress1. LetA:=0@4442361361ACompute (a)the characteristic poly-nomial (b) the eigenvalues (c) All eigenvectors (d) Identify alge-braic and geometric multiplicities of each of the eigenvalue.2. LetAbe the real 3⇥3 matrix. Find minimal polynomial ofA0@31122122 01A6.3 Digonalizable LinearTransformations91munotes.in

Page 76

Definition 20.LetVbe a vector space of dimensionn,andTbe alinear transformation onV.L e tWbe a subspace ofV.W e s a y t h a tWis invariant underTif for each vectorv2Wthe vectorT(v)i si nW.i . e .i fT(W)✓W.Definition 21.LetTbe a linear transformation on a finite dimen-sional vector spaceV.W e s a y t h a tTis diagonalizable if there exist abasis ofVconsisting of all eigenvectors ofT.Remark 6.3.1.Matrix of digonalizableTin the basis ofVconsistingof eigenvectors ofTis diagonal matrix with eigenvalues along the di-agonal of the matrix.Example 9.Find the basis in which following matrix is in diagonalformA=0@1012313331ASolution:Since characteristic polynomial of the matrix isp()=372+93w eg e tf o l l o w i n ge i g e n v a l u e sf o rA3+p(6),1a n d3p6.Since all eigenvalues are distinct given matrix is and in the basis formedby three independent eigenvectors matrix becomes diagonal.X1=52+12(3 +p6),3216(3 +p6,1X2=1,1,0X3=52+12(3p6),32163p6,1Required diagonal matrix is the matrix whose diagonal is formed bythree eigenvalues respectively.Check Your ProgressLetAbe a matrix over any fieldF.L e tAbe the characteristicpolynomial ofAandp(t)=t4+12F[t]. State with reason whetherthe following are true or false1. LetA=p,t h e nAis invertible2. IfA=p,t h e nAis diagonalizable overF3. Ifp(B)=0f o rs o m em a t r i xBbe 8⇥8m a t r i x ,t h e npis thecharacteristic polynomial ofB.92munotes.in

Page 77

4. There is unique monic polynomialq2F[t]o fd e g r e e4s u c ht h a tq(A)=0Theorem 6.3.1.LetTbe a diagonalizable linear transformation onandimensional spaceV. Let↵1,↵2,. . . ,↵kbe the distinct eigenvaluesofT. Letd1,d2,. . . ,dkbe the respective multiplicities with which theseeigenvalues are repeated. Then characteristic polynomial forTisf=(x↵1)d1+(x↵2)d2+...+(x↵k)dkd1+d2+...+dk=nProof.IfTis diagonalizable then in the basis consisting of eigenvectorsmatrix ofTis a diagonal matrix with all eigenvalues lying along thediagonal. We know that characteristic polynomial of a diagonal matrixis a product of linear factors of the form-f=(x↵1)d1+(x↵2)d2+...+(x↵k)dkCheck Your Progress1. LetTbe the linear operator inR4which is represented in thestandard ordered basis by the following matrix0BB@0000a0000b0000c01CCAUnder what conditions ona, b,andcisTdiagonalizable.2. LetNbe 2⇥2 complex matrix such thatN2=0 ¿P r o v et h a teitherN=0o rNis similar overCto✓0010◆Lemma 6.3.1.LetWbe an invariant subspace forT. The characteris-tic polynomial for the restriction operatorTwdivides the characteristicpolynomial forT. The minimal polynomial forTwdivides the minimalpolynomial forT.93munotes.in

Page 78

Proof.We haveA=BC0D(6.3)whereA=[T]BandB=[Tw]0B.B e c a u s e o f t h e b l o c k f o r m o f t h e m a t r i xdet(AxI)=det(BxI)det(xID)( 6 . 4 )That proves statement of characteristic polynomials. Note that we haveused notationIfor identity matrices of three di↵erent sizes.Note that kth power of the matrixAhas the block formAk=BkCk0Dk(6.5)whereCkis somer⇥(nr)m a t r i x .T h e r e f o r e ,a n yp o l y n o m i a lw h i c hannihilatesAalso annihilatesB(andDtoo). So the minimal polyno-mial forBdivides the minimal polynomial forA6.4 Triangulable Linear TransformationsDefinition 22. Triangulable Linear TransformationThe lineartransformationTis called triangulable if there is an ordered basis ofVin whichTis represented by a triangular matrix.Lemma 6.4.1.LetVbe a finite dimensional vector space and letTbea linear transformation onVsuch that minimal polynomial forTis aproduct of linear factorsp=(x↵1)r1+...+(x↵k)rk(6.6)where↵i2F.LetWbe a proper subspace ofVwhich is invariant underT. Thenthere exist a vectorv2Vsuch that1.vis not inW;2.(T↵I)vis inW, for some characteristic value↵of the trans-formationT.Proof.Letube any vector inVwhich is not inWThen there existap o l y n o m i a lgsuch thatg(T)u2W.T h e ngdivides the minimal94munotes.in

Page 79

polynomialpforT.S i n c euis not inW, the polynomialgis notconstant. Therefore,g=(x↵1)l1+(x↵2)l2+...+(x↵k)lkwhere at least one of the integersliis positive. We choosejsuch thatlj>0t h e n(x↵j)d i v i d e sg. Henceg=(x↵j)h(6.7)By the definiion ofg,t h ev e c t o rv=h(T)ucan not be inW.B u t ,(T↵jI)v=(T↵jI)h(T)u=g(T)u(6.8)is inW..We obtain triangular matrix representation of a linear transforma-tion by applying following procedure:1. Apply above lemma to trivial subspaceW=0t og e tv e c t o rv1.2. Oncev1,v2,v3,. . . vl1are determined form a subspaceWspannedby these vectors and apply above lemma to thisWto obtainvlin the following way-Note that the subspaceWspanned byv1,v2,. . . ,vl1is invariantunderT.T h e r e f o r e b y a b o v e l e m m a t h e r e e x i s t a v e c t o rvlinVwhich is not inWsuch that (T↵lI)vlis inWfor certaineigenvalue↵lofT.T h i s c a n b e d o n e b e c a u s e m i n i m a l p o l y n o m i a lofTis factored into linear factors and above lemma is applicable.We will illustrate this procedure with the help of example.Theorem 6.4.1.In the basis obtained by above procedure the matrixofTis triangularProof.By above procedure we get ordered basis{v1,v2,. . . vn}This basisis such thatT(vj)l i e si nt h es p a c es p a n n e db yv1,v2,. . . vjand we havefollowing form-T(vj)=a1jv1+a2jv2+...+ajjvj,1jn(6.9)In this type of representation we get that the matrix ofTis triangular..Check Your ProgressLet0@0102222321ACheck whether above matrix is similar over a field of real numbers toat r i a n g u l a rm a t r i x ?I fs ofi n ds u c hat r i a n g u l a rm a t r i x .95munotes.in

Page 80

Theorem 6.4.2. Primary Decomposition TheoremLetTbe a linear transformation on a finite dimensional vector spaceVover the fieldF. Letpbe a minimal polynomial forT,p=pr11...prkk(6.10)where eachpiare distinct irreducible monic polynomials overFand theriare positive integers. LetWibe the null space ofpi(T)ri,i=1,2,. . . ,k.Then1.V=W1...Wk;2. eachWiis invariant underT;3. ifTiis the transformation induced onWibyT, then the minimalpolynomial forTiisprii..Proof.Before proceeding to a proof of above theorem we note that realpoint is in obtaining so called primary decomposition stated in the the-orem explicitly for a given linear transformation. Thus we present aproof in the form of algorithm which for givenTwill produce primarydecomposition of a given linear transformationT. Following steps de-scribe the method to obtain primary decomposition theorem1. For givenT, obtain minimal polynomial forT.L e t i t b e i n t h efollowing formp=pr11...prkk(6.11)2. For eachi,l e tfi=pprii=⇧j6=iprjj(6.12)Note that allfiare distinct and are relatively prime.3. Find polynomialsgisuch thatkXi=1figi=1(6.13)4. LetEi=hi(T)=fi(T)gi(T)96munotes.in

Page 81

5. Fori6=jwe haveE1+...+Ek=I(6.14)EiEj=0,i6=j(6.15)6. TheseEiserve the purpose of obtaining invariant subspacesWiwhich decomposeVinto direct sum, and eachEiis a projectionoperator.7. It can be verified that minimal polynomial ofTiwhich is a re-striction ofTtoWiisprii.6.5 Nilpotent Linear TransformationsDefinition 23. Nilpotent TransformationLetNbe a linear trans-formation on the vector spaceV.Nis said to be nilpotent if there existsome positive integerrsuch thatNr=0 .Theorem 6.5.1.LetTbe a linear transformation on the finite dimen-sional vector spaceVover the fieldF. Suppose that minimal polynomialforTdecomposes overFinto linear a product of linear polynomials.Then there is a diagonalizable transformationDonVand nilpotentoperatorNonVsuch thatT=D+N;( 6 . 1 6 )DN=ND.(6.17)The transformationDandNare uniquely determined and each of themis a polynomial inT.We will now see the process to findDandNfor a given lineartransformationT.1. Calculate minimal polynomial ofTand factor into linear polyno-mialspi=x↵i.2. In the notation of above theorem, we calculateEiand note thatrange ofEiis null spaceWiof (T↵iI)ri.3. LetD=↵1E1+...+↵kEkand observe thatDis a diagonalizabletransformation. We callDdiagonalizable part ofT.4. LetN=TD.W e p r o v e b e l o w t h a tNso defined is nilpotenttransformation.97munotes.in

Page 82

Proof thatNdefined as above is a nilpotent transformation.Note that the range space ofEiis the null spaceWiof (T↵i)ri.I=E1+E2+...+Ek(6.18))T=TE1+...+TEk(6.19)D=↵1E1+...+↵kEk(6.20)ThereforeN=TDbecomesN=(T↵1I)E1+...+(T↵kI)Ek(6.21)N2=(T↵1I)2E1+...+(T↵kI)2Ek(6.22)Nr=(T↵1I)rE1+...+(T↵kI)rEk(6.23)Whenrrifor everyi,t h e nNr=0 ,b e c a u s et h et r a n s f o r m a t i o n(T↵kI)iwill be a null transformation on the range ofEi.T h e r e f o r eNis nilpotent transformation.Example 10.Find the basis in which following matrix has triangularform and find that triangular form.A=0@0102222321ASolution:Process to find triangular form of a matrix is as follows-Step 1: Find at least one eigenvalue and corresponding eigenvector ofA.F o r a b o v e m a t r i x c h a r a c t e r i s t i c p o l y n o m i a l i sf()=3.Hence 0 is an eigenvalue which is repeated thrice. Eigenvectorsare1,0,1,0,0,0and0,0,0Step 2: Now Note thatu1=1,0,12kerAand kerA⇢kerA2⇢kerA3.I fu22kerA2kerAthenAu22kerA=Au2=↵u1for some scalar↵.A=0@2220002221Aand kerA2=<1,1,0,0,1,1>Takingu2=1,1,0andu3=1,0,0>.Step 3: In the basisu1,u2andu3,g i v e nm a t r i xAhas triangular form-A=0@01 000 20021A98munotes.in

Page 83

6.6 Chapter End Exercise1. LetAbe an invertible matrix. Ifvis an eigenvector ofA,s h o wit is also an eigenvector of bothA2andA2. What are the cor-responding eigenvalues?2. LetCbe a 2⇥2m a t r i xo fr e a ln u m b e r s .G i v eap r o o fo rc o u n t e rexample to the assertion that ifChas two distinct eigenvaluesthen so doesC2.3. LetAben⇥nhave an eigenvaluewith corresponding eigen-vectorv,t h e ns t a t ew i t hr e a s o nw h e t h e rf o l l o w i n gi st r u eo rf a l s e(a)is an eigenvalue ofA(b) Ifvis an also an eigenvector ofBwith eigenvalueµ,t h e nµis an eigenvalue ofAB.(c) Let2F.T h e nis an eigenvalues ofA.4. LetA=0@63241210531AIs A similar over the firldRto a diagonal matrix? IsAis similarover the fieldCto a diagonal matrix?5. LetAandBbe ann⇥nmatrices over the fieldF.P r o v et h a ti f(IABis invertible, then (IBA) is invertible and(IBA)1=I+B(IAB)1A6. Use above result to prove thatAandBaren⇥nmatrices over thefieldF,t h e nABandBAhave precisely the same characteristicvalues inF.7. Leta, bandcbe elements of a fieldF,a n dl e tAbe the followingmatrix overF;A=0@00c10b01a1A99munotes.in

Page 84

Prove that the characteristic polynomial ofAisx3ax2bxcand that this is also the minimal polynomial forA.8. Find a 3⇥3m a t r i xf o rw h i c ht h em i n i m a lp o l y n o m i a li sx2.9. Is it true that every matrixAsuch thatA2=Ais similar to adiagonal matrix. If true, prove your assertion otherwise give acounter example.10. LetTbe a linear operator onV.i f e v e r y s u b s p a c e o fVis invariantunderT,t h e np r o v et h a tTis a scalar multiple of the identityoperator.11. LetTbe a linear operator on a finte dimensional vector spaceover an algebraically closed fieldF.L e tfbe a polynomial overF.P r o v e t h a t↵is a characteristic value off(T)i fa n do n l yi f↵=f(), whereis a characteristic value ofT.100munotes.in

Page 85

Chapter 5Inner Product SpacesChapter Structure5.1 Introduction5.2 Objectives5.3 Inner Product5.4 Orthogonalization5.5 Adjoint of a Linear Transformation5.5.1 Unitary Operators5.5.2 Normal Operators5.6 Chapter End Exercises5.1 IntroductionVector space structure on a set is purely an algebraic structure. Wesimply mention the way, by means of this structure, how to add andsubtract two vectors. In general we also talk about geometrical proper-ties of vectors. Then question arise which concepts in general describegeometric properties of vectors. Once such concepts are there with usthen we can discuss terms like orthogonality in case of vector spaceswhere apparently elements do not look like Euclidean vectors. Innerproduct is that concept. Inner product in case of Euclidean vectors issimply a dot product of two vectors. We have seen that at elementarylevel dot products to a quite larger extent describe geometry of Eu-clidean spaces. In other words all propositions of geometry, in one orother way, are consequence of the fact that dot product is defined onEuclidean spaces. Thus we define a real valued function known as innerproduct on a vector space and try to see what impressions this functionmakes on vector space structure and linear transformations defined onthese vector spaces.69munotes.in

Page 86

5.2 ObjectivesAfter going through this chapter you will be able to:•decide whether given real valued function is an inner product on avector space•decide whether given pair of vectors is orthogonal•decide how vivid vector space structure becomes and di↵erent looklinear transformations get because of defining inner product on vectorspaces.5.3 Inner ProductDefinition 1.LetFbe a field of real numbers or field of complexnumbers andVis a vector space overF. An inner product onVis afunction which assigns to each ordered pair of vectorsu, v2Vas c a l a r2Fin such a way that for allu, v, w2Vand all scalars↵1.hu+v,wi=hu, wi+hv,wi2.h↵u, vi=↵hu, vi3.hu, vi=hv,ui4.hu, ui6=0i fu6=0Observation 5.3.1.Without complex conjugation in the definition,we would have the contradiction:hu, ui>0a n dhıu, ıui=1hu, ui>0f o ru6=0Example 1.OnRnthere is an inner product which is known asStan-dard Inner Product.I ti sd e fi n e da st h ed o tp r o d u c to ft w oc o o r d i -nate vectors.Example 2.Foru=(x, y),v=(x1,y1)i nR2,l e thu, vi=xx1yx1xy1+4yy1Thenhu, videfines an inner product on the vectors ofR2.Example 3.LetVbeFn⇥n,t h es p a c eo fa l ln⇥nmatrices overF.HereFis either field of real numbers or field of complex numbers. Thenthe following defined is an inner product onV.hA, Bi=Xj,kAjkBjk70munotes.in

Page 87

•Verify that above inner product can be expressed in the followingway-hA, Bi=tr(AB⇤)=tr(B⇤A)•LetVandWbe vector spaces over a fieldF. WhereFis eitherafi e l do fr e a ln u m b e r so rfi e l do fc o m p l e xn u m b e r s . L e tTbean o n - s i n g u l a rl i n e a rt r a n s f o r m a t i o nf r o mVintoW.I fh., .iisan inner product onV.T h e n p r o v e t h a thTu,Tviis an innerproduct onW.•Letvbe the vector space of all continuous complex valued func-tions on the unit interval, 0t1. Lethf,gi=Z10f(t)g(t)dtProve thath., .iso defined is an inner product.•LetVbe a vector space on the field of complex numbers. Leth., .ibe an inner product defined onV. Then prove that the followingholds.hu, vi=Rehu, vi+ıRehu, ıviDefinition 2.An inner product space is a real or complex vector space,together with a specified inner product on that space. A finite dimen-sional real inner product space is called Euclidean space and a finitedimensional complex inner product space is called a unitary space.Definition 3.The quadratic form determined by the inner product isthe function that assigns to each vectoruthe scalar||u||2defined as||u||2=hu, uiNote that||u||satisfies the following identity||u±v||2=||u||2±2Rehu, vi+||v||28u, v2V•For a real inner product prove the following:hu, vi=14||u+v||214||uv||2•For a complex inner product prove the followinghu, vi=14||u+v||214||uv||2+ı4||u+iv||2ı4||uiv||2Above identities are known aspolarization identities71munotes.in

Page 88

Theorem 5.3.1.Let V be an inner product space, then for any vectorsu, vinVand scalar↵1.||↵u||=|↵|||u||;2.||u|| 0foru6=0;3.|hu, vi|  ||u||||v||;4.||u+v||  ||u||+||v||.Proof.1. First follows easily in the following way||↵u||2=h↵u, ↵ui=↵↵hu, ui=↵↵||u||2Therefore it follows that||↵u||=|↵|||u||2. This follows immediately from definition of inner product.3. This inequality is true ifu=0 . S u p p o s eu6=0L e tw=vhv,ui||u||2uthenhw,ui=0a n d0| |w||2||w||2=(hwhw,ui||u||2u, whw,ui||u||2ui)=hw,wihw,uihu, wi||u||2=||w||2|hu, vi|2||u||2Hence third inequality.4. Using third inequality in the following way we get the fourthinequality–||u+v||2=||u||2+hu, vi+hv,ui+||v||2=||u||2+2Rehu, vi+||v||2| |u||2+2||u||||v||+||v||2=(||u||+||v||)2Therefore||u+v||  ||u||+||v||72munotes.in

Page 89

The third inequality above is calledCauchy-Schwarz inequality.Equality occurs in the third if and only ifuandvare linearly dependent.5.4 OrthogonalizationDefinition 4.Letuandvbe vectors in an inner product spaceV.Thenuis orthogonal tovifhu, vi=0 . I fSis a set of vectors inVthenSis said to be orthogonal set if for each pair of distinct vectorsu, v2Swe havehu, vi=0 . S u c ha no r t h o g o n a ls e ti ss a i dt ob eo r t h o n o r m a li ffor everyu2Swe have that||u||=1 .Example 4.Find unit vector orthogonal tov1=1,1,2,v2=0,1,3Solution:Ifwis a vector which is orthogonal tov1andv2then itsatisfies thathw,v1i=0a n dhw,v2i=0T h i sl e a d st oh o m o g e n e o u ssystem of linear equation-x+y+2z=0y+3z=0wherew=x, y, zUpon solving this system of equation we getx=1,y=3,z= 1. Normalizing this vector we get unit vectororthogonal tov1andv2which is thusv1=1/p11,3/p11,1/p11.Theorem 5.4.1.An orthogonal set of non-zero vectors is linearly in-dependent.Proof.LetSbe an finite or infinite orthogonal set of nonzero vectors inag i v e ni n n e rp r o d u c ts p a c e . S u p p o s eu1,u2,. . . ,unare distinct vectorsinSand thatv=↵1u1+↵2u2+...+↵nunthenhv,uki=hXj↵juj,uki=Xj↵jhuj,uki=↵khuk,ukiSincehuk,ukh6=0i tf o l l o w st h a t↵k=hv,uki||uk||2Thus whenv=0w eg e tt h a te a c h↵k=0 ;t h e r e f o r eSis an independentset.73munotes.in

Page 90

Corollary 5.4.1.If a vectorvis a linear combination of an orthogonalsequence of non zero vectorsu1,u2,. . . ,un, thenvis the particular linearcombinationv=mXk=1hv,uki||uk||2ukTheorem 5.4.2.LetVbe an inner product space and letu1,u2,. . . ,unbe any independent vectors inV. Then one may construct orthogonalvectorsv1,v2,. . . vninVsuch that for eachk=1,2,. . . nthe setv1,v2,. . . ,vnis a basis for the subspace generated byu1,u2,. . . ,un.Proof.We will achieve claim of made in the theorem by explicitly deter-miningv1,v2,. . . vnwhenu1,u2,. . . ,unare given. This process is knownasGram-Schmidt orthogonalization processLetv1=u1Supposev1,v2,. . . ,vmvectors of soughtnvectors are constructed suchthat these vectors span the subspace spanned byu1,u2,. . . ,um.T h e nvm+1is defined as followsvm+1=um+1mXk=1hum+1,vki||vk||2vkThenvm+16=0o t h e r w i s euk+1is a linear combination ofv1,v2,. . . ,vmand hence linear combination ofu1,u2,. . . ,um.F u r t h e r m o r e ,i f1jmthenhvm+1,vji=hum+1,vjimXk=1hum+1,vki||vk||2hvk,vji=hum+1,vjihum+1,vji=0Thereforev1,v2,. . . ,vm+1is an orthogonal set consisting ofm=1n o n z e r ovectors in the subspace spanned byu1,u2,. . . ,um+1.T h i s s e t t h e r e f o r eis a basis for this subspace. This completes the construction and proofof the theorem.Corollary 5.4.2.Every finite dimensional inner product space has anorthonormal basis.Proof.LetVbe a finite dimensional inner product space andu1,u2,. . . ,unbe a basis forV.W ea p p l yt h eG r a m - S c h m i d tp r o c e s st oc o n s t r u c to r -thogonal basisv1,v2,. . . ,vn.T h e n w e o b t a i n o r t h o n o r m a l b a s i s s i m p l yby replacingvkbyvk||vk||.74munotes.in

Page 91

Example 5.Consider the following basis of Euclidean spaceR3.v1=( 1,1,1)v2(0,1,1) andv3=( 0,0,1)Transform this basis to orthonormal basis using Gram-Schmidt orthog-onalization process.Solution:u1=v1||v1||=(1,1,1)p3Now we findw2as follows-w2=v2hv2,u1iu1=( 0,1,1,)23((1,1,1)p3)=(23,13,13)Normalizingw2we getu2=w2||w2||=(2p6,1p6,1p6)Now we writew3=v3hv3,u1iu1hv3,u2iu2=( 0,12,12)Normalizing we getu3=( 0,1p2,1p2)u1,u2andu3together form the required orthogonal basis.Example 6.Find real orthogonal matrixPsuch thatPtAPis diagonalfor the following matrixA.A=2421112111235Solution:First find characteristic polynomial forAwhich is (1)2(4). Thus eigenvalues ofAare 1 (with multiplicity two) and4 (with multiplicity one) Solve (AI)X=0f o r=1a n dw eg e tfollowing homogeneous system of equations:xyz=0xyz=0xyz=0That isx+y+z=0T h i ss y s t e mh a st w oi n d e p e n d e n ts o l u t i o n s . O n esuch solution isv1=( 1,1,0).We seek a second solutionv2=(a, b, c)which is orthogonal tov1that is such thata+b+c=0a n dab=0One of the solution to these equations isv2=( 1,1,2) Next we nor-malizev1andv2to obtain the unit orthogonal solutionsu1=( 1/p2,1/p2,0),u2=( 1/p6,1/p6,2/p6)75munotes.in

Page 92

Similarly solution of (AI)X=0f o r=4i sv3=( 1,1,1)and normalize it to obtainu3=( 1/p3,1/p3,1/p3) MatrixPwhosecolumns are vectorsu1,u2andu3form an orthogonal matrix such thatPtAPis diagonal matrix and corresponding diagonal matrix is thefollowing-P=241/p21/p61/p31/p21/p61/p302/p61/p335PtAP=2410001000435Definition 5.IfWis a finite dimensional subspace of a inner productspaceVandu1,u2,. . . ,unis any orthonormal basis forW,t h e nt h evectorudefined as follows is known asorthogonal projection ofv2V.u=Xkhv,uki||uk||2ukThe mapping that assigns to each vector inVits orthogonal projectionis calledorthogonal projection ofVonWDefinition 6.LetVbe an inner product space andSbe any set ofvectors inV. The orthogonal complement ofSis the setS?of allvectors inVwhich are orthogonal to every vector inS.Theorem 5.4.3.LetWbe a finite dimensional subspace of an innerproduct spaceVand letEbe orthogonal projection ofVonW. ThenEis idempotent linear transformation ofVontoW,W?is the nullspace ofEandV=WMW?In this case,IEis the orthogonal projection ofVonW?. It is anidempotent linear transformation ofVontoW?with null spaceW.Bessel’s InequalityLetv1,v2,. . . vnbe an orthonormal set of nonzero vectors in an innerproduct spaceV.I fuis any vector inV, then-Xk|hu, vki|2||vk||2| |u||2Theorem 5.4.4.LetVbe a finite dimensional vector space andfbe alinear functional onV. Then there exists a unique vectorvinVsuchthatf(u)=hu, vifor alluinV.Note thatvlies in the orthogonal complement of the null space off.76munotes.in

Page 93

Theorem 5.4.5.For any linear transformation/operatorTon a finitedimensional inner product spaceV, there exists a unique linear operatorT⇤onVsuch thathTu,vi=hu, T⇤vifor alluandvinV.Proof.Letvbe any vector inV.T h e nu7! hTu,viis a linear functionalonV. As we have stated above there exist unique vectorv0inVsuchthathTu,vi=hu, v0ifor everyuinV.L e tT⇤denote the mappingv7!v0v0=T⇤vFor anyu, v, winVconsider the following for any scalar↵hu, T⇤(↵v+w)i=hTu,↵v+wi=hTu,↵vi+hTu,wi=↵hTu,vi+hTu,wi=↵hu, T⇤vi+hu, T⇤wi=hu, ↵T⇤vi+hu, T⇤wi=hu, cT⇤v+T⇤wiThushu, T⇤(↵v+w)i=hu, cT⇤v+T⇤wihenceT⇤is linear. Now we prove uniqueness. Note that for anyvinV,t h ev e c t o rT⇤vis uniquely determined as the vectorv0such thathTu,vi=hu, v0ifor everyu.Theorem 5.4.6.LetVbe a finite dimensional inner product space andletB=u1,u2,. . ,unbe an ordered orthonormal basis forV. LetTbe alinear operator onVand letAbe the matrix ofTin the basisB. ThenAkj=hTuj,uki.Proof.SinceBis an orthonormal basis, we haveu=nXk=1hu, ukiukThe matrixAis defined byTuj=nXk=1Akjukand sinceTuj=nXk=1hTuj,ukiukwe have thatAkj=hTuj,uki.77munotes.in

Page 94

Corollary 5.4.3.LetVbe a finite dimensional inner product space,and letTbe a linear operator onV. In any orthonormal basis forV,the matrix ofT⇤is the conjugate transpose of the matrix ofT.Proof.LetB=u1,u2,. . ,unbe an orthonormal basis forV,l e tA=[T]BandB=[T⇤]B.F r o ma b o v et h e o r e mw eh a v eAkj=hTuj,ukiBkj=hT⇤uj,ukiBy the definition ofT⇤we haveBkj=hT⇤uj,uki=huk,T⇤uji=Tuk,uj=Ajk5.5 Adjoint of a Linear TransformationDefinition 7.LetTbe a linear operator on an inner product spaceV.Then we say thatThas an adjoint onVif there exists a linear operatorT⇤onVsuch thathTu,vi=hu, T⇤vifor alluandvinV.Note that for a linear operator on finite dimensional inner productspace there always exist an adjoint but there exist infinite diemnsionalinner product spaces and linear operators on them for which there isno corresponding adjoint operator.Theorem 5.5.1.LetVbe a finite dimensional inner product space. IfTandUare linear operators onVand↵is scalar,1.(T+U)⇤=T⇤+U⇤2.(↵T)⇤=↵T⇤3.(TU)⇤=U⇤T⇤4.(T⇤)⇤=T78munotes.in

Page 95

Proof.1. Letuandvbe inV.T h e nh(T+U)u, vi=hTu+Uu,vi=hTu,vi+hUu,vi=hu, T⇤vi+hu, U⇤vi=hu, T⇤v+U⇤vi=hu,(T⇤+U⇤)viFrom the uniqueness of adjoints we have that (T+U)⇤=T⇤+U⇤2. Considerh↵Tu, vi=hTu,↵vi=hu, T⇤↵vi=hu,↵T⇤viFrom the uniqueness of adjoints we get (↵T)⇤=↵T⇤.3. Note the followinghTUu,vi=hUu,T⇤vi=hu, U⇤T⇤viUniqueness of adjoint of a linear operator proves the third iden-tity.4. Note the followinghT⇤u, vi=hv,T⇤ui=hTv,ui=hu, TViand the fourth identity follows.Note that ifTis a linear operator on finite dimensionalcomplex inner product space, thenT=U1+iU2whereU1=U⇤1andU2=U⇤2.This expression forTis unique andU1=12(T+T⇤)U2=12i(TT⇤)Definition 8.Al i n e a ro p e r a t o rTsuch thatT=T⇤is calledselfadjointorHermitian.79munotes.in

Page 96

5.5.1 Unitary OperatorsDefinition 9.LetVandWbe inner product spaces over the samefield and letTbe a linear transformation fromVintoW.W es a yt h a tTpreserves inner product ifhTu,Tvi=hu, vifor allu, vinV.Aniso-morphismofVontoWis a vector space isomorphism which preservesinner products.Definition 10.Aunitary operatoron an inner product space is anisomorphism onto itself.Theorem 5.5.2.LetVandWbe inner product spaces over the samefield and letTbe a linear transformation fromVintoW. ThenTpreserves inner product if and only if||Tu||=||u||for everyuinV.Proof.IfTpreserves inner product then it follows that||Tu||=||u||.For converse part, we prove the result for real inner product spaces. Forcomplex inner product spaces result follows on similar lines except thatwe have to consider polarization identity for complex inner productspaces. So, let our inner product spaces are real and let||Tu||=||u||.Consider polarization identity-hu, vi=14||u+v||214||uv||2hTu,Tvi=14||Tu+Tv||214||TuTv||2=14||T(u+v)||214||T(uv)||2=14||u+v||214||uv||2=hu, viTheorem 5.5.3.LetUbe a linear operator on an inner product spaceV. ThenUis unitary if and only if the adjointU⇤ofUexists andUU⇤=U⇤U=I.Proof.SupposeUis unitary. ThenUis invertible andhUu,vi=hUu,UU1vi=hu, U1vifor alluandvfromV.F r o md e fi n i t i o no fa d j o i n to fa no p e r a t o rt h e nit follows thatU1satisfies properties of adjoint and henceU1is theadjoint ofU.I t i s t r i v i a l n o w t o s e eUU⇤=U⇤U=IConversely let80munotes.in

Page 97

adjoint exists and it isU1. We need to show thatUpreserves innerproduct.hUu,Uvi=hu, U⇤Uvi=hu, Ivi=hu, vifor alluandv.Definition 11.Ac o m p l e xn⇥nmatrixAis called unitary, ifA⇤A=I.Definition 12.Ar e a lo rc o m p l e xn⇥nmatrixAis said to be orthog-onal, ifAtA=I.Definition 13.LetAandBbe complexn⇥nmatrices. We say thatBis unitarily equivalent toAif there is ann⇥nunitary matrixPsuch thatB=P1AP. We say thatBis orthogonally equivalent toAif there is ann⇥northogonal matrixPsuch thatB=P1AP.5.5.2 Normal OperatorsDefinition 14.LetVbe a finite dimensional inner product space andTal i n e a ro p e r a t o ro nV.W e s a yt h a tTis normal if it commutes withits adjoint.i.e.,TT⇤=T⇤T.Observation 5.5.1.Any self-adjoint operator is normal. Any unitaryoperator is normal. Any scalar multiple of normal operator is normal.Note however that sums and products of normal operators are not, ingeneral, normal.Theorem 5.5.4.LetVbe an inner product space andTbe a selfadjoint linear operator onV. Then each characteristic value ofTisreal, and characteristics vectors associated ofTassociated with distinctcharacteristic values are orthogonal.Proof.Let↵be a characteristic value ofT.T h u sTu=↵ufor somenonzero vectoru.Then↵hu, ui=h↵u, ui=hTu,ui=hu, Tui=hu, ↵ui=↵hu, ui81munotes.in

Page 98

Sinceu6=0 ,w em u s th a v e↵=↵.i.e.,↵is real. Suppose we also haveTv=vwithv6=0 . T h e n↵hu, vi=hTu,vi=hu, Tvi=hu, vi=hu, vi=hu, viIf↵6=, then it follows thathu, vi=0 . P r o v i n go r t h o g o n a l i t yo fuandv.Theorem 5.5.5.On a finite dimensional inner product space of pos-itive dimension, every self adjoint operator has a (nonzero) character-istic vector.Proof.LetVbe a finite dimnsional inner product space of dimensionn,w h e r en>0. LetTbe a self adjoint operator onV.C h o o s e a northonormal basisBforVand letA=[T]BSinceT=T⇤,w eh a v eA=A⇤.L e tWbe the space ofn⇥1m a t r i c e so v e rC,w i t hi n n e rp r o d u c thX, Yi=Y⇤X.T h e nU(X)=AXdefines a self adjoint operatorUonW.T h e c h a r a c t e r i s t i c p o l y n o m i a l ,det(xIA), is a polynomial ofdegreenover the field of complex numbers. Every polynomial overChas a root. Thus there exists a complex number↵such thatdet(↵IA)=0 . T h i sm e a n st h a tA↵Iis singular, or that there exist a nonzeroXsuch thatAX=↵X.S i n c e m u l t i p l i c a t i o n b yAis self adjointit follows that↵is real. IfVis real then one may chooseXwith realentries. For thenAandA↵Ihave real entries, and sinceA↵Iissingular, the system (A↵I)X= 0 has a nonzero real solutionX.I nthis way we have thatTu=↵u..Theorem 5.5.6.LetVbe a finite dimensional inner product spaceand letTbe any linear operator onV. SupposeWis a subspace ofVwhich is invariant underT. Then the orthogonal complement ofWisinvariant underT⇤.Proof.:Wis invariant underTmeans ifuis inWthenTuis inW.Letvbe inW?.L e tu2W.T h e nTu2W. Now0=hTu,vi=hu, T⇤viThis shows thatu?T⇤v.T h i sp r o v e st h a tT⇤vis inW?.T h e r e f o r ei fvbe inW?thenT⇤vis inW?. Hence the proof.Theorem 5.5.7.IfTis a normal operator on finite dimensional innerproduct spaceVthen the operatorUdefined for any scalar↵byU=T↵Iis normal.82munotes.in

Page 99

Proof.Note that (T↵I)⇤=T⇤↵I.UU⇤=(T↵I)(T↵I)⇤=(T↵I)(T⇤↵I)=(T⇤↵I)(T↵I)=U⇤UThusUas defined above is normal.Theorem 5.5.8.LetVbe a finite dimensional inner product space andTa normal operator onV. Supposeuis a vector inV. Thenuis acharacteristic vector ofTwith characteristic value↵if and only ifuisa characteristic vector ofT⇤with characteristic value↵Proof.SupposeUis any normal operator onV.T h e n||Uu||2=hUu,Uui=hu, U⇤Uui=hu, UU⇤ui=hU⇤u, U⇤ui=||U⇤u||2Which implies that||Uu||=||U⇤u||.I f↵is any scalar then we saw inabove theorem that the operatorU=T↵Iis normal. Thus||(T↵I)u||=||(T⇤↵I)u||and (T↵I)u=0i fa n do n l yi f (T⇤↵I)u=0 .Definition 15.Ac o m p l e xn⇥nmatrix is called normal ifAA⇤=A⇤ATheorem 5.5.9.write theorem statement here.Proof.this is the proof of the theorem.If there is some corollaryto this theorem then you may write like this:Corollary 5.5.1.corollary to above theorem.If you want give some problems for practice then write this:Check Your ProgressProve or give counter example for thefollowing assertions wherev,w,zare vectors in a real inner productspaceH.1. Ifhv,wi=0a n dhv,zi=0t h e nhw,zi=02. Ifhv,zi=hw,zifor allz2H,t h e nv=w3. IfAis ann⇥nsymmetric matrix thenAis invertible.5.6 Chapter End Exercise1. Prove that an angle inscribed in a semicircle is a right angle.83munotes.in

Page 100

2. Letv,wbe vectors in the planeR2with lengths 3,5r e s p e c t i v e l y .What is the maxima and minima of the length ofv+w?3. LetAbe the following matrix. Show that the bilinear mapR3⇥R3!Rdefined byhx, yi=xTAyis a scalar product.A=2411/201/21000 1354. LetS⇢R4be the vectors that satisfyX=(x1,x2,x3,x4)t h a tsatisfyx1+x2x3+x4= 0. What is dimension ofS. Findorthogonal complement ofS.5. LetL:R3!R3be a linear map with the property thatLv?vfor everyv2R3Prove thatLcan not be invertible.Is a similar assertion is true for linear mapL:R2!R2?6. In a complex vector space with hermitian inner product on it, ifam a t r i xAsatisfies< x, Ax >=0f o ra l lv e c t o r sx,s h o wt h a tA=0 .7. LetAbe a square matrix of real numbers whose columns are(non zero) orthogonal vectors. Then show thatATAis a diagonalmatrix.84munotes.in

Page 101

Chapter 7Bilinear FormsChapter Structure7.1 Introduction7.2 Objectives7.3 Bilinear Form and its Types7.4 Chapter End Exercises7.1 IntroductionLinear transformation is a linear function of one variable. Thenquestion arises of defining a linear function of two variables, conceptof bilinear form arose out of this need. But again what does it meanby linearity in two variables? Natural answer to this question is thatwhat defines a bilinear form. Theory of bilinear forms (and multilinearforms) has developed by generalizing concepts from one variable ina most natural way. Here natural means we take for generalizationobvious choices and establish that they are unambiguous. We havetaken most of the text from book by Ho↵man and Kunz and care hasbeen taken that reader will have to go to original text, the least numberof times.7.2 ObjectivesAfter going through this chapter you will be able to:•Check whether given expression is bilinear form and classify whetherit is degenerate, non-degenerate, symmetric, skewsymmetric bilniearform•Find matrix of a bilinear form in the given basis and switching fromone basis to the other•Diagonalization of a bilinear form and find its signature101munotes.in

Page 102

7.3 Bilinear Form and its typesDefinition 24. Bilinear FormBilinear form on a vector spaceVisaf u n c t i o no ft w ov a r i a b l e so nV,w i t hv a l u e si nt h efi e l dFsatisfyingthe bilinear axioms which are-f(v1+v2,w)=f(v1,w)+f(v2,w)f(↵v, w)=↵f(v,w)f(v,w1+w2)=f(v,w1)+f(v,w2)f(v,↵w)=↵f(v,w)for allv,w,v1,w1,v2,w22Vand↵2FBilinear form will be denoted byhv,wi.Definition 25.Ab i l i n e a rf o r mi ss a i dt ob es y m m e t r i ci fhv,wi=hw,viand skew symmetric ifhv,wi=hw,viDefinition 26.Two vectorsu,vare called orthogonal with respect tosymmetric form ifhu, vi=0Definition 27.Ab a s i sBofVis called orthonormal basis with respectto the form if,hvi,vji=0f o ra l li6=jandhvi,vii=1f o ra l li.Remark 7.3.1.If the form is either symmetric or skew symmetric,then the linearity in the first variable follows from linearity in the secondvariable.ExampleLetAbe ann⇥nmatrix inFand definehv,wi=XtAYwhereXandYare co ordinates ofvandwrespectively in some basisofV.Then we see that this defines a bilinear form onV.T h i s c o i n c i d e s w i t husual inner product onVifA=I.Definition 28.Am a t r i xAis called symmetric ifAt=A.102munotes.in

Page 103

Theorem 7.3.1.Bilinear form given in above example is symmetric ifand only if matrixAis symmetric.Proof.Assume thatAis symmetric. SinceYtAXis a 1⇥1m a t r i x ,i ti sequal to its transpose:YtAX=(YtAX)t=XtAtY=XtAYand hencehY,Xi=hX,Yiand it follows that form is symmetric. Converselylet the form is symmetric. SetX=eiandY=ejwhereeiandejare elements of fixed basis. We find thathei,eji=etiAej=aijwhilehej,eii=etjAei=ajiand as the form is symmetric we get thataij=ajiand the matrixAis symmetric.Computation of the value of bilinear formLetv,w2VandletXandYbe their coordinates in the basisBso thatv=BXandw=BYThenhv,wi=hXivixi,XjvjyjiThis expands using bilinearity toPi,jxiyjhvi,vji=Pi, jxiaijyj=XtAYhv,wi=XtAYThus if we identifyVwithFnusing basisBthen bilinear form<, >corresponds toXtAYCorollary 7.3.1.LetAbe a matrix of a bilinear form with respect to abasis. The matricesA0which represents the same form with respect todi↵erent bases are the matricesA0=QAQtwhereQis arbitrary matrixinGLn(F).Proof.The change of basis is e↵ected byB=B0Pfor some matrixP.ThenX0=PX,Y0=PY.I fA0is the matrix of the form with respect toan e wb a s i sB0,t h e nb yd e fi n i t i o no fA0,hv,wi=X0tA0Y0=XtPtA0PYbut we also havehv,wi=XtAY.T h e r e f o r ePtA0P=ATheorem 7.3.2.The following properties of a realn⇥nare equivalent1.Arepresents dot product, with respect to some basis ofRn2. There is an invertible matrixP2GLn(R)such thatA=PtP3.Ais symmetric and positive definite.Proof.1 implies 2:The matrix of the dot product with respect tothe standard basis is the identity matrix:X.Y=XtIY.I f w echange basis, the matrix of the form changes toA=(P(1)t)I(P1)=(P(1)t)(P1)103munotes.in

Page 104

wherePis the matrix of change of basis. ThusAis of the formPtPand assertion in (2) follows.2 implies 3:PtPis always a symmetric and positive definite hencethis implication in (2) to (3) follows.3 implies 1:IfAis symmetric and positive definite then forhX,Yi=XtAYis also symmetric and positive definite.Definition 29.Ab i l i n e a rf o r mw h i c ht a k e so nb o t hp o s i t i v ea sw e l las negative values is called indefinite formFor example the Lorentz form defined bellow is an indefinite bilinearform.XtAY=x1y1+x2y2+x3y3c2x4y4The coecientcrepresenting speed of light can be normalized to 1,and then the matrix of the form with respect to given basis is given by0BB@11111CCATheorem 7.3.3.Suppose the symmetric formh,i>is not identicallyzero, then there exist a vectorv2Vwhich is not self orthogonal:hv,vi6=0.Proof.Since the form is not identically zero, we have two vectorsu, v2Vsuch thathu, vi6=0 . I fhv,vi6=0o rhu, ui6=0t h e nw eh a v et h etheorem proved. Otherwise supposehv,vi=0a n dhu, ui=0 . D e fi n ew=u+vand expandhw,wiusing bilinearity. We get,hw,wi=hu+v,u+vi=hu, ui+hu, vi+hv,ui+hv,vi=0 +hu, vi+hv,ui+0=2hu, visincehu, vi6= 0 it follows thathw,wi6=0 .Definition 30.LetWbe a subspace ofVthen following defined setis a subspace ofVknown as orthogonal complement ofW.W?=v2V|hv,Wi=0104munotes.in

Page 105

Theorem 7.3.4.Letw2Vbe a vector such thathw,wi6=0. LetW=↵wbe the span ofw. ThenVis the direct sum ofWand itsorthogonal complement:V=WMW?Proof.We prove this theorem in two main steps.1.WTW?=02.WandW?spanVFirst assertion follows aswis not self orthogonal and thereforeh↵w, wi=0,↵=0 . F o rs e c o n ds t e pw en e e dt os h o we v e r yv e c t o rv2Vcanbe expressed asv=↵w+v0for some unique↵andv02W?.I f w etake↵=hv,wihw,wiand setv0=v↵wthen the claim follows.Definition 31.Av e c t o rv2Vis called null vector for the given formif=0f o ra l lw2V.Definition 32.The null space of the form is the set of all null vectorsofVN=v|hv,Vi=0=V?Definition 33.As y m m e t r i cf o r mi ss a i dt ob en o n d e g e n e r a t ei ft h enull space is 0.Definition 34.An orthogonal basisB=(v1,v2,. . . ,vn)f o rV,w i t hrespect to a symmetric formh,iis a basis ofVsuch thatvi?vjfor alli6=jRemark 7.3.2.The matrixAof a form is defined byaij=hvi,vji,the basisBis orthogonal if and only ifAis diagonal matrix. If thesymmetric formh,iis nondegenerate and basisB=(v1,v2,. . . ,vn)i sorthogonal, thenhvi,vii6=0f o ra l li,t h ed i a g o n a le n t r i e so fAarenonzero.Theorem 7.3.5.Leth,ibe a symmetric form on a real vector spaceV.Vector space formThere is an orthogonal basis forV. More pre-cisely, there exist a basisB=(v1,v2,. . . ,vn)such thathvi,vji=0fori6=jand such that for eachi,hvi,viiis either1,1,or0.Matrix formLetAbe a real symmetricn⇥nmatrix. There is amatrixQ2GLn(R)such thatQAQtis a diagonal matrix each ofwhose diagonal entries is 1, -1, or 0.Remark 7.3.3.Matrix form of the above theorem follows from itsvector space form by noting that symmetric matrixAis a matrix ofsymmetric form on a vector space.105munotes.in

Page 106

Proof.We apply induction on dimensionnof vector spaceV. Assumethe result to be true for all vector spaces of dimension less than or equalton1. LetVbe a vector space of dimensionn.L e tt h ef o r mb en o tidentically zero. Then we know that there is a vectorv=v1which isnot self orthogonal i.e.hv1,v1i6=0 . L e tWbe the span ofv1.T h e nby earlier theorem we have thatV=WW?.a n d s o a b a s i s f o rVcan be obtained by combiningv1with any basis (v2,. . . ,vn)o fW?.using induction hypothesis, since dimension ofW?in1, we can take(v2,. . . ,vn)t ob eo r t h o g o n a l . T h e n(v1,v2,. . . ,vn)i so r t h o g o n a lb a s i so fV.F o r ,hv1,vii= 0 ifi>1b e c a u s evi2W?,a n dhvi,vji=0i fi, j >1andi6=j,b e c a u s e(v2,. . . ,vn)i sa no r t h o g o n a lb a s i s .W en o r m a l i z et h ebasis so constructed by solvingc2=±hvi,viiand replacingvibycvi.Thenhvi,viiis changed to±1.Remark 7.3.4.We can permute an orthogonal basis obtained in abovetheorem so that indices withhvi,vii=1a r et h efi r s to n e s ,a n di n d i c e swithhvi,vii=1w i l la p p e a ra f t e r w a r d sa n dt h o s ew i t hhvi,vii=0will appear in the last. Then the matrixAof the form will be0@IpIm0z1ATheorem 7.3.6. Sylvester’s LawThe numbersp, m, zappearing inabove matrix are uniquely determined by the form. In other words theydo not depend on the choice of orthogonal basisBsuch thathvi,vii=±1or0.Theorem 7.3.7.LetTbe a normal operator on a finite dimensionalcomplex inner product spaceVor a self adjoint operator on a finitedimensional real inner product spaceV. Let↵1,↵2,. . . ↵kbe the distinctcharacteristic values ofT. LetWjbe the characteristic space associatedwith↵jandEjthe orthogonal projection ofVonWj. ThenWjisorthogonal toWiwheni6=j,Vis the direct sum ofW1,. . . ,WK, andT=↵1E1+...+↵kEkProof.Letube a vector inWjandvbe a vector inWi,a n ds u p p o s ethati6=j.T h e n↵jhu, vi=hTu,vi=hu, T⇤vi=hu,↵ivi. Hence(↵j↵i)hu, vi=0a n ds i n c e↵j↵i6=0i tf o l l o w st h a thu, vi=0 .ThusWjis orthogonal toWiwheni6=j.F r o m t h e f a c t t h a tVhasan orthonormal basis consisting of characteristic vectors it follows that106munotes.in

Page 107

V=W1+...+Wk.I fuj2Wj(j=1,. . . k)a n du1+....+uk=0 ,t h e n0=hui,Xjuji=Xjhui,uji=||ui||2for everyi,s ot h a tVis a direct sum ofW1,. . . Wk.T h e r e f o r eE1+...Ek=IandT=TE1+...+TEk=↵1E1+...+↵kEKSuch a decomposition ofTis known as thespectral resolutionofT.BecauseE1,. . . Ekare canonically associated withTandI=E1+...+Ekthe family of projectionsE1,. . . Ekis called theresolution of theidentitydefined byT.7.3.1 Solved ProblemsIf you want give some problems for practice then write this:Example 11.Leth., .ibe a bilinear form onR2defined byh(x1,x2),(y1,y2)i=2x1y13x1y2+x2y21. Find the matrixAof this bilinear form in the basis{u1=( 1,0) andu2=(1,1)}2. Find the matrixBof given bilinear form in the basis{v1=(2,1) andv2=( 1,1)}3. Find the transition matrixPfrom the basis{ui}to{vi}andverify thatB=PtAPSolution:1. SetA=(aij)w h e r eaij=hui,ujia11=hu1,u1i=h(1,0),(0,1)i=20+0=2107munotes.in

Page 108

Rest of the entries in the matrix are calculated using followingformulaea12=hu1,u2ia21=hu2,u1ia22=hu2,u2iThus the matrixAis as followsA=✓2120◆2. Similarly matrixBisB=✓3906◆3. Now we writeV1a n dv2in terms ofu1andu2.(2,1) =u1+u2(1,1) = 2u1u2ThusP=✓1211◆and soPt=✓1121◆ThusPtAP=P=✓3906◆=BExample 12.For the following real symmetric matrixA,fi n dan o n -singular matrixPsuch thatPtAPis diagonal and also find its signature.A=0@1323752581ASolution:First form the block matrix (A, I)(A, I)=0@1321003750102580011AApply the row operationsR2!3R1+R2andR3!2R1+R3to(A, I)a n dt h e nc o r r e s p o n d i n gc o l u m no p e r a t i o n sC2!3C1+C2andC3!2C1+C3to A to obtain1All solved problems are taken from Scaum’s Outline Series-Theory and Prob-lems of Linear Algebra by Lipschutz108munotes.in

Page 109

0@13210 002131 00142011Aand then0@10010 002031 00142011ANext apply the row operationR3!R2+2R3and then correspondingcolumn operationC3!C2+2C3to obtain0@10010 002131 00091121Aand then0@10 010 002031 0001 81121ANowAhas been diagonalized. SetP=0@13101 100 21A;t h e nPtAP=0@10 0020001 81AThe signatureSofAisS=21=1 .Check Your Progress1. Determine which of the following bilinear forms are symmet-ric/skewsymmetric/nondegenerate/ degenerate:A=0@10 100111 21AA=0@1212011121AA=0@10 100111 21AA=0BB@110 11102000 1121 51CCA7.4 Chapter End Exercise109munotes.in

Page 110

1. Determine canonical form of the following real nondegeneratesymmetric bilinear formA=0BB@11011234034114151CCA2. Leth., .ibe a bilinear form onR2defined byh(x1,x2),(y1,y2)i=3x1y12x1y2+4x2y1x2y2(a) Find the matrixAof this bilinear form in the basis{u1=(1,1) andu2=( 1,2)}(b) Find the matrixBof given bilinear form in the basis{v1=(1,1) andv2=( 3,1)}(c) Find the transition matrixPfrom the basis{ui}to{vi}andverify thatB=PtAP3. LetVbe a finite dimensional vector space over a fieldFandh., .iis a symmetric bilinear form onV.F o r e a c h s u b s p a c eWofV,l e tW?be the set of all vectorsu2Vsuch thathu, vi=0 f o re v e r yv2W.S h o wt h a t(a)W?is a subspace ofV.(b)V=0?(c)V?=0i fa n do n l yi fhiis non degenerate(d) The restriction ofh., .itoWis nondegenerate if and only ifW\W?={0}110munotes.in