USIT204-Numerical-and-Statistical-methods-munotes

UNIT 1
1
MATHEMATICAL MODELING AND
ENGINEERING PROBLEM SOLVING
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Mathematical Modeling
1.3 Conservation Law and Engineering Problems
1.4 Summary
1.5 Exercises
1.0 Objectives
The objective of this chapter is to introduce the reader to mathematical modeling and its application in engineering problem solving. This chapter also discusses the conservation laws and how numerical methods are useful in engineering problem solving.
1.1 Introduction
Mathematics is widely used in solving real-world problems, especially due to the increasing computational power of digital computers and computing methods, which have facilitated the easy handling of lengthy and complicated problems. Numerical methods are techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations. Although there are many kinds of numerical methods, they have one common characteristic: they invariably involve large numbers of tedious arithmetic calculations. With the development of fast, efficient digital computers, the role of numerical methods in solving engineering problems has increased.
Translating a real-life problem into a mathematical form can give a better representation of certain problems and helps to find a solution for the problem.

There are many advantages of translating a real-life problem into a mathematical form:
- Mathematics helps us to formulate a real-world problem.
- Computers can be used to perform lengthy numerical calculations.
- Mathematics has well-defined rules for manipulations.
In the coming sections we will discuss the concepts of mathematical modelling and how to model a given problem. We will also discuss how mathematical modeling can be used in engineering problem solving.
1.2 Mathematical Modeling
The method of translating a real-life problem into a mathematical form is called mathematical modeling. Mathematical modeling is a tool widely used in science and engineering; it provides a rigorous description of various real-world phenomena and helps to understand and analyse them.
In mathematical modelling, we consider a real-world problem and write it as a mathematical problem. We then solve this mathematical problem and interpret the solution in terms of the real-world problem which was considered. We will see how to formulate a given problem using a very simple example.
Example 1.2.1: If 50 litres of petrol are required to travel 500 km, how much petrol is needed to go to a place which is 200 km away?
Solution. Step I: We will first mathematically formulate the given problem:
Let x be the litres of petrol needed and y the kilometres travelled. We know that the more the distance travelled, the more petrol is required; that is, x is directly proportional to y.
Thus, we can write
x = ky
where k is a positive constant.
Step II: Solving the model using the given data:
Since it is given that 50 litres of petrol are required to travel 500 km, we get
50 = k × 500
That is,
k = 1/10
Step III: Using the solution obtained in Step II to interpret the given problem:
To find the litres of petrol required to travel 200 km, we substitute the values in the mathematical model. That is,
x = (1/10) × 200
which gives
x = 20
That is, we need 20 litres of petrol to travel 200 km.
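The two modeling steps above condense into a few lines of code; the function and variable names here are illustrative only, not part of the text.

```python
def proportionality_constant(y_known, x_known):
    """k in the model x = k*y, fitted from one known observation."""
    return x_known / y_known

k = proportionality_constant(500, 50)   # 50 litres for 500 km: k = 0.1
print(k * 200)                          # 20.0 litres for 200 km
```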
Now, we will see one more example to get a better understanding of mathematical
modeling.
Example 1.2.2: A motorboat goes upstream on a river and covers the distance between two towns on the riverbank in 12 hours. It covers this distance downstream in 10 hours. If the speed of the stream is 3 km/hr, find the speed of the boat in still water.
Solution. Step I: We will first mathematically formulate the given problem:
Let x be the speed of the boat, t the time taken and y the distance travelled. Then, using the relation between speed, distance and time, we get
y = xt
Step II: Solving the model using the given data:
We know that while going upstream, the actual speed of the boat is equal to
speed of the boat − speed of the river
Hence, going upstream, the actual speed of the boat is x − 3.
Similarly, while going downstream, the actual speed of the boat is equal to
speed of the boat + speed of the river
Hence, going downstream, the actual speed of the boat is x + 3.
Since the time taken to travel upstream is 12 hours, we have
y = 12(x − 3)

Page 4

4NUMERICAL AND STATISTICAL METHODS
and as the time taken to travel downstream is 10 hours, we have
y = 10(x + 3)
Thus,
12(x − 3) = 10(x + 3)
which gives x = 33.
Step III: Using the solution obtained in Step II to interpret the given problem:
From Step II it is clear that the speed of the boat in still water is 33 km/hr.
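Solving t_up(x − s) = t_down(x + s) for x gives x = s(t_up + t_down)/(t_up − t_down), which a small helper (our own naming) can evaluate:

```python
def still_water_speed(t_up, t_down, stream_speed):
    """Solve t_up*(x - s) = t_down*(x + s) for the still-water speed x."""
    return stream_speed * (t_up + t_down) / (t_up - t_down)

print(still_water_speed(12, 10, 3))   # 33.0 km/hr, as in Example 1.2.2
```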
Our next example will show how mathematical modeling can be used to study the population of a country.
Example 1.2.3: Suppose the current population is 200,000,000 and the birth and death rates are 0.04 and 0.02, respectively. What will be the population in 5 years?
Solution. Step I: We will first mathematically formulate the given problem:
We know that the population of a country increases with births and decreases with deaths. Let t denote time in years, where t = 0 is the present time, and let p(t) denote the population at time t.
If B(t) denotes the number of births and D(t) the number of deaths in year t, then
p(t + 1) = p(t) + B(t) − D(t)
Let b = B(t)/p(t) be the birth rate for the interval [t, t + 1] and d = D(t)/p(t) be the death rate for the same interval. Then
p(t + 1) = p(t) + b p(t) − d p(t) = (1 + b − d) p(t)
Then, when t = 0, we get
p(1) = (1 + b − d) p(0)
Similarly, taking t = 1, we get
p(2) = (1 + b − d) p(1) = (1 + b − d)² p(0)
Continuing in this manner, we get
p(t) = (1 + b − d)^t p(0)
Taking (1 + b − d) = c, we get
p(t) = c^t p(0)
where c is called the growth rate.
Step II: Solving the model using the given data:
From the given data, we have b = 0.04 and d = 0.02. Thus
c = 1 + b − d = 1.02
Step III: Using the solution obtained in Step II to interpret the given problem:
The population in 5 years can be estimated as
p(5) = (1.02)^5 × 200,000,000 = 220,816,160.6
Since a population cannot be fractional, we round it to 220,816,161.
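The closed form p(t) = c^t p(0) translates directly into code (the names are illustrative):

```python
def project_population(p0, birth_rate, death_rate, years):
    """Population after `years`, using p(t) = (1 + b - d)**t * p(0)."""
    c = 1 + birth_rate - death_rate    # growth rate
    return c ** years * p0

p5 = project_population(200_000_000, 0.04, 0.02, 5)
print(round(p5))   # 220816161, as in Example 1.2.3
```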
1.3 Conservation Law and Engineering Problems
Conservation laws of science and engineering basically deal with
Change = Increase − Decrease
This equation incorporates the fundamental way in which conservation laws are used in engineering, that is, to determine the change with respect to time.
Another way to use conservation laws is the case in which the change is 0, that is,
Increase = Decrease
This case is also known as a Steady-State computation.
Let Q be the quantity of interest defined on a domain Ω. Then the rate of change of Q in a subdomain ω ⊆ Ω is determined by the total amount of Q produced or destroyed in ω and the flux of Q across the boundary ∂ω, that is, the amount of Q that either goes into or comes out of ω.
This can be mathematically expressed as
d/dt ∫_ω Q dx = ∫_ω S dx − ∫_∂ω F · v dσ(x)

where v is the unit outward normal, dσ(x) is the surface measure, and F and S are the flux and the quantity produced or destroyed, respectively.
On simplifying, using the integration by parts rule, we get
U_t + div(F) = S
where div(F) is the divergence.
When S is taken to be zero, we get
U_t + div(F) = 0
which is called the conservation law, as the only change in U comes from the quantity entering or leaving the domain of interest.
As an example, consider the following:
Heat Equation:
Assume that a rod is heated at one end and is left to cool, without providing any further source of heat. The heat spreads uniformly (that is, diffuses out) and the temperature of the rod becomes uniform after some time.
Here, let U be the temperature of the material. The diffusion of the heat is given by Fourier's law as
F(U) = −k∇U
where k is the conductivity of the medium.
Thus, the heat equation is obtained as
U_t − div(k∇U) = 0
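As a hedged illustration of how numerical methods simulate such laws, the one-dimensional form of the heat equation, U_t = k U_xx, can be stepped forward with an explicit finite-difference scheme: each interior value is replaced by the convex combination r·u[i−1] + (1 − 2r)·u[i] + r·u[i+1] with r = k·Δt/Δx² ≤ 0.5 for stability. The grid size, parameter value, and boundary conditions below are our own arbitrary choices, not from the text.

```python
def heat_step(u, r):
    """One explicit finite-difference step of U_t = k*U_xx.

    Each interior point becomes the convex combination
    r*u[i-1] + (1 - 2*r)*u[i] + r*u[i+1]; r = k*dt/dx**2 must be
    <= 0.5 for stability, and both end temperatures are held fixed.
    """
    return [u[0]] + [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
                     for i in range(1, len(u) - 1)] + [u[-1]]

u = [100.0] + [0.0] * 20        # rod heated at one end, far end held cold
for _ in range(500):
    u = heat_step(u, 0.4)       # heat diffuses toward the cold end
```

After enough steps the profile approaches its steady state, the discrete analogue of the temperature settling down when no further heat is supplied.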
Conservation laws arise in many models in science, and numerical methods play a very important role in approximating or simulating the solutions of conservation laws. Most engineering problems deal with conservation laws.
- Chemical engineering focuses on mass balances in reactors.
- In civil engineering, force balances are utilized to analyse structures.
- In mechanical engineering, the same principles are used to analyse the transient up-and-down motion or vibrations of an automobile.
- Electrical engineering uses conservation of energy for voltage balances.
1.4 Summary
This chapter
- gives an introduction to mathematical modeling.
- discusses the application of mathematical modelling in engineering problem solving.
- discusses the conservation laws and how numerical methods are useful in engineering problem solving.
1.5 Exercises
1. An investor invested ₹10,000/- at 10% simple interest per year. With the return from the investment, he wants to buy a T.V. that costs ₹20,000/-. After how many years will he be able to buy the T.V.?
2. A car starts from a place A and travels at a speed of 30 km/hr towards another place B. At the same time another car starts from B and travels towards A at a speed of 20 km/hr. If the distance between A and B is 120 km, after how much time will the cars meet?
3. A farmhouse uses at least 1000 kg of special food daily. The special food is a mixture of corn and bean with the following composition:

Material    Protein (per kg)    Fibre (per kg)    Cost per kg
Corn        0.09                0.02              ₹10
Bean        0.60                0.06              ₹20

The dietary requirements of the special food are at least 30% protein and at most 5% fibre. Formulate this problem to minimise the cost of the food.


UNIT 1
2
APPROXIMATION & ROUND-OFF ERRORS
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Significant Figures
2.3 Accuracy & Precision
2.4 Error Definitions
2.4.1 Calculation of Errors
2.4.2 Error Estimates for Iterative Methods
2.5 Round-Off Errors
2.6 Problems on Error Definitions, Significant Numbers & Accuracy
2.7 Summary
2.8 References
2.9 Exercises
2.0 Objectives
After reading this chapter, you should be able to:
1. Know the concept of significant digits.
2. Know the concepts of accuracy and precision.
3. Find the true and relative true error.
4. Find the approximate and relative approximate error.
5. Relate the absolute relative approximate error to the number of significant digits.
6. Know other different errors, including round-off error.
2.1 Introduction
Approximate numbers: There are two types of numbers: exact and approximate. Exact numbers are 2, 4, 9, 7/2, 6.45, etc., but there are numbers such as 4/3 (= 1.333…), √2 (= 1.414213…) and π (= 3.141592…) which cannot be expressed by a finite number of digits. These may be approximated by the numbers 1.3333, 1.4142, and 3.1416, respectively. Such numbers, which represent the given numbers to a certain degree of accuracy, are called approximate numbers.
Rounding-off: There are numbers with many digits, e.g., 22/7 = 3.142857143… In practice, it is desirable to limit such numbers to a manageable number of digits, such as 3.14 or 3.143. This process of dropping unwanted digits is called rounding-off.
In this chapter, information concerned with the quantification of error is discussed in the first sections. This is followed by a section on one of the two major forms of numerical error: round-off error. Round-off error is due to the fact that computers can represent only quantities with a finite number of digits.
2.2 Significant Figures
Whenever we use a number in a computation, we must have assurance that it can be used with confidence. For example, Fig. 2.1 depicts a speedometer and odometer from an automobile.

Fig. 2.1: An automobile speedometer and odometer illustrating the concept of a significant figure.

Visual inspection of the speedometer indicates the speed between 180 and 200
MPH. Because the indicator is below the midpoint between the markers on the gauge, it can be said with assurance that the car is traveling at approximately 190 MPH. However, let us say that we insist that the speed be estimated to one decimal place. For this case, one person might say 188.8, whereas another might say 188.9 MPH. Therefore, because of the limits of this instrument, only the first three digits can be used with confidence. Estimates of the fourth digit (or higher) must be viewed as approximations. It would be nonsensical to claim, on the basis of this speedometer, that the automobile is traveling at 188.8642138 MPH. In contrast, the odometer provides up to six certain digits. From Fig. 2.1, we can conclude that the car has traveled slightly less than 248.5 km during its lifetime. In this case, the fifth digit (and higher) is uncertain.
Definition: The number of significant figures or significant digits in the representation of a number is the number of digits that can be used with confidence. In particular, for our purposes, the number of significant digits is equal to the number of digits that are known (or assumed) to be correct plus one estimated digit.
o The concept of a significant figure, or digit, has been developed to formally designate the reliability of a numerical value.
o The significant digits of a number are those that can be used with confidence.
o They correspond to the number of certain digits plus one estimated digit.
Significant digits. The digits used to express a number are called significant digits. The digits 1, 2, 3, 4, 5, 6, 7, 8, 9 are significant digits. '0' is also a significant digit except when it is used to fix the decimal point or to fill the places of unknown or discarded digits.
For example, each of the numbers 7845, 3.589, and 0.4758 contains 4 significant figures, while the numbers 0.00386, 0.000587, 0.0000296 contain only three significant figures (since the zeros only help to fix the position of the decimal point). Similarly, in the number 0.0003090, the first four '0's are not significant digits since they serve only to fix the position of the decimal point and indicate the place values of the other digits. The other two '0's are significant.
To be more clear, the number 3.0686 contains five significant digits.
The significant figures in a number in positional notation consist of
1) All non-zero digits
2) Zero digits which
   - lie between significant digits,
   - lie to the right of the decimal point and at the same time to the right of a non-zero digit, or
   - are specifically indicated to be significant.
The significant figures in a number written in scientific notation (e.g., M × 10^k) consist of all the digits explicitly shown in M.
Example 1
Give some examples showing the number of significant digits.
Solution
a) 0.0459 has three significant digits
b) 4.590 has four significant digits
c) 4008 has four significant digits
d) 4008.0 has five significant digits
e) 1.079 × 10^3 has four significant digits
f) 1.0790 × 10^3 has five significant digits
g) 1.07900 × 10^3 has six significant digits
Significant digits are counted from left to right starting with the first non-zero digit on the left.
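The counting rules can be captured in a short helper function. This is an illustrative sketch of our own (not from the text), following the convention that trailing zeros of an integer such as 3900 are not counted:

```python
def significant_digits(numeral):
    """Count significant digits in a decimal numeral given as a string.

    Rules used: all non-zero digits count; leading zeros never count;
    trailing zeros count only when a decimal point is present.
    """
    s = numeral.lstrip('+-')
    has_point = '.' in s
    digits = s.replace('.', '').lstrip('0')   # drop sign, point, leading zeros
    if not has_point:
        digits = digits.rstrip('0')           # e.g. 3900 -> '39' -> 2 digits
    return len(digits)

print(significant_digits('0.00390'))   # 3
print(significant_digits('39.00'))     # 4
```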
A list is provided to help students understand how to calculate significant digits in a given number:

Number          Significant digits      Number of significant digits
3969            3, 9, 6, 9              4
3060            3, 0, 6                 3
3900            3, 9                    2
39.69           3, 9, 6, 9              4
0.3969          3, 9, 6, 9              4
39.00           3, 9, 0, 0              4
0.00039         3, 9                    2
0.00390         3, 9, 0                 3
3.0069          3, 0, 0, 6, 9           5
3.9 × 10^6      3, 9                    2
3.909 × 10^5    3, 9, 0, 9              4
6 × 10^-2       6                       1

The concept of significant figures has two important implications for the study of numerical methods:
1. Numerical methods yield approximate results. We must, therefore, develop criteria to specify how confident we are in our approximate result. One way to do this is in terms of significant figures. For example, we might decide that our approximation is acceptable if it is correct to four significant figures.
2. Although quantities such as π, e, or √7 represent specific quantities, they cannot be expressed exactly by a limited number of digits.
For example,
π = 3.141592653589793238462643. . .
ad infinitum. Because computers retain only a finite number of significant figures, such numbers can never be represented exactly. The omission of the remaining significant figures is called round-off error.
The concept of significant figures will have relevance to our definition of accuracy and precision in the next section.
Applications:
The output from a physical measuring device or sensor is generally known to be correct only up to a fixed number of digits.
For example, suppose a temperature is measured with a thermometer calibrated in tenths of a degree, and the reading lies between the 85.6 and 85.7 degree marks. The first three digits of the temperature are known, i.e., 8, 5, and 6, but we do not know the value of any subsequent digits.
The first unknown digit is sometimes estimated at half of the value of the calibration size, or 0.05 degrees in our case. If we did this, the temperature would be reported to four significant digits as 85.65 degrees.

Alternatively, we can choose to report the temperature to only three significant digits.
In this case, we could truncate or chop off the unknown digits to give a result of 85.6 degrees, or round off the result to the nearest tenth of a degree to give either 85.6 or 85.7 degrees, depending on whether the actual reading was more or less than half-way between the two calibrations.
Round-off is generally the preferred procedure in this example, but without knowing which technique was adopted, we would really only be confident in the first two digits of the temperature.
Hence, if the temperature is reported as 85.6 degrees without any further explanation, we do not know whether the 6 is a correct digit or an estimated digit.
2.3 Accuracy & Precision
The errors associated with both calculations and measurements can be characterized with regard to their accuracy and precision.
Accuracy refers to how closely a computed or measured value agrees with the true value; it tells us how close a measured value is to the actual value. It is associated with the quality of data and the number of errors present in the data set. Accuracy can be calculated using a single factor or measurement.
Precision tells us how close measured values are to each other. It often results in round-off errors. To calculate precision, multiple measurements are required.
Example: The distance between points A and B is 7.15. On measuring with different devices the distance appears as:
Data set 1: 6.34, 6.31, 6.32
Data set 2: 7.11, 7.19, 7.9
Data set 1 is more precise since its values are close to each other, and data set 2 is more accurate since its values are close to the actual value.
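These two notions can be quantified in a hedged way; the helper names below are our own (not standard terminology), using the mean distance from the true value for (in)accuracy and the sample standard deviation for (im)precision:

```python
import statistics

def inaccuracy(measurements, true_value):
    """Bias: distance of the mean measurement from the true value."""
    return abs(statistics.mean(measurements) - true_value)

def imprecision(measurements):
    """Scatter: sample standard deviation of the measurements."""
    return statistics.stdev(measurements)

set1 = [6.34, 6.31, 6.32]          # Data set 1 from the example
print(imprecision(set1))           # small spread: precise
print(inaccuracy(set1, 7.15))      # large bias: not accurate
```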
To round-off a number to n significant digits, discard all digits to the right of the nth digit, and if this discarded number is
o less than 5 in the (n + 1)th place, leave the nth digit unaltered, e.g., 7.893 to 7.89.
o greater than 5 in the (n + 1)th place, increase the nth digit by unity, e.g., 6.3456 to 6.346.
o exactly 5 in the (n + 1)th place, increase the nth digit by unity if it is odd, otherwise leave it unchanged, e.g., 12.675 ≅ 12.68 and 12.685 ≅ 12.68.
The number thus rounded-off is said to be correct to n significant figures. A list is provided for explanatory purposes:

Number        Rounded-off to:
              three digits    four digits    five digits
00.543241     00.543          00.5432        00.54324
39.5255       39.5            39.52          39.526
69.4155       69.4            69.42          69.416
00.667676     00.668          00.6677        00.66768

Graphical Illustration of Precision & Accuracy
Precision and accuracy are illustrated graphically using an analogy from target practice.
The bullet holes on each target in Fig. 2.2 can be thought of as the predictions of a numerical technique, whereas the bull's-eye represents the truth.
o Inaccuracy (also called bias) is defined as systematic deviation from the truth; it is illustrated by the circles on the right side.
o Imprecision (also called uncertainty), on the other hand, refers to the magnitude of the scatter; it is represented by the circles on the left side, where tightly grouped shots indicate high precision.

Fig 2.2 Graphical Illustration of Precision & Accuracy
2.4 Error Definitions
Machine epsilon
We know that a computer has a finite word length, so only a fixed number of digits is stored and used during computation. Hence, even in storing an exact decimal number in its converted form in the computer memory, an error is introduced. This error is machine dependent and is called machine epsilon.
In any numerical computation, we come across the following types of errors:
Inherent errors. Errors which are already present in the statement of a problem before its solution are called inherent errors. Such errors arise either due to the fact that the given data is approximate or due to the limitations of mathematical tables, calculators, or the digital computer.

Error = True value − Approximate value

Inherent errors can be minimized by taking better data or by using high
precision computing aids. Accuracy refers to the number of significant digits in a value; for example, 53.965 is accurate to 5 significant digits.
Precision refers to the number of decimal positions, or the order of magnitude of the last digit in the value. For example, in 53.965, the precision is 10^-3.
Example. Which of the following numbers has the greatest precision?
4.3201, 4.32, 4.320106.
Sol. In 4.3201, the precision is 10^-4.
In 4.32, the precision is 10^-2.
In 4.320106, the precision is 10^-6; hence 4.320106 has the greatest precision.
Truncation errors
Truncation errors are caused by using approximate results or by replacing an infinite process with a finite one.
If we are using a decimal computer having a fixed word length of 4 digits, rounding-off of 13.658 gives 13.66, whereas truncation gives 13.65. The difference in the last significant digit is the error that results from rounding-off rather than truncating a number.
Numerical errors arise from the use of approximations to represent exact mathematical operations and quantities. These include truncation errors, which result when approximations are used to represent exact mathematical procedures, and round-off errors, which result when numbers having limited significant figures are used to represent exact numbers.
For both types, the relationship between the exact, or true, result and the approximation can be formulated as

True value = approximation + error    (2.1)

By rearranging Eq. (2.1), we find that the numerical error is equal to the discrepancy between the truth and the approximation, as in

E_t = true value − approximation    (2.2)

where E_t is used to designate the exact value of the error. The subscript t is included to designate that this is the "true" error. This is in contrast to other cases, as described shortly, where an "approximate" estimate of the error must be employed.
A shortcoming of this definition is that it takes no account of the order of magnitude of the value under examination. For example, an error of a centimeter is much more significant if we are measuring a rivet rather than a bridge. One way to account for the magnitudes of the quantities being evaluated is to normalize the error to the true value, as in

True fractional relative error = true error / true value

where, as specified by Eq. (2.2), error = true value − approximation. The relative error can also be multiplied by 100 percent to express it as

ε_t = (true error / true value) × 100    (2.3)

where ε_t designates the true percent relative error.
2.4.1 Calculation of Errors
Example 2.4.1
Problem Statement. Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up with 9999 and 9 cm, respectively. If the true values are 10,000 and 10 cm, respectively, compute (a) the true error and (b) the true percent relative error for each case.
Solution.
(a) The error for measuring the bridge is [Eq. (2.2)]

E_t = 10,000 − 9999 = 1 cm

and for the rivet it is

E_t = 10 − 9 = 1 cm

(b) The percent relative error for the bridge is [Eq. (2.3)]

ε_t = (1 / 10,000) × 100 = 0.01%

and for the rivet it is

ε_t = (1 / 10) × 100 = 10%

Thus, although both measurements have an error of 1 cm, the relative error for the rivet is much greater. We would conclude that we have done an adequate job of measuring the bridge, whereas our estimate for the rivet leaves something to be desired.
Notice that for Eqs. (2.2) and (2.3), E and ε are subscripted with a t to signify that the error is normalized to the true value.
In real-world applications, we will obviously not know the true answer a priori. For these situations, an alternative is to normalize the error using the best available estimate of the true value, that is, the approximation itself, as

ε_a = (approximate error / approximation) × 100    (2.4)

where the subscript a signifies that the error is normalized to an approximate value.
Note also that for real-world applications, Eq. (2.2) cannot be used to calculate the error term for Eq. (2.4). One of the challenges of numerical methods is to determine error estimates in the absence of knowledge regarding the true value.
For example, certain numerical methods use an iterative approach to compute answers. In such an approach, a present approximation is made on the basis of a previous approximation. This process is performed repeatedly, or iteratively, to successively compute (we hope) better and better approximations.
For such cases, the error is often estimated as the difference between the previous and current approximations. Thus, the percent relative error is determined according to

ε_a = ((current approximation − previous approximation) / current approximation) × 100    (2.5)

The signs of Eqs. (2.2) through (2.5) may be either positive or negative. If the approximation is greater than the true value (or the previous approximation is greater than the current approximation), the error is negative; if the approximation is less than the true value, the error is positive. Also, for Eqs. (2.3) to (2.5), the denominator may be less than zero, which can also lead to a negative error. Often, when performing computations, we may not be concerned with the sign of the error, but we are interested in whether the absolute value of the percent relative error is lower than a prespecified percent tolerance ε_s. Therefore, it is often useful to employ the absolute value of Eqs. (2.2) through (2.5). For such cases, the computation is repeated until

|ε_a| < ε_s    (2.6)

If this relationship holds, our result is assumed to be within the prespecified
acceptable level ε_s. Note that for the remainder of this text, we will almost exclusively employ absolute values when we use relative errors.
It is also convenient to relate these errors to the number of significant figures in the approximation. It can be shown (Scarborough, 1966) that if the following criterion is met, we can be assured that the result is correct to at least n significant figures:

ε_s = (0.5 × 10^(2−n))%    (2.7)
2.4.2 Error Estimates for Iterative Methods
Problem Statement. In mathematics, functions can often be represented by infinite series. For example, the exponential function can be computed using

e^x = 1 + x + x^2/2! + x^3/3! + … + x^n/n!    (2.8)

Thus, as more terms are added in sequence, the approximation becomes a better and better estimate of the true value of e^x. Equation (2.8) is called a Maclaurin series expansion.
Starting with the simplest version, e^x ≈ 1, add terms one at a time to estimate e^0.5. After each new term is added, compute the true and approximate percent relative errors with Eqs. (2.3) and (2.5), respectively. Note that the true value is e^0.5 = 1.648721… Add terms until the absolute value of the approximate error estimate ε_a falls below a prespecified error criterion ε_s conforming to three significant figures.
Solution. First, Eq. (2.7) can be employed to determine the error criterion that ensures a result is correct to at least three significant figures:

ε_s = (0.5 × 10^(2−3))% = 0.05%

Thus, we will add terms to the series until ε_a falls below this level.
The first estimate is simply equal to Eq. (2.8) with a single term; thus, the first estimate is equal to 1. The second estimate is then generated by adding the second term, as in

e^x ≈ 1 + x

or, for x = 0.5,

e^0.5 ≈ 1 + 0.5 = 1.5

This represents a true percent relative error of [Eq. (2.3)]
ε_t = ((1.648721 − 1.5) / 1.648721) × 100 = 9.02%

Equation (2.5) can be used to determine an approximate estimate of the error, as in

ε_a = ((1.5 − 1) / 1.5) × 100 = 33.3%

Because ε_a is not less than the required value of ε_s, we would continue the computation by adding another term, x^2/2!, and repeating the error calculations. The process is continued until ε_a < ε_s.
The entire computation can be summarized as

Terms    Result         ε_t (%)     ε_a (%)
1        1              39.3        —
2        1.5            9.02        33.3
3        1.625          1.44        7.69
4        1.645833333    0.175       1.27
5        1.648437500    0.0172      0.158
6        1.648697917    0.00142     0.0158

Thus, after six terms are included, the approximate error falls below ε_s = 0.05% and the computation is terminated. However, notice that, rather than three significant figures, the result is accurate to five! This is because, for this case, both Eqs. (2.5) and (2.7) are conservative. That is, they ensure that the result is at least as good as they specify.
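The whole iteration is easy to automate; the function below is our own sketch, adding terms of Eq. (2.8) until the estimate of Eq. (2.5) falls below ε_s:

```python
def maclaurin_exp(x, es):
    """Sum terms of e**x = 1 + x + x**2/2! + ... until |eps_a| < es (percent)."""
    approx, term, n = 1.0, 1.0, 0     # start from the single-term estimate 1
    ea = 100.0
    while ea >= es:
        n += 1
        term *= x / n                 # next series term x**n / n!
        prev, approx = approx, approx + term
        ea = abs((approx - prev) / approx) * 100   # Eq. (2.5)
    return approx, n + 1, ea          # n + 1 terms were used in total

value, terms, ea = maclaurin_exp(0.5, es=0.05)
print(terms)   # 6 terms, as in the table
print(value)   # ~1.648697917
```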
2.5 Round-Off Errors
Rounding errors. Rounding errors arise from the process of rounding-off numbers during the computation. They are also called procedural errors or numerical errors. Such errors are unavoidable in most calculations due to the limitations of computing aids.
These errors can be reduced, however, by
I. changing the calculation procedure so as to avoid subtraction of nearly equal numbers or division by a small number;
II. retaining at least one more significant digit at each step and rounding-off at the last step.
Rounding-off may be executed in two ways:

Chopping. In chopping, the extra digits are dropped by truncation of the number. Suppose we are using a computer with a fixed word length of four digits; then a number like 12.92364 will be stored as 12.92.
We can express the number 12.92364 in floating-point form as
True x = 12.92364
       = 0.1292364 × 10^2 = (0.1292 + 0.0000364) × 10^2
       = 0.1292 × 10^2 + 0.364 × 10^(2 − 4)
       = fx · 10^E + gx · 10^(E − d)
       = Approximate x + Error
Thus,
Error = gx · 10^(E − d), where 0 ≤ gx < 1
Here, fx is the retained mantissa, gx the discarded part, d the length of the mantissa, and E the exponent.
Since 0 ≤ gx < 1,
Absolute error ≤ 10^(E − d)
Case I. If gx < 0.5, then approximate x = fx · 10^E.
Case II. If gx ≥ 0.5, then approximate x = fx · 10^E + 10^(E − d), so
Error = True value − Approximate value
      = fx · 10^E + gx · 10^(E − d) − fx · 10^E − 10^(E − d)
      = (gx − 1) · 10^(E − d)
and the absolute error ≤ 0.5 × 10^(E − d).
Symmetric round-off. In symmetric round-off, the last retained significant digit is rounded up by unity if the first discarded digit is ≥ 5; otherwise the last retained digit is left unchanged.
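Both procedures can be mimicked in code for d significant decimal digits. This is only an illustrative sketch using base-10 scaling (a real machine chops or rounds a binary mantissa):

```python
import math

def chop(x, d):
    """Keep d significant decimal digits by dropping the rest (chopping)."""
    e = math.floor(math.log10(abs(x))) + 1   # digits before the decimal point
    scale = 10 ** (d - e)
    return math.trunc(x * scale) / scale

def symmetric_round(x, d):
    """Keep d significant decimal digits with symmetric round-off."""
    e = math.floor(math.log10(abs(x))) + 1
    scale = 10 ** (d - e)
    return math.floor(x * scale + 0.5) / scale

print(chop(13.658, 4))             # 13.65
print(symmetric_round(13.658, 4))  # 13.66
```

The pair reproduces the text's example: with a four-digit word length, 13.658 truncates to 13.65 but rounds to 13.66.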
2.6 Problems on Error Definitions, Significant numbers
accuracy
Problem 1. Suppose 1.414 is used as an approximation to √2. Find the absolute and relative errors.
Sol. True value = 1.41421356
Approximate value = 1.414
Error = True value − Approximate value
      = 1.41421356 − 1.414
      = 0.00021356

Absolute error e_a = |Error| = |0.00021356| = 0.21356 × 10^−3
Relative error e_r = e_a / True value = 0.151 × 10^−3.
Problem 2. If 0.333 is the approximate value of 1/3, find the absolute, relative, and percentage errors.
Sol. True value (X) = 1/3 = 0.333333
Approximate value (X′) = 0.333
∴ Absolute error e_a = |X − X′| = |0.333333 − 0.333| = 0.000333
Relative error e_r = e_a / X = 0.000333 / 0.333333 = 0.000999
Percentage error e_p = e_r × 100 = 0.000999 × 100 = 0.0999
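The error definitions used in these problems can be collected into a small helper (a sketch; the function name is ours):

```python
def error_measures(true_value, approx):
    """Absolute, relative, and percentage errors as defined in this chapter."""
    e_a = abs(true_value - approx)   # absolute error
    e_r = e_a / abs(true_value)      # relative error
    e_p = e_r * 100                  # percentage error
    return e_a, e_r, e_p

# Problem 1: sqrt(2) approximated by 1.414
e_a, e_r, e_p = error_measures(2 ** 0.5, 1.414)
print(e_a)          # about 0.00021356

# Problem 2: 1/3 approximated by 0.333
e_a, e_r, e_p = error_measures(1 / 3, 0.333)
print(e_r, e_p)
```

With the exact value 1/3 the relative error comes out as 0.001; the 0.000999 in the worked solution arises from truncating the decimals before dividing.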
Problem 3. An approximate value of π is given by 3.1428571 and its true value is 3.1415926. Find the absolute and relative errors.
Sol. True value = 3.1415926
Approximate value = 3.1428571
Error = True value − Approximate value
      = 3.1415926 − 3.1428571
      = −0.0012645
Absolute error e_a = |Error| = 0.0012645
Relative error e_r = e_a / True value = 0.0012645 / 3.1415926 = 0.000402502.
Problem 4. Three approximate values of the number 1/3 are given as 0.30, 0.33, and 0.34. Which of these three is the best approximation?
Sol. The best approximation will be the one which has the least absolute error.
True value = 1/3 = 0.33333
Case I. Approximate value = 0.30
Absolute error = |True value − Approximate value| = |0.33333 − 0.30| = 0.03333
Case II. Approximate value = 0.33
Absolute error = |0.33333 − 0.33| = 0.00333
Case III. Approximate value = 0.34
Absolute error = |0.33333 − 0.34| = |−0.00667| = 0.00667
Since the absolute error is least in Case II, 0.33 is the best approximation.
Problem 5. Find the relative error of the number 8.6 if both of its digits are correct.
Sol. Since both digits are correct, e_a = (1/2) × 10^−1 = 0.05
e_r = e_a / x = 0.05 / 8.6 = 0.0058
Problem 6. Find the relative error if 2/3 is approximated to 0.667.
Sol. True value = 2/3 = 0.666666
Approximate value = 0.667
Absolute error e_a = |True value − Approximate value| = |0.666666 − 0.667| = 0.000334
Relative error e_r = 0.000334 / 0.666666 = 0.0005
Problem 7. Find the percentage error if 625.483 is approximated to three significant figures.
Sol. e_a = |625.483 − 625| = 0.483
e_r = e_a / 625.483 = 0.483 / 625.483 = 0.000772
e_p = e_r × 100 = 0.000772 × 100 = 0.0772

Problem 8. Round off the numbers 865250 and 37.46235 to four significant figures and compute e_a, e_r, e_p in each case.
Sol. (i) Number rounded off to four significant digits = 865200
X = 865250, X′ = 865200
Error = X − X′ = 865250 − 865200 = 50
Absolute error e_a = |Error| = 50
Relative error e_r = e_a / X = 50 / 865250 = 5.77 × 10^−5
Percentage error e_p = e_r × 100 = 5.77 × 10^−3
(ii) Number rounded off to four significant digits = 37.46
X = 37.46235, X′ = 37.46
Error = X − X′ = 0.00235
Absolute error e_a = |Error| = 0.00235
Relative error e_r = e_a / X = 0.00235 / 37.46235 = 6.2729 × 10^−5
Percentage error e_p = e_r × 100 = 6.2729 × 10^−3.
Problem 9. Round off the number 75462 to four significant digits and then calculate the absolute error and percentage error.
Sol. Number rounded off to four significant digits = 75460
Absolute error e_a = |75462 − 75460| = 2
Relative error e_r = e_a / 75462 = 2 / 75462 = 0.0000265
Percentage error e_p = e_r × 100 = 0.00265
Problem 10. Find the absolute, relative, and percentage errors if x is rounded off to three decimal digits. Given x = 0.005998.
Sol. Number rounded off to three decimal digits = 0.006
Error = 0.005998 − 0.006 = −0.000002
Absolute error e_a = |Error| = 0.000002
Relative error e_r = e_a / 0.005998 = 0.000002 / 0.005998 = 0.00033344
Percentage error e_p = e_r × 100 = 0.033344

2.7 Summary
The second chapter of this book introduces the learner to the concepts of approximation, rounding, significant digits, and error, which play a crucial role in solving a problem using numerical methods. Rules to round off given data and rules to determine significant digits are discussed, which are useful in achieving the desired accuracy in a given problem.
2.8 References
The following books are recommended for further reading:
1) Numerical Methods for Engineers, Steven C. Chapra and Raymond P. Canale, 6th Edition
2) Introductory Methods of Numerical Analysis, S. S. Sastry
3) Computer Based Numerical and Statistical Techniques, M. Goyal
4) Numerical and Statistical Methods, Bhupendra T. Kesria, Himalaya Publishing House
2.9 Exercises
1. Round off the following numbers correct to four significant digits:
3.26425, 35.46735, 4985561, 0.70035, 0.00032217, 1.6583, 30.0567, 0.859378, 3.14159.
2. The height of an observation tower was estimated to be 47 m, whereas its actual height was 45 m. Calculate the percentage relative error in the measurement.
3. If the true value = 10/3 and the approximate value = 3.33, find the absolute and relative errors.
4. Round off the following numbers to two decimal places:
48.21416, 2.3742, 52.275, 2.375, 2.385, 81.255.
5. If X = 2.536, find the absolute error and relative error when
(i) X is rounded off
(ii) X is truncated to two decimal digits.

6. If π = 22/7 is approximated as 3.14, find the absolute error, relative error, and percentage relative error.
7. Given the solution of a problem as X′ = 35.25 with the relative error in the solution at most 2%, find, to four decimal digits, the range of values within which the exact value of the solution must lie.
8. Given that:
a = 10.00 ± 0.05, b = 0.0356 ± 0.0002
c = 15300 ± 100, d = 62000 ± 500
Find the maximum value of the absolute error in
(i) a + b + c + d (ii) a + 5c − d (iii) d³.
9. What do you understand by the machine epsilon of a computer? Explain.
10. What do you mean by truncation error? Explain with examples.


UNIT 1

3

TRUNCATION ERRORS AND
THE TAYLOR SERIES

Unit Structure
3.0 Objectives
3.1 Introduction
3.2 The Taylor Series
3.2.1 Taylor’s Theorem
3.2.2 Taylor Series Approximation of a Polynomial
3.2.3 The Remainder for the Taylor Series Expansion
3.2.4 Error in Taylor Series
3.3 Error Propagation
3.3.1 Functions of a Single Variable
3.3.2 Propagation in a Function of a Single Variable
3.4 Total Numerical Errors
3.5 Formulation Errors and Data Uncertainty
3.6 Summary
3.7 References
3.8 Exercise
3.0 Objectives
After reading this chapter, you should be able to:
1. understand the basics of Taylor's theorem,
2. write trigonometric functions as Taylor polynomials,
3. use Taylor's theorem to find the value of a function at any point, given the values of the function and all its derivatives at a particular point,
4. calculate errors and error bounds when approximating a function by a Taylor series,
5. revisit this chapter whenever Taylor's theorem is used to derive or explain numerical methods for various mathematical procedures.

3.1 Introduction
Truncation errors are those that result from using an approximation in place of an exact mathematical procedure. A truncation error is introduced into a numerical solution because a difference equation only approximates the true value of the derivative. To gain insight into the properties of such errors, we now turn to a mathematical formulation that is used widely in numerical methods to express functions in an approximate manner: the Taylor series.
This chapter covers the basics of Taylor's theorem, the Taylor series, truncation errors, the remainder of the Taylor series, error propagation, and total numerical error.
3.2 The Taylor Series
Taylor's theorem and its associated formula, the Taylor series, are of great value in the study of numerical methods. In essence, the Taylor series provides a means to predict a function value at one point in terms of the function value and its derivatives at another point. In particular, the theorem states that any smooth function can be approximated by a polynomial.
A useful way to gain insight into the Taylor series is to build it term by term. For example, the first term in the series is

f(xi+1) ≈ f(xi)     (3.1)

This relationship, called the zero-order approximation, indicates that the value of f at the new point is the same as its value at the old point. This result makes intuitive sense because if xi and xi+1 are close to each other, the new value is probably similar to the old value.
Equation (3.1) provides a perfect estimate if the function being approximated is, in fact, a constant. However, if the function changes at all over the interval, additional terms of the Taylor series are required to provide a better estimate. For example, the first-order approximation is developed by adding another term to yield

f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi)     (3.2)

The additional first-order term consists of a slope f′(xi) multiplied by the distance between xi and xi+1. Thus, the expression is now in the form of a straight line and is capable of predicting an increase or decrease of the function between xi and xi+1.

Although Eq. (3.2) can predict a change, it is exact only for a straight-line, or linear, trend. Therefore, a second-order term is added to the series to capture some of the curvature that the function might exhibit:

f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi) + [f″(xi)/2!](xi+1 − xi)²     (3.3)

In a similar manner, additional terms can be included to develop the complete Taylor series expansion:

f(xi+1) = f(xi) + f′(xi)(xi+1 − xi) + [f″(xi)/2!](xi+1 − xi)² + [f‴(xi)/3!](xi+1 − xi)³ + ⋯ + [f⁽ⁿ⁾(xi)/n!](xi+1 − xi)ⁿ + Rn     (3.4)

Note that because Eq. (3.4) is an infinite series, an equal sign replaces the approximate sign that was used in Eqs. (3.1) through (3.3). A remainder term is included to account for all terms from n + 1 to infinity:

Rn = [f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!](xi+1 − xi)ⁿ⁺¹     (3.5)

where the subscript n connotes that this is the remainder for the nth-order approximation and ξ is a value of x that lies somewhere between xi and xi+1; ξ provides an exact determination of the error.
It is often convenient to simplify the Taylor series by defining a step size h = xi+1 − xi and expressing Eq. (3.4) as

f(xi+1) = f(xi) + f′(xi)h + [f″(xi)/2!]h² + [f‴(xi)/3!]h³ + ⋯ + [f⁽ⁿ⁾(xi)/n!]hⁿ + Rn     (3.6)

where the remainder term is now

Rn = [f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!]hⁿ⁺¹     (3.7)
3.2.1 Taylor's Theorem
If the function f and its first n + 1 derivatives are continuous on an interval containing a and x, then the value of the function at x is given by

f(x) = f(a) + f′(a)(x − a) + [f″(a)/2!](x − a)² + [f‴(a)/3!](x − a)³ + ⋯ + [f⁽ⁿ⁾(a)/n!](x − a)ⁿ + Rn     (3.8)

where the remainder Rn is defined as

Rn = ∫ from a to x of [(x − t)ⁿ/n!] f⁽ⁿ⁺¹⁾(t) dt     (3.9)

where t is a dummy variable. Equation (3.8) is called the Taylor series or Taylor's formula.
If the remainder is omitted, the right side of Eq. (3.8) is the Taylor polynomial approximation to f(x). In essence, the theorem states that any smooth function can be approximated by a polynomial.
Equation (3.9) is but one way, called the integral form, by which the remainder can be expressed. An alternative formulation can be derived on the basis of the integral mean-value theorem.
3.2.2 Taylor Series Approximation of a Polynomial
Problem Statement. Use zero- through fourth-order Taylor series expansions to approximate the function
f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2
from xi = 0 with h = 1. That is, predict the function's value at xi+1 = 1.
Solution. Because we are dealing with a known function, we can compute values of f(x) between 0 and 1.
Substituting x = 0 gives
f(0) = −0.1(0)⁴ − 0.15(0)³ − 0.5(0)² − 0.25(0) + 1.2 = 1.2
and substituting x = 1 gives
f(1) = −0.1(1)⁴ − 0.15(1)³ − 0.5(1)² − 0.25(1) + 1.2
     = −0.1 − 0.15 − 0.5 − 0.25 + 1.2 = 0.2
The results (Fig. 3.1) indicate that the function starts at f(0) = 1.2 and then curves downward to f(1) = 0.2. Thus, the true value that we are trying to predict is 0.2.
The Taylor series approximation with n = 0 is (Eq. 3.1)

f(xi+1) ≈ 1.2

Fig 3.1 The approximation of f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2 at x = 1 by zero-order, first-order, and second-order Taylor series expansions. [Figure omitted.]
Thus, as in Fig. 3.1, the zero-order approximation is a constant. Using this formulation results in a truncation error Et
Et = true value − approximation   [refer to Eq. (2.2) in the previous chapter]
Et = 0.2 − 1.2 = −1.0
at x = 1.
For n = 1, the first derivative must be determined and evaluated at x = 0:
f′(x) = −0.1(4)x³ − 0.15(3)x² − 0.5(2)x − 0.25 = −0.4x³ − 0.45x² − x − 0.25
f′(0) = −0.4(0.0)³ − 0.45(0.0)² − 1.0(0.0) − 0.25 = −0.25

Therefore, the first-order approximation is [Eq. (3.2)]
f(xi+1) = 1.2 − 0.25h
which can be used to compute f(1) = 0.95. Consequently, the approximation begins to capture the downward trajectory of the function in the form of a sloping straight line (Fig. 3.1). This reduces the truncation error to
Et = 0.2 − 0.95 = −0.75
For n = 2, the second derivative is evaluated at x = 0:
f″(x) = −1.2x² − 0.9x − 1.0
f″(0) = −1.2(0.0)² − 0.9(0.0) − 1.0 = −1.0
Therefore, according to Eq. (3.3),
f(xi+1) = 1.2 − 0.25h − 0.5h²
and substituting h = 1, f(1) = 0.45. The inclusion of the second derivative now adds some downward curvature, resulting in an improved estimate, as seen in Fig. 3.1. The truncation error is reduced further to 0.2 − 0.45 = −0.25.
Additional terms would improve the approximation even more. In fact, the inclusion of the third and the fourth derivatives results in exactly the same equation we started with:
f(xi+1) = 1.2 − 0.25h − 0.5h² − 0.15h³ − 0.1h⁴
where the remainder term is
R₄ = [f⁽⁵⁾(ξ)/5!]h⁵ = 0
because the fifth derivative of a fourth-order polynomial is zero. Consequently, the Taylor series expansion to the fourth derivative yields an exact estimate at xi+1 = 1:
f(1) = 1.2 − 0.25(1) − 0.5(1)² − 0.15(1)³ − 0.1(1)⁴ = 0.2
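The worked example can be checked numerically; this sketch simply re-evaluates the polynomial and the three truncated expansions:

```python
# f(x) = -0.1x^4 - 0.15x^3 - 0.5x^2 - 0.25x + 1.2, expanded about x_i = 0 with h = 1
def f(x):
    return -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

h = 1.0
true_value = f(1.0)                                   # 0.2
approximations = {
    "zero order":   1.2,                              # f(0)
    "first order":  1.2 - 0.25 * h,                   # + f'(0) h
    "second order": 1.2 - 0.25 * h - 0.5 * h**2,      # + f''(0)/2! h^2
}
for name, value in approximations.items():
    print(name, value, "Et =", true_value - value)    # Et shrinks: -1.0, -0.75, -0.25
```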
Eq. (3.7) is useful for gaining insight into truncation errors. This is because we do have control over the term h in the equation. In other words, we can choose how far away from x we want to evaluate f(x), and we can control the number of terms we include in the expansion. Consequently, Eq. (3.7) is usually expressed as
Rn = O(hⁿ⁺¹)
where the nomenclature O(hⁿ⁺¹) means that the truncation error is of the order of hⁿ⁺¹.

3.2.3 The Remainder for the Taylor Series Expansion
Before demonstrating how the Taylor series is actually used to estimate numerical errors, we must explain why we included the argument ξ in Eq. (3.7). A mathematical derivation is presented in Section 3.2.1. We will now develop an alternative exposition based on a somewhat more visual interpretation, and then extend this specific case to the more general formulation.
Suppose that we truncate the Taylor series expansion [Eq. (3.6)] after the zero-order term to yield
f(xi+1) ≈ f(xi)
A visual depiction of this zero-order prediction is shown in Fig. 3.2. The remainder, or error, of this prediction, which is also shown in the illustration, consists of the infinite series of terms that were truncated:
R₀ = f′(xi)h + [f″(xi)/2!]h² + [f‴(xi)/3!]h³ + ⋯
It is obviously inconvenient to deal with the remainder in this infinite series format. One simplification is to truncate the remainder itself, as in
R₀ ≈ f′(xi)h
Fig 3.2 Graphical depiction of a zero-order Taylor series prediction and remainder. [Figure omitted.]

The Taylor series appears in many aspects of numerical methods. For example, you must have come across expressions such as

(1) cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯
(2) sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
(3) eˣ = 1 + x + x²/2! + x³/3! + ⋯

All the above expressions are actually a special case of the Taylor series called the Maclaurin series. Why are these applications of Taylor's theorem important for numerical methods? Expressions such as those in Equations (1), (2), and (3) give you a way to find approximate values of these functions using only the basic arithmetic operations of addition, subtraction, division, and multiplication.
Example 1
Find the value of e^0.25 using the first five terms of the Maclaurin series.
Solution
The first five terms of the Maclaurin series for eˣ are
eˣ ≈ 1 + x + x²/2! + x³/3! + x⁴/4!
e^0.25 ≈ 1 + 0.25 + 0.25²/2! + 0.25³/3! + 0.25⁴/4! = 1.2840
The exact value of e^0.25 up to 5 significant digits is also 1.2840.
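A quick sketch of the partial-sum computation (the helper name is ours):

```python
from math import exp, factorial

def maclaurin_exp(x, terms):
    """Partial sum of e^x = 1 + x + x^2/2! + ... (first `terms` terms)."""
    return sum(x**k / factorial(k) for k in range(terms))

approx = maclaurin_exp(0.25, 5)
print(approx, exp(0.25))   # both equal 1.2840 to five significant digits
```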
But the above discussion and example do not answer our question of what a Taylor series is. Here it is, for a function f(x):

f(x + h) = f(x) + f′(x)h + [f″(x)/2!]h² + [f‴(x)/3!]h³ + ⋯     (4)

provided all derivatives of f(x) exist and are continuous between x and x + h.

As Archimedes would have said (without the fine print), "Give me the value of the function at a single point, and the value of all (first, second, and so on) of its derivatives, and I can give you the value of the function at any other point."
It is very important to note that the Taylor series is not asking for the expression of the function and its derivatives, just the value of the function and its derivatives at a single point.
Example 2
Take f(x) = sin x. We all know the value of sin(π/2) = 1. We also know that f′(x) = cos x and cos(π/2) = 0. Similarly f″(x) = −sin(x) and sin(π/2) = 1. In a way, we know the value of sin x and all its derivatives at x = π/2. We do not need a calculator; plain differential calculus and trigonometry will do. Can you use the Taylor series and this information to find the value of sin 2?
Solution
x = π/2, x + h = 2, so h = 2 − π/2 = 0.42920
The Taylor series is
f(x + h) = f(x) + f′(x)h + [f″(x)/2!]h² + [f‴(x)/3!]h³ + [f⁗(x)/4!]h⁴ + ⋯
with x = π/2 and h = 0.42920.
f(x) = sin x, so f(π/2) = sin(π/2) = 1

f′(x) = cos x, so f′(π/2) = 0
f″(x) = −sin x, so f″(π/2) = −1
f‴(x) = −cos x, so f‴(π/2) = 0
f⁗(x) = sin x, so f⁗(π/2) = 1
Hence
f(π/2 + h) = f(π/2) + f′(π/2)h + [f″(π/2)/2!]h² + [f‴(π/2)/3!]h³ + [f⁗(π/2)/4!]h⁴ + ⋯
sin 2 ≈ 1 + 0(0.42920) − (0.42920)²/2! + 0·(0.42920)³/3! + (0.42920)⁴/4!
      = 1 + 0 − 0.092106 + 0 + 0.00141393
      ≈ 0.90931
The value of sin 2 from a calculator is 0.90930, which is very close to the value we just obtained. You can get a better value by using more terms of the series. In addition, you can use the value calculated for sin 2, coupled with the value of cos 2 (which can be calculated by a Taylor series just like this example, or by using the identity sin²x + cos²x ≡ 1), to find the value of sin x at some other point. In this way, we can find the value of sin x for any value from x = 0 to 2π, and then use the periodicity of sin x, that is, sin x = sin(x + 2nπ), n = 1, 2, …, to calculate the value of sin x at any other point.
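The same computation can be sketched numerically; the list of derivative values at π/2 is taken from the example above:

```python
from math import factorial, pi, sin

# Derivatives of sin evaluated at pi/2 cycle through 1, 0, -1, 0, 1, ...
derivatives_at_pi_2 = [1, 0, -1, 0, 1]
h = 2 - pi / 2                                # about 0.42920
approx = sum(d * h**k / factorial(k) for k, d in enumerate(derivatives_at_pi_2))
print(approx, sin(2))   # about 0.90931 vs 0.90930
```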
Example 3
Derive the Maclaurin series of sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
Solution
In the previous example, we wrote the Taylor series for sin x around the point x = π/2. The Maclaurin series is simply a Taylor series about the point x = 0.

f(x) = sin x, f(0) = 0
f′(x) = cos x, f′(0) = 1
f″(x) = −sin x, f″(0) = 0
f‴(x) = −cos x, f‴(0) = −1
f⁗(x) = sin x, f⁗(0) = 0
f⁽⁵⁾(x) = cos x, f⁽⁵⁾(0) = 1
Using the Taylor series now with x = 0,
f(0 + h) = f(0) + f′(0)h + [f″(0)/2!]h² + [f‴(0)/3!]h³ + [f⁗(0)/4!]h⁴ + [f⁽⁵⁾(0)/5!]h⁵ + ⋯
        = 0 + (1)h + (0/2!)h² + (−1/3!)h³ + (0/4!)h⁴ + (1/5!)h⁵ + ⋯
        = h − h³/3! + h⁵/5! − ⋯
So f(x) = x − x³/3! + x⁵/5! − ⋯, that is, sin x = x − x³/3! + x⁵/5! − ⋯
Example 4
Find the value of f(6) given that f(4) = 125, f′(4) = 74, f″(4) = 30, f‴(4) = 6, and all other higher derivatives of f(x) at x = 4 are zero.
Solution
f(x + h) = f(x) + f′(x)h + [f″(x)/2!]h² + [f‴(x)/3!]h³ + ⋯
x = 4, h = 6 − 4 = 2
Since the fourth and higher derivatives of f(x) are zero at x = 4,

f(4 + 2) = f(4) + f′(4)(2) + [f″(4)/2!](2)² + [f‴(4)/3!](2)³
f(6) = 125 + 74(2) + (30/2!)(2)² + (6/3!)(2)³
     = 125 + 148 + 60 + 8
     = 341
Note that to find f(6) exactly, we only needed the value of the function and all its derivatives at some other point, in this case x = 4. We did not need the expression for the function and all its derivatives. A Taylor series application would be redundant if we needed to know the expression for the function, as we could just substitute x = 6 in it to get the value of f(6).
Actually, the problem posed above was obtained from a known function f(x) = x³ + 3x² + 2x + 5, where f(4) = 125, f′(4) = 74, f″(4) = 30, f‴(4) = 6, and all other higher derivatives are zero.
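The computation can be sketched as a single sum over the known derivative values (helper names are ours):

```python
from math import factorial

# f(4) and its first three derivatives; all higher derivatives vanish
derivatives_at_4 = [125, 74, 30, 6]
h = 6 - 4
f6 = sum(d * h**k / factorial(k) for k, d in enumerate(derivatives_at_4))
print(f6)   # 341.0
```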
3.2.4 Error in Taylor Series
As you have noticed, the Taylor series has infinitely many terms. Only in special cases, such as a finite polynomial, does it have a finite number of terms. So whenever you use a Taylor series to calculate the value of a function, the value is obtained only approximately.
The Taylor polynomial of order n of a function f(x) with (n + 1) continuous derivatives in the domain [x, x + h] is given by

f(x + h) = f(x) + f′(x)h + [f″(x)/2!]h² + ⋯ + [f⁽ⁿ⁾(x)/n!]hⁿ + Rn(x + h)

where the remainder is given by

Rn(x + h) = [hⁿ⁺¹/(n + 1)!] f⁽ⁿ⁺¹⁾(c)

where x < c < x + h, that is, c is some point in the domain (x, x + h).
Example 5
The Taylor series for eˣ at the point x = 0 is given by
eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + ⋯
a) What is the truncation (true) error in the representation of e¹ if only four terms of the series are used?
b) Use the remainder theorem to find the bounds of the truncation error.
Solution
a) If only four terms of the series are used, then
eˣ ≈ 1 + x + x²/2! + x³/3!
e¹ ≈ 1 + 1 + 1²/2! + 1³/3! = 2.66667
The truncation (true) error is the sum of the unused terms of the Taylor series:
Et = x⁴/4! + x⁵/5! + ⋯ evaluated at x = 1
   = 1/4! + 1/5! + ⋯
   ≈ 0.0516152
b) But is there any way to know the bounds of this error other than calculating it directly? The remainder is
Rn(x + h) = [hⁿ⁺¹/(n + 1)!] f⁽ⁿ⁺¹⁾(c), x < c < x + h,
where c is some point in the domain (x, x + h).

So in this case, if we are using four terms of the Taylor series, the remainder is given by (x = 0, n = 3)
R₃(0 + 1) = [1³⁺¹/(3 + 1)!] f⁽³⁺¹⁾(c) = f⁗(c)/4! = eᶜ/24
Since x < c < x + h, we have 0 < c < 1, and the error is bounded between
e⁰/24 < R₃ < e¹/24
1/24 < R₃ < e/24
0.041667 < R₃ < 0.113261
So the bound of the error is less than 0.113261, which does concur with the calculated error of 0.0516152.
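The true error and its bounds can be checked numerically (a sketch):

```python
from math import e, factorial

four_terms = sum(1 / factorial(k) for k in range(4))   # 1 + 1 + 1/2! + 1/3!
true_error = e - four_terms
lower, upper = 1 / 24, e / 24                          # bounds from e^c/4! with 0 < c < 1
print(four_terms, true_error)    # 2.66667 and about 0.0516
assert lower < true_error < upper
```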
Example 6
The Taylor series for eˣ at the point x = 0 is given by
eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + ⋯
As you saw in the previous example, taking more terms decreases the error bounds and hence gives a better estimate of e¹. How many terms would be required to get an approximation of e¹ with a magnitude of true error less than 10⁻⁶?

Solution
Using (n + 1) terms of the Taylor series gives an error bound of
Rn(x + h) = [hⁿ⁺¹/(n + 1)!] f⁽ⁿ⁺¹⁾(c)
With x = 0, h = 1, and f(x) = eˣ,
Rn(1) = [1ⁿ⁺¹/(n + 1)!] eᶜ = eᶜ/(n + 1)!
Since x < c < x + h, we have 0 < c < 1, so
|Rn| < e¹/(n + 1)! < 3/(n + 1)!
So if we want an approximation of e¹ with a magnitude of true error less than 10⁻⁶, we need
3/(n + 1)! < 10⁻⁶
(n + 1)! > 3 × 10⁶ (we do not yet know the value of e, but it is less than 3)
which gives n ≥ 9.
So taking n ≥ 9, that is, using the terms of the series up to and including the n = 9 term, will give e¹ within an error of 10⁻⁶ of its value.
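The same count can be found by brute force (a sketch of the inequality above):

```python
from math import factorial

# smallest n with 3/(n+1)! < 1e-6, i.e. (n+1)! > 3e6, matching the hand computation
n = 0
while 3 / factorial(n + 1) >= 1e-6:
    n += 1
print(n)   # 9
```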
We can do calculations such as the ones given above only for simple functions. To do a similar analysis of how many terms of the series are needed for a specified accuracy for a general function, we can use the concept of absolute relative approximate errors, discussed in Chapter 2, as follows.

We use the concept of the absolute relative approximate error (see Chapter 2 for details), which is calculated after each term in the series is added. The maximum value of m for which the absolute relative approximate error is less than 0.5 × 10^(2−m) % is the least number of significant digits correct in the answer. It establishes the accuracy of the approximate value of a function without knowledge of the remainder of the Taylor series or the true error.
3.3 Error Propagation
The purpose of this section is to study how errors in numbers can propagate through mathematical functions. For example, if we multiply two numbers that have errors, we would like to estimate the error in the product.
Numerical solutions involve a series of computational steps. Therefore, it is necessary to understand the way error propagates through progressive computations.
Propagation of error (or propagation of uncertainty) is the effect of the variables' uncertainties (or errors) on the uncertainty of a function based on them.
3.3.1 Functions of a Single Variable
Suppose that we have a function f(x) that is dependent on a single independent variable x. Assume that x̄ is an approximation of x. We would therefore like to assess the effect of the discrepancy between x and x̄ on the value of the function. That is, we would like to estimate

Δf(x̄) = |f(x) − f(x̄)|

The problem with evaluating Δf(x̄) is that f(x) is unknown because x is unknown. We can overcome this difficulty if x̄ is close to x and f is continuous and differentiable. If these conditions hold, a Taylor series can be employed to compute f(x) near f(x̄), as in

f(x) = f(x̄) + f′(x̄)(x − x̄) + [f″(x̄)/2!](x − x̄)² + ⋯

Dropping the second- and higher-order terms and rearranging yields
f(x) − f(x̄) ≈ f′(x̄)(x − x̄)
or

Δf(x̄) = |f′(x̄)| Δx̄     (3.3.1)

where Δf(x̄) = |f(x) − f(x̄)| represents an estimate of the error of the function and Δx̄ = |x − x̄| represents an estimate of the error of x. Equation (3.3.1) provides the capability to approximate the error in f(x) given the derivative of the function and an estimate of the error in the independent variable. Figure 3.3.1 is a graphical illustration of the operation.

FIGURE 3.3.1 Graphical depiction of first-order error propagation. [Figure omitted.]
3.3.2 Error Propagation in a Function of a Single Variable
Problem Statement
Given a value of x̄ = 2.5 with an error of Δx̄ = 0.01, estimate the resulting error in the function f(x) = x³.
Solution. Using Eq. (3.3.1),
Δf(x̄) ≈ 3(2.5)²(0.01) = 0.1875
Because f(2.5) = 15.625, we predict that
f(2.5) = 15.625 ± 0.1875
or that the true value lies between 15.4375 and 15.8125. In fact, if x were actually 2.49, the function could be evaluated as 15.4382, and if x were 2.51, it would be 15.8132. For this case, the first-order error analysis provides a fairly close estimate of the true error.
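Eq. (3.3.1) can be sketched directly; the helper name is ours:

```python
def propagated_error(fprime, xbar, dxbar):
    """First-order estimate |f'(xbar)| * delta-xbar of the error in f  [Eq. (3.3.1)]."""
    return abs(fprime(xbar)) * dxbar

# f(x) = x^3, so f'(x) = 3x^2; xbar = 2.5, error 0.01
df = propagated_error(lambda x: 3 * x**2, 2.5, 0.01)
fbar = 2.5 ** 3
print(df)                    # about 0.1875
print(fbar - df, fbar + df)  # interval about 15.4375 .. 15.8125
```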
3.4 Total Numerical Error
The total numerical error is the sum of the truncation and round-off errors.

FIGURE 3.4.1 A graphical depiction of the trade-off between round-off and truncation error that sometimes comes into play in the course of a numerical method. The point of diminishing returns is shown, where round-off error begins to negate the benefits of step-size reduction. [Figure omitted.]
In general, the only way to minimize round-off errors is to increase the number of significant figures of the computer. Further, we have noted that round-off error will increase due to subtractive cancellation or due to an increase in the number of computations in an analysis. The truncation error can be reduced by decreasing the step size. Because a decrease in step size can lead to subtractive cancellation or to an increase in computations, the truncation errors are decreased as the round-off errors are increased. Therefore, we are faced with the following dilemma: the strategy for decreasing one component of the total error leads to an increase of the other component. In a computation, we could conceivably decrease the step size to minimize truncation errors only to discover that in doing so, the round-off error begins to dominate the solution and the total error grows. Thus, our remedy becomes our problem (Fig. 3.4.1). One challenge that we face is to determine an
appropriate step size for a particular computation. We would like to choose a large step size in order to decrease the amount of calculations and round-off errors without incurring the penalty of a large truncation error. If the total error is as shown in Fig. 3.4.1, the challenge is to identify the point of diminishing returns where round-off error begins to negate the benefits of step-size reduction.
In actual cases, however, such situations are relatively uncommon because most computers carry enough significant figures that round-off errors do not predominate. Nevertheless, they sometimes do occur and suggest a sort of "numerical uncertainty principle" that places an absolute limit on the accuracy that may be obtained using certain computerized numerical methods.
3.5 Formulation Errors and Data Uncertainty
Formulation Errors
Formulation, or model, errors are those that result from incomplete mathematical models, when some latent effects are not taken into account or are ignored.
Data Uncertainty
These are errors resulting from the accuracy and/or precision of the data:
o Data may be measured with biased (underestimation/overestimation) or imprecise instruments.
o We can use descriptive statistics (viz. mean and variance) to provide a measure of the bias and imprecision.
3.6 Summary
The third chapter of this book discusses truncation error and the Taylor series expansion, which is one of the most widely used tools to illustrate truncation error and its error bound. Formulation error and data uncertainty are also discussed in this chapter.
3.7 References
The following books are recommended for further reading:
1) Introductory Methods of Numerical Analysis, S. S. Sastry

2) Numerical Methods for Engineers, Steven C. Chapra and Raymond P. Canale, 6th Edition
3) Numerical and Statistical Methods, Bhupendra T. Kesria, Himalaya Publishing House
4) Computer Based Numerical and Statistical Techniques, M. Goyal
3.8 Exercise
1. Finding Taylor polynomials
Find the Taylor polynomials of orders 0, 1, and 2 generated by f at a:
a) f(x) = 1/x, a = 2
b) f(x) = 1/(x + 2), a = 0
c) f(x) = sin x, a = π/4
d) f(x) = cos x, a = π/4
2. Find the Maclaurin series for the following functions:
a) e^(−x)
b) e^(x/2)
c) 1/(1 + x)
d) 1/(1 − x)
e) sin 3x
f) sin(x/2)
3. Finding Taylor series
Find the Taylor series generated by f at x = a:
a) f(x) = x³ − 2x + 4, a = 2
b) f(x) = 2x³ + x² + 3x − 8, a = 1
c) f(x) = x⁴ + x² + 1, a = −2

d) f(x) = 3x⁵ − x⁴ + 2x³ + x² − 2, a = −1
e) f(x) = 1/x², a = 1
f) f(x) = 1/(1 − x), a = 0
g) f(x) = eˣ, a = 2
h) f(x) = 2ˣ, a = 1
4. Use the Taylor series generated by eˣ at x = a to show that
eˣ = eᵃ[1 + (x − a) + (x − a)²/2! + ⋯]
5. (Continuation of the above problem.) Find the Taylor series generated by eˣ at x = 1. Compare your answer with the formula in Exercise 4.


UNIT 2

4

SOLUTIONS OF ALGEBRAIC AND
TRANSCENDENTAL EQUATIONS

Unit Structure
4.0 Objectives
4.1 Introduction
4.1.1 Simple and Multiple Roots
4.1.2 Algebraic and Transcendental Equations
4.1.3 Direct Methods and Iterative Methods
4.1.4 Intermediate Value Theorem
4.1.5 Rate of Convergence
4.2 Bisection Method
4.3 Newton-Raphson Method
4.3.1 Geometrical Interpretation
4.3.2 Rate of Convergence
4.4 Regula-Falsi Method
4.4.2 Rate of Convergence
4.5 Secant Method
4.5.2 Rate of Convergence
4.6 Geometrical Interpretation of Secant and Regula-Falsi Methods
4.7 Summary
4.8 Exercises
4.0 Objectives
This chapter will enable the learner to:
• understand the concepts of simple roots, multiple roots, algebraic equations, transcendental equations, direct methods, and iterative methods.

• find the roots of an equation using the Bisection method, Newton-Raphson method, Regula-Falsi method, and Secant method.
• understand the geometrical interpretation of these methods and derive their rates of convergence.
4.1 Introduction
The solution of an equation of the form f(x) = 0 is the set of all values which, when
substituted for the unknowns, make the equation true. Finding the roots of an equation is a
problem of great importance in the fields of mathematics and engineering. In this
chapter we see different methods to solve a given equation.
4.1.1 Simple and Multiple Roots
Definition 4.1.1.1 (Root of an Equation). A number ζ is said to be a root or a zero
of an equation f(x) = 0 if f(ζ) = 0.
Definition 4.1.1.2 (Simple Root). A number ζ is said to be a simple root of an
equation f(x) = 0 if f(ζ) = 0 and f′(ζ) ≠ 0. In this case we can write f(x) as
f(x) = (x − ζ)g(x), where g(ζ) ≠ 0.
Definition 4.1.1.3 (Multiple Root). A number ζ is said to be a multiple root of
multiplicity m of an equation f(x) = 0 if f(ζ) = 0, f′(ζ) = 0, ..., f⁽ᵐ⁻¹⁾(ζ) = 0 and
f⁽ᵐ⁾(ζ) ≠ 0. In this case we can write f(x) as
f(x) = (x − ζ)ᵐ g(x), where g(ζ) ≠ 0.
4.1.2 Algebraic and Transcendental Equations
Definition 4.1.2.1 (Algebraic Equation). A polynomial equation of the form
f(x) = a₀ + a₁x + a₂x² + ··· + aₙ₋₁xⁿ⁻¹ + aₙxⁿ = 0, aₙ ≠ 0, where aᵢ ∈ ℂ for
0 ≤ i ≤ n, is called an algebraic equation of degree n.
Definition 4.1.2.2 (Transcendental Equation). An equation which contains
exponential functions, logarithmic functions, trigonometric functions etc. is called
a transcendental equation.
4.1.3 Direct Methods and Iterative Methods
Definition 4.1.3.1 (Direct Methods). A method which gives an exact root in a
finite number of steps is called a direct method.
Definition 4.1.3.2 (Iterative Methods). A method based on successive
approximations, that is, starting with one or more initial approximations to the root
to obtain a sequence of approximations or iterates which converge to the root, is
called an iterative method.
4.1.4 Intermediate Value Theorem
Iterative methods are based on successive approximations to the root, starting with
one or more initial approximations. Choosing an initial approximation for an
iterative method plays an important role in solving the given equation in a smaller
number of iterates. An initial approximation to the root can be taken from the physical
considerations of the given problem or by graphical methods.
As an example of finding an initial approximation using physical considerations of
the given problem, consider f(x) = x³ − 28. Then, to find the root of f(x) = 0, one of the
best initial approximations is x = 3, as the cube of 3 is close to the given value 28.
To find an initial approximation graphically, consider the example of f(x) = x³ − 3x.

Figure 1: f(x) = x³ − 3x

The value of x at which the graph of f(x) intersects the x-axis gives the root of
f(x) = 0. From Figure 1, it is clear that the roots of f(x) = 0 lie close to 2 and
−2. Hence the best initial approximations will be 2 and −2.
The Intermediate Value Theorem is another commonly used method to obtain the initial
approximations to the root of a given equation.
Theorem 4.1.4.1 (Intermediate Value Theorem). If f(x) is a continuous function
on some interval [a, b] and f(a)f(b) < 0, then the equation f(x) = 0 has at least one
real root or an odd number of real roots in the interval (a, b).
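As an illustration (not part of the original text), the theorem suggests a simple way to locate initial approximations programmatically: scan the interval in equal steps and record every subinterval on which f changes sign. The function name and the step count below are illustrative choices.

```python
def sign_change_intervals(f, a, b, steps):
    """Scan [a, b] in equal steps; return subintervals where f changes sign.
    By the Intermediate Value Theorem each such subinterval contains a root."""
    h = (b - a) / steps
    found = []
    for i in range(steps):
        left, right = a + i * h, a + (i + 1) * h
        if f(left) * f(right) < 0:   # opposite signs => a root inside
            found.append((left, right))
    return found

# f(x) = x^3 - 3x from Figure 1 has roots at 0 and +-sqrt(3) ~ +-1.732.
brackets = sign_change_intervals(lambda x: x**3 - 3*x, -3.0, 3.0, 11)
print(brackets)  # three bracketing intervals, one around each root
```

Each returned interval can then seed the iterative methods described in the following sections.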
4.1.5 Rate of Convergence
An iterative method is said to have rate of convergence α if α is the largest
positive real number for which there exists a finite constant C ≠ 0 such that
|ε_{i+1}| ≤ C |ε_i|^α,
where ε_i = x_i − ζ is the error in the i-th iterate, for i ∈ ℕ ∪ {0}. The constant C is
called the asymptotic error constant.
4.2 Bisection Method
The Bisection method is based on the repeated application of the intermediate value
theorem. Let I₀ = (a₀, b₀) contain the root of the equation f(x) = 0. We find the
midpoint m₁ = (a₀ + b₀)/2 by bisecting the interval I₀. Let I₁ be the interval (a₀, m₁)
if f(a₀)f(m₁) < 0, or the interval (m₁, b₀) if f(m₁)f(b₀) < 0. Thus the interval I₁ also
contains the root of f(x) = 0. We bisect the interval I₁ and take I₂ as the subinterval
at whose end points the function f(x) takes values of opposite signs, and hence
I₂ also contains the root.
Repeating the process of bisecting intervals, we get a sequence of nested
subintervals I₀ ⊃ I₁ ⊃ I₂ ⊃ ··· such that each subinterval contains the root. The
midpoint of the last subinterval is taken as the desired approximate root.
Example 4.2.1. Find the smallest positive root of f(x) = x³ − 5x + 1 = 0 by
performing three iterations of the Bisection Method.
Solution: Here, f(0) = 1 and f(1) = −3. That is, f(0)f(1) < 0 and hence the smallest
positive root lies in the interval (0, 1). Taking a₀ = 0 and b₀ = 1, we get
(First Iteration)
m₁ = (a₀ + b₀)/2 = 0.5.
Since f(m₁) = −1.375 and f(a₀)f(m₁) < 0, the root lies in the interval (0, 0.5).
Taking a₁ = 0 and b₁ = 0.5, we get
(Second Iteration)
m₂ = (a₁ + b₁)/2 = 0.25.
Since f(m₂) = −0.234375 and f(a₁)f(m₂) < 0, the root lies in the interval (0, 0.25).
Taking a₂ = 0 and b₂ = 0.25, we get
(Third Iteration)
m₃ = (a₂ + b₂)/2 = 0.125.
Since f(m₃) = 0.37695 and f(m₃)f(b₂) < 0, the approximate root lies in the interval
(0.125, 0.25).
Since we have to perform three iterations, we take the approximate root as the midpoint
of the interval obtained in the third iteration, that is (0.125, 0.25). Hence the
approximate root is 0.1875.
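The iterations above can be automated. The following is a minimal sketch, not part of the original text; the function name and the fixed iteration count are illustrative choices.

```python
def bisect(f, a, b, iterations):
    """Repeatedly halve [a, b], keeping the half on which f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(iterations):
        m = (a + b) / 2.0
        if f(a) * f(m) < 0:
            b = m          # root lies in (a, m)
        else:
            a = m          # root lies in (m, b)
    return (a + b) / 2.0   # midpoint of the last subinterval

# Reproduce Example 4.2.1: three iterations for f(x) = x^3 - 5x + 1 on (0, 1).
root = bisect(lambda x: x**3 - 5*x + 1, 0.0, 1.0, 3)
print(root)  # 0.1875, matching the example
```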
4.3 Newton-Raphson Method
The Newton-Raphson method is based on approximating the given equation f(x) = 0
by a first degree equation in x. Thus, we write
f(x) = a₀x + a₁ = 0,
whose root is given by x = −a₁/a₀, where the parameters a₀ and a₁ are to be
determined. Let x_k be the kth approximation to the root. Then
f(x_k) = a₀x_k + a₁   (4.3.1)
and
f′(x_k) = a₀.   (4.3.2)
Substituting the value of a₀ in 4.3.1 we get a₁ = f(x_k) − f′(x_k)x_k. Hence,
x = −a₁/a₀ = x_k − f(x_k)/f′(x_k).
Representing the approximate value of x by x_{k+1} we get
x_{k+1} = x_k − f(x_k)/f′(x_k).   (4.3.3)
This method is called the Newton-Raphson method to find the roots of f(x) = 0.
Alternative:
Let x_k be the kth approximation to the root of the equation f(x) = 0 and h be an
increment such that x_k + h is an exact root. Then
f(x_k + h) = 0.
Using Taylor series expansion on f(x_k + h) we get,
f(x_k) + h f′(x_k) + (h²/2!) f″(x_k) + ··· = 0.
Neglecting the second and higher powers of h we get,
f(x_k) + h f′(x_k) = 0.
Thus,
h = −f(x_k)/f′(x_k).
We put x_{k+1} = x_k + h and obtain the iteration method as
x_{k+1} = x_k − f(x_k)/f′(x_k).
The Newton-Raphson method requires two function evaluations, f(x_k) and f′(x_k),
for each iteration.
4.3.1 Geometrical Interpretation
We approximate f(x) by a line taken as a tangent to the curve at x_k, which gives the
next approximation x_{k+1} as the x-intercept, as in Figure 2.
Example 4.3.1. Find the approximate root correct up to two decimal places for
f(x) = x⁴ − x − 10 using the Newton-Raphson Method with initial approximation x₀ = 1.

Figure 2: Newton-Raphson Method

Solution: Here f(x) = x⁴ − x − 10 = 0 implies f′(x) = 4x³ − 1. For x₀ = 1, f(x₀) = −10
and f′(x₀) = 3. By the Newton-Raphson iteration formula,
x₁ = x₀ − f(x₀)/f′(x₀) = 1 − (−10)/3 = 4.3333.
Thus,
x₂ = x₁ − f(x₁)/f′(x₁) = 3.2908.
Similarly, x₃ = 2.5562, x₄ = 2.0982, x₅ = 1.8956, x₆ = 1.8568, x₇ = 1.8555.
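Iteration formula 4.3.3 translates directly into code. This sketch is not from the text; the analytic derivative argument and the fixed iteration count are assumptions made here for illustration.

```python
def newton_raphson(f, df, x0, iterations):
    """Iterate x_{k+1} = x_k - f(x_k)/df(x_k)  (equation 4.3.3)."""
    x = x0
    for _ in range(iterations):
        x = x - f(x) / df(x)
    return x

# Example 4.3.1: f(x) = x^4 - x - 10 with x0 = 1.
f = lambda x: x**4 - x - 10
df = lambda x: 4 * x**3 - 1
print(newton_raphson(f, df, 1.0, 7))  # ~1.8556, close to the example's x7
```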
4.3.2 Rate of Convergence
Let f(x) be a continuous function. Then the Newton-Raphson method to find the
approximate root of f(x) = 0 is given by equation 4.3.3.
Let ζ be the exact root of f(x) = 0. Then we write x_{k+1}, x_k in terms of ζ as
x_{k+1} = ζ + ε_{k+1} and x_k = ζ + ε_k, where ε_i is the error in the i-th iteration.
Thus, equation 4.3.3 becomes
ζ + ε_{k+1} = ζ + ε_k − f(ζ + ε_k)/f′(ζ + ε_k).
That is,
ε_{k+1} = ε_k − f(ζ + ε_k)/f′(ζ + ε_k).   (4.3.4)
Using Taylor series expansion on f(ζ + ε_k) and f′(ζ + ε_k), we get
f(ζ + ε_k) = f(ζ) + ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ··· = ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ···,
since f(ζ) = 0, and
f′(ζ + ε_k) = f′(ζ) + ε_k f″(ζ) + ···.
Thus, equation 4.3.4 becomes
ε_{k+1} = ε_k − [ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ···] / [f′(ζ) + ε_k f″(ζ) + ···].
Neglecting the third and higher powers of ε_k, we get
ε_{k+1} = C ε_k²,
where C = f″(ζ)/(2f′(ζ)). Thus, the Newton-Raphson method has a second-order rate of
convergence.
4.4 Regula-Falsi Method
Given a continuous function f(x), we approximate it by a first-degree equation of
the form a₀x + a₁, with a₀ ≠ 0, in the neighbourhood of the root. Thus, we write
f(x) = a₀x + a₁ = 0.   (4.4.1)
Then
f(x_k) = a₀x_k + a₁   (4.4.2)
and
f(x_{k−1}) = a₀x_{k−1} + a₁.   (4.4.3)
On solving equations 4.4.2 and 4.4.3, we get
a₀ = (f(x_k) − f(x_{k−1}))/(x_k − x_{k−1})
and
a₁ = (x_k f(x_{k−1}) − x_{k−1} f(x_k))/(x_k − x_{k−1}).
From 4.4.1, we have x = −a₁/a₀. Hence, substituting the values of a₀ and a₁ and
writing x as x_{k+1}, we get
x_{k+1} = (x_{k−1} f(x_k) − x_k f(x_{k−1}))/(f(x_k) − f(x_{k−1})),   (4.4.4)
which can be expressed as
x_{k+1} = x_k − f(x_k)(x_k − x_{k−1})/(f(x_k) − f(x_{k−1})).   (4.4.5)
Here, we take the approximations x_k and x_{k−1} such that f(x_k)f(x_{k−1}) < 0. This method
is known as the Regula Falsi Method or False Position Method. This method requires
only one new function evaluation per iteration.
Example 4.4.1. Perform four iterations of the Regula Falsi method for
f(x) = x³ − 5x + 1 such that the root lies in the interval (0, 1).
Solution: Since the root lies in the interval (0, 1), we take x₀ = 0 and x₁ = 1. Then
f(x₀) = f(0) = 1 and f(x₁) = f(1) = −3. By the Regula Falsi Method,
x₂ = x₁ − f(x₁)(x₁ − x₀)/(f(x₁) − f(x₀)) = 1 − (−3)(1 − 0)/(−3 − 1) = 0.25
and f(x₂) = −0.234375.
As f(x₀)f(x₂) < 0 and f(x₁)f(x₂) > 0, by the Intermediate Value property, the root lies in
the interval (x₀, x₂). Hence
x₃ = x₂ − f(x₂)(x₂ − x₀)/(f(x₂) − f(x₀)) = 0.202532
and f(x₃) = −0.004352297.
Similarly, we get x₄ = 0.201654 and x₅ = 0.20164. Hence, we get the approximate
root as 0.20164.
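The bracketing bookkeeping of the example can be sketched in code. This is an illustrative implementation of formula 4.4.5, not from the text; the fixed iteration count is an assumption.

```python
def regula_falsi(f, x0, x1, iterations):
    """Keep a bracket with f(x0)f(x1) < 0; replace the endpoint that has the
    same sign as f at the new estimate (equation 4.4.5)."""
    for _ in range(iterations):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if f(x0) * f(x2) < 0:
            x1 = x2   # root lies in (x0, x2)
        else:
            x0 = x2   # root lies in (x2, x1)
    return x2

# Example 4.4.1: four iterations for f(x) = x^3 - 5x + 1 on (0, 1).
print(regula_falsi(lambda x: x**3 - 5*x + 1, 0.0, 1.0, 4))  # ~0.20164
```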
4.4.2 Rate of Convergence
Let f(x) be a continuous function. Then the Regula Falsi method to find the
approximate root of f(x) = 0 is given by equation 4.4.5. Let ζ be the exact root of
f(x) = 0. Then we write x_{k+1}, x_k, x_{k−1} in terms of ζ as
x_{k+1} = ζ + ε_{k+1},  x_k = ζ + ε_k,  x_{k−1} = ζ + ε_{k−1},
where ε_i is the error in the i-th iteration.
Hence, x_k − x_{k−1} = ε_k − ε_{k−1}. Thus, equation 4.4.5 becomes
ε_{k+1} = ε_k − f(ζ + ε_k)(ε_k − ε_{k−1}) / [f(ζ + ε_k) − f(ζ + ε_{k−1})].
Applying Taylor expansion on f(ζ + ε_k) and f(ζ + ε_{k−1}), and using that ζ is an
exact root of f(x) = 0 (so that f(ζ) = 0), we get
ε_{k+1} = ε_k − [ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ···](ε_k − ε_{k−1}) / [(ε_k − ε_{k−1}) f′(ζ) + ((ε_k² − ε_{k−1}²)/2!) f″(ζ) + ···].
Neglecting the higher powers of ε_k and ε_{k−1}, we get
ε_{k+1} = C ε_k ε_{k−1},   (4.4.6)
where C = f″(ζ)/(2f′(ζ)).
Since in the Regula Falsi method one of x₀ or x₁ is fixed, equation 4.4.6 becomes
ε_{k+1} = (C ε₀) ε_k, if x₀ is fixed, and ε_{k+1} = (C ε₁) ε_k, if x₁ is fixed.
In both cases, the rate of convergence is 1. Thus, the Regula Falsi method has a
linear rate of convergence.
4.5 Secant Method
Given a continuous function f(x), we approximate it by a first-degree equation of
the form a₀x + a₁, with a₀ ≠ 0, in the neighbourhood of the root. Thus, we write
f(x) = a₀x + a₁ = 0.   (4.5.1)
Then
f(x_k) = a₀x_k + a₁   (4.5.2)
and
f(x_{k−1}) = a₀x_{k−1} + a₁.   (4.5.3)
On solving equations 4.5.2 and 4.5.3, we get
a₀ = (f(x_k) − f(x_{k−1}))/(x_k − x_{k−1})
and
a₁ = (x_k f(x_{k−1}) − x_{k−1} f(x_k))/(x_k − x_{k−1}).
From 4.5.1, we have x = −a₁/a₀. Hence, substituting the values of a₀ and a₁ and
writing x as x_{k+1}, we get
x_{k+1} = (x_{k−1} f(x_k) − x_k f(x_{k−1}))/(f(x_k) − f(x_{k−1})),   (4.5.4)
which can be expressed as
x_{k+1} = x_k − f(x_k)(x_k − x_{k−1})/(f(x_k) − f(x_{k−1})).   (4.5.5)
Unlike in the Regula Falsi method, here the approximations x_k and x_{k−1} need not
bracket the root. This method is known as the Secant Method or Chord Method.

Example 4.5.1. Using the Secant Method, find the root of f(x) = cos x − xeˣ = 0, taking
the initial approximations as 0 and 1.
Solution: Here x₀ = 0, f(x₀) = 1 and x₁ = 1, f(x₁) = −2.17798. Using the Secant formula
x_{k+1} = x_k − f(x_k)(x_k − x_{k−1})/(f(x_k) − f(x_{k−1})),
we get
x₂ = 0.314665
and f(x₂) = 0.519872. Then
x₃ = 0.446728
and f(x₃) = 0.2036. Similarly, we get x₄ = 0.531706 and x₅ = 0.51691. Hence the
approximate root is 0.51691.
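A hedged sketch of formula 4.5.5 in code (not from the text): no sign bracket is maintained, and the two most recent iterates are always used. The fixed iteration count is an illustrative choice.

```python
import math

def secant(f, x0, x1, iterations):
    """Iterate equation 4.5.5 using the two most recent approximations."""
    for _ in range(iterations):
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

# Example 4.5.1: f(x) = cos x - x e^x with x0 = 0, x1 = 1.
f = lambda x: math.cos(x) - x * math.exp(x)
print(secant(f, 0.0, 1.0, 4))  # ~0.5169, close to the example's x5
```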
4.5.2 Rate of Convergence
Let f(x) be a continuous function. Then the Secant method to find the approximate
root of f(x) = 0 is given by equation 4.5.5. Let ζ be the exact root of f(x) = 0. Then we
write x_{k+1}, x_k, x_{k−1} in terms of ζ as
x_{k+1} = ζ + ε_{k+1},  x_k = ζ + ε_k,  x_{k−1} = ζ + ε_{k−1},
where ε_i is the error in the i-th iteration.
Hence, x_k − x_{k−1} = ε_k − ε_{k−1}. Thus, equation 4.5.5 becomes
ε_{k+1} = ε_k − f(ζ + ε_k)(ε_k − ε_{k−1}) / [f(ζ + ε_k) − f(ζ + ε_{k−1})].
Applying Taylor expansion on f(ζ + ε_k) and f(ζ + ε_{k−1}) we get
f(ζ + ε_k) = f(ζ) + ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ···
and
f(ζ + ε_{k−1}) = f(ζ) + ε_{k−1} f′(ζ) + (ε_{k−1}²/2!) f″(ζ) + ···.
Since ζ is an exact root of f(x) = 0, we have f(ζ) = 0 and hence
ε_{k+1} = ε_k − [ε_k f′(ζ) + (ε_k²/2!) f″(ζ) + ···](ε_k − ε_{k−1}) / [(ε_k − ε_{k−1}) f′(ζ) + ((ε_k² − ε_{k−1}²)/2!) f″(ζ) + ···].
Neglecting the higher powers of ε_k and ε_{k−1}, we get
ε_{k+1} = C ε_k ε_{k−1},   (4.5.6)
where C = f″(ζ)/(2f′(ζ)).
Considering the general equation of rate of convergence, we have
ε_{k+1} = A ε_k^p,   (4.5.7)
which implies
ε_k = A ε_{k−1}^p.
Then
ε_{k−1} = (ε_k/A)^(1/p).
Substituting the value of ε_{k−1} in 4.5.6 we get
A ε_k^p = C ε_k (ε_k/A)^(1/p).
From equation 4.5.7,
A ε_k^p = C A^(−1/p) ε_k^(1 + 1/p).   (4.5.8)
Comparing the powers of ε_k we get
p = 1 + 1/p,
which implies
p² − p − 1 = 0.
Neglecting the negative value of p, we get the rate of convergence of the Secant method
as p = (1 + √5)/2 ≈ 1.618. On comparing the constants of 4.5.8 we get,
A^(1 + 1/p) = C, that is, A = C^(1/p).
Thus, ε_{k+1} = A ε_k^p, where p ≈ 1.618 and A = (f″(ζ)/(2f′(ζ)))^(1/p).
4.6 Geometrical Interpretation of Secant and Regula Falsi Method
Geometrically, in the Secant and Regula Falsi methods we replace the function f(x) by
a chord passing through (x_k, f_k) and (x_{k−1}, f_{k−1}). We take the next root approximation
as the point of intersection of the chord with the x-axis.

Figure 3: Secant Method    Figure 4: Regula Falsi Method

4.7 Summary
In this chapter, iteration methods to find the approximate roots of an equation are
discussed.
The concepts of simple roots, multiple roots, algebraic and transcendental
equations are discussed.
The Intermediate Value property, used to find the initial approximations to the root, is
discussed. The Bisection method, Newton-Raphson method, Regula Falsi method and
Secant method to find the approximate root of an equation are discussed.
The geometrical interpretation and rate of convergence of each method are discussed.
4.8 Exercises
1. Perform four iterations of the Bisection method to obtain a root of
f(x) = cos x − xeˣ.
2. Determine the initial approximation to find the smallest positive root for
f(x) = x⁴ − 3x² + x − 10, and find the root correct to five decimal places by the
Newton-Raphson Method.
3. Perform four iterations of the Newton-Raphson method to obtain the approximate
value of 17^(1/3), starting with the initial approximation x₀ = 2.
4. Using the Regula Falsi Method, find the root of f(x) = cos x − xeˣ = 0, taking the
initial approximations as 0 and 1.
5. Perform three iterations of the Secant Method for f(x) = x³ − 5x + 1 such that the
root lies in the interval (0, 1).
6. For f(x) = x⁴ − x − 10, determine the initial approximations to find the smallest
positive root correct up to three decimal places using the Newton-Raphson
method, Secant Method and Regula Falsi Method.
7. Show that the Newton-Raphson method leads to the recurrence
x_{k+1} = (1/2)(x_k + a/x_k)
to find the square root of a.
8. For f(x) = x − e⁻ˣ = 0, determine the initial approximation to find the smallest
positive root. Find the root correct to three decimal places using the Regula Falsi
and Secant methods.
9. Find the approximate root correct up to three decimal places for f(x) = cos x −
xeˣ using the Newton-Raphson Method with initial approximation x₀ = 1.


™™™
UNIT 2
5
INTERPOLATION
Unit Structure
5.0 Objectives
5.1 Introduction
5.1.1 Existence and Uniqueness of Interpolating Polynomial
5.2 Lagrange Interpolation
5.2.1 Linear Interpolation
5.2.2 Quadratic Interpolation
5.2.3 Higher Order Interpolation
5.3 Newton Divided Difference Interpolation
5.3.1 Linear Interpolation
5.3.2 Higher Order Interpolation
5.4 Finite Difference Operators
5.5 Interpolating polynomials using finite difference operators
5.5.1 Newton Forward Difference Interpolation
5.5.2 Newton Backward Difference Interpolation
5.6 Summary
5.7 Exercises
5.0 Objectives
This chapter will enable the learner to:
• Understand the concepts of interpolation and interpolating polynomial. Prove
the existence and uniqueness of an interpolating polynomial.
• Find the interpolating polynomial using the Lagrange method and the Newton
Divided Difference method.
• Understand the concepts of finite difference operators and relate the
difference operators and divided differences.
• Find the interpolating polynomial using finite differences, via the Newton
Forward Difference and Backward Difference Interpolation.
5.1 Introduction
If we have a set of values of a function y = f(x) as follows:

x : x₀  x₁  x₂  ···  xₙ
y : y₀  y₁  y₂  ···  yₙ

then the process of finding the value of f(x) corresponding to any value x = xᵢ
between x₀ and xₙ is called interpolation. If the function f(x) is explicitly known,
then the value of f(x) corresponding to any value of x can be found easily. But
if the explicit form of the function is not known, then finding the value of f(x) is not
easy. In this case, we approximate the function by simpler functions, like
polynomials, which assume the same values as those of f(x) at the given points x₀,
x₁, x₂, ···, xₙ.
Definition 5.1.1 (Interpolating Polynomial). A polynomial P(x) is said to be an
interpolating polynomial of a function f(x) if the values of P(x) and/or certain of its
derivatives coincide with those of f(x) for the given values of x.
If we know the values of f(x) at n + 1 distinct points, say x₀ < x₁ < x₂ < ··· < xₙ, then
interpolation is the process of finding a polynomial P(x) such that
(a) P(xᵢ) = f(xᵢ), i = 0, 1, 2, ···, n
or
(a) P(xᵢ) = f(xᵢ), i = 0, 1, 2, ···, n
(b) P′(xᵢ) = f′(xᵢ), i = 0, 1, 2, ···, n

5.1.1 Existence and Uniqueness of Interpolating Polynomial
Theorem 5.1.1.1. Let f(x) be a continuous function on [a, b] and let
a = x₀ < x₁ < x₂ < ··· < xₙ = b be n + 1 distinct points such that the value of
f(x) is known at these points. Then there exists a unique polynomial P(x) such that
P(xᵢ) = f(xᵢ), for i = 0, 1, 2, ···, n.
Proof. We intend to find a polynomial P(x) = a₀ + a₁x + a₂x² + ··· + aₙxⁿ such that
P(xᵢ) = f(xᵢ), for i = 0, 1, 2, ···, n. That is,
a₀ + a₁xᵢ + a₂xᵢ² + ··· + aₙxᵢⁿ = f(xᵢ), i = 0, 1, 2, ···, n.   (5.1.1.1)
The polynomial P(x) exists only if the system of equations 5.1.1.1 has a unique
solution, that is, only if the Vandermonde determinant of the system (in other words,
the determinant of the coefficient matrix) is non-zero. This determinant equals the
product of the factors (xᵢ − xⱼ) over all 0 ≤ j < i ≤ n, and since the points xᵢ are
distinct, every factor is non-zero. Hence the system has a unique solution, and the
interpolating polynomial P(x) exists and is unique.
5.2 Lagrange Interpolation
5.2.1 Linear Interpolation
For linear interpolation, n = 1 and we find a 1-degree polynomial
P₁(x) = a₁x + a₀
such that
f(x₀) = P₁(x₀) = a₁x₀ + a₀
and
f(x₁) = P₁(x₁) = a₁x₁ + a₀.
We eliminate a₀ and a₁ to obtain P₁(x) as follows:
| P₁(x)  x   1 |
| f(x₀)  x₀  1 | = 0.
| f(x₁)  x₁  1 |
On simplifying, we get
P₁(x)(x₀ − x₁) − f(x₀)(x − x₁) + f(x₁)(x − x₀) = 0 and hence
P₁(x) = l₀(x) f(x₀) + l₁(x) f(x₁),   (5.2.1.2)
where l₀(x) = (x − x₁)/(x₀ − x₁) and l₁(x) = (x − x₀)/(x₁ − x₀) are called the Lagrange
Fundamental Polynomials, which satisfy the condition l₀(x) + l₁(x) = 1.
Example 5.2.1. Find the unique polynomial P(x) such that P(2) = 1.5, P(5) = 4,
using Lagrange interpolation.
Solution: By the Lagrange interpolation formula,
P(x) = ((x − 5)/(2 − 5))(1.5) + ((x − 2)/(5 − 2))(4) = (5x − 1)/6.

5.2.2 Quadratic Interpolation
For quadratic interpolation, n = 2 and we find a 2-degree polynomial
P₂(x) = a₂x² + a₁x + a₀
such that
f(x₀) = P₂(x₀) = a₂x₀² + a₁x₀ + a₀,
f(x₁) = P₂(x₁) = a₂x₁² + a₁x₁ + a₀,
f(x₂) = P₂(x₂) = a₂x₂² + a₁x₂ + a₀.
We eliminate a₀, a₁ and a₂ to obtain P₂(x) as follows:
| P₂(x)  x²   x   1 |
| f(x₀)  x₀²  x₀  1 | = 0.
| f(x₁)  x₁²  x₁  1 |
| f(x₂)  x₂²  x₂  1 |
On simplifying, we get
P₂(x) = l₀(x) f(x₀) + l₁(x) f(x₁) + l₂(x) f(x₂),
where
l₀(x) = (x − x₁)(x − x₂)/((x₀ − x₁)(x₀ − x₂)),
l₁(x) = (x − x₀)(x − x₂)/((x₁ − x₀)(x₁ − x₂)),
l₂(x) = (x − x₀)(x − x₁)/((x₂ − x₀)(x₂ − x₁)),
and l₀(x) + l₁(x) + l₂(x) = 1.
Example 5.2.2. Find the unique polynomial P(x) such that P(3) = 1, P(4) = 2 and
P(5) = 4, using Lagrange interpolation.
Solution: By the Lagrange interpolation formula,
P(x) = ((x − 4)(x − 5)/((3 − 4)(3 − 5)))(1) + ((x − 3)(x − 5)/((4 − 3)(4 − 5)))(2)
       + ((x − 3)(x − 4)/((5 − 3)(5 − 4)))(4)
     = (x² − 5x + 8)/2.
5.2.3 Higher Order Interpolation
The Lagrange interpolating polynomial P(x) of degree n for given n + 1 distinct
points a = x₀ < x₁ < x₂ < ··· < xₙ = b is given by
Pₙ(x) = Σᵢ₌₀ⁿ lᵢ(x) f(xᵢ),   (5.2.3.1)
where
lᵢ(x) = Πⱼ₌₀,ⱼ≠ᵢⁿ (x − xⱼ)/(xᵢ − xⱼ),
for i = 0, 1, ···, n.
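Formula 5.2.3.1 translates directly into code. The sketch below is not from the text: it evaluates the interpolating polynomial without forming its coefficients and re-checks Examples 5.2.1 and 5.2.2; the function name is an assumption.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # fundamental polynomial l_i(x)
        total += yi * li
    return total

# Example 5.2.1: P(2) = 1.5, P(5) = 4 (linear interpolation).
print(lagrange([2, 5], [1.5, 4], 3.5))    # 2.75
# Example 5.2.2: P(3) = 1, P(4) = 2, P(5) = 4 (quadratic interpolation).
print(lagrange([3, 4, 5], [1, 2, 4], 4))  # reproduces the data point: 2.0
```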
5.3 Newton Divided Difference Interpolation
5.3.1 Linear Interpolation
For linear interpolation, n = 1 and we find a 1-degree polynomial
P₁(x) = a₁x + a₀
such that
f(x₀) = P₁(x₀) = a₁x₀ + a₀
and
f(x₁) = P₁(x₁) = a₁x₁ + a₀.
We eliminate a₀ and a₁ to obtain P₁(x) as follows:
| P₁(x)  x   1 |
| f(x₀)  x₀  1 | = 0.
| f(x₁)  x₁  1 |
On simplifying, we get
P₁(x) = f(x₀) + (x − x₀)·(f(x₁) − f(x₀))/(x₁ − x₀).
Let f[x₀, x₁] = (f(x₁) − f(x₀))/(x₁ − x₀); then
P₁(x) = f(x₀) + (x − x₀) f[x₀, x₁].   (5.3.1.1)
The ratio f[x₀, x₁] is called the first divided difference of f(x) relative to x₀ and x₁.
Equation 5.3.1.1 is called the linear Newton interpolating polynomial with
divided differences.

Example 5.3.1. Find the unique polynomial P(x) such that P(2) = 1.5, P(5) = 4,
using Newton divided difference interpolation.
Solution: Here
f[x₀, x₁] = (4 − 1.5)/(5 − 2) = 5/6.
Hence, by Newton divided difference interpolation,
P(x) = 1.5 + (x − 2)(5/6) = (5x − 1)/6.
5.3.2 Higher Order Interpolation
We define the higher order divided differences as
f[x₀, x₁, x₂] = (f[x₁, x₂] − f[x₀, x₁])/(x₂ − x₀).
Hence, in general we have
f[x₀, x₁, ···, x_k] = (f[x₁, ···, x_k] − f[x₀, ···, x_{k−1}])/(x_k − x₀)
for k = 3, 4, ···, n, and in terms of function values we have
f[x₀, x₁, ···, x_k] = Σᵢ₌₀ᵏ f(xᵢ) / Πⱼ₌₀,ⱼ≠ᵢᵏ (xᵢ − xⱼ).
Then the Newton Divided Difference interpolating polynomial is given by
Pₙ(x) = f[x₀] + (x − x₀)f[x₀, x₁] + (x − x₀)(x − x₁)f[x₀, x₁, x₂] + ···
        + (x − x₀)(x − x₁)···(x − x_{n−1})f[x₀, x₁, ···, xₙ].   (5.3.2.1)
Example 5.3.2. Construct the divided difference table for the given data and hence
find the interpolating polynomial.

x    : 0.5    1.5    3.0     5.0      6.5      8.0
f(x) : 1.625  5.875  31.000  131.000  282.125  521.000

Solution: The divided difference table will be as follows:
x f (x) I order d.d. II order d.d. III order d.d. IV order d.d.
0.5 1.625
4.25
1.5 5.875 5
16.75 1
3.0 31.000 9.5 0
50.00 1
5.0 131.000 14.5 0
100.75 1
6.5 282.125 19.5
159.25
8.0 521.000
From the table, as the fourth divided differences are zero, the interpolating
polynomial will be given as
P₃(x) = f[x₀] + (x − x₀)f[x₀, x₁] + (x − x₀)(x − x₁)f[x₀, x₁, x₂]
        + (x − x₀)(x − x₁)(x − x₂)f[x₀, x₁, x₂, x₃]
      = 1.625 + (x − 0.5)(4.25) + 5(x − 0.5)(x − 1.5)
        + 1(x − 0.5)(x − 1.5)(x − 3)
      = x³ + x + 1.
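The table construction and the Newton form 5.3.2.1 can be sketched in code. This is an illustrative implementation (not from the text) run on the data of Example 5.3.2; the function names and the Horner-style evaluation are assumptions made here.

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], f[x0,x1,x2], ...] (top edge of the table)."""
    coeffs = list(ys)
    n = len(xs)
    for order in range(1, n):
        # update in place from the bottom so lower-order values survive
        for i in range(n - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form 5.3.2.1 with Horner-like nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs = [0.5, 1.5, 3.0, 5.0, 6.5, 8.0]
ys = [1.625, 5.875, 31.0, 131.0, 282.125, 521.0]
c = divided_differences(xs, ys)
print(c)                        # [1.625, 4.25, 5.0, 1.0, 0.0, 0.0], as in the table
print(newton_eval(xs, c, 2.0))  # P(2) = 2^3 + 2 + 1 = 11.0
```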
5.4 Finite Difference Operators
Consider n + 1 equally spaced points x₀, x₁, ···, xₙ such that xᵢ = x₀ + ih, i = 0, 1, ···, n.
We define the following operators:
1. Shift Operator (E): E(f(xᵢ)) = f(xᵢ + h).
2. Forward Difference Operator (∆): ∆(f(xᵢ)) = f(xᵢ + h) − f(xᵢ).
3. Backward Difference Operator (∇): ∇(f(xᵢ)) = f(xᵢ) − f(xᵢ − h).
4. Central Difference Operator (δ): δ(f(xᵢ)) = f(xᵢ + h/2) − f(xᵢ − h/2).
5. Average Operator (μ): μ(f(xᵢ)) = (1/2)[f(xᵢ + h/2) + f(xᵢ − h/2)].
Some properties of these operators:
∆ = E − 1,  ∇ = 1 − E⁻¹,  δ = E^(1/2) − E^(−1/2).
Now we write the Newton divided differences in terms of the forward and backward
difference operators. Consider
f[x₀, x₁] = (f(x₁) − f(x₀))/(x₁ − x₀).
As xᵢ = x₀ + ih, i = 0, 1, ···, n, we get x₁ − x₀ = h and hence
f[x₀, x₁] = ∆f(x₀)/h.
Now we consider
f[x₀, x₁, x₂] = (f[x₁, x₂] − f[x₀, x₁])/(x₂ − x₀) = (∆f(x₁)/h − ∆f(x₀)/h)/(2h) = ∆²f(x₀)/(2!h²).
Thus, by induction we have
f[x₀, x₁, ···, xₙ] = ∆ⁿf(x₀)/(n!hⁿ).
Similarly, for the backward difference operator, consider
f[x₀, x₁] = (f(x₁) − f(x₀))/(x₁ − x₀) = ∇f(x₁)/h
and
f[x₀, x₁, x₂] = ∇²f(x₂)/(2!h²).
Thus, by induction we have
f[x₀, x₁, ···, xₙ] = ∇ⁿf(xₙ)/(n!hⁿ).
5.5 Interpolating polynomials using finite difference operators
5.5.1 Newton Forward Difference Interpolation
Newton's forward difference interpolating polynomial is obtained by substituting
the divided differences in 5.3.2.1 with the forward differences. That is,
Pₙ(x) = f(x₀) + (x − x₀)∆f(x₀)/h + (x − x₀)(x − x₁)∆²f(x₀)/(2!h²) + ···
        + (x − x₀)(x − x₁)···(x − x_{n−1})∆ⁿf(x₀)/(n!hⁿ).   (5.5.1.1)
Let u = (x − x₀)/h; then 5.5.1.1 can be written as
Pₙ(x) = f(x₀) + u∆f(x₀) + (u(u − 1)/2!)∆²f(x₀) + ···
        + (u(u − 1)···(u − n + 1)/n!)∆ⁿf(x₀).   (5.5.1.2)
5.5.2 Newton Backward Difference Interpolation
Newton interpolation with divided differences can be expressed in terms of
backward differences by evaluating the differences at the end point xₙ. Hence, we
write
Pₙ(x) = Pₙ(xₙ + hu) = E^u f(xₙ) = (1 − ∇)^(−u) f(xₙ),
where u = (x − xₙ)/h.
Expanding (1 − ∇)^(−u) in a binomial series, we get
(1 − ∇)^(−u) f(xₙ) = f(xₙ) + u∇f(xₙ) + (u(u + 1)/2!)∇²f(xₙ) + ···
Neglecting the higher order differences, we get the interpolating polynomial as
Pₙ(x) = Pₙ(xₙ + hu)
      = f(xₙ) + u∇f(xₙ) + (u(u + 1)/2!)∇²f(xₙ) + ··· + (u(u + 1)···(u + n − 1)/n!)∇ⁿf(xₙ).   (5.5.2.1)
Example 5.5. Obtain the Newton forward and backward difference interpolating
polynomials for the given data.

x    : 0.1   0.2   0.3   0.4   0.5
f(x) : 1.40  1.56  1.76  2.00  2.28

Solution: The difference table will be as follows:
x f (x) I order d.d. II order d.d. III order d.d. IV order d.d.
0.1 1.40
0.16
0.2 1.56 0.04
0.20 0
0.3 1.76 0.04 0
0.24 0
0.4 2.00 0.04
0.28
0.5 2.28
From the table, as the third differences onwards are zero, the interpolating
polynomial using Newton's forward differences will be given as
P(x) = f(x₀) + u∆f(x₀) + (u(u − 1)/2!)∆²f(x₀)
     = 1.40 + 0.16u + 0.02u(u − 1),  u = (x − 0.1)/0.1,
which simplifies to P(x) = 2x² + x + 1.28, and the Newton backward difference
interpolating polynomial will be given as
P(x) = f(xₙ) + u∇f(xₙ) + (u(u + 1)/2!)∇²f(xₙ)
     = 2.28 + 0.28u + 0.02u(u + 1),  u = (x − 0.5)/0.1,
which simplifies to the same polynomial, 2x² + x + 1.28.
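Formula 5.5.1.2 can be sketched in code on the data of Example 5.5. This is an illustrative implementation, not from the text; the function names and the evaluation points are assumptions.

```python
def forward_differences(ys):
    """Leading forward differences: [f0, Δf0, Δ²f0, ...]."""
    diffs = [ys[0]]
    row = list(ys)
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def newton_forward(xs, ys, x):
    """Evaluate Newton's forward difference formula 5.5.1.2 at x."""
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    term, total = 1.0, 0.0
    for k, d in enumerate(forward_differences(ys)):
        total += term * d
        term *= (u - k) / (k + 1)   # builds u(u-1)...(u-k)/(k+1)!
    return total

xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [1.40, 1.56, 1.76, 2.00, 2.28]
print(newton_forward(xs, ys, 0.3))   # ~1.76, reproducing the tabulated value
print(newton_forward(xs, ys, 0.25))  # ~1.655, i.e. 2(0.25)^2 + 0.25 + 1.28
```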
5.6 Summary
In this chapter, interpolation methods to approximate a function by a family of
simpler functions, like polynomials, are discussed.
The Lagrange interpolation method and the Newton Divided Difference interpolation
method are discussed.
The difference operators, namely the shift operator, the forward, backward and central
difference operators, and the average operator, are discussed.
The relation between the divided differences and the forward and backward difference
operators is discussed, and hence the Newton divided difference interpolation is
expressed in terms of the forward and backward difference operators.
5.7 Exercises
1. Given f(2) = 4 and f(2.5) = 5.5, find the linear interpolating polynomial using
Lagrange interpolation and Newton divided difference interpolation.
2. Using the data sin 0.1 = 0.09983 and sin 0.2 = 0.19867, find an approximate
value of sin 0.15 using Lagrange interpolation.
3. Using Newton divided difference interpolation, find the unique polynomial
of degree 2 such that f(0) = 1, f(1) = 3 and f(3) = 55.
4. Calculate the nth divided difference of ... for the points x₀, x₁, ···, xₙ.
5. Show that δ = ∇(1 − ∇)^(−1/2).
6. For the given data, find the Newton forward and backward difference
polynomials.

x    : 0     0.1    0.2    0.3    0.4    0.5
f(x) : −1.5  −1.27  −0.98  −0.63  −0.22  0.25

7. Calculate ...
8. If f(x) = e^(ax), show that ∆ⁿf(x) = (e^(ah) − 1)ⁿ e^(ax).
munotes.in

Page 76

76NUMERICAL AND STATISTICAL METHODS9. Using the Newton’s backward difference interpolation, construct the
interpolating polynomial that fits data. x 0.1 0.3 0.5 0.7 0.9 1.1 f(x) −1.699 −1.073 −0.375 0.443 1.429 2.631 10. Find the interpolating polynomial that fits the data as follows : x −2 −1 0 1 3 4 f(x) 9 16 17 18 44 81

™™™
UNIT 
6
SOLUTION OF SIMULTANEOUS
ALGEBRAIC EQUATIONS
Unit Structure
6.0 Objectives
6.1 Introduction
6.2 Gauss-Jordan Method
6.3 Gauss-Seidel Method
6.4 Summary
6.5 Bibliography
6.6 Unit End Exercise
6.0 Objectives
Students will be able to understand the following from the chapter:
• Method to represent equations in Matrix Form.
• Rules of Elementary Transformation.
• Application of the Dynamic Iteration Method.
6.1 Introduction
An equation is an expression having two equal sides, which are separated by an
equals sign. For example: 9 + 4 = 13.
Here, a mathematical operation has been performed in which 4 and 9 have been
added and the result 13 has been written alongside with an equals sign.
Similarly, an algebraic equation is a mathematical expression having two equal
sides separated by an equals sign, in which on one side the expression is formulated
by a set of variables and on the other side there is a constant value. For example,
2x² + 5y³ = 9.

Here, the set of variables x and y has been used to provide a unique pair of values
for which the left-hand side equals 9.
If the power (degree) of each variable used is 1, then these algebraic equations
are known as linear algebraic equations. For example: 2x + 3y + 5z = 8.
It may happen that there is more than one equation, where there will be at least one
unique tuple of values which satisfies all the algebraic equations. The procedure to
find these unique tuples is known as the solution of simultaneous algebraic equations.
There are two types of methods:
1. Gauss-Jordan Method
2. Gauss-Seidel Method
6.2 Gauss-Jordan Method
The Gauss-Jordan method is an algorithm used to find the solution of simultaneous
equations. The algorithm uses a matrix approach to determine the solution.
The method requires elementary transformation, that is, elimination using row
operations. Hence, it is also known as the Gauss Elimination method.
Steps of the Algorithm:
i Represent the set of equations in the following format:
A × X = B
where
A: Coefficient Matrix
X: Variable Matrix
B: Constant Matrix
Example:
Convert the following equations into matrix format:
3x + 5y = 12
2x + y = 1
The matrix representation is:
[ 3 5 ] [ x ]   [ 12 ]
[ 2 1 ] [ y ] = [ 1  ]
In the above set of equations, the coefficients are the values which are written
along with the variables.
The constants are the values which are written after the equals sign.
Hence, the coefficient matrix is given as:
A = [ 3 5 ]
    [ 2 1 ]
the variable matrix is given as:
X = [ x ]
    [ y ]
and the constant matrix is:
B = [ 12 ]
    [ 1  ]
ii Temporarily combine the coefficient matrix A and the constant matrix B in the
following format:
[ A : B ]
iii Perform row transformations, considering the following do's and don'ts:

Allowed:
- Swapping of rows: Ra ↔ Rb, where a ≠ b.
- Addition and subtraction between rows: Ra ↔ Ra ± Rb.
- Multiplication and division by a constant value: Ra ↔ k·Ra.

Not allowed:
- Swapping between a row and a column: Ra ↔ Cb.
- Multiplication and division between rows.
- Addition and subtraction of a constant value: Ra ↔ Ra ± k.

Row transformation is done to convert the coefficient matrix A to the unit matrix
of the same dimension as that of A.
6.2.1 Solved Examples:
i. Solve the system
6x + y + z = 20, x + 4y − z = 6, x − y + 5z = 7
using the Gauss-Jordan method.
Sol. Given:
6x + y + z = 20, x + 4y − z = 6, x − y + 5z = 7.
The matrix representation is
[ 6  1  1 ] [ x ]   [ 20 ]
[ 1  4 −1 ] [ y ] = [ 6  ]
[ 1 −1  5 ] [ z ]   [ 7  ]
followed by the augmented (echelon) form, formed by combining the coefficient
matrix and the constant matrix:
[ 6  1  1 : 20 ]
[ 1  4 −1 : 6  ]
[ 1 −1  5 : 7  ]
Perform elementary row transformations on the above matrix to convert matrix A to
the unit matrix:
R₁ ↔ R₂
[ 1  4 −1 : 6  ]
[ 6  1  1 : 20 ]
[ 1 −1  5 : 7  ]
R₂ ↔ R₂ − 6R₁
[ 1   4  −1 : 6   ]
[ 0 −23   7 : −16 ]
[ 1  −1   5 : 7   ]
R₃ ↔ R₃ − R₁
[ 1   4  −1 : 6   ]
[ 0 −23   7 : −16 ]
[ 0  −5   6 : 1   ]
Continuing the elimination in the same way until A is reduced to the unit matrix gives
[ 1 0 0 : 3 ]
[ 0 1 0 : 1 ]
[ 0 0 1 : 1 ]
The solution of the equations is: x = 3, y = 1 and z = 1.
ii. Solve the system
2x + y + z = 10, 3x + 2y + 3z = 18, x + 4y + 9z = 16
using the Gauss-Jordan method.
Sol. Given:
2x + y + z = 10, 3x + 2y + 3z = 18, x + 4y + 9z = 16.
The matrix representation is
[ 2 1 1 ] [ x ]   [ 10 ]
[ 3 2 3 ] [ y ] = [ 18 ]
[ 1 4 9 ] [ z ]   [ 16 ]
followed by the augmented (echelon) form, formed by combining the coefficient
matrix and the constant matrix:
[ 2 1 1 : 10 ]
[ 3 2 3 : 18 ]
[ 1 4 9 : 16 ]
Performing elementary row transformations on the above matrix to convert matrix A
to the unit matrix gives
[ 1 0 0 : 7  ]
[ 0 1 0 : −9 ]
[ 0 0 1 : 5  ]
The solution of the equations is: x = 7, y = −9 and z = 5.
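The elimination procedure can be sketched programmatically. This is an illustrative implementation, not from the text; partial pivoting (row swaps on the largest pivot) is added here for numerical safety, which the hand computations above do not use.

```python
def gauss_jordan(A, B):
    """Reduce the augmented matrix [A : B] to [I : X] by row operations."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, B)]     # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]        # R_col <-> R_pivot
        p = M[col][col]
        M[col] = [v / p for v in M[col]]           # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]
    return [row[-1] for row in M]                  # A reduced to the unit matrix

# Solved example (i): 6x + y + z = 20, x + 4y - z = 6, x - y + 5z = 7.
print(gauss_jordan([[6, 1, 1], [1, 4, -1], [1, -1, 5]], [20, 6, 7]))  # ~[3, 1, 1]
```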
6.3 Gauss-Seidel Method
The Gauss-Seidel method uses an iterative approach to find the unique solution of
linear algebraic equations. In this method, the present value of a variable depends on
the past and present values of the other variables. This type of iteration is known as
the Dynamic Iteration Method.
To achieve convergence of the values it is important to have Diagonal Dominance.
For diagonal dominance, the first equation should have the highest coefficient of x
among the set of x coefficients, and this should also be the highest coefficient within
the same equation. Similarly, the second equation should have the highest coefficient
of y among the equations as well as within its own equation.
After ensuring diagonal dominance, the variable of each equation is represented
as a function of the other variables.
6.3.1 Solved Examples:
i. Solve the equations
x + 4y − z = 6, 6x + y + z = 20, x − y + 5z = 7
by using the Gauss-Seidel method.
Sol. Given:
x + 4y − z = 6, 6x + y + z = 20, x − y + 5z = 7.
On comparing the coefficients of x among the given set of equations, the maximum
value is 6, which is present in the second equation. Considering the second equation,
the maximum coefficient present among the variables is also 6 (the coefficient of x).
Hence, the first equation is: 6x + y + z = 20 —(i)
Similarly, among the first and third equations, the maximum value of the coefficient
of y is 4.
Hence, the second equation is: x + 4y − z = 6 —(ii)
And the third equation is: x − y + 5z = 7 —(iii)
Now represent each variable as a function of the other two variables:
Using equation 1:
x = (20 − y − z)/6
Using equation 2:
y = (6 − x + z)/4
Using equation 3:
z = (7 − x + y)/5
Consider the initial values of x, y and z as 0.
Now, to implement the iteration the equations are re-written as:
xₙ = (20 − yₙ₋₁ − zₙ₋₁)/6
yₙ = (6 − xₙ + zₙ₋₁)/4
zₙ = (7 − xₙ + yₙ)/5
where xₙ, yₙ and zₙ are the present values of x, y and z respectively, and xₙ₋₁, yₙ₋₁ and
zₙ₋₁ are the past values of x, y and z respectively.
That is:
To calculate the present value of x, we require the past values of y and z.
To calculate the present value of y, we require the present value of x and the past
value of z.
To calculate the present value of z, we require the present values of x and y.
Hence the values of x, y and z will be:

i | xn      | yn      | zn
0 | 0       | 0       | 0
1 | 3.33    | 0.6675  | 0.8675
2 | 3.0775  | 0.9475  | 0.974
3 | 3.0131  | 0.97412 | 0.992
4 | 3.0056  | 0.99665 | 0.9982
5 | 3.0035  | 0.99815 | 0.99893
6 | 3.0005  | 0.9996  | 0.99982
7 | 3.00009 | 0.99993 | 0.999668
On Approximation:
x = 3, y = 1 and z = 1.

ii. Solve the equations
x1  10x2  4x3 6 2 x1  10x2  4x3 = −15
9x1  2x2  4x3 20
by using Gauss-Seidel Method
Sol. Given:
x1  10x2  4x3 6
2x1 − 4x2  10x3 = −15
9x1  2x2  4x3 20
On re-arranging to achieve the Diagonal Dominance:
9x1  2x2  4x3 20 — (1) x1  10x2  4x3 6 —(2)
2x1 − 4x2  10x3 = −15 — (3)
Therefore,
(From equation (1))
(From equation (2))
(From equation (3)) Hence the values of x, y and z will
be:

munotes.in

Page 87

87Chapter 6: Solution of Simultaneous Algebraic Equationsi xn yn zn 0 0 0 0 1 2.2222 0.3778 -1.7933 2 2.9353 1.0238 -1.6775 3 2.7403 0.99697 -1.6493 4 2.7337 0.98635 -1.6522 5 2.7373 0.98715 -1.6526 6 2.7373 0.98731 -1.6525 7 2.7373 0.9873 -1.6525 On Approximation:
x ≈ 2.7373, y ≈ 0.9873 and z ≈ −1.6525
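The sweep described above can be sketched in Python. This is a minimal illustration (not from the text); the function name and iteration count are arbitrary choices, and each new value is used immediately within the same sweep, exactly as in the worked example.

```python
# Gauss-Seidel iteration for the diagonally dominant system
#   9x1 + 2x2 + 4x3 = 20
#   x1 + 10x2 + 4x3 = 6
#   2x1 - 4x2 + 10x3 = -15
# Each variable is isolated from the equation where its coefficient dominates.

def gauss_seidel(n_iter=25):
    x1 = x2 = x3 = 0.0          # initial guesses, as in the example
    for _ in range(n_iter):
        x1 = (20 - 2 * x2 - 4 * x3) / 9
        x2 = (6 - x1 - 4 * x3) / 10
        x3 = (-15 - 2 * x1 + 4 * x2) / 10
    return x1, x2, x3

x1, x2, x3 = gauss_seidel()
print(round(x1, 4), round(x2, 4), round(x3, 4))  # converges to 2.7373 0.9873 -1.6525
```

Because the system is diagonally dominant, the iteration contracts and 25 sweeps are more than enough for four-decimal accuracy.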
6.4 Summary
Linear algebraic equations can be solved by using two methods:
– Gauss-Seidel Method
– Gauss-Jordan Method
The Gauss-Seidel method uses an iterative approach, following the diagonal dominance principle.
The Gauss-Jordan method uses a matrix approach of the form A × X = B, following the elementary transformation principle.
6.5 References
(a) S. S. Sastry, “Introductory Methods of Numerical Analysis”. (Chp 3)
(b) Steven C. Chapra, Raymond P. Canale, “Numerical Methods for Engineers”.
6.6 Unit End Exercise
Find the solution of the following sets of equations by using the Gauss-Jordan method.
(a) 3x + 2y + 4z = 7; 2x + y + z = 7; x + 3y + 5z = 2
(b) 10x + y + z = 12; 2x + 10y + z = 13; x + y + 3z = 5
(c) 4x + 3y − z = 6; 3x + 5y + 3z = 4; x + y + z = 1

(d) 2x + y − z = −1; x − 2y + 3z = 9; 3x − y + 5z = 14
Find the solution of the following sets of equations by using the Gauss-Seidel method.
(a) 10x + y + z = 12; 2x + 10y + z = 13; x + y + 3z = 5
(b) 28x + 4y − z = 32; 2x + 17y + 4z = 35; x + 3y + 10z = 24
(c) 7x1 + 2x2 − 3x3 = −12; 2x1 + 5x2 − 3x3 = −20; x1 − x2 − 6x3 = −26
(d) 7x1 + 2x2 − 3x3 = −12; 2x1 + 5x2 − 3x3 = −20; x1 − x2 − 6x3 = −26
(e) 10x + y + z = 12; x + 10y + z = 12; x + y + 10z = 12
(Assume x(0) = 0.4, y(0) = 0.6, z(0) = 0.8)

™™™
UNIT 
7
NUMERICAL DIFFERENTIATION AND
INTEGRATION
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Numerical Differentiation
7.3 Numerical Integration
7.4 Summary
7.5 References
7.6 Unit End Exercise
7.0 Objectives
Students will be able to understand the following from the chapter:
Methods to compute the value of the derivative of a function at a particular value.
Methods to find the area covered by the curve within the given interval.
The importance of interpolation methods.
7.1 Introduction
Differentiation is the method used to determine the slope of the curve at a particular point, whereas integration is the method used to find the area under the curve between two values. The solution of differentiation and integration at and between particular values can be easily determined by using some standard rules. However, there are some complex functions whose derivatives and integrals are very difficult to determine analytically. Hence, there are some practical approaches which can be used to find the approximated value.

7.2 Numerical Differentiation
The value of differentiation or derivative of a function at a p articular value can be
determined by using Interpolation Methods.
(a) Newton’s Difference Method (if the step size is constant)
(b) Lagrange’s Interpolation Method (if the step size is not constant)
Newton’s Difference Method is further divided into two types depending on the position of the sample in the input data set.
Newton’s Forward Difference Method: To be used when the input value is lying
towards the start of the input data set.
Newton’s Backward Difference Method: To be used when the input value is lying
at the end of the input data set.
7.2.1 Newton Forward Difference Method
For a given set of values (xi, yi), i = 0, 1, 2, ..., n, where the xi are at equal intervals and h is the spacing of the input values, i.e. xi = x0 + i × h, then

y = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ···

Since the above equation is in terms of “p”, the chain rule is to be used for differentiation. Hence,

dy/dx = (dy/dp) × (dp/dx) = (1/h) × (dy/dp)

We know that,

p = (x − x0)/h

where x = unknown value, x0 = first input value, h = step size.
Hence,

dy/dx = (1/h)[Δy0 + ((2p − 1)/2!)Δ²y0 + ((3p² − 6p + 2)/3!)Δ³y0 + ···]
And, at x = x0 we have p = 0. Hence, on simplification,

dy/dx (at x = x0) = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + ···]
On differentiating a second time, the expression becomes:

d²y/dx² (at x = x0) = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − ···]
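The forward-difference derivative series above can be sketched numerically. This is an illustrative Python snippet (not from the text); the helper names are arbitrary, and the data samples y = eˣ so the computed derivative can be checked against the known answer.

```python
# First derivative at the tabulated point x0 from Newton's forward differences:
#   y'(x0) ≈ (1/h) * (Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...)
import math

def forward_differences(ys):
    """Return [Δy0, Δ²y0, Δ³y0, ...] for equally spaced samples."""
    diffs, row = [], list(ys)
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def derivative_at_start(xs, ys):
    h = xs[1] - xs[0]
    d = forward_differences(ys)
    # alternating series Δ/1 - Δ²/2 + Δ³/3 - ...
    s = sum((-1) ** k * d[k] / (k + 1) for k in range(len(d)))
    return s / h

xs = [1.0 + 0.2 * i for i in range(7)]
ys = [math.exp(x) for x in xs]          # derivative of e^x equals e^x
print(round(derivative_at_start(xs, ys), 3))  # → 2.718, close to e^1
```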
7.2.2 Newton Backward Difference Method
For a given set of values (xi, yi), i = 0, 1, 2, ..., n, where the xi are at equal intervals and h is the spacing of the input values, i.e. xi = x0 + i × h, then

y = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ···

Since the above equation is in terms of “p”, the chain rule is to be used for differentiation. Hence,

dy/dx = (1/h) × (dy/dp)

We know that,

p = (x − xn)/h

where x = unknown value, xn = final input value, h = step size.
Hence,

dy/dx = (1/h)[∇yn + ((2p + 1)/2!)∇²yn + ((3p² + 6p + 2)/3!)∇³yn + ···]


And, at x = xn we have p = 0. Hence, on simplification,

dy/dx (at x = xn) = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ···]

On differentiating a second time, the expression becomes:

d²y/dx² (at x = xn) = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + ···]
7.2.3 Solved Examples
i. From the data table given below, obtain dy/dx and d²y/dx² at x = 1.2.

x: 1.0    1.2    1.4    1.6    1.8    2.0    2.2
y: 2.7183 3.3201 4.0552 4.9530 6.0496 7.3891 9.0250

Sol. The first step is to identify which method is to be used. Since the value of dy/dx and d²y/dx² at x = 1.2 is to be determined, and this value lies towards the start of the data set, the forward difference method is to be used. Therefore, the forward difference table is to be formed:

x    y       Δy      Δ²y     Δ³y     Δ⁴y     Δ⁵y     Δ⁶y
1.0  2.7183  0.6018  0.1333  0.0294  0.0067  0.0013  0.0001
1.2  3.3201  0.7351  0.1627  0.0361  0.0080  0.0014
1.4  4.0552  0.8978  0.1988  0.0441  0.0094
1.6  4.9530  1.0966  0.2429  0.0535
1.8  6.0496  1.3395  0.2964
2.0  7.3891  1.6359
2.2  9.0250

Now select the values corresponding to the row of x = 1.2, since the value of dy/dx and d²y/dx² is to be determined at x = 1.2.
Hence, the values are: Δy0 = 0.7351, Δ²y0 = 0.1627, Δ³y0 = 0.0361, Δ⁴y0 = 0.0080, Δ⁵y0 = 0.0014.
Substituting the values in the following formula (with h = 0.2):

dy/dx = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + (1/5)Δ⁵y0]
= (1/0.2)[0.7351 − 0.08135 + 0.01203 − 0.0020 + 0.00028] ≈ 3.3203

Similarly, to find the value of d²y/dx², substitute the values in the following equation:

d²y/dx² = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − (5/6)Δ⁵y0]

where Δ⁵y0 = 0.0014. Therefore,

d²y/dx² = (1/0.04)[0.1627 − 0.0361 + 0.00733 − 0.00117] ≈ 3.3192
ii. From the data table given below, obtain dy/dx and d²y/dx² at x = 1.8.

x: 1.0    1.2    1.4    1.6    1.8    2.0    2.2
y: 2.7183 3.3201 4.0552 4.9530 6.0496 7.3891 9.0250

Sol. The first step is to identify which method is to be used. Since the value of dy/dx and d²y/dx² at x = 1.8 is to be determined, and this value lies towards the end of the data set, the backward difference method is to be used. Therefore, the backward difference table is to be formed:

x    y       ∇y      ∇²y     ∇³y     ∇⁴y     ∇⁵y     ∇⁶y
1.0  2.7183
1.2  3.3201  0.6018
1.4  4.0552  0.7351  0.1333

1.6  4.9530  0.8978  0.1627  0.0294
1.8  6.0496  1.0966  0.1988  0.0361  0.0067
2.0  7.3891  1.3395  0.2429  0.0441  0.0080  0.0013
2.2  9.0250  1.6359  0.2964  0.0535  0.0094  0.0014  0.0001

Now select the values corresponding to the row of x = 1.8, since the value of dy/dx and d²y/dx² is to be determined at x = 1.8.
Hence, the values are: ∇y = 1.0966, ∇²y = 0.1988, ∇³y = 0.0361, ∇⁴y = 0.0067.
Substituting the values in the following formula (with h = 0.2):

dy/dx = (1/h)[∇y + (1/2)∇²y + (1/3)∇³y + (1/4)∇⁴y]
= (1/0.2)[1.0966 + 0.0994 + 0.01203 + 0.00168] ≈ 6.0485

Similarly, to find the value of d²y/dx², substitute the values in the following equation:

d²y/dx² = (1/h²)[∇²y + ∇³y + (11/12)∇⁴y]

where ∇⁴y = 0.0067. Therefore,

d²y/dx² = (1/0.04)[0.1988 + 0.0361 + 0.00614] ≈ 6.0260
If the step size of the input values is not constant, then the derivatives can be determined by simply differentiating the expression provided by Lagrange’s polynomial.
7.2.4 Solved Examples
i. Tabulate the following function: y = x³ − 10x + 6 at x0 = 0.5, x1 = 1 and x2 = 2. Compute its 1st and 2nd derivatives at x = 1.00 using Lagrange’s interpolation method.

Sol. Given: y = x³ − 10x + 6
At x0 = 0.5: y0 = 0.5³ − 10 × 0.5 + 6 = 1.125
At x1 = 1: y1 = 1³ − 10 × 1 + 6 = −3
At x2 = 2: y2 = 2³ − 10 × 2 + 6 = −6
Hence, Lagrange’s formula is:

y = y0 (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] + y1 (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] + y2 (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)]

Hence, dy/dx will be obtained by differentiating both sides:

dy/dx = y0 (2x − x1 − x2)/[(x0 − x1)(x0 − x2)] + y1 (2x − x0 − x2)/[(x1 − x0)(x1 − x2)] + y2 (2x − x0 − x1)/[(x2 − x0)(x2 − x1)]

So therefore, at x = 1, dy/dx = −6.5, and on differentiating once more, d²y/dx² = 7.
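Differentiating the three-point Lagrange polynomial can be sketched in Python. This snippet is an illustration (not from the text); the function name is arbitrary, and it evaluates the closed-form first and second derivatives of the quadratic interpolant through the three tabulated points.

```python
# Derivatives of the Lagrange quadratic through (x0,y0), (x1,y1), (x2,y2):
#   P'(x)  = Σ yi * (2x - xj - xk) / ((xi - xj)(xi - xk))
#   P''(x) = Σ 2 yi / ((xi - xj)(xi - xk))   (constant for a quadratic)

def lagrange_quadratic_derivatives(pts, x):
    (x0, y0), (x1, y1), (x2, y2) = pts
    d1 = (y0 * (2 * x - x1 - x2) / ((x0 - x1) * (x0 - x2))
          + y1 * (2 * x - x0 - x2) / ((x1 - x0) * (x1 - x2))
          + y2 * (2 * x - x0 - x1) / ((x2 - x0) * (x2 - x1)))
    d2 = (2 * y0 / ((x0 - x1) * (x0 - x2))
          + 2 * y1 / ((x1 - x0) * (x1 - x2))
          + 2 * y2 / ((x2 - x0) * (x2 - x1)))
    return d1, d2

pts = [(0.5, 1.125), (1.0, -3.0), (2.0, -6.0)]   # samples of y = x^3 - 10x + 6
print(lagrange_quadratic_derivatives(pts, 1.0))  # → (-6.5, 7.0)
```

Note the quadratic interpolant only approximates the cubic: the true values are f′(1) = −7 and f″(1) = 6, while the interpolant gives −6.5 and 7.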
7.3 Numerical Integration
Numerical Integration provides a set of methods to compute the Definite
Integration of a Function between the given set of values.
There are three methods used to find the value of Integration:
• Trapezoidal Rule
• Simpson’s 1/3 Rule
• Simpson’s 3/8 Rule
7.3.1 Trapezoidal Rule
In this method the curve is divided into small trapeziums. These trapeziums are then added to find the complete area of the curve between two values. Hence,
Let y = f(x); then,

∫ from x0 to xn of y dx = (h/2)[(y0 + yn) + 2(y1 + y2 + ··· + yn−1)]

Where,

h = (xn − x0)/n

xn: upper limit; x0: lower limit; y0, y1, y2, ···, yn are the values of y corresponding to x0, x1, x2, ···, xn.
7.3.2 Simpson’s 1/3 Rule

Let y = f(x); then,

∫ from x0 to xn of y dx = (h/3)[(y0 + yn) + 4 × (sum of odd-position terms) + 2 × (sum of even-position interior terms)]

Where,

h = (xn − x0)/n

xn: upper limit; x0: lower limit; y0, y1, y2, ···, yn are the values of y corresponding to x0, x1, x2, ···, xn.
7.3.3 Simpson’s 3/8 Rule

Let y = f(x); then,

∫ from x0 to xn of y dx = (3h/8)[(y0 + yn) + 2 × (sum of multiple-of-3 position terms) + 3 × (sum of remaining terms)]
Where,

h = (xn − x0)/n

xn: upper limit; x0: lower limit; y0, y1, y2, ···, yn are the values of y corresponding to x0, x1, x2, ···, xn.
7.3.4 Solved Examples
i. A solid of revolution is formed by rotating about the x-axis the area between the x-axis, the lines x = 0 and x = 1, and the curve through the points below:

x: 0.00  0.25   0.50   0.75   1.00
y: 1.000 0.9896 0.9589 0.9089 0.8415

Estimate the volume of the solid formed.

Sol. Let “V” be the volume of the solid formed by rotating the curve around the x-axis; then

V = π ∫ from 0 to 1 of y² dx

Therefore, the table is updated as:

x:  0.00  0.25   0.50   0.75   1.00
y²: 1.000 0.9793 0.9195 0.8261 0.7081

Rewriting the same table to find the value of n:

x:  0.00 (x0)  0.25 (x1)  0.50 (x2)  0.75 (x3)  1.00 (x4)
y²: 1.000      0.9793     0.9195     0.8261     0.7081

As the extreme or last value is x4, n = 4. Therefore,

h = (xn − x0)/n = (1 − 0)/4 = 0.25
(a) TRAPEZOIDAL RULE

π ∫ y² dx = π × (h/2)[(y0 + y4) + 2(y1 + y2 + y3)]
where h = 0.25, y0 = 1.000, y1 = 0.9793, y2 = 0.9195, y3 = 0.8261, y4 = 0.7081.
On substituting the values we get:

π ∫ y² dx = π × 0.8947 ≈ 2.8109
(b) SIMPSON’S 1/3 RULE

π ∫ y² dx = π × (h/3)[(y0 + y4) + 4(y1 + y3) + 2y2]

where h = 0.25, y0 = 1.000, y1 = 0.9793, y2 = 0.9195, y3 = 0.8261, y4 = 0.7081.
On substituting the values we get:

π ∫ y² dx = π × 0.8974 ≈ 2.8192
(c) SIMPSON’S 3/8 RULE

π ∫ y² dx = π × (3h/8)[y0 + y4 + 2 × (y3) + 3 × (y1 + y2)]

where h = 0.25, y0 = 1.000, y1 = 0.9793, y2 = 0.9195, y3 = 0.8261, y4 = 0.7081.
On substituting the values we get:

π ∫ y² dx ≈ 2.66741
7.4 Summary
Numerical differentiation provides methods to find the value of the first and second order derivatives at a particular value of x, the input variable.
Numerical integration provides methods to find the definite integral, i.e. the area covered by the curve between two points.
7.5 References
(a) S. S. Sastry, “Introductory Methods of Numerical Analysis”.
(b) Steven C. Chapra, Raymond P. Canale, “Numerical Methods for Engineers”.
7.6 Unit End Exercise
(a) Find dy/dx and d²y/dx² at x = 0.4 from the following table:
(0, 1.0) (0.1, 0.9975) (0.2, 0.9900) (0.3, 0.9776) (0.4, 0.9604)
(b) The following table gives the angular displacement θ at different times t:
(0, 0.052) (0.02, 0.105) (0.04, 0.168) (0.06, 0.242) (0.08, 0.327) (0.10, 0.408)
Calculate the angular velocity (dθ/dt) and angular acceleration (d²θ/dt²) at t = 0.04, 0.06 and 0.1.
(c) A cubic function y = f(x) satisfies the following data:
x: 0 1 3 4
y: 1 4 40 85
Determine f(x) and hence find f′(2) and f″(2).
(d) Use the Trapezoidal Rule to evaluate the integral with width of sub-interval (h) = 0.5.
(e) Using Simpson’s rules, evaluate; take n = 6.
(f) Using Simpson’s rules, evaluate; take n = 6.
(g) Using Simpson’s rules, evaluate; take n = 6.
(h) Use the Trapezoidal Rule to evaluate the integral with width of sub-interval (h) = 0.5.

™™™
UNIT

NUMERICAL DIFFERENTIAL EQUATIONS
Unit Structure
8.0 Objectives
8.1 Introduction
8.2 Euler’s Method
8.3 Euler’s Modified Method
8.4 Runge-Kutta Method
8.5 Taylor Series
8.6 Summary
8.7 References
8.8 Unit End Exercise
8.0 Objectives
Students will be able to understand the following from the chapter:
Methods to compute the value of the solution of a differential equation at a particular value.
Practical or software-implemented methods to find the solution of a differential equation.
8.1 Introduction
A differential equation is defined as an expression which contains derivative terms; the presence of a derivative term such as dy/dx indicates a differential equation.
The differential equations can be solved analytically using various methods like variable separable, substitution, linear differential equation, solution to homogeneous equation, etc., but in this chapter various practically approachable methods will be discussed to find the solution of a given differential equation at a particular value.
A differential equation is characterized on the basis of two terminologies:
Order: the number of times a variable is differentiated with respect to another variable.
Degree: the power of the highest order derivative in the differential equation.
For example, consider the following equations:
A. dy/dx + y = x
B. d²y/dx² + y = 0
C. (dy/dx)² + y = x
D. d²y/dx² + dy/dx + y = 0
In the above equations, eqn. A has order = 1, because y is differentiated only once w.r.t. x, and degree = 1, because the power of the derivative is 1.
Eqn. B has order = 2, because y is differentiated twice w.r.t. x, and degree = 1, because the power of the derivative is 1.
Eqn. C has order = 1, because y is differentiated only once w.r.t. x, and degree = 2, because the power of the derivative is 2.
In eqn. D there are two derivatives present; hence, the term with the maximum order of differentiation is to be selected. Therefore, the equation has order = 2, because y is differentiated twice w.r.t. x, and degree = 1, because the power of the derivative of maximum order is 1.
8.2 Euler’s Method
Euler’s method is a practically implemented method used to find the solution of a first order differential equation.
Suppose the given differential equation is dy/dx = f(x, y) with initial condition y(x0) = y0 (the value of y at x = x0 is y0). Then by Euler’s method:

yn+1 = yn + h × f(xn, yn) for n = 0, 1, 2, ...

Where,
yn+1 is the future value of y, yn is the present value of y, and h is the common difference or step size.
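The Euler update above is easy to sketch in Python. This is an illustrative snippet (not from the text); the function name is arbitrary, and it reproduces the worked example below (y′ = x² + y, y(0) = 1, h = 0.05, two steps).

```python
# Euler's method: y_{n+1} = y_n + h * f(x_n, y_n)

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: x * x + y        # the equation y' = x^2 + y
print(round(euler(f, 0.0, 1.0, 0.05, 2), 4))  # → 1.1026, i.e. y(0.1)
```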
8.2.1 Solved Examples
Find the value of y when x = 0.1, given that y(0) = 1 and y′ = x² + y, by using Euler’s method.

Sol. Given: y′ = x² + y with initial condition y(0) = 1; the meaning of y(0) = 1 is that the value of y at x = 0 is 1. Hence,
x0 = 0 and y0 = 1.
To find the value of y at x = 0.1, the value of h is required. Take h = 0.05.
According to Euler’s method, yn+1 = yn + h × f(xn, yn).

Iteration 1:
y1 = y0 + h × f(x0, y0), with y0 = 1, x0 = 0, h = 0.05 and f(x, y) = x² + y.
y1 = 1 + 0.05(0² + 1)
y1 = 1.05 (the value of y at x = x0 + h = 0 + 0.05 = 0.05)

Iteration 2:
y2 = y1 + h × f(x1, y1), with x1 = x0 + h = 0.05.
y2 = 1.05 + 0.05(0.05² + 1.05)
y2 = 1.1026 (the value of y at x = x1 + h = 0.05 + 0.05 = 0.10)
8.3 Euler’s Modified Method
The values of y determined at every iteration may have some error depending on
the value of selected Common Difference or Step Size h. Hence, to find the accurate
value of y at a particular x, Euler’s Modified Method is used.
In this method the value of the corresponding iteration is refined by a process called the iteration-within-iteration method. This method is used to minimize the error. The iterative (corrector) formula is given as

yn+1^(m+1) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(m))]
Where,
yn+1^(m) is the previous estimate of yn+1 within the iteration, and yn+1^(m+1) is the improved value obtained while saturating the given “y” value in the same iteration.
The steps to use Euler’s Modified Method are:
(a) Apply Euler’s method at every iteration to find the approximate value of y at the new value of x.
(b) Apply Euler’s modified (corrector) formula within that iteration to saturate the value of y.
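The predictor-corrector steps (a) and (b) can be sketched in Python. This is an illustrative snippet (not from the text); the function name and the saturation tolerance are arbitrary choices.

```python
# Modified Euler: predict with Euler, then iterate the trapezoidal corrector
# until the corrected value of y stops changing ("saturates").

def modified_euler(f, x0, y0, h, steps, tol=1e-6):
    x, y = x0, y0
    for _ in range(steps):
        y_pred = y + h * f(x, y)                # Euler predictor
        x_next = x + h
        while True:                             # corrector, iterated
            y_corr = y + (h / 2) * (f(x, y) + f(x_next, y_pred))
            if abs(y_corr - y_pred) < tol:
                break
            y_pred = y_corr
        x, y = x_next, y_corr
    return y

f = lambda x, y: x * x + y                      # the equation y' = x^2 + y
print(round(modified_euler(f, 0.0, 1.0, 0.05, 2), 4))  # y(0.1) ≈ 1.1056
```

The worked example below saturates to four decimals at each step and reports 1.1055; carrying full precision, as the code does, gives 1.1056.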
8.3.1 Solved Examples
Find the value of y when x = 0.1, given that y(0) = 1 and y′ = x² + y, by using Euler’s Modified Method.

Sol. Given: y′ = x² + y with initial condition y(0) = 1; the meaning of y(0) = 1 is that the value of y at x = 0 is 1. Hence,
x0 = 0 and y0 = 1.
To find the value of y at x = 0.1, the value of h is required. Take h = 0.05.

Iteration 1:
Using Euler’s method: y1 = y0 + h × f(x0, y0), with y0 = 1, x0 = 0, h = 0.05 and f(x, y) = x² + y.
y1 = 1 + 0.05(0² + 1) = 1.05

Iteration 1(a):
Using Euler’s Modified Method,

y1^(1) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(0))]

where x1 = x0 + h = 0 + 0.05 = 0.05 and y1^(0) = 1.05 is the initial value obtained by using Euler’s method.

y1^(1) = 1 + 0.025[(0² + 1) + (0.05² + 1.05)] = 1.0513

Since 1.0500 and 1.0513 are not equal, apply the 2nd iteration.
Iteration 1(b):
Using Euler’s Modified Method,

y1^(2) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(1))] = 1 + 0.025[(1) + (0.05² + 1.0513)] = 1.0513

where x1 = x0 + h = 0 + 0.05 = 0.05 and y1^(1) = 1.0513 is the value obtained in Iteration 1(a).

Since the values of y1^(1) and y1^(2) are equal, the value of y at x = 0.05 is 1.0513.
Iteration 2:
Using Euler’s method: y2 = y1 + h × f(x1, y1), with y1 = 1.0513, x1 = x0 + h = 0.05, h = 0.05 and f(x, y) = x² + y.
y2 = 1.0513 + 0.05(0.05² + 1.0513) = 1.1040

Iteration 2(a):
Using Euler’s Modified Method,

y2^(1) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(0))]

where x2 = x1 + h = 0.05 + 0.05 = 0.10 and y2^(0) = 1.1040 is the initial value obtained by using Euler’s method.

y2^(1) = 1.0513 + 0.025[(0.05² + 1.0513) + (0.1² + 1.1040)] = 1.1055

Since 1.1040 and 1.1055 are not equal, apply the 2nd iteration.
Iteration 2(b):
Using Euler’s Modified Method,

y2^(2) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(1))] = 1.0513 + 0.025[(0.05² + 1.0513) + (0.1² + 1.1055)] = 1.1055

where x2 = x1 + h = 0.05 + 0.05 = 0.10 and y2^(1) = 1.1055 is the value obtained at
Iteration 2(a).

Since the values of y2^(1) and y2^(2) are equal, the value of y at x = 0.1 is 1.1055.
8.4 Runge-Kutta Method
The Runge-Kutta method is another method to find the solution of a first order differential equation. It is mainly divided into two methods depending on the number of parameters used:
– Runge-Kutta 2nd Order Method
– Runge-Kutta 4th Order Method
The Runge-Kutta 2nd order method uses two parameters, k1 and k2, to find the value of yn+1. The expression is given as:

yn+1 = yn + (1/2)(k1 + k2)

where
k1 = h × f(xn, yn)
k2 = h × f(xn + h, yn + k1)
The Runge-Kutta 4th order method uses four parameters, k1, k2, k3 and k4, to find the value of yn+1. The expression is given as:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

where,
k1 = h × f(xn, yn)
k2 = h × f(xn + h/2, yn + k1/2)
k3 = h × f(xn + h/2, yn + k2/2)
k4 = h × f(xn + h, yn + k3)
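The fourth-order step above can be sketched in Python. This is an illustrative snippet (not from the text); the function name is arbitrary, and the check uses y′ = y, y(0) = 1, whose exact solution gives y(1) = e.

```python
# Classical 4th-order Runge-Kutta for y' = f(x, y).
import math

def rk4(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

# Check against a known solution: y' = y, y(0) = 1, so y(1) = e
print(abs(rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10) - math.e))  # tiny, about 2e-6
```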
8.4.1 Solved Examples
Given dy/dx = y − x, where y(0) = 2. Find y(0.1) and y(0.2), correct up to 4 decimal places.

Sol. Given: y(0) = 2
∴ y0 = 2 and x0 = 0
Runge-Kutta 2nd Order Method (h = 0.1):

Iteration 1:
k1 = h × f(x0, y0) = 0.1 × (2 − 0) = 0.2
k2 = h × f(x0 + h, y0 + k1) = 0.1 × ((2 + 0.2) − (0 + 0.1)) = 0.21
So, y1 = y0 + (1/2)(k1 + k2) = 2 + (1/2)(0.2 + 0.21)
∴ y1 = 2.2050 at x = 0.1

Iteration 2:

k1 = h × f(x1, y1) = 0.1 × (2.205 − 0.1) = 0.2105
k2 = h × f(x1 + h, y1 + k1) = 0.1 × ((2.2050 + 0.2105) − (0.1 + 0.1)) = 0.22155
So, y2 = y1 + (1/2)(k1 + k2) = 2.2050 + (1/2)(0.2105 + 0.22155)
∴ y2 = 2.421025 at x = 0.2

Runge-Kutta 4th Order Method:

Where,
k1 = h × f(xn, yn), k2 = h × f(xn + h/2, yn + k1/2), k3 = h × f(xn + h/2, yn + k2/2), k4 = h × f(xn + h, yn + k3)

Let h = 0.1.

Iteration 1:
k1 = 0.1 × (2 − 0) = 0.2
k2 = 0.1 × ((2 + 0.1) − 0.05) = 0.205
k3 = 0.1 × ((2 + 0.1025) − 0.05) = 0.20525
k4 = 0.1 × ((2 + 0.20525) − 0.1) = 0.210525
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2 + (1/6)(1.231025)
∴ y1 = 2.2052 at x = 0.1
Iteration 2:
k1 = 0.1 × (2.2052 − 0.1) = 0.21052
k2 = 0.1 × ((2.2052 + 0.10526) − 0.15) = 0.21605
k3 = 0.1 × ((2.2052 + 0.10802) − 0.15) = 0.21632
k4 = 0.1 × ((2.2052 + 0.21632) − 0.2) = 0.22215
y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2.2052 + (1/6)(1.29741)
∴ y2 = 2.4214 at x = 0.2
8.5 Taylor Series
The methods discussed in the previous sections are applicable only to first order differential equations, but the Taylor series method can also be used to find the solution of higher order differential equations.
The Taylor series represents a function as a sum of an infinite series expressed in terms of derivatives evaluated at a particular point. It is mathematically expressed as:

y(x) = y(x0) + (x − x0) y′(x0) + ((x − x0)²/2!) y″(x0) + ((x − x0)³/3!) y‴(x0) + ···

Similarly, an approximate value of y at x = x0 + h is given by the following expression:

y(x0 + h) = y0 + h y0′ + (h²/2!) y0″ + (h³/3!) y0‴ + ···
8.5.1 Solved Examples


8.6 Summary
Euler’s method, Euler’s modified method and the Runge-Kutta method are applicable only to first order differential equations.
The first order differential equation should be of the form dy/dx = f(x, y).
The solution of higher order differential equations can be done by using Taylor’s method.
8.7 References
(a) S. S. Sastry, “Introductory Methods of Numerical Analysis”.
(b) Steven C. Chapra, Raymond P. Canale, “Numerical Methods for Engineers”.
8.8 Unit End Exercise
(a) Use Euler’s method to estimate y(0.5) of the following equation with h = 0.25.
(b) Apply Euler’s method to solve with y(0) = 1. Hence, find y(1). Take h = 0.2.
(c) Using Euler’s method, find y(2) where y(1) = 1; take h = 0.2.
(d) Using Euler’s method, find y(2) where y(1) = 1; take h = 0.2.
(e) Solve y′ = 1 − y, y(0) = 0 by Euler’s Modified Method and obtain y at x = 0.1 and x = 0.2.
(f) Apply Euler’s Modified Method to find y(1.2). Given h = 0.1 and y(1) = 2.718; correct up to three decimal places.
(g) Solve with y(1) = 2. Compute y for x = 1.2 and x = 1.4 using
Euler’s Modified Method.
(h) Use the Runge-Kutta method of second order to find y(0.2). Given dy/dx = x + y, y(0) = 2, h = 0.1.
(i) Use the Runge-Kutta method of fourth order to find y(0.1), y(0.2). Given y(0) = 1. (Take h = 0.1)
(j) Use the Runge-Kutta method of fourth order to find y(0.1). Given y(0) = 1. (Take h = 0.1)
(k) Use the Runge-Kutta method of second order to find y(1.2). Given dy/dx = y² + x², y(1) = 0. (Take h = 0.1)
(l) Solve, where y(0) = 1, to find y(0.1) using the Runge-Kutta method.
(m) Use the Runge-Kutta method of order 4 to evaluate y(2.4) from the differential equation y′ = f(x, y) where f(x, y) = (x + 1)y. Initial condition y(2) = 1, h = 0.2, correct up to 4 decimal places.
(n) Use the Taylor series method, for the equation with y(0) = 2, to find the value of y at x = 1.
(o) Use the Taylor series method, for the equation with y(0) = 1, to find the value of y at x = 0.1, 0.2, 0.3.
(p) Use the Taylor series method, for the equation with y(4) = 4, to find the value of y at x = 4.1, 4.2.
(q) Use Taylor’s series method, for the equation y′ = x² − y with y(0) = 1, to find y(0.1).


™™™

Unit 4
9
LEAST-SQUARES REGRESSION
Unit Structure
9.0 Objectives
9.1 Introduction
9.2 Basic Concepts of Correlation
9.2.1 Types of Correlation
9.2.2 Scatter Diagram
9.2.3 Karl Pearson's Coefficient of correlation
9.3 Linear Regression
9.3.1 Regression Equations
9.3.2 Method of Least Squares
9.3.3 Properties of Regression Coefficients
9.4 Polynomial Regression
9.5 Non-linear Regression
9.5.1 Polynomial fit
9.6 Multiple Linear Regression
9.7 General Linear Least Squares
9.7.1 Ordinary Least Square (OLS)
9.7.2 Generalized Least Square (GLS)
9.8 Summary
9.9 References
9.10 Exercises

9.0 Objectives
This chapter would make you understand the following concepts:
• Correlation
• Scatter Diagram
• Linear Regression
• Polynomial Regression
• Non-linear Regression
• Multiple Linear Regression
9.1 Introduction
The functional relationship between two or more variables is called 'Regression'. Here, in regression, the variables may be considered as independent or dependent. For example, imagine a person walking on a straight road from point A to point B in 10 min, where the distance from A to B is 100 m. To reach point C from B he takes another 10 min, while the distance from B to C is 100 m. Then how many minutes will he take to reach point E, which is 200 m from point C? Or if he walks continuously for 20 min, straight like this from point C, where will he reach? Also, what if the walking speed is not uniform, or the road is not straight?
9.2 Basic Concepts of Correlation
Definition:
Two variables are said to be correlated if the change in the value of one variable
causes corresponding change in the value of other variable.
9.2.1 Types of Correlation
1) Positive & Negative Correlation: If changes in the value of one variable cause changes in the value of the other variable in the same direction, then the variables are said to be positively correlated, e.g. the more petrol you put in your car, the farther it can go.
If changes in the value of one variable cause changes in the value of the other variable in the opposite direction, then the variables are said to be negatively correlated, e.g. as a biker's speed increases, his time to get to the finish line decreases.

2) Simple, Multiple & Partial Correlation: The correlation between two variables is simple correlation. The correlation between three or more variables is called multivariate correlation; for example, the relationship between the supply, price and profit of a commodity. In partial correlation, though more than two factors are involved, correlation is studied only between two factors and the other factors are assumed to be constant.
3) Linear and Non-linear Correlation: If the graph is a straight line, the correlation is called linear; if the graph is not a straight line but a curve, it is called non-linear correlation.
9.2.2 Scatter Diagram
This is a graphical method to study correlation. In this method each pair of observations is represented by a point in a plane. The diagram formed by plotting all the points is called a scatter diagram. In the scatter diagram each dot has coordinates like (x1, y1), (x2, y2), ..., (xn, yn) on the XY-plane.
The following are the types of scatter diagram:

Merits & Demerits of Scatter Diagram
Merits:
i) Scatter diagram is easy to draw.
ii) It is not influenced by extreme items.
iii) It is easy to understand and is a non-mathematical method of studying correlation between two variables.

Demerits:
It does not give the exact degree of correlation between two variables; it gives only a rough idea.
9.2.3 Karl Pearson's Coefficient of correlation (Product moment Coefficient
of correlation)
Karl Pearson's coefficient of correlation is an extensively used mathematical method in which a numerical value is computed to measure the degree of relation between linearly related variables. The coefficient of correlation is denoted by “r”.
Steps involved to calculate “r”:
Step 1: Calculate the actual mean of x and the actual mean of y.
Step 2: Take deviations from the actual mean of the x series; this gives the column X = (x − x̄).
Step 3: Take deviations from the actual mean of the y series; this gives the column Y = (y − ȳ).
Step 4: Calculate the summation of the deviation products of X and Y, i.e. ΣXY.
Step 5: Square the deviations of X and Y and calculate the summations, i.e. ΣX² and ΣY².
Step 6: Use the following formula to calculate r:

r = ΣXY / √(ΣX² × ΣY²)

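Steps 1–6 can be sketched in Python. This is an illustrative snippet (not from the text); the function name is arbitrary, and the data are those of Example 9.2.3.1 so the result can be checked against the worked answer.

```python
# Karl Pearson's r via deviations from the means (Steps 1-6):
#   r = ΣXY / sqrt(ΣX² * ΣY²), with X = x - x̄ and Y = y - ȳ

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

xs = [6, 13, 9, 10, 6, 4]     # data from Example 9.2.3.1
ys = [2, 15, 17, 13, 7, 6]
print(round(pearson_r(xs, ys), 2))  # → 0.79
```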
Interpretation of the values of r:

Example 9.2.3.1:
Find the coefficient of correlation for the following data and comment on its value.

x: 6 13 9 10 6 4
y: 2 15 17 13 7 6

Solution: Here n = 6, x̄ = Σx/n = 48/6 = 8 and ȳ = Σy/n = 60/6 = 10.
Consider the following table for the calculation:

x    y    X = (x − x̄)   Y = (y − ȳ)   XY   X²   Y²
6    2    −2            −8            16   4    64
13   15   5             5             25   25   25
9    17   1             7             7    1    49
10   13   2             3             6    4    9
6    7    −2            −3            6    4    9
4    6    −4            −4            16   16   16
48   60                               76   54   172

r = ΣXY / √(ΣX² × ΣY²) = 76 / √(54 × 172) = 0.79

Hence, there is a strongly positive correlation between x and y.
Similarly, we can calculate the value of r without taking deviations from the means (i.e. the direct method),
where

r = [Σxy − (Σx Σy)/n] / [√(Σx² − (Σx)²/n) × √(Σy² − (Σy)²/n)]

with n = number of observations.

Also, we can calculate the value of r by using assumed means of x and y:

r = [Σuv − (Σu Σv)/n] / [√(Σu² − (Σu)²/n) × √(Σv² − (Σv)²/n)]

where n = number of observations, u = x − A (A is the assumed mean of the x series) and v = y − B (B is the assumed mean of the y series).
9.3 Linear Regression
Regression is the method of estimating the value of one variable when the other variable is known and when the variables are correlated. When the points of the scatter diagram concentrate around a straight line, the regression is called linear, and this straight line is known as the line of regression.
9.3.1 Regression Equations
Regression equations are algebraic forms of the regression lines. Since there are two regression lines, there are two regression equations: the regression of x on y (y is independent and x is dependent) is used to estimate the values of x for given changes in y, and the regression equation of y on x (x is independent and y is dependent) is used to estimate the values of y for given changes in x.
a) Regression equation of y on x:
The regression equation of y on x is expressed as

y = a + bx ---(1)

where x = independent variable, y = dependent variable, a = y-intercept, b = slope of the line and n = number of observations.
The values of 'a' and 'b' can be obtained by the method of least squares. In this method the following two algebraic normal equations are solved to determine the values of a and b:

Σy = na + b Σx ---(2)
Σxy = a Σx + b Σx² ---(3)

b) Regression equation of x on y:
The regression equation of x on y is expressed as

x = c + dy ---(1')

where y = independent variable, x = dependent variable, c = intercept, d = slope of the line and n = number of observations.
The values of 'c' and 'd' can be obtained by the method of least squares. In this method the following two algebraic normal equations are solved to determine the values of c and d:

Σx = nc + d Σy ---(2')
Σxy = c Σy + d Σy² ---(3')
Regression equation using regression coefficients:
Direct method:
a) Regression equation of y on x:
When the values of x and y are large, the equation y = a + bx is changed to

y − ȳ = b_yx (x − x̄) ---(4)

where ȳ and x̄ are the arithmetic means of y and x respectively.
Dividing both sides of equation (2) by n, we get ȳ = a + b x̄, so that a = ȳ − b x̄.
Substituting this in equation (1), we get

y − ȳ = b (x − x̄)

Writing b with the usual subscript, we get equation (4).
Again, multiplying equation (2) by Σx and equation (3) by n, we have

Σx Σy = na Σx + b (Σx)²
n Σxy = na Σx + bn Σx²

Subtracting the first from the second, we get

n Σxy − Σx Σy = b (n Σx² − (Σx)²)

b = [Σxy − (Σx Σy)/n] / [Σx² − (Σx)²/n]

Writing b with the usual subscript, we get:

Regression coefficient of y on x: b_yx = [Σxy − (Σx Σy)/n] / [Σx² − (Σx)²/n]
b) Regression equation of x on y:
Proceeding the same way as before, the regression equation of x on y is

x − x̄ = b_xy (y − ȳ)

where the regression coefficient of x on y is b_xy = [Σxy − (Σx Σy)/n] / [Σy² − (Σy)²/n].
Example 9.3.1.1:
Find the two regression equations for the following data:

x: 16 18 20 23 26 27
y: 11 12 14 15 17 16

Solution:
Calculation for the regression equations:

x    y    xy    x²    y²
16   11   176   256   121
18   12   216   324   144
20   14   280   400   196
23   15   345   529   225
26   17   442   676   289
27   16   432   729   256
Σx = 130  Σy = 85  Σxy = 1891  Σx² = 2914  Σy² = 1231

x̄ = Σx/n = 130/6 = 21.67, ȳ = Σy/n = 85/6 = 14.17
i) The regression equation of y on x is y − ȳ = b_yx (x − x̄).

The regression coefficient of y on x is

b_yx = [Σxy − (Σx Σy)/n] / [Σx² − (Σx)²/n] = [1891 − (130 × 85)/6] / [2914 − 130²/6] = 0.5068

The regression equation of y on x is

y − 14.17 = 0.5068 (x − 21.67)
y = 0.5068x + 3.19

ii) The regression equation of x on y is x − x̄ = b_xy (y − ȳ).

The regression coefficient of x on y is

b_xy = [Σxy − (Σx Σy)/n] / [Σy² − (Σy)²/n] = [1891 − (85 × 130)/6] / [1231 − 85²/6] = 1.84

The regression equation of x on y is

x − 21.67 = 1.84 (y − 14.17)
x = 1.84y − 4.40
Deviations taken from the arithmetic means of x and y:

a) Regression equation of y on x:
Another way of expressing the regression equation of y on x is by taking deviations of the x and y series from their respective actual means. In this case also, the equation y = a + bx is changed to y − ȳ = b_yx (x − x̄).
The value of b_yx can be easily obtained as follows:

b_yx = Σχγ / Σχ²

where χ = (x − x̄) and γ = (y − ȳ).

The two normal equations which we had written earlier, when changed in terms of χ and γ, become

Σγ = na + b Σχ ---(5)
Σχγ = a Σχ + b Σχ² ---(6)

Since Σχ = Σγ = 0 (deviations being taken from the means),
equation (5) reduces to na = 0, ∴ a = 0;
equation (6) reduces to Σχγ = b Σχ².

∴ b or b_yx = Σχγ / Σχ²

After obtaining the value of b_yx, the regression equation can easily be written in terms of x and y by substituting χ = (x − x̄) and γ = (y − ȳ).
b) Regression equation of x on y:
The regression equation x = c + dy reduces to x − x̄ = b_xy (y − ȳ),
where b_xy = Σχγ / Σγ².
Example 9.3.1.2:
The following data relate to advertising expenditure and sales:

Advertising expenditure (in lakhs): 5  6  7  8  9
Sales (in lakhs):                   20 30 40 60 50

i) Find the regression equations.
ii) Estimate the likely sales when the advertising expenditure is Rs. 12 lakhs.
iii) What would be the advertising expenditure if the firm wants to attain a sales target of Rs. 90 lakhs?

Solution:
i) Let the advertising expenditure be denoted by x and sales by y.
The arithmetic mean of x is x̄ = Σx/n = 35/5 = 7,
and the arithmetic mean of y is ȳ = Σy/n = 200/5 = 40.

Calculation for the regression equations:

x    y    χ = (x − x̄)   γ = (y − ȳ)   χγ   χ²   γ²
5    20   −2            −20           40   4    400
6    30   −1            −10           10   1    100
7    40   0             0             0    0    0
8    60   1             20            20   1    400
9    50   2             10            20   4    100
Σx = 35  Σy = 200  Σχ = 0  Σγ = 0  Σχγ = 90  Σχ² = 10  Σγ² = 1000
Regression equation of y on x:

b_yx = Σχγ / Σχ² = 90/10 = 9

The regression equation of y on x is

y − ȳ = b_yx (x − x̄)
y − 40 = 9(x − 7)
y − 40 = 9x − 63
y = 9x − 23

Regression equation of x on y:

b_xy = Σχγ / Σγ² = 90/1000 = 0.09

x − x̄ = b_xy (y − ȳ)
x − 7 = 0.09(y − 40)
x − 7 = 0.09y − 3.6
x = 0.09y + 3.4
ii) To estimate the likely sales when the advertising expenditure is 12 lakhs:
y = 9x − 23 = 9(12) − 23 = 85 lakhs Rs.
iii) The advertising expenditure if the firm wants to attain a sales target of 90 lakhs Rs.:
x = 0.09y + 3.4 = 0.09(90) + 3.4 = 11.5 lakhs Rs.

Deviations taken from the assumed means:
a) Regression equation of ࢟ on ࢞:
The equation ݕൌܽ൅ݔܾ is changed to ݕെݕതൌܾ௬௫ሺݔെݔҧሻ
W h e n a c t u a l m e a n i s n o t a w h o l e n u m b e r , b u t a f r a c t i o n o r w h e n values of
ݔand ݕ are large
W e u s e t h e f o r m u l a : ࢈࢞࢟ൌσ࢛࢜ିσ࢛σ࢜
࢔σ࢛૛ିሺσ࢛ሻ૛

W h e r e ݑൌݔെܣ , ݒൌݕെܤ ǡ
ܣൌ Assumed mean for ݔǡ ܤൌ Assumed mean for ݕ
b) Regression equation of ࢞ on ࢟ :
T h e r e g r e s s i o n e q u a t i o n ݔൌܿ൅ݕ݀ is reduces to ݔെݔҧൌܾ௫௬ሺݕെݕതሻ
W h e r e ࢈࢟࢞ൌσ࢛࢜ିσ࢛σ࢜
࢔σ࢜૛ିሺσ࢜ሻ૛

Where ݑൌݔെܣ , ݒൌݕെܤ ǡ

ܣൌ Assumed mean for ݔǡ ܤൌ Assumed mean for ݕ
Also the values of correlation coefficient, means and Standard deviations of two
variables are given, in such a case we can find ܾ௫௬ and ܾ௬௫ as following:
Let ݔҧ , ݕത be the means and ߪ௫ , ߪ௬ be the standard deviation of the ݔ and ݕ
respectively and ݎ be the correlation coefficient, then ࢈ࢽ࣑ൌ࣑࢘࣌
࣌ࢽ and ࢈࣑ࢽൌ࢘࣌ࢽ
࣑࣌
Example 9.3.1.3 :
From the following data, calculate the two lines of regression:

x: 17  21  18  22  16
y: 60  70  68  70  66

i) Estimate the value of y when x is 35.
ii) Estimate the value of x when y is 75.
Solution :
Let 21 be the assumed mean for x, i.e. A = 21, and 70 the assumed mean for y,
i.e. B = 70.

Calculation for regression equations:

 x    y    u = x − A   v = y − B    uv    u²    v²
17   60      −4          −10        40    16   100
21   70       0            0         0     0     0
18   68      −3           −2         6     9     4
22   70       1            0         0     1     0
16   66      −5           −4        20    25    16
Σx=94  Σy=334   Σu=−11      Σv=−16  Σuv=66  Σu²=51  Σv²=120
Regression equation of y on x:
b_yx = (Σuv − (Σu)(Σv)/n) / (Σu² − (Σu)²/n), where u = x − A, v = y − B
b_yx = (66 − (−11)(−16)/5) / (51 − (−11)²/5) = 30.8/26.8 = 1.1492
x̄ = 94/5 = 18.8  and  ȳ = 334/5 = 66.8
Regression equation of y on x is y − ȳ = b_yx (x − x̄)
y − 66.8 = 1.1492(x − 18.8)
y − 66.8 = 1.1492x − 21.6049
y = 1.1492x + 45.1951
i) When x is 35, y is
y = 1.1492(35) + 45.1951
y = 85.4171

Regression equation of x on y:
b_xy = (Σuv − (Σu)(Σv)/n) / (Σv² − (Σv)²/n), where u = x − A, v = y − B
b_xy = (66 − (−11)(−16)/5) / (120 − (−16)²/5) = 30.8/68.8 = 0.4476
x̄ = 18.8  and  ȳ = 66.8
Regression equation of x on y is x − x̄ = b_xy (y − ȳ)
x − 18.8 = 0.4476(y − 66.8)
x − 18.8 = 0.4476y − 29.8996
x = 0.4476y − 11.0996
ii) When y is 75, x is
x = 0.4476(75) − 11.0996
x = 22.47
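The assumed-mean shortcut of this example can be sketched in Python (illustrative only, with A and B as in the example):

```python
xs = [17, 21, 18, 22, 16]
ys = [60, 70, 68, 70, 66]
A, B = 21, 70                                  # assumed means
n = len(xs)
u = [x - A for x in xs]
v = [y - B for y in ys]
suv = sum(ui * vi for ui, vi in zip(u, v))     # 66
su, sv = sum(u), sum(v)                        # -11, -16
su2 = sum(ui * ui for ui in u)                 # 51
sv2 = sum(vi * vi for vi in v)                 # 120
b_yx = (suv - su * sv / n) / (su2 - su ** 2 / n)   # ≈ 1.1492
b_xy = (suv - su * sv / n) / (sv2 - sv ** 2 / n)   # ≈ 0.4476
```

Any convenient assumed means give the same coefficients; only the intermediate sums change.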
9.3.2 Method of Least Squares
The famous German mathematician Carl Friedrich Gauss had investigated the
method of least squares as early as 1794, but he did not publish it until 1809. In
the meantime, the method was discovered and published in 1806 by the French
mathematician Legendre, who quarrelled with Gauss about who had discovered
the method first. The basic idea of the method of least squares is easy to
understand. It may seem unusual that when several people measure the same
quantity, they usually do not obtain the same results. In fact, if the same person
measures the same quantity several times, the results will vary. What then is the
best estimate for the true measurement? The method of least squares gives a way
to find the best estimate, assuming that the errors (i.e. the differences from the
true value) are random and unbiased. The method of least squares is a standard
approach in regression analysis to approximate the solution of overdetermined
systems by minimizing the sum of the squares of the residuals made in the results
of every single equation. Its most important application is in data fitting.
Steps Involved:
Step (i): Calculate the actual mean of x and the actual mean of y.
Step (ii): Calculate the summation of the x series and the y series. This gives
Σx and Σy respectively.
Step (iii): Square the values of the series x and y and calculate their
summations. This gives Σx² and Σy² respectively.
Step (iv): Multiply each value of the series x by the corresponding value of the
series y and calculate the summation. This gives Σxy.
Step (v): Solve the following normal equations to determine the values of 'a'
and 'b' for the regression equation of y on x:
Σy = na + b Σx
Σxy = a Σx + b Σx²
After determining the values of 'a' and 'b', put these values in the equation
y = a + bx
Step (vi): Solve the following normal equations to determine the values of 'c'
and 'd' for the regression equation of x on y:
Σx = nc + d Σy
Σxy = c Σy + d Σy²
After determining the values of 'c' and 'd', put these values in the equation
x = c + dy.
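The steps above amount to solving a 2×2 linear system. A minimal Python sketch (the function name is ours; the sample call uses the data of Example 9.3.2.1 below):

```python
def fit_line(xs, ys):
    """Fit y = a + b*x by least squares via the normal equations:
       sum(y)  = n*a  + b*sum(x)
       sum(xy) = a*sum(x) + b*sum(x^2)
    solved here by Cramer's rule."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b

a, b = fit_line([2, 4, 6, 8, 10], [15, 14, 8, 7, 2])
```

For that data the routine returns a = 19.1 and b = −1.65, matching the worked example.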
Example 9.3.2.1 :
From the following data fit a regression line where y is the dependent variable
and x is the independent variable (i.e. y on x) using the method of least squares;
also estimate the value of y when x = 7.8.

x:  2   4   6   8  10
y: 15  14   8   7   2

Solution : Let y = a + bx be the required equation of the line (because y on x).
To determine the values of 'a' and 'b' for the regression equation of y on x,
the two normal equations are
Σy = na + b Σx
Σxy = a Σx + b Σx²
where n = 5 (number of data points).

 x    y    xy    x²
 2   15    30     4
 4   14    56    16
 6    8    48    36
 8    7    56    64
10    2    20   100
Σx=30  Σy=46  Σxy=210  Σx²=220

Substituting these calculated values into the normal equations, we get
46 = 5a + 30b
210 = 30a + 220b
Solving the above equations, we get
a = 19.1 and b = −1.65
Hence, the required line y = a + bx is y = 19.1 + (−1.65)x
or y = 19.1 − 1.65x
When x = 7.8, y is equal to 19.1 − 1.65(7.8) = 6.23.
Hence the estimated value of y when x = 7.8 is 6.23.
Example 9.3.2.2 :
From the following data fit a regression line where x is the dependent variable
and y is the independent variable (i.e. x on y) using the method of least squares;
also estimate the value of x when y = 30.

x:  7   8  11  12  14  16
y: 20  12  15  19   8  25
128NUMERICAL AND STATISTICAL METHODSSolution : Let ݔൌܿ൅ݕ݀ be the required equation of line (because ݔ on ݕሻǤ
To determine the values of ' ܿ 'and '݀ 'for regression equation ݔ on ݕ
T h e t w o n o r m a l e q u a t i o n s a r e σݔൌܿ݊ ൅݀ σݕ
σݔݕ ൌ ܿσݕ൅݀σݕଶ
w h e r e ݊ൌ͸ (number of datasets) ࢞࢟࢞࢟࢟૛͹ʹͲͳͶͲͶͲͲͺͳʹͻ͸ͳͶͶͳͳͳͷͳ͸ͷʹʹͷͳʹͳͻʹʹͺ͵͸ͳͳͶͺͳͳʹ͸Ͷͳ͸ʹͷͶͲͲ͸ʹͷ෍ݔൌ͸ͺ෍ݕൌͻͻ෍ݔݕൌͳͳͶͳ෍ݕଶൌͳͺͳͻSubstituting this calculated values into normal equation, we'll get
͸ͺ ൌ ͸ܿ൅ͻͻ݀
ͳͳͶͳ ൌ ͻͻܿ൅ͳͺͳͻ݀
Solving the above equations, we get
ܿൌͻ Ǥ ͸ Ͷ and ݀ൌ Ͳ Ǥ ͳ Ͳ
Hence, the required line ࢞ൌࢉ൅࢟ࢊ is ࢞ൌૢ Ǥ૟ ૝൅૙ Ǥ૚ ૙ ࢟
When ൌ͵ Ͳ , ݔis equal to ͻǤ͸Ͷ൅ͲǤͳͲሺ͵Ͳሻ = 12.64
Hence estimate value of ࢞ when ࢟ൌ૜ ૙ is 12.64
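For the x-on-y direction the same elimination applies with the roles of x and y swapped; a quick check of this example in Python (illustrative only):

```python
xs = [7, 8, 11, 12, 14, 16]
ys = [20, 12, 15, 19, 8, 25]
n = len(xs)
sx, sy = sum(xs), sum(ys)                      # 68, 99
syy = sum(y * y for y in ys)                   # 1819
sxy = sum(x * y for x, y in zip(xs, ys))       # 1141
det = n * syy - sy * sy                        # Cramer's rule denominator
c = (sx * syy - sy * sxy) / det                # ≈ 9.643
d = (n * sxy - sy * sx) / det                  # ≈ 0.1024
x_at_30 = c + d * 30
```

With full precision c ≈ 9.643 and d ≈ 0.1024, giving an estimate of about 12.72; the text's 12.64 follows from rounding c and d to 9.64 and 0.10 before substituting.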
9.3.3 Properties of Regression Coefficients
i) The correlation coefficient is the geometric mean of the regression
coefficients.
The coefficients of regression are r σ_y/σ_x and r σ_x/σ_y.
∴ The geometric mean between them is
√( (r σ_y/σ_x) × (r σ_x/σ_y) ) = √(r²) = ±r
ii) Both the regression coefficients have the same sign, i.e., they will be either
both positive or both negative. It is not possible for one to be positive and the
other negative. If the regression coefficients are positive, then r is positive, and
if the regression coefficients are negative, r is negative.
iii) The arithmetic mean of the regression coefficients is greater than or equal to
the correlation coefficient. For example, if b_xy = 0.3 and b_yx = 0.6, then
(b_xy + b_yx)/2 = (0.3 + 0.6)/2 = 0.45
and the value of r = √(0.3 × 0.6) = 0.42, which is less than 0.45.
iv) The value of the correlation coefficient cannot exceed one, so if one of the
regression coefficients is greater than unity, the other must be less than unity.
For example, if b_xy = 1.4 and b_yx = 1.5, then r = √(1.4 × 1.5) = 1.45, which
(being greater than 1) is not possible.
v) The point (x̄, ȳ) satisfies both the regression equations, as it lies on both the
lines, so it is the point of intersection of the two lines. This is helpful whenever
the regression equations are known and the mean values of x and y are to be
obtained: the two regression equations can be solved simultaneously, and the
common solution gives x̄ and ȳ.
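These properties can be checked numerically. A small sketch (the function name is ours; it assumes the two coefficients share a sign, as property ii requires):

```python
import math

def r_from_coefficients(b_yx, b_xy):
    """Return r = ±sqrt(b_yx * b_xy), verifying the stated properties."""
    assert b_yx * b_xy >= 0, "regression coefficients must share a sign"
    r = math.copysign(math.sqrt(b_yx * b_xy), b_yx)  # r carries the common sign
    assert abs(r) <= 1, "both coefficients cannot exceed unity together"
    # Arithmetic mean of the coefficients dominates |r| (AM-GM inequality).
    assert (abs(b_yx) + abs(b_xy)) / 2 >= abs(r)
    return r

r = r_from_coefficients(0.6, 0.3)   # the text's example: r = sqrt(0.18) ≈ 0.42
```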
Example 9.3.3.1 :
From the given data calculate the equations of the two lines of regression:

       Mean   Standard deviation
x       20          3
y      100         12

Coefficient of correlation is 0.8, i.e. r = 0.8.
Solution :
We have x̄ = 20, σ_x = 3, ȳ = 100, σ_y = 12 and r = 0.8.
i) To find the regression equation of y on x:
b_yx = r × σ_y/σ_x = 0.8 × 12/3 = 3.2
Regression equation of y on x is (y − ȳ) = b_yx (x − x̄)
(y − 100) = 3.2(x − 20)
(y − 100) = 3.2x − 64
y = 3.2x + 36
ii) To find the regression equation of x on y:
b_xy = r × σ_x/σ_y = 0.8 × 3/12 = 0.2
Regression equation of x on y is (x − x̄) = b_xy (y − ȳ)
(x − 20) = 0.2(y − 100)
(x − 20) = 0.2y − 20
x = 0.2y
Example 9.3.3.2 :
Given the two regression lines x − 4y = 5 and x − 16y = −64, find the
means of x and y and the correlation coefficient between x and y, i.e. find the
values of x̄, ȳ and r.
Solution :
To find the means of x and y:
As per the properties, (x̄, ȳ) lies on both the regression lines. Thus
x̄ − 4ȳ − 5 = 0 and x̄ − 16ȳ + 64 = 0.
Solve the two equations simultaneously:
x̄ − 4ȳ = 5      ---(1)
x̄ − 16ȳ = −64   ---(2)
Subtracting (2) from (1):
12ȳ = 69
ȳ = 5.75
Substituting in equation (1):
x̄ − 4(5.75) = 5
x̄ = 28
∴ x̄ = 28, ȳ = 5.75
To find the correlation coefficient, we have two lines of regression. Which line
is the regression line of y on x and which is that of x on y is not known.
Let us assume that x − 4y = 5 is the regression line of y on x and x − 16y = −64
is the regression line of x on y.
Hence we get
y = −5/4 + (1/4)x and x = 16y − 64
∴ b_yx = 1/4 and b_xy = 16
We need to check whether the assumption is correct or not.
Check 1: Signs: Both regression coefficients are positive.
Check 2: Product: The product of the two regression coefficients is
b_yx × b_xy = (1/4) × 16 = 4 > 1, which is greater than 1.
∴ The assumption is wrong.
∴ Let x − 4y = 5 be the regression line of x on y
and x − 16y = −64 be the regression line of y on x.
∴ b_xy = 4 and b_yx = 1/16
Check 1: Signs: Both regression coefficients are positive.
Check 2: Product: The product of the two regression coefficients is
b_yx × b_xy = (1/16) × 4 = 0.25 < 1, which is less than 1.
∴ This assumption is correct.
Now, r = ±√(b_xy × b_yx) = ±√(4 × 1/16) = ±1/2
The sign of the correlation coefficient is the same as the sign of both regression
coefficients.
∴ r = 1/2 or 0.5
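The whole procedure of this example, intersecting the lines for the means and then testing both labellings, can be sketched in Python (our own function; each line is given as coefficients (p, q, s) of px + qy = s):

```python
def means_and_r(line1, line2):
    """Given two regression lines p*x + q*y = s, return (x_bar, y_bar, r)."""
    (p1, q1, s1), (p2, q2, s2) = line1, line2
    det = p1 * q2 - p2 * q1
    x_bar = (s1 * q2 - s2 * q1) / det          # Cramer's rule: intersection point
    y_bar = (p1 * s2 - p2 * s1) / det
    # Assume line1 is y on x (b_yx = -p1/q1) and line2 is x on y (b_xy = -q2/p2).
    b_yx, b_xy = -p1 / q1, -q2 / p2
    if b_yx * b_xy > 1:                        # impossible, so swap the labelling
        b_yx, b_xy = -p2 / q2, -q1 / p1
    r = (b_yx * b_xy) ** 0.5
    if b_yx < 0:
        r = -r                                 # r carries the common sign
    return x_bar, y_bar, r

x_bar, y_bar, r = means_and_r((1, -4, 5), (1, -16, -64))
```

For the lines of this example it returns x̄ = 28, ȳ = 5.75 and r = 0.5.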
9.4 Polynomial Regression
In statistics, polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modeled as an nth-degree polynomial in x.
Polynomial regression fits a nonlinear relationship between the value of x and
the corresponding conditional mean of y, and has been used to describe nonlinear
phenomena such as the growth rate of tissues, the distribution of carbon isotopes
in lake sediments, etc.
9.5 Non-linear Regression
Regression is called non-linear if there exists a relationship other than a
straight line between the variables under consideration.
Nonlinear regression is a form of regression analysis in which observational data
are modeled by a function which is a nonlinear combination of the model
parameters and depends on one or more independent variables.
9.5.1 Polynomial fit :
Let y = a + bx + cx² be the second-degree parabolic curve of regression of y on x
to be fitted for the data (x_i, y_i), where i = 1, 2, 3, …, n.
To fit the parabolic curve of y on x,
consider ŷ = a + bx + cx².
For each x_i, ŷ_i = a + bx_i + cx_i².
∴ The error in estimation is y_i − ŷ_i, and the summation of squares of errors is
φ = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − ŷ_i)²
∴ φ = Σ_{i=1}^{n} (y_i − a − bx_i − cx_i²)²   ...(1)
To find the values of a, b, c such that φ is minimum,
∂φ/∂a = 0, ∂φ/∂b = 0 and ∂φ/∂c = 0.
Differentiating φ w.r.t. a, we get
∂φ/∂a = 2 Σ (y_i − a − bx_i − cx_i²)(−1)
∴ Σ (y_i − a − bx_i − cx_i²) = 0
Σ y_i − na − b Σ x_i − c Σ x_i² = 0
∴ Σ y_i = na + b Σ x_i + c Σ x_i²   ...(2)

Again differentiating φ w.r.t. b, we get
∂φ/∂b = 2 Σ (y_i − a − bx_i − cx_i²)(−x_i)
∴ Σ (y_i − a − bx_i − cx_i²)(x_i) = 0
Σ x_i y_i − a Σ x_i − b Σ x_i² − c Σ x_i³ = 0
∴ Σ x_i y_i = a Σ x_i + b Σ x_i² + c Σ x_i³   ...(3)
Again differentiating φ w.r.t. c, we get
∂φ/∂c = 2 Σ (y_i − a − bx_i − cx_i²)(−x_i²)
∴ Σ (y_i − a − bx_i − cx_i²)(x_i²) = 0
Σ x_i² y_i − a Σ x_i² − b Σ x_i³ − c Σ x_i⁴ = 0
∴ Σ x_i² y_i = a Σ x_i² + b Σ x_i³ + c Σ x_i⁴   ...(4)
Equations (2), (3) and (4) are the normal equations for fitting a second-degree
parabolic curve,
i.e. ŷ = a + bx + cx².
Example 9.5.1.1 :
Using the method of least squares fit a second-degree parabola for the following
data:

x: 1  2  3  4   5   6   7   8  9
y: 2  6  7  8  10  11  11  10  9

Solution :
The second-degree polynomial equation is y = a + bx + cx² and the normal
equations are
Σy = na + b Σx + c Σx²
Σxy = a Σx + b Σx² + c Σx³
Σx²y = a Σx² + b Σx³ + c Σx⁴

Consider the following table for the calculation:

 x    y    x²    xy    x³    x²y    x⁴
 1    2     1     2     1      2      1
 2    6     4    12     8     24     16
 3    7     9    21    27     63     81
 4    8    16    32    64    128    256
 5   10    25    50   125    250    625
 6   11    36    66   216    396   1296
 7   11    49    77   343    539   2401
 8   10    64    80   512    640   4096
 9    9    81    81   729    729   6561
Σx=45 Σy=74 Σx²=285 Σxy=421 Σx³=2025 Σx²y=2771 Σx⁴=15333

Here n = 9,
Σx = 45, Σy = 74, Σx² = 285, Σxy = 421, Σx³ = 2025, Σx²y = 2771, Σx⁴ = 15333.
Substituting these values into the normal equations we get
74 = 9a + 45b + 285c          ...(1)
421 = 45a + 285b + 2025c      ...(2)
2771 = 285a + 2025b + 15333c  ...(3)
Solving equations (1), (2) and (3) (simultaneously or by Cramer's rule),
a = −0.9285, b = 3.5231, c = −0.2673
Hence, the second-degree equation is
y = −0.9285 + 3.5231x − 0.2673x²
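The three normal equations form a 3×3 linear system that can be assembled from power sums and solved by Cramer's rule, exactly as the example does by hand. A self-contained Python sketch (helper names are ours):

```python
def solve3(M, r):
    """Solve a 3x3 system M·v = r by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(M)
    sol = []
    for j in range(3):   # replace column j by the right-hand side
        Mj = [[r[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]
        sol.append(det3(Mj) / D)
    return sol

xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [2, 6, 7, 8, 10, 11, 11, 10, 9]
n = len(xs)
S = lambda p: sum(x ** p for x in xs)                     # power sums of x
T = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))   # sums of x^p * y
M = [[n,    S(1), S(2)],
     [S(1), S(2), S(3)],
     [S(2), S(3), S(4)]]
a, b, c = solve3(M, [T(0), T(1), T(2)])
```

This reproduces a ≈ −0.9286, b ≈ 3.5232, c ≈ −0.2673 for the data above.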
9.6 Multiple Linear Regression
Multiple linear regression is a technique that uses two or more independent
variables to predict the outcome of a dependent variable.
The technique enables analysts to determine the variation of the model and the
relative contribution of each independent variable to the total variance.
Multiple regression can take two forms, i.e., linear regression and non-linear
regression.
Consider such a linear function as y = a + bx + cz.
To fit a multiple linear regression for x, y and z, consider
ŷ = a + bx + cz.
For each x_i, ŷ_i = a + bx_i + cz_i.
∴ The error in estimation is y_i − ŷ_i, and the summation of squares of errors is
φ = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − ŷ_i)²
∴ φ = Σ_{i=1}^{n} (y_i − a − bx_i − cz_i)²   ...(1)
To find the values of a, b, c such that φ is minimum,
∂φ/∂a = ∂φ/∂b = ∂φ/∂c = 0.
Differentiating equation (1) w.r.t. a we get
∂φ/∂a = 2 Σ (y_i − a − bx_i − cz_i)(−1)
∴ Σ y_i − na − b Σ x_i − c Σ z_i = 0
Σ y_i = na + b Σ x_i + c Σ z_i   ...(2)
Again differentiating equation (1) w.r.t. b we get
∂φ/∂b = 2 Σ (y_i − a − bx_i − cz_i)(−x_i)
∴ Σ x_i y_i − a Σ x_i − b Σ x_i² − c Σ x_i z_i = 0
Σ x_i y_i = a Σ x_i + b Σ x_i² + c Σ x_i z_i   ...(3)
Again differentiating equation (1) with respect to c, we get
∂φ/∂c = 2 Σ (y_i − a − bx_i − cz_i)(−z_i)
∴ Σ z_i y_i − a Σ z_i − b Σ x_i z_i − c Σ z_i² = 0
Σ z_i y_i = a Σ z_i + b Σ x_i z_i + c Σ z_i²   ...(4)
Here equations (2), (3) and (4) are the normal equations for fitting the
multiple linear regression equation
ŷ = a + bx + cz.
Example 9.6.1 :
Obtain a regression plane by using multiple regression to fit the following data:

x:  0   1   2   3   4
y: 13  17  19  21  26
z:  1   2   3   4   5

Solution :
The multiple regression equation is y = a + bx + cz
and the normal equations are
Σy = na + b Σx + c Σz
Σxy = a Σx + b Σx² + c Σxz
Σzy = a Σz + b Σxz + c Σz²

Consider the following table for the calculation:

 x    y    z    xy    x²    xz    z²    zy
 0   13    1     0     0     0     1    13
 1   17    2    17     1     2     4    34
 2   19    3    38     4     6     9    57
 3   21    4    63     9    12    16    84
 4   26    5   104    16    20    25   130
Σx=10 Σy=96 Σz=15 Σxy=222 Σx²=30 Σxz=40 Σz²=55 Σzy=318

Here n = 5,
Σx = 10, Σy = 96, Σz = 15, Σxy = 222, Σx² = 30, Σxz = 40, Σz² = 55, Σzy = 318.
Substituting these values into the normal equations we get
96 = 5a + 10b + 15c     ...(1)
222 = 10a + 30b + 40c   ...(2)
318 = 15a + 40b + 55c   ...(3)
Solving equations (1), (2) and (3) (here z = x + 1, so the three equations are
not independent; taking c = 0 gives the simplest solution),
a = 13.2, b = 3, c = 0
Hence, the required regression plane equation is
y = 13.2 + 3x
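Because z = x + 1 in this data, the normal equations are singular. A least-squares solver that handles rank deficiency, such as numpy's `lstsq`, still returns a valid fit whose predicted values agree with y = 13.2 + 3x (a sketch, assuming numpy is available):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([13, 17, 19, 21, 26], dtype=float)
z = np.array([1, 2, 3, 4, 5], dtype=float)

# Design matrix with columns [1, x, z]; lstsq minimizes ||A @ coef - y||^2
# and returns a minimum-norm solution when A is rank-deficient.
A = np.column_stack([np.ones_like(x), x, z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef    # identical to 13.2 + 3*x on this data
```

The individual coefficients are not unique here, but the fitted plane's predictions are.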
9.7 General Linear Least Squares
Linear least squares is the least-squares approximation of a linear function to
data. It is a set of formulations for solving statistical problems involved in
linear regression, including variants for ordinary and generalized least squares.
9.7.1 Ordinary Least Squares (OLS)
Ordinary least squares (OLS) is a type of linear least-squares method for
estimating the unknown parameters in a linear regression model. OLS chooses the
parameters of a linear function of a set of explanatory variables by the principle
of least squares: minimizing the sum of the squares of the differences between the
observed dependent variable (values of the variable being observed) in the given
dataset and those predicted by the linear function of the independent variable.
9.7.2 Generalized Least Squares (GLS)
Generalized least squares (GLS) is a technique for estimating the unknown
parameters in a linear regression model when there is a certain degree of
correlation between the residuals in the regression model.
9.8 Summary
x A correlation or simple linear regression analysis can determine if two
numeric variables are significantly linearly related. A correlation analysis
provides information on the strength and direction of the linear relationship
between two variables, while a simple linear regression analysis estimates
parameters in a linear equation that can be used to predict values of one
variable based on the other.
x Both correlation and regression can be used to examine the presence of a
linear relationship between two variables, provided certain assumptions
about the data are satisfied. The results of the analysis, however, need to be
interpreted with care, particularly when looking for a causal relationship or
when using the regression equation for prediction. Multiple and logistic
regression will be the subject of future reviews.
9.9 References
The following books are recommended for further reading:
x Introductory Methods of Numerical Analysis - S. S. Sastry, PHI.
x Numerical Methods for Engineers - Steven C. Chapra, Raymond P. Canale,
Tata McGraw Hill.
9.10 Exercises
Q.1) Calculate the product moment coefficient of correlation for the given data:
x: 13  10   9  11  12  14   8
y: 15   9   7  10  12  13   4
(Ans. r = 0.948)
Q.2) Calculate the coefficient of correlation between the values of x and y given
below:
x: 66  70  70  68  74  70  72  75
y: 68  74  66  69  72  63  69  65
(Ans. r = −0.03)
Q.3) Find the coefficient of correlation for the following data:
x:  80   84   90   75   72   70   78   82   86
y: 110  115  118  105  104  100  108  112  116
(Ans. r = 0.987)
Q.4) Find the coefficient of correlation for the following data by using the
assumed mean method:
x: 212  214  205  220  225  214  218
y: 500  515  511  530  522  516  525
(Ans. r ≈ 0.668)
Q.5) Fit a straight line by using the least-squares method for the following data:
x:   1    2    3    4    5    6    7
y: 0.5  2.5  2.0  4.0  3.5  6.0  5.2
Q.6) Find the best-fit values of a and b so that y = a + bx fits the data given in
the table:
x: 0    1    2    4
y: 1  1.8  3.3  4.5
Q.7) Consider the data below:
x: 1  2   3   4
y: 1  7  11  21
Use linear least-squares regression to determine a function of the form y = b e^(mx)
for the given data by specifying b and m.
(Hint: Take the natural log on both sides of the function.)
Q.8) Fit a second-order polynomial to the data given below:
x: 1  2  3  4   5   6   7   8  9
y: 2  6  7  8  10  11  11  10  9
Q.9) Fit a second-order polynomial to the data given below:
x:  0   1  2   3   4
y: −4  −1  4  11  20
Q.10) Use multiple regression to fit the following data:
x: 0   2  2.5  1  4   7
y: 0   1    2  3  6   2
z: 5  10    9  0  3  27

Unit 4
10
LINEAR PROGRAMMING
Unit Structure
10.0 Objectives
10.1 Introduction
10.2 Common terminology for LPP
10.3 Mathematical Formulation of L.P.P
10.4 Graphical Method
10.5 Summary
10.6 References
10.7 Exercise
10.0 Objectives
This chapter would make you understand the following concepts:
x Sketching the graph for linear equations.
x Formulate the LPP
x Conceptualize the feasible region.
x Solve the LPP with two variables using graphical method.
10.1 Introduction
Linear programming (LP, also called linear optimization) is a method to achieve
the best outcome (such as maximum profit or lowest cost) in a mathematical
model whose requirements are represented by linear relationships. Linear
programming is a special case of mathematical programming (also known as
mathematical optimization).
10.2 Common terminology for LPP
Linear programming is a mathematical concept used to determine the solution to
a linear problem. Typically, the goal of linear programming is to maximize or
minimize specified objectives, such as profit or cost. This process is known as
optimization. It relies upon three different concepts: variables, objectives, and
constraints.
"Linear programming is the analysis of problems in which a linear function of a
number of variables is to be maximized (or minimized) when those variables are
subject to a number of restraints in the form of linear inequalities."
Objective Function : The linear function which is to be optimized is called the
objective function. The objective function in a linear programming problem is the
real-valued function whose value is to be either minimized or maximized, subject
to the constraints defined on the given LPP, over the set of feasible solutions. The
objective function of an LPP is a linear function of the form
Z = a₁x₁ + a₂x₂ + a₃x₃ + … + aₙxₙ
Decision Variables : The variables involved in an LPP are called decision
variables, denoted as (x, y) or (x₁, x₂), etc. They refer to quantities such as
units, items produced or sold, time, etc.
Constraints : The constraints are limitations or restrictions on the decision
variables. They are expressed as linear equalities or inequalities, i.e. =, ≤, ≥.
Non-negative Constraints : This condition specifies that the value of a variable
being considered in the linear programming problem will never be negative. It
will be either zero or greater than zero but can never be less than zero. Thus it
is expressed in the form x ≥ 0 and y ≥ 0.
Feasible Solution : A feasible solution is a set of values for the decision
variables that satisfies all of the constraints in an optimization problem. The
set of all feasible solutions defines the feasible region of the problem. In the
graph, the overlapping region is called the feasible region.
Optimum Solution : An optimal solution to a linear program is the solution which
satisfies all constraints with the maximum or minimum objective function value.
10.3 Mathematical Formulation of L.P.P
To write the mathematical formulation of an L.P.P, the following steps are to be
remembered.
Step 1 : Identify the variables involved in the LPP (i.e. the decision variables)
and denote them as (x, y) or (x₁, x₂).
Step 2 : Identify the objective function and write it mathematically in terms of
the decision variables.
Step 3 : Identify the different constraints or restrictions and express them
mathematically.
Example 10.3.1 :
A bakery produces two types of cakes, I and II, using raw materials R₁ and R₂.
One cake of type I is produced by using 4 units of raw material R₁ and 6 units of
raw material R₂, and one cake of type II is produced by using 5 units of raw
material R₁ and 9 units of raw material R₂. There are 320 units of R₁ and 540
units of R₂ in stock. The profit per cake of type I and type II is Rs. 200 and
Rs. 250 respectively. How many cakes of type I and type II should be produced so
as to maximize the profit? Formulate the L.P.P.
Solution:
Let x be the number of cakes of type I and y the number of cakes of type II to be
produced to get the maximum profit.
Since the production value is never negative,
∴ x ≥ 0, y ≥ 0
These are the non-negative constraints.
The profit earned by selling 1 cake of type I is Rs. 200; hence the profit earned
by selling x cakes is Rs. 200x.
Similarly, the profit earned by selling 1 cake of type II is Rs. 250, and hence
the profit earned by selling y cakes is Rs. 250y.
∴ The profit earned is
Z = 200x + 250y
This is the objective function.
Now, after reading the given data carefully we can construct the following table:

Cake Type      Raw material required              Profit
               R₁ (units)/cake  R₂ (units)/cake
I                    4                6            200
II                   5                9            250
Availability       320              540             -
(units)

∴ According to the table,
1 cake of type I consumes 4 units of R₁, hence x cakes of type I will consume 4x
units of R₁, and one cake of type II consumes 5 units of R₁, hence y cakes of
type II will consume 5y units of R₁. But the maximum number of units available of
R₁ is 320. Hence, the constraint is
4x + 5y ≤ 320.
Similarly, 1 cake of type I consumes 6 units of R₂, hence x cakes of type I will
consume 6x units of R₂, and one cake of type II consumes 9 units of R₂, hence y
cakes of type II will consume 9y units of R₂. But the maximum number of units
available of R₂ is 540. Hence, the constraint is
6x + 9y ≤ 540.
Hence the mathematical formulation of the given L.P.P is:
Maximize Z = 200x + 250y
Subject to,
4x + 5y ≤ 320
6x + 9y ≤ 540
x ≥ 0, y ≥ 0
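Once formulated, a two-variable LPP like this can be solved by evaluating Z at the corner points of the feasible region, which is exactly what the graphical method of Section 10.4 does. A brute-force Python sketch (names and structure are ours):

```python
from itertools import combinations

# Constraints in the form p*x + q*y <= s, including x >= 0 and y >= 0.
cons = [(4, 5, 320), (6, 9, 540), (-1, 0, 0), (0, -1, 0)]

def feasible(pt, eps=1e-9):
    x, y = pt
    return all(p * x + q * y <= s + eps for p, q, s in cons)

# Candidate vertices: intersections of every pair of constraint boundaries.
vertices = []
for (p1, q1, s1), (p2, q2, s2) in combinations(cons, 2):
    det = p1 * q2 - p2 * q1
    if det:
        pt = ((s1 * q2 - s2 * q1) / det, (p1 * s2 - p2 * s1) / det)
        if feasible(pt):
            vertices.append(pt)

best = max(vertices, key=lambda v: 200 * v[0] + 250 * v[1])
z_max = 200 * best[0] + 250 * best[1]
```

Here z_max = 16000. Note that 200/4 = 250/5, so the objective is parallel to the edge 4x + 5y = 320 and both (30, 40) and (80, 0) attain the maximum.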
Example 10.3.2 :
A manufacturer produces ball pens and ink pens, each of which must be processed
through two machines A and B. Machine A has a maximum of 220 hours available and
machine B has a maximum of 280 hours available. Manufacturing an ink pen requires
6 hours on machine A and 3 hours on machine B. Manufacturing a ball pen requires
4 hours on machine A and 10 hours on machine B. If the profits are Rs. 55 per ink
pen and Rs. 75 per ball pen, formulate the LPP to have maximum profit.
Solution :
Let Rs. Z be the profit which can be made by manufacturing and selling, say, x
ink pens and y ball pens.
Here the variables x and y are the decision variables.
Since the profit per ink pen and ball pen is Rs. 55 and Rs. 75 respectively, and
we want to maximize Z, the objective function is
Max Z = 55x + 75y
We have to find x and y that maximize Z.
We can construct the following tabulation form of the given data:

Machine   Time in hours required for      Maximum available
          1 ink pen     1 ball pen        time in hours
A             6             4                  220
B             3            10                  280

An ink pen requires 6 hr on machine A and a ball pen requires 4 hr on machine A,
and the maximum available time of machine A is 220 hr.
1st constraint:
6x + 4y ≤ 220
Similarly, an ink pen requires 3 hr on machine B and a ball pen requires 10 hr on
machine B, and the maximum available time of machine B is 280 hr.
2nd constraint:
3x + 10y ≤ 280
Here the production of ball pens and ink pens cannot be negative:
we have the non-negative constraints x ≥ 0, y ≥ 0.
Hence the required formulation of the LPP is as follows:
Max Z = 55x + 75y
Subject to,
6x + 4y ≤ 220
3x + 10y ≤ 280
x ≥ 0, y ≥ 0
Example 10.3.3 :
In a workshop 2 models of agricultural tools are manufactured, A₁ and A₂. Each A₁
requires 6 hours of I processing and 3 hours of II processing. Model A₂ requires
2 hours of I processing and 4 hours of II processing. The workshop has 2 first-
processing machines and 4 second-processing machines; each machine in the I
processing unit works for 50 hrs a week and each machine in the II processing
unit works for 40 hrs a week. The workshop gets Rs. 10/- profit on A₁ and
Rs. 14/- on A₂ on the sale of each tool. Determine the maximum profit that the
workshop can get by allocating its production capacity to the production of the
two types of tools.
Solution:
Decision Variables :
1. Let the number of units of type A₁ model tools be x.
2. Let the number of units of type A₂ model tools be y.
Objective function :
The objective of the workshop is to obtain maximum profit by allocating its
production capacity between A₁ and A₂, with Rs. 10 per unit profit on model A₁
and Rs. 14 on model A₂.
∴ Z = 10x + 14y
Constraints :
1. Processing x tools of type A₁ and y tools of type A₂ requires 6x + 2y hrs in
the I processing unit.
2. Processing x tools of type A₁ and y tools of type A₂ requires 3x + 4y hrs in
the II processing unit.
Total machine hours available in the I processing unit: 2 × 50 = 100 hrs per week.
Total machine hours available in the II processing unit: 4 × 40 = 160 hrs per week.
Considering the time constraints, the constraint functions can be written in the
following way:
6x + 2y ≤ 100
3x + 4y ≤ 160
Non-negative constraint :
There is no possibility of negative production in the workshop.
∴ The non-negative constraints will be
x ≥ 0, y ≥ 0
The mathematical form of the production of the 2 types of tools in the workshop
to maximize profit under the given constraints is:
Maximize Z = 10x + 14y
Subject to,
6x + 2y ≤ 100
3x + 4y ≤ 160, x ≥ 0, y ≥ 0
Example 10.3.4 :
The diet for a sick person must contain at least 400 units of vitamins, 500 units
of minerals and 300 calories. Two foods F₁ and F₂ cost Rs. 2 and Rs. 4 per unit
respectively. Each unit of food F₁ contains 10 units of vitamins, 20 units of
minerals and 15 calories, whereas each unit of food F₂ contains 25 units of
vitamins, 10 units of minerals and 20 calories. Formulate the L.P.P. to satisfy
the sick person's requirements at minimum cost.
Solution :
After reading this carefully, the tabulation form of the given data is:

Micronutrients   F₁   F₂   Minimum units required
Vitamins         10   25        400
Minerals         20   10        500
Calories         15   20        300
Cost              2    4         -

Decision Variables :
Let x be the number of units of F₁ and y the number of units of F₂.
Objective function :
We have to minimize the cost of the diet; hence the objective function in terms
of the decision variables is
Minimize Z = 2x + 4y
Constraints :
First constraint :  10x + 25y ≥ 400
Second constraint : 20x + 10y ≥ 500
Third constraint :  15x + 20y ≥ 300
Non-negative constraint :
There is no possibility of a negative food quantity in the diet;
hence x ≥ 0, y ≥ 0.
The mathematical formulation of the LPP can be written as
Minimize Z = 2x + 4y
Subject to,
10x + 25y ≥ 400
20x + 10y ≥ 500
15x + 20y ≥ 300, x ≥ 0, y ≥ 0
Example 10.3.5 :
A garden shop wishes to prepare a supply of special fertilizer at minimal cost by
mixing two fertilizers, A and B. The mixture must contain: at least 45 units of
phosphate, at least 36 units of nitrate and at least 40 units of ammonium.
Fertilizer A costs the shop Rs. 0.97 per kg and fertilizer B costs the shop
Rs. 1.89 per kg. Fertilizer A contains 5 units of phosphate, 2 units of nitrate
and 2 units of ammonium; fertilizer B contains 3 units of phosphate, 3 units of
nitrate and 5 units of ammonium. How many kilograms of each fertilizer should
the shop use in order to minimize its cost?
Solution:
After reading this carefully, the tabulation form of the given data is:

Contains     Fertilizer type     Minimum units required
              A       B
Phosphate     5       3               45
Nitrate       2       3               36
Ammonium      2       5               40
Cost        0.97    1.89               -

Decision Variables :
Let x be the quantity of A and y the quantity of B.
Objective function :
We have to minimize the cost; hence the objective function in terms of the
decision variables is
Minimize Z = 0.97x + 1.89y
Constraints :
First constraint :  5x + 3y ≥ 45
Second constraint : 2x + 3y ≥ 36
Third constraint :  2x + 5y ≥ 40
Non-negative constraint :
There is no possibility of a negative supply of fertilizer;
hence x ≥ 0, y ≥ 0.
The mathematical formulation of the LPP can be written as
Minimize Z = 0.97x + 1.89y
Subject to,
5x + 3y ≥ 45
2x + 3y ≥ 36
2x + 5y ≥ 40,
x ≥ 0, y ≥ 0
Example 10.3.6 :
A printing company prints two types of magazines, A and B. The company earns
Rs. 25 and Rs. 35 on each copy of magazines A and B respectively. The magazines
are processed on three machines. Magazine A requires 2 hours on machine I,
4 hours on machine II and 2 hours on machine III. Magazine B requires 3 hours on
machine I, 5 hours on machine II and 3 hours on machine III. Machines I, II and
III are available for 35, 50 and 70 hours per week respectively. Formulate the
L.P.P. so as to maximize the total profit of the company.
Solution :
Tabulation form of the given data:

Machine    Magazine      Maximum availability
           A      B
I          2      3             35
II         4      5             50
III        2      3             70

Decision Variables :
Let x be the number of copies of magazine A and y the number of copies of
magazine B to be printed to get the maximum profit.
Objective function :
We have to maximize the profit; hence the objective function in terms of the
decision variables is
Maximize Z = 25x + 35y
Constraints :
First constraint :  2x + 3y ≤ 35
Second constraint : 4x + 5y ≤ 50
Third constraint :  2x + 3y ≤ 70
Non-negative constraint :
There is no possibility of negative production of magazines.
∴ The non-negative constraints will be
x ≥ 0, y ≥ 0
The mathematical formulation of the LPP can be written as
Maximize Z = 25x + 35y
Subject to,
2x + 3y ≤ 35
4x + 5y ≤ 50
2x + 3y ≤ 70
x ≥ 0, y ≥ 0
Example 10..7 :
A food processing and distributing unit has 3 production units A, B, C in three different parts of a city. It has five retail outlets in the city, P, Q, R, S and T, to which the food products are transported regularly. The total stock available at the production units is 500 units, distributed as follows: A 200 units, B 120 units and C 180 units. The requirements at the retail outlets are: P 125, Q 150, R 100, S 50, T 75. The cost of transportation of products from the different production centres to the different retail outlets is as follows:

          P    Q    R    S    T
A         2   12    8    5    6
B         6   10   10    2    5
C        12   18   20    8    9

How can the industry minimize the cost of transportation of products? Formulate the linear programming problem.
Solution :
The objective of the industry is to minimize the total cost of transportation.
Let Z be the objective function.
Let xij denote the number of units transported from production centre i (i = 1, 2, 3 for A, B, C) to retail outlet j (j = 1, 2, 3, 4, 5 for P, Q, R, S, T).
Tabulation of the given data:

Production centre    P     Q     R     S     T    Units that can be supplied
A                    2    12     8     5     6    200
B                    6    10    10     2     5    120
C                   12    18    20     8     9    180
Units of demand    125   150   100    50    75    500
Objective function :
The objective is to minimize the cost.
Minimize Z = 2x11 + 12x12 + 8x13 + 5x14 + 6x15 + 6x21 + 10x22 + 10x23 + 2x24 + 5x25 + 12x31 + 18x32 + 20x33 + 8x34 + 9x35
Constraint Function :
Supply constraints:
In this problem the requirement at the retail outlets equals the supply at the production units, therefore the supply constraints will be
A : x11 + x12 + x13 + x14 + x15 = 200
B : x21 + x22 + x23 + x24 + x25 = 120
C : x31 + x32 + x33 + x34 + x35 = 180


Demand Constraints :
P : x11 + x21 + x31 = 125
Q : x12 + x22 + x32 = 150
R : x13 + x23 + x33 = 100
S : x14 + x24 + x34 = 50
T : x15 + x25 + x35 = 75
Non-negativity constraints :
xij ≥ 0 for all i = 1, 2, 3 and j = 1, 2, 3, 4, 5
10.4 Graphical Method
In the graphical method, the inequality constraints are treated as equalities. Each equality constraint is drawn on graph paper, forming a straight line; one line is drawn for each constraint. The region which satisfies all the inequalities is then located; this region is known as the feasible region. A solution determined with regard to this region is called a feasible solution. According to Hadley, if an optimum (maximum or minimum) value of a linear programming problem exists, then it must correspond to one of the corner points of the feasible region. The feasible region corresponding to a linear programming problem can be located by constructing the graph as given below.
Steps for solving an L.P.P. graphically
1. Formulate the mathematical linear programming problem. There will be 2 variables, x and y.
2. Since both x and y are non-negative, the graphic solution will be restricted to the first quadrant.
3. Choose an appropriate scale for the x and y axes.
4. Each inequality in the constraint equations can be written as an equality. Example: x + y ≤ 70 becomes x + y = 70.
5. Give an arbitrary value to one variable and get the value of the other variable by solving the equation. Similarly, give another arbitrary value and find the corresponding value of the other variable. Example: for x + y = 7, at any point on the x-axis y is 0, so x = 7; at any point on the y-axis x is 0, so y = 7. Thus (7, 0) and (0, 7) are points that satisfy the equation.
6. Now plot these two sets of values and connect the points by a straight line. This divides the first quadrant into two parts. Since the constraint is an inequality, one of the two sides satisfies it.
7. Repeat these steps for every constraint stated in the linear programming problem.
8. There forms a common area called the feasible area.
9. For greater-than or greater-than-or-equal-to constraints, the feasible region is the area which lies above the constraint lines.
10. For less-than or less-than-or-equal-to constraints, the area is below these lines.
Example 10.4.1 :
Solve the following linear programming problem by the graphical method.
Maximize Z = 5x + 6y
Subject to, 2x + 4y ≤ 16
3x + y ≤ 12
3x + 3y ≤ 24, x ≥ 0, y ≥ 0
Solution :
By converting the inequality equations to equality equations we get:
2x + 4y = 16 ...(equation 1)
3x + y = 12 ...(equation 2)
3x + 3y = 24 ...(equation 3)
Equation (1): 2x + 4y = 16. When x = 0, y = 16/4 = 4; when y = 0, x = 16/2 = 8.

x   0   8
y   4   0

Equation (2): 3x + y = 12. When x = 0, y = 12; when y = 0, x = 12/3 = 4.

x   0   4
y  12   0
Equation (3): 3x + 3y = 24. When x = 0, y = 24/3 = 8; when y = 0, x = 24/3 = 8.

x   0   8
y   8   0

From the above coordinates we can draw the straight lines to obtain the feasible area.

Thus the feasible region is OABC,
where A(0, 4), B(3.2, 2.4) and C(4, 0).
Consider Z = 5x + 6y:
at A(0, 4), Z = 5(0) + 6(4) = 24
at B(3.2, 2.4), Z = 5(3.2) + 6(2.4) = 30.4
at C(4, 0), Z = 5(4) + 6(0) = 20
Thus Z is maximum at point B; the solution is x = 3.2 and y = 2.4.
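The corner-point search used above can be automated with a small brute-force sketch (illustrative Python, not part of the original text): intersect every pair of constraint boundary lines, discard the infeasible intersection points, and evaluate Z at the rest.

```python
from itertools import combinations

def corner_points(lines):
    """Intersect every pair of boundary lines a*x + b*y = c (Cramer's rule)."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) > 1e-12:                  # skip parallel lines
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            pts.append((x, y))
    return pts

def feasible(pt, constraints):
    """constraints: (a, b, c, sense) meaning a*x + b*y <sense> c."""
    x, y = pt
    for a, b, c, sense in constraints:
        v = a * x + b * y
        if sense == "<=" and v > c + 1e-9:
            return False
        if sense == ">=" and v < c - 1e-9:
            return False
    return True

# Example 10.4.1: maximize Z = 5x + 6y
cons = [(2, 4, 16, "<="), (3, 1, 12, "<="), (3, 3, 24, "<="),
        (1, 0, 0, ">="), (0, 1, 0, ">=")]
lines = [(a, b, c) for a, b, c, _ in cons]
corners = [p for p in corner_points(lines) if feasible(p, cons)]
best = max(corners, key=lambda p: 5 * p[0] + 6 * p[1])
print(round(best[0], 2), round(best[1], 2))   # -> 3.2 2.4
```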
Example 10.4.2 :
Solve the LPP graphically: Maximize Z = 10x + 5y
Subject to, x + y ≤ 5,
2x + y ≤ 6,
x ≥ 0, y ≥ 0
Solution : Converting the given constraints into equations (or ignoring the inequality to find the co-ordinates for sketching the graph) we get,
x + y = 5 --- Equation (1)
2x + y = 6 --- Equation (2)
Consider equation (1): x + y = 5. When x = 0, y = 5; when y = 0, x = 5.

x   0   5
y   5   0

Plot (0, 5) and (5, 0) on the graph. Join the points by a straight line and shade the region represented by x + y ≤ 5.
Then consider equation (2): 2x + y = 6. When x = 0, y = 6; when y = 0, x = 3.

x   0   3
y   6   0

Plot (0, 6) and (3, 0) on the graph. Join the points by a straight line and shade the region represented by 2x + y ≤ 6.

From the graph, ABCD is the feasible region, where D(0, 0), A(0, 5), B(1, 4), C(3, 0).
Consider the value of Z at the corner points of the feasible region.
Corner point   Z = 10x + 5y
D               0
A              25
B              30
C              30
∴ Here Z attains its maximum value at two points, B and C.
In this case any point which lies on the segment BC is also an optimal solution.
∴ There are infinitely many optimal solutions.
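The tie between B and C, and the fact that every point on segment BC is also optimal, can be checked numerically. This is an illustrative Python sketch, not part of the original text; the midpoint of BC is just one sample point on the segment.

```python
# Z at each corner of the feasible region found above
Z = lambda x, y: 10 * x + 5 * y
corners = {"D": (0, 0), "A": (0, 5), "B": (1, 4), "C": (3, 0)}
values = {name: Z(*pt) for name, pt in corners.items()}
print(values)                      # B and C tie at Z = 30

# any point on segment BC, e.g. its midpoint, gives the same optimum
mid = ((1 + 3) / 2, (4 + 0) / 2)   # (2.0, 2.0)
print(Z(*mid))                     # -> 30
```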
Example 10.4.3 :
Solve the following LPP graphically:
Min Z = 2x + 3y
Subject to, x + y ≤ 3
2x + 2y ≥ 10
x ≥ 0, y ≥ 0
Solution :
Converting the given constraints into equations (or ignoring the inequality to find the co-ordinates for sketching the graph) we get,
x + y = 3 --- Equation (1)
2x + 2y = 10 --- Equation (2)
Consider equation (1): x + y = 3. When x = 0, y = 3; when y = 0, x = 3.

x   0   3
y   3   0

Plot (0, 3) and (3, 0) on the graph. Join the points by a straight line and shade the region represented by x + y ≤ 3.
Then consider equation (2): 2x + 2y = 10. When x = 0, y = 5; when y = 0, x = 5.

x   0   5
y   5   0


Plot (0, 5) and (5, 0) on the graph. Join the points by a straight line and shade the region represented by 2x + 2y ≥ 10.

∴ This LPP does not have a common feasible region and hence no optimum solution.
Example 10.4.4 :
Consider a calculator company which produces a scientific calculator and a graphing calculator. Long-term projections indicate an expected demand of at least 1000 scientific and 800 graphing calculators each month. Because of limitations on production capacity, no more than 2000 scientific and 1700 graphing calculators can be made monthly. To satisfy a supply contract, a total of at least 2000 calculators must be supplied each month. If each scientific calculator sold results in Rs. 120 profit and each graphing calculator sold produces Rs. 150 profit, how many of each type of calculator should be made monthly to maximize the net profit?
Solution :
Decision variables
Let x be the number of scientific calculators produced and y the number of graphing calculators produced.
Objective function
We have to find the maximum profit. Hence the objective function in terms of the decision variables is,
Maximize Z = 120x + 150y
Constraints
at least 1000 scientific calculators : x ≥ 1000
at least 800 graphing calculators : y ≥ 800
no more than 2000 scientific calculators : x ≤ 2000
no more than 1700 graphing calculators : y ≤ 1700
a total of at least 2000 calculators : x + y ≥ 2000
and non-negativity constraints : x ≥ 0, y ≥ 0
Hence the formulation of the LPP can be written as
Maximize Z = 120x + 150y
Subject to, 1000 ≤ x ≤ 2000
800 ≤ y ≤ 1700
x + y ≥ 2000
x ≥ 0, y ≥ 0
Consider x + y ≥ 2000.
If x = 0 then y = 2000, and when y = 0 then x = 2000.
∴ The points are A(0, 2000) and B(2000, 0).


From the graph, CDEFG is the feasible region.
To find point G, consider x = 1000 and x + y = 2000,
which gives y = 1000. Point G is (1000, 1000).
To find point F, substitute y = 800 in x + y = 2000,
which gives x = 1200. Point F is (1200, 800).
Here CDEFG is the feasible region,
where C(1000, 1700), D(2000, 1700), E(2000, 800), F(1200, 800) and G(1000, 1000).
Now, Z = 120x + 150y:
at C(1000, 1700), Z = 375000
at D(2000, 1700), Z = 495000
at E(2000, 800), Z = 360000
at F(1200, 800), Z = 264000
at G(1000, 1000), Z = 270000
∴ Z is maximum at point D(2000, 1700).
∴ The maximum value of 120x + 150y is 495000 at (2000, 1700).
∴ 2000 scientific and 1700 graphing calculators should be made monthly to maximize the net profit.
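The corner-point evaluation for this example is easy to verify mechanically. The sketch below is an illustrative Python check (not part of the original text) that evaluates the profit function at the five corners read off the graph:

```python
Z = lambda x, y: 120 * x + 150 * y
corners = {"C": (1000, 1700), "D": (2000, 1700), "E": (2000, 800),
           "F": (1200, 800), "G": (1000, 1000)}
values = {name: Z(*pt) for name, pt in corners.items()}
best = max(values, key=values.get)
print(best, values[best])   # -> D 495000
```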
Example 10.4.5 :
Solve the following LPP graphically:
Minimize Z = 25x + 10y
Subject to, 10x + 2y ≥ 20
x + 2y ≥ 6, x ≥ 0, y ≥ 0
Solution :
Consider the equation 10x + 2y = 20: when x = 0, y = 10 and when y = 0, x = 2.
Consider the second equation x + 2y = 6: when x = 0, y = 3 and when y = 0, x = 6.


Plotting the graph:

From the graph we get the feasible region with corner points A(0, 10), B(14/9, 20/9) ≈ (1.56, 2.22) and C(6, 0).
To minimize Z = 25x + 10y, evaluate Z at each corner point:

Corner point   Z = 25x + 10y
A              100
B              550/9 ≈ 61.11
C              150

∴ The minimum is Z = 550/9 ≈ 61.11 at point B, i.e. x = 14/9 ≈ 1.56 and y = 20/9 ≈ 2.22.
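The coordinates of B can be computed exactly rather than read off the graph. This is an illustrative Python sketch, not part of the original text; it solves the two binding constraints as simultaneous equations:

```python
# B is the intersection of 10x + 2y = 20 and x + 2y = 6;
# subtracting the equations eliminates y: 9x = 14
x = 14 / 9
y = (6 - x) / 2
Z = 25 * x + 10 * y
print(round(x, 2), round(y, 2), round(Z, 2))   # -> 1.56 2.22 61.11
```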
10.5 Summary
• The objective of this course is to present linear programming. Activities allow training in modeling practical problems as linear programs. Various tests help in understanding how to solve a linear program.
• Linear programming is a mathematical technique for solving constrained maximization and minimization problems when there are many constraints and the objective function to be optimized, as well as the constraints faced, are linear (i.e., can be represented by straight lines). Linear programming has been applied to a wide variety of constrained optimization problems. Some of these are: the selection of the optimal production process to use to produce a product, the optimal product mix to produce, the least-cost input combination to satisfy some minimum product requirement, the marginal contribution to profits of the various inputs, and many others.
10.6 References
The following books are recommended for further reading:
• Numerical Methods for Engineers - Steven C. Chapra, Raymond P. Canale; Tata McGraw Hill.
• Quantitative Techniques - Dr. C. Satyadevi; S. Chand.
10.7 Exercise
Q. (1) A firm manufactures headache pills in two sizes, A and B. Size A contains 3 grains of aspirin, 6 grains of bicarbonate and 2 grains of codeine. Size B contains 2 grains of aspirin, 8 grains of bicarbonate and 10 grains of codeine. It is found by users that at least 15 grains of aspirin, 86 grains of bicarbonate and 28 grains of codeine are required to provide immediate effect. It is required to determine the least number of pills a patient should take to get immediate relief. Formulate the problem as a standard LPP.
Q. (2) An animal feed company must produce 200 lbs of a mixture containing the ingredients A and B. A costs Rs. 5 per lb and B costs Rs. 10 per lb. Not more than 100 lbs of A can be used and the minimum quantity of B to be used is 80 lbs. Find how much of each ingredient should be used if the company wants to minimize the cost. Formulate the problem as a standard LPP.
Q. (3) A painter makes two paintings, A and B. He spends 1 hour on drawing and 3 hours on coloring painting A, and 3 hours on drawing and 1 hour on coloring painting B. He can spend at most 8 hours and 9 hours on drawing and coloring respectively. The profit per painting of type A is Rs. 4000 and that of type B is Rs. 5000. Formulate an LPP to maximize the profit.
Q. (4) A gardener wanted to prepare a pesticide using two solutions A and B. The cost of 1 liter of solution A is Rs. 2 and the cost of 1 liter of solution B is Rs. 3. He wanted to prepare at least 20 liters of pesticide. The quantity of solution A available in a shop is 12 liters and of solution B is 15 liters. How many liters of pesticide should the gardener prepare so as to minimize the cost? Formulate the LPP.
Q. (5) A bakery produces two kinds of biscuits, A and B, using the same ingredients L1 and L2. The ingredients L1 and L2 in biscuits of type A are in the ratio 4:1 and in biscuits of type B are in the ratio 9:1. The profit for biscuits of types A and B is Rs. 8 per kg and Rs. 10 per kg respectively. The bakery has 90 kg of L1 and 20 kg of L2 in stock. How many kg of biscuits A and B should be produced to maximize the total profit?


Q. (6) Solve graphically the following LPP
Maximize Z = 2x + 3y
subject to, x + 2y ≥ 6
2x − 5y ≤ 1, x ≥ 0, y ≥ 0
Q. (7) Solve graphically the following LPP
Maximize Z = 2x + 5y
subject to, 4x + 2y ≤ 80
2x + 5y ≤ 180, x ≥ 0, y ≥ 0
Q. (8) Solve graphically the following LPP
Maximize Z = 2x + y
subject to, 4x − y ≤ 3
2x + 5y ≤ 7, x ≥ 0, y ≥ 0
Q. (9) Solve graphically the following LPP
Maximize Z = 400x + 500y
subject to, x + 3y ≤ 8
3x + y ≤ 9, x ≥ 0, y ≥ 0
Q. (10) Solve graphically the following LPP
Maximize Z = 5000x + 4000y
subject to, 6x + 4y ≤ 24
x + 2y ≤ 6
−x + y ≤ 1
y ≤ 2, x ≥ 0, y ≥ 0
Q. (11) Solve graphically the following LPP
Minimize Z = 30x + 50y
subject to, 3x + 4y ≥ 300
x + 3y ≥ 210, x ≥ 0, y ≥ 0
UNIT 5
11
RANDOM VARIABLES
Unit Structure
11.0 Objectives
11.1 Introduction
11.2 Random Variable (R.V.) – Discrete and Continuous
11.2.1 Discrete Random Variable:
11.2.2 Continuous Random Variable:
11.2.3 Distinction between continuous random variable and discrete random variable:
11.3 Probability Distributions of Discrete Random Variable
11.3.1 Probability Mass Function (p.m.f.)
11.4 Probability Distributions of Continuous Random Variable
11.4.1 Probability Density Function (p.d.f.)
11.5 Properties of Random variable and their probability distributions
11.6 Cumulative Distribution Function (c.d.f.)
11.6.1 Cumulative Distribution Function (c.d.f.) for Discrete Random Variable
11.6.2 Cumulative Distribution Function (c.d.f.) for Continuous Random Variable
11.7 Properties of Cumulative Distribution Function (c.d.f.)
11.8 Expectation or Expected Value of Random Variable
11.8.1 Expected Value of a Discrete Random Variable
11.8.2 Expected Value of a Continuous Random Variable
11.8.3 Properties of Expectation
11.9 Expectation of a Function
11.10 Variance of a Random Variable
11.10.1 Properties of Variance
11.11 Summary
11.12 Reference
11.13 Unit End Exercise


11.0 Objectives
After going through this unit, you will be able to:
• Understand the concept of random variables as a function from the sample space to the real line
• Understand the concept of the probability distribution function of a discrete random variable
• Calculate the probabilities for a discrete random variable
• Understand the probability distribution of a random variable
• Understand the probability mass function and the probability density function
• Understand the properties of random variables
• Become familiar with the concept of a random variable
• Understand the concept of the cumulative distribution function
• Understand the expected value and variance of a random variable with its properties

Andrey Nikolaevich Kolmogorov (Russian; 25 April 1903 – 20 October 1987) advanced many scientific fields. In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. A stochastic process may involve several related random variables. A quotation attributed to Kolmogorov is: “Every mathematician believes that he is ahead over all others. The reason why they don't say this in public is because they are intelligent people.”

11.1 Introduction
Many problems are concerned with values associated with outcomes of random experiments. For example, we select five items randomly from a basket with
a known proportion of defectives and put them in a packet. Now we want to
know the probability that the packet contains more than one defective. The study of such problems requires the concept of a random variable. A random variable is a function that associates numerical values to the outcomes of experiments.
In this chapter, we consider discrete random variables, that is, random variables which can take either finite or countably infinite values. When a random variable X is discrete, we can assign a positive probability to each value that X can take and determine the probability distribution for X. The sum of all the probabilities associated with the different values of X is one. However, not all experiments result in random variables that are discrete. There also exist random variables such as height, weight, length of life of an electric component, time that a bus arrives at a specified stop, or experimental laboratory error. Such random variables can assume infinitely many values in some interval on the real line. Such variables are called continuous random variables. If we try to assign a positive probability to each of these uncountably many values, the probabilities will no longer sum to one as was the case with a discrete random variable.
11.2 Random Variable (R.V.) – Discrete and Continuous
11.2.1 Discrete Random Variable:
The sample space S or Ω contains non-numeric elements. For example, in an experiment of tossing a coin, Ω = { H, T }. However, in practice it is easier to deal with numerical outcomes, so we associate a real number with each outcome. For instance, we may call H as 1 and T as 0. Whenever we do this, we are dealing with a function whose domain is the sample space Ω and whose range is the set of real numbers. Such a function is called a random variable.
Definition 1 : Random variable : Let S / Ω be the sample space corresponding to the outcomes of a random experiment. A function X : S → R (where R is the set of real numbers) is called a random variable.
A random variable is a real-valued mapping. A function can be either a one-to-one or a many-to-one correspondence. A random variable assigns a real number to each possible outcome of an experiment. A random variable is a function from the sample space of a random experiment (domain) to the set of real numbers (co-domain).
Note: Random variables are denoted by capital letters X, Y, Z etc., whereas the values taken by them are denoted by the corresponding small letters x, y, z etc.


Definition 2 : Discrete random variable : A random variable X is said to be discrete if it takes a finite or countably infinite number of possible values. Thus a discrete random variable takes only isolated values.
Example 1:
What values would a random variable X take if it were defined as the number of tails when two coins are tossed simultaneously?
Solution:
The sample space of the experiment (tossing two coins simultaneously) is
S / Ω = { TT, TH, HT, HH }
Let X be the number of tails obtained on tossing two coins.
X : Ω → R
X(TT) = 2, X(TH) = 1, X(HT) = 1, X(HH) = 0
Since the random variable X is the number of tails, it takes the three distinct values { 0, 1, 2 }.
Remark: Several random variables can be defined on the same sample space Ω. For example, in Example 1 one can define Y = number of heads or Z = difference between the number of heads and the number of tails.
Following are some examples of discrete variables:
a) Number of days of rainfall in Mumbai.
b) Number of patients cured by using a certain drug during a pandemic.
c) Number of attempts required to pass the exam.
d) Number of accidents on a sea link road.
e) Number of customers arriving at a shop.
f) Number of students attending class.
Definition 3 : Let X be a discrete random variable defined on a sample space Ω. Since Ω contains either finitely or countably infinitely many elements, and X is a function on Ω, X can take either finite or countably infinite values. Suppose X takes values x1, x2, x3, …; then the set { x1, x2, x3, … } is called the range set of X.
In Example 1, the range set of X = number of tails is { 0, 1, 2 }.
11.2.2 Continuous Random Variable:
A sample space which is finite or countably infinite is called denumerable or countable. If the sample space is not countable then it is called continuous. For a continuous sample space Ω we cannot have a one-to-one correspondence between Ω and the set of natural numbers { 1, 2, … }.
A random variable could also be such that its set of possible values is uncountable. Examples of such random variables are the time between arrivals of two vehicles at a petrol pump or the life in hours of an electrical component.
In general we define a random variable X(ω) as a real-valued function on the domain Ω. If the range set of X(ω) is continuous, the random variable is continuous. The range set will be a subset of the real line.
Following are some examples of continuous random variables:
a) Daily rainfall in mm at a particular place.
b) Time taken for an angiography operation at a hospital.
c) Weight of a person in kg.
d) Height of a person in cm.
e) Instrumental error (measured in suitable units) in a measurement.
11.2.3 Distinction between continuous random variable and discrete random variable:
1) A continuous random variable takes all possible values in its range set, which is in the form of an interval. On the other hand, a discrete random variable takes only specific or isolated values.
2) Since a continuous random variable takes uncountably infinite values, no probability mass can be attached to a particular value of the random variable X. Therefore, P(X = x) = 0 for all x. However, in the case of a discrete random variable, probability mass is attached to the individual values taken by the random variable. In the case of a continuous random variable, probability is attached to an interval, which is a subset of R.
11.3 Probability Distributions of Discrete Random Variable
Each outcome i of an experiment has a probability P(i) associated with it. Similarly, every value xi of the random variable X is related to an outcome i of the experiment. Hence, for every value xi of the random variable, we have a unique real value P(i) associated with it. Thus, every random variable X has a probability P associated with it. This function P(X = xi) from the set of all events of the sample space Ω is called the probability distribution of the random variable.
The probability distribution (or simply distribution) of a random variable X on a sample space Ω is the set of pairs (X = xi, P(X = xi)) for all xi ∈ X(Ω), where P(X = xi) is the probability that X takes the value xi.
Consider the experiment of tossing two unbiased coins simultaneously, with X = number of tails observed. Then the range set of X = { 0, 1, 2 }. Although we cannot predict in advance what value X will take, we can certainly state the probabilities with which X will take the three values 0, 1, 2. The following table helps to determine such probabilities.

Outcome   Probability of outcome   Value of X
HH                1/4                  0
TH                1/4                  1
HT                1/4                  1
TT                1/4                  2

The following events are associated with the distinct values of X:
(X = 0) ⇒ { HH }
(X = 1) ⇒ { TH, HT }
(X = 2) ⇒ { TT }
Therefore, the probabilities of the various values of X are nothing but the probabilities of the events with which the respective values are associated:
P(X = 0) = P{ HH } = 1/4
P(X = 1) = P{ TH, HT } = 1/4 + 1/4 = 1/2
P(X = 2) = P{ TT } = 1/4
Example 2:
A random variable is the number of tails when a coin is flipped thrice. Find the probability distribution of the random variable.
Solution:
Sample space Ω = { HHH, THH, HTH, HHT, TTH, THT, HTT, TTT }

The required probability distribution is

Value of random variable X = xi    0     1     2     3
Probability P(X = xi)             1/8   3/8   3/8   1/8
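This distribution can be obtained by enumerating all eight equally likely outcomes. The Python sketch below is illustrative and not part of the original text:

```python
from itertools import product
from fractions import Fraction

counts = {}
for outcome in product("HT", repeat=3):       # all 8 equally likely outcomes
    tails = outcome.count("T")
    counts[tails] = counts.get(tails, 0) + 1

pmf = {k: Fraction(v, 8) for k, v in sorted(counts.items())}
print(pmf)   # tails: 0 -> 1/8, 1 -> 3/8, 2 -> 3/8, 3 -> 1/8
```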
Example 3:
A random variable is the sum of the numbers that appear when a pair of dice is rolled. Find the probability distribution of the random variable.
Solution:
X(1,1) = 2, X(1,2) = X(2,1) = 3, X(1,3) = X(3,1) = X(2,2) = 4, etc.
The probability distribution is

X = xi        2     3     4     5     6     7     8     9    10    11    12
P(X = xi)   1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
11.3.1 Probability Mass Function (p.m.f.)
Let X be a discrete random variable defined on a sample space Ω / S. Suppose { x1, x2, x3, …, xn } is the range set of X. With each xi we assign a number P(xi) = P(X = xi), called the probability mass function (p.m.f.), such that
P(xi) ≥ 0 for i = 1, 2, 3, …, n AND
P(x1) + P(x2) + … + P(xn) = 1
The table containing the values of X along with the probabilities given by the probability mass function (p.m.f.) is called the probability distribution of the random variable X.
For example,

X = xi       x1   x2   …   xi   …   xn   Total
P(X = xi)    P1   P2   …   Pi   …   Pn    1

Remark: Properties of a random variable can be studied solely in terms of its p.m.f. We need not refer to the underlying sample space Ω once we have the probability distribution of the random variable.


Example 4:
A fair die is rolled and the number on the uppermost face is noted. Find its probability distribution (p.m.f.).
Solution:
X = number on the uppermost face.
Therefore, the range set of X = { 1, 2, 3, 4, 5, 6 }.
The probability of each element is 1/6.
The probability distribution of X is

X = xi       1    2    3    4    5    6    Total
P(X = xi)   1/6  1/6  1/6  1/6  1/6  1/6     1

Example 5:
A pair of fair dice is thrown and the sum of the numbers on the uppermost faces is noted. Find its probability distribution (p.m.f.).
Solution:
X = sum of the numbers on the uppermost faces.
Ω contains 36 elements (ordered pairs).
Range set of X = { 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 },
since X(1, 1) = 2 and X(6, 6) = 12.

Value of X   Subset of Ω                                      Pi = P(X = i)
2            { (1,1) }                                        1/36
3            { (1,2), (2,1) }                                 2/36
4            { (1,3), (2,2), (3,1) }                          3/36
5            { (1,4), (2,3), (3,2), (4,1) }                   4/36
6            { (1,5), (2,4), (3,3), (4,2), (5,1) }            5/36
7            { (1,6), (2,5), (3,4), (4,3), (5,2), (6,1) }     6/36
8            { (2,6), (3,5), (4,4), (5,3), (6,2) }            5/36
9            { (3,6), (4,5), (5,4), (6,3) }                   4/36
10           { (4,6), (5,5), (6,4) }                          3/36
11           { (5,6), (6,5) }                                 2/36
12           { (6,6) }                                        1/36


Example 6:
Let X represent the difference between the number of heads and the number of tails obtained when a fair coin is tossed three times. What are the possible values of X and its p.m.f.?
Solution:
The coin is fair. Therefore, the probability of heads in each toss is P(H) = 1/2. Similarly, the probability of tails in each toss is P(T) = 1/2.
X can take the values n − 2r where n = 3 and r = 0, 1, 2, 3,
e.g. X = 3 (HHH), X = 1 (HHT, HTH, THH), X = −1 (HTT, THT, TTH) and X = −3 (TTT).
Thus the probability distribution of X (possible values of X and p.m.f.) is

X = xi              −3    −1     1     3    Total
p.m.f. P(X = xi)   1/8   3/8   3/8   1/8     1
Let X represent the difference between the number of heads and the number of tails obtained when a coin is tossed n times. What are the possible values of X?
Solution:
When a coin is tossed n times, the number of heads that can be obtained is n, n−1, n−2, …, 2, 1, 0. The corresponding numbers of tails are 0, 1, 2, …, n−2, n−1, n. Thus the sum of the number of heads and the number of tails must equal the number of trials n.
Hence, the values of X range from n to −n as n, n−2, n−4, …, n−2r, where r = 0, 1, 2, 3, …, n.
Note that if n is even, X has zero as one of its values. However, if n is odd, X takes the values −1 and 1 but not zero.
11.4 Probability Distributions of Continuous Random Variable
In the case of a discrete random variable, the p.m.f. gives the probability distribution of the random variable; however, in the case of a continuous random variable, probability mass is not attached to any particular value. It is attached to an interval. The probability attached to an interval depends upon its location.


For example, P(a < X < b) varies for different values of a and b. In other words, it will not be uniform. In order to obtain the probability associated with any interval, we need to take into account the concept of probability density.
11.4.1 Probability Density Function (p.d.f.)
Let X be a continuous random variable. A function f(x) defined for all real x ∈ (−∞, ∞) is called the probability density function (p.d.f.) if for any set B of real numbers
P{ X ∈ B } = ∫_B f(x) dx
All probability statements about X can be answered in terms of f(x). Thus,
P{ a ≤ X ≤ b } = ∫_a^b f(x) dx
Note that the probability of a continuous random variable at any particular value is zero, since
P{ X = a } = P{ a ≤ X ≤ a } = ∫_a^a f(x) dx = 0
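Both facts — total probability one, and zero probability at any single point — can be seen numerically. The Python sketch below is illustrative only; the exponential density f(x) = e^(−x) for x ≥ 0 is a standard example chosen here, not one from the text:

```python
import math

# illustrative density (not from the text): exponential with rate 1
f = lambda x: math.exp(-x)

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(integrate(f, 0.0, 50.0), 4))   # total probability, approx 1.0
print(integrate(f, 2.0, 2.0))              # P{X = 2}: a zero-width integral is 0.0
```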
11.5 Properties of Random Variables and their Probability Distributions
Properties of a random variable can be studied solely in terms of its p.m.f. or p.d.f. We need not refer to the underlying sample space Ω once we have the probability distribution of the random variable. Since the random variable is a function relating all outcomes of a random experiment, the probability distribution of a random variable must satisfy the axioms of probability. These, in the cases of discrete and continuous random variables, are stated as follows.
Axiom I : Any probability must be between zero and one.
For a discrete random variable : 0 ≤ p(xi) ≤ 1
For a continuous random variable : for any real numbers a and b,
0 ≤ P{ a ≤ X ≤ b } ≤ 1, i.e. 0 ≤ ∫_a^b f(x) dx ≤ 1
Axiom II : The total probability of the sample space must be one.
For a discrete random variable : Σ_{i=1}^{∞} p(xi) = 1
For a continuous random variable : ∫_{−∞}^{∞} f(x) dx = 1
Axiom III : For any sequence of mutually exclusive events E1, E2, E3, … (i.e. Ei ∩ Ej = Φ for i ≠ j), the probability of the union of the events is the sum of their individual probabilities. This axiom can also be written as P(E1 ∪ E2) = P(E1) + P(E2), where E1 and E2 are mutually exclusive events.
For a discrete random variable : P(∪_{i=1}^{∞} Ei) = Σ_{i=1}^{∞} P(Ei)
For a continuous random variable : ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx for a ≤ c ≤ b; also, for disjoint intervals,
P(a ≤ X ≤ b ∪ c ≤ X ≤ d) = ∫_a^b f(x) dx + ∫_c^d f(x) dx
Axiom IV : P(Φ) = 0
11.6 Cumulative Distribution Function (c.d.f.)
It is also termed simply the distribution function. It is the accumulated value of probability up to a given value of the random variable. Let X be a random variable; then the cumulative distribution function (c.d.f.) is defined as a function F(a) such that
F(a) = P{ X ≤ a }
11.6.1 Cumulative Distribution Function (c.d.f.) for Discrete Random Variable
Let X be a discrete random variable defined on a sample space S, taking values { x1, x2, …, xn } with probabilities p(x1), p(x2), …, p(xn) respectively. Then the cumulative distribution function (c.d.f.), denoted F(a), is expressed in terms of the p.m.f. as
F(a) = Σ_{xi ≤ a} p(xi)
Note:
1. The c.d.f. is defined for all values of x ∈ R. However, since the random variable takes only isolated values, the c.d.f. is constant between two successive values of X and has steps at the points xi, i = 1, 2, …, n. Thus, the c.d.f. for a discrete random variable is a step function.
2. F(∞) = 1 and F(−∞) = 0


Properties of a random variable can be studied solely in terms of its c.d.f. We need not refer to the underlying sample space or p.m.f. once we have the c.d.f. of a random variable.
11.6.2 Cumulative Distribution Function (c.d.f.) for Continuous Random
Variable
Let X be a continuous random variable defined on a sample space S with p.d.f.
f(x). Then the cumulative distribution function (c.d.f.), denoted F(a), is expressed in terms of
the p.d.f. as,
F(a) = ∫_{-∞}^{a} f(x) dx
Also, differentiating both sides we get,
(d/da) F(a) = f(a); thus the density is the derivative of the cumulative
distribution function.
11.7 Properties of Cumulative Distribution Function (c.d.f.)
1. F(x) is defined for all x ∈ R, the set of real numbers.
2. 0 ≤ F(x) ≤ 1
3. F(x) is a non-decreasing function of x. [If a < b, then F(a) ≤ F(b).]
4. F(∞) = 1 and F(-∞) = 0,
where F(-∞) = lim_{x→-∞} F(x) and F(∞) = lim_{x→∞} F(x)
5. Let a and b be two real numbers where a < b; then using the distribution function,
we can compute probabilities of different events as follows.
i) P (a < X ≤ b) = P[X ≤ b] - P[X ≤ a] = F(b) - F(a)
ii) P (a ≤ X ≤ b) = P[X ≤ b] - P[X ≤ a] + P(X = a) = F(b) - F(a) + P(a)
iii) P (a ≤ X < b) = P[X ≤ b] - P[X ≤ a] - P(X = b) + P(X = a) = F(b) - F(a) - P(b) + P(a)
iv) P (a < X < b) = P[X ≤ b] - P[X ≤ a] - P(X = b) = F(b) - F(a) - P(b)
v) P (X > a) = 1 - P[X ≤ a] = 1 - F(a)
vi) P (X ≥ a) = 1 - P[X ≤ a] + P[X = a] = 1 - F(a) + P(a)
vii) P (X = a) = F(a) - lim_{n→∞} F(a - 1/n)
viii) P (X < a) = lim_{n→∞} F(a - 1/n)
Example 8:
The following is the cumulative distribution function of a discrete random variable X.

X = xi    -3     -1     0      1      2      3      5      8
F(x)      0.1    0.3    0.45   0.65   0.75   0.90   0.95   1.00

Find: i) the p.m.f. of X  ii) P(0 < X < 2)  iii) P(1 ≤ X ≤ 3)  iv) P(-3 < X ≤ 2)
v) P(-1 ≤ X < 1)  vi) P(X = even)  vii) P(X > 2)  viii) P(X ≥ 3)
Solution:
i) Since F(xi) = Σ_{j=1}^{i} Pj and F(x_{i-1}) = Σ_{j=1}^{i-1} Pj,
∴ Pi = Σ_{j=1}^{i} Pj - Σ_{j=1}^{i-1} Pj = F(xi) - F(x_{i-1})
∴ The p.m.f. of X is given by

X = xi    -3     -1     0      1      2      3      5      8
P(x)      0.1    0.2    0.15   0.2    0.1    0.15   0.05   0.05

ii) P(0 < X < 2) = F(2) - F(0) - P(2) = 0.75 - 0.45 - 0.1 = 0.2
iii) P(1 ≤ X ≤ 3) = F(3) - F(1) + P(1) = 0.9 - 0.65 + 0.2 = 0.45
iv) P(-3 < X ≤ 2) = F(2) - F(-3) = 0.75 - 0.1 = 0.65
v) P(-1 ≤ X < 1) = F(1) - F(-1) - P(1) + P(-1) = 0.65 - 0.3 - 0.2 + 0.2 = 0.35
vi) P(X = even) = P(X = 0) + P(X = 2) + P(X = 8) = 0.15 + 0.1 + 0.05 = 0.3
vii) P(X > 2) = 1 - F(2) = 1 - 0.75 = 0.25, OR P(X = 3) + P(X = 5) + P(X = 8) = 0.15 + 0.05 + 0.05 = 0.25
viii) P(X ≥ 3) = 1 - F(3) + P(3) = 1 - 0.9 + 0.15 = 0.25
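These answers can be cross-checked mechanically; the sketch below (Python, for illustration) recovers the p.m.f. as successive differences of the tabulated c.d.f. and re-evaluates a few of the probabilities:

```python
# c.d.f. of the discrete random variable from the example above
F = {-3: 0.10, -1: 0.30, 0: 0.45, 1: 0.65, 2: 0.75, 3: 0.90, 5: 0.95, 8: 1.00}

xs = sorted(F)
# p.m.f.: P(x_i) = F(x_i) - F(x_{i-1})
p = {x: round(F[x] - (F[xs[i - 1]] if i else 0.0), 2) for i, x in enumerate(xs)}

print(p[0])                          # 0.15
print(round(F[2] - F[0] - p[2], 2))  # P(0 < X < 2)   = 0.2
print(round(F[3] - F[1] + p[1], 2))  # P(1 <= X <= 3) = 0.45
print(round(1 - F[2], 2))            # P(X > 2)       = 0.25
```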
Example 9:
A random variable has the following probability distribution:

Values of X   0   1    2    3    4    5     6     7     8
P(x)          a   3a   5a   7a   9a   11a   13a   15a   17a

(1) Determine the value of a
(2) Find (i) P(x < 3) (ii) P(x ≤ 3) (iii) P(x > 7) (iv) P(2 ≤ x ≤ 5) (v) P(2 < x < 5)
(3) Find the cumulative distribution function of x.
Solution:
1. Since pi is the probability mass function of a discrete random variable X,
we have Σpi = 1
∴ a + 3a + 5a + 7a + 9a + 11a + 13a + 15a + 17a = 1
81a = 1
a = 1/81
2. (i) P(x < 3) = P(x = 0) + P(x = 1) + P(x = 2)
= a + 3a + 5a
= 9a = 9 * (1/81) = 1/9
(ii) P(x ≤ 3) = P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3)
= a + 3a + 5a + 7a
= 16a = 16 * (1/81) = 16/81
(iii) P(x > 7) = P(x = 8) = 17a = 17 * (1/81) = 17/81
(iv) P(2 ≤ x ≤ 5) = P(x = 2) + P(x = 3) + P(x = 4) + P(x = 5)
= 5a + 7a + 9a + 11a = 32a = 32 * (1/81) = 32/81
(v) P(2 < x < 5) = P(x = 3) + P(x = 4) = 7a + 9a = 16a = 16 * (1/81) = 16/81
3. The distribution function is as follows:

X = x      0   1    2    3     4     5     6     7     8
F(x) =
P(X ≤ x)   a   4a   9a   16a   25a   36a   49a   64a   81a

(or)

F(x)   1/81   4/81   9/81   16/81   25/81   36/81   49/81   64/81   1
Example 10:
Find the probability between X = 1 and 2, i.e. P(1 ≤ X ≤ 2), for a continuous random
variable whose p.d.f. is given as
f(x) = (1/6)x + k for 0 ≤ x ≤ 3
Solution:
Now, the p.d.f. must satisfy probability Axiom II. Thus
∫_{-∞}^{∞} f(x) dx = 1
∫_0^3 ((1/6)x + k) dx = [x²/12 + kx] evaluated from 0 to 3 = 3²/12 + 3k = 1
∴ 3k = 1/4
∴ k = 1/12
Now, P(1 ≤ X ≤ 2) = ∫_1^2 f(x) dx = ∫_1^2 ((1/6)x + 1/12) dx = 1/3
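The solution can be verified exactly with a short sketch (Python; it uses the antiderivative x²/12 + kx obtained above):

```python
from fractions import Fraction as Fr

k = Fr(1, 12)

def integral(a, b):
    """Integral of f(x) = x/6 + k over [a, b], via the antiderivative x^2/12 + k*x."""
    anti = lambda x: Fr(x, 1) ** 2 / 12 + k * x
    return anti(b) - anti(a)

# Axiom II: the total probability over [0, 3] must be 1
assert integral(0, 3) == 1
print(integral(1, 2))   # P(1 <= X <= 2) = 1/3
```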
Example 11:
A continuous random variable has p.d.f. given as
f(x) = kx(2 - x)    0 < x < 2
     = 0            otherwise
i) Find k
ii) Find P(x < 1/2)
Solution:
i) Now, the p.d.f. must satisfy probability Axiom II. Thus
∫_{-∞}^{∞} f(x) dx = 1
∫_0^2 kx(2 - x) dx = [kx² - kx³/3] evaluated from 0 to 2 = 1
∴ k = 3/4
ii) Now, P(x < 1/2) = ∫_{-∞}^{1/2} f(x) dx
= ∫_0^{1/2} (3/4)x(2 - x) dx = [(3/4)x² - x³/4] evaluated from 0 to 1/2 = 5/32
Example 12:
A random variable is the number of tails when a coin is tossed three times. Find the
p.m.f. and c.d.f. of the random variable.
Solution:
S / Ω = { TTT, HTT, THT, TTH, HHT, HTH, THH, HHH }
n(Ω) = 8

X = xi                    0     1     2     3     Total
p.m.f. P(X = xi)          1/8   3/8   3/8   1/8   1
c.d.f. F(a) = P{X ≤ a}    1/8   4/8   7/8   1

The c.d.f. can be described as follows:

F(a) = 0      -∞ < a < 0
     = 1/8    0 ≤ a < 1
     = 4/8    1 ≤ a < 2
     = 7/8    2 ≤ a < 3
     = 1      3 ≤ a < ∞

Example 13:
The c.d.f. of a random variable is as follows:

F(a) = 0       -∞ < a < 0
     = 1/2     0 ≤ a < 1
     = 2/3     1 ≤ a < 2
     = 11/12   2 ≤ a < 3
     = 1       3 ≤ a < ∞

Find i) P( X < 3 )  ii) P( X = 1 )
Solution:
i) P(X < 3) = lim_{n→∞} F(3 - 1/n) = 11/12

ii) P(X = 1) = P(X ≤ 1) - P(X < 1)
= F(1) - lim_{n→∞} F(1 - 1/n) = 2/3 - 1/2 = 1/6

11.8 Expectation or Expected Value of a Random Variable
Expectation is a very basic concept and is employed widely in decision theory,
management science, systems analysis, the theory of games and many other fields. It
is one of the most important concepts in the probability theory of a random variable.
The expectation of X is denoted by E(X). The expected value or mathematical
expectation of a random variable X is the weighted average of the values that X can
assume, with the probabilities of its various values as weights. The expected value of a
random variable provides a central point for the distribution of values of the random
variable. So the expected value is a mean or average value of the probability distribution
of the random variable, denoted by 'μ' (read as 'mu'). The mathematical expectation of a
random variable is also known as its arithmetic mean.
μ = E(X)
11.8.1 Expected Value of a Discrete Random Variable:
If X is a discrete random variable with p.m.f. P(xi), the expectation of X, denoted by
E(X), is defined as,
E(X) = Σ_{i=1}^{n} xi * P(xi), where xi for i = 1, 2, ….. n are the values of X
11.8.2 Expected Value of a Continuous Random Variable:
If X is a continuous random variable with p.d.f. f(x), the expectation of X, denoted by
E(X), is defined as,
E(X) = ∫_{-∞}^{∞} x f(x) dx
11.8.3 Properties of Expectation
1. For two random variables X and Y, if E(X) and E(Y) exist, E(X + Y) = E(X) +
E(Y). This is known as the addition theorem on expectation.
2. For two independent random variables X and Y, E(XY) = E(X).E(Y),
provided all expectations exist. This is known as the multiplication theorem on
expectation.
3. The expectation of a constant is the constant itself, i.e. E(c) = c
4. E(cX) = cE(X)
5. E(aX + b) = aE(X) + b
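The last property can be checked exactly on any p.m.f.; a small sketch (Python, using a fair die as the illustrative distribution):

```python
from fractions import Fraction as F

pmf = {x: F(1, 6) for x in range(1, 7)}   # fair die

def E(g, pmf):
    """E[g(X)] = sum over x of g(x) * P(x)."""
    return sum(g(x) * p for x, p in pmf.items())

EX = E(lambda x: x, pmf)
print(EX)                            # 7/2
print(E(lambda x: 2 * x + 3, pmf))   # 10
# property 5: E(aX + b) = a E(X) + b
assert E(lambda x: 2 * x + 3, pmf) == 2 * EX + 3
```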
11.9 Expectation of a Function
Let Y = g(X) be a function of a random variable X; then Y is also a random variable
with the same probability distribution as X.
For discrete random variables X and Y, the probability distribution of Y is also P(xi).
Thus the expectation of Y is,
E(Y) = E[g(xi)] = Σ_{i=1}^{n} g(xi) * P(xi)
For continuous random variables X and Y, the probability distribution of Y is also
f(x). Thus the expectation of Y is,
E(Y) = E[g(x)] = ∫_{-∞}^{∞} g(x) * f(x) dx
Example 14:
A random variable is the number of tails when a coin is tossed three times. Find the
expectation (mean) of the random variable.
Solution:
S / Ω = { TTT, HTT, THT, TTH, HHT, HTH, THH, HHH }
n(Ω) = 8

X = xi              0     1     2     3
p.m.f. P(X = xi)    1/8   3/8   3/8   1/8
xi * P(xi)          0     3/8   6/8   3/8

E(X) = Σ_{i=1}^{4} xi * P(xi), where xi are the values of X
E(X) = 0 + 3/8 + 6/8 + 3/8 = 12/8 = 3/2

Example 15:
X is a random variable with probability distribution

X = xi              0     1     2
p.m.f. P(X = xi)    0.3   0.3   0.4

Y = g(X) = 2X + 3
Find the expected value or mean of Y (i.e. E(Y)).
Solution:
Y = g(X) = 2X + 3
When X = 0,
Y = 2X + 3 = 2(0) + 3 = 3,
Similarly,
when X = 1, Y = 5; when X = 2, Y = 7

X = xi              0     1     2
Y = yi              3     5     7
p.m.f. P(Y = yi)    0.3   0.3   0.4

E(Y) = E[g(xi)] = Σ_{i=1}^{n} g(xi) * P(xi) = Σ_{i=1}^{n} yi * P(xi)
E(Y) = 3 × 0.3 + 5 × 0.3 + 7 × 0.4 = 5.2
11.10 Variance of a Random Variable
The expected value of X (i.e. E(X)) provides a measure of the central tendency of the
probability distribution. However, it does not provide any idea regarding the spread
of the distribution. For this purpose, the variance of a random variable is defined.
Let X be a discrete random variable on a sample space S. The variance of X, denoted
by Var(X) or σ² (read as 'sigma square'), is defined as,
Var(X) = E[(X - μ)²] = E[(X - E(X))²] = Σ_{i=1}^{n} (xi - μ)² P(xi)
Var(X) = Σ_{i=1}^{n} xi² P(xi) - 2μ Σ_{i=1}^{n} xi P(xi) + μ² Σ_{i=1}^{n} P(xi)
= E(X²) - 2μ E(X) + μ² ……………..
[E(X) = Σ_{i=1}^{n} xi P(xi) and Σ_{i=1}^{n} P(xi) = 1]
= E(X²) - 2μ.μ + μ²
= E(X²) - μ² = E(X²) - [E(X)]²
For a continuous random variable,
Var(X) = ∫_{-∞}^{∞} (x - μ)² f(x) dx = ∫_{-∞}^{∞} (x² - 2μx + μ²) f(x) dx
= ∫_{-∞}^{∞} x² f(x) dx - ∫_{-∞}^{∞} 2μ x f(x) dx + ∫_{-∞}^{∞} μ² f(x) dx
= E(X²) - 2μ E(X) + μ²
= E(X²) - 2μ.μ + μ²
= E(X²) - μ² = E(X²) - [E(X)]²
Since the dimensions of the variance are the square of the dimensions of X, for
comparison it is better to take the square root of the variance. It is known as the
standard deviation and denoted by S.D.(X) or σ (sigma):
S.D. = σ = √Var(X)
11.10.1 Properties of Variance:
1. The variance of a constant is zero, i.e. Var(c) = 0
2. Var(X + c) = Var(X)
Note: This theorem shows that variance is independent of a change of origin.
3. Var(aX) = a² Var(X)
Note: This theorem shows that a change of scale affects the variance.
4. Var(aX + b) = a² Var(X)
5. Var(b - aX) = a² Var(X)
Example 16:
Calculate the variance of X, if X denotes the number obtained on the face of a fair
die.
Solution:
X is a random variable with probability distribution

X = xi              1     2     3     4     5     6
p.m.f. P(X = xi)    1/6   1/6   1/6   1/6   1/6   1/6
xi * P(xi)          1/6   2/6   3/6   4/6   5/6   6/6
xi² * P(xi)         1/6   4/6   9/6   16/6  25/6  36/6

E(X) = Σ_{i=1}^{6} xi * P(xi), where xi for i = 1, 2, 3, 4, 5, 6 are the values of X
E(X) = 1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 21/6 = 3.5
E(X²) = Σ_{i=1}^{6} xi² * P(xi), where xi for i = 1, 2, 3, 4, 5, 6 are the values of X
E(X²) = 1/6 + 4/6 + 9/6 + 16/6 + 25/6 + 36/6 = 91/6
σ² = Var(X) = E(X²) - [E(X)]² = 91/6 - (21/6)² = 105/36 = 2.9167
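The same computation in code (a Python sketch) confirms the result; note that 105/36 reduces to 35/12:

```python
from fractions import Fraction as F

pmf = {x: F(1, 6) for x in range(1, 7)}        # fair die
EX  = sum(x * p for x, p in pmf.items())       # E(X)
EX2 = sum(x * x * p for x, p in pmf.items())   # E(X^2)

print(EX)           # 7/2
print(EX2)          # 91/6
print(EX2 - EX**2)  # 35/12  (= 105/36 ≈ 2.9167)
```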
Example 17:
Obtain the variance of the r.v. X for the following p.m.f.

X = xi              0      1      2     3     4      5
p.m.f. P(X = xi)    0.05   0.15   0.2   0.5   0.09   0.01

Solution:
X is a random variable with probability distribution

X = xi              0      1      2      3      4      5
p.m.f. P(X = xi)    0.05   0.15   0.2    0.5    0.09   0.01
xi * P(xi)          0      0.15   0.40   1.50   0.36   0.05
xi² * P(xi)         0      0.15   0.80   4.50   1.44   0.25

E(X) = Σ_{i=0}^{5} xi * P(xi), where xi for i = 0, 1, 2, 3, 4, 5 are the values of X
E(X) = 2.46
E(X²) = Σ_{i=0}^{5} xi² * P(xi), where xi for i = 0, 1, 2, 3, 4, 5 are the values of X
E(X²) = 7.14
σ² = Var(X) = E(X²) - [E(X)]² = 7.14 - (2.46)² = 1.0884
Example 18:
Obtain the variance of the r.v. X for the following probability distribution
P(x) = x²/30, x = 0, 1, 2, 3, 4
Solution:
X is a random variable with probability distribution P(x) = x²/30, x = 0, 1, 2, 3, 4
E(X) = Σ_{i=0}^{4} xi * P(xi), where xi for i = 0, 1, 2, 3, 4 are the values of X
= Σ_{i=0}^{4} xi * xi²/30 = (1/30) Σ_{i=0}^{4} xi³ = (1/30)(0 + 1 + 8 + 27 + 64) = 100/30 = 10/3 = 3.33
E(X) = 10/3
E(X²) = (1/30) Σ_{i=0}^{4} xi⁴, where xi for i = 0, 1, 2, 3, 4 are the values of X
E(X²) = (1/30)(1 + 16 + 81 + 256) = 354/30 = 11.8
σ² = Var(X) = E(X²) - [E(X)]² = 11.8 - (3.33)² = 0.6889
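A quick exact check of this last example (Python sketch):

```python
from fractions import Fraction as F

pmf = {x: F(x * x, 30) for x in range(5)}   # P(x) = x^2/30, x = 0..4
assert sum(pmf.values()) == 1               # a valid p.m.f.

EX  = sum(x * p for x, p in pmf.items())
EX2 = sum(x * x * p for x, p in pmf.items())
print(EX)           # 10/3
print(EX2)          # 59/5   (= 11.8)
print(EX2 - EX**2)  # 31/45  (≈ 0.6889)
```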
11.11 Summary
In this chapter, random variables and their types, with their probability distributions,
expected value and variance, were discussed.
Random variable: Let S / Ω be the sample space corresponding to the outcomes
of a random experiment. A function X: S → R (where R is the set of real numbers)
is called a random variable.
A random variable X is said to be discrete if it takes a finite or countably infinite
number of possible values. A sample space which is finite or countably infinite is
called denumerable or countable. If the sample space is not countable then it is
called continuous.
The probability distribution (or simply distribution) of a random variable X on a
sample space Ω is the set of pairs (X = xi, P(X = xi)) for all xi ∈ x(Ω), where P(X =
xi) is the probability that X takes the value xi.
Let X be a discrete random variable defined on a sample space Ω / S. Suppose {x1,
x2, x3, ……. xn} is the range set of X. With each xi, we assign a number P(xi) = P(X =
xi) called the probability mass function (p.m.f.) such that,
P(xi) ≥ 0 for i = 1, 2, 3 …., n AND
Σ_{i=1}^{n} P(xi) = 1
Probability Density Function (p.d.f.): Let X be a continuous random variable.
A function f(x) defined for all real x ∈ (-∞, ∞) is called a probability density function
(p.d.f.) if for any set B of real numbers, we get the probability
P { X ∈ B } = ∫_B f(x) dx
Axiom I : Any probability must be between zero and one.
Axiom II : The total probability of the sample space must be one.
Axiom III : For any sequence of mutually exclusive events E1, E2, E3, ……, i.e.
Ei ∩ Ej = Φ for i ≠ j, P(∪Ei) = Σ P(Ei)
Cumulative Distribution Function (c.d.f.)
It is also termed simply the distribution function. It is the accumulated value of
probability up to a given value of the random variable. Let X be a random variable;
then the cumulative distribution function (c.d.f.) is defined as the function F(a) such
that,
F(a) = P { X ≤ a }
Expected Value of a Random Variable: the expected value is the mean or average value
of the probability distribution of the random variable, μ = E(X).
The variance of X is denoted by Var(X); S.D. = σ = √Var(X)
11.12 Reference
Fundamentals of Mathematical Statistics by S. C. Gupta and V. K. Kapoor
11.13 Unit End Exercise
1 An urn contains 6 red and 4 white balls. Three balls are drawn at random.
Obtain the probability distribution of the number of white balls drawn.
Hints and Answers:

X = xi              0      1       2      3
p.m.f. P(X = xi)    5/30   15/30   9/30   1/30
2 Obtain the probability distribution of the number of sixes in two tosses of a
die.
Hints and Answers:

X = xi              0       1       2
p.m.f. P(X = xi)    25/36   10/36   1/36
3 If the variable X denotes the maximum of the two numbers when a pair of
unbiased dice is rolled, find the probability distribution of X.
Hints and Answers:

X = xi              1      2      3      4      5      6
p.m.f. P(X = xi)    1/36   3/36   5/36   7/36   9/36   11/36
4 A box of 20 mangoes contains 4 bad mangoes. Two mangoes are drawn
at random without replacement from this box. Obtain the probability
distribution of the number of bad mangoes in the sample.
Hints and Answers:

X = xi              0        1        2
p.m.f. P(X = xi)    95/138   40/138   3/138
5 Three cards are drawn at random successively, with replacement, from a well
shuffled pack of 52 playing cards. Getting 'a card of diamonds' is termed
a success. Obtain the probability distribution of the number of successes.
Hints and Answers:

X = xi              0       1       2      3
p.m.f. P(X = xi)    27/64   27/64   9/64   1/64

6 Determine 'k' such that the following functions are p.m.f.s
i) f(x) = kx, x = 1, 2, 3, ……., 10. [Ans: 1/55]
ii) f(x) = k. x = 0, 1, 2, 3 [Ans: 3/19]
7 A random variable X has the following probability distribution.

X = xi       0   1    2    3    4    5     6
P(X = xi)    k   3k   5k   7k   9k   11k   13k

Find k. [Ans: 1/49]
i) Find P(X ≥ 2) [Ans: 45/49]
ii) Find P(0 < X < 5) [Ans: 24/49]
8 Given the following distribution function of a random variable X.

X = xi    -3     -2     -1     0      1      2      3
F(X)      0.05   0.15   0.38   0.57   0.72   0.88   1

Obtain:
i) p.m.f. of X
[Ans:
X = xi    -3     -2     -1     0      1      2      3
P(X)      0.05   0.1    0.23   0.19   0.15   0.16   0.12
]
ii) P(-2 ≤ X ≤ 1) [Ans: 0.67]
iii) P(X > 0) [Ans: 0.43]
iv) P(-1 < X < 2) [Ans: 0.34]
v) P(-3 ≤ X < -1) [Ans: 0.15]
vi) P(-2 < X ≤ 0) [Ans: 0.42]
9 A continuous random variable X assumes values between 2 and 5 with p.d.f.
f(x) = k(1 + x).
i) Find k [Ans: 2/27]
ii) Find P(x < 4) [Ans: 16/27]
10 Following is the c.d.f. of a discrete random variable X.

X = xi      1      2      3      4      5      6      7      8
F(X ≤ a)    0.08   0.12   0.23   0.31   0.48   0.62   0.85   1

Find:
i) p.m.f. of X
[Ans:
X = xi    1      2      3      4      5      6      7      8
P(X)      0.08   0.04   0.11   0.08   0.17   0.14   0.23   0.15
]
ii) P(X ≤ 4) [Ans: 0.31]
iii) P(2 ≤ X ≤ 6) [Ans: 0.54]

Unit 5
12
DISTRIBUTIONS: DISCRETE DISTRIBUTIONS
Unit Structure
12.0 Objectives
12.1 Introduction
12.2 Uniform Distribution
12.2.1 Definition
12.2.2 Mean and Variance of Uniform Distribution
12.2.3 Applications of Uniform Distribution
12.3 Bernoulli Distribution
12.3.1 Definition
12.3.2 Mean and Variance of Bernoulli Distribution
12.3.3 Applications of Bernoulli Distribution
12.3.4 Distribution of Sum of Independent and Identically Distributed
Bernoulli Random Variables
12.4 Binomial Distribution
12.4.1 Definition
12.4.2 Mean and Variance of Binomial Distribution
12.4.3 Applications of Binomial Distribution
12.5 Poisson Distribution
12.5.1 Definition
12.5.2 Mean and Variance of Poisson Distribution
12.5.3 Applications of Poisson Distribution
12.5.4 Characteristics of Poisson Distribution
12.6 Summary
12.7 Reference
12.8 Unit End Exercise
12.0 Objectives
After going through this unit, you will be able to:
• Understand the need for standard probability distributions as models
• Learn the probability distributions and compute probabilities
• Understand the specific situations for the use of these models
• Learn interrelations among the different probability distributions.
12.1 Introduction
In the previous unit we have seen the general theory of univariate probability
distributions. For a discrete random variable, the p.m.f. can be calculated using
the underlying probability structure on the sample space of the random experiment.
The p.m.f. can be expressed in a mathematical form. This probability distribution
can be applied to a variety of real-life situations which possess some common
features. Hence these are also called 'Probability Models'.
Discrete distributions: Uniform, Bernoulli, Binomial, Poisson
Continuous distributions: Uniform, Exponential
In this unit we will study some probability distributions, viz. the Uniform, Binomial,
Poisson and Bernoulli distributions.





12.2 Uniform Distribution
The uniform distribution is the simplest statistical distribution. When a coin is tossed,
the likelihood of getting a tail or head is the same. A good example of a discrete
uniform distribution is the set of possible outcomes of rolling a 6-sided fair die:
Ω = { 1, 2, 3, 4, 5, 6 }
In this case, each of the six numbers has an equal chance of appearing. Therefore,
each time the fair die is thrown, each side has a chance of 1/6. The number of
values is finite. It is impossible to get a value of 1.3, 4.2, or 5.7 when rolling a fair
die. However, if another die is added and they are both thrown, the distribution
that results is no longer uniform, because the probabilities of the sums are not equal.
A deck of cards also has a uniform distribution. This is because an individual has
an equal chance of drawing any particular card, i.e. 1/52.
Consider a small-scale company with 30 employees with employee ids 101 to 130.
A leader for the company is to be selected at random. Therefore an employee id is
selected randomly from 101 to 130. If X denotes the employee id selected, then
since all the ids are equally likely, the p.m.f. of X is given by,
P(x) = 1/30    x = 101, 102, ……. 130
     = 0       otherwise
Such a distribution is called a discrete uniform distribution. The discrete uniform
distribution is also known as the equally likely outcomes distribution.
Another simple example is the probability distribution of a coin being flipped. The
possible outcomes in such a scenario can only be two; therefore, the finite number of
values is 2.
12.2.1 Definition: Let X be a discrete random variable taking values 1, 2, ……, n. Then X is said to follow a discrete uniform distribution if its p.m.f. is given by
P(X = x) = 1/n    x = 1, 2, …., n
         = 0      otherwise

'n' is called the parameter of the distribution. Whenever the parameter value is
known, the distribution is known completely. The name is 'uniform' as it treats
all the values of the variable 'uniformly'. It is applicable where all the values of
the random variable are equally likely.
Some examples of situations where it applies:
1. Let X denote the number on the face of an unbiased die after it is rolled.
P(X = x) = 1/6    x = 1, 2, 3, 4, 5, 6
         = 0      otherwise
2. A machine generates a digit randomly from 0 to 9.
P(X = x) = 1/10    x = 0, 1, 2, …. 9
         = 0       otherwise
12.2.2 Mean and Variance of Uniform Distribution
Let X follow a discrete uniform distribution, with p.m.f. given by
P(X = x) = 1/n    x = 1, 2, …., n
         = 0      otherwise
Mean = E(X) = Σ_{i=1}^{n} xi * P(xi) = (1/n) Σ_{i=1}^{n} xi = n(n+1)/(2n) = (n+1)/2
Variance(X) = E(X²) - [E(X)]²
E(X²) = Σ_{i=1}^{n} xi² * P(xi) = (1/n) Σ_{i=1}^{n} xi² = (n+1)(2n+1)/6
Variance(X) = E(X²) - [E(X)]² = (n+1)(2n+1)/6 - (n+1)²/4 = (n²-1)/12
Standard Deviation (S.D.)(X) = σ = √Var(X) = √((n²-1)/12)
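The closed forms (n+1)/2 and (n²-1)/12 can be confirmed against the raw sums for any n; a small sketch (Python):

```python
from fractions import Fraction as F

def uniform_moments(n):
    """Mean and variance of the discrete uniform distribution on 1..n, from raw sums."""
    mean = F(sum(range(1, n + 1)), n)
    var = F(sum(x * x for x in range(1, n + 1)), n) - mean ** 2
    return mean, var

for n in (6, 10, 30):
    mean, var = uniform_moments(n)
    assert mean == F(n + 1, 2)       # (n+1)/2
    assert var == F(n * n - 1, 12)   # (n^2 - 1)/12
print("formulas hold for n = 6, 10, 30")
```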

12.2.3 Applications of Uniform Distribution
Example 1: Find the variance and standard deviation of X, where X is the square
of the score shown on a fair die.
Solution: Let X be a random variable which denotes the square of the score on a
fair die.
X = { 1, 4, 9, 16, 25, 36 }, each having probability 1/6
Mean = E(X) = Σ_{i=1}^{n} xi * P(xi) = (1/6)(1 + 4 + 9 + 16 + 25 + 36) = 91/6
E(X²) = Σ_{i=1}^{n} xi² * P(xi) = (1/6)(1 + 16 + 81 + 256 + 625 + 1296) = 2275/6
Variance(X) = E(X²) - [E(X)]² = 2275/6 - (91/6 × 91/6) = 5369/36
Standard Deviation (S.D.)(X) = σ = √Var(X) = √(5369/36)
12.3 Bernoulli Distribution
A trial of an experiment is performed whose outcome can be classified as either
success or failure. The probability of success is p (0 ≤ p ≤ 1) and the probability of
failure is (1 - p). A random variable X which takes the two values 0 and 1 with
probabilities 'q' and 'p', i.e. P(x = 1) = p and P(x = 0) = q, p + q = 1 (i.e. q = 1 - p),
is called a Bernoulli variate and is said to follow a Bernoulli distribution, where p and
q are the probabilities of success and failure respectively. It was discovered by the
Swiss mathematician James or Jacques Bernoulli (1654-1705). It is applied
whenever the experiment results in only two outcomes, one being success and the
other failure. Such an experiment is called a Bernoulli trial.
12.3.1 Definition: Let X be a discrete random variable taking the value 1 (success, with probability p) or 0 (failure, with probability q). Then X is said to follow the Bernoulli distribution if its p.m.f. is given by
P(X = x) = pˣ q^(1-x)    x = 0, 1
         = 0             otherwise
Note: 0 ≤ p ≤ 1, p + q = 1
This distribution is known as the Bernoulli distribution with parameter 'p'.

12.3.2 Mean and Variance of Bernoulli Distribution
Let X follow the Bernoulli distribution with parameter 'p'. Therefore its p.m.f. is
given by
P(X = x) = pˣ q^(1-x)    x = 0, 1
         = 0             otherwise
Note: 0 ≤ p ≤ 1, p + q = 1
Mean = E(X) = Σ_{x=0}^{1} x * P(x) = Σ_{x=0}^{1} x pˣ q^(1-x)
Substituting the values x = 0 and x = 1, we get
E(X) = (0 × p⁰ × q^(1-0)) + (1 × p¹ × q^(1-1)) = p
Similarly, E(X²) = Σ_{x=0}^{1} x² * P(x) = Σ_{x=0}^{1} x² pˣ q^(1-x)
Substituting the values x = 0 and x = 1, we get
E(X²) = p
Variance(X) = E(X²) - [E(X)]² = p - p² = p(1 - p) = pq ………… (p + q = 1)
Standard Deviation (S.D.)(X) = σ = √Var(X) = √(pq)
Note: if p = q = 1/2, the Bernoulli distribution reduces to a discrete uniform
distribution, as
P(X = x) = 1/2    x = 0, 1
         = 0      otherwise
12.3.3 Applications of Bernoulli Distribution
Examples of Bernoulli trials are:
1) Toss of a coin (head or tail)
2) Throw of a die (even or odd number)
3) Performance of a student in an examination (pass or fail)
4) Sex of a newborn child recorded in a hospital (Male = 1, Female = 0)
5) Items classified as 'defective = 0' and 'non-defective = 1'.

12.3.4 Distribution of Sum of Independent and Identically Distributed
Bernoulli Random Variables
Let Yi, i = 1, 2, … n be 'n' independent Bernoulli random variables with parameter
'p' ('p' for success and 'q' for failure, p + q = 1),
i.e. P[Yi = 1] = p and P[Yi = 0] = q, for i = 1, 2, … n.
Now let us define X, which counts the number of '1's (successes) in 'n'
independent Bernoulli trials:
X = Σ_{i=1}^{n} Yi
In order to derive the probability of 'x' successes in 'n' trials, i.e. P[X = x],
consider a particular sequence of 'x' successes and the remaining (n - x) failures:
1 0 1 1 1 0 1 0 ……… 1 0
Here '1' (Success, p) occurs 'x' times and '0' (Failure, q) occurs (n - x) times.
Due to independence, the probability of such a sequence is given as follows:
p p p ….. p (x times) × q q q q ……… q ((n - x) times) = pˣ q^(n-x)
However, the successes ('1's) can occupy any 'x' places out of the 'n' places in a
sequence in C(n, x) ways; therefore
P(X = x) = C(n, x) pˣ q^(n-x)    x = 0, 1, 2, ….. n
         = 0                     otherwise
Note: 0 ≤ p ≤ 1, p + q = 1
This gives us a famous distribution called the 'Binomial Distribution'.
12.4 Binomial Distribution
This distribution is very useful in day-to-day life. A binomial random variable
counts the number of successes when 'n' Bernoulli trials are performed. A single
success/failure experiment is also called a Bernoulli trial or Bernoulli experiment,
and a sequence of outcomes is called a Bernoulli process; for a single trial,
i.e., n = 1, the binomial distribution is a Bernoulli distribution.
The binomial distribution is denoted by X → B(n, p).
The Bernoulli distribution is just a binomial distribution with n = 1,
i.e. parameters (1, p).

12.4.1 Definition:
A discrete random variable X taking values 0, 1, 2, ……. n is said to follow a
binomial distribution with parameters 'n' and 'p' if its p.m.f. is given by
P(X = x) = C(n, x) pˣ q^(n-x)    x = 0, 1, 2, ….. n
         = 0                     otherwise
Note: 0 ≤ p ≤ 1, p + q = 1
Remark:
1) The probabilities are the terms in the binomial expansion of (p + q)ⁿ, hence the
name 'Binomial Distribution'.
2) Σ_{x=0}^{n} P(x) = Σ_{x=0}^{n} C(n, x) pˣ q^(n-x) = (p + q)ⁿ = 1
3) The binomial distribution is frequently used to model the number of
successes in a sample of size n drawn with replacement from a population
of size N. If the sampling is carried out without replacement, the draws are
not independent and so the resulting distribution is a hypergeometric
distribution, not a binomial one.
Let ; follows Binomial Distribution with parameters ‘n’ and ‘p’ if its p.m.f. is
given by P (; x) ቀ݊
ݔቁ݌௫ݍ௡ି௫ x 0, 1, 2…..n 0 otherwise Note: 0 ≤ p ≤ 1 , p  q 1 Mean E(;) σݔ௜ܲכሺݔ௜ሻଵ
௜ୀ଴ ൌσݔ௜൫௡
௫൯݌௫ݍ௡ି௫ ଵ
௜ୀ଴ Mean E(;) σš୧כሺš୧ሻ୬
୧ୀ଴
σš୧൫୬
୶൯’୶“୬ି୶ ୬
୧ୀ଴
σš୧Ǥ୬Ǩ
୶Ǩሺ୬ି୶ሻǨ’୶“୬ି୶ ୬
୧ୀ଴
Substitute the value of x= 0, we get first term as ‘0’
σǤሺ୬ିଵሻǨ
ሺ୶ିଵሻǨሺ୬ି୶ሻǨ’୶“୬ି୶ ୬
୧ୀଵ munotes.in

Page 196

196NUMERICAL AND STATISTICAL METHODS ݌݊σǤሺ୬ିଵሻǨ
ሺ୶ିଵሻǨሺ୬ିଵିሺ୶ିଵሻሻ ሻǨ’୶ିଵ“୬ି୶ ୬
୧ୀଵ

݌݊σǤ൫୬ିଵ
୶ିଵ൯’୶ିଵ“୬ିଵିሺ୶ିଵሻ ୬
୧ୀଵ

݌݊ሺ݌൅ݍሻ௡ିଵ ----- Using Binomial Expansion
݌݊ǥǥǥǥǥǥǤǤሺ݌൅ݍ ൌ ͳሻ
Mean E (;) — ݌݊(------------------------------ 1) E(;2) E>; (;-1)@  E>;@ -------------------------(2)
E>; (;-1)@ σš୧ሺš୧െͳሻ൫୬
୶൯’୶“୬ି୶ ୬
୧ୀ଴
σ୶ሺ୶ିଵሻ୬Ǩ
୶Ǩሺ୬ି୶ሻǨ’୶“୬ି୶ ୬
୧ୀ଴
݊ሺ݊െͳሻ݌ଶσǤ൫୬ିଶ
୶ିଶ൯’୶ିଶ“୬ିଶିሺ୶ିଶሻ୬
୧ୀଶ
݊ሺ݊െͳሻ݌ଶሺ݌൅ݍሻሺ௡ିଶሻ
݊ሺ݊െͳሻ݌ଶ---------------------------------------(3)
Using (1), (2) and (3)
E(;2) ݊ሺ݊െͳሻ݌ଶ൅݌݊(------------------------ 4)
Variance (X) = E(;2) – >E(;)@2
Variance (X) = ሾ݊ሺ݊െͳሻ݌ଶ൅݌݊ሿെሺ݌݊ሻଶൌݍ݌݊
Standard Deviation ( S.D.) (X)= σ = ඥݎܸܽሺܺሻ ඥݍ݌݊ NOTE: Binomial Theorem (Binomial Expansion) it states that, where n is a positive integer: (a  b)n an  (nC1)an-1b  (nC2)an-2b2 + … + (nCn-1)abn-1  bn
൫௡
௥൯ൌ௡Ǩ
ሺ௡ି௥ሻǨ௥Ǩൌ nCr 12.4. Applications of Binomial Distribution
We get the binomial distribution under the following experimental conditions:
1) The number of trials 'n' is finite (not very large).
2) The trials are independent of each other.
3) The probability of success in any trial, 'p', is constant for each trial.
4) Each trial (random experiment) must result in a success or a failure
(Bernoulli trial).
Following are some real-life examples of the binomial distribution:
1. Number of defective items in a lot of n items produced by a machine
2. Number of male births out of 'n' births in a hospital
3. Number of correct answers in a multiple choice test
4. Number of seeds germinated in a row of 'n' planted seeds
5. Number of rainy days in a month
6. Number of re-captured fish in a sample of 'n' fishes
7. Number of missiles hitting the targets out of 'n' fired
In all the above situations, 'p', the probability of success, is assumed to be constant.
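The mean np and variance npq derived above can be re-checked numerically from the p.m.f. itself; a short sketch (Python, with illustrative parameters):

```python
from math import comb
from fractions import Fraction as F

def binomial_pmf(n, p):
    """p.m.f. of B(n, p): P(X = x) = C(n, x) p^x q^(n-x)."""
    q = 1 - p
    return {x: comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)}

n, p = 8, F(1, 4)                    # illustrative parameters
pmf = binomial_pmf(n, p)
EX  = sum(x * pr for x, pr in pmf.items())
EX2 = sum(x * x * pr for x, pr in pmf.items())

assert EX == n * p                       # mean = np = 2
assert EX2 - EX ** 2 == n * p * (1 - p)  # variance = npq = 3/2
print(EX, EX2 - EX ** 2)
```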
Example 1:
Comment on the following: "The mean of a binomial distribution is 5 and its
variance is 9".
Solution:
The parameters of the binomial distribution are n and p.
We have mean ⇒ np = 5
Variance ⇒ npq = 9
∴ q = npq / np = 9/5 > 1,
which is not admissible, since q cannot exceed unity (p + q = 1). Hence the given
statement is wrong.
Example 2:
Eight coins are tossed simultaneously. Find the probability of getting at least six
heads.
Solution:
Here the number of trials n = 8; p denotes the probability of getting a head.
p = 1/2 and q = 1 - p = 1/2
If the random variable X denotes the number of heads, then the probability of x
successes in n trials is given by
P(X = x) = C(8, x) (1/2)ˣ (1/2)^(8-x) = C(8, x) (1/2)⁸ = (1/2⁸) C(8, x)    x = 0, 1, 2, ….. 8
Probability of getting at least 6 heads is given by
P(X ≥ 6) = P(X = 6) + P(X = 7) + P(X = 8)
= (1/2⁸) C(8,6) + (1/2⁸) C(8,7) + (1/2⁸) C(8,8)
= (1/2⁸) [C(8,6) + C(8,7) + C(8,8)]
= (1/256) [28 + 8 + 1] = 37/256
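Since the eight tosses give 2⁸ equally likely sequences, the answer can also be checked by direct enumeration (Python sketch):

```python
from itertools import product
from fractions import Fraction as F

# count sequences of 8 tosses with at least six heads
count = sum(1 for seq in product("HT", repeat=8) if seq.count("H") >= 6)
print(F(count, 2 ** 8))   # 37/256
```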
Example 3:
Ten coins are tossed simultaneously. Find the probability of getting (i) at least
seven heads (ii) exactly seven heads (iii) at most seven heads.
Solution:
Here the number of trials n = 10; p denotes the probability of getting a head.
p = 1/2 and q = 1 - p = 1/2
If the random variable X denotes the number of heads, then the probability of x
successes in n trials is given by
P(X = x) = C(10, x) (1/2)ˣ (1/2)^(10-x) = C(10, x) (1/2)¹⁰ = (1/2¹⁰) C(10, x)    x = 0, 1, 2, ….. 10

i) Probability of getting at least 7 heads is given by
P(X ≥ 7) = P(X = 7) + P(X = 8) + P(X = 9) + P(X = 10)
= (1/2¹⁰) [C(10,7) + C(10,8) + C(10,9) + C(10,10)]
= (1/1024) [120 + 45 + 10 + 1] = 176/1024
ii) Probability of getting exactly 7 heads is given by
P(X = 7) = (1/2¹⁰) C(10,7) = 120/1024
iii) Probability of getting at most 7 heads is given by
P(X ≤ 7) = 1 - P(X > 7)
= 1 - { P(X = 8) + P(X = 9) + P(X = 10) }
= 1 - (1/2¹⁰) [C(10,8) + C(10,9) + C(10,10)]
= 1 - (1/1024) [45 + 10 + 1] = 1 - 56/1024 = 968/1024
Example 4:
20 wrist watches in a box of 100 are defective. If 10 watches are selected at
random, find the probability that (i) 10 are defective (ii) 10 are good (iii) at least
one watch is defective (iv)at most 3 are defective.
Solution:
20 out of 100 wrist watches are defective, so Probability of d efective wrist watch
p 20 /100
p ଶ଴
ଵ଴଴ൌଵ
ହ ׵q 1 – p ͳെଶ଴
ଵ଴଴ൌ଼଴
ଵ଴଴ൌସ

Since 10 watches are selected at random, n 10 P ( ; x ) ቀ݊
ݔቁ݌௫ݍ௡ି௫ x = 0 , 1, 2…..n
൬ͳͲ
ݔ൰൬ͳ
ͷ൰௫
൬Ͷ
ͷ൰ଵ଴ି௫
munotes.in

Page 200

200NUMERICAL AND STATISTICAL METHODSi) Probability of selecting 10 defective watches
P(; 10) ൫ଵ଴
ଵ଴൯ቀଵ
ହቁଵ଴
ቀସ
ହቁଵ଴ିଵ଴
ൌͳ Ǥଵ
ହభబǤͳൌଵ
ହభబ
ii) Probability of selecting 10 good watches (i.e. no defective)
P(; 0) ൫ଵ଴
଴൯ቀଵ
ହቁ଴
ቀସ
ହቁଵ଴ି଴
ൌͳ Ǥ ͳ Ǥସభబ
ହభబൌሺସ
ହሻ10
iii) Probability of selecting at leas t one defective watch
P(X ≥ 1) = 1 – P(;  1) 1 - P(; 0)
1 - ൫ଵ଴
଴൯ቀଵ
ହቁ଴
ቀସ
ହቁଵ଴ି଴
ൌͳെሺସ
ହሻ10
iv) Probability of selecting at most 3 defective watches:
P(X <= 3) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)
          = C(10,0)(1/5)^0(4/5)^10 + C(10,1)(1/5)^1(4/5)^9 + C(10,2)(1/5)^2(4/5)^8 + C(10,3)(1/5)^3(4/5)^7
          = (4/5)^10 + 10 . (1/5)(4^9/5^9) + 45 . (1/5^2)(4^8/5^8) + 120 . (1/5^3)(4^7/5^7)
          = 0.879 (approx.)
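A quick numerical check of the four watch probabilities (a sketch that follows the worked solution's binomial model with a constant defect probability of 1/5 per draw):

```python
from math import comb

# Binomial model for the watch example: n = 10 draws, P(defective) = 1/5
n, p = 10, 0.2

def pmf(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

p_all_defective = pmf(10)                       # (1/5)^10
p_none_defective = pmf(0)                       # (4/5)^10
p_at_least_one = 1 - pmf(0)                     # 1 - (4/5)^10
p_at_most_three = sum(pmf(x) for x in range(4)) # P(X <= 3)

print(round(p_at_most_three, 3))  # 0.879
```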
Example 5:
With the usual notation, find p for a binomial random variable X if n = 6 and 9 P(X = 4) = P(X = 2).
Solution:
The probability mass function of a binomial random variable X is
P(X = x) = C(n, x) p^x q^(n-x),  x = 0, 1, 2, ..., n   [Note: 0 <= p <= 1, p + q = 1]
Here n = 6, so
P(X = 4) = C(6,4) p^4 q^2
P(X = 2) = C(6,2) p^2 q^4

Given that 9 P(X = 4) = P(X = 2):
9 C(6,4) p^4 q^2 = C(6,2) p^2 q^4
9 . 15 p^4 q^2 = 15 p^2 q^4
9 p^4 q^2 = p^2 q^4
9 p^2 = q^2
Taking the positive square root on both sides we get 3p = q, i.e.
3p = 1 - p
4p = 1   so p = 1/4 = 0.25
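The algebra above can be verified numerically: at p = 0.25 the two binomial probabilities should satisfy the given relation exactly. A minimal check:

```python
from math import comb

# Verify that p = 0.25 satisfies 9 P(X = 4) = P(X = 2) for X ~ B(6, p)
p, q, n = 0.25, 0.75, 6

def pmf(x):
    return comb(n, x) * p**x * q**(n - x)

lhs = 9 * pmf(4)
rhs = pmf(2)
print(abs(lhs - rhs) < 1e-12)  # True
```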
12.5 Poisson Distribution
The Poisson distribution was discovered by the French mathematician and physicist Simeon Denis Poisson in 1837. The Poisson distribution is also a discrete distribution; he derived it as a limiting case of the binomial distribution. For n trials the binomial distribution is (p + q)^n, and the probability of x successes is given by P(X = x) = C(n, x) p^x q^(n-x). If the number of trials n is very large and the probability of success p is very small, so that the product np = m remains non-negative and finite, the binomial distribution tends to the Poisson distribution.
12.5.1 Definition:
A discrete random variable X, taking one of the countably infinite values 0, 1, 2, ..., is said to follow a Poisson distribution with parameter 'λ' or 'm' if, for some m > 0, its p.m.f. is given by
P(X = x) = e^(-m) m^x / x!,  x = 0, 1, 2, ...;  m > 0
         = 0, otherwise
Note: e^(-m) >= 0, m^x >= 0 and x! > 0, hence e^(-m) m^x / x! >= 0.
Note: e^m = m^0/0! + m^1/1! + m^2/2! + ... = Σ_{x=0}^∞ m^x/x!, and e = 1/0! + 1/1! + 1/2! + ... = 2.71828...
Since e^(-m) >= 0, P(x) >= 0 for all x, and
Σ P(x) = Σ_{x=0}^∞ e^(-m) m^x/x! = e^(-m) Σ_{x=0}^∞ m^x/x! = e^(-m) . e^m = 1

It is denoted by X → P(m) or X → P(λ).
Since the number of trials is very large and the probability of success p is very small, the event in question is a rare event. Therefore the Poisson distribution relates to rare events.
12.5.2 Mean and Variance of Poisson Distribution
Let X be a Poisson random variable with parameter 'm' (or 'λ'), so that its p.m.f. is
P(X = x) = e^(-m) m^x / x!,  x = 0, 1, 2, ...;  m > 0
         = 0, otherwise
Mean = E(X) = Σ_{x=0}^∞ x P(x) = Σ_{x=0}^∞ x e^(-m) m^x / x!
The term corresponding to x = 0 is zero, so
E(X) = m e^(-m) Σ_{x=1}^∞ m^(x-1)/(x-1)! = m e^(-m) . e^m = m
Hence Mean = E(X) = μ = m
Further, E(X²) = E[X(X-1)] + E[X], where
E[X(X-1)] = Σ_{x=0}^∞ x(x-1) P(x) = Σ_{x=0}^∞ x(x-1) e^(-m) m^x / x!
          = m² e^(-m) Σ_{x=2}^∞ m^(x-2)/(x-2)! = m² e^(-m) . e^m = m²
So E(X²) = E[X(X-1)] + E[X] = m² + m
Variance(X) = E(X²) - [E(X)]² = (m² + m) - m² = m
Standard deviation (S.D.)(X) = σ = √Var(X) = √m

Thus the mean and variance of the Poisson distribution are equal, and each is equal to the parameter of the distribution ('m' or 'λ').
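The fact that the mean and variance of a Poisson variable coincide can be confirmed numerically by summing the p.m.f. directly. The sketch below truncates the infinite sum at 100 terms, which is an assumption; the tail is negligible for the m used here:

```python
from math import exp, factorial

# Numerically verify E(X) = Var(X) = m for a Poisson p.m.f., here with m = 3
m = 3.0
pmf = [exp(-m) * m**x / factorial(x) for x in range(100)]

mean = sum(x * p for x, p in enumerate(pmf))
second_moment = sum(x * x * p for x, p in enumerate(pmf))
var = second_moment - mean**2

print(round(mean, 6), round(var, 6))  # both 3.0
```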
12.5.3 Applications of Poisson Distribution
Some examples of Poisson variates are:
1) The number of blind children born in a town in a particular year.
2) The number of mistakes committed in a typed page.
3) The number of students scoring very high marks in all subjects.
4) The number of plane accidents in a particular week.
5) The number of defective screws in a box of 100 manufactured by a reputed company.
6) The number of accidents on the expressway in one day.
7) The number of earthquakes occurring in one year in a particular seismic zone.
8) The number of suicides reported on a particular day.
9) The number of deaths of policy holders in one year.
Conditions:
The Poisson distribution is the limiting case of the binomial distribution under the following conditions:
1. The number of trials n is indefinitely large, i.e., n → ∞
2. The probability of success p for each trial is very small, i.e., p → 0
3. np = m (say) is finite, m > 0
Characteristics of Poisson Distribution:
Following are the characteristics of the Poisson distribution:
1. Discrete distribution: The Poisson distribution is a discrete distribution like the binomial distribution, where the random variable assumes a countably infinite number of values 0, 1, 2, ....
2. The values of p and q: It is applied in situations where the probability of success p of an event is very small, that of failure q is very high (almost equal to 1), and n is very large.
3. The parameter: The parameter of the Poisson distribution is m. If the value of m is known, all the probabilities of the Poisson distribution can be ascertained.

4. Values of constants: Mean = m = variance, so that the standard deviation = √m. A Poisson distribution may have either one or two modes.
5. Additive property: If X₁ and X₂ are two independent Poisson variables with parameters m₁ and m₂ respectively, then (X₁ + X₂) also follows the Poisson distribution with parameter (m₁ + m₂), i.e. (X₁ + X₂) → P(m₁ + m₂).
6. As an approximation to the binomial distribution: The Poisson distribution can be taken as a limiting form of the binomial distribution when n is large and p is very small, in such a way that the product np = m remains constant.
7. Assumptions: The Poisson distribution is based on the following assumptions:
i) The occurrence or non-occurrence of an event does not influence the occurrence or non-occurrence of any other event.
ii) The probability of success in a short time interval or a small region of space is proportional to the length of the time interval or the size of the region, as the case may be.
iii) The probability of the happening of more than one event in a very small interval is negligible.
Example 1:
Suppose on an average 1 house in 1000 in a certain district has a fire during a year. If there are 2000 houses in that district, what is the probability that exactly 5 houses will have a fire during the year? [Given that e^(-2) = 0.13534]
Solution:
Mean = np, with n = 2000 and p = 1/1000
m = np = 2000 × (1/1000) = 2
According to the Poisson distribution, P(X = x) = e^(-m) m^x / x!,  x = 0, 1, 2, ...;  m > 0
P(X = 5) = e^(-2) 2^5 / 5! = (0.13534 × 32)/120 = 0.036 (approx.)

Example 2:
In a Poisson distribution 3P(X = 2) = P(X = 4). Find the parameter m.
Solution: P(X = x) = e^(-m) m^x / x!,  x = 0, 1, 2, ...;  m > 0
Given 3P(X = 2) = P(X = 4):
3 e^(-m) m^2/2! = e^(-m) m^4/4!
m^2 = 3 . 4!/2! = 36
m = ±6
Since the mean is always positive, m = 6.
Example 3:
If 2% of the electric bulbs manufactured by a certain company are defective, find the probability that in a sample of 200 bulbs i) less than 2 bulbs ii) more than 3 bulbs are defective. [e^(-4) = 0.0183]
Solution:
The probability of a defective bulb is p = 2/100 = 0.02.
Given that n = 200; since p is small and n is large, we use the Poisson distribution with mean m = np.
m = np = 200 × 0.02 = 4
Poisson probability function: P(X = x) = e^(-m) m^x / x!,  x = 0, 1, 2, ...
i) Probability that less than 2 bulbs are defective:
P(X < 2) = P(X = 0) + P(X = 1)
         = e^(-4) 4^0/0! + e^(-4) 4^1/1! = e^(-4)(1 + 4) = e^(-4) × 5 = 0.0183 × 5 = 0.0915

ii) Probability of getting more than 3 defective bulbs:
P(X > 3) = 1 - P(X <= 3)
         = 1 - {P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)}
         = 1 - e^(-4)(1 + 4 + 4^2/2! + 4^3/3!)
         = 1 - 0.0183 × (1 + 4 + 8 + 10.67)
         = 0.567
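As a sketch, the two bulb probabilities can be recomputed with the exact value of e^(-4) rather than the tabulated 0.0183; the results differ from the worked figures only in the last decimal place:

```python
from math import exp, factorial

# Poisson approximation for the bulb example: n = 200, p = 0.02, m = np = 4
m = 4

def pmf(x):
    return exp(-m) * m**x / factorial(x)

p_less_than_2 = pmf(0) + pmf(1)                    # e^-4 * 5
p_more_than_3 = 1 - sum(pmf(x) for x in range(4))  # 1 - e^-4 * (1 + 4 + 8 + 32/3)

print(round(p_less_than_2, 4), round(p_more_than_3, 4))  # 0.0916 0.5665
```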
Example 4:
A company's previous records show that on an average 3 workers are absent without leave per shift. Find the probability that in a shift
i) exactly 2 workers are absent
ii) more than 4 workers will be absent
iii) at most 3 workers will be absent
Solution:
This is a case of Poisson distribution with parameter m = 3.
i) P(X = 2) = e^(-3) 3^2/2! = 0.2240
ii) P(X > 4) = 1 - P(X <= 4)
            = 1 - {P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)}
            = 1 - e^(-3)(3^0/0! + 3^1/1! + 3^2/2! + 3^3/3! + 3^4/4!) = 0.1847
iii) P(X <= 3) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)
             = e^(-3)(3^0/0! + 3^1/1! + 3^2/2! + 3^3/3!) = e^(-3) × 13 = 0.6472
Example 5:
The number of accidents on the Pune-Mumbai expressway each day is a Poisson random variable with an average of three accidents per day. What is the probability that no accident will occur today?
Solution:
This is a case of Poisson distribution with m = 3:
P(X = 0) = e^(-3) 3^0/0! = e^(-3) = 0.0498

Example 6:
The number of errors on a single page has a Poisson distribution with an average of one error per page. Calculate the probability that there is at least one error on a page.
Solution:
This is a case of Poisson distribution with m = 1:
P(X >= 1) = 1 - P(X = 0) = 1 - e^(-1) 1^0/0! = 1 - e^(-1) = 0.6321
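Examples 5 and 6 both reduce to evaluating P(X = 0) for a Poisson model; a two-line check of both results:

```python
from math import exp

# Example 5: m = 3 accidents/day, P(no accident today) = e^-3
p_no_accident = exp(-3)
# Example 6: m = 1 error/page, P(at least one error) = 1 - P(X = 0) = 1 - e^-1
p_at_least_one_error = 1 - exp(-1)

print(round(p_no_accident, 4), round(p_at_least_one_error, 4))  # 0.0498 0.6321
```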
12.6 Summary
In this chapter the discrete distributions (Uniform, Bernoulli, Binomial and Poisson) were discussed, together with their means, variances and applications.

Distribution   Definition                                                   Mean E(X)   Variance(X)
Uniform        P(X = x) = 1/n, x = 1, 2, ..., n; 0 otherwise                (n+1)/2     (n²-1)/12
Bernoulli      P(X = x) = p^x q^(1-x), x = 0, 1; 0 otherwise                p           pq
Binomial       P(X = x) = C(n,x) p^x q^(n-x), x = 0, 1, ..., n; 0 otherwise np          npq
Poisson        P(X = x) = e^(-m) m^x/x!, x = 0, 1, 2, ...; m > 0; 0 otherwise m         m
(For Bernoulli and Binomial: 0 <= p <= 1, p + q = 1.)

12.7 Reference
Fundamentals of Mathematical Statistics, S. C. Gupta, V. K. Kapoor

12.8 Unit End Exercise
1 A random variable X has the discrete uniform distribution
P(X = x) = 1/(n+1), x = 0, 1, ..., n. Find E(X) and Var(X).
[Hints and Answers: n/2, n(n+2)/12]
2 Let X follow a discrete uniform distribution over 11, 12, ..., 20.
Find i) P(X > 15) ii) P(12 <= X <= 18) iii) P(X <= 14) iv) Mean and S.D. of X
[Hints and Answers: i) 0.5 ii) 0.7 iii) 0.4 iv) Mean = 15.5 and S.D. = 2.8723]
3 Let X be the roll number of a student selected at random from 20 students bearing roll numbers 1 to 20. Write the p.m.f. of X.
[Hints and Answers:
P(X = x) = 1/20,  x = 1, 2, 3, ..., 20
         = 0,  otherwise]
4 Let X and Y be two independent discrete uniform r.v.s assuming values 1, 2, ..., 10. Find the p.m.f. of Z = X + Y. Also obtain i) P(Z = 13) ii) P(Z <= 12).
[Hints and Answers: i) 0.08 ii) 0.64]
5 A radar system has a probability of 0.1 of detecting a certain target during a single scan. Find the probability that the target will be detected i) at least twice in four scans ii) at most once in four scans.
[Hints and Answers: i) 0.0523 ii) 0.9477]
6 The probability that any person 65 years old will be dead within a year is 0.05. Find the probability that out of a group of 7 such persons (i) exactly one, (ii) none, (iii) at least one, (iv) at most one will be dead within a year.
[Hints and Answers: i) 0.2573 ii) 0.6983 iii) 0.3017 iv) 0.9556]
7 For a B(5, p) distribution, P(X = 1) = 0.0768 and P(X = 2) = 0.2304. Find the value of p.
[Hints and Answers: 0.6]

8 Suppose X → B(n, p).
i) If E(X) = 6 and Var(X) = 4.2, find n and p.
ii) If p = 0.6 and E(X) = 6, find n and Var(X).
iii) If n = 25 and E(X) = 10, find p and Var(X).
[Hints and Answers: i) 20, 0.3 ii) 10, 2.4 iii) 0.4, 6]
9 In the summer season a truck driver experiences on an average one puncture in 1000 km. Applying the Poisson distribution, find the probability that there will be
i) no puncture ii) two punctures in a journey of 3000 km.
[Hints and Answers: i) 0.049787 ii) 0.224042]
10 A book contains 400 misprints distributed randomly throughout its 400 pages. What is the probability that a page observed at random contains at least two misprints?
[Hints and Answers: 0.264]

UNIT 5
13
DISTRIBUTIONS: CONTINUOUS DISTRIBUTIONS
Unit Structure
13.0 Objectives
13.1 Introduction
13.2 Uniform Distribution
 13.2.1 Definition
 13.2.2 Mean and Variance of Uniform Distribution
 13.2.3 Applications of Uniform Distribution
13.3 Exponential Distribution
 13.3.1 Definition
 13.3.2 Mean and Variance of Exponential Distribution
 13.3.3 Distribution Function
 13.3.4 Applications of Exponential Distribution
13.4 Normal Distribution
 13.4.1 Definition
 13.4.2 Properties of Normal Distribution
 13.4.3 Properties and Applications of Normal Distribution
 13.4.4 Z table and its User Manual
13.5 Summary
13.6 Reference
13.7 Unit End Exercise

13.0 Objectives
After going through this unit, you will be able to:
- become familiar with the concept of a continuous random variable
- understand how to study a continuous type distribution
- find the subject interesting and identify its applications
- become familiar with the applications and properties of the exponential distribution
- understand why the life time distributions of electronic equipment can be assumed to be exponentially distributed
- understand the importance of normal distributions in the theory of distributions
- study different properties of the normal distribution useful in practice to compute probabilities and expected values
13.1 Introduction
In the continuous set-up it is often observed that the variable follows a specific pattern. This behaviour of the random variable can be described in a mathematical form called a 'probability model'.
In this unit we will study important continuous probability distributions: Uniform, Exponential and Normal.
13.2 Uniform Distribution
In this distribution, the p.d.f. of the random variable remains constant over the range space of the variable.
13.2.1 Definition:
Let X be a continuous random variable on an interval (c, d). X is said to have a uniform distribution if its p.d.f. is constant over the range (c, d). Further, the probability distribution must satisfy the axioms of probability.
Notation: X → U[c, d]
This distribution is also known as the 'Rectangular Distribution', as its graph describes a rectangle over the X axis between the ordinates at x = c and x = d.

Probability Density Function (p.d.f.)
The p.d.f. of a uniform random variable is given by
f(x) = 1/(d - c),  if c < x < d
     = 0,          otherwise
Note that f(x) >= 0, so Axiom I is satisfied, and
∫ f(x)dx = ∫_c^d 1/(d - c) dx = [x/(d - c)]_c^d = (d - c)/(d - c) = 1, so Axiom II is satisfied.
(c, d) are the parameters of the distribution.
Note: Whenever the values of the parameters are known, the distribution is known completely. Knowing the parameter or parameters we can compute the probabilities of all events, as well as quantities such as the mean, variance, etc. Different values of the parameters specify different probability distributions of the same kind.
A special case of the uniform distribution very commonly used in digital electronics is the step function. This is a uniform distribution with parameters (0, 1), and its p.d.f. is
f(x) = 1 if 0 < x < 1
     = 0 otherwise
13.2.2 Mean and Variance of Uniform Distribution
The mean of a uniform random variable with parameters (c, d) is
Mean = E(X) = ∫ x f(x)dx = ∫_c^d x/(d - c) dx = (1/(d - c))[x²/2]_c^d = (d² - c²)/(2(d - c)) = (d + c)/2
The variance of a uniform random variable with parameters (c, d) is
Variance(X) = E(X²) - [E(X)]², where
E(X²) = ∫ x² f(x)dx = (1/(d - c))[x³/3]_c^d = (d³ - c³)/(3(d - c)) = (d² + dc + c²)/3
Variance(X) = (d² + dc + c²)/3 - ((d + c)/2)² = (d - c)²/12
Standard deviation (S.D.)(X) = σ = √Var(X) = √((d - c)²/12) = (d - c)/(2√3)
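The closed-form moments above can be confirmed by simple numerical integration of the uniform p.d.f. The midpoint rule and the step count below are arbitrary choices for this sketch:

```python
# Check E(X) = (c + d)/2 and Var(X) = (d - c)^2/12 for U(c, d) by quadrature
c, d = 2.0, 8.0
n = 100_000
h = (d - c) / n
# midpoints of each subinterval; f(x) = 1/(d - c) is constant on (c, d)
xs = [c + (i + 0.5) * h for i in range(n)]

mean_num = sum(xs) * h / (d - c)
var_num = sum(x * x for x in xs) * h / (d - c) - mean_num**2

print(round(mean_num, 4))  # (c + d)/2 = 5.0
print(round(var_num, 4))   # (d - c)^2/12 = 3.0
```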

13.2.3 Applications of Uniform Distribution
1. It is used to represent the distribution of rounding-off errors.
2. It is used in life testing and traffic flow experiments.
3. Being the simplest continuous distribution, it is widely applied in research.
4. If a random variable Y follows any continuous distribution, then its distribution function X = F(Y) can be shown to follow U[0, 1]. This result is useful in sampling from any continuous probability distribution.
Example 1: If the mean and variance of a U[c, d] random variable are 5 and 3 respectively, determine the values of c and d.
Solution: Let X be a random variable which follows the uniform distribution with parameters (c, d): X → U[c, d].
Mean = E(X) = (d + c)/2 = 5, so c + d = 10
Variance(X) = (d - c)²/12 = 3, so (d - c)² = 36, i.e. d - c = 6
Solving c + d = 10 and d - c = 6, we get c = 2 and d = 8.
Example 2: X is a continuous random variable with f(x) = constant (k) over a <= x <= b and 0 elsewhere. Find i) the p.d.f. ii) the mean.
Solution:
i) The p.d.f. is f(x) = k for a <= x <= b, 0 elsewhere. f(x) must satisfy the axioms of probability, hence
∫ f(x)dx = ∫_a^b k dx = k(b - a) = 1
so k = 1/(b - a)
Thus the p.d.f. is f(x) = 1/(b - a) for a <= x <= b, 0 elsewhere.
ii) Mean
E(X) = ∫ x f(x)dx = ∫_a^b x/(b - a) dx = (1/(b - a))[x²/2]_a^b = (b² - a²)/(2(b - a)) = (a + b)/2
Example 3: Let X → U[-a, a]. Find the value of a such that P[|X| > 1] = 6/7.
Solution: P[|X| > 1] = 1 - P[|X| <= 1] = 1 - P[-1 <= X <= 1]
P[-1 <= X <= 1] = ∫_{-1}^{1} f(x)dx = ∫_{-1}^{1} 1/(2a) dx = 1/a
So 1 - 1/a = 6/7, i.e. 1/a = 1/7
Hence a = 7
Example 4: Amar gives a birthday party to his friends. A machine fills the ice cream cups. The quantity of ice cream per cup is uniformly distributed over 200 gms to 250 gms. i) What is the probability that a friend of Amar gets a cup with more than 230 gms of ice cream? ii) If in all twenty five people attended the party and each had two cups of ice cream, what is the expected quantity of ice cream consumed in the party?
Solution: Let X = quantity of ice cream per cup; X → U[c, d] = U[200, 250]
i) P[X > 230] = 1 - P[X <= 230] = 1 - (230 - 200)/(250 - 200) = 1 - 0.6 = 0.4
ii) On an average, the quantity per cup is
Mean = (200 + 250)/2 = 225 gms
So the total quantity consumed = 225 × 2 × 25 = 11250 gms = 11.25 kg
Example 5: X is a uniform continuous random variable with p.d.f. given as
f(x) = 1/8 for 0 <= x <= 8, 0 elsewhere.
Determine i) P(2 <= X <= 5) ii) P(3 <= X <= 7) iii) P(X <= 6) iv) P(X > 6) v) P(2 <= X <= 12)
Solution:
i) P(2 <= X <= 5) = ∫_2^5 (1/8) dx = [x/8]_2^5 = 3/8
ii) P(3 <= X <= 7) = ∫_3^7 (1/8) dx = [x/8]_3^7 = 4/8 = 1/2
iii) P(X <= 6) = ∫_0^6 (1/8) dx = [x/8]_0^6 = 6/8 = 3/4
iv) P(X > 6) = 1 - P(X <= 6) = 1 - 3/4 = 1/4
v) P(2 <= X <= 12) = ∫_2^8 (1/8) dx + ∫_8^12 0 dx = 6/8 - 0 = 3/4
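All five probabilities in Example 5 follow from the uniform c.d.f. F(x) = x/8 on [0, 8]; a small sketch that evaluates them through that c.d.f.:

```python
# Probabilities for X ~ U(0, 8) via its c.d.f. F(x) = (x - c)/(d - c) on [c, d]
def cdf(x, c=0.0, d=8.0):
    if x <= c:
        return 0.0
    if x >= d:
        return 1.0
    return (x - c) / (d - c)

print(cdf(5) - cdf(2))   # P(2 <= X <= 5)  = 3/8
print(cdf(7) - cdf(3))   # P(3 <= X <= 7)  = 1/2
print(cdf(6))            # P(X <= 6)       = 3/4
print(1 - cdf(6))        # P(X > 6)        = 1/4
print(cdf(12) - cdf(2))  # P(2 <= X <= 12) = 3/4
```

Clamping the c.d.f. to [0, 1] outside the support handles case v), where the interval extends beyond the range of X.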
13.3 Exponential Distribution
The exponential distribution is one of the important distributions and has wide applications in operations research, life testing experiments, reliability theory and survival analysis. The life time of an electronic component and the time until decay of a radioactive element are modeled by the exponential distribution. The exponential distribution is closely related to the Poisson distribution: if the number of occurrences of an event follows a Poisson distribution, then the time that elapses between successive occurrences follows an exponential distribution. For example, if the number of patients arriving at a hospital follows a Poisson distribution, then the time gap between the current arrival and the next arrival is exponential. This has specific application in Queuing Theory.
13.3.1 Definition: A continuous random variable X taking non-negative values is said to follow the exponential distribution with mean θ if its probability density function (p.d.f.) is given by
f(x) = (1/θ) e^(-x/θ),  x >= 0, θ > 0
     = 0,               otherwise
Notation: X → Exp(θ)
Note: (1) f(x) >= 0, since x >= 0, θ > 0 and e^(-x/θ) > 0
(2) ∫ f(x)dx = ∫_0^∞ (1/θ) e^(-x/θ) dx = [-e^(-x/θ)]_0^∞ = 1
From (1) and (2) it is clear that f(x) is a p.d.f.
(3) If θ = 1, the distribution is called the standard exponential distribution. Its p.d.f. is
f(x) = e^(-x),  x >= 0
     = 0,       otherwise

13.3.2 Mean and Variance of Exponential Distribution
If X → Exp(θ), then its p.d.f. is given by
f(x) = (1/θ) e^(-x/θ),  x >= 0, θ > 0; 0 otherwise
Mean = E(X) = ∫_0^∞ x f(x)dx = ∫_0^∞ (x/θ) e^(-x/θ) dx = θ
Variance(X) = E(X²) - [E(X)]², where
E(X²) = ∫_0^∞ x² f(x)dx = ∫_0^∞ (x²/θ) e^(-x/θ) dx = 2θ²
Variance(X) = 2θ² - θ² = θ²
Standard deviation (S.D.)(X) = σ = √Var(X) = √θ² = θ
Note: For the exponential distribution, the mean and standard deviation are the same.
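The results E(X) = θ and Var(X) = θ² can be confirmed by numerical quadrature of the exponential p.d.f. The upper integration limit 50θ and the midpoint rule are assumptions of this sketch; the truncated tail is negligible:

```python
from math import exp

# Numerically confirm E(X) = theta and Var(X) = theta^2 for Exp(theta)
theta = 250.0
n = 200_000
h = 50 * theta / n  # integrate over [0, 50*theta]; the tail beyond is ~e^-50

mean = second_moment = 0.0
for i in range(n):
    x = (i + 0.5) * h                 # midpoint of the i-th subinterval
    f = exp(-x / theta) / theta       # exponential p.d.f.
    mean += x * f * h
    second_moment += x * x * f * h

var = second_moment - mean**2
print(round(mean, 2), round(var / theta**2, 4))  # 250.0 and 1.0
```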
13.3.3 Distribution Function of Exponential Distribution, i.e. Exp(θ)
Let X → Exp(θ). Then the distribution function is given by
F_X(x) = P[X <= x] = ∫_0^x f(t)dt = ∫_0^x (1/θ) e^(-t/θ) dt = [-e^(-t/θ)]_0^x
F_X(x) = 1 - e^(-x/θ),  x > 0, θ > 0
Note: When X is the life time of a component, P[X > x] is taken as the reliability function or survival function.
Here P[X > x] = 1 - P[X <= x] = 1 - F_X(x) = e^(-x/θ),  x > 0, θ > 0

13.3.4 Applications of Exponential Distribution
The exponential distribution is used as a model in the following situations:
1) Life time of an electronic component
2) The time between successive radioactive emissions or decays
3) Service time distribution at a service facility
4) Amount of time until an interrupt occurs on a server
5) Time taken to serve a customer at an ATM counter
Example 1: Suppose that the life time of an XYZ company T.V. tube is exponentially distributed with a mean life of 1600 hours. What is the probability that
i) the tube will work up to 2400 hours?
ii) the tube will survive beyond 1000 hours?
Solution: Let X = number of hours that the T.V. tube works; X → Exp(θ) where θ = 1600.
We know that if X → Exp(θ) then P[X <= x] = 1 - e^(-x/θ), x > 0, θ > 0.
i) P[X <= 2400] = 1 - e^(-2400/1600) = 1 - e^(-1.5) = 1 - 0.223130 = 0.77687
ii) P[X > 1000] = e^(-1000/1600) = e^(-0.625) = 0.5353
   (since if X → Exp(θ), P[X > x] = 1 - (1 - e^(-x/θ)) = e^(-x/θ))
Example 2: The life time in hours of a certain electric component follows an exponential distribution with distribution function
F(x) = 1 - e^(-0.004x);  x >= 0
i) What is the probability that the component will survive 200 hours?
ii) What is the probability that it will fail during 250 to 350 hours?
iii) What is the expected life time of the component?
Solution: Let X = life time (in hours) of the electric component; X → Exp(θ), with
F(x) = 1 - e^(-0.004x), so e^(-0.004x) = e^(-x/θ)
Hence -0.004x = -x/θ, i.e. 0.004 = 1/θ, so θ = 1/0.004 = 250 hours
i) P[X > 200] = e^(-x/θ) = e^(-200/250) = e^(-0.8) = 0.449329
ii) P[250 < X < 350] = P[X > 250] - P[X > 350] = e^(-250/250) - e^(-350/250)
                    = e^(-1) - e^(-1.4) = 0.367879 - 0.246597 = 0.121282
iii) Expected life time of the component = E(X) = θ = 250 hours
Example 3: The life time of a microprocessor is exponentially distributed with mean 3000 hours. What is the probability that
i) the microprocessor will fail within 300 hours?
ii) the microprocessor will function for more than 6000 hours?
Solution: Let X = life time (in hours) of the microprocessor; X → Exp(θ) with θ = 3000.
i) P(microprocessor will fail within 300 hours)
 = P(X <= 300) = ∫_0^300 (1/3000) e^(-x/3000) dx = 1 - e^(-0.1) = 0.0952
ii) P(microprocessor will function for more than 6000 hours)
 = P(X >= 6000) = ∫_6000^∞ (1/3000) e^(-x/3000) dx = e^(-2) = 0.1353
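Both integrals in Example 3 reduce to the exponential c.d.f. and survival function, so they can be evaluated without quadrature:

```python
from math import exp

# Exponential life-time model with mean theta = 3000 hours:
# F(x) = 1 - e^(-x/theta), survival S(x) = e^(-x/theta)
theta = 3000.0

def cdf(x):
    return 1 - exp(-x / theta)

p_fail_within_300 = cdf(300)     # 1 - e^-0.1
p_survive_6000 = 1 - cdf(6000)   # e^-2

print(round(p_fail_within_300, 4), round(p_survive_6000, 4))  # 0.0952 0.1353
```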
13.4 Normal Distribution
In this section we deal with the most important continuous distribution, known as the normal probability distribution or simply the normal distribution. It is important because it plays a vital role in theoretical and applied statistics, and it is one of the most commonly used distributions. Variables such as intelligence quotient, height of a person, weight of a person, and errors in measurement of physical quantities follow the normal distribution. It is useful in statistical quality control, statistical inference, reliability theory, operations research, and educational and psychological statistics. The normal distribution works as a limiting distribution to several distributions such as the Binomial and Poisson.
The normal distribution was first discovered by DeMoivre (English mathematician) in 1733 as a limiting case of the binomial distribution. Later it was applied in natural and social science by Laplace (French mathematician) in 1777. The normal distribution is also known as the Gaussian distribution in honour of Karl Friedrich Gauss (1809).
13.4.1 Definition:
A continuous random variable X is said to follow the normal distribution with parameters mean μ and variance σ², if its probability density function (p.d.f.) is
f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²));  -∞ < x < ∞; -∞ < μ < ∞; σ > 0
Note:
1. The mean μ and variance σ² are called the parameters of the normal distribution.
   Notation: the normal distribution is expressed by X → N(μ, σ²)
2. If μ = 0 and σ² = 1, the normal variable is called the standard normal variable, generally denoted by Z; that is, X → N(μ, σ²) becomes Z → N(0, 1). The p.d.f. of Z is
f(z) = (1/√(2π)) e^(-z²/2);  -∞ < z < ∞; with π = 3.14159 and e = 2.71828
The advantage of the above function is that it doesn't contain any parameter. This enables us to tabulate the area under the normal probability curve.
3. The probability density curve of N(μ, σ²) is bell shaped, symmetric about μ and mesokurtic. Naturally, the curve of the standard normal distribution is symmetric around zero.

4. The maximum height of probability density curve is 1 / ( σ ξʹߨሻ
5. As the curve is symmetric about —, the mean, median and mode c oincide
and all are equal to —.
6. The parameter σ2 is also the variance of ;, hence the standard deviation
(s.d.) (X) = σ
Relation between N(μ, σ2) and N(0, 1)
If X → N(μ, σ2) then = ሺ࢞ିρሻ
࣌՜ۼሺ૙ǡ૚ሻǤ This result is useful while computing
probabilities of N(μ, σ2) variable. The statistical table give probabilities of a
standard normal i.e. N(0,1) variable.
13.4.2 Properties of Normal Distribution:
- The normal curve is bell shaped and is symmetric about x = μ
- Mean, median, and mode of the distribution coincide, i.e., Mean = Median = Mode = μ
- It has only one mode, at x = μ (i.e., it is unimodal)
- The points of inflection are at x = μ ± σ
- The x axis is an asymptote to the curve (i.e. the curve continues to approach but never touches the x axis)
- The first and third quartiles are equidistant from the median
- The mean deviation about the mean is approximately 0.8σ
- Quartile deviation = 0.6745σ
- If X and Y are independent normal variates with means μ₁ and μ₂ and variances σ₁² and σ₂² respectively, then their sum (X + Y) is also a normal variate with mean (μ₁ + μ₂) and variance (σ₁² + σ₂²)
munotes.in

Page 221

221Chapter 13: Distributions: Continuous Distributions¾ Area Property P PVuPV 0.6826
P PVuPV 0.9544
P PVuPV 0.9973 1.4. Properties and Applications of Normal Distribution Let ; be random variable which follows normal distribution with m e a n P a n d
variance V2 .The standard normal variate is defined as ܼൌሺ௑ିρሻ
஢ which follows
standard normal distribution with mean 0 and standard deviation 1 i.e. = → N
(0,1). The p.d.f. of = is
݂ሺݖሻൌͳ
ξʹߨ݁ି௭మ
ଶǢെλ ൏ ݖ ൏ λǢߨ݄ݐ݅ݓ ൌ ͵ǤͳͶͳͳͷͻ݁݀݊ܽ ൌ ʹǤ͹ͳͺʹͺ
The advantage of the above function is that it doesn’t contain any parameter. This
enable us to compute the area under the normal probability curv e.
Area Properties of Normal Curve:
The total area under the normal probability curve is 1. The cur ve is also called
standard probability curve. The area under the curve between th e ordinates at x
a and x b where a  b, represents the probabilities that x li es between x a and
x b
i.e., P(a d x d b)
To find any probabil ity value of ;, we first standardize it by using ܼൌሺ௑ିρሻ
஢, and
use the area probability normal table.
For e.g. the probability that the normal random variable ; to l ie in the interval
(—- σ, μ + σ) is given by
munotes.in

Page 222

222NUMERICAL AND STATISTICAL METHODS
P PV[PV P d]d 
2P(0  z  1) (Due to symme try)
2 (0.3413) (from the area ta ble)
0.6826 P PV[PV P(-2  z  2)
2P(0  z  2) (Due to s ymmetry)
2 (0.4772)
0.9544
P PV[PV P ]
2P(0  z  3) (Due to s ymmetry)
2 (0.49865) 0.9973



munotes.in

Page 223

223Chapter 13: Distributions: Continuous DistributionsThus we expect that the values in a normal probability curve will lie between the
range
Pr 3V, though theoretically it range from f to f.

Example 1: Find the probability that the standard normal variate lies betw een 0
and 1.56
Solution: P(0  z  1.56) Area between z 0 and z 1.56
0.4406 (f rom table)

munotes.in

Page 224

224NUMERICAL AND STATISTICAL METHODSExample2: Find the area of the standard normal variate from –1.96 to 0.
Solution: Area between z 0 z 1.96 is same as the area z 1.96 to z 0
P(-1.96  z  0) P(0  z  1.96) (by symmetry)
0.4750 (from the table)


Example : Find the area to the right of z 0.25
Solution: P(z ! 0.25) P(0  z  f) – P(0  z  0.25)
0.5000 - 0.0987 (from the table) 0.4013




munotes.in

Page 225

225Chapter 13: Distributions: Continuous DistributionsExample 4: Find the area to the left of z 1.5
Solution: P(z  1.5) P(- ∞ < z < 0 )  P(0  z  1.5)
0.5  0.4332 (from the tab le)
0.9332
Example 5: Find the area of the standard normal variate between –1.96 and 1.5
Solution: P(-1.96  z  1.5) P (-1.96  z  0)  P(0  z  1.5)
P (0  z  1.96)  P(0  z  1.5)
0.4750  0.4332 (from the table)
0.9082
Example 6: Given a normal distribution with P 50 and V 8, find the
probability that x assumes a value between 42 and 64
Solution: Given that P 50 and V 8 The standard normal variate ܼൌሺ௑ିρሻ
஢
If X = 42, Z₁ = (42 - 50)/8 = -8/8 = -1
If X = 64, Z₂ = (64 - 50)/8 = 14/8 = 1.75
So P(42 < x < 64) = P(-1 < z < 1.75)
 = P(-1 < z < 0) + P(0 < z < 1.75)
 = P(0 < z < 1) + P(0 < z < 1.75)   (by symmetry)
 = 0.3413 + 0.4599 (from the table)
 = 0.8012

Example 7: Students of a class were given an aptitude test. Their marks were found to be normally distributed with mean 60 and standard deviation 5. What percentage of students scored
i) more than 60 marks (ii) less than 56 marks (iii) between 45 and 65 marks?
Solution: Given μ = 60 and σ = 5. The standard normal variate is Z = (X - μ)/σ.
i) If X = 60, Z = (60 - 60)/5 = 0
So P(x > 60) = P(z > 0) = P(0 < z < ∞) = 0.5000
Hence the percentage of students who scored more than 60 marks is 0.5000 × 100 = 50%
munotes.in

Page 227

227Chapter 13: Distributions: Continuous Distributions
ii) If ; 56, ܼൌሺ௑ିρሻ
஢ൌሺହ଺ି଺଴ሻ
ହൌିସ
ହ - 0.8
? P(x  56) P(z  -0.8)
P(- f  z  0) – P(0.8  z  0) (by symmetry)
P(0  z  f) – P(0  z  0.8)
0.5  0.2881 (from the table)
0.2119
Hence the percentage of students score less than 56 marks is
0.2119 (100) 21.19 
iii) If ; 45, ܼൌሺ௑ିρሻ
஢ൌሺସହି଺଴ሻ
ହൌିଵହ
ହ -3
; 65, ܼൌሺ௑ିρሻ
஢ൌሺ଺ହି଺଴ሻ
ହൌହ
ହ 1
P(45  x  65) P( 3  z  1)
P( 3  z  0)  P (0  z  1)
P(0  z  3)  P(0  z  1) (by symmetry)
0.4986  0.3413 (from the table)
0.8399
Hence the percentage of students scored between 45 and 65 marks i s
0.8399 (100) 83.99 
munotes.in

Page 228

228NUMERICAL AND STATISTICAL METHODS
Example : ; is normal distribution with mean 2 and standard deviation 3. Find
the value of the variable x s uch that the probability of the in terval from mean to
that value is 0.4115
Solution: Given — 2, σ 3
Suppose z 1 is required standard value,
Thus P (0  z  z 1) 0.4115
From the table the value co rresponding to the area 0.4115 is 1. 35 (i.e. z 1 1.35)
ࢆ૚ൌሺࢄെρሻ
ોൌሺ܆െ૛ሻ
૜ൌ૚ Ǥ૜ ૞
(; – 2) 3 1.35
; 4.05  2 6.05 Example : In a normal distribution 31  of the items are under 45 and 8  are
over 64. Find the mean a nd variance of the distribution.
Solution: Let x denotes the items are given and it follows the normal d istribution
with mean — and standard deviation σ
The points x 45 and x 64 are located as shown in the figure .
Since 31  of items are under x 45,
position of x into the left of the ordinate x —
Since 8  of items are above x 64,
position of this x is to the right of ordinate x —


229Chapter 13: Distributions: Continuous DistributionsWhen ; 45, ܼൌሺ௑ିρሻ
஢ൌሺସହିρሻ
஢ - z 1 (Say)
Since x is left of x P, =1 is taken as negative
When ; 64, ܼൌሺ௑ିρሻ
஢ൌሺ଺ସିρሻ
஢ z 2 (Say)

From the diagram P(x  45) 0.31
P(z  - z 1) 0.31
P(- z 1  z  0) P(- f z  0) – P(- f  z  z 1)
P(- z 1  z  0) 0.5 - 0.31 0.19
P(0  z  z 1) 0.19 (by symmetry)
z 1 0.50 (from the table)
Also from the diagram P(; ! 64) 0.08
P(0  z  z 2) P(0  z  f) – P(z 2  z  f)
0.5 - 0.08 0.42
z 2 1.40 (from the table)
Substituting the values of z1 and z2 we get
Z = (X − μ)/σ = (45 − μ)/σ = −0.50
Z = (X − μ)/σ = (64 − μ)/σ = 1.40
Solving, μ − 0.50σ = 45 ----- (1)
μ + 1.40σ = 64 ----- (2)
(2) − (1) ⇒ 1.90σ = 19 ⇒ σ = 10
Substituting σ = 10 in (1):
μ = 45 + 0.50(10) = 45 + 5.0 = 50.0
Hence mean = 50 and variance σ² = 100.

1.4.4 Z table and its User Manual
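The table lookups for z1 and z2 in the example above can also be done by inverting the standard normal CDF in software. A sketch using Python's `statistics.NormalDist` (Python 3.8+); the worked solution rounds z1 and z2 to two decimals, which explains the small difference from the exact values:

```python
from statistics import NormalDist

std = NormalDist()              # standard normal: mean 0, standard deviation 1
z1 = std.inv_cdf(0.31)          # about -0.4959; the table rounds this to -0.50
z2 = std.inv_cdf(1 - 0.08)      # about  1.4051; the table rounds this to  1.40

# Solve mu + z1*sigma = 45 and mu + z2*sigma = 64 simultaneously:
sigma = (64 - 45) / (z2 - z1)
mu = 45 - z1 * sigma

print(round(mu, 1), round(sigma, 1))  # close to the worked answer: mu = 50, sigma = 10
```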


FIGURE 2: Patterns for finding areas under the standard normal curve
(a) Area between 0 and a given z value: read it directly from Table A.
(b) Area between z values on either side of 0: add the area from 0 to z1 and the area from 0 to z2.
(c) Area between z values on the same side of 0: subtract the area from 0 to z1 from the area from 0 to z2.
(d) Area to the right of a positive z value (or to the left of a negative z value): subtract the area from 0 to z1 from 0.5.
(e) Area to the right of a negative z value (or to the left of a positive z value): add the area from 0 to z1 to 0.5000.


TABLE A  Areas of a Standard Normal Distribution
(Alternate Version of Table)
The table entries represent the area under the standard normal curve from
0 to the specified value of z.

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0  .0000  .0040  .0080  .0120  .0160  .0199  .0239  .0279  .0319  .0359
0.1  .0398  .0438  .0478  .0517  .0557  .0596  .0636  .0675  .0714  .0753
0.2  .0793  .0832  .0871  .0910  .0948  .0987  .1026  .1064  .1103  .1141
0.3  .1179  .1217  .1255  .1293  .1331  .1368  .1406  .1443  .1480  .1517
0.4  .1554  .1591  .1628  .1664  .1700  .1736  .1772  .1808  .1844  .1879
0.5  .1915  .1950  .1985  .2019  .2054  .2088  .2123  .2157  .2190  .2224
0.6  .2257  .2291  .2324  .2357  .2389  .2422  .2454  .2486  .2517  .2549
0.7  .2580  .2611  .2642  .2673  .2704  .2734  .2764  .2794  .2823  .2852
0.8  .2881  .2910  .2939  .2967  .2995  .3023  .3051  .3078  .3106  .3133
0.9  .3159  .3186  .3212  .3238  .3264  .3289  .3315  .3340  .3365  .3389
1.0  .3413  .3438  .3461  .3485  .3508  .3531  .3554  .3577  .3599  .3621
1.1  .3643  .3665  .3686  .3708  .3729  .3749  .3770  .3790  .3810  .3830
1.2  .3849  .3869  .3888  .3907  .3925  .3944  .3962  .3980  .3997  .4015
1.3  .4032  .4049  .4066  .4082  .4099  .4115  .4131  .4147  .4162  .4177
1.4  .4192  .4207  .4222  .4236  .4251  .4265  .4279  .4292  .4306  .4319
1.5  .4332  .4345  .4357  .4370  .4382  .4394  .4406  .4418  .4429  .4441
1.6  .4452  .4463  .4474  .4484  .4495  .4505  .4515  .4525  .4535  .4545
1.7  .4554  .4564  .4573  .4582  .4591  .4599  .4608  .4616  .4625  .4633
1.8  .4641  .4649  .4656  .4664  .4671  .4678  .4686  .4693  .4699  .4706
1.9  .4713  .4719  .4726  .4732  .4738  .4744  .4750  .4756  .4761  .4767
2.0  .4772  .4778  .4783  .4788  .4793  .4798  .4803  .4808  .4812  .4817
2.1  .4821  .4826  .4830  .4834  .4838  .4842  .4846  .4850  .4854  .4857
2.2  .4861  .4864  .4868  .4871  .4875  .4878  .4881  .4884  .4887  .4890
2.3  .4893  .4896  .4898  .4901  .4904  .4906  .4909  .4911  .4913  .4916
2.4  .4918  .4920  .4922  .4925  .4927  .4929  .4931  .4932  .4934  .4936
2.5  .4938  .4940  .4941  .4943  .4945  .4946  .4948  .4949  .4951  .4952
2.6  .4953  .4955  .4956  .4957  .4959  .4960  .4961  .4962  .4963  .4964
2.7  .4965  .4966  .4967  .4968  .4969  .4970  .4971  .4972  .4973  .4974
2.8  .4974  .4975  .4976  .4977  .4977  .4978  .4979  .4979  .4980  .4981
2.9  .4981  .4982  .4982  .4983  .4984  .4984  .4985  .4985  .4986  .4986
3.0  .4987  .4987  .4987  .4988  .4988  .4989  .4989  .4989  .4990  .4990
3.1  .4990  .4991  .4991  .4991  .4992  .4992  .4992  .4992  .4993  .4993
3.2  .4993  .4993  .4994  .4994  .4994  .4994  .4994  .4995  .4995  .4995
3.3  .4995  .4995  .4995  .4996  .4996  .4996  .4996  .4996  .4996  .4997
3.4  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4998
3.5  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998
3.6  .4998  .4998  .4998  .4999  .4999  .4999  .4999  .4999  .4999  .4999

For values of z greater than or equal to 3.70, use 0.4999 to approximate the
shaded area under the standard normal curve.
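Each entry of Table A is Φ(z) − 0.5, the area from 0 to z, so the table can be regenerated (or extended to values not listed) programmatically. A minimal sketch using Python's standard library:

```python
from statistics import NormalDist

std = NormalDist()  # standard normal distribution

def table_area(z):
    """Table A entry: area under the standard normal curve from 0 to z."""
    return std.cdf(z) - 0.5

# Spot-check against entries used in the worked examples:
print(round(table_area(0.80), 4))  # 0.2881
print(round(table_area(1.35), 4))  # 0.4115
print(round(table_area(3.00), 4))  # 0.4987
```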
1.5 Summary
In this chapter, continuous distributions and two of their types, the Uniform and
the Exponential, were discussed along with their means, variances, and applications.
A special and very useful distribution called the Normal distribution, and its
applications in day-to-day life, were also discussed.

Distribution           | Definition                                          | Mean E(X) | Variance V(X)
Uniform, X → U[c, d]   | f(x) = 1/(d − c) if c < x < d; 0 otherwise          | (d + c)/2 | (d − c)²/12
Exponential            | f(x) = (1/θ)e^(−x/θ) if x ≥ 0, θ > 0; 0 otherwise   | θ         | θ²
Normal, X → N(μ, σ²)   | f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)),            | μ         | σ²
                       |   −∞ < x < ∞, −∞ < μ < ∞, σ > 0                     |           |
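The summary formulas can be sanity-checked numerically. A sketch for the uniform case, with the interval [2, 8] chosen arbitrarily for the check, comparing the closed-form mean and variance against a Monte Carlo estimate:

```python
import random

c, d = 2.0, 8.0                # arbitrary uniform interval chosen for this check
mean = (d + c) / 2             # formula from the table: (d + c)/2 = 5.0
variance = (d - c) ** 2 / 12   # formula from the table: (d - c)^2/12 = 3.0

random.seed(0)                 # fixed seed so the check is repeatable
xs = [random.uniform(c, d) for _ in range(100_000)]
m = sum(xs) / len(xs)                          # sample mean
v = sum((x - m) ** 2 for x in xs) / len(xs)    # sample variance

print(mean, variance)          # 5.0 3.0
print(round(m, 2), round(v, 2))  # close to 5.0 and 3.0
```

The sample estimates approach the closed-form values as the sample size grows, in line with the law of large numbers.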

1.6 Unit End Exercise
1. If X → U(2, 8), state the mean and variance of X.
[Hints and Answers: Mean = 5, Variance = 3]
2. If X is a continuous random variable having a uniform distribution with parameters
'a' and 'b' such that mean = 1 and variance = 3, find P(X < 0).
[Hints and Answers: 1/3]
3. On a route, buses travel at half-hourly intervals, starting at 8.00 am. On
a given day, a passenger arrives at the stop at a time X, which is uniformly
distributed over the interval [8.15 am, 8.45 am]. What is the probability that the
passenger will have to wait for more than 15 minutes for the bus?
[Hints and Answers: 1/2]
4. The mileage which car owners get with a certain kind of radial tyre (measured
in '000 km) is an exponential random variable with mean 40. Find the probabilities
that one of these tyres will last
i) At least 20000 km
ii) At most 30000 km
[Hints and Answers: i) 0.6065 ii) 0.5276]
5. The lifetime of a microprocessor is exponentially distributed with mean 3000
hours. Find the probability that
i) The microprocessor will fail within 300 hours
ii) The microprocessor will function for more than 6000 hours
[Hints and Answers: i) 0.0951 ii) 0.1353]
6. The time until the next earthquake occurs in a particular region is assumed to be
exponentially distributed with mean 1/2 year. Find the probability that the
next earthquake happens
i) Within 2 years
ii) After one and a half years
[Hints and Answers: i) 0.9817 ii) 0.0497]
7. Let X be a normally distributed random variable with parameters (100, 25).
Calculate:
i) P(X ≥ 108) ii) P(90 ≤ X ≤ 110)
[Hints and Answers: i) 0.0548 ii) 0.9545]
8. If X is a standard normal variable [X → N(0, 1)], determine the following
probabilities using normal probability tables:
(i) P(X ≥ 1.3) (ii) P(0 ≤ X ≤ 1.3) (iii) P(X ≤ 1.3) (iv) P(−2 ≤ X ≤ 1.3) (v) P(0.5 ≤ X ≤ 1.3)
[Hints and Answers: (i) 0.0968 (ii) 0.4032 (iii) 0.9032 (iv) 0.88045 (v) 0.21174]
9. The annual rainfall (in inches) in a certain region is normally distributed with
mean 40 and standard deviation 4. What is the probability that in the current
year it will have rainfall of 50 inches or more?
[Hints and Answers: P(X ≥ 50) = 0.0062097]
10. The intelligence quotient (IQ) of adults is known to be normally distributed with
mean 100 and variance 16. Calculate the probability that a randomly selected
adult has IQ lying between 90 and 110.
[Hints and Answers: 0.98758]
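Several of the exponential and normal answers above can be verified directly. A sketch for exercises 5 and 9, with parameters taken from the exercise statements:

```python
import math
from statistics import NormalDist

# Exercise 5: lifetime ~ Exponential with mean theta = 3000 hours.
# CDF of the exponential: P(X <= x) = 1 - e^(-x/theta).
theta = 3000
p_fail_within_300 = 1 - math.exp(-300 / theta)  # ~0.0952 (the hint's 0.0951 is table rounding)
p_beyond_6000 = math.exp(-6000 / theta)         # ~0.1353

# Exercise 9: rainfall ~ N(40, 4); P(X >= 50).
p_heavy_rain = 1 - NormalDist(mu=40, sigma=4).cdf(50)  # ~0.0062

print(round(p_fail_within_300, 4), round(p_beyond_6000, 4), round(p_heavy_rain, 4))
```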