
1
EDUCATIONAL RESEARCH
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Sources of Acquiring Knowledge
1.3 Meaning, Steps and Scope of Educational Research
1.4 Scientific Method, aims and characteristics of research as a
scientific activity
1.5 Ethical considerations in Educational Research
1.6 Paradigms of Educational research
1.7 Types of Research
1.7.a Fundamental Research
1.7.b Applied Research
1.7.c Action Research
1.8 Difference between the terms research method and research
methodology.
1.9 Research Proposal: Its meaning and Components
1.10 Let Us Sum Up
1.11 Unit End Exercise
1.12 References
1.0 OBJECTIVES:
After reading this unit, you will be able to:
 explain the concept of Educational Research
 describe the scope of Educational Research
 state the purpose of Educational Research
 explain what scientific enquiry is
 explain the importance of theory development
 explain the relationship among science, education and educational research
 identify fundamental research
 identify applied research
 identify action research

 differentiate between fundamental, applied, and action research
 identify different paradigms of research
 distinguish between research method and research methodology
 discuss the purposes of a research proposal
 list the various components of a research proposal
 prepare a write-up of a research proposal for a given topic
1.1 INTRODUCTION:
Research purifies human life and improves its quality. It is a search for
knowledge. It shows how to solve any problem scientifically. It is a careful
enquiry, through search, for any kind of knowledge. It is a journey from the
known to the unknown. It is a systematic effort to gain new knowledge in
any discipline. When it seeks the solution of an educational problem, it
leads to educational research.
Curiosity and inquisitiveness are natural gifts of human beings. They
inspire a person to quest and increase the thirst for knowledge and truth.
After trial and error, he works systematically in the direction of the desired
goal. His adjustment and coping with the situation make him successful in
his task. Thereby he learns something, becomes wise and prepares his own
scientific procedure while performing the same task a second time. So, is
there any relationship among science, education and educational research?
"Research is the voyage of discovery. It is the quest for answers to
unsolved problems."
Research is required in any field to come up with new theories or to
modify, accept, or nullify existing theories. From time immemorial, so
many discoveries and inventions have taken place through research, and
the world has received so many new theories which help human beings to
solve their problems. Researchers like Graham Bell, Thomas Edison, J. C.
Bose, John Dewey, Skinner and Piaget have given us theories which have
contributed to educational progress. Such research needs expertise.
1.2 SOURCES OF ACQUIRING KNOWLEDGE:
From the time we were born to the present day, each one of us has
accumulated a body of knowledge. Curiosity, the desire to learn about one's
environment and the desire to improve one's life through problem-solving
are natural to all human beings. For this purpose, human beings depend on
several methods / sources of acquiring knowledge, as follows:
1. Learned Authority: Human beings refer to an authority such as a
teacher, a parent, a boss, an expert or a consultant and seek his / her advice.
Such an authority may be based on knowledge or experience or both. For
example, if a child has difficulty in learning a particular subject, he / she
may consult a teacher. A learned authority could also be a book /
dictionary / encyclopedia / journal / website on the internet.

2. Tradition: Human beings easily accept many of the traditions of their
culture or forefathers. For example, in matters of food, dress,
communication, religion, home remedies for minor ailments, or the way a
friend will react to an invitation, one relies on family traditions. On the
other hand, in matters of admission criteria and procedures, examination
patterns and procedures, methods of maintaining discipline, co-curricular
activities, and acceptable manners of greeting teachers and peers, students
rely on school traditions. Long-established customs or practices are
popular sources of acquiring knowledge. This is also known as tenacity,
which implies holding on to a perspective without any consideration of
alternatives.
3. Experience: Our own prior personal experience in matters of
problem-solving or understanding educational phenomena is the most
common, familiar and fundamental source of knowledge.
4. Scientific Method: In order to comprehend and accept learning acquired
through these sources, we use certain approaches, which are as follows:
(a) Empiricism: It implies relying on what our senses tell us. Through a
combination of hearing and seeing we come to know the sound of a train,
i.e. through these two senses we learn to associate specific sounds with
specific objects. Our senses also enable us to compare objects / phenomena
/ events. They provide us with the means for studying and understanding
relationships between various concepts (e.g. level of education and
income).
(b) Rationalism: It includes mental reflection. It places emphasis on ideas
rather than material substances. If we see logical interconnectedness
between two or more things, we accept those things. For example, we may
reason that a conducive school / college environment is expected to lead to
better teacher performance.
(c) Fideism: It implies the use of our beliefs, emotions or gut reactions,
including religion. We believe in God because our parents told us so, even
though we have not sensed, seen or heard God, nor concluded that his
existence is logically proved.
1.3 MEANING, STEPS AND SCOPE OF EDUCATIONAL
RESEARCH:
MEANING OF EDUCATIONAL RESEARCH:
Educational Research is nothing but the cleansing of the educational
process. Many experts define Educational Research as under:

According to Mouly, "Educational Research is the systematic application
of the scientific method for solving educational problems."

Travers thinks Educational Research is the activity of developing a science
of behaviour in educational situations. It allows the educator to achieve his
goals effectively.
According to Whitney, Educational Research aims at finding solutions to
educational problems by using scientific and philosophical methods.

Thus, Educational Research seeks to solve educational problems in a
systematic and scientific manner; it aims to understand, explain, predict
and control human behaviour.
Educational Research is characterized as follows:
 It is highly purposeful.
 It deals with educational problems regarding students as well as teachers.
 It is a precise, objective, scientific and systematic process of
investigation.
 It attempts to organize data quantitatively and qualitatively to arrive at
statistical inferences.
 It discovers new facts from new perspectives, i.e. it generates new
knowledge.
 It is based on some philosophic theory.
 It depends on the researcher's ability, ingenuity and experience for its
interpretation and conclusions.
 It needs an interdisciplinary approach for solving educational problems.
 It demands subjective interpretation and deductive reasoning in some
cases.
 It uses classrooms, schools, colleges and departments of education as the
laboratory for conducting research.
STEPS OF RESEARCH:
The various steps involved in the research process can be summarized as
follows:
Step 1: Identifying the Gap in Knowledge
The researcher, on the basis of experience and observation, realizes that
some students in the class do not perform well in the examination. So,
he/she poses an unanswered question: "Which factors are associated with
students' academic performance?"
Step 2: Identifying the Antecedent / Causes
On the basis of experience, observation and a review of related literature,
he / she realizes that students who are either very anxious or not at all
anxious do not perform well in the examination. Thus he / she identifies
anxiety as one of the factors that could be associated with students'
academic performance.
Step 3: Stating the Goals
The researcher now states the goals of the study:
1. To ascertain the relationship of anxiety with academic performance of
students.
2. To ascertain the gender differences in the anxiety and academic
performance of students.
3. To ascertain the gender difference in the relationship of anxiety with
academic performance of students.
Step 4: Formulating Hypotheses
The researcher may state his / her hypotheses as follows:
1. There is a significant relationship between anxiety and academic
performance of students.
2. There is a significant gender difference in the anxiety and academic
performance of students.
3. There is a significant gender difference in the relationship of anxiety
with academic performance of students.
Step 5: Collecting Relevant Information
The researcher uses appropriate tools and techniques to measure anxiety
and academic performance of students, selects a sample of students and
collects data from them.
Step 6: Testing the Hypotheses
He / she now uses appropriate statistical techniques to verify and test the
hypotheses of the study stated in Step 4.
Step 7: Interpreting the Findings
He / she interprets the findings in terms of whether the relationship
between anxiety and academic performance is positive or negative, linear
or curvilinear. He / she finds that this relationship is curvilinear, i.e. when a
student's anxiety is either very low or very high, his / her academic
performance is found to be low. But when a student's anxiety is moderate,
his / her academic performance is found to be high.
He / she now tries to explain this finding based on logic and creativity.
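As a minimal illustrative sketch of Steps 6 and 7 (the scores, variable
names and the choice of a quadratic fit below are assumptions made only
for illustration, not part of this unit), a researcher might compare a
straight-line fit with a curvilinear fit on the collected data:

import numpy as np
from scipy import stats

# Hypothetical anxiety and academic performance scores for ten students
anxiety = np.array([10, 20, 30, 40, 50, 55, 60, 70, 80, 90])
performance = np.array([45, 55, 68, 75, 82, 84, 80, 70, 58, 48])

# Step 6: test the linear hypothesis (Pearson correlation and its p-value)
r, p_value = stats.pearsonr(anxiety, performance)
print(f"Linear correlation r = {r:.2f}, p = {p_value:.3f}")

# Step 7: check for a curvilinear (inverted-U) relationship by comparing
# a straight-line fit with a quadratic fit of performance on anxiety
linear_pred = np.polyval(np.polyfit(anxiety, performance, 1), anxiety)
quadratic_pred = np.polyval(np.polyfit(anxiety, performance, 2), anxiety)

def r_squared(observed, predicted):
    # Proportion of variance in the observed scores explained by the fit
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(f"R^2 of linear fit:    {r_squared(performance, linear_pred):.2f}")
print(f"R^2 of quadratic fit: {r_squared(performance, quadratic_pred):.2f}")
# A clearly higher quadratic R^2, with performance peaking at moderate
# anxiety, is consistent with the curvilinear finding described above.

If the quadratic fit explains substantially more variance than the straight
line, the evidence is consistent with the curvilinear interpretation given
above.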
Step 8: Comparing the Findings with Prior Researchers' Findings
At this step, the researcher tries to find out whether his / her conclusions
match those of prior researches or not. If not, then the researcher attempts
to find out why the conclusions do not match those of other researches by
analyzing the prior studies further.

Step 9: Modifying Theory
On the basis of steps 7 and 8, the researcher speculates that anxiety alone
cannot influence academic performance of students. There could be a third
factor which influences the relationship between anxiety and academic
performance of students. This third factor could be study habits of
students. For instance, students who have very low level of anxiety may
have neglected their studies throughout the year and hence their academic
performance is poor. On the other hand, students who have a very high
level of anxiety may not be able to remember what they have learnt, or
cannot concentrate on studies due to stress, or may fall sick very often and
hence cannot study properly. Hence their academic performance is poor.
However, students
with a moderate level of anxiety are motivated enough to study regularly
and systematically all through the year and hence their academic
performance is high.
Thus, the loosely structured theory on students' academic performance
needs to incorporate one more variable, namely, study habits of students.
In other words, it needs to be modified.
Step 10: Asking New Questions
Do study habits and anxiety interact with each other and influence the
academic performance of students? That is, we can now start with a fresh
topic of research involving three variables rather than two.
Check your Progress
1. What is the aim of Educational Research?
2. Name the method which is mainly applicable in Educational Research?
3. Which approach is adopted in Educational Research?
4. Name the places which can act as a laboratory for conducting
Educational Research.
SCOPE OF EDUCATIONAL RESEARCH:
As gradual development occurs with respect to knowledge and technology,
Educational Research needs to extend its horizon. Being the scientific
study of the educational process, it involves:
 Individuals (students, teachers, educational managers, parents)
 Institutions (schools, colleges, research institutes)
It discovers facts and relationships in order to make the educational
process more effective. It is related to the social sciences.

It includes processes like investigation, planning (design), collecting data,
processing of data, their analysis, interpretation and drawing of inferences.
It covers areas of formal as well as non-formal education.
Check your Progress
1. Name disciplines on which education depends.
2. How is education an art?
3. How is education a science?
4. Name the areas of Educational Research in addition to formal education.
Name the aspects in which Educational Research can seek improvement.
5. Name the human elements involved in Educational Research.
6. Name the institutions involved in Educational Research.
7. Name the essential qualities of Educational Researchers.
8. What is the goal of Educational Research with respect to new
knowledge?
Answers to Check Your Progress 1:
(i) To solve educational problems.
(ii) scientific method
(iii) interdisciplinary approach
(iv) classroom, school, college, department of education.
Answers to Check Your Progress 2:
(v) Philosophy, Psychology, Sociology, History, Economics.
(vi) Since it imparts knowledge.
(vii) It explains the working of the human mind, growth, and educational
programmes.
(viii) Non-formal education, educational technology.
(ix) Curriculum, textbooks, teaching methods.
(x) Teachers, students, educational managers, parents.
(xi) Schools, colleges, research institutes.
(xii) Updated knowledge, imagination, insight, scientific attitude.
(xiii) To generate new knowledge.

1.4 SCIENTIFIC METHOD, AIMS AND CHARACTERISTICS OF
RESEARCH AS A SCIENTIFIC ACTIVITY:
RELATIONSHIP AMONG SCIENCE, EDUCATION AND
EDUCATIONAL RESEARCH:
Science helps to find out the truth behind the phenomenon. It is an
approach to the gathering of knowledge rather than mere subject matter. It
has the following two main functions:
 to develop a theory.
 to deduce hypothesis from that theory.
The scientist uses an empirical approach for data collection and a rational
approach for the development of theory.
Research shows a way to solve life problems scientifically. It is a reliable
tool for the progress of knowledge. Being systematic and methodological,
it is treated as a science. It also helps to derive the truth behind knowledge.
It offers methods of improving the quality of the process as well as the
product. Ultimately, science and research go hand in hand to find the
solution to a problem.
Since philosophy offers a sound basis to education, education is considered
an art. However, scientific progress makes education incline towards a
science rather than an art.
Science belongs to precision and exactness; it hardly suffers from any
uncontrolled variable. But education, as a social science, suffers from
many variables and so moves away from exactness. Educational Research
tries to make the educative process more scientific. But since education
suffers from multiple variables, it cannot be as exact as the physical
sciences. If a study is systematically designed to achieve educational goals,
it will be educational research. Let us summarize this discussion with
Good's thought: "If we wish wisdom, we must expect science. If we wish
an increase in wisdom, we must expect research."
Knowledge is the educator's need. Curiosity and the thirst for search make
him follow the scientific way wisely. Indirectly, he plays the role of an
educational researcher. Ultimately, he is able to solve the educational
problem and generate new knowledge. All three aspects (science, education
and educational research) have truth as a common basis. More or less, they
need exactness and precision while solving a problem.
AIMS AND CHARACTERISTICS:
An enquiry is a natural technique for a search. But when it is used
systematically and scientifically, it takes the form of a method. So
scientific enquiry is also known as the Scientific Method.

Bacon's inductive method contributed to human knowledge. It is difficult
to solve many problems either by the inductive or by the deductive method
alone. So, Charles Darwin sought a happy blending of the inductive and
deductive methods in his scientific method. In this method, the knowledge
initially gained from previous knowledge, experience, reflective thinking
and observation is unorganized. Later on, it proceeds inductively from part
to whole and from particular to general, and ultimately to a meaningful
hypothesis. Thereafter, it proceeds deductively from whole to part, from
general to particular, and from hypothesis to logical conclusion.
This method is different from methods of knowledge generation like trial
and error, experience, authority and intuition. It is parallel to Dewey's
reflective thinking, because the researcher himself is engrossed in
reflective thinking while conducting research.
Scientific method follows five steps as under:
Identification and definition of the problem: The researcher states the
identified problem in such a manner that it can be solved through
experimentation or observation.
Formulation of hypothesis: It allows the researcher to make an intelligent
guess at the solution of the problem.
Implication of the hypothesis through deductive reasoning: Here, the
researcher deduces the implications of the suggested hypothesis, which
may be true.
Collection and analysis of evidence: The researcher is expected to test the
deduced implications of the hypothesis by collecting evidence related to
them through experimentation and observation.
Verification of the hypothesis: Later on, the researcher verifies whether the
evidence supports the hypothesis. If it does, the hypothesis is accepted; if it
does not, the hypothesis is not accepted and is modified later if necessary.
A peculiar feature of this method is not to prove the hypothesis as an
absolute truth but to conclude that the evidence does or does not support
the hypothesis.
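As a minimal sketch of the last two steps (the scores, group sizes and
significance level below are assumed only for illustration), a researcher
might check whether collected evidence supports a hypothesis of
difference between two groups:

from scipy import stats

# Hypothetical evidence: anxiety scores collected from two groups of students
group_a = [52, 61, 47, 58, 66, 55, 49, 63, 57, 60]
group_b = [64, 70, 59, 72, 66, 61, 75, 68, 63, 71]

# Verification of the hypothesis: is the observed difference significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # conventional significance level, an assumption of this sketch
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: the evidence supports the hypothesis.")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: the evidence does not support the hypothesis.")

Note that, in keeping with the point above, the outcome is reported only as
the evidence supporting or not supporting the hypothesis, never as absolute
proof.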
1.5 ETHICAL CONSIDERATIONS OF RESEARCH:
Research exerts a significant influence over educational systems. Hence a
researcher needs to adhere to an ethical code of conduct. These ethical
considerations are as follows:
While a researcher may have some obligations to his / her client in the case
of sponsored research, where the sponsoring agency has given him / her
financial aid for conducting the research, he / she also has obligations to
the users, the larger society, the subjects (sample / respondents) and
professional colleagues. He / she should not discard data that can lead to
unfavorable conclusions and interpretations for the sponsoring agency.
The researcher should maintain strict confidentiality about the
information obtained from the respondents. No information about the
personal details of the respondents should be revealed in any of the
records, reports or to other individuals without the respondents'
permission.
The researcher should not make use of hidden cameras, microphones,
tape-recorders or observers without the respondents' permission. Similarly,
private correspondence should not be used without the concerned
respondent's permission.
In an experimental study, when volunteers are used as subjects, the
researcher should explain the procedures completely (e.g. that the
experiment will go on for six months) along with the risks involved and
the demands that he / she would make upon the participants of the study
(such as that the subjects will be required to stay back for one hour after
school hours, etc.). If possible, the subjects should be informed about the
purpose of the experiment / research. While dealing with school children
(minors) or mentally challenged students, parents' or guardians' consent
should be obtained. This is known as 'informed consent'.
The researcher should accept the fact that the subjects have the freedom
to decline to participate or to withdraw from the experiment.
In order to ensure the subjects' inclusion and continuation in the
experiment, the researcher should never resort to undue inducements such
as favorable treatment after the experiment, additional marks in a school
subject, money and so on.
In experimental research which may have a temporary or permanent
effect on the subjects, the researcher must take all precautions to protect
the subjects from mental and physical harm, danger and stress.
The researcher should make his / her data available to peers for
scrutiny.
The respondents / subjects / participants should be provided with the
reasons for the experimental procedures as well as the findings of the
study if they so demand.
The researcher should give due credit to all those who have helped him
/ her in the research procedure, tool construction, data collection, data
analysis or preparation of the research report.
If at all the researcher has made some promise to the participants, it
must be honored and fulfilled.

1.6 PARADIGMS OF EDUCATIONAL RESEARCH:
The idea of the social construction of rationality can be pursued by
considering Kuhn's idea of the scientific paradigm. Thomas Kuhn, himself
a historian of science, contributed to a fruitful development in the
philosophy of science with his book "The Structure of Scientific
Revolutions", published in 1962. It brought into focus two streams of
thinking about what could be regarded as 'scientific': the Aristotelian
tradition with its teleological approach and the Galilean with its causal and
mechanistic approach. It introduced the concept of 'paradigm' into the
philosophical debate.
Definition and Meaning of Paradigm of Research:
"Paradigm" derives from the Greek verb for "exhibiting side by side". In
lexica it is given with the translations "example" or "table of changes in
form and differences in form". Thus, paradigms are ways of organizing
information so that fundamental, abstract relationships can be clearly
understood.
The idea of paradigm directs attention to science as having recognized
patterns of commitments, questions, methods, and procedures that underlie
and give direction to scientific work. Kuhn focuses upon the paradigmatic
elements of research when he suggests that science has emotional and
political as well as cognitive elements. We can distinguish the underlying
assumptions of a paradigm by viewing its discourse as having different
layers of abstractions. The layers exist simultaneously and are
superimposed upon one another.
The concept of paradigm provides a way to consider the divergence in
vision, custom, and tradition. It enables us to consider science as having
different sets of assumptions, commitments, procedures and theories of
social affairs.
A paradigm determines the criteria according to which one selects and
defines problems for inquiry and how one approaches them theoretically
and methodologically.
A paradigm could be regarded as a cultural, man-made object, reflecting
the dominant notions about scientific behaviour in a particular scientific
community, be it national or international, and at a particular point in time.
Paradigms determine scientific approaches and procedures which stand out
as exemplary to the new generation of scientists, as long as they do not
oppose them.
A revolution in the world of scientific paradigms occurs when one or
several researchers at a given time encounter anomalies or differences, for
instance make observations which in a striking way do not fit the
prevailing paradigm. Such anomalies can give rise to a crisis, after which
the universe under study is perceived in an entirely new light. Previous
theories and facts become subject to thorough rethinking and revaluation.

History of Paradigms of Research:
Educational research faces a particular problem, since education is not a
well-defined, unitary discipline but a practical art. Research into
educational problems is conducted by scholars with many disciplinary
affiliations. Most of them have a background in psychology or other
behavioral sciences, but quite a few of them have a humanistic background
in philosophy and history. Thus, there cannot be any prevailing paradigm
or 'normal science' in the very multifaceted field of educational research.
However, when the empirical research conducted by behavioral scientists,
particularly in the Anglo-Saxon countries, in the 1960s and early 1970s
began to be accused of dominating research with a positivist, quantitatively
oriented paradigm that prevented other paradigms of a humanistic or
dialectical nature from being employed, the accusations were directed at
those with a behavioral science background.
During the twentieth century, two main paradigms were employed in
researching educational problems. One is modeled on the natural
sciences with an emphasis on empirical quantifiable observations which
lend themselves to analyses by means of mathematical tools. The task of
research is to establish causal relationships, to explain. The other paradigm
is derived from the humanities with an emphasis on holistic and
qualitative information and interpretive approaches.
The two paradigms in educational research developed historically as
follows. By the mid-nineteenth century, Auguste Comte (1798-1857) had
developed positivism in sociology and John Stuart Mill (1806-1873)
empiricism in psychology. These came to serve as models, and their
prevailing paradigm was taken over by social scientists, particularly in the
Anglo-Saxon countries. On the European Continent there was another
tradition stemming from German idealism and Hegelianism. The
"Galilean" mechanistic conception became the dominant one, particularly
with mathematical physics as the methodological ideal.
There are three strands to the other main paradigm in educational research.
According to the first strand, Wilhelm Dilthey (1833-1911) maintained
that the humanities had their own logic of research and pointed
out that the difference between natural sciences and humanities was that
the former tried to explain, whereas the latter tried to understand the
unique individual in his or her entire, concrete setting.
The second strand was represented by the phenomenological philosophy
developed by Edmund Husserl in Germany. It emphasized the importance
of taking a widened perspective and of trying to "get to the roots" of
human activity. The third strand in the humanistic paradigm consists of the
critical philosophy, which developed with a certain amount of
neo-Marxism.
The paradigm determines how a problem is formulated and
methodologically handled. According to the traditional positivist
conception, problems related, for example, to classroom behaviour should
be investigated primarily in terms of the individual actor: either the
pupils, who might be neurotic, or the teacher, who might be ill prepared
for his or her job. The other conception is to formulate the problem in
terms of the larger setting, that of the school, or rather that of the society at
large. By means of such mechanisms as testing, observation and the like,
one does not try to find out why the pupil or the teacher deviates from the
normal. Rather, an attempt is made to study the particular individual as a
goal-directed human being with particular and unique motives.
Interdependence of the Paradigms:
One can distinguish between two main paradigms in educational research
and planning, with different bases of knowledge. On the one hand there is
the functional-structural, objective-rational, goal-directed, manipulative,
hierarchical, and technocratic approach. On the other hand, there is the
interpretive, humanistic, consensual, subjective, and collegial one.
The first approach is derived from classical positivism. The second one,
more popular now, is partly derived from the critical theory of the
Frankfurt school, particularly from Habermas's theory of communicative
action. The first approach is "linear" and consists of straightforward
rational action toward a preconceived problem. The second approach
leaves room for reinterpretation and reshaping of the problem during the
process of dialogue prior to action and even during action.
Keeves (1988) argues that the various research paradigms employed in
education, the empirical-positivist, the hermeneutic or phenomenological,
and the ethnographic-anthropological, are complementary to each other.
He talks about the "unity of educational research", makes a distinction
between paradigms and approaches, and contends that there is, in the final
analysis, only one paradigm but many approaches.
For example, the teaching-learning process can be observed and / or video
recorded. The observations can be quantified and the data analyzed by
means of advanced statistical methods. Content can be studied in the light
of national traditions and the philosophy underlying curriculum
construction. Both the teaching-learning process and its outcomes can be
studied in a comparative, cross-national perspective.
Depending upon the objective of a particular research project, emphasis is
laid more on the one or on the other paradigm. Thus, qualitative and
quantitative paradigms are more often than not complementing each other.
For example, it is not possible to arrive at any valid information about a
school or national system concerning the level of competence achieved in,
for instance, science by visiting a number of classrooms and thereby trying
to collect impressions. Sample surveys like those conducted by the IEA
(International Association for the Evaluation of Educational Achievement)
would be an important tool. But such surveys are not of much use when it
comes to accounting for the factors behind the differences between school
systems. Here qualitative information of different kinds is required.

Policymakers, planners, and administrators want generalizations and rules
which apply to a wide variety of institutions with children of rather diverse
backgrounds. The policymaker and planner are more interested in the
collectivity than in the individual child. They operate from the perspective
of the whole system, whereas the classroom practitioners are not very
much helped by generalizations which apply "on the whole" or "by and
large", because they are concerned with the timely, the particular child
here and now.
Need for contemporary approaches:
The behavioural sciences have equipped educational researchers with a
store of research tools, such as observational methods and tests, which
helps them to systematize observations which would otherwise not have
been considered in the more holistic and intuitive attempts to make, for
instance, informal observations or to conduct personal interviews. Those
who turn to social science research in order to find the "best" pedagogy or
the most "efficient" methods of teaching are in a way victims of traditional
science, which claimed to be able to arrive at generalizations applicable in
practically every context. But, through critical philosophy, researchers
have become increasingly aware that education does not take place in a
social vacuum. Educational researchers have also begun to realize that
educational practices are not independent of the cultural and social context
in which they operate. Nor are they neutral to educational policies. Thus,
the two main paradigms are not exclusive, but complementary to each
other.
Check your progress:
Answer the following questions.
1. Define the term paradigm?
2. List the two paradigms of research?
1.7 TYPES OF RESEARCH:
1.7A) FUNDAMENTAL RESEARCH:
It is a basic approach which is undertaken for the sake of knowledge.
Fundamental research is usually carried on in a laboratory or other sterile
environment, sometimes with animals. This type of research, which has no
immediate or planned application, may later result in further research of an
applied nature. Basic research involves the development of theory. It is not
concerned with practical applicability and most closely resembles the
laboratory conditions and controls usually associated with scientific
research. It is concerned with establishing general principles of learning.
For example, much basic research has been conducted with animals to
determine the principles of reinforcement and their effect on learning, like
Skinner's experiments, which gave the principles of conditioning and
reinforcement.
According to Travers, basic research is designed to add to an organized
body of scientific knowledge and does not necessarily produce results of
immediate practical value. Basic research is primarily concerned with the
formulation of the theory or a contribution to the existing body of
knowledge. Its major aim is to obtain and use the empirical data to
formulate, expand or evaluate theory. This type of research draws its
pattern and spirit from the physical sciences. It represents a rigorous and
structured type of analysis. It employs careful sampling procedures in
order to extend the findings beyond the group or situations and thus
develops theories by discovering proved generalizations or principles. The
main aim of basic research is the discovery of knowledge solely for the
sake of knowledge.
Another system of classification is sometimes used for research dealing
with these two types of questions. This classification is based on the goal
or objective of the research. The first type of research, which has as its aim
obtaining empirical data that can be used to formulate, expand or evaluate
theory, is called basic research. This type of study is not oriented in design
or purpose towards the solution of practical problems.
Its essential aim is to expand the frontiers of knowledge without regard to
practical application. Of course, the findings may eventually apply to
practical problems that have social value.
For example, advances in the practice of medicine are dependent upon
basic research in biochemistry and microbiology. Likewise, progress in
educational practices has been related to progress in the discovery of
general laws through psychological, educational, sociological research.
Check your progress – 1:
Answer the following questions:
1. What is fundamental research?
2. Where can basic research be conducted?
1.7B) APPLIED RESEARCH:
The second type of research which aims to solve an immediate practical
problem is referred to as applied research. According to Travers, applied
research is undertaken to solve an immediate practical problem and the
goal of adding to scientific knowledge is secondary.
It is research performed in relation to actual problems and under the
conditions in which they are found in practice. Through applied research,
educators are often able to solve their problems at the appropriate level of
complexity, that is, in classroom teaching-learning situations. We may
depend upon basic research for the discovery of the more general laws of
learning, but applied research must be conducted in order to determine
how these laws operate in the classroom. This approach is essential if
scientific changes in teaching practice are to be effected. Unless educators
undertake to solve their own practical problems of this type, no one else
will. It should be pointed out that applied research also uses the scientific
method of enquiry. We find that there is not always a sharp line of
demarcation between basic and applied research. Certainly, applications
are made from theory to help in the solution of practical problems. We
attempt to apply the theories of learning in the classroom. On the other
hand, basic research may depend upon the findings of applied research to
complete its theoretical formulations. A classroom learning experiment can
throw some light on learning theory. Furthermore, observations in practical
situations serve to test theories and may lead to the formulation of new
theories.
Most educational research studies are classified at the applied end of the
continuum; they are more concerned with 'what works best' than with
'why'. For example, applied research tests the principles of reinforcement
to determine their effectiveness in improving learning (e.g. programmed
instruction) and behaviour (e.g. behaviour modification). Applied research
has most of the characteristics of fundamental research, including the use
of sampling techniques and the subsequent inferences about the target
population. Its purpose, however, is improving a product or a process, by
testing theoretical concepts in actual problem situations. Most educational
research is applied research, for it attempts to develop generalizations
about teaching-learning processes and instructional materials.
The applied researcher may also be employed by a university or research
institute, or may be found in private industry or working for a government
agency. In the field of education, such a person might be employed by a
curriculum publishing company, a state department of education, or a
college of education at a university. Applied researchers are also found in
settings in which the application or practitioner's role is primary. This is
where teachers, clinical psychologists, school psychologists, social
workers, physicians, civil engineers, managers, advertising specialists and
so on are found. Many of these people receive training in doing research,
and they use this knowledge for two purposes:
(1) To help practitioners understand, evaluate, and use the research
produced by basic and applied researchers in their own fields, and
(2) To develop a systematic way of addressing the practical problems and
questions that arise as they practice their professions.
For example, a teacher who notices that a segment of the class is not
adequately motivated in science might look at the research literature on
teaching science and then systematically try some of the findings
suggested by the research.
Some of the recent foci of applied educational research have been grading
practices, collective bargaining for school personnel, curriculum content,
instructional procedures, educational technology, and the assessment of
achievement. These topics have been investigated with applied research
because the questions raised in these areas generally have limited or no
concrete knowledge of theory that we can draw upon directly to aid in
decision making.
Check your progress – 2:
Answer the following questions:
1. What do you mean by applied research?
2. Where can applied research be conducted?
1.7C) ACTION RESEARCH:
Research designed to uncover effective ways of dealing with problems in
the real world can be referred to as action research.
This kind of research is not confined to a particular methodology or
paradigm. Consider, for example, a study of the effectiveness of training
teenage parents to care for their infants. The study was based on statistical
and other evidence that infants of teenage mothers seemed to be exposed
to more risks than other infants. The mothers and children were recruited
for participation in the study while the children were still in the neonatal
period. Mothers were trained at home or in an infant nursery. A control
group received no training. The mothers trained at home were visited at
two-week intervals over a 12-month period. Those trained in the nursery
setting attended three days per week for 6 months, were paid minimum
wage, and assisted as staff in the centre. Results of the study suggested that
the children of both groups of trained mothers benefited more in terms of
their health and cognitive measures than did the control children.
Generally, greater benefits were realized by the children of the mothers
trained in the nursery than by those of the mothers trained at home.
Thus, first, the study shows that such researches have direct application to
real-world problems. Second, elements of both quantitative and qualitative
approaches can be found in the study. For example, quantitative measures
of weight, height, and cognitive skills were obtained in this study.
However, right from the start, from personal impressions and observations
without the benefit of systematic quantitative data, the researchers were
able to say that the mothers in the nursery centre showed some unexpected
vocational aspirations to become nurses. Third, the treatments and methods
that are investigated are flexible and might change during the study in
response to the results as they are obtained. Thus, action research is more
systematic and empirical than some other approaches to innovation and
change, but it does not lead to carefully controlled scientific experiments
that are generalizable to a wide variety of situations and settings.
The purpose of action research is to solve classroom problems through the
application of scientific methods. It is concerned with a local problem and
is conducted in a local setting. It is not concerned with whether the results
are generalizable to any other setting and is not characterized by the same
kind of control evident in other categories of research. The primary goal of
action research is the solution of a given problem, not a contribution to
science. Whether the research is conducted in one classroom or in many
classrooms, the teacher is very much a part of the process. The more
research training the teachers involved have had, the more likely it is that
the research will produce valid, if not generalizable, results.
The value of action research is confined primarily to those who are
conducting it. Despite its shortcomings, it does represent a scientific
approach to problem solving that is considerably better than change based
on the alleged effectiveness of untried procedures, and infinitely better
than no change at all. It is a means by which concerned school personnel
can attempt to improve the educational process, at least within their
environment. Of course, the value of action research to true scientific
progress is limited. True progress requires the development of sound
theories having implications for many classrooms, not just one or two.
One sound theory that includes ten principles of learning may eliminate
the need for hundreds of would-be action research studies. Given the
current status of educational theory, however, action research provides
immediate answers to problems that cannot wait for theoretical solutions.
As John Best puts it, action research is focused on immediate applications.
Its purposes are to improve school practices and, at the same time, to
improve those who try to improve the practices; to combine the research
processes, habits of thinking, the ability to work harmoniously with others,
and professional spirit.
If most classroom teachers are to be involved in research activity, it will
probably be in the area of action research. Many observers have projected
action research as nothing more than the application of common sense or
good management. Whether or not it is worthy of the term research, it
does apply scientific thinking and methods to real-life problems and
represents a great improvement over teachers' subjective judgments and
decisions based upon stereotyped thinking and limited personal experience.
The concept of action research under the leadership of Corey has been
instrumental in bringing educational research nearer to educational
practitioners. Action research is research undertaken by practitioners in
order that they may attempt to solve their local, practical problems by
using the method of science.
Check your progress – 3
Answer the following questions:
Q.1 What do you mean by action research?
Q.2 What benefits can teachers get from action research?

1.8 DIFFERENCE BETWEEN THE TERMS RESEARCH METHOD
AND RESEARCH METHODOLOGY:
While preparing the design of the study, it is necessary to think of the
research method. It is simply the method for conducting research.
Generally, such methods are divided into quantitative and qualitative
methods. Quantitative methods include descriptive research, evaluation
research and assessment research. Assessment-type studies include
surveys, public opinion polls and assessments of educational achievement.
Evaluation studies include school surveys and follow-up studies.
Descriptive research studies are concerned with the analysis of
relationships between non-manipulated variables. Apart from these
quantitative methods, educational research also includes experimental and
quasi-experimental research, survey research and causal-comparative
research.
Qualitative research methods include ethnography, phenomenology,
ethnomethodology, narrative research, grounded theory, symbolic
interaction and case study.
Thus, the researcher should mention the methods of research used in his
research, with proper justification for their use.
The term 'methodology' is broader, in the sense that it includes the nature
of the population, selection of the sample, selection / preparation of tools,
collection of data and how the data will be analyzed. Here the method of
research is also included.
1.9 RESEARCH PROPOSAL: ITS MEANING AND
COMPONENTS:
Preparing the research proposal is an important step because, at this stage,
the entire research project gets a concrete shape. The researcher's insight
and inspiration are translated into a step-by-step plan for discovering new
knowledge.
A proposal is more than a research design. The research design is a subset
of the proposal. Ordinarily, a research design will not talk much about the
theoretical framework of the study. It will also be silent about the review
of related studies. A strong rationale for conducting the research is also not
part of the research design. At the stage of writing the proposal, the entire
research work takes concrete form. In the proposal, the researcher
demonstrates that he is familiar with what he is doing.
Following are a few purposes of a research proposal:
The proposal is like the blueprint which the architect designs before the
construction of a house. It conveys the plan of the entire research work
along with the justification for conducting the same.
The proposal is to be presented to a funding agency or a departmental
research committee. Presentation of the research proposal before the
committee is now compulsory as per the U.G.C. guidelines of July 2009.
In such a committee, a number of experts participate and suggest important
points to help and guide the researcher. In fact, this is a very constructive
activity. In C.A.S.E., a research proposal is presented on three occasions:
first, in the researchers' forum on Saturday; second, in the Tuesday
seminar; and finally before the committee consisting of the Dean, Head,
Guide and other experts. Such fruitful discussion helps in resolving many
issues. When such a presentation is there, it always brings seriousness on
the part of the researcher and the guide as well. During such presentations,
the strengths and limitations of the proposal come out. The funding agency
also provides funds based on the strength and quality of the proposal.
The research proposal serves as a plan of action. It conveys to the
researcher and others how the study will be conducted. There is an
indication of the time schedule and budget estimates in the proposal, which
guides the researcher to complete the task in time within the sanctioned
budget.
The proposal approved by the committee serves as a bond of agreement
between the researcher and the guide. The entire proposal becomes a
mirror for both while executing the study further.
Thus, a research proposal mainly serves the following purposes:
(i) It communicates the researcher's plan to all others interested.
(ii) It serves as a plan of action.
(iii) It is an agreement between researcher and the guide.
(iv) Its presentation before experts provides further rethinking on the
entire work.
The following components are generally included in the research proposal.
It is not necessary to follow this list rigidly; it should provide a useful
outline for writing any research proposal.
Normally, a research proposal begins with an Introduction; this gives
clearly the background or history of the problem selected. Some also call
this the theoretical / conceptual framework. It will include various theories
/ concepts related to the problem selected. The theoretical framework
should have a logical sequence. Suppose the researcher wants to study the
achievement of class IX students in mathematics in a particular area; then
the conceptual frame may include:
 Objectives of teaching mathematics, and its purpose at the secondary
school level
 Importance of achievement in mathematics
 Level of achievement as studied by other researchers
 Factors affecting achievement in mathematics
 Various commissions' and committees' views on achievement in
mathematics.
All these points can be put into a logical sequence. Wherever needed,
theoretical support should be given. This is an important step in the
research proposal. Generally, any proposal begins with this type of
introduction.
A. Identification of Research Topic: Sources and Need:
As discussed earlier, the researcher will spell out how the problem
emerged, its social and educational context and its importance to the field.
Some researchers name this caption 'Background of the Study' or
'Theoretical / Conceptual Framework of the Study'. In short, here the entire
topic of the research is briefly introduced along with related concepts and
theories in the field.
B. Review of Related Literature:
In this section, one presents what is so far known about the problem under
investigation. Generally, the theoretical / conceptual framework is already
reported in the earlier section. In this section the researcher concentrates on
studies conducted in the area of interest. Here, a researcher will locate
various studies conducted in his area of interest and try to justify that all
such located studies are related to his work. For locating such studies, one
will refer to the following documents / sources:
 Surveys of Research in Education (edited earlier by Prof. M. B. Buch
and later on by NCERT, New Delhi)
 Ph.D. theses available in various libraries
 Current Index to Journals in Education (CIJE)
 Dissertation Abstracts International (DAI)
 Educational Resources Information Centre (ERIC) by the U.S. Office of
Education
 Various national / international journals and internet resources (for
details see Ary, D., Jacobs, L. C., and Razavieh, A. (1972). Introduction
to Research in Education. New York: Holt, Rinehart and Winston, Inc.,
pp. 55-70)
In the research proposal, the review of studies conducted earlier is reported
briefly. There are two ways of reporting the same. One way could be that
all such related studies are reported chronologically in brief, indicating
purpose, sample, tools and major findings. Of course, this will increase the
volume of the research proposal. Second, studies with similar trends may
be put together and the important trend/s highlighted. This is a bit difficult,
but innovative. Normally in a review the surname of the author and the
year in brackets are mentioned. There is also a trend to report studies
conducted in other countries separately. It is left to the guide and
researcher whether such a separate caption is necessary or not.

At the end of the review in the research proposal, there should be a
conclusion. (Of course, a separate caption like 'Conclusion' should be
avoided.)
Here, the researcher shares the insights he has gained from the review.
Also, on the basis of the review, he will justify the need for conducting the
present study. The researcher should conclude with the following points:
 What has been done so far in this area?
 Where? (Area wise)
 When? (Year wise)
 How? (Methodology wise)
 What needs to be done?
Thus, the researcher will identify the 'Research Gap'.
C. Rationale and Need of the Study:
The rationale should answer the question: why is this study conducted? If
'why' is answered properly, then the rationale is a strong one. For a strong
rationale, the earlier section on the review will be of much help. Identified
research gaps will convey why this study is being conducted. Suppose the
investigator wants to study the following problem: Development and
Try-out of CAI in Teaching of Science for Class VIII in Mumbai. Here,
the researcher should try to answer: why CAI only? Why science teaching
only? Why class VIII only? Why Mumbai only?
If these questions are answered adequately, then the rationale becomes
strong. Here one has to identify gaps in the area of science teaching,
especially with reference to CAI. Apart from this, the need for conducting
the present study should be justified.
D. Definition of Terms:
Every research study involves certain key or technical terms which have
some special connotation in the context of the study; hence it is always
desirable to define such key words. There are two types of definitions:
(i) theoretical / constitutive and (ii) operational.
A constitutive definition elucidates a term and perhaps gives some more
insight into the phenomena described by the term. Thus, this definition is
based on some theory. An operational definition, on the other hand, is one
which ascribes meaning to a concept by specifying the operations that must
be performed in order to measure the concept. For example, the word
'achievement' has many meanings, but operationally it can be defined as
"the scores obtained by the students in the English test constructed by the
researcher in 2009". Here it is clear that achievement in English will be
measured by administering the test constructed by Mr. So-and-so in 2009.
Apart from operational definitions, one can define some terms which have
a definite meaning with reference to the particular investigation. Terms
like Lok Jumbish, Minimum Levels of Learning, Programmed Learning,
etc. can be defined in the particular context of the research.
E. Variables:
Variables involved in the research need to be identified here. Their
operational definitions should be given in the research proposal. Especially
in a study where experimental research is conducted, variables should be
specified with enough care. Their classification should be done in terms of
dependent variables, independent variables, intervening variables,
extraneous variables, etc. The controlling of some variables needs to be
discussed at an appropriate stage in the proposal.
F. Research Questions, Objectives and Hypotheses:
While reading the statement of the problem, there may be a bit of
confusion; to avoid such confusion there is a need for specification of the
research problem. This specification can be done by writing research
questions, objectives, hypotheses and operational definitions. Thus,
objectives give more clarity to researchers and readers. Objectives are the
foundations of the research, as they will guide the entire process of
research. The list of objectives should not be too lengthy nor ambiguous.
The objectives should be stated clearly to indicate what the researcher is
trying to investigate.
While conducting any research, the researcher would definitely aim at
answering certain questions. The researcher should frame such questions
in a precise way. Some researchers simply put the objectives in question
form, which is just a duplication of the objectives and should be avoided.
Depending on the nature of the study, the researcher would formulate
hypotheses. The proposition of a hypothesis is derived from theoretical
constructs or previous researches. Based on earlier researches, the
researcher can decide whether a research hypothesis or a null hypothesis
will be more suitable; as per the evidence from previous researches, one
can decide the nature of the hypothesis.
Formulation of hypothesis is an indication that researcher has sufficient
knowledge in the area and it also gives direction for data collection and
analysis. A hypothesis has to be:
(i) testable, (ii) have explanatory power, (iii) state the expected relationship
between variables, and (iv) be consistent with the existing body of
knowledge.
G. Assumptions:
According to Best and Kahn (2004), assumptions are statements of what
the researcher believes to be facts but cannot verify. If the researcher is
proceeding with certain assumptions, then the same need to be reported in
the research proposal.
H. Scope, Limitations and Delimitations:
In any research, it is not possible to cover all aspects of the area of
interest, the variables, the population and so on. Thus, a study always has
certain limitations. Limitations are those conditions beyond the control of
the researcher that may place restrictions on the conclusions. Sometimes
the tool used is not revalidated; this itself becomes a limitation of the
study. Limitation is thus a broad term, whereas delimitation is a narrower
term: it indicates the boundaries of the study. A study on achievement in
English can be delimited to grant-in-aid schools following the Maharashtra
State Board; the conclusions cannot be extended beyond this. The
delimitation can be made more specific by specifying the population and
sample.
I. Method, Sample and Tools:
Method: A researcher should report the method of research. As discussed
in (b), the researcher should mention how the study will be conducted.
Depending on the nature of the study (qualitative or quantitative), the
method of research needs to be reported along with its justification, i.e.
how the particular method suits one's study should be discussed in brief.
If it is a survey, do not simply write 'survey', but indicate the type of
survey as well. If it is an experimental design, mention specifically which
type of experimental design.
Sample: You might have already studied sampling in detail. This section
of the research proposal will mention the selection of the sample. First,
the researcher should describe the population to which he would like to
generalize, along with its total size. This is especially needed in the case
of randomization and stratification. The researcher should mention
whether a probability or non-probability sampling design will be used.
Accordingly, the selection of the sample needs to be detailed along with
its justification. Many researchers write about randomization without
mentioning the size of the population, or write about stratified sampling
without giving details of the various strata along with their sizes. As the
population parameter is to be estimated from the sample statistics, the
selection of the sample should be done with enough care.
In the case of qualitative research, the investigator may go for theoretical
sampling. If need be, a description of the field should also be detailed.
Tools: You have already studied various tools of data collection. In this
section of the proposal, the selection and description of the tool is to be
reported with proper justification. The steps of construction of the
particular tool need to be reported in brief. If readymade tools are used,
the related details need to be reported: the author of the tool, its
reliability, validity and norms, along with the scoring procedure. It has
been found that many researchers fail to report the year in which the tool
was constructed; as far as possible, very old tools need to be avoided. In
the case of readymade tools, always check the population for which the
tool was standardized. It is always desirable to use valid and reliable
tools.

J. Significance of the Study:
If a strong rationale has already been reported, there is hardly any need
for a separate significance section. In the rationale, one must describe how
the study will contribute to the field of education; how the findings/results
of the particular research will influence the educational process in general
should be reported there.
(Note: There are various models for writing a research proposal. It differs
from university to university. Many funding agencies have their own
format for the proposal.)
K. Technique/s of Data Analysis:
This is a crucial step in the proposal. How the collected data will be
tabulated and organized for the purpose of further analysis is to be
reported in this section. If it is quantitative research, whether parametric
or non-parametric statistical techniques will be used needs to be reported.
Before applying any technique for data analysis, verify the assumptions
required by that particular technique. For example, if one wants to go for
ANOVA, verify the assumption of normality, the nature of the data
(especially whether it is on an interval or ratio scale), homogeneity of
variances and randomization. If it is qualitative analysis, detail the nature
of the data, its tabulation, organization and description. If the data are to
be analyzed with the help of content analysis, how exactly it will be done
needs to be detailed. Whichever technique one is using, it needs to be in
tune with the objectives and hypotheses of the study.
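As a hedged illustration of this step (not part of the original text), the short Python sketch below shows how the ANOVA assumptions mentioned above might be verified with the SciPy library before the test is run; the groups and scores are hypothetical.

# A minimal sketch of checking ANOVA assumptions before comparing
# hypothetical achievement scores of three groups (interval-scale data).
from scipy import stats

group_a = [52, 61, 58, 63, 55, 60]   # hypothetical scores, Method A
group_b = [48, 50, 55, 47, 53, 51]   # hypothetical scores, Method B
group_c = [66, 70, 64, 68, 72, 67]   # hypothetical scores, Method C

# Assumption 1: normality within each group (Shapiro-Wilk test)
for name, scores in (("A", group_a), ("B", group_b), ("C", group_c)):
    _, p = stats.shapiro(scores)
    print(f"Group {name}: Shapiro-Wilk p = {p:.3f}")

# Assumption 2: homogeneity of variances (Levene's test)
_, p_levene = stats.levene(group_a, group_b, group_c)
print(f"Levene's test p = {p_levene:.3f}")

# If the assumptions hold, run the one-way ANOVA; otherwise a
# non-parametric alternative such as Kruskal-Wallis may be considered.
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")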
L. Bibliography:
During preparation of the proposal, the researcher consults various
sources like books, journals, reports, Ph.D. theses, etc. All such primary/
secondary sources need to be reported in the bibliography. Generally, the
American Psychological Association (APA) Publication Manual is
followed for writing references. All authors quoted in the proposal need to
be listed in the bibliography. Authors who are not quoted but are useful
for further reading may also be listed. Consistency and uniformity should
be observed in reporting references.
M. Time Frame:
Proposals submitted for M.Phil. or Ph.D. degrees generally do not require
a time frame in all universities, but there is a fixed time limit for these
courses. It is always advisable to give a detailed schedule of the research
work, as it helps to keep the researcher alert. Proposals to be submitted to
a funding agency definitely ask for a time frame. The time frame needs to
be reported keeping the following points in view; the time/duration
specified by the funding agency should be properly divided among them.
 Time required for preliminary work like review of literature.
 Time required for preparing tool/s.
 Time required for data collection, field visits, etc.
 Time required for data analysis and report writing.
N. Budget:
The proposal submitted to the funding agency needs details regarding
financial estimates. It may include the expected expenditure under various
budget heads. The following budget heads should be kept in view, along
with the amounts:
 Remuneration for the project team, i.e. the principal investigator and
other members of the project team.
 Remuneration for secretarial staff like clerks, data entry operators,
accountants, helpers, etc.
 Remuneration for appointing project fellows, field investigators, etc.
 Expenditure towards purchase of books, journals, tools, etc.
 Expenditure towards printing, xeroxing, stationery, etc.
 Expenditure for data entry, tabulation and analysis of data.
 Expenditure for field work, travel for monitoring purposes, etc.
 Expenditure for preparing the final report.
While preparing the budget, examine the guidelines given by the particular
funding agency.
O. Chapterisation:
Generally, the scheme of chapterisation is given in the synopsis. If it is to
be reported in the research proposal, write down the various captions and
sub-captions of each chapter. A few universities prescribe a format for the
thesis; where given, the same should be followed.
Check Your Progress
 Select one topic for research in education and write the various steps of
a research proposal at length.
1.10 LET US SUM UP
 Educational Research is the systematic application of the scientific
method for solving educational problems concerning students and
teachers.
 The results of democratic education are slow and sometimes defective,
so Educational Research is needed to solve educational problems.
 Educational Research involves individuals like teachers and students
and educational institutions. It covers areas from formal education to
non-formal education.
 Educational Research solves educational problems, purifies the
educative process and generates new knowledge.
 Scientific enquiry / the scientific method is a happy blending of the
inductive and deductive methods. Initially it proceeds from part to whole
to state a meaningful hypothesis. Later on, it proceeds from whole to part,
and from hypothesis to logical conclusion.
 A theory is a general explanation of a phenomenon. It can be refined
and modified as factual knowledge grows. It is developed on the basis of
the results of the scientific method.
 Fundamental Research is basic research which is done for the sake of
knowledge. Applied research is conducted to solve an immediate practical
problem.
 Action research seeks an effective way to solve a problem in the
concerned area without being tied to a particular methodology/paradigm.
A paradigm of research is a way to select, define and solve the problem.
1.11 UNIT END EXERCISE
1 What is meant by Educational Research?
2 What is the need of Educational Research?
3 What is its scope?
4 State the purpose of Educational Research.
5 What are the steps of scientific enquiry /method?
6 Explain how the scientific method relates to theory development.
7 What is theory? What are the principles to be considered while stating a
theory?
8 Explain the relationship among Science, Education and Educational
Research?
9 Why do we need to conduct fundamental research?
10 In which situations can applied research be conducted?
11 How is action research different from the other types?
12 What benefits can teachers get from action research?
13 How are the paradigms dependent on each other? Illustrate.


1.12 REFERENCES
1) Best, J.W. and Kahn, J.V. (2004), Research in Education, New Delhi:
Prentice Hall of India.
2) Gay, L.R. and Airasian, P. (2000), Educational Research: Competencies
for Analysis and Application, New Jersey: Merrill.
3) Kothari, C.R. (1985), Research Methodology, New Delhi: Wiley
Eastern Ltd.
4) Best, J.W. and Kahn, J.V. (1995), Research in Education, New Delhi:
Prentice Hall of India.
5) Ebel, R.L. (1969), Encyclopedia of Educational Research, London:
The Macmillan Co.
6) Kaul, L. (1984), Methodology of Educational Research, New Delhi:
Vikas Publishing House.
7) Popkewitz, T.H., Paradigms and Ideologies of Research in Education.
8) Best, J.W. and Kahn, J.V. (1996), Research in Education.
9) Gay, L.R. (1990), Research in Education.



2
RESEARCH DESIGN
Unit Structure:
2.0 Objectives
2.1 Introduction
2.2 Meaning, Definition, Purpose and Components of research design.
2.3 Concept of Universe/Population, sample and sampling
2.4 Need for sampling
2.5 Advantages and Disadvantages of sampling
2.6 Characteristics of a good sample
2.7 Techniques of sampling
2.8 Types of probability sampling
2.9 Types of Non -probability sampling
2.10 Review of related Research - Purpose, Need and Organization
2.11 Let us sum up
2.12 Unit End Exercise
2.0 OBJECTIVES:
After learning this unit, you will be able to:
 State meaning of research design
 Describe purpose of research design
 Components of Research Design
 Define universe, sample and sampling
 State the need of sampling
 List the advantages and disadvantages of sampling
 State the characteristics of a good sample
 Differentiate between techniques of sampling
 Explain types of probability & Non - probability sampling
2.1 INTRODUCTION:
The researcher is concerned with the generalizability of the data beyond
the sample. For studying any problem it is impossible to study the entire
population. It is therefore convenient to pick out a sample out of the
universe proposed to be covered by the study. The process of sampling
makes it possible to draw valid inferences or generalizations on the basis
of careful observation of variables within a small proportion of the
population.
2.2 MEANING, DEFINITION, PURPOSE AND
COMPONENTS OF RESEARCH DESIGN
Meaning of Research Design: Before starting a research study, the
investigator will look for a problem; he will read books, journals, research
reports and other related literature. Based on this, he will finalize the topic
for research. During this process, he will be in close contact with his
guide. As soon as the topic is decided, the first task is to decide about the
design. A research design is a blueprint or structure within which research
is conducted. It constitutes the blueprint for the collection, measurement
and analysis of data.
According to Gay and Airasian (2000), "A design is a general strategy for
conducting a research study. The nature of the hypothesis, the variables
involved, and the constraints of the real world all contribute to the
selection of the design."
Kothari (1988) says, "Decisions regarding WHAT, WHERE, WHEN,
HOW MUCH, and by WHAT MEANS concerning an inquiry or a research
study constitute a research design."
Thus, it can be said that a research design is an outline of what the
researcher will do, from the writing of objectives, hypotheses and their
operational implications to the final analysis of data. A research design
should be able to convey the following:
 What is the study about?
 Where will the study be carried out?
 What type of data is necessary?
 Where is the necessary data available?
 How much time is needed to complete the study?
 What will be the sampling design?
 Which tools will be identified to collect data?
 How will the data be analysed?
Depending upon the type of research, the structure of the design may
vary. Suppose one is conducting experimental research; then the
identification of variables, control of variables, type of experimental
design, etc. should be discussed properly. If someone is conducting
qualitative research, then one should stress understanding of the setting,
the nature of the data, a holistic approach, the selection of participants and
inductive data analysis. Thus, the components of the design will be
decided according to the nature and type of the study.

In short, any efficient research design will help the researcher to carry out
the study in a systematic way.
PURPOSE OF RESEARCH DESIGN:
 A research design helps the investigator to obtain answers to the
research problem and the issues involved in the research, since it is the
outline of the entire research process.
 The design also tells us how to collect data, what observations are to
be carried out, how to make them, and how to analyse the data.
 The design also guides the investigator about the statistical techniques
to be used for analysis.
 The design also guides the control of certain variables in experimental
research.
Thus, the design guides the investigator to carry out the research step by
step in an efficient way. The design section is said to be complete/adequate
if the investigator can carry out his research by following the steps
described in the design.
COMPONENTS OF RESEARCH DESIGN:
The components of the research design will depend upon the type of
research design you choose, but the following are common components of
a research design:
 Purpose statement
 Techniques for collecting data
 Methods of analyzing the data
 Type of research methodology
 Possible challenges to conducting research
 Setting of research study
 Timeline
 Measurement of analysis
2.3 CONCEPT OF UNIVERSE/ POPULATION, SAMPLE
AND SAMPLING:
Universe or Population : It refers to the totality of objects or individuals
regarding which inferences are to be made in a sampling study.
Or
It refers to the group of people, items or units under investigation and
includes every individual.
First, the population is selected for observation and analysis.
Sample: It is a collection consisting of a part or subset of the objects or
individuals of the population, selected for the purpose of representing the
population. A sample is obtained by collecting information about only
some members of the population.
Sampling: It is the process of selecting a sample from the population. For
this, the population is divided into a number of parts called Sampling Units.
2.4 NEED FOR SAMPLING
 A large population can be conveniently covered.
 Time, money and energy are saved.
 Helpful when the units of the area are homogeneous.
 Used when cent per cent accuracy is not required.
 Used when the data are unlimited.
2.5 ADVANTAGES AND DISADVANTAGES OF
SAMPLING:
Advantages of Sampling :
Economical: A manageable sample reduces the cost compared to
covering the entire population.
Increased speed: The processes of research, like collection, analysis and
interpretation of data, take less time for a sample than for the whole
population.
Greater scope: Handling data becomes easier and more manageable in
the case of a sample. Moreover, comprehensive scope and flexibility exist
in the case of a sample.
Accuracy: Due to the limited area of coverage, completeness and
accuracy are possible. The processing of data is done accurately,
producing authentic results.
Rapport: Better rapport is established with the respondents, which helps
the validity and reliability of the results.
Disadvantages of Sampling :
Biasedness: Chances of biased selection leading to erroneous conclusions
may prevail. Bias in the sample may be due to a faulty method of selection
of individuals or the nature of the phenomenon itself.
Selection of a true representative sample: If the problem under study is
of a complex nature, it becomes difficult to select a truly representative
sample; if the sample is not representative, the results will not be accurate
or usable.
Need for specialized knowledge: The researcher needs knowledge,
training and experience in sampling techniques, statistical analysis and the
calculation of probable error. Lack of these may lead to serious mistakes.
Changeability of units: If the units of the population are not
homogeneous, the sampling technique will be unscientific. At times, all
the individuals may not be accessible or may be uncooperative; in such a
case, they have to be replaced. This introduces a change in the subjects to
be studied.
Impossibility of sampling: Sometimes the population is too small or too
heterogeneous to select a representative sample. In such cases a 'census
study' (information about each member of the population) is the
alternative. Sampling error also arises because of the expectation of a high
standard of accuracy.
2.6 CHARACTERISTICS OF A GOOD SAMPLE:
A good sample should possess the following characteristics:
 A true representative of the population
 Free from error due to bias
 Adequate in size for being reliable
 Units of the sample should be independent and relevant
 Units of the sample should be complete, precise and up-to-date
 Free from random sampling error
 Avoids substituting the original sample for convenience
2.7 TECHNIQUES OF SAMPLING:
There are different types of sampling techniques, based on two factors,
viz. (1) the representation basis and (2) the element selection technique.
On the representation basis, the sample may be a probability sample or it
may be a non-probability sample. On the element selection basis, the
sample may be either unrestricted or restricted. Here we will discuss two
types of sampling, viz.
a. Probability Sampling and
b. Non-Probability Sampling.
Difference between Probability and Non - Probability Sampling :
(1) A probability sample is one in which each member of the population
has an equal chance of being selected, but in a non-probability sample the
chance of a particular member of the population being chosen is unknown.
(2) In probability sampling, randomness is the element of control. Non-
probability sampling relies on personal judgment.
2.8 TYPES OF PROBABILITY SAMPLING:
Following are the types of probability sampling
1) Simple random sampling
2) Systematic sampling
3) Stratified sampling
4) Cluster sampling
5) Multi stage sampling
Simple Random Sampling: In this, all members have the same chance
(probability) of being selected. The random method provides an unbiased
cross-selection of the population. For example, suppose we wish to draw a
sample of 50 students from a population of 400 students: place all 400
names in a container and draw out 50 names one by one.
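As a small illustrative sketch (not from the original text), the same example of drawing 50 names from 400 can be carried out with Python's standard random module; the student names below are hypothetical.

# Simple random sampling: every student has an equal chance of selection.
import random

population = [f"Student_{i}" for i in range(1, 401)]   # hypothetical list of 400 students
sample = random.sample(population, k=50)               # draw 50 without replacement
print(len(sample), sample[:5])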
Systematic Sampling: Each member of the sample comes after an equal
interval from its previous member. For the same example of drawing 50
students from a population of 400, select one student out of every eight
students in the population. The starting point is chosen at random from the
first eight names, and thereafter every eighth student is selected.
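A comparable sketch for systematic sampling, again with hypothetical data and Python's random module, picks a random starting point and then takes every eighth member.

# Systematic sampling: interval k = 400 / 50 = 8, random start, then every 8th member.
import random

population = [f"Student_{i}" for i in range(1, 401)]
interval = len(population) // 50          # k = 8
start = random.randint(0, interval - 1)   # random starting point within the first interval
sample = population[start::interval]      # every 8th student thereafter
print(len(sample), sample[:5])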
Cluster Sampling (Area Sampling): A researcher selects sampling units
(clusters) at random and then makes a complete observation of all units in
each selected group. For example, if your research involves kindergarten
schools, select 15 schools randomly and then study all the children of
those 15 schools. In cluster sampling the unit of sampling consists of
multiple cases. It is also known as area sampling, as the selection of
individual members is made on the basis of place of residence or
employment.
Multistage Sampling: The sample to be studied is selected at random at
different stages. For example, suppose we need to select a sample of
middle-class working couples in Maharashtra state. The first stage will be
randomly selecting a specific number of districts in the state. The second
stage involves randomly selecting a specific number of rural and urban
areas for the study. At the third stage, from each area, a specific number of
middle-class families will be selected, and at the last stage, working
couples will be selected from these families.
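The nested selection described above can be sketched in Python as repeated random sampling at each stage; the numbers of districts, areas and families below are purely hypothetical.

# Multistage sampling: districts -> areas within districts -> families within areas.
import random

districts = {
    f"District_{d}": {
        f"Area_{d}_{a}": [f"Family_{d}_{a}_{f}" for f in range(1, 51)]
        for a in range(1, 11)
    }
    for d in range(1, 36)
}

sample = []
for d in random.sample(list(districts), k=5):                 # stage 1: 5 districts
    for a in random.sample(list(districts[d]), k=3):          # stage 2: 3 areas per district
        sample.extend(random.sample(districts[d][a], k=10))   # stage 3: 10 families per area
print(len(sample))   # 5 x 3 x 10 = 150 families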
2.9 TYPES OF NON-PROBABILITY SAMPLING:
The following are techniques of non -probability sampling :
a) Purposive Sampling
b) Convenience Sampling
c) Quota Sampling
d) Snowball Sampling
A) Purposive Sampling: In this sampling method, the researcher
selects a "typical group" of individuals who might represent the larger
population and then collects data from this group. For example, if a
researcher wants to survey the attitu de towards the teaching profession of
teachers teaching students from lower socio - economic stratum, he or she
might survey the teachers teaching in schools catering to students from
slums (more specifically, teachers teaching in Municipal schools) with th e
assumption that since all teachers teaching in Municipal schools cater to
students from the lower socio -economic stratum, they are representative of
all the teachers teaching students from lower socio -economic stratum.
B) Convenience Sampling: It refers to the procedure of obtaining units
or members who are most conveniently available. It consists of units
which are obtained because cases are readily available. In selecting the
incidental sample, the researcher determines the required sample size and
then simply collects data on that number of individuals who are easily
available.
C) Quota Sampling: The selection of the sample is made by the
researcher, who decides the quotas for selecting the sample from specified
sub-groups of the population. Here, the researcher first identifies those
categories which he or she feels are important to ensure the
representativeness of the population, then establishes a sample size for
each category, and finally selects individuals on an availability basis. For
example, an interviewer might need data from 40 adults and 20
adolescents in order to study television viewing habits. He will therefore
go out and select 20 adult men, 20 adult women, 10 adolescent girls and
10 adolescent boys so that he can interview them about their television
viewing habits.
D) Snowball Sampling: In snowball sampling, the researcher identifies
and selects available respondents who meet the criteria for inclusion in
his/her study. After the data have been collected from a subject, the
researcher asks for referrals to other individuals who would also meet the
criteria and represent the population of concern.
2.10 REVIEW OF RELATED LITERATURE
PURPOSE AND NEED FOR REVIEW OF RELATED
LITERATURE:
 Familiarize with the current state of knowledge and research on the
topic.
 Provide foundation and context to the research topic.
 Identify the areas of prior scholarship to avoid duplicating the study
and give due credit to the researchers.
 Develop the theoretical framework and methodology.
 Identify the gaps in research and justify the need for your research.
 Establish the relationship of your work in the context of the area of
research and other research studies.
Different ways to organize the review of related literature
CHRONOLOGICAL (by date): This is one of the most common ways,
especially for topics that have been talked about for a long time and have
changed over their history. Organise it in stages of how the topic has
changed: the first definitions of it, then major time periods of change as
researchers talked about it, then how it is thought about today.
BROAD -TO-SPECIFIC : Another approach is to start with a section on
the general type of issue you're reviewing, then narrow down to
increasingly specific issues in the literature until you reach the articles that
are most specifically similar to your research question, thesis statement,
hypothesis, or proposal. This can be a good way to introduce a lot of
background and related facets of your topic when there is not much
directly on your topic but you are tying together many related, broader
articles.
MAJOR MODELS or MAJOR THEORIES : When there are multiple
models or prominent theories, it is a good idea to outline the theories or
models that are applied the most in your articles. That way you can group
the articles you read by the theoretical framework that each prefers, to get
a good overview of the prominent approaches to your concept.
PROMINENT AUTHORS : If a certain researcher started a field, and
there are several famous people who developed it more, a good approach
can be grouping the famous author/researchers and what each is known to
have said about the topic. You can then organise other authors into groups
by which famous authors' ideas they are following. With this organisation
it can help to look at the citations your articles list in them, to see if there
is one author that appears over and over.
CONTRASTING SCHOOLS OF THOUGHT : If you find a dominant
argument comes up in your research, with researchers taking two sides and
talking about how the other is wrong, you may want to group your
literature review by those schools of thought and contrast the differences
in their approaches and ideas.
How to structure and write your literature review
 Consider possible ways of organizing your literature review:
o Chronological, i.e. by date of publication or trend
o Thematic
o Methodological
 Use Cooper's taxonomy to explore and determine what elements and
categories to incorporate into your review
 Revise and proofread your review to ensure your arguments,
supporting evidence and writing is clear and precise.
2.11 LET US SUM UP
In this unit, we discussed the concepts of population, sample and
sampling. The need for sampling and the advantages and disadvantages of
sampling were discussed, and the characteristics of a good sample were
elaborated. In the second part, the types of probability and non-probability
sampling were detailed.
2.12 UNIT END EXERCISE
Q.1 Define the following
(a) Sample
(b) Sampling
Q.2 Write short note for the following.
(a) Need for sampling
(b) Advantages of sampling
(c) Disadvantages of sampling
(d) Characteristics of a good sample
Q.3 Differentiate between probability and non -probability sampling.
Q.4 Discuss the types of probability sampling.
Q.5 Discuss the types of non -probability sampling.
REFERENCES :
1) Kulbir Singh Siddhu (1992), Methodology of Research in Education,
Sterling Publishers Private Limited, p. 252.
2) C.R. Kothari (1991), Research Methodology: Methods and Techniques,
Wiley Eastern Limited, p. 68.


3
VARIABLES AND HYPOTHESES
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Meaning of variables
3.3 Types of variables (independent, dependent, Extraneous, Intervening
and Moderator)
3.4 Controlling Extraneous and Intervening Variables
3.5 Concept of hypothesis
3.6 Sources of hypothesis
3.7 Types of hypotheses (Research, Directional, Non -Directional, Null,
Statistical and question form)
3.8 Formulating hypothesis
3.9 Characteristics of a good hypothesis
3.10 Hypothesis testing and theory
3.11 Errors in testing of hypothesis
3.12 Summary
3.0 OBJECTIVES:
After reading this unit you will be able to:
 Define variables
 Identify the different types of variables
 Show the relationship between the variables
 Explain the concept of hypotheses
 State the sources of hypotheses
 Explain different types of hypotheses
 Identify types of hypotheses
 Frame hypotheses skillfully
 Describe the characteristics of a good hypothesis
 Explain the significance level in hypothesis testing
 Identify the errors in testing of hypothesis

3.1 INTRODUCTION:
Each person/thing we collect data on is called an observation (in our
research work these are usually people/subjects). Observations (participants)
possess a variety of characteristics. If a characteristic of an observation
(participant) is the same for every member of the group, i.e. it does not
vary, it is called a constant. If a characteristic of an observation
(participant) differs for group members, it is called a variable. In research
we do not get excited about constants (since everyone is the same on that
characteristic); we are more interested in variables.
3.2 MEANING OF VARIABLES
A variable is any entity that can take on different values . So what does
that mean? Anything that can vary can be considered a variable. For
instance, age can be considered a variable because age can take different
values for different people or for the same person at different times.
Similarly, country can be considered a variable because a person's country
can be assigned a value.
A variable is a concept or abstract idea that can be described in
measurable terms. In research, this term refers to the measurable
characteristics, qualities, traits, or attributes of a particular individual,
object, or situation being studied.
Variables are properties or characteristics of some event, object, or person
that can take on different values or amounts.
Variables are things that we measure, control, or manipulate in research.
They differ in many respects, most notably in the role they are given in
our research and in the type of measures that can be applied to them.
By itself, the statement of the problem usually provides only general
direction for the research study; it does not include all the specific
information. There is some basic terminology that is extremely important
in how we communicate specific information about research problems and
about research in general.
Let us analyse an example; if a researcher is interested in the effects of
two different teaching methods on the science achievement of fifth -grade
students, the grade level is constant, because all individuals involved are
fifth-graders. This characteristic is the same for everyone; it is a ‘constant’
condition of the study.
After the different teaching methods have been implemented, the fifth -
graders involved would be measured with a science achievement test. It is
very unlikely that all of the fifth -graders would receive the same score on
this test, hence the score on the science achievement test becomes a
variable, because different individuals will have different scores; at least,
not all individuals will have the same scores. We would say that science
achievement is a variable, but we would mean, specifically, that the score
on the science achievement test is a variable.
There is another variable in the preceding example – the teaching method.
In contrast to the science achievement test score, which undoubtedly
would be measured on a scale with many possible values, teaching method
is a categorical variable consisting of only two categories, the two
methods. Thus, we have different kinds of variables and different names
or classifications for them.
A concept which can take on different quantitative values is called a
variable. As such, concepts like weight, height and income are all
examples of variables. Qualitative phenomena (or attributes) are also
quantified on the basis of the presence or absence of the concerning
attribute(s). Age is an example of a continuous variable, but the number of
male and female respondents is an example of a discrete variable.
3.3 TYPES OF VARIABLES:
There are many classification systems given in the literature; the names
we use are descriptive, as they describe the roles that variables play in a
research study. The variables described below by no means exhaust the
different systems and names that exist, but they are the most useful for
communicating about educational research.
3.3.1 Independent variables:
Independent variables are variables which are manipulated or controlled
or changed. In the example "a study of the effect of teacher praise on the
reading achievement of second-graders", the researcher is trying to
determine whether there is a cause-and-effect relationship, so the kind of
praise is varied to see whether it produces different scores on the reading
achievement test. We call this a manipulated independent variable
(treatment variable). The amount and kind of praise is manipulated by the
researcher. The researcher could also analyze the scores for boys and girls
separately to see whether the results are the same for both genders. In this
case gender is a classifying or attribute independent variable. The
researcher cannot manipulate gender, but can classify the children
according to gender.
3.3.2 Dependent variables:
Dependent variables are the outcome variables and are the variables for
which we calculate statistics. The variable which changes on account of
independent variable is known as dependent variable .
Let us take the example, a study of the effect of teacher praise on the
reading achievement of second-graders; the dependent variable is reading
achievement. We might compare the average reading achievement scores
of second-graders in different praise conditions such as no praise, oral
praise, written praise, and combined oral and written praise.
The following example further illustrates the use of variables and
constants. In a study conducted to determine the effect of three different
teaching methods on achievement in elementary algebra, each of three
ninth-grade algebra sections in the same school, taught by the same
teacher, is taught using one of the methods. Both boys and girls are
included in the study. The constants in the study are grade level, school,
and teacher. (This assumes that, except for method, the teacher can hold
teaching effectiveness constant.) The independent variables in the study
are teaching method and gender of the student. Teaching method has three
levels that arbitrarily can be designated methods A, B, and C; gender of
the student, of course, has two levels. Achievement in algebra, as
measured at the end of the instructional period, is the dependent variable.
The terms dependent and independent variable apply mostly to
experimental research, where some variables are manipulated, and in this
sense they are "independent" of the initial reaction patterns, features,
intentions, etc. of the subjects. Some other variables are expected to be
"dependent" on the manipulation or experimental conditions. That is to
say, they depend on "what the subject will do" in response. Somewhat
contrary to the nature of this distinction, these terms are also used in
studies where we do not literally manipulate independent variables, but
only assign subjects to "experimental groups" based on some pre-existing
properties of the subjects. Independent variables are those that are
manipulated, whereas dependent variables are only measured or registered.
Example 1: A study of teacher -student classroom interaction at different
levels of schooling.
Independent variable: Level of schooling, four categories –
primary, upper primary, secondary and junior college.
Dependent variable: Score on a classroom observation inventory, which
measures teacher – student interaction
Example 2: A comparative study of the professional attitudes of
secondary school teachers by gender.
Independent variable: Gender of the teacher – male, female.
Dependent variable: Score on a professional attitude inventory.
3.3.3 Extraneous variable:
Independent variables that are not related to the purpose of the study, but
may affect the dependent variable, are termed extraneous variables.
Suppose the researcher wants to test the hypothesis that there is a
relationship between children's gains in social studies achievement and
their self-concepts. In this case self-concept is an independent variable
and social studies achievement is a dependent variable. Intelligence may
as well affect social studies achievement, but since it is not related to the
purpose of the study undertaken by the researcher, it will be termed an
extraneous variable. Whatever effect is noticed on the dependent variable
as a result of extraneous variable(s) is technically described as an
'experimental error'. A study must always be so designed that the effect
upon the dependent variable is attributed entirely to the independent
variable(s), and not to some extraneous variable or variables.
E.g., in a study of the effectiveness of different methods of teaching
Social Science, variables such as the teacher's competence, the teacher's
enthusiasm, age and socio-economic status also contribute substantially to
the teaching-learning process. These cannot always be controlled by the
researcher, and the conclusions may lack credibility because of such
extraneous variables.
3.3.4 Intervening variables:
They intervene between cause and effect. They are difficult to observe, as
they relate to individuals' feelings such as boredom, fatigue and
excitement. At times some of these variables cannot be controlled or
measured, but they have an important effect upon the result of the study,
as they intervene between cause and effect. Though difficult, they have to
be controlled through an appropriate design.
E.g., in "Effect of immediate reinforcement on learning the parts of
speech", factors other than reinforcement, such as anxiety, fatigue and
motivation, may be intervening variables. They are difficult to define in
operational, observable terms; however, they cannot be ignored and must
be controlled using an appropriate research design.
3.3.5 Moderator:
A moderator is a third variable that, when introduced into an analysis,
alters or has a contingent effect on the relationship between an
independent and a dependent variable. A moderator variable is an
independent variable that is not of primary interest but has levels which,
when combined with the levels of the independent variable of interest,
produce different effects.
For example, suppose that the researcher designs a study to determine the
impact of the lengths of reading passages on the comprehension of the
reading passage. The design has three levels of passage length: 100 words,
200 words, and 300 words. The participants in the study are fourth-, fifth-
and sixth-graders.
Suppose that the three grade levels all did very well on the 100 - word
passage, but only the sixth -graders did very well on the 300 - word
passage. This would mean that successfully comprehending reading
passages of different lengths was moderated by grade level.
Check your progress:
1. What is a Variable?
2. Identify the variables in this example “Teaching effectiveness of
secondary school teachers in relation to their characteristics”.
3.4 CONTROLLING EXTRANEOUS AND
INTERVENING VARIABLES
All experimental designs have one central characteristic: they are based on
manipulating the independent variable and measuring the effect on the
dependent variable. Experimental designs result in inferences drawn from
the data that explain the relationships between the variables.
The classic experimental design consists of the experimental group and
the control group. In the experimental group the independent variable is
manipulated. In the control group, the dependent variable is measured
when no alteration has been made to the independent variable. The
dependent variable is measured in the experimental group in the same
way, and at the same time, as in the control group.
The prediction is that the dependent variable in the experimental group
will change in a specific way and that the dependent variable in the control
group will not change.
Controlling Unwanted Influences:
To obtain a reliable answer to the research question, the design should
eliminate unwanted influences. The amount of control that the researcher
has over the variables being studied varies, from very little in exploratory
studies to a great deal in experimental design, but the limitations on
control must be addressed in any research proposal.
These unwanted influences stem from one or more of the following:
extraneous variables, bias, the Hawthorne effect, and the passage of time.
Extraneous Variables :
Extraneous variables are variables that can interfere with the action of the
independent variable. Since they are not part of the study, their influence
must be controlled.
In the research literature, extraneous variables, also referred to as
intervening variables, directly affect the action of the independent variable
on the dependent variable. Intervening variables are those variables that
occur in the study setting. They include economic, physical and
psychological variables. Therefore, it is important to control extraneous
variables while studying the effect of the independent variable on the
dependent variable. We must be very careful to control all possible
extraneous variables that might influence the dependent variable.
Methods of controlling extraneous variables include :
 randomization
 homogeneous sampling techniques
 matching
 building the variables into the design
 statistical control
Randomization: Theoretically, randomization is the only method of
controlling all possible extraneous variables. The random assignment of
subjects to the various treatment and control groups means that the groups
can be considered statistically equal in all ways at the beginning of the
experiment. It does not mean that they actually are equal for all variables.
However, the probability of their being equal is greater than the
probability of their not being equal, if the random assignment was carried
out properly. The exception lies with small groups, where random
assignment could result in an unequal distribution of crucial variables. If
this possibility exists, another method would be more appropriate. In most
instances, however, randomization is the best method of controlling
extraneous variables.
A random sampling technique results in a normal distribution of
extraneous variables in the sample; this approximates the distribution of
those variables in the population. The purpose of randomization is to
ensure a representative sample.
Randomization comes into play when we randomly assign subjects to
experimental and control groups, thus ensuring that the groups are as
equivalent as possible prior to the manipulation of the independent
variable. Random assignment assures that the assignment is not influenced
by the researcher's preferences; instead, group membership is determined
by chance for each subject.
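As a brief, hypothetical sketch of random assignment (not taken from the text), a list of participants can be shuffled and split so that group membership is decided by chance alone.

# Random assignment of 40 hypothetical participants to two groups.
import random

participants = [f"P{i:02d}" for i in range(1, 41)]
random.shuffle(participants)                 # order is now determined by chance
experimental_group = participants[:20]
control_group = participants[20:]
print(experimental_group[:5], control_group[:5])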
Homogeneous Sample: One simple and effective way of controlling an
extraneous variable is not to allow it to vary. We may choose a sample
that is homogeneous for that variable. For example, if a researcher believes
that the gender of the subject might affect the dependent variable, he/she
could select subjects of the desired gender only. If the researcher believes
that socio-economic status might influence the dependent variable, he/she
would select subjects from a particular range of socio-economic status.
After selecting students from a homogeneous population, the researcher
may assign the subjects to the experimental and control groups randomly.
Matching: When randomization is not possible, or when the experimental
groups are too small and contain some crucial variables, subjects can be
matched for those variables. The experimenter chooses subjects who
match each other for the specified variables. One of these matched
subjects is assigned to the control group and the other to the experimental
group, thus ensuring the equality of the groups at the outset.
The process of matching is time consuming and introduces considerable
subjectivity into sample selection. Therefore, it should be avoided
whenever possible. If we use matching, limit the number of groups to be
matched and keep the number of variables for which the subjects are
matched low. Matching with more than five variables becomes extremely
cumbersome, and it is almost impossible to find enough matched partners
for the sample. Matching may be used in all research designs when we are
looking at certain outcomes and want to have as much control as possible.
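A minimal sketch of pair matching, under the assumption that subjects are matched on a single hypothetical pretest score, is shown below; with several matching variables the procedure becomes correspondingly more cumbersome.

# Matching: pair subjects on a matching variable, then assign one member
# of each pair to each group at random.
import random

subjects = [("S01", 42), ("S02", 44), ("S03", 55), ("S04", 54),
            ("S05", 61), ("S06", 60), ("S07", 70), ("S08", 69)]
subjects.sort(key=lambda s: s[1])          # order by the matching variable
experimental, control = [], []
for i in range(0, len(subjects), 2):       # take adjacent (matched) pairs
    pair = list(subjects[i:i + 2])
    random.shuffle(pair)                   # coin toss within the pair
    experimental.append(pair[0][0])
    control.append(pair[1][0])
print(experimental, control)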

Building Extraneous Variables into the Design: When extraneous
variables cannot be adequately controlled by randomization, they can be
built into the design as independent variables. They would have to be
added to the purpose of study and tested for significance along with other
variables. In this way, their effect can be measured and separated from the
effect of the independent variable.
Statistical Control: In experimental designs, the effect of the extraneous
variables can be subtracted statistically from the total action of the
variables. The technique of analysis of covariance (ANCOVA) may be
used for this purpose. Here, one or more extraneous variables are
measured along with the dependent variables. This method adds to the
cost of the study because of the additional data collection and analysis
required. Therefore, it should be used only as a last resort.
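One common way of carrying out such statistical control in Python is to fit an analysis of covariance with the statsmodels library; the sketch below uses a hypothetical pretest score as the covariate, and the data and variable names are illustrative only.

# ANCOVA sketch: the pretest (covariate) is removed statistically before
# the group (treatment) effect on the posttest is evaluated.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group":    ["control"] * 5 + ["treatment"] * 5,
    "pretest":  [40, 45, 50, 55, 60, 41, 46, 51, 56, 61],
    "posttest": [44, 50, 53, 60, 63, 52, 58, 62, 68, 73],
})

model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # adjusted (Type II) sums of squares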
Methods of controlling intervening variables:
Since the intervening variables cannot be observed during the experiment,
it is not easy to control them. They need to be controlled statistically while
analyzing the data. To control the effects of these variables, the researcher
may sometimes have to identify and measure them along with the
dependent variable being studied, and statistically reduce or minimize the
effect of the intervening variable.
3.5 CONCEPT OF HYPOTHESIS
The hypothesis is usually considered the principal instrument in research.
The derivation of a suitable hypothesis goes hand in hand with the
selection of a research problem. A hypothesis, as a tentative hunch,
explains the situation under observation so that the study can be designed
to prove or disprove it. What a researcher is looking for is a working or
positive hypothesis. It is very difficult, laborious and time consuming to
make adequate discriminations in the complex interplay of facts without a
hypothesis. It gives a definite point and direction to the study, prevents
blind search and indiscriminate gathering of data, and helps to delimit the
field of inquiry.
3.5.1 Meaning:
The word hypothesis (plural: hypotheses) is derived from the Greek word
'hypotithenai', meaning 'to put under' or 'to suppose'. For a hypothesis to
be put forward as a scientific hypothesis, the scientific method requires
that one can test it.
Etymologically, hypothesis is made up of two words, "hypo" (less than)
and "thesis", which mean less than or less certain than a thesis. It is the
presumptive statement of a proposition, or a reasonable guess based upon
the available evidence, which the researcher seeks to prove through his
study.
According to Lundberg, "A hypothesis is a tentative generalization, the
validity of which remains to be tested. In its most elementary stage, the
hypothesis may be any hunch, guess, imaginative idea, which becomes the
basis for action or investigation."
Goode and Hatt have defined it as "a proposition which can be put to test
to determine its validity." A hypothesis is a statement temporarily accepted as
true in the light of what is, at the time, known about a phenomenon, and it
is employed as a basis for action in the search of new truth.
A hypothesis is a tentative assumption drawn from knowledge and theory
which is used as a guide in the investigation of other facts and theories that
are yet unknown.
It is a guess, supposition or tentative inference as to the existence of some
fact, condition or relationship relative to some phenomenon which serves
to explain such facts as already are known to exist in a given area of
research and to guide the search for new truth.
Hypotheses reflect the research worker’s guess as to the probable outcome
of the experiments.
A hypothesis is therefore a shrewd and intelligent guess, a supposition,
inference, hunch, provisional statement or tentative generalization as to
the existence of some fact, condition or relationship relative to some
phenomenon which serves to explain already known facts in a given area
of research and to guide the search for new truth on the basis of empirical
evidence. The hypothesis is put to test for its tenability and for
determining its validity.
In this connection Lundberg observes: Quite often a research hypothesis is
a predictive statement, capable of being tested by scientific methods, that
relates an independent variable to some dependent variable. For example,
consider statements like the following ones: “Students who receive
counseling will show a greater increase in creativity than students not
receiving counseling” or “There is a positive relationship between
academic aptitude scores and scores on a social adjustment inventory for
high school students”
These are hypotheses capable of being objectively verified and tested.
Thus, we may conclude that a hypothesis states what we are looking for
and it is a proposition which can be put to a test to determine its validity.
3.5.2 Importance of the Hypotheses:
The importance of hypotheses is generally recognized more in studies
which aim to make predictions about some outcome. In experimental
research, the researcher is interested in making predictions about the
outcome of the experiment, or what the results are expected to show, and
therefore the role of hypotheses is considered to be of utmost importance.
In historical or descriptive research, on the other hand, the researcher is
investigating the history of a city or a nation, the life of a man, the
happening of an event, or is seeking facts to determine the status quo of
some situation and thus may not have a basis for making a prediction of
results. A hypothesis, therefore, may not be required in such fact -finding
studies. Hillway (1964) too is of the view that “when fact -finding alone is
the aim of the study, a hypothesis may not be required.”
Most historical or descriptive studies, however, involve not only fact -
finding but interpretation of facts to draw generalizations. If a researcher is
tracing the history of an educational institution or making a study about
the results of a coming assembly poll, the facts or data he gathers will
prove useful only if he is able to draw generalizations from them.
Whenever possible, a hypothesis is recommended for all major studies to
explain observed facts, conditions or behaviour and to serve as a guide in
the research process. The importance of hypotheses may be summarized
as under.
1. Hypotheses facilitate the extension of knowledge in an area. They
provide tentative explanations of facts and phenomena, and can be
tested and validated. A hypothesis sensitizes the investigator to
certain aspects of situations which are relevant from the standpoint of
the problem in hand.
2. Hypotheses provide the researcher with rational statements, consisting
of elements expressed in a logical order of relationships, which seek to
describe or to explain conditions or events that have not yet been
confirmed by facts. Hypotheses enable the researcher to relate logically
known facts to intelligent guesses about unknown conditions. A
hypothesis is a guide to the thinking process and the process of
discovery; it is, as it were, the investigator's eye or guiding light in the
work of darkness.
3. Hypotheses provide direction to the research. They define what is
relevant and what is irrelevant. The hypotheses tell the researcher
specifically what he needs to do and find out in his study. Thus, they
prevent the review of irrelevant literature and the collection of
useless or excess data. Hypotheses provide a basis for selecting the
sample and the research procedures to be used in the study. The
statistical techniques needed in the analysis of data, and the
relationships between the variables to be tested, are also implied by
the hypotheses. Furthermore, the hypotheses help the researcher to
delimit the scope of his study so that it does not become too broad or
unwieldy.
4. Hypotheses provide the basis for reporting the conclusions of the
study. They serve as a framework for drawing conclusions. The
researcher will find it very convenient to test each hypothesis
separately and state the conclusions that are relevant to each. On the
basis of these conclusions, he can make the research report
interesting and meaningful to the reader. Hypotheses provide the
outline for setting out conclusions in a meaningful way.
Hypothesis has a very important place in research although it occupies a
very small space in the body of a thesis. It is almost impossible for a
research worker not to have one or more hypotheses before proceeding
with his work.
3.6 SOURCES OF HYPOTHESIS:
The derivation of a good hypothesis demands both experience and
creativity. Though the hypothesis should precede the gathering of data, a
good hypothesis can come only from experience. Some degree of data
gathering, the review of related literature, or a pilot study must precede
the development and gradual refinement of the hypothesis. A good
investigator must have not only an alert mind capable of deriving a
relevant hypothesis, but also a critical mind capable of rejecting a faulty
hypothesis.
What is the source of hypotheses? They may be derived directly from
the statement of the problem; they may be based on the research literature,
or in some cases, such as in ethnographic research, they may (at least in
part) be generated from data collection and analysis. The various sources
of hypotheses may be:
 Review of similar studies in the area or of studies on similar problems;
 Examination of data and records, if available, concerning the problem
for possible trends, peculiarities and other clues;
 Discussions with colleagues and experts about the problem, its origin
and the objectives in seeking a solution;
 Exploratory personal investigation, which involves original field
interviews on a limited scale with interested parties and individuals,
with a view to securing greater insight into the practical aspects of the
problem;
 Intuition is often considered a reasonable source of research
hypotheses, especially when it is the intuition of a well-known
researcher or theoretician who "knows what is known";
 Rational induction is often used to form "new hypotheses" by logically
combining the empirical findings from separate areas of research;
 Prior empirical research findings are perhaps the most common source
of new research hypotheses, especially when carefully combined using
rational induction.
 Thus, hypotheses are formulated as a result of prior thinking about the
subject, examination of the available data and material, including
related studies, and the counsel of experts.

Check your progress:
1. Define hypothesis.
2. In which kinds of research is a hypothesis stated?
3. What are the sources of hypotheses?
3.7 TYPES OF HYPOTHESIS:
3.7.1 Research hypothesis: When a prediction or a hypothesized
relationship is to be tested by scientific methods, it is termed a research
hypothesis. The research hypothesis is a predictive statement that relates
an independent variable to a dependent variable. Usually a research
hypothesis must contain at least one independent and one dependent
variable. A research hypothesis must be stated in a testable form for its
proper evaluation. As already stressed, this form should indicate a
relationship between the variables in clear, concise and understandable
language. Research hypotheses are classified as being directional or
non-directional.
3.7.2 Directional hypothesis: Hypotheses which stipulate the direction of
the expected differences or relationships are termed directional hypotheses.
For example, the research hypothesis "There will be a positive relationship
between an individual's attitude towards high caste Hindus and his socio-
economic status" is a directional research hypothesis. This hypothesis
stipulates that individuals with a favourable attitude towards high caste
Hindus will generally come from higher socio-economic Hindu families,
and therefore it does stipulate the direction of the relationship. Similarly,
the hypothesis "Adolescent boys with high IQ will exhibit lower anxiety
than adolescent boys with low IQ" is a directional research hypothesis
because it stipulates the direction of the difference between groups.
3.7.3 Non-directional hypothesis: A research hypothesis which does not specify the direction of expected differences or relationships is a non-directional research hypothesis. For example, the hypotheses: "There will be a difference in the adaptability of fathers and mothers towards rearing of their children" or "There is a difference in the anxiety level of adolescent girls of high IQ and low IQ" are non-directional research hypotheses. Although these hypotheses stipulate that there will be a difference, the direction of the difference is not specified. A research hypothesis can take either the statistical form, the declarative form, the null form, or the question form.
3.7.4 Statistical hypothesis: When it is time to test whether the data support or refute the research hypothesis, it needs to be translated into a statistical hypothesis. A statistical hypothesis is given in statistical terms. Technically, in the context of inferential statistics, it is a statement about one or more parameters that are measures of the populations under study. Statistical hypotheses often are given in quantitative terms, for example: "The mean reading achievement of the population of third-grade students taught by Method A equals the mean reading achievement of the population taught by Method B." Therefore, we can say that statistical hypotheses are concerned with populations under study. We use inferential statistics to draw conclusions
about population values even though we have access to only a sample of participants. In order to use inferential statistics, we need to translate the research hypothesis into a testable form, which is called the null hypothesis. An alternative or declarative hypothesis indicates the situation corresponding to when the null hypothesis is not true. The stated hypothesis will differ depending on whether or not it is a directional research hypothesis.
3.7.5 Declarative hypothesis: When the researcher makes a positive
statement about the outcome of the study, the hypothesis takes the
declarative form. For example, the hypothesis: “The academic
achievement of extroverts is significantly higher than that of the
introverts,” is stated in the declarative form. In such a statement of
hypothesis, the researcher makes a prediction based on his theoretical
formulations of what should happen if the explanations of the behaviour
he has given in his theory are correct.
3.7.6 Null hypothesis: In the null form, the researcher makes a statement that no relationship exists. The hypothesis, "There is no significant difference between the academic achievement of high school athletes and that of non-athletes," is an example of a null hypothesis. Since null hypotheses can be tested statistically, they are often termed as statistical hypotheses. They are also called the testing hypotheses when declarative hypotheses are tested statistically by converting them into the null form. The null form states that even where a relationship seems to hold good, it is due to mere chance. It is for the researcher to reject the null hypothesis by showing that the outcome mentioned in the declarative hypothesis does occur and the quantum of it is such that it cannot be easily dismissed as having occurred by chance.
3.7.7 Question form hypothesis: In the question form hypothesis, a question is asked as to what the outcome will be instead of stating what outcome is expected. Suppose a researcher is interested in knowing whether programmed instruction has any relationship to test anxiety of children.
The declarative form of the hypothesis might be: "Teaching children through the programmed instruction material will decrease their test anxiety."
The null form would be: "Teaching children through programmed instruction material will have no effect on their test anxiety." This statement shows that no relationship exists between programmed instruction and test anxiety.
The question form puts the statement in the form: "Will teaching children through programmed instruction decrease their test anxiety?"
3.8 FORMULATING HYPOTHESIS:
Hypotheses are guesses or tentative generalizations, but these guesses are not merely accidents. Collection of factual information alone does not lead to successful formulation of hypotheses. Hypotheses are the products of considerable speculation and imaginative guess work. They are based partly on known facts and explanations, and partly conceptual. There are no precise rules for formulating hypotheses and deducing consequences from them that can be empirically verified.
However, there are certain necessary conditions that are conducive to their formulation. Some of them are:
Richness of background knowledge: A researcher may deduce hypotheses inductively after making observations of behaviour, noticing trends or probable relationships. For example, a classroom teacher daily observes student behaviour. On the basis of his experience and his knowledge of behaviour in a school situation, the teacher may attempt to relate the behaviour of students to his own, to his teaching methods, to changes in the school environment, and so on. From these observed relationships, the teacher may inductively formulate a hypothesis that attempts to explain such relationships.
Background knowledge, however, is essential for perceiving relationships among the variables and for determining what findings other researchers have reported on the problem under study. New knowledge, new discoveries, and new inventions should always form continuity with the already existing corpus of knowledge and, therefore, it becomes all the more essential to be well versed with the already existing knowledge. Hypotheses may be formulated correctly by persons who have rich experiences and academic background, but they can never be formulated by those who have poor background knowledge.
 Versatility of intellect : Hypotheses are also derived through
deductive reasoning from a theory. Such hypotheses are called deductive hypotheses. A researcher may begin a study by selecting one of the theories in his own area of interest. After selecting the particular theory, the researcher proceeds to deduce a hypothesis from this theory through symbolic logic or mathematics. This is possible only when the researcher has a versatile intellect and can make use of it for restructuring his experiences. Creative imagination is the product of an adventurous and sound attitude and an agile intellect. In formulating hypotheses, the researcher works along numerous paths. He has to make a consistent effort and develop certain habits and attitudes. Moreover, the researcher has to saturate himself with all possible information about the problem and then think liberally about it and proceed further in the conduct of the study.
 Analogy and other practices : Analogies also lead the researcher to
clues that he might find useful in the formulation of hypotheses and for finding solutions to problems. For example, suppose a new situation
resembles an old situation in regard to a factor X. If the researcher knows from previous experience that the old situation is related to other factors Y and Z as well as to X, he reasons that perhaps the new situation is also related to Y and Z. The researcher, however, should use analogies with caution as they are not foolproof tools for finding solutions to problems. At times, conversations and consultations with colleagues and experts from different fields are also helpful in formulating important and useful hypotheses.
3.9 CHARACTERISTICS OF A GOOD HYPOTHESIS
Hypothesis must possess the following characteristics:
i) Hypothesis should be clear and precise. If the hypothesis is not clear
and precise, the inferences drawn on its basis cannot be taken as
reliable.
ii) Hypothesis should be capable of being tested. Some prior study may be done by the researcher in order to make the hypothesis a testable one. A hypothesis "is testable if other deductions can be made from it which, in turn, can be confirmed or disproved by observation."
iii) Hypothesis should state relationship between variables, if it happens
to be a relational hypothesis.
iv) Hypothesis should be limited in scope and must be specific. A
researcher must remember that narrower hypotheses are generally
more testable and he should develop such hypotheses.
v) Hypothesis should be stated as far as possible in most simple terms
so that the same is easily understandable by all concerned. But one
must remember that simplicity of hypothesis has nothing to do with
its significance.
vi) Hypothesis should be consistent with most known facts, i.e. it must be consistent with a substantial body of established facts. In other words, it should be one which judges accept as being the most likely.
vii) The hypotheses selected should be amenable to testing within a reasonable time. The researcher should not select a problem which involves hypotheses that are not amenable to testing within a reasonable and specified time. He must know that there are problems that cannot be solved for a long time to come. These are problems of immense difficulty that cannot be profitably studied because of the lack of essential techniques or measures.
viii) Hypothesis must explain the facts that gave rise to the need for
explanation. This means that by using the hypothesis plus other
known and accepted generalizations, one should be able to deduce
the original problem condition. Thus, hypothesis must actually
explain what it claims to explain, it should have empirical reference.
Check your progress:
1. What are the different types of hypotheses?
2. List the characteristics of hypothesis.
3.10 HYPOTHESIS TESTING AND THEORY
When the purpose of research is to test a research hypothesis, it is termed as hypothesis-testing research. It can be of the experimental design or of the non-experimental design.
Research in which the independent variable is manipulated is termed 'experimental hypothesis-testing research' and research in which an independent variable is not manipulated is called 'non-experimental hypothesis-testing research'.
Let us get acquainted with relevant terminologies used in hypothesis
testing.
Null hypothesis and alternative hypothesis:
In the context of statistical analysis, we often talk about the null hypothesis and the alternative hypothesis. If we are to compare method A with method B about its superiority and if we proceed on the assumption that both methods are equally good, then this assumption is termed as the null hypothesis. As against this, if we think that method A is superior or that method B is inferior, we are then stating what is termed as the alternative hypothesis. The null hypothesis is generally symbolized as H0 and the alternative hypothesis as Ha. The null hypothesis and the alternative hypothesis are chosen before the sample is drawn. Generally, in hypothesis testing we proceed on the basis of the null hypothesis, keeping the alternative hypothesis in view. Why so? The answer is that on the assumption that the null hypothesis is true, one can assign the probabilities to different possible sample results, but this cannot be done if we proceed with the alternative hypothesis. Hence the use of the null hypothesis (at times also known as the statistical hypothesis) is quite frequent.
a) The level of significance: This is a very important concept in the context of hypothesis testing. It is always some percentage (usually 5%) which should be chosen with great care, thought and reason. In case we take the significance level at 5 per cent, then this implies that H0 will be rejected when the sampling result (i.e. observed evidence) has a less than 0.05 probability of occurring if H0 is true. In other words, the 5 per cent level of significance means that the researcher is willing to take as much as a 5 per cent risk of rejecting the null hypothesis when it (H0) happens to be true. Thus, the significance level is the maximum value of the probability of rejecting H0 when it is true and is usually determined in advance before testing the hypothesis.
b) The criteria for rejecting the null hypothesis may differ.
Sometimes the null hypothesis is rejected only when the quantity of the outcome is so large that the probability of its having occurred by mere chance is 1 time out of 100. We consider the probability of its having
occurred by chance to be too little, and we reject the chance theory of the null hypothesis and take the occurrence to be due to a genuine tendency. On other occasions, we may be bolder and reject the null hypothesis even when the quantity of the reported outcome is likely to occur by chance 5 times out of 100. Statistically, the former is known as the rejection of the null hypothesis at the 0.01 level of significance and the latter as the rejection at the 0.05 level. It may be pointed out that if the researcher is able to reject the null hypothesis, he cannot directly uphold the declarative hypothesis. If an outcome is not held to be due to chance, it does not mean that it is due to the very cause-and-effect relationship asserted in the particular declarative statement. It may be due to something else which the researcher may have failed to control.
c) Decision rule or test of hypothesis: Given a hypothesis H0 and an alternative hypothesis Ha, we make a rule, known as the decision rule, according to which we accept H0 (i.e. reject Ha) or reject H0 (i.e. accept Ha). For instance, if H0 is that a certain lot is good (there are very few defective items in it) against Ha that the lot is not good (there are too many defective items in it), then we must decide the number of items to be tested and the criterion for accepting or rejecting the hypothesis. We might test 10 items in the lot and plan our decision saying that if there are none or only 1 defective item among the 10, we will accept H0; otherwise we will reject H0 (or accept Ha). This sort of basis is known as a decision rule.
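To make this decision rule concrete, the probability of accepting H0 under it can be worked out from the binomial distribution. The sketch below is only illustrative: the sample of 10 items and the acceptance number of 1 defective item follow the example above, while the assumed true proportions of defectives in the lot are hypothetical.

```python
# A minimal sketch of the decision rule above, assuming SciPy is available.
# We accept H0 (the lot is good) if a sample of 10 items contains 0 or 1
# defective items. The true defect rates below are hypothetical.
from scipy.stats import binom

n_sampled = 10          # items inspected from the lot
acceptance_number = 1   # accept H0 if the number of defectives found is <= 1

for true_defect_rate in (0.02, 0.10, 0.30):
    # P(0 or 1 defectives among 10 items) = binomial CDF at the acceptance number
    p_accept = binom.cdf(acceptance_number, n_sampled, true_defect_rate)
    print(f"True defect rate {true_defect_rate:.0%}: "
          f"probability of accepting H0 = {p_accept:.3f}")
```

As the true proportion of defectives rises, the probability of (wrongly) accepting H0 falls, which is exactly the behaviour a sensible decision rule should show.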
d) Two-tailed and one-tailed tests: In the context of hypothesis testing, these two terms are quite important and must be clearly understood. A two-tailed test rejects the null hypothesis if, say, the sample mean is significantly higher or lower than the hypothesized value of the mean of the population. Such a test is appropriate when the null hypothesis is some specified value and the alternative hypothesis is a value not equal to the specified value of the null hypothesis. In a two-tailed test, there are two rejection regions, one on each tail of the curve.
If the significance level is 5 per cent and the two-tailed test is to be applied, the probability of the rejection area will be 0.05 (equally divided on both tails of the curve as 0.025) and that of the acceptance region will be 0.95.
But there are situations when only a one-tailed test is considered appropriate. A one-tailed test would be used when we are to test, say, whether the population mean is either lower than or higher than some hypothesized value. We should always remember that accepting H0 on the basis of sample information does not constitute proof that H0 is true. We only mean that there is no statistical evidence to reject it.
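A hedged illustration of the two cases is sketched below using an independent-samples t-test on two groups taught by different methods. The scores are invented for illustration, and the one-tailed option (the `alternative` argument) assumes SciPy version 1.6 or later.

```python
# A minimal sketch (not a prescribed procedure): comparing two groups at the
# 5 per cent level with a two-tailed and a one-tailed t-test. Scores are invented.
from scipy import stats

method_a = [62, 58, 71, 65, 60, 68, 63, 66]   # hypothetical achievement scores
method_b = [55, 60, 52, 58, 61, 54, 57, 59]

alpha = 0.05

# Two-tailed: H0 is rejected if the Method A mean is significantly higher OR lower.
t_two, p_two = stats.ttest_ind(method_a, method_b)
print(f"two-tailed: t = {t_two:.2f}, p = {p_two:.4f}, reject H0: {p_two < alpha}")

# One-tailed: H0 is rejected only if the Method A mean is significantly higher
# (the 'alternative' argument requires SciPy >= 1.6).
t_one, p_one = stats.ttest_ind(method_a, method_b, alternative="greater")
print(f"one-tailed: t = {t_one:.2f}, p = {p_one:.4f}, reject H0: {p_one < alpha}")
```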
3.11 ERRORS IN TESTING OF HYPOTHESIS
Type I and Type II errors: In the context of testing of hypotheses, there are basically two types of errors we can make. We may reject H0 when H0 is true, and we may accept H0 when in fact H0 is not true. The former is known as a Type I error and the latter as a Type II error. In other words, a Type I error means rejection of a hypothesis which should have been accepted, and a Type II error means accepting a hypothesis which should have been rejected. The Type I error is denoted by α (alpha), known as the α error, and is also called the level of significance of the test; the Type II error is denoted by β (beta) and is known as the β error. In a tabular form, the said two errors can be presented as follows:
Table 3.1 Decision

                 Accept H0                   Reject H0
H0 (true)        Correct decision            Type I error (α error)
H0 (false)       Type II error (β error)     Correct decision
The probability of a Type I error is usually determined in advance and is understood as the level of significance of testing the hypothesis. If the Type I error is fixed at 5 per cent, it means that there are about 5 chances in 100 that we will reject H0 when H0 is true.
We can control Type I error just by fixing it at a lower level. For instance,
if we fix it at 1 per cent, we will say that the maximum probability of
committing Type I error would only be 0.01.
But with a fixed sample size, n, when we try to reduce the Type I error, the probability of committing a Type II error increases. Both types of errors cannot be reduced simultaneously. There is a trade-off between the two types of errors, which means that the probability of making one type of error can only be reduced if we are willing to increase the probability of making the other type of error. To deal with this trade-off in business situations, decision-makers decide the appropriate level of Type I error by examining the costs or penalties attached to both types of errors. If a Type I error involves the time and trouble of reworking a batch of chemicals that should have been accepted, whereas a Type II error means taking a chance that an entire group of users of this chemical compound will be poisoned, then in such a situation one should prefer a Type I error to a Type II error. As a result, one must set a very high level for the Type I error in one's testing technique of a given hypothesis. Hence, in the testing of hypotheses, one must make all possible effort to strike an adequate balance between Type I and Type II errors.
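The meaning of a 5 per cent Type I error rate can also be checked by simulation. The sketch below repeatedly draws two samples from the same population (so H0 is in fact true) and records how often H0 is wrongly rejected; the population values, sample sizes and number of trials are arbitrary illustrations.

```python
# A minimal simulation sketch: when H0 is true, a test at the 0.05 level should
# reject H0 in roughly 5 per cent of repeated samples (the Type I error rate).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
alpha = 0.05
trials = 5000
false_rejections = 0

for _ in range(trials):
    # Both groups come from the SAME population, so H0 (equal means) is true.
    group_1 = rng.normal(loc=50, scale=10, size=30)
    group_2 = rng.normal(loc=50, scale=10, size=30)
    _, p_value = stats.ttest_ind(group_1, group_2)
    if p_value < alpha:
        false_rejections += 1   # a Type I error

print(f"Observed Type I error rate: {false_rejections / trials:.3f} "
      f"(expected to be about {alpha})")
```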
Check your progress:
1. Explain the term level of significance?
2. What are the two types of error in the testing of the hypothesis?
3.12 LET US SUM UP
It is important for the researcher to formulate hypotheses before data are
gathered. This is necessary for an objective and unbiased study. It should
be evident from what you have read so far that in order to carry out
research, you need to start by identifying a question which demands an answer, or a need which requires a solution. The problem can be generated either by an initiating idea, or by a perceived problem area. We also studied that there are important qualities of hypotheses which distinguish them from other forms of statement. A good hypothesis is a very useful aid to organizing the research effort. It specifically limits the enquiry to the interaction of certain variables; it suggests the methods appropriate for collecting, analyzing and interpreting the data; and the resultant confirmation or rejection of the hypothesis through empirical or experimental testing gives a clear indication of the extent of knowledge gained. The hypothesis must be conceptually clear. The concepts utilized in the hypothesis should be clearly defined – not only formally but also, if possible, operationally.
Hypothesis testing is the often-used strategy for deciding whether sample data offer such support for a hypothesis that generalization can be made. Thus, hypothesis testing enables us to make probability statements about population parameter(s).
3.13 REFERENCES
1. Best, J. W. & Kahn, J. V. (2009). Research in Education (10th ed.). New Delhi: Dorling Kindersley (India) Pvt Ltd.
2. Wiersma, W. & Jurs, S. G. (2009). Research Methods in Education: An Introduction (9th ed.). New Delhi: Pearson Education, Dorling Kindersley (India) Pvt Ltd.
3. Kothari, C. R. (2009). Research Methodology: Methods and Techniques (2nd ed.). New Delhi: New Age International (P) Ltd Publishers.
4. Gupta, S. (2007). Research Methodology and Statistical Techniques. New Delhi: Deep and Deep Publications Pvt Ltd.
5. McBurney, D. H. & White, T. L. (2007). Research Methods (7th ed.). Delhi: Akash Press.
6. Kaul, L. (2004). Methodology of Educational Research (3rd ed.). New Delhi: UBS Publishers' Distributors Pvt Ltd.
7. https://www.scribbr.com/methodology/quantitative-research/
4A
QUANTITATIVE AND QUALITATIVE RESEARCH - I
Unit Structure
4A.0 Objectives
4A.1 A) Quantitative Research – Concept
4A.1 B) Qualitative Research – Concept
4A.2 Nature of Descriptive Research
4A.3 Correlational Research
4A.4 Causal -Comparative Research
4A.5 Document Analysis
4A.6 Ethnography
4A.7 Case Study
4A.8 Analytical Method
4A.9 Survey Research
4A.10 References
4A.0 OBJECTIVES:
After reading this unit, the student will be able to:
(a) State the nature of descriptive research
(b) Explain how to conduct correlational research
(c) Explain how to conduct causal-comparative research
(d) Explain how to conduct case study research
(e) Explain the concept of documentary research
(f) Explain how to conduct ethnographic research
(g) Explain the concept of analytical research
(h) Explain the difference between qualitative and quantitative research
(i) Explain the concept of survey research
4A.1 A) QUANTITATIVE RESEARCH – CONCEPT:
Quantitative research is the process of collecting and analyzing numerical
data. It can be used to find patterns and averages, make predictions, test
causal relationships, and generalize results to wider populations.
Quantitative research is the opposite of qualitativ e research , which
involves collecting and analyzing non -numerical data (e.g. text, video, or
audio).
Quantitative research is widely used in the natural and social sciences:
biology, chemistry, psychology, economics, sociology, marketing, etc.
4A.1 B) QUALITATIVE RESEARCH – CONCEPT:
Qualitative research involves collecting and analyzing non -numerical data
(e.g., text, video, or audio) to understand concepts, opinions, or
experiences. It can be used to gather in -depth insights into a problem or
generate new ideas for research.
Qualitative research is the opposite of quantitative research , which
involves collecting and analyzing numerical data for statistical analysis.
Qualitative researc h is commonly used in the humanities and social
sciences, in subjects such as anthropology, sociology, education, health
sciences, history, etc.
4A.2 NATURE OF DESCRIPTIVE RESEARCH:
The descriptive research attempts to describe, explain and interpret conditions of the present, i.e. what is. The purpose of a descriptive research is to examine a phenomenon that is occurring at a specific place(s) and time. A descriptive research is concerned with conditions, practices, structures, differences or relationships that exist, opinions held, processes that are going on or trends that are evident.
Types of Descriptive Research Methods
In the present unit, the following descriptive research methods are
described in detail:
1. Correlational Research
2. Causal-Comparative Research
3. Analytical Method
4. Survey Research
4A.3 CORRELATIONAL METHOD:
Correlational research describes what exists at the moment (conditions, practices, processes, structures etc.) and is, therefore, classified as a type of descriptive method. Nevertheless, these conditions, practices, processes or
described in a survey or an observational study.
Correlational research involves collecting data to determine whether, and to what extent, a relationship exists between two or more quantifiable variables. Correlational research uses numerical data to explore relationships between two or more variables. The degree of relationship is expressed in terms of a coefficient of correlation. If a relationship exists between variables, it implies that scores on one variable are associated with or vary with the scores on another variable. The exploration of the relationship between variables provides insight into the nature of the variables themselves as well as an understanding of their relationships. If the relationships are substantial and consistent, they enable a researcher to make predictions about the variables.
Correlational research is aimed at determining the nature, degree and
direction of relationships between variables or using these relationships to
make predictions. Correlational studies typically investigate a number of
variables expected to be related to a major, complex variable. Those
variables which are not found to be related to this major, complex variable
are omitted from further analysis. On the other hand, those variables
which are found to be related to this major, complex variable are further analysed in a causal-comparative or experimental study so as to determine the exact nature of the relationship between them.
In a correlational study, hypotheses or research questions are stated at the
beginning of the study. The null hypotheses are often used in a
correlational study.
Correlational study does not specify cause -and-effect relationships
between variables under consideration. It merely specifies concomitant
variations in the scores on the variables. For example, there is a strong
relationship between students' scores on academic achievement in Mathematics and their scores on academic achievement in Science. This does not suggest that one of these variables is the cause and the other is the effect. In fact, a third variable, viz., students' intelligence, could be the cause of students' academic achievement in both Mathematics and Science.
Steps of a Correlational Research
1. Selection of a Problem: A correlational study is designed (a) to determine whether and how a set of variables are related, or (b) to test the hypothesis of an expected relationship among the set of two or more variables. The variables to be included in the study need to be selected on the basis of a sound theory or prior research or observation and experience. There has to be some logical connection between the variables so as to make interpretations of the findings of the study more meaningful, valid and scientific. A correlational study is not done just to find out what exists: it is done for the ultimate purpose of explanation and prediction of phenomena. If a correlational study is done just to find out
what exists, it is usually known as a 'shotgun' approach and the findings of such a study are very difficult to interpret.
2. Selection of the Sample and the Tools: The minimum acceptable sample size should be 30, as statistically, it is regarded as a large sample. The sample is generally selected using one of the acceptable sampling methods. If the validity and the reliability of the tools measuring the variables to be studied are low, the measurement error is likely to be high and hence the sample size should be large. Thus it is necessary to ensure that valid and reliable tools are used for the purpose of collecting the data. Moreover, suppose you are studying the relationship between classroom environment and academic achievement of students. If your tool measuring classroom environment focuses only on the physical aspects of the classroom and not its psycho-social aspects, then your findings would indicate a relationship only between academic achievement of students and the physical aspects of the classroom environment and not the entire classroom environment, since the physical aspects alone are not a comprehensive and reliable measure of classroom environment. Thus, the measurement instruments should be valid and reliable.
3. Design and Procedure: The basic design of a correlational study is simple. It requires scores obtained on two or more variables from each unit of the sample, and the correlation coefficient between the paired scores is computed, which indicates the degree and direction of the relationship between variables.
4. Interpretation of the Findings: In a study designed to explore or test
hypothesized relationships, a correlation coefficient is interpreted in terms
of its statistical significance.
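As a hedged illustration of steps 3 and 4, the sketch below computes a Pearson coefficient of correlation and its significance for a small set of paired scores. The classroom-environment and achievement scores are invented, and a real study would use at least 30 cases as noted above.

```python
# A minimal sketch of computing and interpreting a coefficient of correlation.
# The paired scores below are invented; a real sample should have n >= 30.
from scipy import stats

classroom_environment = [34, 42, 29, 45, 38, 31, 47, 40, 36, 44]
academic_achievement  = [58, 66, 52, 71, 60, 55, 74, 65, 59, 70]

r, p_value = stats.pearsonr(classroom_environment, academic_achievement)
print(f"coefficient of correlation r = {r:.2f}")
print(f"p-value = {p_value:.4f}")

# Interpretation in terms of statistical significance (step 4):
if p_value < 0.05:
    print("The relationship is statistically significant at the 0.05 level.")
else:
    print("No statistically significant relationship at the 0.05 level.")
```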
Correlational research is of the following two types:
(a) Relationship Studies : These attempt to gain insight into variables that
are related to complex variables such as academic performance, self -
concept, stress, achievement motivation or creativity.
(b) Prediction Studies : These are conducted to facilitate decisions about
individuals or to aid in various types of selection. They are also conducted
to determine predictive validity of measuring tools as well as to test
variables hypothesized to be predictors of a criterion variable.
Some questions t hat could be examined through correlation research are as
follows:
1. How is job satisfaction of a teacher related to the extent of autonomy available in the job?
2. Is there a relationship between the Socio-Economic Status of parents and their involvement with the school?
3. How well do Common Entrance Test scores for admission to B.Ed. reflect / predict teacher effectiveness?
Check Your Progress - I
(a) State the meaning of correlational research.
(b) Explain the steps of correlational research.
4A.4 CAUSAL -COMPARATIVE RESEARCH:
It is a type of descriptive research since it describes conditions that already
exist. It is a form of investigation in which the researcher has no direct
control over the independent variable as its expression has already occurred or because it is essentially non-manipulable. It also attempts to identify reasons or causes of pre-existing differences in groups of individuals, i.e. if a researcher observes that two or more groups are different on a variable, he tries to identify the main factor that has led to this difference. Another name for this type of research is ex post facto research (which in Latin means "after the fact") since both the hypothesized cause and the effect have already occurred and must be studied in retrospect.
Causal-comparative studies attempt to identify cause-effect relationships; correlational studies do not. Causal-comparative studies involve comparison; correlational studies involve relationship. However, neither method provides researchers with true experimental data. On the other hand, causal-comparative and experimental research both attempt to establish cause-and-effect relationships and both involve comparisons. In an experimental study, the researcher selects a random sample and then randomly divides the sample into two or more groups. Groups are assigned to the treatments and the study is carried out. However, in causal-comparative research, individuals are not randomly assigned to treatment groups because they already were selected into groups before the research began. In experimental research, the independent variable is manipulated by the researcher, whereas in causal-comparative research, the groups are already formed and already different on the independent variable.
Inferences about cause-and-effect relationships are made without direct intervention, on the basis of concomitant variation of independent and dependent variables. The basic causal-comparative method starts with an effect and seeks possible causes. For example, a researcher may observe that the academic achievement of students differs across schools. He may hypothesize the possible cause for this as the type of management of schools, viz. private-aided, private-unaided, or government schools (local or state or any other). He therefore decides to conduct causal-comparative research in which academic achievement of students is the effect that has already occurred and school type by management is the possible hypothesized cause. This approach is known as retrospective causal-comparative research since it starts with the effects and investigates the causes.
In another variation of this type of research, the investigator starts with a
cause and investigates its effect on some other variable.
i.e. such research is concerned with the question 'what is the effect of X on Y when X has already occurred?' For example, what long-term effect has occurred on the self-concept of students who are grouped according to ability in schools? Here, the investigator hypothesizes that students who are grouped according to ability in schools are labelled 'brilliant', 'average' or 'dull' and this, over a period of time, could lead to unduly high or unduly poor self-concept in them. This approach is known as prospective causal-comparative research since it starts with the causes and investigates the effects. However, retrospective causal-comparative studies are far more common in educational research.
Causal-comparative research involves two or more groups and one independent variable. The goal of causal-comparative research is to establish cause-and-effect relationships just like an experimental research. However, in causal-comparative research, the researcher is able to identify past experiences of the subjects that are consistent with a 'treatment' and compares them with those subjects who have had a different treatment or no treatment. The causal-comparative research may also involve a pre-test and a post-test. For instance, a researcher wants to compare the effect of Environmental Education in the B.Ed. syllabus on student-teachers' awareness of environmental issues and problems and attitude towards environmental protection. Here, the researcher can develop and administer a pre-test before the student-teachers are taught the paper on "Environmental Education" and a post-test after they are taught the same. At the same time, the pre-test as well as the post-test are also administered to a group which was not taught the paper on Environmental Education. This is essentially a non-experimental research as there is no manipulation of the treatment although it involves a pre-test and a post-test. In this type of research, the groups are not randomly assigned to exposure to the paper on Environmental Education. Thus, it is possible that other variables could also affect the outcome variables. Therefore, in a causal-comparative research, it is important to think whether differences other than the independent variable could affect the results.
In order to establish cause -and-effect in a causal -comparative research, it
is essential to build a convincing rational argument that the independent
variable is influencing the dependent variable. It is also essential to ensure
that other uncontrolled variables do not have an effect on the dependent
variable. For this purpose, the researcher should try to draw a sample that
minimizes the effects of other extraneous variables. According to
Picciano, in stating a hypothesis in a causal-comparative study, the word 'effect' is frequently used.
Conducting a Causal -Comparative Study
Although the independent variable is not manipulated, there are control
procedures that can be exercised to improve interpretation of results.
Design and Procedure
The researcher selects two groups of participants, accurately referred to as comparison groups. These groups may differ in two ways as follows:
(i) One group possesses a characteristic that the other does not.
(ii) Each group has the characteristic, but to differing degrees or amounts.
(iii) Definition and selection of the comparison groups are very important parts of the causal-comparative procedure.
(iv) The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.
(v) In causal-comparative research the random sample is selected from two already existing populations, not from a single population as in experimental research.
(vi) As in experimental studies, the goal is to have groups that are as
similar as possible on all relevant variables except the independent
variable.
(vii) The more similar the two groups are on such variables, the more
homogeneous they are on everything but the independent variable.
Control Procedures
Lack of randomization, manipulation, and control are all sources of
weakness in a causal -comparative study.
Random assignment is probably the single best way to try to ensure
equality of the groups.
A problem is the possibility that the groups are different on some other
important variable (e.g. gender, experience, or age) besides the
identified independent variable.
Matching
Matching is another control technique.
 If a researcher has identified a variable likely to influence performance
on the dependent variable, the researcher may control for that variable by pair-wise matching of participants.
For each participant in one group, the researcher finds a participant in
the other group with the same or very similar score on the control
variable.
If a participant in either group does not have a suitable match, the
participant is eliminated from the study.
The resulting matched groups are identical or very similar with respect to the identified extraneous variable.
The problem becomes serious when the researcher attempts to simultaneously match participants on two or more variables. (A minimal sketch of pair-wise matching on a single variable is given below.)
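The sketch below illustrates pair-wise matching on one control variable. The IQ scores are invented, and the tolerance for what counts as a "very similar" score is an arbitrary choice made only for illustration.

```python
# A minimal sketch of pair-wise matching on one control variable (here, IQ).
# Scores are invented; the matching tolerance is an arbitrary illustration.
def pairwise_match(group_a, group_b, tolerance=2):
    """Return matched pairs; participants without a suitable match are dropped."""
    available_b = list(group_b)
    pairs = []
    for score_a in group_a:
        # Find the closest remaining participant in group B.
        best = min(available_b, key=lambda score_b: abs(score_b - score_a),
                   default=None)
        if best is not None and abs(best - score_a) <= tolerance:
            pairs.append((score_a, best))
            available_b.remove(best)   # each participant is matched only once
    return pairs

group_a_iq = [98, 105, 112, 121, 90]
group_b_iq = [100, 104, 111, 95, 130]

print(pairwise_match(group_a_iq, group_b_iq))
# [(98, 100), (105, 104), (112, 111)] -- the unmatched participants are dropped
```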
Comparing Homogeneous Groups or Subgroups
To control extraneous variables, groups that are homogeneous with
respect to the extraneous variable are compared.
 This procedure may lower the number of participants and limit the generalizability of the findings.
 A similar but more satisfactory approach is to form subgroups within each group that represent all levels of the control variable.
 Each group might be divided into two or more subgroups on the basis of high, average, and low levels of the control variable.
 Suppose the control variable in the study is students' IQ. The subgroups then will comprise high, average, and low levels of IQ. The existence of comparable subgroups in each group controls for IQ.
 In addition to controlling for the variable, this approach also permits
the researcher to determine whether the independent variable affects
the dependent variable differently at different levels of the control
variable.
The best approach is to build the control variable right into the
research design and analyze the results in a statistical technique called
factorial analysis of variance.
A factorial analysis allows the researcher to determine the effect of the
independent variable and the control variable on the dependent
variable both separately and in combination.
It permits determination of whether there is interaction between the
independent variable and the control variable such that the
independent variable operates differently at different levels of the
control variable.
Independent variables in causal-comparative research can be of the following types:

Type of Variable – Examples

Organismic Variables – Age; Gender; Religion; Caste

Ability Variables – Intelligence; Scholastic Ability; Specific Aptitudes

Personality Variables – Anxiety Level; Stress-proneness; Aggression Level; Emotional Intelligence; Introversion / Extroversion; Self-Esteem; Self-Concept; Academic or Vocational Aspirations; Brain Dominance; Learning, Cognitive or Thinking Styles; Psycho-Social Maturity

Home Background Related Variables – Home Environment; Socio-Economic Status; Educational Background of Parents; Economic Background of Parents; Employment Status of Parents; Single Parent v/s Both Parents; Employment Status of Mother (Working or Non-Working); Birth Order; No. of Siblings

School Related Variables – School Environment; Classroom Environment; Teacher Personality; Teaching Style; Leadership Style; School Type by Management (Private-aided v/s Private-unaided v/s Government); School Type by Gender (Single-sex v/s Co-educational); School Type by Denomination (Run by a non-religious organisation v/s Run by a religious organisation one of whose objectives is to propagate a specific religion); School Type by Board Affiliation (SSC, CBSE, ICSE, IB, IGCSE); School Size; Per Student Expenditure; Socio-Economic Context of the School
The Value of Causal -Comparative Research : In a large majority of
educational research especially in the fields of sociology of education and
educational psychology, it is not possible to manipulate independent
variables due to ethical considerations especially when one is dealing with
variables such as anxiety, intelligence, home environment, teacher
personality, negative reinforcement, equality of opportunity and so on. It
is also not possible to control such variables as in an experimental
research. For studying such topics and their influence on students, causal -
comparative method is the most appropriate.
The Weaknesses of Causal-Comparative Research: There are three major limitations of causal-comparative research. These include (a) lack of control, or the inability to manipulate independent variables methodologically, (b) the lack of power to assign subjects randomly to groups and (c) the danger of inappropriate interpretations. The lack of randomization, manipulation, and control factors makes it difficult to establish cause-and-effect relationships with any degree of confidence.
The statistical techniques used to compare groups in a causal-comparative research include the t-test when two groups are to be compared and ANOVA when more than two groups are to be compared. The technique of ANCOVA may also be used in case some other variables likely to influence the dependent variable need to be controlled statistically. Sometimes, chi-square is also used to compare group frequencies, or to see if an event occurs more frequently in one group than another.
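The sketch below illustrates, on invented data, how these comparisons are commonly run with the SciPy library: a t-test for two school types, a one-way ANOVA for three, and a chi-square test on group frequencies.

```python
# A minimal sketch (invented scores) of the group comparisons named above.
from scipy import stats

aided      = [61, 66, 58, 70, 64, 67]   # hypothetical achievement scores
unaided    = [72, 69, 75, 68, 71, 74]
government = [59, 63, 60, 65, 57, 62]

# t-test: comparing two groups
t, p = stats.ttest_ind(aided, unaided)
print(f"t-test:     t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA: comparing more than two groups
f, p = stats.f_oneway(aided, unaided, government)
print(f"ANOVA:      F = {f:.2f}, p = {p:.4f}")

# Chi-square: comparing frequencies (e.g. pass/fail counts in two groups)
observed = [[40, 10],    # group 1: passed, failed
            [28, 22]]    # group 2: passed, failed
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```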
Use of Analysis of Covariance (ANCOVA): It is used to adjust initial group differences on variables used in causal-comparative and experimental research studies. Analysis of covariance adjusts scores on a dependent variable for initial differences on some other variable related to performance on the dependent variable. Suppose we were doing a study to compare two methods, X and Y, of teaching sixth standard students to solve mathematical problems. Covariance analysis statistically adjusts the scores of method Y to remove the initial advantage so that the results at the end of the study can be fairly compared as if the two groups had started equally.
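A hedged sketch of such a covariance adjustment, assuming the pandas and statsmodels libraries are available, is given below; the column names, the method labels and the pre-test/post-test scores are all hypothetical.

```python
# A minimal ANCOVA sketch: comparing post-test scores of Methods X and Y
# after adjusting for pre-test differences. Data and column names are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "method": ["X"] * 5 + ["Y"] * 5,
    "pre":    [42, 38, 45, 40, 37, 55, 52, 58, 50, 54],   # initial scores
    "post":   [61, 58, 65, 60, 57, 70, 68, 74, 66, 71],   # final scores
})

# 'pre' enters the model as the covariate, so the comparison of methods is
# adjusted for the initial group differences.
model = smf.ols("post ~ pre + C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```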
Check Your Progress - II
(a) State the meaning of causal-comparative research.
(b) Explain the steps and procedure of conducting causal-comparative research.
(c) Explain the strengths and weaknesses of causal-comparative research.
(d) Give examples of causal-comparative research in education.
4A.5 DOCUMENTARY ANALYSIS
Documentary analysis is closely related to historical research since in such surveys we study the existing documents. But it is different from historical research, in which the emphasis is on the study of the past; in descriptive research we emphasize the study of the present. Descriptive research in the field of education may focus on describing the existing school practices, attendance rate of the students, health records, and so on.
The method of documentary analysis enables the researcher to include large amounts of textual information and systematically identify its properties. Documentary analysis today is a widely used research tool aimed at determining the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience and even the culture and time of which these are a part. Documentary analysis could be defined as a research technique for the objective, systematic, and quantitative description of manifest content of communications. It is a technique for making inferences by objectively and systematically identifying specified characteristics of messages. The technique of documentary analysis is not restricted to the domain of textual analysis, but may be applied to other areas such as coding student drawings or coding of actions observed in videotaped studies, analyzing
past documents such as memos, minutes of the meetings, legal and policy
statements and so on. In order to allow for replication, however, the
technique can only be applied to data that are durable in nature. Texts in
documentary analysis can be defined broadly as books, book chapters,
essays, interviews, discussions, newspaper headlines and articles,
historical documents, speeches, conversations, advertising, theater,
informal conversation, or really a ny occurrence of communicative
language. Texts in a single study may also represent a variety of different
types of occurrences.
Documentary analysis enables researchers to sift through large amounts of data with comparative ease in a systematic fashion. It can be a useful technique for allowing one to discover and describe the focus of individual, group, institutional or social attention. It also allows inferences to be made which can then be corroborated using other methods of data collection. Much documentary analysis research is motivated by the search
for techniques to infer from symbolic data what would be too costly, no
longer possible, or too obtrusive by the use of other techniques. These
definitions illustrate that documentary analysis emphasizes an integrated
view of speech/texts and their specific contexts. Document analysis is the
systematic exploration of written documents or other artifacts such as
films, videos and photographs. In pedagogic research, it is usually the
contents of the artefacts, rather than say, the style or design, that are of
interest.
Why analyse documents?
Documents are an essential element of day-to-day work in education. They include: student essays, exam papers, minutes of meetings, module outlines and policy documents.
In some pedagogic research, analysis of relevant documents will inform
the investigation. If used to triangulate, or give another perspective on a
research question, results of document analysis may complement or refute
other data. For example, policy documents in an institution may be
analyzed and interviews with staff or students and observation of classes
may suggest whether or not new policies are being implemented. A set of
data from documents, interviews and observations could contribute to a study of a particular aspect of pedagogy.
How can documents be analyzed?
The content of documents can be explored in systematic ways which look
at patterns and themes related to the research question(s). For example, in
making a case study of deep and surface learning in a particular course,
the question might be 'How has deep learning been encouraged in this
course in the last three years?'
Minutes of course meetings could be examined to see whether or how much this issue has been discussed; student handouts could be analysed to see whether they are expressed in ways which might encourage deep learning. Together with other data-gathering activities such as student questionnaires or observation of classes, a research study might then be based on an extended research question so that strategies are implemented to develop deep learning.
In the example of deep learning, perhaps the most obvious way to analyse the set of minutes would be to use a highlighting pen every time the term 'deep learning' was used. You might also choose to highlight 'surface learning', a term with an implied relationship to deep learning. You might also decide, either before starting the analysis, or after reading the documents, that there are other terms or inferences which imply an emphasis on deep learning. You might therefore go through the documents again, selecting additional references.
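A minimal sketch of this kind of term counting is given below; the folder name, file names and the list of search terms are hypothetical, and in practice the researcher would also record the context of each occurrence rather than the frequency alone.

```python
# A minimal sketch of counting selected terms across a set of minutes.
# The folder name and search terms are hypothetical examples.
from pathlib import Path
from collections import Counter

search_terms = ["deep learning", "surface learning", "understanding"]
counts = Counter()

for minutes_file in Path("course_minutes").glob("*.txt"):
    text = minutes_file.read_text(encoding="utf-8").lower()
    for term in search_terms:
        counts[term] += text.count(term)

for term, frequency in counts.items():
    print(f"{term!r}: {frequency} occurrences")
```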
The levels of analysis will vary but a practitioner-researcher will need to be clear and explicit about the rationale for, and the approach to, selection of content.
Advantages and disadvantages of document analysis
Robson (2002) points out the advantages and disadvantages of content
analysis. An advantage is that documents are unobtrusive and can be used
without imposing on participants; they can be checked and re -checked for
reliability.
A major problem is that documents may not have been written for the
same purposes as the research and therefore conclusions will not usually
be possible from document analysis alone.
Check Your Progress - III
1. State the meaning of documentary research.
2. Explain the applications of documentary research.
4A.6 ETHNOGRAPHY:
Meaning
Ethnographic studies are usually holistic, founded on the idea that human beings are best understood in the fullest possible context, including the place where they live, the improvements they have made to that place, how they make a living and gather food, housing, energy and water for themselves, what their marriage customs are, what language(s) they speak and so on. Ethnography is a form of research focusing on the sociology of meaning through close field observation of socio-cultural phenomena. Typically, the ethnographer focuses on a community (not necessarily geographic, considering also work, leisure, classroom or school groups and other communities). Ethnography may be approached from the point
of view of art and cultural preservation and as a descriptive rather than
analytic endeavour. It essentially is a branch of social and cultural
anthropology. The emphasis in ethnography is on studying an entire
culture. The method starts with selection of a culture, review of the
literature pertaining to the culture, and identification of variables of
interest - typically variables perceived as significant by members of the
culture. Ethnography is an enormously wide area with an immense
diversity of practitioners and methods. However, the most common
ethnographic approach is participant observation and unstructured
interviewing as a part of field research. The ethnographer becomes
immersed in the culture as an active participant and records extensive field
notes. In an ethnographic study, there is no preset limit on what will be observed and interviewed and no real end point, as is the case with grounded theory.
Hammersley and Atkinson define ethnography as, "We see the term as referring primarily to a particular method or sets of methods. In its most
characteristic form it involves the ethnographer participating, overtly or
covertly, in people's lives for an extended period of time, watching what
happens, listening to what is said, asking questions —in fact, collecting
whatever data are available to throw light on the issues that are the focus
of the research. Johnson defines ethnography as "a descriptive account of
social life and culture in a particular social system based on detailed
observations of what people actually do."
Assumptions in an Ethnographic Research
According to Garson, these are as follows:
a. Ethnography assumes that the principal research interest is primarily
affected by community cultural understandings. The methodology
virtually assures that common cultural understandings will be identified
for the research interest at hand. Interpretation is apt to place great
emphasis on the causal importance of such cultural understandings. There
is a possibility that an ethnographic focus will overestimate the role of
cultural perceptions and underestimate the causal role of objective forces.
b. Ethnography assumes an ability to identify the relevant community of
interest. In some settings, this can be difficult. Community, formal organization, informal group and individual-level perceptions may all play a causal role in the subject under study and the importance of these may vary by time, place and issue. There is a possibility that an ethnographic focus may overestimate the role of community culture and underestimate the causal role of individual psychological or of sub-community (or, for that matter, extra-community) forces.
c. Ethnography assumes that the researcher is capable of understanding the cultural mores of the population under study, has mastered the language or technical jargon of the culture and has based findings on comprehensive knowledge of the culture. There is a danger that the researcher may introduce bias toward perspectives of his or her own culture.
d. While not inherent to the method, cross -cultural ethnographic research
runs the risk of falsely assuming that given measures have the same
meaning across cultures.
Characteristics of Ethnographic Research:
According to Hammersley and Sanders, ethnography is characterized by
the following features:
(a) People's behaviour is studied in everyday contexts.
(b) It is conducted in a natural setting.
(c) Its goal is more likely to be exploratory rather than evaluative.
(d) It is aimed at discovering the local person‘s or―native‘s point of
view, wherein, the native may be a consumer or an end -user. munotes.in

(e) Data are gathered from a wide range of sources, but observation and/or relatively informal conversations are usually the principal ones.
(f) The approach to data collection is unstructured in that it does not involve following through a predetermined detailed plan set up at the beginning of the study, nor does it determine the categories that will be used for analysing and interpreting the soft data obtained. This does not mean that the research is unsystematic. It simply means that initially the data are collected in as raw a form and as wide an amount as is feasible.
(g) The focus is usually a single setting or group of a relatively small size. In life history research, the focus may even be a single individual.
(h) The analysis of the data involves interpretation of the meanings and functions of human actions and mainly takes the form of verbal descriptions and explanations, with quantification and statistical analysis playing a subordinate role at most.
(i) It is cyclic in nature concerning data collection and analysis. It is open to change and refinement throughout the process as new learning shapes future observations. As one type of data provides new information, this information may stimulate the researcher to look at another type of data or to elicit confirmation of an interpretation from another person who is part of the culture being studied.
Guidelines for Conducting Ethnography
Following are some broad guidelines for conducting fieldwork:
1. Be descriptive in taking field notes. Avoid evaluations.
2. Collect a diversity of information from different perspectives.
3. Cross -validate and triangulate by collecting different kinds of data
obtained using observations, interviews, programme documentation,
recordings and photographs.
4. Capture participants' views of their own experiences in their own
words. Use quotations to represent programme participants in their
own terms.
5. Select key informants carefully. Draw on the wisdom of their informed perspectives, but keep in mind that their perspectives are limited.
6. Be conscious of and perceptive to the different stages of fieldwork.
(a) Build trust and rapport at the entry stage. Remember that the researcher-observer is also being observed and evaluated. (b) Stay attentive and disciplined during the more routine middle phase of fieldwork. (c) Focus on pulling together a useful synthesis as
fieldwork draws to a close. (d) Be well-organized and meticulous in
taking detailed field notes at all stages of fieldwork. (e) Maintain an
analytical perspective grounded in the purpose of the fieldwork: to
conduct research while at the same time remaining involved in
experiencing the observed setting as fully as possible. (f) Distinguish
clearly between description, interpretation and judgment. (g) Provide
formative feedback carefully as part of the verification process of
fieldwork. Observe its effect. (h) Include in your field notes and
observations reports of your own experiences, thoughts and feelings.
These are also field data. Fieldwork is a highly personal experience.
The meshing of fieldwork procedures with individual capabilities and
situational variation is what makes fieldwork a highly personal
experience. The validity and meaningfu lness of the results obtained
depend directly on the observer's skill, discipline, and perspective.
This is both the strength and weakness of observational methods.
Techniques Used in Conducting Ethnography
These include the following:
A. Listening to conversations and interviewing. The researcher needs to
make notes or audio-record these.
B. Observing behaviour and its traces, making notes and mapping
patterns of behaviour, sketching relationships between people, taking
photographs, making video-recordings of daily life and activities and
using digital technology and web cameras.
Stages in Conducting Ethnography
According to Spradley, following are the stages in conducting an
ethnographic study:
1. Selecting an ethnographic project.
2. Asking ethnographic questions and collecting ethnographic data.
3. Making an ethnographic record.
4. Analyzing ethnographic data and conducting more research as
required.
5. Outlining and writing an ethnography.
Steps of Conducting Ethnography
According to Spradley, ethnography is not a linear research process but
rather a cyclical one. As the researcher develops questions and uncovers
answers, more questions emerge and the researcher must move through
the steps again.
According to Spradley, following are the steps of conducting an
ethnographic study (However, all research topics may not follow all the
steps listed here):
1. Locating a social situation. The scope of the topic may vary from the
micro-ethnography of a single social situation to the macro-ethnography
of a complex society. According to Hymes, there are three levels of
ethnography: (i) comprehensive ethnography, which documents an
entire culture, (ii) topic-oriented ethnography, which looks at aspects
of a culture, and (iii) hypothesis-oriented ethnography, which begins
with an idea about why something happens in a culture. Suppose you
want to conduct research on classroom environment. This step requires
that you select a category of classroom environment and identify social
and academic situations in which it is used.
2. Collecting data. There are four types of data collection used in
ethnographic research, namely, (a) watching or being part of a social
context using participant and non-participant observation, recorded in
the form of observer notes, logs, diaries, and so on, (b) asking open
and closed questions that cover identified topics using semi-structured
interviews, (c) asking open questions that enable a free development of
conversation using unstructured interviews and (d) using collected
material such as published and unpublished documents, photographs,
papers, videos and assorted artifacts, letters, books or reports. The
problem with such data is that the more you have, the greater is the
effort required to analyse it. Moreover, as the study progresses, the
amount of data increases, making analysis more difficult and
time-consuming. Yet more data leads to better codes, categories,
theories and conclusions. What is 'enough' data is subject to debate
and may well be constrained by the time and resources the researcher
has available. Deciding when and where to collect data can be a
crucial decision. A profound analysis at one point may miss others,
whilst a broad encounter may miss critical finer points. Several deep
dives can be a useful method. Social data can be difficult to access
owing to the ethics, confidentiality and determination necessary in
such research. There is often less division of activity phases in
qualitative research, and the researcher may be coding as he proceeds
with the study. The researcher usually uses theoretical and selective
sampling for data collection.
3. Doing participant observation. Formulate open questions about the
social situations under study. Malinowski opines that ethnographic
research should begin with foreshadowed problems. These problems
are questions that researchers bring to a study and to which they keep
an open eye but to which they are not enslaved. Collect examples of
the classroom environment. Select research tools/techniques. Spradley
provides a matrix of questions about cultural space, objects, acts,
activities, events, time, actors, goals and feelings that researchers can
use when just starting the study.
4. Making an ethnographic record. Write descriptions of the classroom
environment and the situations in which it is used.
5. Making descriptive observations. Select a method for doing the analysis.
6. Making domain analysis. Discover themes within the data and apply
existing theories, if any, as applicable. Domain analysis requires the
researcher to first choose one semantic relationship, such as "causes"
or "classes". Second, you select a portion of your data and begin reading
it and, while doing so, fill out a domain analysis worksheet where you
list all the terms that fit the semantic relationship you chose (a simple
illustrative sketch of such a worksheet appears after this list of steps).
Now formulate structural questions for each domain. Structural questions
occur less frequently than descriptive questions in normal
conversation. Hence they require more framing. Types of structural
questions include the following:
(i) Verification and elicitation questions, such as (a) verification of
hypotheses (Is the teacher-student relationship conducive?), (b)
domain verification (Are there different types of teacher-student
relationships? What are the different types?), (c) verification of included
terms (Is a teachers' strike an illegal activity?) and (d) verification of
semantic relationship (Is teaching beautiful?).
(ii) Frame substitution. This requires starting with a real sentence like
"You get a lot of brickbats in administration". Then ask, can you think
of any other terms that go in that sentence instead of brickbats: "You
get a lot of ____ in administration." (This can be done systematically by
giving informants a list of terms to choose from.)
(iii) Card sorts. Write phrases or words on cards. Then lay them out and
ask the questions mentioned above. The researcher can ask which
words are similar. Test hypotheses about relations between domains
and between domains and items, for example: "Are there different
kinds of classroom climates?" If yes, it is a domain. Then ask, "What
kinds of classroom climates are there?" The final step in domain
analysis is to make a list of all the hypothetical domains you have
identified, the relationships in these domains and the structural
questions that follow from your analysis.
7. Making focused observations.
8. Making a taxonomic analysis. Taxonomy is a scientific process of
classifying things and arranging them in groups or a set of categories
(domains) organised on a single semantic relationship. The researcher
needs to test his taxonomies against data given by informants. Make
comparisons of two or three symbols such as words, events or constructs.
9. Making selected observations.
10. Making a componential analysis which is a systematic search for the
attributes or features of cultural symbols that distinguish them from
others and give them meaning. The basic idea in componential
analysis is that all items in a domain can be decomposed into
combinations of semantic features which combine to give the item
meaning.
11. Discovering cultural themes. A theme is a postulate or position,
explicit or implicit, which is directly or indirectly approved and
promoted in a society. Strategies of discovering cultural themes
include (i) in -depth study of culture, (ii) making a cultural inventory,
(iii) identifying and analysing components of all domains, (iv)
searching for common elements across all domains such as gender,
age, SES groups etc., (v) identifying domains that clearly show a
strong pattern of behaviour, (vi) making a schema of the cultural scene and
(vii) identifying generic (etic) codes usually functional such as social
conflict, inequality, cultural contradictions in the institutional social
system, strategies of social control, managing interpersonal relations,
acquiring status in the institution and outside, solving educational and
administrative problems and so on.
12. Taking a cultural inventory.
13. Writing an ethnography
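As a concrete illustration of the domain analysis worksheet mentioned in step 6 above, the following is a minimal sketch in Python (not part of the original text); the semantic relationship, cover term, field-note excerpts and candidate terms are all hypothetical and serve only to show how included terms might be gathered under one relationship.

# A minimal, hypothetical sketch of a domain analysis worksheet:
# one semantic relationship, its cover term, and the included terms
# pulled out of field notes that fit that relationship.

# Semantic relationship chosen: "X is a kind of Y" (strict inclusion).
domain = {
    "semantic_relationship": "is a kind of",
    "cover_term": "classroom climate",
    "included_terms": [],   # filled in while reading the data
}

# Hypothetical excerpts from field notes mentioning kinds of classroom climate.
field_note_excerpts = [
    "a supportive climate was evident during group work",
    "the climate turned competitive before the unit test",
    "a supportive climate returned once feedback was shared",
]

# List every term that fits "___ is a kind of classroom climate".
for excerpt in field_note_excerpts:
    for term in ("supportive", "competitive", "authoritarian"):
        if term in excerpt and term not in domain["included_terms"]:
            domain["included_terms"].append(term)

# A structural question can then be framed for the domain.
print("Included terms:", domain["included_terms"])
print("Structural question: What kinds of", domain["cover_term"], "are there?")

In this sketch the worksheet is simply a mapping from the chosen relationship to the terms found in the data; the researcher would, of course, read the actual field notes rather than match keywords.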
Guidelines for Interviewing
According to Patton, following are some useful guidelines that can be used
for effective interviewing:
1. Throughout all phases of interviewing, from planning through data
collection to analysis, keep centered on the purpose of the research
endeavor. Let that purpose guide the interviewing process.
2. The fundamental principle of qualitative interviewing is to provide a
framework within which respondents can express their own
understandings in their own terms.
3. Understand the strengths and weaknesses of different types of
interviews: the informal conversational interview; the interview
guide approach; and the standardized open -ended interview.
4. Select the type of interview (or combination of types) that is most
appropriate to the purposes of the research effort.
5. Understand the different kinds of information one can collect
through interviews: behavioural data; opinions; feelings; knowledge;
sensory data; and background information.
6. Think about and plan how these different kinds of questions can be
most appropriately sequenced for each interview topic, including
past, present, and future questions.
7. Ask truly open -ended questions.
8. Ask clear questions, using understandable and appropriate language.
9. Ask one question at a time.
10. Use probes and follow-up questions to solicit depth and detail.
11. Communicate clearly what information is desired, why that
information is important, and let the interviewee know how the
interview is progressing.
12. Listen attentively and respond appropriately to let the person know
he or she is being heard.
13. Avoid leading questions.
14. Understand the difference between a depth interview and an
interrogation. Qualitative evaluators conduct depth interviews; police
investigators and tax auditors conduct interrogations.
15. Establish personal rapport and a sense of mutual interest.
16. Maintain neutrality toward the specific content of responses. You are
there to collect information not to make judgments about that person.
17. Observe while interviewing. Be aware of and sensitive to how the
person is affected by and responds to different questions.
18. Maintain control of the interview.
19. Tape record whenever possible to capture full and exact quotations
for analysis and reporting.
20. Take notes to capture and highlight major points as the interview
progresses.
21. As soon as possible after the interview, check the recording for
malfunctions; review notes for clarity; elaborate where necessary;
and record observations.
22. Take whatever steps are appropriate and necessary to gather valid
and reliable information.
23. Treat the person being interviewed with respect. Keep in mind that it
is a privilege and responsibility to peer into another person’s
experience.
24. Practice interviewing. Develop your skills.
25. Enjoy interviewing. Take the time along the way to stop and "hear"
the roses.
Writing Ethnographic Research Report
The components of an ethnographic research report should include the
following:
1. Purpose / Goals /Questions.
2. Research Philosophy.
3. Conceptual/Theoretical Framework.
4. Research Design /Model.
5. Setting/Circumstances.
6. Sampling Procedures.
7. Background and Experience of Researcher.
8. Role/s of Researcher.
9. Data Collection Method.
10. Data Analysis/Interpretation.
11. Applications/Recommendations.
12. Presentation Format and Sequence.
Advantages of Ethnography
These are as follows:
1. It provides the researcher with a much more comprehensive
perspective than other forms of research.
2. It is also appropriate for behaviours that are best understood by
observing them within their natural environment (dynamics).
Disadvantages of Ethnography
These are as follows:
1. It is highly dependent on the researcher's observations and
interpretations.
2. There is no way to check the validity of the researcher's conclusions,
since numerical data are rarely provided.
3. Observer bias is almost impossible to eliminate.
4. Generalizations are almost non-existent since only a single situation is
observed, leaving ambiguity in the study.
5. It is very time-consuming.
Check Your Progress - IV
(a) State the characteristics of ethnographic research.
(b) Explain the steps of conducting ethnographic research.
4A.7 CASE STUDY:
Case study research is descriptive research that involves describing and
interpreting events, conditions, circumstances or situations that are
occurring in the present. Case study seeks to engage with and report the
complexities of social activity in order to represent the meanings that
individual social actors bring to their social settings. It excels at bringing
us to an understanding of a complex issue or object and can extend
experience or add strength to what is already known through previous
research. Case studies emphasize detailed contextual analysis of a limited
number of events or conditions and their relationships. Darwin's theory of
evolution was based, in essence, on case study research, not
experimentation, for instance. In education, this is one of the most widely
used qualitative approaches of research.
According to Odum, "The case study method is a technique by which an
individual factor, whether it be an institution or just an episode in the life
of an individual or a group, is analyzed in its relationship to any other in
the group." Its distinguishing characteristic is that each respondent
(individual, family, classroom, institution, cultural group) is taken as a unit
and the unitary nature of the individual case is the focus of analysis. It seeks
to engage with and report the complexity of social and/or educational
activity in order to represent the meanings that individual actors in the
situation bring to that setting. It assumes that social and/or educational
reality is created through social interactions, situated in specific contexts
and histories, and it seeks to identify and describe before analysing and
theorizing. It assumes that things may not be as they seem and involves in-
depth analysis so as to understand a 'case' rather than generalizing to a
larger population. It derives much of its philosophical underpinnings and
methodology from ethnography, symbolic interactionism, ethno-
methodology and phenomenology. It follows the 'social constructivism'
perspective of the social sciences.
Most case studies are usually qualitative in nature. Case study research
excels at enabling us to understand a complex issue or object and can
extend experience or add strength to what is already known through
previous research. Case studies involve a detailed contextual analysis of a
limited number of events or conditions and their relationships. Social
scientists have made a wide use of this qualitative research method to
examine contemporary real -life situations and provide the basis for the
application of ideas and extension of methods. Yin defines the case study
research method as an empirical inquiry that investigates a contemporary
phenomenon within its real -life context; when the boundaries between
phenomenon and context are not clearly evident; and in which multiple
sources of evidence are used.
However, some case studies can also be quantitative in nature especially if
they deal with cost -effectiveness, cost -benefit analysis or institutional
effectiveness. Many case studies have been done by combining the
qualitative as well as the quantitative approaches in which initially the
qualitative approach has been used and data have been collected using
interviews and observations, followed by the quantitative approach. The
approach of case studies ranges from general field studies to the interview
of a single individual or group. A case study can be precisely focused on a
topic or can include a broad view of life and society. For example, a case
study can focus on the life of a single gifted student, his actions,
behaviour, abilities and so on in his school, or it can focus on the social life
of an individual, including his entire background, experiences, motivations
and aspirations that influence his behaviour in society. Examples of case
studies include a 'case' of curriculum development, of innovative training,
of disruptive behaviour, of an ineffective institution and so on.
Case studies can be conducted to develop a research-based theory with
which to analyse situations: a theory of, for and about practice. It is
essential to note that since most case studies focus on a single unit or a small
number of units, the findings cannot be generalised to larger populations.
However, their utility cannot be underestimated. A case study is conducted
with the fundamental assumption that though human behaviour is situation-
specific and individualized, there is a predictable uniformity in basic
human nature.
A case study can be conducted to explore, to describe or to explain a
phenomenon. It could be a synchronic study in which data are collected at
one point of time or it could be longitudinal in nature. It could be
conducted at a single site or it could be multi -site. In other words, it is
inherently a very flexible methodology.
A case typically refers to a person, either a learner, a teacher, an
administrator or an entity, such as a school, a university, a classroom or a
programme. In some policy -related research, the case could be a country.
Case studies may be included in larger quantitative or qualitative studies
to provide a concrete illustration of findings, or they may be conducted
independently, either longitudinally or in a more restricted temporal
period. Unlike ethnographic research, case studies do not necessarily focus
on cultural aspects of a group or its members. Case study research may
focus on a single case or multiple cases.
Characteristics o f a Case Study
Following are the characteristics of a case study:
1. It is concerned with an exhaustive study of particular instances. A
case is a particular instance of a phenomenon. In education, examples
of phenomena include educational programmes, curricu la, roles,
events, interactions, policies, process, concept and so on. Its
distinguishing feature is that each respondent (individual, class,
institution or cultural group) is treated as a unit.
2. It emphasises the study of interrelationship between differen t attributes
of a unit.
3. According to Cooley, case study deepens our perception and gives us a
clear insight into life… It gets at behaviour directly and not by an
indirect or abstract approach. munotes.in
4. Each case study needs to have a clear focus which may include those
aspects of the case on which the data collection and analysis will
concentrate. The focus of a study could be a specific topic, theme,
proposition or a working hypothesis.
5. It focuses on the natural history of the unit under study and its
interaction with the social world around it.
6. The progressive records of personal experience in a case study reveal
the internal strivings, tensions and motivations that lead to specific
behaviours or actions of individuals or the unit of analysis.
7. In order to ensure that the case study is intensive and in-depth, data are
collected over a long period of time from a variety of sources,
including human and material, and by using a variety of techniques
such as interviews and observations and tools such as questionnaires,
documents, artifacts, diaries and so on.
8. According to Smith, as cited by Merriam (1998), these studies are
different from other forms of qualitative research in that they focus
on a single unit or a bounded system. A system is said to be a
bounded system if it includes a finite or limited number of cases to
be interviewed or observed within a definite amount of time.
9. It may be defined as an in-depth study of one or more instances of a
phenomenon - an individual, a group, an institution, a classroom or an
event - with the objective of discovering meaning, investigating
processes, gaining an insight into and an understanding of an individual,
group or phenomenon within the context, in such a way that it reflects
the real-life context of the participants involved in the phenomenon.
These individuals, groups, institutions, classrooms or events may
represent the unit of analysis in a case study. For example, in a case
study, the unit of analysis may be a classroom and the researcher may
decide to investigate the events in three such classrooms.
10. According to Yin, case studies typically involve investigation of a
phenomenon for which the boundaries between the phenomenon and
its context are not clearly evident. These boundaries should be clearly
clarified as part of the case study. He further emphasises the
importance of conducting a case study in its real life context. In
education, the classroom or the school is the real life context of a case
study as the participants of such a case study are naturally found in
these settings.
11. There are two major perspectives in a case study, namely, the etic
perspective and the emic perspective. The etic perspective is that of
the researcher (i.e. the outsider's perspective) whereas the emic
perspective is that of the research participants, including teachers,
principals and students (i.e. the insider's perspective). This enables the
researcher to study the local, immediate meanings of the social actions of
the participants and to study how they view the social situation of the
setting and the phenomenon under study. A comprehensive case study
includes both the perspectives.
12. A case study can be a single -site study or a multi -site study.
13. Cases are selected on the basis of dimensions of a theory (pattern-
matching) or on diversity of a dependent phenomenon (explanation-
building).
14. No generalization is made to a population beyond cases similar to
those studied.
15. Conclusions are phrased in terms of model elimination, not model
validation. Numerous alternative theories may be consistent with data
gathered f rom a case study.
16. Case study approaches have difficulty in terms of evaluation of low -
probability causal paths in a model as any given case selected for study
may fail to display such a path, even when it exists in the larger
population of potential cases.
17. Acknowledging multiple realities in qualitative case studies, as is now
commonly done, involves discerning the various perspectives of the
researcher, the case/participant, and others, which may or may not
converge.
Components of a Case Study Design
According to Yin, following are the five component elements of a case
study design:
1. Study questions
2. Study propositions (if any are being used) or theoretical framework
3. Identification of the units of analysis
4. The logical linking of the data to the propositions ( or theory)
5. The criteria for interpreting the findings.
The purpose of a case study is a detailed examination of a specific activity,
event, institution, or person/s. The hypotheses or the research questions are
stated broadly at the beginning of the study. A study's questions are
directed towards 'how' and 'why' considerations, and enunciating and
defining these is the first task of the researcher. The study's propositions
could be derived from these how and why questions. These propositions
could help in developing a theoretical focus. However, all case studies
may not have propositions. For instance, an exploratory case study may
give only a purpose statement or criteria that could guide the research
process. The unit of analysis defines what the case study is focusing on,
whether an individual, a group, an institution, a city, a society, a nation and
so on. Linkages between the data and the propositions (or theory) and the
criteria for interpreting the findings are usually the least developed aspects
of case studies (Yin, 1994).
Types of Case Study Designs
Yin (1994) and Winston (1997) have identified several types of case study
designs. These are as follows:
(A) Exploratory Case Study Design: In this type of case study design,
fieldwork and data collection are carried out before determining the
research questions. It examines a topic on which there is very little prior
research. Such a study is a prelude to a large social scientific study.
However, before conducting such an exploratory case study, its
organizational framework is designed in advance so as to ensure its
usefulness as a pilot study for a larger, more comprehensive research. The
purpose of the exploratory study is to elaborate a concept, build up a
model or advocate propositions.
(B) Explanatory Case Study Design : These are useful when providing
explanation to phenomena under consideration. These explanations are
patterns implying that one type of variation observed in a case study is
systematically related to another variation. Such a pattern can be a
relational pattern or a causal pattern depending on the conceptual
framework of the study. In complex studies of organisations and
communities, multivariate cases are included so as to examine a plurality
of influences. Yin and Moore (1988) suggest the use of a pattern -matching
technique in such a research wherein several pieces of information from
the same case may be related to some theoretical proposition.
(C) Descriptive Case Study Design : A descriptive case study necessitates
that the researcher present a descriptive theory which establishes the
overall framework for the investigator to follow throughout the study. This
type of case study requires formulation and identification of a practicable
theoretical framework before articulating research questions. It is also
essential to determine the unit of analysis before beginning the research
study. In this type of case study, the researcher attempts to portray a
phenomenon and conceptualize it, including statements that recreate a
situation and context as much as possible.
(D) Evaluative Case Study Design : Often, in responsive evaluation,
quasi -legal evaluation and expertise -based evaluation, a case study is
conducted to make judgments. This may include a deep account of the
phenomenon being evaluated and identification of the most important and
relevant constructs, themes and patterns. Evaluative case studies can be
conducted on educational programmes funded by the Government, such as
the Sarva Shiksha Abhiyan, or Orientation Programmes and Refresher
Courses conducted by Academic Staff Colleges for college teachers, or
other such programmes organised by the State and Local Governments for
secondary and primary school teachers.
Steps of Conducting a Case Study
Following are the steps of a case study:
1. Identifying a current topic which is of interest to the researcher.
2. Identifying research questions and developing hypotheses (if any).
3. Determining the unit of sampling and the number of units. Select the
cases.
4. Identifying sources, tools and techniques of data collection. These
could include interviews, observations, documentation, student records
and school databases. Collect data in the field.
5. Evaluating and Analysing Data.
6. Report writing.
Each of these is described in detail in the following paragraphs.
Step 1 : Identifying a current topic which is of interest to the
researcher
In order to identify a topic for case study research, the following questions
need to be asked:
(i) What kind of topics can be addressed using the case study method?
(ii) How can a case study research be designed, shaped and scoped in
order to answer the research question adequately?
(iii) How can the participation of individuals/institutions be obtained for
the case study research?
(iv) How can case study data be obtained from case participants in an
effective and efficient manner?
(v) How can rigor be established in the case study research report so that it
is publishable in academic journals?
According to Maxwell, there are eight different factors that could
influence the goals of a case study, as follows:
1. To grasp the meanings that events, situations, experiences and actions
have for participants in the study which is part of the reality that the
researcher wants to understand.
2. To understand the particular context within which the participants are
operating and its influence on their actions, in addition to the context
in which one‘s research participants are embedded. Qualitative
researchers also take into account the contextual factors that influence
the research itself.
3. To identify unanticipated phenomena and influences that emerge in the
setting and to generate new grounded theories about such aspects.
4. To grasp the process by which events and actions take place that lead
to particular outcomes.
5. To develop causal explanations based on process theory (which
involves tracing the process by which specific aspects affect other
aspects), rather than variance theory (which involves showing a
relationship between two variables as in quantitative research).
6. To generate results and theories that are understandable and
experientially credible, both to the participants in the study and to
others.
7. To conduct formative evaluations designed to improve practice rather
than merely to assess the value of a final programme or product.
8. To engage in collaborative and action research with practitioners and
research participants.
Step 2 : Identifying research questions and developing hypotheses (if
any)
The second step in case study research is to establish a research focal point
by forming questions about the situation or problem to be studied and
determining a purpose for the study. The research objective in a case study
is often a programme, an entity, a person or a group of people. Each
objective is likely to be connected to political, social, historical and
personal issues providing extensive potential for questions and adding
intricacy to the case study. The researcher attains the objective of the case
study through an in-depth investigation using a variety of data gathering
methods to generate substantiation that leads to understanding of the case
and answers the research questions. Case study research is usually aimed
at answering one or more questions which begin with "how" or "why."
The questions are concerned with a limited number of events or conditions
and their inter -relationships. In order to formulate research questions,
literature review needs to be undertaken so as to establish what research
has been previously conducted. This helps in refining the research
questions and making them more insightful. The literature review,
definition of the purpose of the case study and early determination of the
significance of the study for the potential audience of the final report direct
how the study will be designed, conducted and publicly reported.
Step 3: Determining the unit of sampling and the number of units.
Select the cases.
Sampling Strategies in a Case Study: In a case study design, purposeful
sampling is done, which has been defined by Patton as selecting
information-rich cases for study in depth. In case study research,
purposeful sampling is preferred over probability sampling as it
enhances the usefulness of the information acquired from small samples.
Purposive samples are expected to be conversant and informative about
the phenomenon under investigation.
A case study requires a plan for choosing sites and participants in order to
start data collection. The plan is known as an
emergent design in which research decisions depend on preceding
information. This necessitates purposive sampling, data collection and
partial, simultaneous analysis of data as well as interactive rather than
distinct sequential steps.
During the phase of designing case study research, the researcher
determines whether to use single or multiple real-life cases to examine
in depth and which instruments and data collection techniques to use. When
multiple cases are used, each case is treated as a single case. Each case's
conclusions can then be used as contributing information to the entire
study, but each case remains a single case for collecting data and analysis.
Exemplary case studies carefully select cases and carefully examine the
choices available from among the many research tools available so as to
enhance the validity of the study. Careful selection helps in determining
boundaries around the case. The researcher must determine whether to
study 'unique' cases or 'typical' cases. He also needs to decide whether to
select cases from different geographical areas. It is necessary at this stage
to keep in mind the goals of the study so as to identify and select relevant
cases and evidence that will fulfil the goals of the study and answer the
research questions raised. Selecting multiple or single cases is a key
element, but a case study can include more than one unit of embedded
analysis. For example, a case study may involve study of a single type of
school (For example, Municipal School) and a school belonging to this
type. This type of case study involves two levels of analysis and increases
the complexity and amount of data to be gathered and analyzed. Multiple
cases are often preferable to single cases, particularly when the cases may
not be representative of the population from which they are drawn and
when a range of behaviors/profiles, experiences, outcomes, or situations is
desirable. However, including multiple cases limits the depth with which
each case may be analyzed and also has implications for the structure and
length of the final report.
Step 4: Identifying sources, tools and techniques of data collection
Sources of Data in a Case Study: A case study method involves using
multiple sources and techniques in the data collection process. The
researcher determines in advance what evidence to collect and which
techniques of data analysis to use so as to answer the research questions.
The data collected are normally principally qualitative, soft data, but they
may also be quantitative. Data are collected from primary documents such
as school records and databases, students' records, transcripts and results,
field notes, self-reports or think-aloud protocols and memoranda.
Techniques used to collect data can include surveys, interviews,
questionnaires, documentation review, observation and physical artefacts.
These multiple tools and techniques of data collection add texture, depth,
and multiple insights to an analysis and can enhance the validity or
credibility of the results.
Case studies may make use of field notes and databases to categorize and
reference data so that it is readily available for subsequent re-interpretation.
Field notes record feelings and intuitive hunches, pose
questions, and document the work in progress. They record testimonies,
stories and illustrations which can be used in reporting the study. They
may inform of impending preconceptions because of the detailed exposure
of the client to special attention or give an early signal that a pattern is
emerging. They assist in determining whether or not the investigation
needs to be reformulated or redefined based on what is being observed.
Field notes should be kept separate from the data being collected and
stored for analysis.
According to Cohen and Manion, the researcher must use the chosen data
collection tools and techniques systematically and properly in collecting
the evidence. Observations and data collection settings may range from
natural to artificial, with relatively unstructured to highly structured
elicitation tasks and category systems, depending on the purpose of the
study and the disciplinary traditions associated with it.
Case studies necessitate that effective training programmes be developed
for investigators, that clear protocols and procedures be established in
advance before starting the fieldwork, and that a pilot study be conducted
before moving into the field so as to eliminate apparent obstacles and
problems. The researcher training programme needs to cover the vital
concepts of the study, terminology, processes and methods, and needs to
teach researcher/s how to apply the techniques being used in the study
accurately. The programme should also be aimed at training researcher/s
to understand how the collection of data using multiple techniques
strengthens the study by providing opportunities for triangulation during
the analysis phase of the study. The programme should also include
protocols for case study research including time deadlines, formats for
narrative reporting and field notes, guidelines for collection of documents,
and guidelines for field procedures to be used. Investigators need to be
good listeners who can hear exactly the words being used by those
interviewed. Qualities of effective investigators also include being able to
ask good questions and interpret answers. Effective investigators not only
review documents looking for facts but also read between the lines and
pursue corroborative evidence elsewhere when that seems appropriate.
Investigators need to be flexible in real-life situations and not feel
threatened by unexpected change, missed appointments or lack of space.
Investigators need to understand the goals of the study and comprehend
the issues, and must be open to contrary findings. Investigators must also
be aware that they are going into the world of real human beings who may
be threatened or unsure of what the case study will bring.
After investigators are trained, the final advance preparation step is to
select a site for pilot study and conduct a pilot test using all the data
collection tools and techniques so that difficult and tricky areas can be
uncovered and corrected. Researchers need to anticipate key problems and
events, identify key people, prepare letters of introduction, establish rules
for confidentiality, and actively seek opportunities to revisit and revise the
research design in order to address and add to the original set of research
questions.
Throughout the design phase, researchers must ensure that the construct
validity, internal validity, external validity, and reliability of the tools and
the research method are adequate. Construct validity requires the
researcher to use suitable measures for the concepts being studied.
Internal validity (especially important in explanatory or causal studies)
demonstrates that certain conditions/events (causes) lead to other
conditions/events (effect/s) and necessitates the use of multiple sets of
evidence from multiple sources to reveal convergent lines of inquiry. The
researcher makes efforts to establish a chain of evidence forward and
backward. External validity reflects whether findings are generalisable
beyond the immediate case/s. The more variations in places, people and
procedures a case study can withstand and still yield the same findings, the
greater will be its external validity. Techniques such as cross-case
examination and within-case examination, along with literature review, help
in ensuring external validity. Reliability refers to the stability, accuracy
and precision of measurement. Exemplary case study design ensures that
the procedures used are well documented and can be repeated with the
same results over and over again. Establishing a trusting relationship with
research participants, using multiple data collection procedures, obtaining
sufficient pertinent background information about case participants and
sites and having access to or contact with the case over a period of time
are, in general, all decidedly advantageous.
Step 5 : Evaluating and analyzing data
Case study research generates a huge quantity of data from multiple
sources. Hence, systematic organisation of the data is essential to prevent
the researcher from losing sight of the original research purpose and
questions. Advance preparation assists in handling a huge quantity of
largely soft data in a documented and systematic manner. Researchers
prepare databases for categorizing, sorting, storing and retrieving data for
analysis. The researcher examines raw data so as to find linkages between
the research object and the outcomes with reference to the original
research questions. Throughout the evaluation and analysis process, the
researcher remains open to new opportunities and insights. The case study
method, with its use of multiple data collection methods and analysis
techniques, provides researchers with opportunities to triangulate data in
order to strengthen the research findings and conclusions. According to
Creswell, analysis of data in case study research usually involves an
iterative, spiraling or cyclical process that proceeds from more general to
more specific observations. According to Miles and Huberman, data
analysis may commence during interviews or observations and continue
during transcription, when recurring themes, patterns and categories
become apparent. Once written records are available, analysis involves the
coding of data and the identification of prominent points or structures.
Having additional coders is highly desirable, especially in structural
analyses of discourse, texts, syntactic structures or interaction patterns
involving high-inference categories, leading ultimately to the
quantification of types of items within categories. Data reduction may
include quantification or other means of data aggregation and reduction,
including the use of data matrices, tables, and figures.
The strategies used in analysis require researchers to move beyond initial
impressions to improve the likelihood of precise and consistent findings.
Data need to be consciously sorted in many different ways to expose or
create new insights, and researchers should deliberately look for
contradictory data to disconfirm the analysis. Researchers categorize,
tabulate and recombine data to answer the initial research questions and
conduct cross-checking of facts and incongruities in accounts. Focused,
short, repeated interviews may be essential to collect supplementary data
to authenticate key observations or check a fact.
Precise techniques that could be used for data analysis include placing
information into arrays, creating matrices of categories, creating flow
charts or other displays and tabulating the frequency of events. Researchers
can use quantitative data to substantiate and support the qualitative data so
as to comprehend the raison d'être or theory underlying relationships.
Besides, multiple investigators could be used to gain the advantage
provided when diverse perspectives and insights scrutinize the data and
the patterns. When the multiple observations converge, the reliability of the
findings is enhanced. Inconsistent discernments, on the other hand,
necessitate that the researchers inquire more intensely. Moreover, the cross-
case search for patterns keeps investigators from reaching untimely
conclusions by requiring that investigators look at the data in diverse
ways. Cross-case analysis divides the data by type across all cases
investigated. One researcher then examines the data of that type carefully.
When a pattern from one data type is substantiated by evidence from
another, the result is stronger. When substantiation conflicts, deeper
probing of the variation is necessary to identify the cause/s or source/s of
conflict. In all cases, the researcher treats the evidence reasonably to
construct analytic conclusions answering the original "how" and "why"
research questions.
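To illustrate the kind of simple tabulation mentioned above (placing information into arrays and tabulating the frequency of events), the following is a minimal sketch in Python (not part of the original text); the cases, codes and counts are hypothetical and serve only to show how coded qualitative data might be turned into a case-by-code frequency matrix for a cross-case search for patterns.

# A minimal, hypothetical sketch: tabulating the frequency of coded events
# across cases to support cross-case comparison.
from collections import Counter

# Each tuple is (case, code assigned to a segment of field notes / transcript).
coded_segments = [
    ("Classroom A", "teacher praise"),
    ("Classroom A", "peer conflict"),
    ("Classroom A", "teacher praise"),
    ("Classroom B", "peer conflict"),
    ("Classroom B", "group work"),
    ("Classroom B", "teacher praise"),
]

# Count how often each code occurs within each case.
frequency = {}
for case, code in coded_segments:
    frequency.setdefault(case, Counter())[code] += 1

# Display a simple case-by-code frequency matrix.
codes = sorted({code for _, code in coded_segments})
print("case".ljust(14) + "".join(c.ljust(16) for c in codes))
for case, counts in sorted(frequency.items()):
    print(case.ljust(14) + "".join(str(counts[c]).ljust(16) for c in codes))

Such a matrix is only an aid to interpretation; the qualitative meaning of each code still has to be established from the original data.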
Step 6 : Report writing
Case studies report the data in a way that transforms a multifarious issue
into one that can be understood, permitting the reader to question and
examine the study and reach an understanding independent of the
researcher. The objective of the written report is to depict a multifaceted
problem in a way that conveys an explicit experience to the reader. Case
studies should present data in a way that leads the reader to apply the
experience in his or her own real-life situation. Researchers need to pay
exacting consideration to displaying adequate evidence to achieve the
reader's confidence that all avenues have been explored, clearly
communicating the confines of the case and giving special attention to
conflicting propositions.
In general, a research report in a case study should include the following
aspects:
 A statement of the study's purpose and the theoretical context.
 The problem or issue being addressed.
 Central research questions.
 A detailed description of the case(s) and explanation of decisions
related to sampling and selection.
 Context of the study and case history, where relevant. The research
report should provide sufficient contextual information about the case,
including relevant biographical and social information (depending on
the focus), such as teaching - learning history, students‘ and teachers‘
background, years of studying/working in the institution, data
collection site(s) or other relevant descriptive information pertaining to
the case and situation.
 Issues of access to the site/participants and the relationship between
you and the research participant (case).
 The duration of the study.
 Evidence that you obtained informed consent, that the participants'
identities and privacy are protected, and, ideally, that participants
benefited in some way from taking part in the study.
 Methods of data collection and analysis, either manual or computer-based
data management and analysis (see Weitzman & Miles, 1995),
or other equipment and procedures used.
 Findings, which may take the form of major emergent themes,
developmental stages, or an in -depth discussion of each case in relation
to the research questions; and illustrative quotations or excerpts and
sufficient amounts of other data to establish the validity and credibility
of the analysis and interpretations.
 A discussion of factors that might have influenced the interpretation of
data in undesired, unanticipated, or conflicting ways.
A consideration of the connection between the case study and larger
theoretical and practical issues in the field is essential to report. The report
could include a separate chapter handling each case separately or treating
the case as a chronological recounting. Some researchers report the case
study as a story. During the report preparation process, the researcher
critically scrutinizes the report trying to identify ways of making it
comprehensive and complete. The researcher could use r epresentative
audience groups to review and comment on the draft report. Based on the
comments, the researcher could rewrite and revise the report. Some case
study researchers suggest that the report review audience should include
the participants of the study.
Strengths of Case Study Method
1. It involves detailed, holistic investigation of all aspects of the unit
under study.
2. Case study data are strong in reality.
3. It can utilize a wide range of measurement tools and techniques.
4. Data can be collected over a period of time and is contextual.
5. It enables the researcher to assess and document not just the empirical
data but also how the subject or institution under study interacts with
the larger social system.
6. Case study reports are often written in non-technical language and are
therefore easily understood by laypersons.
7. They help in interpreting similar other cases.
Weaknesses of Case Study Method
1. The small sample size prevents the researcher from generalizing to
larger populations.
2. The case study method has been criticized because the use of a small
number of cases can offer no grounds for establishing the reliability or
generality of findings.
3. The intense exposure to study of the case biases the findings.
4. It has also been criticized as being useful only as an exploratory tool.
5. They are often not easy to cross -check.
Yet researchers continue to use the case study research method with
success in carefully planned and crafted studies of real -life situations,
issues, and problems.
Check Your Progress - V
(a) State the meaning of case study research.
(b) Explain the characteristics of case study research.
(c) Explain the steps of case study research.
4A.8 ANALYTICAL METHOD:
It involves the identification and interpretation of data already existing in
documents, pictures and artifacts. It is a form of research in which events,
ideas, concepts or artifacts are examined through analysis of documents,
records, recordings or other media. Here, contextual information is very
essential for an accurate interpretation of data. Historical research
comprises the systematic collection and analysis of documents, records and
artifacts with the objective of providing a description and interpretation of
past events or persons. Its application lies in a range of research methods
such as historical research which could use both quantitative and
qualitative data, legal analysis which focuses on selected laws and court
decisions with the objective of understanding how legal principles and
precedents apply to educational practices, concept analysis which is
carried out to understand the meaning and usage of educational concepts
(e.g. school-based reforms, ability grouping, affective teacher education)
and content analysis which is carried out to understand the meaning and
identify properties of large amounts of textual information in a systematic
manner.
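As a simple illustration of the systematic character of content analysis mentioned above, the following is a minimal sketch in Python (not part of the original text); the coding categories, keywords and the sample passage are hypothetical and serve only to show how occurrences of pre-defined categories might be counted in a body of text.

# A minimal, hypothetical sketch: counting pre-defined content-analysis
# categories in a text (e.g. a policy document or a set of open-ended answers).
import re

# Hypothetical coding scheme: category -> keywords that indicate it.
categories = {
    "assessment": ["examination", "test", "assessment"],
    "inclusion": ["inclusive", "disability", "equity"],
}

document = ("The new policy replaces the annual examination with continuous "
            "assessment and calls for inclusive, equity-oriented classrooms.")

# Break the text into lowercase words.
words = re.findall(r"[a-z]+", document.lower())

# Count how many keyword occurrences fall into each category.
counts = {name: sum(words.count(k) for k in keywords)
          for name, keywords in categories.items()}
print(counts)   # e.g. {'assessment': 2, 'inclusion': 2}

In practice the categories and keywords would be derived from the research questions and refined against the documents themselves, and the counts would be interpreted alongside the context in which each term appears.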
Characteristics of Analytical Research
Following are the characteristics of analytical research:
1. It does not 'create' or 'generate' data through research tools and
techniques.
2. The topic of analytical research deals with the past.
3. It reinterprets existing data.
4. It predominantly uses primary sources for collecting data.
5. Internal and external criticism is used as a technique while searching
for facts and providing interpretative explanations.
6. It uses documents, relics and oral testimonies for collecting data.
Objectives Analytical Research
Following are the objectives of analytical research:
1. It offers understanding of the past/existing/available data.
2. It enables the researcher to shed light on existing policies by
interpreting the past.
3. It generates a sense of universal justification and underlying principles
and aims of education in a society.
4. It reinterprets the past for each age group.
5. It uses data and logic to analyse the past and demythologizes idealized
conceptions of the past.
Check Your Progress - VI
(a) State the meaning of analytical research.
4A.9 SURVEY RESEARCH
Survey research is the collection of data attained by asking individuals
questions either in person, on paper, by phone or online. Conducting
surveys is one form of primary research, which is the gathering of data
first-hand from its source. The information collected may also be accessed
subsequently by other parties in secondary research.
Survey research is used to gather the opinions, beliefs and feelings of
selected groups of individuals, often chosen for demographic sampling.
These demographics include age, gender, ethnicity or income levels. The
most famous public survey focused on demographics is the United States
Census, which occurs every ten years.
Common types of surveys include interviews and questionnaires, which
may comprise multiple-choice questions, opinion items and polls.
Questionnaires are distributed through mail surveys, group administered
questionnaires or in -person drop -offs. Interviews can be held in person or
over the phone and are often a more personal form of research than
questionnaires. There are several issues to consider when creating a
survey, including content, wording, response format and question
placement and sequence. All of these choices can affect the answers given
by participating individuals.
Survey research is used in academia, government and business. Governments
use research surveys to learn about their populations to help better serve
their citizens, while political candidates use survey research to gauge the
preferences and opinions of voters. Businesses use surveys to gather
information about customer attitudes and experiences to help market
consumer products. In academia, surveys are applied in fields like
demographics, statistics and social research.
Types of Surveys
Surveys can be divided into two broad categories: the questionnaire and
the interview. Questionnaires are usually paper-and-pencil instruments
that the respondent completes. Interviews are completed by the
interviewer based on what the respondent says. Sometimes, it is hard to tell
the difference between a questionnaire and an interview. For instance, some
people think that questionnaires always ask short closed-ended questions
while interviews always ask broad open-ended ones. But you will see
questionnaires with open-ended questions (although they do tend to be
shorter than in interviews) and there will often be a series of closed-ended
questions asked in an interview.
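As a small illustration of how responses to the closed-ended questions described above might be summarized once collected, the following is a minimal sketch in Python (not part of the original text); the item, the rating scale and the responses are hypothetical.

# A minimal, hypothetical sketch: summarizing closed-ended survey responses.
from collections import Counter

# Responses to a single closed-ended item on a five-point scale.
scale = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
responses = ["Agree", "Agree", "Neutral", "Strongly agree", "Disagree", "Agree"]

counts = Counter(responses)
total = len(responses)

# Frequency and percentage for each scale point.
for option in scale:
    n = counts.get(option, 0)
    print(f"{option:<18} {n:>3}  ({100 * n / total:.1f}%)")

Open-ended responses, by contrast, would normally be coded and categorized before any such counting is attempted.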
Survey research has changed dramatically in the last ten years. We have
automated telephone surveys that use random dialing methods. There are
computerized kiosks in public places that allow people to ask for input. A
whole new variation of the group interview has evolved as focus group
methodology. Increasingly, survey research is tightly integrated with the
delivery of service. Your hotel room has a survey on the desk. Your waiter
presents a short customer satisfaction survey with your check. You get a
call for an interview several days after your last call to a computer
company for technical assistance. You're asked to complete a short survey
when you visit a website. Here, I'll describe the major types of
questionnaires and interviews, keeping in mind that technology is leading
to a rapid evolution of methods.
4A.10 REFERENCES
 Auerbach, C. F. and Silverstein, L. B. Qualitative Data: An Introduction
to Coding and Analysis. New York: New York University Press. 2003.
 Baron, R. A. Psychology. India: Allyn and Bacon. 2001.
 Brewer, M. (2000). Research Design and Issues of Validity. In Reis, H.
& Judd, C. (eds) Handbook of Research Methods in Social and
Personality Psychology. Cambridge: Cambridge University Press.
 Cohen, L. and Manion, L. Research Methods in Education. London:
Routledge, 1989.
 Creswell, J. W. Educational Research: Planning, Conducting, and
Evaluating Quantitative and Qualitative Research. University of
Nebraska: Merrill Prentice Hall, 2002.
 Creswell, J. W. Qualitative Inquiry and Research Design: Choosing
among Five Traditions. Sage Publications, Inc. 1998.
 Picciano, A. G. Educational Research Primer. London: Continuum.
2004.
4B
QUANTITATIVE AND QUALITATIVE
RESEARCH -II
(HISTORICAL RESEARCH)
Unit Structure:
4B.0 Objectives
4B.1 Introduction
4B.2 Meaning
4B.3 The purpose of Historical Research
4B.4 Characteristics of Historical Research
4B.5 Scope of Historical Research in Education
4B.6 Approaches to the study of History
4B.7 Steps in Historical Research
4B.8 Problems and Weaknesses to be avoided in Historical Research.
4B.9 Criteria of Evaluating Historical Research.
4B.0 OBJECTIVES:
After reading this unit, the student will be able to:
 Define the meaning of historical research, its purposes and
characteristics, scope and approaches to the study of history.
 Explain the steps of historical research.
 State the weaknesses to be avoided in historical research.
4B.1 INTRODUCTION :
History usually refers simply to an account of the past of human societies.
It is the study of what "can be known to the historian…through the
surviving record." Gottschalk referred to this as 'history as record'. He
further stated that "The process of critically examining and analyzing the
records and survivals of the past is…called historical method. The
imaginative reconstruction of the past from the data derived by that
process is called historiography (the writing of history)."
4B.2 MEANING:
Historical research has been defined as the systematic and objective
location, evaluation and synthesis of evidence in order to establish
facts and draw conclusions about past events. It involves a critical
inquiry of a previous age with the aim of reconstructing a faithful
representation of the past. In historical research, the investigator
studies documents and other sources that contain facts concerning
the research theme with the objective of achieving better
understanding of present policies, practices, problems and
institutions. An attempt is made to examine past events or
combinations of events and establish facts in order to arrive at
conclusions concerning past events or predict future events.
Historical research is a type of analytical research. Its common
methodological characteristics include (i) identifying a research
topic that addresses past events, (ii) review of primary and secondary
data, (iii) systematic collection and objective evaluation of data related to
past occurrences with the help of techniques of criticism for historical
searches and evaluation of the information and (iv) synthesis and
explanation of findings in order to test hypotheses concerning causes,
effects or trends of these events that may help to explain present events
and anticipate future events. Historical studies attempt to provide
information and understanding of past historical, legal and policy events.
The historical method consists of the techniques and guidelines by which
historians use historical sources and other evidence to research and then
to write history.
4B.3 THE PURPOSE OF HISTORICAL RESEARCH:
Conducting historical research in education can serve several purposes as
follows:
1. It enables educationists to find out solutions to contemporary problems
which have their roots in the past. i.e. it serves the purpose of bringing
about reforms in education. The work of a historical researcher
sometimes sensitizes educators to unjust or misguided practices in the
past which may have unknowingly continued into the present and
require reform. A historical researcher studies the past with a detached
perspective and without any ego-involvement with the past practices.
Hence it could be easier for educationists to identify misguided
practices, thus enabling them to bring about reforms.
2. It throws light on present trends and can help in predicting future trends. If
we understand how an educationist or a group of educationists acted in the
past, we can predict how they will act in future. Similarly, studying the past
enables a researcher to understand the factors/causes affecting present trends.
In order to make such future predictions reliable and trustworthy, the
historical researcher needs to identify and clearly describe in which ways the
past differs from the present context and how the present social, economic
and political situations and policies could have an impact on the present and
the future.
3. It enables a researcher to re-evaluate data in relation to selected
hypotheses, theories and generalizations that are presently held about
the past.
4. It emphasizes and analyzes the relative importance and the effect of
the various interactions in the prevailing cultures.
5. It enables us to understand how and why educational theories and
practices developed.
4B.4 CHARACTERISTICS OF HISTORICAL RESEARCH
These are as follows:
1. It is not a mere accumulation of facts and data or even a portrayal of
past events.
2. It is a flowing, vibrant report of past events which involves an analysis
and explanation of these occurrences with the objective of recapturing
the nuances, personalities and ideas that influenced these events.
3. Conducting historical research involves the process of collecting and
reading the research material collected and writing the manuscript
from the data collected. The researcher often goes back-and-forth
between collecting, reading, and writing, i.e. the processes of data
collection and analysis are done simultaneously and are not two distinct
phases of research.
4. It deals with discovery of data that already exists and does not involve
creation of data using structured tools.
5. It is analytical in that it uses logical induction.
6. It has a variety of foci such as issues, events, movements and concepts.
7. It records and evaluates the accomplishments of individuals, agencies
or institutions.
4B.5 SCOPE OF HISTORICAL RESEARCH IN EDUCATION
1. General educational history of specific periods such as (a) ancient
India, (b) India during British rule, (c) independent India etc.
2. History of specific levels of education (a) primary education,
(b) secondary education, (c) tertiary education etc. in India.
3. History of specific types of education such as (a) adult education, (b) distance education, (c) disadvantaged education, (d) women's education
in India.
4. Historical study of specific educational institutions such as (i) University of Mumbai, (ii) Aligarh Muslim University and so on.
5. History of the role of the teacher in ancient India.
6. History of specific components of education such as (a) curriculum,
(b) text-books, (c) teaching-learning methods, (d) aims and objectives
of education, (e) teacher-student relationships, (f) evaluation process
and so on.
7. History of national education policies in India.
8. History of admission processes in professional / technical courses
(medicine, engineering, management) in India.
9. History of teacher education.
10. Historical biographies of major contributors to education such as
Mahatma Gandhi, Maharshi Karve, Maharshi Phule, Shri Aurobindo,
Gurudev Tagore and so on.
11. History of educational administration.
12. History of public financing of education.
13. History of educational legislation in India.
14. History of educational planning.
15. History of contemporary problems in India.
16. Historical study of the relationship between politics and education in
India.
17. Historical study of the impact of the British rule in India.
18. Comparative history of education in India and some other country
/countries.
19. Historical study of the system of state -sponsored inspection in India.
20. Historical study of education in specific Indian states such as
Maharashtra, Tamil Nadu, Madhya Pradesh, Rajasthan etc.
In other words, historical research in education may be concerned with an
individual, a group, an idea, a movement or an institution.
If a historical study focuses on an entire country / society / system, i.e. if it
is broad in scope, it is said to be a macro -level historical research. On the
other hand, if its focus is narrow and includes a selective set of p eople or
events of interest, it is said to be a micro -level historical research.
4B.6 APPROACHES TO THE STUDY OF HISTORY:
According to Monaghan and Hartman, there are four major approaches to
the study of the past:
a. Qualitative Approach: This is what most laypersons think of as history: the
search for a story inferred from a range of written or printed evidence. The
resultant history is organized chronologically and presented as a factual tale:
a tale of a person who created reading textbooks, such as a biography of
William Holmes McGuffey (Sullivan, 1994) or the Lindley Murray
family (Monaghan, 1998) in the Western context. The sources of
qualitative history range from manuscripts such as account books, school
records, marginalia, letters, diaries and memoirs to imprints such as
textbooks, children's books, journals, and other books of the period under
consideration.
b. Quantitative Approach : Here, rather than relying on history by
quotation, as the former approach has been negatively called, researchers
intentionally look for evidence that lends itself to being counted and that is
therefore presumed to have superior validity and generalizability.
Researchers have sought to estimate the popularity of a particular textbook
by tabulating the numbers printed, based on copyright records. The
assumption is that broader questions such as the relationship between
education and political system in India or between textbooks and their
influence on children can thus be addressed more authoritatively.
c. Content Analysis : Here the text itself is the focus of examination.
This approach uses published works as its data (in the case of history of
textbooks, these might be readers, or examples of the changing contents of
school textbooks in successive editions) and subjects them to a careful
analysis that usually includes both quantitative and qualitative aspects.
Content analysis has been particularly useful in investigating constructs
such as race, caste etc.
d. Oral History: Qualitative, quantitative, and content approaches use
written or printed text as their database. In contrast, the fourth approach,
oral history, turns to living memory. For instance, oral historians
interested in women's education could ask their respondents about their
early experiences and efforts in women’s education.
These four approaches are not, of course, mutually exclusive.
Indeed, historians avail themselves of as many of these as their
question, topic, and time period permit. This integration is possible
because the nature of historical research cuts across a variety of
approaches, all of which commence with the recognition of a topic and the
framing of a question. In other words, a historical study may be
quantitative in nature, qualitative in nature or a combination of the
approaches.
Its purpose can be mainly descriptive, aiming to understand some specific
development in a particular period of time in a particular culture; or it
could be explanatory, trying to test and accept / reject widely held
assumptions.
A historical investigation is conducted with objectivity and the desire to
minimize bias, distortion and prejudice. Thus, it is similar to descriptive
method of research in this aspect. Besides, it aims at describing all aspects
of the particular situation under study (or all that is accessible) in its
search for the truth. Thus, it is holistic, comprehensive in nature and is
similar to the interpretive approach. Though it is not empirical in nature
(it does not collect data through direct observation or experimentation)
and makes use of reports (all the available written and/or oral material), it
definitely qualifies as a scientific activity. This is because it requires
scholarship to conduct a systematic and objective study, evaluation and
synthesis of evidence so as to arrive at conclusions. In other words,
historical research is scientific in nature.
Moreover, any competent researcher in other types of empirical studies
reviews the related literature so as to find out prior researches and
theoretical work done on a particular topic. This requires studying
journals, books, encyclopedias, unpublished theses and so on. This is
followed by interpretation of their significance. These steps are common
to empirical research and historical research, i.e. to some extent, every
researcher makes use of the historical method in his/her research.
However, it should be mentioned here that the historical researcher in
education "discovers" already existing data from a wide range of historical
sources such as documents, relics, autobiographies, diaries or photographs.
On the other hand, in other types of educational studies, the researcher
"creates" data through observations, measurement through tests and
experimentation. To this extent, historical research differs from descriptive
and experimental researches.
Check Your Progress - I
Q.1 Describe the following:
(a) Characteristics of historical research in education.
(b) Purposes of historical research.
Q.2 (a) Give examples of research topics in historical research.
(b) Explain the approaches to historical research.
4B.7 STEPS IN HISTORICAL RESEARCH:
The essential steps involved in conducting a historical research are as
follows:
A. Identify a topic/subject and define the problems/questions to be
investigated.
B. Search for sources of data.
C. Evaluate the historical sources.
D. Analyze, synthesize and summarize interpreting the data / information.
E. Write the research report.
Since most historical studies are largely qualitative in nature, the search
for sources of data, evaluating, analyzing, synthesizing and summarizing
information and interpreting the findings may not always be discrete,
separate, sequential steps, i.e. the sequence of steps in historical research is
flexible.
Let us now look at each of these steps in detail.
A. Identify a Topic and Define the Problem
According to Borg, “In historical research, it is especially important that
the student carefully defines his problem and appraises its appropriateness
before committing himself too fully. Many problems are not adaptable to
historical research methods and cannot be adequately treated using this
approach. Other problems have little or no chance of producing significant
results either because of the lack of pertinent data or because the problem
is a trivial one."
Beach has classified the problems that prompt historical inquiry into five
types:
1. Current social issues are the most popular source of historical
problems in education, e.g. rural education, adult and continuing
education, positive discrimination in education etc.
2. Histories of specific individuals, histories of specific educational
institutions and histories of educational movements. These studies are
often conducted with “the simple desire to acquire knowledge about
previously unexamined phenomena”.
3. A historical study interpreting ideas or events that previously had
seemed unrelated. For example, history of educational financing and
history of aims of education in India may be unrelated. But a person
reviewing these two researches separately may detect some
relationship between the two histories and design a study to understand
this relationship.
4. A historical study aimed at synthesizing old data or merging them with
new historical facts discovered by the researcher.
5. A historical inquiry involving reinterpretation of past events that have
been studied by other historical researchers. This is known as
revisionist history.
On the other hand, in order to identify a significant research problem,
Gottschalk recommends that four questions should be asked:
(i) Where do the events take place?
(ii) Who are the persons involved?
(iii) When do the events occur?
(iv) What kinds of human activity are involved?
The scope of the study can be determined on the basis of the extent of
emphasis placed on the four questions identified by Gottschalk, i.e. the
geographical area included, the number of persons involved, the time span
included and the number and kinds of human activities involved. Often, the
exact scope and delimitation of a study is decided by a researcher only
after the relevant material has been obtained. The selection of a topic in
historical research depends on several personal factors of the researcher
such as his/her motivation, interest, historical knowledge and curiosity,
ability to interpret historical facts and so on. If the problem selected
involves understanding an event, an institution, a person, a past period,
more clearly, it should be taken up for a research.
The topic selected should be defined in terms of the types of written
materials and other resources available to you.
This should be followed by formulating a specific and testable hypothesis
or a series of research questions, if required. This will provide a clear
focus and direction to data collection, analysis and interpretation. i.e. it
provides a structure to the study.
According to Borg, without hypotheses, historical research often becomes
little more than an aimless gathering of facts.
B. Search for Sources of Data
Historical research is not empirical in that it does not include direct
observation of events or persons. Here, the researcher interprets past
events on the basis of traces they have left. He uses the evidence of past
acts and thoughts. Thus, he/she does not rely on his/her own
observations but on other people's observations. The researcher's job here
is to test the truthfulness of the reports of other people's observations. These
observations are obtained from several sources of historical data. Let us
now try to discuss various sources of historical data.
Sources of Historical Data
These sources are broadly classified into two types:
(a) Primary Sources: Gottschalk defines a primary data source as "the
testimony of any eyewitness, or of a witness by any other of the senses, or
of a mechanical device like the Dictaphone – that is, of one who…was
present at the events of which he tells. A primary source must thus have
been produced by a contemporary of the events it narrates." In other words,
primary sources are tangible materials that provide a description of an
historical event and were produced shortly after the event happened. They
have a direct physical relationship to the event being studied. Examples of
primary sources include newspaper reports, letters, public documents, court
decisions, personal diaries, autobiographies, artifacts and eyewitnesses'
verbal accounts. These primary sources of data can be divided into two
broad categories as follows:
(i) The remains or relics of a given historical period. These could include
photographs, coins, skeletons, fossils, tools, weapons, utensils, furniture,
buildings and pieces of art and culture (objets d'art). Though these were
not originally meant for transmitting information to future generations,
they could prove very useful sources in providing reliable and sound
evidence about the past. Most of these relics provide non-verbal
information.
(ii) Those objects that have a direct physical relationship with the events being
reconstructed. This includes documents such as laws, files, letters, manuscripts,
government resolutions, charters, memoranda, wills, newspapers, magazines,
journals, films, government or other official publications, maps, charts, log-
books, catalogues, research reports, records of minutes of meetings,
recordings, inscriptions, transcriptions and so on.
(b) Secondary Sources: A secondary source is one in which the person
describing the event, i.e. the eyewitness or the participant, was not
actually present but obtained his/her descriptions or narrations from
another person or source. This other person may or may not be a
primary source. Secondary sources, thus, do not have a direct physical
relationship with the event being studied. They include data which are not
original. Examples of secondary sources include textbooks, biographies,
encyclopedias, reference books, replicas of art objects and paintings and
so on. It is possible that secondary sources contain errors due to passing of
information from one source to another. These errors could get multiplied
when the information passes through many sources, thereby resulting in an
error of magnitude in the final data. Thus, wherever possible, the
researcher should try to use primary sources of data. However, that does
not reduce the value of secondary sources.
In conclusion, the various sources of historical information - both primary
and secondary - can be summarized as documents, remains or relics, and
oral records or testimonies.
It must be mentioned here that the branch of historical research using all
or some types of oral records is known as oral history.
It should also be mentioned here that some objects can be classified as
documents or relics depending on how they are used in a historical
study. For example, in a research study on how a historical figure (a
politician, a freedom fighter or a social reformer) is presented in textbooks
of different periods, the textbook will be classified as a document as the
emphasis here is on analyzing its content-matter given in a verbal form.
On the other hand, in a research study on printing methods in the past, the
textbook can be used as a relic as the focus here is not on analyzing its
contents but on its physical, outward characteristics or features.
Searching for Historical Data
The procedure of searching for historical data should be systematic
and pre-planned. The researcher should know what information he
needs so as to identify important sources of data and provide a
direction to his search for relevant data. Using his knowledge,
imagination and resourcefulness, he needs to explore the kinds of
data required, persons involved and institutions involved. This will help
him to identify the kinds of records he requires and whom he should
interview. Since a historical research is mainly qualitative in nature, all the
primary and secondary sources cannot be identified in advance. It is
possible that as one collects some data, analyzes and interprets it, the need
for further pertinent data may arise depending on the interpretive
framework. This will enable him to identify other primary or secondary
sources of data.
The search for sources of data begins with wide reading of preliminary
sources including published bibliographies, biographies, atlases, specialized
chronologies, dictionaries of quotations and terms. Good university and
college libraries tend to have a great deal of such preliminary materials.
This will enable a researcher to identify valuable secondary sources on the
topic being studied, such as books on history relating to one's topic. For
extensive materials on a subject, the researcher may need to go to a large
research library or a library with extensive holdings on a specific subject.
Such secondary materials could include other historians' conclusions and
interpretations, historical information, references to other secondary and
primary sources. The historical researcher needs to evaluate the secondary
sources for their validity and authenticity. Now the researcher should turn
his attention to the primary sources. These are usually available in the
institution or the archives, especially if the source concerns data pertaining
to the distant past or data pertaining to events in which the chief witnesses are
either dead or inaccessible. In case of data concerning the recent past, the
researcher can contact witnesses or participants themselves in order to
interview them and/or study the documents possessed by them.
However, it is not possible for a historical researcher to examine all the
material available. Selecting the best sources of data is important in a
historical study. In a historical study the complete “population” of
available data can never be obtained or known. Hence the sample of
materials examined must always be a purposive one. What it represents
and what it fails to represent should be considered. The researcher needs
to identify and use a sample that should be representative enough for
wider generalization.
C. Evaluation of the Historical Sources
The data of historical sources is subject to two types of evaluation. These
two types are: (i) external evaluation or criticism and (ii) internal
evaluation or criticism. Let us now look at these in detail.
(i) External Criticism of Data :
This is sometimes also known as lower criticism of data. External
criticism regards the issue of authenticity of the data from the
psychological attitude of the researcher in that it is primarily concerned
with the question, is the source of data genuine? External criticism seeks
to determine whether the document or the artifact that the researcher is
studying is genuinely valid primary data. It is possible to get counterfeit
documents or artifacts. External criticism of the sources of data is of
paramount importance in establishing the credibility of the research.
Although, theoretically, the main purpose of external criticism is the
establishment of historical truth, in reality its actual operation is chiefly
restricted to a negative role, i.e. to identify and expose forgeries, frauds,
hoaxes, desertions and counterfeits. In order to identify such forgeries, the
researcher needs to look at problems pertaining to plagiarism, alterations
of documents, insertions, deletions or unintentional omissions. This will
reveal whether the historical source of data is authentic or not.
Establishing authenticity of documents may involve carbon dating,
handwriting analysis, identification of ink and paper, vocabulary usage,
signatures, script, spelling, names of places and writing style and other
considerations. In other words, it examines the document and its external
features rather than the statements it contains. It tries to determine
(a) whether the information it contains was available at the time the document was
written, and (b) whether this information is consistent with what is known about
the author or the period from another source.
In other words, external criticism is aimed at answering questions about
the nature of the historical source such as who wrote it? Where? When?
Under which circumstances? Is it original? Is it genuine? and so on.
(ii) Internal Criticism of Data:
Having established the authenticity of the source of historical data, the
researcher now focuses his/her attention on the accuracy and worth of the
data contained in the document. Internal criticism is concerned with the
meaning of the written material. It is also known as higher criticism of
data. It deals with answering questions such as: what does it mean? What was
the author attempting to say? What thought was the author trying to
convey? Is it possible that people would act in the way described in the
document? Is it possible that events described occurred so quickly? What
inferences or interpretations could be extracted from these words? Do the
financial data / figures mentioned in the document seem reasonable for
that period in the past? What does the decision of a court mean? What do
the words of the decision convey regarding the intent and the will of the
court? Is there any (unintended) misinformation given in the document? Is
there any evidence of deception? and so on. Here, the researcher needs to
be very cautious so that he does not reject a statement only because the
event described in the document appears to be improbable.
In addition to answering these questions, internal criticism should also
include establishing the credibility of the author of the document.
According to Travers, the following questions could be answered so as to
establish the author’s credibility: Was he a trained or untrained observer of
the event? i.e. How competent was he? What was his relationship to the
event? To what extent was he under pressure, from fear or vanity resulting
in distortion or omission of facts? What was the intent of the writer of the
document? To what extent was he an expert at recording the particular
event? Were the habits of the author such that they might interfere with
the accuracy of recording? Was he too antagonistic or too sympathetic to
give a true picture? How long after the event did he record his testimony?
Was he able to remember accurately? Is he in agreement with other
independent witnesses?
These questions need to be answered for two reasons:
i) Perceptions are individualized and selective. Even if eyewitnesses are
competent and truthful, they could still record different descriptions of the
events they witnessed or experienced.
ii) Research studies in psychology indicate that eyewitnesses can be very
unreliable, especially if they are emotionally aroused or under stress at the
time of the event (e.g. at the time of the demolition of Babri Masjid or at the
time of the Gujarat riots in 2002).
This brings us to the question of bias especially when life histories or
communal situations are being studied. According to Plummer, there are
three possible sources of bias as follows:
Source One: The Life History Informant
Is misinformation (unintended) given? Has there been evasion?
Is there evidence of direct lying and deception? Is a ‘front’ being
presented?
What may the informant ‘take for granted’ and hence not reveal?
How far is the informant ‘pleasing you’?
How much has been forgotten?
How much may be self -deception?
Source Two : The Social Scientist Research
Could any of the following be shaping the outcome?
(a) Attitudes of researcher: age, gender, class, race etc.
(b) Demeanor of researcher: dress, speech, body language etc.
(c) Personality of researcher: anxiety, need for approval, hostility,
warmth etc.
(d) Attitudes of researcher: religion, politics, tolerance, general
assumptions
(e) Scientific role of researcher: theory held etc. (researcher expectancy)
Source Three : The Interaction
The joint act needs to be examined. Is bias coming from:
(a) The physical setting – 'social space'?
(b) The prior interaction?
(c) Non-verbal communication?
(d) Vocal behavior?
Often, internal and external criticism are interdependent and
complementary processes. The internal and external criticism of data
require a high level of scholarship.
D. Analysis, Synthesis, Summarizing and Interpretation of Data:
We have seen how data can be located and evaluated. Let us now look at
how to collect and control the data so that the greatest return from the
innumerable hours spent in archives, document rooms and libraries can be
reaped. The researcher should not only learn how to take notes but also learn
how to organize the various notes, note cards, bibliography cards and
memoranda so as to derive useful and meaningful facts for interpretation.
Hence, before beginning historical research, the researcher should have a
specific and systematic plan for the acquisition, organization, storage and
retrieval of the data. Following are some suggestions that may help you in
systematizing your research efforts.
(i) Note cards and Bibliography Cards:
It would be convenient for you to prepare bibliography cards of size 3 × 5
inches for taking down bibliographical notes. A bibliography card is
valuable not only for gathering and recording of information but also for
locating it again at a future date, if necessary, without going back to the
library again and again. Such a card contains the essential information
concerning a bibliographical source. Keep plenty of such cards with you
when you go to the library so that you can record very valuable references
encountered unexpectedly. You can also note down the document's
relation to your research. A sample of a bibliographic reference card could
be as follows:

[Sample bibliographic reference card: serial number, author, title, source details (publisher / library location), and the document's relation to the research]
You can ideally have two copies of such a bibliographic card. One copy
can be arranged according to the authors’ names alphabetically whereas
the other copy can be arranged as per the serial number of the card.
On the other hand, a note card can be of size 4 × 6 or 5 × 7 inches for
substantive notes. It is advisable to place only one item of information on
each card. Each card can be given a code so as to indicate the place /
question / theme / period / person to which the note relates. These cards
can then be arranged as per the question, theme, period, place or person
under study so as to make analysis easier. In other words, note cards can
be kept in multiple copies (e.g. in triplicate or quadruplicate) depending
on the ultimate analysis of the data. Given here is a sample note card.
Main Heading:                              Card No.:
Sub-Heading:
Source:    Author:        Year:        pp.:
Title:                                     Bibliography card No.:

In this card, one can mention the bibliography card no., which can be
referred to for further information if required. The reverse of the card can
be used if the space is found to be insufficient for necessary information.
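If the card system is maintained digitally, the same information can be kept as simple structured records. The sketch below is only an illustration in Python; the field names and the theme-wise index are assumptions made for the example, not prescribed by the card formats described above.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class BibliographyCard:
    serial_no: int          # used for cross-referencing from note cards
    author: str
    title: str
    year: int
    source_details: str     # publisher, library or archive location
    relation_to_study: str  # why the source matters for the research

@dataclass
class NoteCard:
    card_no: int
    main_heading: str          # theme / period / person the note relates to
    sub_heading: str
    note: str                  # one item of information per card
    bibliography_card_no: int  # link back to the full reference

def index_by_theme(note_cards):
    """Group note cards by main heading so they can be pulled together
    theme-wise during analysis and report writing."""
    index = defaultdict(list)
    for card in note_cards:
        index[card.main_heading].append(card)
    return index

# Hypothetical usage
bib = BibliographyCard(1, "Gottschalk, L.", "Understanding History", 1951,
                       "Alfred A. Knopf, New York", "defines primary sources")
note = NoteCard(1, "Primary sources", "Definition",
                "A primary source is the testimony of an eyewitness.", bib.serial_no)
notes_by_theme = index_by_theme([note])
```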

(ii) Summary of Quantitative Data:
Usually historical studies are chiefly qualitative in nature since the data
obtained includes verbal and / or symbolic material from an institution,
society or culture's past. However, when the study involves quantitative
data pertaining to past events, you need to think carefully about the
relevance of the data to your research. This is because recording and
analysis of quantitative data is time-consuming and sometimes expensive.
Examples of quantitative data in historical research include records of
students' and teachers' attendance rates, examination results, financial
information such as budgets, income and expenditure statements, salaries,
fees and so on. Content analysis is one of the methods involving
quantitative data. The basic goal of content analysis is to take a verbal,
non-quantitative document and transform it into quantitative data.
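As a rough illustration of this transformation, the sketch below counts how often a few analyst-chosen category terms occur in a passage of text; the category terms and the sample passage are invented for the example.

```python
import re
from collections import Counter

def content_analysis(text, category_terms):
    """Turn a verbal, non-quantitative document into simple quantitative
    data: frequency counts of pre-defined category terms."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {term: counts[term] for term in category_terms}

sample_passage = ("The pupil must obey the teacher. The teacher guides the "
                  "pupil, and the pupil serves the nation.")
print(content_analysis(sample_passage, ["pupil", "teacher", "nation", "obey"]))
# -> {'pupil': 3, 'teacher': 2, 'nation': 1, 'obey': 1}
```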
(iii) Interpretation of Historical Data:
Once the researcher establishes the validity and authenticity of data,
interpretation of the facts in the light of the topic of research is necessary.
This step requires caution, imagination, ingenuity, insight and
scholarliness. The scientific status of his study depends on these
characteristics. The researcher needs to be aware of his/her biases, values,
prejudices and interest as these could influence the analysis and
interpretation of the data as well as the perceptions of the researcher. He
needs to make sense out of the multitude of data gathered which generally
involves a synthesis of data in relation to a hypothesis or question or
theory rather than mere accumulation or summarization. In doing so, he /
she should avoid biases and unduly projecting his / her own personality
onto the data. The data should be fitted into a logically parsimonious
structure. The researcher should be clear about the interpretative
framework so as to become sensitive towards bias in other historical
researchers’ interpreta tions who have conducted research on the same or
similar topics.
In historical research, ‘causes’ are in the form of antecedents or
precipitating factors. They are not ‘causes’ in the strictly scientific sense.
Such antecedents are always complex and hence the researcher should
avoid oversimplification while interpreting them. Past events are mainly
in the form of human behaviour. Therefore ‘causes’ in historical research
could be interpreted in terms of motives of the participants involved.
The researcher needs to identify the motives of the people involved in the
event under study while interpreting the data. These motives may be
multiple in nature and interact with each other. This makes interpretation
of the data a difficult task. For example, a new government decides to
change the prevalent textbooks. The motives here could be many, such as:
its political ideology does not match the prevalent textbooks, it had a
personal grudge against the authors of the prevalent textbooks, or the
ministers concerned wanted to derive personal glory out of their actions.
These reasons may influence each other, making the task of interpretation
of data difficult.
Historical researchers can make use of concepts from other social and
behavioural science disciplines in analyzing and interpreting data. Some
examples of such concepts may be bureaucracy, role, institution (from
sociology), leadership, institutional effectiveness (from management),
culture (from anthropology), motive, personality, attitude etc. (from
psychology) and so on.
The researcher also can make use of the concepts of historical time and
historical space while interpreting the data.
The concept of historical time makes use of a chronology of events, i.e.
the researcher needs to identify the chain of events (chronology) of
substantive history and then try to understand the meaning of these events,
the relationship among the events and the relationship of the events to the
research topic. A researcher who is studying more than one set of
chronological data within the same time frame may gain increased insight
into multiple events and their causes.
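A simple way to work with historical time is to keep each event with a date and sort the records into a chronology, as sketched below in Python; the events listed are placeholders used only to show the idea.

```python
# Each event is a (year, description) pair; the entries are illustrative only.
events = [
    (1854, "Wood's Despatch on education"),
    (1948, "University Education Commission appointed"),
    (1882, "Hunter Commission constituted"),
]

# Build the chronology by sorting on the year, then read the chain of events in order.
for year, description in sorted(events):
    print(year, "-", description)
```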
The concept of historical space deals with ‘where’ the event originated,
spread or culminated. This could provide a different insight into the
meaning of the data.
The historical researcher can also use analogy as a source of hypothesis or
as a frame of reference for interpretation, i.e. he / she can draw parallels
between one historical event and other events. Here, one has to be aware
of similarities, differences as well as exceptions while comparing two
historical events; otherwise, such an extrapolation will be unreliable. Also,
it is risky to interpret an event by comparing it with another event in another
culture at another time.
(iv) Making Inferences and Generalizations in Historical Research:
In order to identify and explain the cause/s of a historical event, the
researcher must be aware of his/her assumptions, which are then used in
ascribing causation to subsequent events. Some examples of such
assumptions could include (i) history repeats itself, or (ii) historical events
are unique. The researcher must make clear whether his / her analysis is
based on the former assumption or the latter.
Some examples of ‘causes’ of historical events identified in prior
researches include (i) strong ideology (e.g. Maharshi Karve's ideology of
women's education), (ii) actions of certain key persons (e.g. Mohamed Ali
Jinnah's actions for India's partition), (iii) advances in science and
technology (e.g. use of computers in education), (iv) economic /
geographical / psychological / sociological factors or a combination of all
these (e.g. privatization of education) etc.
The historian’s objective is not only to establish facts but also to determine
trends in the data and causes of events leading to generalizations i.e. he /
she needs to synthesize and interpret and not merely summarize the data.
These data, as in other types of researches, are obtained not from the entire
population of persons, settings, events or objects pertaining to the topic,
but from a small sample. Moreover, this sample is selected from the
remains of the past. It cannot be selected from the entire population of
documents or relics that existed during the period under study. Such
remains may not be representative. This necessitates a very careful and
cautious approach in locating consistency in different documents and
relics while making generalizations. Also, the researcher should not rely
on only one document pertaining to an individual from the past while
making a generalization, as it will not be known whether the individual
held a particular opinion about an educational issue consistently or had
changed it over a period of time. If he had changed his opinion, the
researcher must find out when and how it was changed, under what
conditions and what were the consequences. This makes it imperative that
the researcher uses as many primary and secondary sources as possible on
a topic. If the evidence is limited, he needs to restrict the generalizability of
his interpretations to that extent.
E. Writing the Research Report:
This task involves the highest level of scholarship. In a historical research,
data collection is flexible. Besides, due to the relative lack of conclusive
evidence on which valid generalizations can be established, the writing of
historical research has to be a little freer so as to allow subjective
interpretation of the data. (This by no means implies distortion of truth.)
Thus, reports of historical research have no standard formats. The
presentation of data analysis, interpretations and the findings depends on
the nature of the problem.
There are several broad ways of reporting a historical investigation, as
follows:
i) The researcher can report the historical facts as answers to different
research questions. The answer to each question could be reported in a
separate chapter.
ii) He / she can present the facts in a chronological order, with each
chapter pertaining to a specific historical period.
iii) Report can also be written in a thematic manner where each chapter
deals with a specific theme /topic.
iv) Chapters could also deal with each state of India or each district of
an Indian state separately.
v) Chapters could also pertain to specific historical persons separately.
vi) The researcher can also combine two or more of these approaches
while writing the research report.
In addition, the report should contain a chapter each on introduction,
methodology, review of related literature, findings, the researcher's
interpretations and reflections on the interpretative process.
Check Your Progress – II
1. Explain how you will identify a research topic for studying the history
of education.
2. What are the different sources of historical data? How will you
evaluate these sources?
3. What care will you take in making inferences and generalizations in
historical research?
4. How will you organize writing the research report?
The researcher needs to demonstrate his / her scholarship and grasp of the
topic, his / her insights into the topic, and the plausibility and clarity of
interpretations. This requires creativity, ingenuity and imagination so as to
make the research report adequate and comprehensive.
4B.8 PROBLEMS AND WEAKNESSES TO BE AVOIDED
IN HISTORICAL RESEARCH
Some of the weaknesses, problems and mistakes that need to be avoided in
historical research are as follows:
1. The problem of research should not be too broad.
2. It should be selected after ensuring that sources of data are existent,
accessible and in a language known to the researcher.
3. Excessive use of easy-to-find secondary sources of data should be
avoided. Though locating primary sources of data is time-consuming
and requires effort, they are usually more trustworthy.
4. Adequate internal and external criticism of sources of historical data is
very essential for establishing the authenticity and validity of the data.
It is also necessary to ascertain whether statements concerning
evidence by one participant have influenced opinions of other
participants or witnesses.
5. The researcher needs to be aware of his/her own personal values,
interests and biases. For this purpose, it is necessary for the researcher
to quote statements along with the context in which they were made.
Lifting them out of context shows the intention of persuading the
readers. The researcher also needs to avoid both extreme generosity or
admiration as well as extreme criticism. The researcher needs to avoid
reliance on beliefs such as "old is gold", "new is always better" or
"change implies progress". All such beliefs indicate the researcher's bias and
personal values.
6. The researcher needs to ensure that the concepts borrowed from other
disciplines are relevant to his/her topic.
7. He/She should avoid unwarranted causal inferences arising on account
of (i) oversimplification (causes of a historical event may be multiple,
complex and interactive), (ii) faulty interpretation of meanings of
words, (iii) inability to distinguish between facts, opinions and
situations, (iv) inability to identify and discard irrelevant or
unimportant facts and (v) faulty generalization based on inadequate
evidence, faulty logic and reasoning in the analysis of data, use of
wrong analogy and faulty comparison of events in dissimilar cultures.
8. The researcher needs to synthesize facts into meaningful chronological
and thematic patterns.
9. The report should be written in a logical and scientific manner. It
should avoid flowery or flippant language, emotional words, dull and
colourless language or persuasive style.
10. The researcher should avoid projecting current problems onto
historical events, as this is likely to distort the interpretation of the past.
4B.9 CRITERIA OF EVALUATING HISTORICAL
RESEARCH:
Mouly has provided the following criteria of evaluating historical
research:
1. Problem: Has the problem been clearly defined? It is difficult enough
to conduct historical research adequately without adding to the
confusion by starting out with a nebulous problem. Is the problem
capable of solution? Is it within the competence of the investigator?
2. Data: Are data of a primary nature available in sufficient
completeness to provide a solution, or has there been an
overdependence on secondary or unverifiable sources?
3. Analysis: Has the dependability of the data been adequately
established? Has the relevance of the data been adequately explored?
4. Interpretation: Does the author display adequate mastery of his data
and insight into their relative significance? Does he display adequate
historical perspective? Does he maintain his objectivity or does he allow
personal bias to distort the evidence? Are his hypotheses plausible?
Have they been adequately tested? Does he take a sufficiently broad
view of the total situation? Does he see the relationship between his
data and other ‘historical facts’?
5. Presentation: Does the style of writing attract as well as inform? Does
the report make a contribution on the basis of newly discovered data or
new interpretation, or is it simply 'uninspired hack work'? Does it
reflect scholarliness?

Check Your Progress – III
1. What care will you take to avoid weaknesses in conducting historical
research?
2. State the criteria of evaluating historical research.
Suggested Readings:
 Garraghan, G. J. A Guide to Historical Method. New York: Fordham
University Press. 1946.
 Gottschalk, L. Understanding History. New York: Alfred A. Knopf.
1951.
 McMillan, J. H. and Schumacher, S. Research in Education: A
Conceptual Introduction. Boston, MA: Little, Brown and Company.
1984.
 Shafer, R. J. A Guide to Historical Method. Illinois: The Dorsey
Press. 1974.




4C
QUANTITATIVE AND QUALITATIVE RESEARCH - III
(EXPERIMENTAL DESIGN)

Unit Structure
4C.0 Objectives
4C.1 Introduction
4C.2 Experimental Designs
4C.3 Factorial Design
4C.4 Nested Design
4C.5 Single Subject Design
4C.6 Internal and External Experimental Validity
4C.7 Unit End Exercise
4C.0 OBJECTIVES:
After going through this module, you will be able to:
 Conceptualize the experimental method of educational research;
 Describe the salient features of experimental research;
 Conceptualize various experimental designs;
 Conceptualize internal and external experimental validity;
 Conceptualize the process of controlling the intervening and
extraneous variables; and
 Apply the experimental method to appropriate research problems.
4C.1 INTRODUCTION:
The experimental method in educational research is the application and
adaptation of the classical method of experimentation. It is a scientifically
sophisticated method. It provides a method of investigation to derive basic
relationships among phenomena under controlled conditions or, more
simply, to identify the conditions underlying the occurrence of a given
phenomenon. Experimental research is the description and analysis of
what will be, or what will occur, under carefully controlled conditions.
Experimenters manipulate certain stimuli, treatments, or environmental
conditions and observe how the condition or behaviour of the subject is
affected or changed. Such manipulations are deliberate and systematic.
The researchers must be aware of other factors that could influence the
outcome and remove or control them in such a way as to establish a
logical association between manipulated factors and observed factors.
Experimental research provides a method of hypothesis testing. The
hypothesis is the heart of experimental research. After the experimenter
defines a problem, he has to propose a tentative answer to the problem, or
hypothesis. Further, he has to test the hypothesis and confirm or
disconfirm it.
Although the experimental method has its greatest utility in the laboratory, it
has been effectively applied in non-laboratory settings such as the classroom.
The immediate purpose of experimentation is to predict events in the
experimental setting. The ultimate purpose is to generalize the variable
relationships so that they may be applied outside the laboratory to a wider
population of interest.
Characteristics of Experimental Method
There are four essential characteristics of experimental research: (i) Control,
(ii) Manipulation, (iii) Observation, and (iv) Replication.
Control: Variables that are not of direct interest to the researcher, called
extraneous variables, need to be controlled. Control refers to removing or
minimizing the influence of such variables by several methods such as:
randomization or random assignment of subjects to groups; matching
subjects on extraneous variable(s) and then assigning subjects randomly to
groups; making groups that are as homogeneous as possible on extraneous
variable(s); application of the statistical technique of analysis of covariance
(ANCOVA); balancing means and standard deviations of the groups.
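A minimal sketch of randomization as a control technique is given below: the subjects are shuffled and split into an experimental and a control group. The subject labels are invented for illustration.

```python
import random

def random_assignment(subjects, seed=None):
    """Randomly assign subjects to experimental and control groups of
    (nearly) equal size."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]   # (experimental, control)

experimental, control = random_assignment(
    ["S01", "S02", "S03", "S04", "S05", "S06"], seed=42)
print("E:", experimental)
print("C:", control)
```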
Manipulation: Manipulation refers to a deliberate operation of the
conditions by the researcher. In this process, a pre-determined set of
conditions, called the independent variable or experimental variable, is used.
It is also called the treatment variable. Such variables are imposed on the
subjects of the experiment. In specific terms, manipulation refers to the deliberate
operation of the independent variable on the subjects of the experimental group by the
researcher to observe its effect. Sex, socio-economic status, intelligence,
method of teaching, training or qualification of teacher, and classroom
environment are the major independent variables in educational research.
If the researcher, for example, wants to study the effect of ‘X’ method of
teaching on the achievement of students in mathematics, the independent
variable here is the method of teaching. The researcher in this experiment
needs to manipulate 'X', i.e. the method of teaching. In other words, the
researcher has to teach the experimental group using the 'X' method and see
its effect on achievement.
Observation: In experimental research, the experimenter observes the
effect of the manipulation of the independent variable on the dependent
variable. The dependent variable, for example, may be performance or
achievement in a task.
Replication: Replication is a matter of conducting a number of sub-
experiments, instead of one experiment only, within the framework of the
same experimental design. The researcher may make a multiple
comparison of a number of cases of the control group and a number of
cases of the experimental group. In some experimental situations, a
number of control and experimental groups, each consisting of equivalent
subjects, are combined within a single experiment.
Check your progress – 1
What are the characteristics of experimental research?
4C.2 EXPERIMENTAL DESIGNS:
Experimental design is the blueprint of the procedures that enable
the researcher to test hypotheses by reaching valid conclusions about
relation ships between independent and dependent variables (Best,
1982, p.68). Thus, it provides the researcher an opportunity for the
comparison as required in the hypotheses of the experiment and
enables him to make a meaningful interpretation of the results of the
study. The designs deal with practical problems associated with the
experimentation, such as: (i) how subjects are to be selected for
experimental and control groups, (ii) the ways through which variables are
to be manipulated and controlled, (iii) the ways in which extraneous
variables are to be controlled, (iv) how observations are to be made, and (v)
the type of statistical analysis to be employed.
Variables are the conditions or characteristics that the experimenter
manipulates, controls, or observes. The independent variables are the
conditions or characteristics that the experimenter manipulates or controls
in his or her attempt to study their relationships to the observed
phenomena. The dependent variables are the conditions or characteristics
that appear or disappear or change as the experimenter introduces,
removes or changes the independent variable. In educational research
teaching method is an example of an independent variable and the
achievement of the students is an example of a dependent variable. There
are some confounding variables that might influence the dependent
variable. Confounding variables are of two types: intervening and
extraneous variables. Intervening variables are those variables that cannot
be controlled or measured but may influence the dependent variable.
Extraneous variables are not manipulated by the researcher but influence
the dependent variable. It is impossible to eliminate all extraneous
variables, but sound experimental design enables the researcher to more or
less neutralize their influence on dependent variables.
There are various types of experimental designs. The selection of a
particular design depends upon factors like nature and purpose of
experiment, the type of variables to be manipulated, the nature of the data,
the facilities available for carrying out the experiment and the competence
of the experimenter. The following categories of experimental research
designs are popular in educational research:
(i) Pre-experimental designs – They are least effective and provide little
or no control of extraneous variables.
(ii) True experimental designs – employ randomization to control the
effects of variables such as history, maturation, testing, statistical
regression, and mortality.
(iii) Quasi-experimental designs – provide a less satisfactory degree of
control and are used only when randomization is not feasible.
(iv) Factorial designs – more than one independent variable can be
manipulated simultaneously. Both independent and interaction
effects of two or more than two factors can be studied with the help
of this factorial design.
Symbols used:
In discussing experimental designs a few symbols are used.
E – Experimental group
C – Control group
X – Independent variable
Y – Dependent variable
R – Random assignment of subjects to groups
Yb – Dependent variable measures taken before the experiment / treatment (pre-test)
Ya – Dependent variable measures taken after the experiment / treatment (post-test)
Mr – Matching subjects and then random assignment to groups.
a. Pre-Experimental design:
There are two types of pre-experimental designs:
1. The one group pre-test post-test design:
This is a simple experimental research design without the involvement
of a control group. In this design the experimenter takes dependent
variable measures (Yb) before the independent variable
(X) is manipulated and again takes its measures (Ya) afterwards. The
difference, if any, between the two measurements (Yb and Ya) is computed
and is ascribed to the manipulation of X.
Pre-test    Independent variable    Post-test
Yb          X                       Ya

The experimenter, in order to evaluate the effectiveness of computer-based
instruction (CBI) in teaching of science to grade V students, administers
an achievement test to the whole class (Yb) before teaching through CBI.
After teaching through CBI, the test is administered to the same class again
to measure Ya. The means of Yb and Ya are compared and the difference,
if any, is ascribed to the effect of X, i.e. teaching through CBI.
The design has the inherent limitation of using one group only. The design
also lacks scope of controlling extraneous variables like history,
maturation, pre -test sensitization, and statistical regression etc.
2. The two groups static design:
This design provides some improvement over the previous by adding a
control group which is not exposed to the experimental treatment. The
experimenter may take two sections of grade-V of one school, or grade-V
students of two different schools (intact classes),
as experimental and control groups respectively and assume the two
groups to be equivalent. No pre-test is taken to ascertain it.
Group Independent Variable Post-test
E X Ya
C - Ya
This design compares the post-test scores of the experimental group (Ya E)
that has received experimental treatment (X) with that of the control group
(Ya C) that has not received X.
The major limitation of the design is that there is no provision for
establishing the equivalence of the experimental (E) and control (C)
groups. However, since no pretest is used, this design controls for the
effects of extraneous variables such as history, maturation, and pre-testing.
Check your progress – 2
What is pre-experimental design?
b. Quasi -Experimental Design:
Researchers commonly try to establish equivalence between the
experimental and control groups; to the extent they are successful in
doing so, the design is valid. Sometimes it is extremely
difficult or impossible to equate groups by random selection or
random assignment, or by matching. In such situations, the
researcher uses a quasi-experimental design.
The Non-Equivalent Groups Design is probably the most frequently
used design in social research. It is structured like a pretest -posttest
randomized experiment, but it lacks the key feature of the randomized
designs -- random assignment. In the Non - Equivalent Groups Design, we
most often use intact groups that we think are similar as the treatment and
control groups. In education, we might pick two comparable classrooms or
schools. In community -based research, we might use two similar
communities. We try to select groups that are as similar as possible so we
can fairly compare the treated one with the comparison one. But we can
never be sure the groups are comparable. Or, put another way, it's unlikely
that the two groups would be as similar as they would be if we assigned
them through a random lottery. Because it is often likely that the groups
are not equivalent, this design was named the non-equivalent groups design
to remind us of that.
Group          Pre-test    Independent Variable    Post-test
Experimental   Yb          X                       Ya
Control        Yb          -                       Ya
So, what does the term "nonequivalent" mean? In one sense, it just means
that assignment to group was not random. In other words, the researcher
did not control the assignment to groups through the mechanism of
random assignment. As a result, the groups may be different prior to the
study. This design is especially susceptible to the internal validity threat of
selection. Any prior differences between the groups may affect the
outcome of the study. Under the worst circumstances, this can lead us to
conclude that our program didn't make a difference when in fact it did, or
that it did make a difference when in fact it didn't.
The counterbalanced design may be used when the random
assignment of subject to experimental group and control group is not
possible. This design is also known as rotation group design. In
counterbalanced design each group of subjects is assigned to the
experimental treatment at different times during the experiment. This
design overcomes the weakness of non -equivalent design. When intact
groups are used, rotation of groups provides an opportunity to eliminate
any differences that might exist between the groups. Since all the groups
are exposed to all the treatments, the results obtained cannot be attributed
to the pre-existing differences in the subjects. The limitation of this design
is that there is carry -over effect of the groups from one treatment to the
next. Therefore, this design should be used only when the experimental
treatments are such that the administration of one treatment on a group
will have no effect on the next treatment. There is possibility of boring
students with repeated testing.
Check your progress – 3
How does pre -experimental design differ from quasi experimental design?
c. TRUE EXPERIMENTAL DESIGN:
True experimental designs are used in educational research because they
ascertain equivalence of experimental and control groups by random
assignment of subjects to these groups, and thus, control the effects of
extraneous variables like history, maturation, testing, measuring
instruments, statistical regression and mortality. This design, in contrast to
pre-experimental design, is a better choice and is used in educational
research wherever possible.
1. Two groups, randomized subjects, post -test only design:
This is one of the most effective designs in minimizing the threats to
experimental validity. In this design subjects are assigned to experimental
and control groups by random assignment which controls all possible
extraneous variables, e.g. testing, statistical regression, mortality etc. At
the end of experiment the difference between the mean post -test scores of
the experimental and control group are put to statistical test –‘t’ test or
analysis of variance (ANOVA). If the differences between the means are
found signifi cant, it can be attributed to the effect of (X), the independent
variable.
Group    Independent Variable    Post-test
E        X                       Ya
C        -                       Ya
(R – subjects assigned to the groups at random)
The main advantage of this design is random assignment of subjects to
groups, which assures the equivalence of the groups prior to experiment.
Further, this design, in the absence of pretest, controls the effects of
history, maturation and pre -testing etc.
This design is useful in the experimental studies at the pre- primary or
primary stage and the situations in which a pre -test is not appropriate or
not available.
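To make the statistical comparison mentioned above concrete, here is a minimal sketch of an independent-samples 't' test on the two groups' post-test means. The scores are invented purely for illustration, and scipy is assumed to be available; this is not taken from the text itself.

```python
from scipy import stats

# Hypothetical post-test (Ya) scores after random assignment to groups
post_experimental = [68, 72, 75, 70, 74, 69, 71, 73]   # group E, received X
post_control      = [61, 65, 63, 60, 66, 62, 64, 59]   # group C, no treatment

# Independent-samples t-test on the two sets of post-test scores
t, p = stats.ttest_ind(post_experimental, post_control)
print(f"t = {t:.2f}, p = {p:.4f}")   # a small p suggests the difference is due to X
```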
2. Two groups, randomized matched subject, post -test only design :
This design, instead of using random assignment of subjects to
experimental and cont rol group, uses the technique of matching. In this
technique, the subjects are paired so that their scores on matching
variable(s), i.e. the extraneous variable(s) the experimenter wants to
control, are as close as possible. One subject of each pair is randomly
assigned to one group and the other to the second group. The groups are
designated as experimental and control by random assignment (tossing a
coin).
Group    Independent Variable    Post-test
E        X                       Ya
C        -                       Ya
(MR – subjects matched and then randomly assigned to groups)
This design is mainly used where the "two groups randomized subjects,
post-test only" design is not applicable and where small groups are to be
used. The random assignment of subjects to groups after matching adds to
the strength of this design. The major limitation of the design is that it is
very difficult to use matching as a method of controlling extraneous
variables because in some situations it is not possible to locate a match
and some subjects are excluded from the experiment.
3. Two groups randomized subjects, pre -test post -test design:
In this design subjects are assigned to the experimental group and the
control group at random and are given a pre -test (Y b). The treatment is
introduced only to the experimental group, after which the two groups are
measured on the dependent variable. The difference in scores or gain scores
(D) in respect of pre-test and post-test (Ya – Yb = D) is found for each
group and the difference in scores of both the groups (De and Dc) is
compared in order to ascertain whether the experimental treatment
produced a significant change. Unless the effect of the experimental
manipulation is strong, the analysis of the differential scores is not
advisable (Kerlinger, 1973, p. 336). If they are analyzed, however, a 't' or
'F' test is used.
Group    Pre-test    Independent Variable    Post-test
E        Yb          X                       Ya
C        Yb          -                       Ya
The main advantages of this design include:
Through initial randomization and pre-testing, equivalence between the
two groups can be ensured.
Randomization seems to control most of the extraneous variables.
But the design does not guarantee external validity of the experiment as
the pretest may increase the subjects’ sensitivity to the manipulation of X.
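A minimal sketch of the gain-score comparison described above, with invented pre-test and post-test scores (scipy assumed; not part of the original text):

```python
from scipy import stats

# Hypothetical pre-test (Yb) and post-test (Ya) scores for each subject
exp_pre,  exp_post  = [40, 42, 38, 45, 41], [55, 58, 50, 60, 57]
ctrl_pre, ctrl_post = [41, 39, 43, 40, 44], [46, 44, 47, 45, 49]

# Gain score D = Ya - Yb for every subject in each group
d_e = [a - b for a, b in zip(exp_post, exp_pre)]
d_c = [a - b for a, b in zip(ctrl_post, ctrl_pre)]

# Compare the mean gains of the two groups with a 't' test
t, p = stats.ttest_ind(d_e, d_c)
print(f"mean gain E = {sum(d_e)/len(d_e):.1f}, mean gain C = {sum(d_c)/len(d_c):.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```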
4. The Solomon three groups design:
This design, developed by Solomon seeks to overcome the difficulty of the
design: Randomized Groups, Pre -test Posttest Design, i.e. the interactive
effects of pre -testing and the experimental manipulation. This is achieved
by employing a second control group (C2) which is not pre-tested but is
exposed to the experimental treatment (X).
Group Pre-test Independent Variable Post-test
E Yb X Ya
C1 Yb - Ya
C2 - X Ya
This design provides scope for comparing post-test (Ya) scores for the
three groups. Even though the experimental group has a significantly
higher mean score as compared to that of the first control group
(Ya E > Ya C1), one cannot be confident that this difference is due to the
experimental treatment (X). It might have occurred because of the subjects'
pre-test sensitization. But, if the mean Ya score of the second control group
is also higher as compared to that of the first control group
(Ya C2 > Ya C1), then one can assume that the experimental treatment has
produced the difference rather than the pre-test sensitization, since C2 is
not pre-tested.
5. The Solomon four group design:
This design is an extension of Solomon three group design and is really a
combination of two two -groups designs: (i) Two groups randomized
subjects pre -test post -test design; and (ii) Two group randomized subjects
post-test only design. This design provides rigorous control over
extraneous variables and also provides opportunity for multiple
comparisons to determine the effects of the experimental treatment(X).
In this design the subjects are randomly assigned to the four groups: one
experimental (E) and three control (C1, C2 and C3). The experimental and
the first control group (E and C1) are pre-tested groups, and the second and
third control groups (C2 and C3) are not pre-tested groups. If the post-test
mean score of the experimental group (Ya E) is significantly greater than
the post-test mean score of the first control group (Ya C1), and also the
post-test mean score of the second control group (Ya C2) is significantly
greater than the post-test mean score of the third control group (Ya C3),
the experimenter arrives at the conclusion that the experimental treatment (X)
has effect.
Group    Pre-test    Independent Variable    Post-test
E        Yb          X                       Ya
C1       Yb          -                       Ya
C2       -           X                       Ya
C3       -           -                       Ya
(R – subjects randomly assigned to the four groups)
This design is considered to be a strong one as it actually involves
conducting the experiment twice, once with pre-test and once without
pre-test. Therefore, if the results of these two experiments are in agreement,
the experimenter can have much greater confidence in his findings. The
design seems to have two sources of weakness. One is practicability, as it
is difficult to conduct two simultaneous experiments, and the researcher
encounters the difficulty of locating more subjects of the same kind. The
other difficulty is statistical. Since the design involves four sets of
measures for four groups and the experimenter has to make comparison
between the experimental and first control group and between second and
third control groups there is no single statistical procedure that would
make use of the six available measures simultaneously.
Check your progress – 4
1. How does true experimental design differ from quasi experimental
design?
4C.3 FACTORIAL DESIGN:
Experiments may be designed to study simultaneously the effects of two
or more variables. Such an experiment is called factorial experiment.
Experiments in which the treatments are combinations of levels of two or
more factors are said to be factorial. If all possible treatment combinations
are studied, the experiment is said to be a complete factorial experiment.
When two independent factors have two levels each, we call it as 2x2
(spoken "two -by-two”) factorial design. When three independent factors
have two levels each, we call it a 2x2x2 factorial design. Similarly, we may
have 2x3, 3x3, 3x4, 3x3x3, 2x2x2x2,etc.
Simple Factorial Design :
A simple factorial design is 2x2 factorial design. In this design there are
two independent variables and each of the variables has two levels. One
advantage is that information is obtained about the interaction of factors.
Both independent and interaction effects of two or more than two factors
can be studied with the help of this factorial design.
In factorial designs, a factor is a major independent variable. In this
example we have two factors: methods of teaching and intelligence level
of the students. A level is a subdivision of a factor. In this example,
method of teaching has two levels and intelligence has two levels.
Sometimes we depict a factorial design with a numbering notation. In this
example, we can say that we have a 2x2 (spoken "two-by-two") factorial
design. In this notation, the number of
numbers tells us how many factors there are and the number values tell
how many levels. The number of different treatment groups that we have
in any factorial design can easily be determined by multiplying
through the number notation. For instance, in our example we have
2x2 = 4 groups. In our notational example, we would need 3 x 4 = 12
groups. A full factorial experiment is an experiment whose design consists
of two or more factors, each with discrete possible values or "levels", and
whose experimental units take on all possible combinations of these levels
across all such factors. A full factorial design may also be called a fully -
crossed design. Such an experiment allows studying the effect of each
factor on the response variable, as well as the effects of interactions
between factors on the response variable.
Since one of the objectives is to compare various combinations of these
groups, the experimenter has to obtain the mean scores for each row and
each column. The experimenter can first study the main effect of the two
independent variables and the interaction effect between the intelligence
level and teaching method. For the vast majority of factorial experiments,
each factor has only two levels. For example, with two factors each taking
two levels, a factorial experiment would have four treatment combinations
in total, and is usually called a 2×2 factorial design. The first independent
variable, which is manipulated, has two values and is called the
experimental variable. The second independent variable, which is divided
into levels, may be called the control variable. For example, there are two
experimental treatments, that is, teaching through co-operative learning and
teaching through the lecture method. It is observed that there may be
differential effects of these methods on different levels of intelligence of
the students. On the basis of the IQ score the experimenter divides the
students into two groups: one high intelligence group and the other the low
intelligence group. There are thus four groups of students, one for each
combination of teaching method and intelligence level, as shown below:

                                   High Intelligence Group    Low Intelligence Group
Teaching Through Co-operative      Gain Score on the          Gain Score on the
Learning Method                    Dependent Variable         Dependent Variable
Teaching Through Lecture Method    Gain Score on the          Gain Score on the
                                   Dependent Variable         Dependent Variable
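As a hedged illustration of how the main and interaction effects of such a 2x2 design might be examined, the sketch below runs a two-way ANOVA on invented gain scores; pandas and statsmodels are assumed to be installed, and the cell values are hypothetical, not taken from the text.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical gain scores for the four cells of the 2x2 design
data = pd.DataFrame({
    "method":       ["coop"] * 6 + ["lecture"] * 6,
    "intelligence": (["high"] * 3 + ["low"] * 3) * 2,
    "gain":         [18, 20, 19, 12, 11, 13, 14, 15, 13, 10, 9, 11],
})

# Two-way ANOVA: main effects of method and intelligence plus their interaction
model = ols("gain ~ C(method) * C(intelligence)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# The number of treatment groups follows from the notation: 2 x 2 = 4, 3 x 4 = 12, etc.
```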

Check your progress – 5
1. How does factorial design differ from true experimental design?
4C.4 NESTED DESIGN:
In a nested design, each subject receives one, and only one, treatment
condition. In a nested design, the levels of one factor appear only within
one level of another factor. The levels of the first factor are said to be
nested within the level(s) of the second factor. When variables such as
race, income and education, etc. may be found only at a particular level of
the independent variable, these variables are called nested variables. In
these studies the various nested variables are grouped for the study. For
example, a researcher is studying school effectiveness with academic
achievement of students as the indicator or criterion variable. In this type
of research, school type can be nested within individual schools which can
be nested within classrooms. The major distinguishing feature of nested
designs is that each subject has a single score. The effect, if any, occurs
between groups of subjects and thus the name “ Between Subjects” is
given to these designs. The relative advantages and disadvantages of
nested designs are opposite those of crossed designs. First, carry over
effects are not a problem, as individuals are measured only once. Second,
the number of subjects needed to discover effects is greater than with
crossed designs. Some treatments by their nature are nested. The effect of
gender, for example, is necessarily nested. One is either a male or a
female, but not both. Religion is another example. Treatment conditions
which rely on a pre -existing condition are sometimes called demographic
or blocking factors.
Crossed Design :
In a crossed design each subject sees each level of the treatment
conditions. In a very simple experiment, such as one that studies the
effects of caffeine on alertness, each subject would be exposed to both a
caffeine condition and a no-caffeine condition. For example, using the
members of a statistics class as subjects, the experiment might be
conducted as follows. On the first day of the experiment, the class is
divided in half with one half of the class getting coffee with caffeine and
the other half getting coffee without caffeine. A measure of alertness is
taken for each individual, such as the number of yawns during the class
period. On the second day the conditions are reversed; that is, the
individuals who received coffee with caffeine are now given coffee
without and vice -versa. The size of the effect will be the difference of
alertness on the days with and without caffeine.
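Because each subject in a crossed design is measured under every condition, the analysis is a paired (within-subjects) comparison. A minimal sketch with invented alertness scores (scipy assumed; not drawn from the text):

```python
from scipy import stats

# Hypothetical alertness measures for the same eight subjects under both conditions
with_caffeine    = [8, 7, 9, 6, 8, 7, 9, 8]
without_caffeine = [5, 6, 6, 4, 7, 5, 6, 5]

# Paired t-test: each subject serves as his or her own control
t, p = stats.ttest_rel(with_caffeine, without_caffeine)
diffs = [a - b for a, b in zip(with_caffeine, without_caffeine)]
print(f"mean difference = {sum(diffs)/len(diffs):.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```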
The distinguishing feature of crossed designs is that each individual will
have more than one score. The effect occurs within each subject, thus
these designs are sometimes referred to as within-subjects designs.
Crossed designs have two advantages. One, they generally require
fewer subjects, because each subject is used a number of times in the
experiment. Two, they are more likely to result in a significant effect,
given the effects are real.
Crossed designs also have disadvantages. One, the experimenter must be
concerned about carry -over effects. For example, individuals not used to
caffeine may still feel the effects of caffeine on the second day, even
though they did not receive the drug. Two, the first measurements taken
may influence the second. For example, if the measurement of interest was
score on a statistics test, taking the test once may influence performance
the second time the test is taken. Three, the assumptions necessary when
more than two treatment levels are employed in a crossed design may be
restrictive.
Check your progress - 6
1. What is the difference between nested design and crossed design?
4C.5 SINGLE FACTOR EXPERIMENT:
Many experiments involve single treatment or variable with two or more
levels. First, a group of experimental subjects may be divided into
independent groups, using a random method. Different treatment may be
applied to each group. One group may be a control group, a group to
which no treatment is applied. For meaningful interpretation of
experiment, results obtained under treatment may be compared with
results obtained in the absence of treatment. Comparison may be made
between treatments and between treatment and a control.
Some single factor experiments involve a single group of subjects. Each
subject receives treatments. Repeated observations or measurements are
made on the same subjects.
Some single factor experiments may consist of groups that are matched
on one or more variables which are known to be correlated with the
dependent variable. For example IQ may be correlated with achievement.
An example of single factor Experiment :
It is believed that the amount of time a player warms up at the beginning
will have a significant impact on his game of lawn tennis. The hypothesis is
that if he does not warm up at all or only for a brief time (less than 15
minutes), he will be stiff and his score will be poor . However, if he warms
up too much (over 40 minutes), he will be tired and his game score will
also suffer. He needs to choose levels of warming up to test this
hypothesis that are significantly different enough. The levels he will test
are warming up for 0, 15, 30, and 45 minutes.
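A single-factor experiment with several levels such as this one is commonly analysed with a one-way ANOVA. The sketch below is only illustrative; the game scores are invented and scipy is assumed to be available.

```python
from scipy import stats

# Hypothetical game scores under the four warm-up levels (0, 15, 30, 45 minutes)
scores_0  = [52, 55, 50, 53]
scores_15 = [63, 66, 61, 64]
scores_30 = [70, 68, 72, 69]
scores_45 = [60, 58, 62, 59]

# One-way ANOVA across the four levels of the single factor (warm-up time)
f, p = stats.f_oneway(scores_0, scores_15, scores_30, scores_45)
print(f"F = {f:.2f}, p = {p:.4f}")
```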
4C.6 INTERNAL AND EXTERNAL EXPERIMENTAL
VALIDITY:
Validity of experimentation :
An experiment must have two types of validity: internal validity and
external validity (Campbell and Stanley, 1963):
Internal validity :
Internal validity refers to the extent to which the manipulated or
independent variables actually have a genuine effect on the observed
results or dependent variable and the observed results were not affected by
the extraneous variables. This validity is affected by the lack of control of
extraneous variables.
External validity :
External validity is the extent to which the relationships among the
variables can be generalized outside the experimental setting like other
population, other variables. This validity is concerned with the
generalizability or representativeness of the findings of experiment, i.e. to
what population, setting and variables can the results of the experiment be
generalized.
Factors affecting validity of experimentation :
In educational experiments, a number of extraneous variables influence
the results of the experiment in ways that are difficult to evaluate. Although
these extraneous variables cannot be completely eliminated, many of them
can be identified. Campbell and Stanley (1963) have pointed out the
following major variables which affect significantly the validity of an
experiment:
History : The variables, other than the independent variables, that may
occur between the first and the second measurement of the subjects (Pre -
test and post-test).
Maturation : The changes that occur in the subjects over a period of time
and may be confused with the effects of the independent variables.
Testing: Pre-testing, at the beginning of an experiment, may sensitize the
subjects, which may produce a change among them and may affect their
post-test performance.
Measuring Instruments : Different measuring instruments, scorers,
interviewers or the observers used at the pre and post testing stages; and
unreliable measuring instruments or techniques are threats to the validity
of an experiment.
Statistical regression : It refers to the tendency for extreme scores to
regress or move towards the common mean on subsequent measures. The
subjects who scored high on a pre -test are likely to score relatively low on
the retest whereas the subjects who scored low on the pre -test are likely to
score high on the retest.
Experimental mortality : It refers to the differential loss of subjects from
the comparison groups. Such loss of subjects may affect the findings of
the study. For e xample, if some subjects in the experimental group who
received the low scores on the pre -test drop out after taking the test, this
group may show higher mean on the post -test than the control group.
Differential selection of subjects: It refers to difference between/among
groups on some important variables related to the dependent variable
before application of the experimental treatment.
Check your progress – 7
1. What is experimental validity?

4C.7 UNIT END EXERCISE:
Differentiate between the true experimental designs and factorial design.
Differentiate between internal and external validity.
What is the significance of randomization in experimental research?
References:
 Anderson, G. (1990): Fundamentals of Educational Research. The
Falmer Press, London.
 Best, J.W. & Kahn, J.V. (1993): Research in Education (7th Ed.).
Prentice Hall of India Pvt. Ltd., New Delhi.
 Campbell, D. T. & Stanley, J. C. (1963): Experimental and Quasi-
Experimental Designs for Research. Chicago: Rand McNally.
 Fisher, R. A. (1959): Statistical Methods & Scientific Inference. New
York: Hafner Publishing.
 Gay, L.R. (1987): Educational Research. Englewood Cliffs, NJ:
Macmillan Publishing Company.
 Kerlinger, F.N. (1964): Foundations of Behavioural Research (2nd
Ed.). Surjeet Publications, New Delhi.
 Koul, L. (1984): Methodology of Educational Research (2nd Ed.).
Vikash Publishing House Pvt. Ltd., New Delhi.
 Stockburger, David W. (1998): Introductory Statistics: Concepts,
Models, and Applications (2nd Ed.). www.atomicdogpublishing.com
 https://www.techtarget.com/whatis/definition/survey-research
 https://conjointly.com/kb/types-of-surveys/


 
munotes.in

Page 129

129 5A
TOOLS AND TECHNIQUES OF
RESEARCH -I
(PREPARATION OF TOOL)
Unit Structure
5A.0 Classical test theory and Item Response Theory of test construction
5A.1 Steps of Preparing a Research Tool
5A.2 Validity
5A.3 Types of Validity
5A.4 Factors affecting validity
5A.5 Reliability
5A.6 The methods of Estimating Reliability
5A.7 Factors affecting Reliability
5A. 8 Item Analysis
5A.9 Steps involved in Item –Analysis
5A.10 Standardization of a tool
5A.0 CLASSICAL TEST THEORY AND ITEM
RESPONSE THEORY OF TEST CONSTRUCTION
Measurement is the process of quantifying the attributes, traits,
characteristics of a concept or construct.
Theories of measurement or test help us in understanding and answering
some important questions like does the test work? Does the test measur e
what it is expected to measure? Why a test does behaves in a particular
way? etc. They help us in interpretation of the test scores, evaluate how
good/bad the test is and influence of the distracters. They help in
interpreting and treating the test resul ts mathematically or statistically.
Two widely used test theories are Classical Test Theory (CTT) and Item
Response Theory .
Classical test theory is the most widely used theory in the field of
psychology and education. It is also called true score theory. It can be
summed up in the form of the following equation:
X = T + E
Where,
X = observed score
T = true score
E = random error
The aim of this theory is to improve the tests specifically in terms of
validity and reliability of tests. This theory is based on the ass umption that
each person has a true score.
CTT talks about three basic concepts i.e. test score, error and true score.
Test score refers to observed score. e.g. If you get 35 marks in English
test, 35 is your test score. Error refers to the amount of error found in a test
or measure or extraneous factors which we cannot control but affect the
outcome. True score refers to the score
that would be obtained if there is no error in the measurement or a test.
True scores quantify the attributes of a concept or construct. When the
value of the true score increases the response to the item representing that
same concept is expected to increase. It is assumed that the error is:
i) Normally distributed
ii) Uncorrelated with the true score
iii) Has a mean of zero
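A small simulation can make the X = T + E idea concrete. The sketch below (numpy assumed; all values invented) generates true scores and errors that satisfy the assumptions above, and reads reliability as the share of observed-score variance that is due to true scores — a standard CTT interpretation, not something stated in the passage itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

true_score = rng.normal(50, 10, n)   # T: the stable attribute of each person
error      = rng.normal(0, 5, n)     # E: mean zero, independent of T
observed   = true_score + error      # X = T + E

# Under CTT, reliability = true-score variance / observed-score variance
reliability = true_score.var() / observed.var()
print(f"simulated reliability is approximately {reliability:.2f}")
```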
Item Response Theory :
Item response theory (IRT), also known as latent trait theory, is a more
recent theory. It was created to understand how individuals respond to
individual items in a psychological or educational test. It focuses more on
items of the tests. The theory attem pts to explain the relationship between
the latent traits (unobservable characteristics or attributes) and their
manifestation (observed behavior, response, performance). It tries to
establish relationship between properties of item in a test, responders a nd
underlying or latent trait that is being measured. It makes stronger
assumptions as compared to CTT.
Though it is theoretically possible, practically it is not feasible to use this
theory without specialized software as it uses complex statistical
algorithms. The models of IRT are framed according to the probability of
a respondent to respond in a specific manner to a test item, because of
specific underlying behavior. Along with the person’s level of construct,
item difficulty, item discrimination and guessing affect the response to a
particular test item. The simplest models of IRT are based on the item
difficulty and the complex ones consider the other two or more
parameters.
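As an illustration of the kind of model IRT works with, the sketch below codes the widely used three-parameter logistic item response function (ability theta, discrimination a, difficulty b, guessing c). This is a generic textbook formula supplied here as an assumption, not a formula quoted from the passage above.

```python
import math

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3-parameter logistic model;
    with a = 1 and c = 0 it reduces to the simplest (difficulty-only) model
    mentioned in the text."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Probability of success rises with ability for an item of difficulty b = 0.5
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.5, c=0.2), 2))
```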

Difference between Classical test theory and item response theory (Ref.
Hambleton, R.K. & Jones, R.W., 1993):
Classical Test Theory (CTT)                      Item Response Theory (IRT)
Most widely used approach                        Gaining more popularity
Considers general performance of the test       Useful for developing advanced tests
                                                 (computer adaptive testing)
Works at test level                              Works at item level
Challenging but comprehensible                   Complex and nonlinear models
Better for smaller samples (N = 100-300)         Better for larger samples (N = 500+)
Items are standardized on a given sample         Knowledge about an item is independent
                                                 of the sample on which it is tested
Simpler mathematical analysis compared to IRT    Scores describe examinee proficiency
                                                 that are not dependent on test difficulty
Estimation of model is conceptually              Estimation is complex
straightforward
Technique is robust, with minimal assumptions    More stringent assumptions

5A.1 STEPS OF CONSTRUCTING A RESEARCH TOOL:
The three major stages of a tool construction are:
1. Developing pool of items
2. Item analysis - computing difficulty index and discrimination index
3. Ascertaining validit y
These three stages could be discussed in details under following steps:
Step I. Planning:
1. Deciding the nature, objective, and purpose of the test.
It is essential to know the test's purpose before constructing it. Is it meant
to test cognitive levels, categorize the participants into different groups
based on particular criteria, or test skills? Keeping the purpose and
nature of the test in mind, the researcher would create test items.
2. Determine the weightage to different objectives (e.g., knowledge,
understanding, application, skill for an achievement test) and different
aspects of the construct depending on the operational definition.
The researcher needs to determine the weightage to different objectives.
The selected test items need to test the attainment of those objectives.
3. Determine the weightage to different content areas.
What content to be covered should be decided, and the items should be
created keeping the selected content in mind.
4. Determine the test's item types (MCQs, long answers, short answers,
statements for rating, check list etc.).
Depending on the purpose and objectives and nature of the construct, the
decision should be taken about the type of questions in the test.
5. Prepare a blueprint keeping all the dimensions in mind
6. Deciding the technical aspects like test size, printing, font size, time
duration, etc.
Decide the length of the test, specifications of paper for printing, size of
letters to be used, and time to be allotted for taking the test.
7. Instructions for scoring and administering the test.
A detailed manual to be prepared with all the necessary instructions
related to the test's scoring. How to score the different items (positive
items, negative items, easy items, difficult items), calculate final scores,
convert the scores (if any conversion is required) should be clearly
stated. Detailed instructions should be provided for administering the test,
like the number of participants, seating arrangement (if a specific
arrangement is required), how to greet, and what and how to instruct them,
in the manual.
8. Determine the weightage to different categories of the difficulty level of
items.
Decide what type of questions will carry how many marks. As per the
items' difficulty levels, varied weightage could be assigned to each item.
Step II. Preparing the test:
Following the blueprint, the test items should be created. While preparing
a standardized test, one has to be very careful. Hence, sufficient time
should be devoted to creating test items considering all the dimensions of
the blueprint.
While constructing these test items, one needs to perform the following
tasks:
1. Test items preparation: It is the most crucial task in constructing a
test. The test items should be clear, comprehensive, and free from
ambiguity. Appropriate vocabulary should be used for the test items.
Sufficient test items should be created as the number may reduce after
initial scrutiny. After the items are created, they need to be arranged in an
appropriate sequence, and if different forms of questions are being used,
they need to be grouped in different categories or sections.
2. Writing instructions about test items:
According to Gronlund, a test creator, generally the researcher, should
provide clear guidelines about the purpose of testing, time allotted for
testing, the basis of answering, the procedure for recording answers, and
methods to deal with guessing.
3. Directions about the administration of the test:
Detailed directions for the administration of the test are to be provided. The
conditions under which the test should be administered, when to
administer (middle of the session or in the end?), what time frame to be
followed, additional material (if required), precautions to be taken while
administering the test, etc. should be stated clearly.
4. Directions for scoring
A scoring key should be provided to facilitate objectivity in scoring.
Question -wise scoring key and marking scheme should be provided.
5. Question -wise chart:
A question-wise analysis chart should be prepared. This chart should
clearly describe the content area covered, the objective it intends to
measure, type, marks allotted, difficulty level, and time required to answer
it.
Step III. Pilot testing:
After constructing the test, it is essential to try it out on a smaller scale.
The purpose is to improve the test. Trying out a test helps identify
defective or ambiguous items, defects in administering the test, identifying
bad distractors, providing data for determining the difficulty level and
discriminatory value of the item, determining the number of items for the
final test, and determining the time frame.
Trying out a test is carried out in three phases:
Phase 1: Administer the test on a small sample (10 -15)
The feasibility of the items is observed at this stage. Based on the
observations, the test items are improved/ modified.
Phase 2: Administer on a larger sample (around 50).
This phase aims to select good test items and drop the poor ones. This
phase includes two activities; item analysis and preparing the final draft of
the test.

a) Item analysis: carry out item analysis for the following three parameters
i) Item difficulty
ii) Item discrimination
iii) Effectiveness of distracters
b) Preparing final draft: after item analysis, items satisfying all the criteria
for the test are included in the final draft of the test.
Phase 3: Administer on a large sample (400)
This phase aims to estimate the reliability and validity of the test.
Step IV. Evaluating the test:
Standardization and evaluation of the test is done as follows:
1. The final test and answer sheet is printed.
2. The test's duration is determined by taking the average of three groups
of participants; bright, average, and below average.
3. Instructions are printed for administering the test
4. Score are tabulated and measures of central tendency; mean, mode,
median, SD are calculated.
5. The obtained scores are plotted and checked for normality of
distribution, and various percentile scores are calculated. Scores like
T-score and Z-score are estimated (a brief sketch follows this list).
Norms like age, sex, rural-urban, etc. are calculated as required.
6. Validity of the test score is estimated by correlating the scores with
some other criterion. Construct validity is found out using factor
analysis.
7. For evaluating the newly constructed test, reliability is determined
using the split -half method or rational equivalence. The test reliability
can be estimated by using the test -retest method.
8. How feasible and usable the test is, is determined from the scoring, time,
and economic points of view. All the required norms are to be provided to
facilitate the interpretation of scores.
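The Z-score and T-score conversions referred to in step 5 can be sketched as follows; the raw scores are hypothetical and the usual convention T = 50 + 10z is assumed rather than taken from the text.

```python
import statistics

raw_scores = [32, 45, 51, 47, 38, 55, 60, 41, 49, 36]   # hypothetical raw scores
mean = statistics.mean(raw_scores)
sd   = statistics.stdev(raw_scores)

for x in raw_scores[:3]:
    z = (x - mean) / sd      # Z-score: distance from the mean in SD units
    t = 50 + 10 * z          # T-score: rescaled to mean 50, SD 10
    print(f"raw = {x}, z = {z:.2f}, T = {t:.1f}")
```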
5A.2 VALIDITY:
The first step in preparing a research tool is to develop a pool of items. This
is followed by item analysis, which involves computing the difficulty index
and discrimination index of each item. This is followed by ascertaining the
validity of the tool.
(i) Validity : Validity is the most important consideration in the selection
and use of any testing procedures. The validity of a test, or of any
measuring instrument, depends upon the degree of exactness with which
something is reproduced/copied or with which it measures what it purports
to measure.
The validity of a test may be defined as "the accuracy with which a test
measures what it attempts to measure." It is also defined as "the
efficiency with which a test measures what it attempts to measure."
Lindquist has defined validity as "the accuracy with which it measures
that which it is intended to measure, or the degree to which it approaches
infallibility in measuring what it purports to measure".
On the basis of the preceding definitions, it is seen that
 Validity is a matter of degree. It may be high, moderate or low
 Validity is specific rather than general. A test may be valid for one
specific purpose but not for another; valid for one specific group of
students but not for another.
5A.3 TYPES OF VALIDITY:
(i) Content Validity: According to Anastasi (1968), “content validity
involves essentially the systematic examination of the test content to
determine whether it covers a representative sample of the behaviour
domain to be measured". It refers to how well our tool sample represents
the universe of criterion behaviour. Content validity is employed in the
selection of items in research tools. The validation of content through
competent judgments is satisfactory when the sampling of items is wide
and judicious.
(ii) Criterion -related Validity: This is also known as empirical
validity.
There are two forms of criterion -related validity.
a) Predictive Validity : It refers to how well the scores obtained on the
tool predict future criterion behavior.
b) Concurrent Validity : It refers to how well the scores obtained on the
tool are correlated with present criterion behaviour.
(iii) Construct Validity : It is the extent to which the tool measures a
theoretical construct or trait or psychological variable. It refers to how
well our tool seems to measure a hypothesized trait.
5A.4 FACTORS AFFECTING VALIDITY:
The following points influence the validity of a test :
(I) Unclear Direction : If directions do not clearly indicate to the
respondent how to respond to tool items, the validity of a tool is
reduced.
(II) Vocabulary : If the vocabulary of the respondent is poor, then he/she
fails to respond to the tool item, even if he/she knows the answer. It
becomes a reading comprehension test for him/her, and the validity
decreases.
(III) Difficult Sentence Construction : If a sentence is so constructed as
to be difficult to understand, respondents would be confused, which
will affect the validity of the tool.
(IV) Poorly Constructed Test Items: These reduce the validity of a test.
(V) Use of Inappropriate Items : The use of inappropriate items lowers
validity.
(VI) Difficulty Level of Items : In an achievement test, too easy or too
difficult test items would not discriminate among students. Thereby
the validity of a test is lowered.
(VII) Influence of Extraneous Factors: Extraneous factors like the style
of expression, legibility, mechanics of grammar, (Spelling,
punctuation) handwriting, length of the tool, influence the validity
of a tool.
(VIII) Inappropriate Time Limit : In a speed test, if no time limit is given,
the result will be invalidated. In a power test, an inappropriate time
limit will lower its validity. Our tests are both power and speed
tests. Hence care should be taken in fixing the time limit.
(IX) Inappropriate Coverage : If the tool does not cover all aspects of the
construct being measured adequately, its content validity will be
adversely affected due to inadequate sampling of items.
(X) Inadequate Weightage : Inadequate weightage to some dimensions,
sub-topics or objectives would call into question the validity of tool.
(XI) Halo Effect : If a respondent has formed a poor impression about
one aspect of the concept, item, person, issue being measured,
he/she is likely to rate that concept, item, person, issue poor on all
other aspects too. Similarly, good impression
about one aspect of the concept, item, person, issue being measured,
he/she is likely to rate that concept, item, person, issue high on all other
aspects too. This is known as the halo effect, which lowers the validity of
the tool.
5A.5 RELIABILITY:
A test score is called reliable when we have reason to believe the score
to be stable and trustworthy. If we measure a student's level of
achievement, we hope that his score would be similar under different
administrators, using different scorers, with similar but not identical items,
or during a different time of the day. The reliability of a test may be
defined as -
“The degree of consistency with which the test measures what it does
measure ”.
Anastasi (1968) “Reliability means consistency of scores obtained by
same individual when re -examined with the test on different sets of
equivalent items or under other variable examining conditions”.
A psychological or educational measurement is indirect and is connected
with less precise instruments or traits that are not always stable. There are
many reasons why a pupil’s test score may vary–
a) Trait Instability : The characteristics we measure may change over a
period of time.
b) Administrative Error: Any change in direction, timing or amount of
rapport with the test administrative may cause score variability.
c) Scorin g Error: In accuracies in scoring a test paper will affect the
scores.
d) Sampling Error : Any particular questions we ask in order to infer a
person’s knowledge may affect his score.
e) Other Factors : Such as health, motivation, degree of fatigue of the
pupil, good or bad luck in guessing may cause score variability.
5A.6 THE METHODS OF ESTIMATING RELIABILITY:
The four procedures in common use for computing the reliability
coefficient of a test
a) Test – Retest Method
b) The Alternate or Parallel Forms Method.
c) The Internal Consistency Reliability
d) The Inter -rater Reliability
a) Test-Retest (Repetition) Method (Co-efficient of Stability) In test –
retest method the single form of a test is administered twice on the same
sample with a reasonable gap. Thus two sets of scores are obtained by
administering the test twice. The correlation co-efficient is computed
between the two sets of scores as the reliability index. If the test is repeated
immediately, many subjects will recall their first answers and spend their
time on new material, thus tending to increase their scores. Immediate
memory effects, practice and the confidence induced by familiarity with
the material will affect scores when the test is taken for a second time.
And, if the interval between tests is rather long, growth changes will affect
the retest score and tend to lower the reliability coefficient.
A high test–retest reliability or co-efficient of stability shows that there is
low variable error in the sets of obtained scores and vice -versa. The error
variance contributes inversely to the coefficient of stability.
b) Alternate or Parallel forms Method (Co-efficient of Equivalence
Reliability) : When alternative or parallel forms of a test can be developed,
the correlation between Form -‘A’ and Form ’B’ may be taken as a
reliability index.
The reliability index depends upon the alikeness of two forms of the test.
When the two forms are virtually alike, reliability will be high; when they
are not sufficiently alike, reliability will be too low. The two forms of the
test are a dministered on same sample of subjects on the same day after a
considerable gap. Pearson’s method of correlation is used for calculating
of correlation between the sets of scores obtained by administering the two
forms of the test. The co-efficient of correlation is termed the co-efficient
of equivalence.
c) The Split Half Method (The Co-efficient of Stability and Equivalence):
The test is administered once on a sample of subjects. Each individual's
score is obtained in two parts (odd-numbered items and even-numbered
items). The scoring is done separately for these two parts, the even-numbered
and the odd-numbered items. The co-efficient of correlation is calculated
between the two halves of scores. This co-efficient of correlation indicates
the reliability of the half test. The self-correlation co-efficient of the whole
test is then estimated by using the Spearman-Brown prophecy formula.
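A minimal sketch of the split-half procedure and the Spearman-Brown correction described above; the item responses are randomly generated for illustration and numpy is assumed, so the numbers are hypothetical rather than from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, 50)                       # hypothetical examinee ability
# 0/1 responses to 20 items: probability of success rises with ability
probs = 1 / (1 + np.exp(-(ability[:, None] - rng.normal(0, 1, 20))))
responses = (rng.random((50, 20)) < probs).astype(int)

odd_half  = responses[:, 0::2].sum(axis=1)   # scores on odd-numbered items
even_half = responses[:, 1::2].sum(axis=1)   # scores on even-numbered items

r_half = np.corrcoef(odd_half, even_half)[0, 1]   # correlation of the two halves
r_full = 2 * r_half / (1 + r_half)                # Spearman-Brown prophecy formula
print(f"half-test r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")
```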
d) The method of ‘Rational Equivalence (Co-efficient of Internal
Consistency) : The method of rational equivalence stresses the inter
correlations of items in the test and the correlations of the items with the
test as a whole. The assumption is that all items have the same or equal
difficulty value, but not necessarily that the same persons solve each item
correctly.
5A.7 FACTORS AFFECTING RELIABILITY:
(i) Interval : With any method involving two separate testing occasions, the
longer the interval of time between the two test administrations, the lower the
co-efficient will tend to be.
(ii) Test Length : Adding equivalent items makes a test more reliable,
while deleting them makes it less reliable.
A longer test will provide a more adequate sample of the behaviour being
measured and the scores are apt to be less influenced by chance factors.
Lengthening of a test is limited by a number of practical considerations like
time, fatigue, boredom, and the limited stock of good items.
(iii) Inappropriate Time Limit : A test is considered to be a pure speed
test if everyone who reaches an item gets it right, but no one has the time
to finish all the items. A power test is one in which everyone has time to
try all the items but, because of the difficulty level, no one obtains a
perfect score.
(iv) Group Homogeneity : Other things being equal, the more
heterogeneous the group, the higher the reliability. The test is more
reliable when applied to a group of pupils with a wide range of ability than
one with a narrow range of ability.
(v) Difficulty of the Items : Tests in which there is little variability among
the scores give lower reliability estimates than tests in which the variability
is high. Too difficult or too easy tests for a group will tend to be less
reliable because the differences among the pupils in such tests are narrow.
(vi) Objectivity of Scoring : The more subjectively a measure is scored,
the lower its reliability. Objective -type tests are more reliable than
subjective/Essay type tests.
(vii) Ambiguous Wording of Items : When the questions are interpreted
in different ways at different times by the same pupils, the test becomes
less reliable.
(viii) Inconsistency in Test Administration : Deviations in timing,
procedure, instructions, etc., fluctuations in the interests and attention
of the pupils, and shifts in emotional attitude make a test less reliable.
(ix) Optional Questions : If optional questions are given, the same pupils
may not attempt the same items on a second administration, thereby the
reliability of the test is reduced.
5A.8 ITEM ANALYSIS:
Item analysis begins after the test is over. The responses of the examinees
are to be analysed to check the effectiveness of the test items. The teacher
must come to some judgments regarding the difficulty level,
discriminating power and content validity of items. Only those items
which are effective are to be retained, while those which are not should
either be discarded or improved. This is known as the process of item -
analysis.
A test should be neither too easy nor too difficult, and each item should
discriminate validly between the high and low achieving students. Item
analysis provides information on:
(i) The difficulty value of each item.
(ii) The discriminating power of each item.
(iii) The effectiveness of distracters in the given item.
5A.9 STEPS INVOLVED IN ITEM – ANALYSIS:
(i) Arrange the response sheets from the highest score to the lowest score.
(ii) From the ordered set of response sheets, make two groups. Put those
with the highest scores in one group (top 27%) and those with the
lowest scores (lowest 27%) in the other group. The responses of the
respondents in the middle 46% of the group are not included in the
analysis.
(iii) For each item (T/F, completion types) count the number of
respondents in each group who answered the item correctly. For
alternate response type of items, count the number of students in each
group who chose each alternative.
A] Estimating Item Difficulty:
The difficulty of a test is indicated by the percentage of pupils who get the
item right
Difficulty (D. index) = (R / N) × 100
R – number of pupils who answered the item correctly
N – total number of pupils who attempted the item
The higher the difficulty index, the easier the item. If an item is too
difficult, it cannot discriminate at all and adds nothing to test reliability or
validity. An item having a 0% or 100% difficulty index has no
discriminating value. Half of the items may have difficulty indices
between 25% and 75%. One-fourth of the items should have a difficulty
index greater than 0.75 and one-fourth of the items should have a
difficulty index less than 0.25.
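A small worked example of the difficulty index, with hypothetical figures (not taken from the text):

```python
# Suppose 120 pupils attempted an item and 78 of them answered it correctly
R, N = 78, 120
difficulty = (R / N) * 100
print(f"difficulty index = {difficulty:.1f}%")   # a higher value means an easier item
```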
B] Estimating Discrimination Index:
The discriminating power of a tool item refers to the degree to which it
discriminates between the bright and the dull pupils in a group.
An estimate of an item's discriminating power can be obtained by the
formula:
Discriminating power = (Ru – RL) / (N/2)
Ru – number of correct responses from the upper group
RL – number of correct responses from the lower group
N – total number of pupils who attempted the item
Item 1 Item 2 Item 3 Item 4 Item 5
0.375 0.75 0.50 0.15 0.60

If all the respondents from the upper and lower group answer an item
correctly or if all fail to answer it correctly, the item has no validity, since
in neither case does the item separate the good from the poor respondents
in the sample .
The item has a high discrimination power if the number of high scorers
is greater in the upper group as compared to the number of high
scorers in the lower group.
Items that have zero or negative validity need to be discarded.
The higher the discrimination index, the better the item.
Thus the results of item -analysis help one judge the worth or quality of a
tool and also help in revising the tool items.
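The sketch below illustrates one common way of computing a discrimination index: the difference between the proportions of correct answers in the upper and lower 27% groups. The counts are hypothetical, and when the two groups are of equal size this is equivalent to dividing (Ru – RL) by the size of one group.

```python
# Hypothetical counts: 30 pupils in the upper group and 30 in the lower group
Ru, Rl = 24, 10          # correct answers in the upper and lower groups
n_group = 30             # pupils in each group

p_upper = Ru / n_group
p_lower = Rl / n_group
discrimination = p_upper - p_lower
print(f"discrimination index = {discrimination:.2f}")   # positive values favour the upper group
```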
5A.10 STANDARDIZATION OF A TOOL:
A tool is said to be standardized if it is 1) constructed according to a
well-defined procedure; 2) administered according to definite
instructions; 3) scored according to a definite plan; and 4) provided with a
statement of norms.
A tool is standardized in respect of content; method of administration;
method of scoring; and setting up of norms.
Thus standardization is a process for refining a measuring instrument
through scientific procedures.
Its steps are as follows :
1. Preparing a draft form of the tool and writing items as per the
operational definition of the tool. Items should be selected in such a
way that the expected respondent behaviour in different situations is
reflected in the items.
2. Computing discrimination index and difficulty index (if it is a test)
of the items. In other words, conducting item analysis. Through
this process, item validity is established.
3. Ascertaining content validity, face validity, construct validity and
criterion validity as the case may be.
4. Ascertaining the reliability of the tool.
5. Fixing the time limit. This includes recording the time taken by
different individuals at the time of the preliminary try out so as to fix
the time limit for the final administration of the tool . It also depends
upon the purpose of the tool. Time allowances must always take into
consideration the age and ability of the respondents , the type of items
used and the complexity of the learning outcomes to be measured.
6. Writing the directions for administering the tool. Careful
instructions for responding to different types of items and for
recording responses should be given/provided. The directions should
be clear, complete and concise so that each and every respondent
knows what he /she is expected to do. The respondent should be
instructed how and where to mark the items, the time allowed and
reduction of errors , if any, to be made in scoring. Instructions for
scoring are to be given in the test manual.
7. Preparing a scoring key. To ensure objectivity in scoring, the scoring
should be done in a pre -determined manner . In quantitative
research, a scoring key is prepared in advance.
8. Establishing norms. Computing the norms (age -wise, gender -wise,
grade -wise, urban -rural location -wise and so on). Norms provide
the user of a standardized tool with the basis for a practical
interpretation and application of the results. A respondent ’s score can
be interpreted only by comparing it with the scores obtained by
similar respondents . In the process of standardization, the tool must
be administered to a large, representative sample for whom it is
designed.
9. Preparing manual of the tool. Every standardized tool should be
accompanied by the tool manual. The purpose of the manual is to
explain what the tool is supposed to measure, how it was constructed,
how it should be administered and scored and how the results should
be interpreted and used. It should also explain the nature of the
sample selected, the number of cases in the sample and the procedure
of obtaining the norms. The manual should display the weaknesses as
well as the strengths of the tool and should provide examples of ways
in which the tool can be used as well as warnings concerning
limitations and possible misuse of the results.
Suggested Readings
1. Best, J. and Kahn, J. Research in Education (9th ed). New Delhi: Prentice Hall
of India Pvt. Ltd., 2006.
2. Campbell, D. T. and Fiske, D. W. "Convergent and Discriminant Validation
by the Multitrait-Multimethod Matrix." Psychological Bulletin, 56, pp. 81-105.
3. Garret, H. E. Statistics in Psychology and Education. New York:
Longmans Green and Co., 5th edition, 1958.
4. Reference: https://www.yourarticlelibrary.com/statistics-2/construction-of-a-standardised-test-4-steps-statistics/92629



5B
TOOLS AND TECHNIQUES OF
RESEARCH -II
(TYPES OF TOOLS)
Unit Structure:
5B.0 Objectives
5B.1 Introduction
5B.2 Rating scale
5B.3 Attitude scale
5B.4 Opinionnaire
5B.5 Questionnaire
5B.6 Checklist
5B.7 Semantic Differential Scale
5B.8 Psychological Test
5B.9 Inventory
5B.10 Observation
5B.11 Interview
5B.12 Let us sum up
5B.0 OBJECTIVES:
After reading this unit you will be able to :
 State different types of tools and techniques used for data collection
 Distinguish the basic difference between tools and techniques.
 Describe the concept, purpose and uses of various tools and techniques in
research.
 State the tools coming under enquiry forms, psychological tests,
observation and interview.
5B.1 INTRODUCTION:
In the last chapter, you have studied how to prepare a research tool.
In this chapter we will study what those research tools are, their concepts
and uses in the collection of data.
In every research work, it is essential to collect factual material or data
unknown or untapped so far. They can be obtained from many sources,

direct or indirect. It is necessary to adopt a systematic procedure to collect
essential data. Relevant data, adequate in quantity and quality should be
collected. They should be sufficient, reliable and valid.
For collecting new, unknown data required for the study of any problem
you may use various devices, instruments, apparatus and appliances. For
each and every type of research we need certain instruments to gather new
facts or to explore new fields. The instruments thus employed as means
for collecting data are called tools.
The selection of suitable instruments or tools is of vital importance for
successful research. Different tools are suitable for collecting various
kinds of information for various purposes. The research worker may use
one or more of the tools in combination for his purpose. Research students
should therefore familiarize themselves with the varieties of tools, their
nature, merits and limitations. They should also know how to construct
and use them effectively. The systematic way and procedure by which a
complex or scientific task is accomplished is known as the technique.
Technique is the practical method, skill or art applied to a particular task.
So, as researchers we should be aware of both the tools and techniques of
The major tools of research in education can be classified broadly into the
following categories.
A. Inquiry forms: Questionnaire, Checklist, Score-card, Schedule, Rating
Scale, Opinionnaire, Attitude Scale
B. Observation
C. Interview
D. Sociometry
E. Psychological Tests: Achievement Test, Aptitude Test, Intelligence Test,
Interest Inventory, Personality Measures etc.
In this unit we will discuss some of the tools of each category.
5B.2 RATING SCALE:
Rating scale is one of the enquiry forms. Rating is a term applied to an
expression or judgment regarding some situation, object or character.
Opinions are usually expressed on a scale of values. Rating techniques are
devices by which such judgments may be quantified. The rating scale is a very
useful device in assessing quality, especially when quality is difficult to
measure objectively. For example, "How good was the performance?" is a
question which can hardly be answered objectively.
Rating scales record judgments or opinions and indicate the degree or
amount of a quality; the different degrees of quality arranged along a line
constitute the scale. For example: How good was the performance?
Excellent Very good Good Average Below average Poor Very poor
| | | | | | |
This is the most commonly used instrument for making appraisals. It has a
large variety of forms and uses. Typically, they direct attention to a
number of aspects or traits of the thing to be rated and provide a scale for
assigning values to each of the aspects selected. They try to measure the
nature or degree of certain aspects or characteristics of a person or
phenomenon through the use of a series of numbers, qualitative terms or
verbal descriptions.
Ratings can be obtained through one of three major approaches:
 Paired comparison
 Ranking and
 Rating scales
The first attempt at rating personality characteristics was the man-to-man
technique devised during World War I. This technique calls for a panel of
raters to rate every individual in comparison to a standard person. This is
known as the paired comparison approach.
In the ranking approach every single individual in a group is compared
with every other individual and the judgments are arranged in the form of a
scale.
In the rating scale approach, which is the more common and practical
method, rating is based on rating scales, a procedure which consists of
assigning to each trait being rated a scale value giving a valid estimate of
its status and then combining the separate ratings into an overall score.
Purpose of Rating Scale:
Rating scales have been successfully utilized for measuring the
following:
Teacher Performance/Effectiveness
Personality, anxiety, stress, emotional intelligence etc.
School appraisal including appraisal of courses, practices and
programmes.

Useful hints on Construction of Rating Scale:
A rating scale includes three factors like:
i) The subjects or the phenomena to be rated,
ii) The continuum along which they will be rated, and
iii) The judges who will do the rating.
All three factors should be carefully taken care of when you
construct the rating scale.
1) The subjects or phenomena to be rated are usually a limited number of
aspects of a thing or traits of a person. Only the most significant
aspects for the purpose of the study should be chosen. The usual way to
get judgement is on five to seven point scales, as we have already
discussed.
2) The rating scale is always composed of two parts:
i) An instruction which names the subject and defines the continuum and
ii) A scale which defines the points to be used in rating.
3) Anyone can serve as a rater where non-technical opinions, likes and
dislikes and matters of easy observation are to be rated. But only well
informed and experienced persons should be selected for rating where
technical competence is required. Therefore, you should select experts in
the field as raters, or persons who form a sample of the population in which
the scale will subsequently be applied. Pooled judgements increase the
reliability of any rating scale. So employ several judges, depending on the
rating situation, to obtain desirable reliability.
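As an illustration of pooled judgement, the following sketch in Python (standard library only; the traits, judges and numbers are invented for the example) averages the ratings given by several judges on each trait and reports the spread, which gives a rough sense of how far the judges agree.

from statistics import mean, stdev

# Hypothetical ratings of one teacher on a 5-point scale by four judges
ratings = {
    "clarity of explanation": [4, 5, 4, 4],
    "classroom management":   [3, 4, 3, 2],
    "use of teaching aids":   [5, 4, 4, 5],
}

for trait, scores in ratings.items():
    print(f"{trait:25s} pooled mean = {mean(scores):.2f}  spread (SD) = {stdev(scores):.2f}")

# An overall score can be formed by averaging the pooled trait means
overall = mean(mean(s) for s in ratings.values())
print(f"overall rating = {overall:.2f}")

A small spread on a trait suggests the judges agree; a large spread signals that the trait or its instructions need clearer definition.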
Use of Rating Scale :
Rating scales are used for testing the validity of many objective
instruments like paper-pencil inventories of personality. They are also
advantageous in the following fields:
 Helpful in writing reports to parents
 Helpful in filling out admission blanks for colleges
 Helpful in finding out student needs
 Making recommendations to employers
 Supplementing other sources of understanding about the child
 Stimulating effect upon the individuals who are rated

Limitations of Rating Scale:
The rating scales suffer from many errors and limitations like the
following:
The Generosity Error:
As you know, raters would not like to run down their own people
by giving them low ratings, so in that case they give high ratings to almost
all cases. Sometimes the raters are also inclined to be unduly generous in
rating aspects which they had no opportunity to observe. If the raters rate
on the higher side due to these factors, it is called the generosity error
of rating.
The Errors of Central Tendency:
Some observers want to keep themselves in a safe position. Therefore, they rate
near the midpoint of the scale. They rate almost all as average.
Stringency Error:
Stringency error is just the opposite of the generosity error. These types of
raters are very strict, cautious and hesitant in rating on the average and higher
side. They have a tendency to rate all individuals low.
The Halo Error:
When a rater's rating of one aspect is influenced by another aspect, it is
called the halo effect. For example, if a person is rated on the higher side on
his achievement because of his punctuality or sincerity, irrespective of his
actual answers, it is called the halo effect. The bias of the rater carries over
from one quality to another.
The Logical Error :
It is difficult to convey to the rater just what quality one wishes him to
evaluate. An adjective or adverb may have no universal meaning. If the
terms are not properly understood by the rater and he still rates, it is called
the logical error. Therefore, brief behavioural statements having clear
objectives should be used.
Check Your Progress - I
Q.1 What is rating scale?
Q.2 What are the approaches of rating?
Q.3 Explain the various purposes of rating scale?
Q.4 What are the factors to be considered during construction of rating
scale?
Q.5 What is a rating scale? Explain its uses.
Q.6 Write short notes on:
i) The Generosity Error
ii) The Logical Error
iii) The Halo error
iv) Stringency error
v) The errors of central Tendency
5B.3 ATTITUDE SCALE :
Attitude scale is a form of appraisal procedure and it is also one of the
enquiry forms. Attitude scales have been designed to measure the attitude of a
subject or group of subjects towards issues, institutions and groups of
people.
The term attitude is defined in various ways: "the behaviour which we
define as attitudinal or attitude is a certain observable set of the organism or
relative tendency preparatory to and indicative of more complete
adjustment."
- L. L. Bernard
"An attitude may be defined as a learned emotional response set for or
against something."
-Barr David Johnson
An attitude is spoken of as a tendency of an individual to react in a certain
way towards a phenomenon. It is what a person feels or believes in. It is
the inner feeling of an individual. It may be positive, negative or neutral.
Opinion and attitude are sometimes used in a synonymous manner but
there is a difference between the two. You will be able to see this when we
discuss the opinionnaire. An opinion may not lead to any kind of
activity in a particular direction. But an attitude compels one to act either
favourably or unfavourably according to what one perceives to be correct.
We can evaluate attitude through a questionnaire, but it is ill adapted for
scaling accurately the intensity of an attitude. Therefore, the attitude scale is
essential, as it attempts to minimise the difficulty of the opinionnaire and
questionnaire by defining the attitude in terms of a single attitude object.
All items, therefore, may be constructed with graduations of favour or
disfavour.
Purpose of Attitude Scale :
In educational research, these scales are used especially for finding the
attitudes of persons on different issues like:
 Co-education
 Religious education
 Corporal punishment
 Democracy in schools
 Linguistic prejudices
 International co-operation etc.
Characteristics of Attitude Scale :
Attitude scales should have the following characteristics:
 It provides a quantitative measure on a uni-dimensional scale or
continuum.
 It uses statements from the extreme positive to the extreme negative
position.
 It generally uses a five point scale, as we have discussed in the rating
scale.
 It can be standardized and norms can be worked out.
 It disguises the attitude object rather than directly asking about the
attitude on the subject.
Examples of Some Attitude Scale :
Two popular and useful methods of measuring attitudes indirectly,
commonly used for research purposes are:
 Thurstone Techniques of scaled values
 Likert‘s method of summated ratings.
Thurstone Technique :
The Thurstone technique is used when attitude is accepted as a uni-
dimensional linear continuum. The procedure is simple. A large number
of statements of various shades of favourable and unfavourable opinion are
written on slips of paper, which a large number of judges, exercising complete
detachment, sort out into eleven piles ranging from the most hostile
statements to the most favourable ones. The opinions are carefully worded
so as to be clear and unequivocal. The judges are asked not to express their
opinion but to sort the statements at their face value. The items which bring
out a marked disagreement between the judges in assigning a position are
discarded. Tabulations are made which indicate the number of judges who
placed each item in each category. The next step consists of calculating
cumulative proportions for each item, and ogives are constructed. Scale
values of each item are read from the ogives, the value of each item being
that point along the baseline in terms of scale value units above and below

which 50% of the judges placed the item. This will be the median of the
frequency distribution, in which the scale values range from 0 to 11.
The respondent is to give his reaction to each statement by endorsing or
rejecting it. The median value of the statements that he checks
establishes his score, or quantifies his opinion. He is given a score as an
average of the sum of the scale values of the statements he endorses.
The Thurstone technique is also known as the technique of equal appearing
intervals.
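The arithmetic behind a Thurstone scale can be sketched as follows in Python (standard library only; the statements, pile numbers and endorsements are hypothetical). The scale value of each statement is taken here simply as the median of the pile numbers (0-11) assigned by the judges, and a respondent's score as the median of the scale values of the statements he endorses.

from statistics import median

# Pile numbers (0 = most unfavourable, 11 = most favourable) assigned by five judges
judge_placements = {
    "Statement A": [10, 11, 10, 9, 10],
    "Statement B": [5, 6, 5, 5, 4],
    "Statement C": [1, 0, 1, 2, 1],
}

# Scale value of each statement = median pile number given by the judges
scale_values = {s: median(p) for s, p in judge_placements.items()}
print("Scale values:", scale_values)

# A respondent endorses statements A and B only
endorsed = ["Statement A", "Statement B"]
score = median(scale_values[s] for s in endorsed)
print("Respondent's attitude score:", score)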
Sample Items from Thurstone Type Scales:
Statement                                                        Scale value
I think this company treats its employees better than
any other company does.                                             10.4
If I had to do it over again I'd still work for this company.        9.5
The workers put as much over on the company as the
company puts over on them.                                           5.1
You have got to have pull with certain people around
here to get ahead.                                                    2.1
An honest man fails in this company.                                  0.8

The Likert Scale :
The Likert scale uses items worded for or against the proposition, with a
five point rating response indicating the strength of the respondent's
approval or disapproval of the statement. This method removes the
necessity of submitting items to the judges for working out scaled values
for each item. It yields scores very similar to those obtained from the
Thurstone scale. It is an improvement over the Thurstone method.
The first step is the collection of a number of statements about the subject
in question. Statements may or may not be correct but they must be
representative of the opinions held by a substantial number of people. They
must express definite favourableness or unfavourableness to a particular
point of view. The number of favourable and unfavourable statements
should be approximately equal. A trial test may be administered to a
number of subjects. Only those items that correlate with the total test
should be retained.
The Likert scaling technique assigns a scale value to each of the five
responses. All favourable statements are scored from maximum to
minimum, i.e. from a score of 5 to a score of 1: 5 for strongly agree
and so on down to 1 for strongly disagree. A negative statement, or statement
opposing the proposition, would be scored in the opposite order, i.e. from a
score of 1 to a score of 5: 1 for strongly agree and so on up to 5 for strongly
disagree.
The total of these scores on all the items measures a respondent's
favourableness towards the subject in question. If a scale consists of 30
items, say, the following score values will be of interest:
30 × 5 = 150  Most favourable response possible
30 × 3 = 90   A neutral attitude
30 × 1 = 30   Most unfavourable attitude
It is thus known as the method of summated ratings. The summed up score
of any individual would fall between 30 and 150; scores above 90 indicate
a favourable attitude and scores below 90 an unfavourable attitude.
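The summated-rating arithmetic described above can be sketched as follows in Python (standard library only; the items and responses are hypothetical). Favourable items are scored 5 for "strongly agree" down to 1 for "strongly disagree"; unfavourable items are reverse-scored; and the item scores are summed.

SCORE = {"SA": 5, "A": 4, "U": 3, "D": 2, "SD": 1}

def likert_score(responses, negative_items):
    """responses: list of answers ('SA','A','U','D','SD'), one per item.
    negative_items: set of item positions (0-based) worded against the proposition."""
    total = 0
    for i, answer in enumerate(responses):
        value = SCORE[answer]
        if i in negative_items:          # reverse-score unfavourable statements
            value = 6 - value
        total += value
    return total

# Hypothetical 5-item scale; items 2 and 4 are negatively worded
responses = ["SA", "A", "D", "U", "SD"]
print("Summated score:", likert_score(responses, negative_items={2, 4}))
# For a 30-item scale the score would range from 30 to 150, with 90 as the neutral point.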
Sample Items from Likert Type Minnesota Scale on Morale
Responses          Items
SA A U D SD        Times are getting better.
SA A U D SD        Any man with ability and willingness to work
                   hard has a good chance of being successful.
SA A U D SD        Life is just a series of disappointments.
SA A U D SD        It is great to be living in these exciting times.
SA A U D SD        Success is more dependent on luck than on real
                   ability.
Limitations Of Attitude Scale :
In the attitude scale the following limitations may occur:
An individual may express a socially acceptable opinion and conceal his real
attitude.
An individual may not be a good judge of himself and may not be
clearly aware of his real attitude.
He may not have been confronted with a real situation to discover what
his real attitude towards a specific phenomenon was.
There is no basis for believing that the five positions indicated in the
Likert scale are equally spaced.
It is unlikely that the statements are of equal value in 'for-ness' or
'against-ness'.

It is doubtful whether equal scores obtained by several individuals
would indicate equal favourableness towards a given position.
It is unlikely that a respondent can validly react to a short statement on
a printed form in the absence of a real-life qualifying situation.
In spite of anonymity of response, individuals tend to respond
according to what they should feel rather than what they really feel.
However, until more precise measures are developed, attitude scale
remains the best device for the purpose of measuring attitudes and beliefs
in social research.
Check Your Progress – II
Q.1 What is an attitude scale? Explain its purpose and characteristics.
Q.2 What is an attitude scale? Explain the methods of measuring attitudes
in research.
Q.3 Define attitude. Explain Likert's scale to measure attitude.
5B.4 OPINIONNAIRE:
Opinion polling or opinion gauging represents a single question approach.
The answers are usually in the form of 'yes' or 'no'. An undecided
category is often included. Sometimes a large number of response
alternatives is provided.
- Anna Anastasi
The terms opinion and attitude are not synonymous, though sometimes
we use them synonymously. We have till now discussed the attitude scale.
We have also discussed that attitudes can be inferred from expressed opinions.
You can now understand the difference between the opinionnaire and the
attitude scale when we discuss the opinionnaire, its characteristics and purposes.
Opinion is what a person says on certain aspects of the issue under
considerations. It is an outward expression of an attitude held by an
individual. Attitudes of an individual can be inferred or estimated from his
statements of opinions.
An opinionnaire is defined as a special form of inquiry. It is used by the
researcher to collect the opinions of a sample of the population on certain
facts or factors of the problem under investigation. These opinions on
different facets of the problem under study are further quantified, analysed
and interpreted.
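A minimal sketch of how such opinions might be quantified is given below in Python (standard library only; the response codes and data are invented for the example): the percentage of respondents choosing Agree, Undecided and Disagree is tallied for each statement.

from collections import Counter

# Hypothetical responses (A = agree, U = undecided, D = disagree) to one statement
responses = ["A", "A", "D", "U", "A", "D", "A", "U", "A", "D"]

counts = Counter(responses)
n = len(responses)
for option in ("A", "U", "D"):
    pct = 100 * counts.get(option, 0) / n
    print(f"{option}: {counts.get(option, 0)} responses ({pct:.0f}%)")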
Purpose :
Opinionnaires are usually used in research of the descriptive type which
demands a survey of the opinions of the concerned individuals. Public opinion
research is an example of an opinion survey. Opinion polling enables the
researcher to forecast coming happenings in a successful manner.

Characteristics:
 The opinionnaire makes use of statements or questions on different
aspects of the problem under investigation.
 Responses are expected either on three point or five point scales.
 It uses favourable or unfavourable statements.
 It may be sub -divided into sections.
 The poll ballots generally make use of questions instead of
statements.
 The public opinion polls generally rely on personal contacts rather
than mail ballots.
Sample Items of Opinionnaire :
The following statements are from an opinionnaire on the reforms in
educational administration introduced in A.P. during 1956-66. Each statement
is answered A (Agree), U (Undecided) or D (Disagree).
1) Democratic decentralization has helped to develop
democratic values and practices in rural people.              A  U  D
2) There has been consequent improvement
of educational standards.                                      A  U  D
3) A specified-subject inspectorate is better than a panel type
of inspectorate.                                               A  U  D
4) Inspection stripped of administrative powers does not
help much.                                                     A  U  D
5) Primary education should be brought under a separate
directorate as was done in some states.                        A  U  D

5B.5 QUESTIONNAIRE:
A questionnaire is a form prepared and distributed to secure
responses to certain questions. It is a device for securing answers to
questions by using a form which the respondent fills by himself. It is
a systematic compilation of questions that are submitted to a sample of the
population from which information is desired.
Questionnaires rely on written information supplied directly by people in
response to questions. The information from questionnaires tends to fall
into two broad categories – 'facts' and 'opinions'. It is worth stressing that, in
practice, questionnaires are very likely to include questions about both
facts and opinions.
Purpose:
The purpose of the questionnaire is to gather information from widely
scattered sources. It is mostly used in cases where one cannot readily see
personally all of the people from whom one desires responses. It is also
used where there is no particular reason to see them personally.
Types :
Questionnaires can be of various types on the basis of their preparation. They
are:
 Structured v/s Non Structured
 Closed v/s Open
 Fact v/s Opinion
Structured v/s Non -Structured Questionnaire :
The structured questionnaire contains definite, concrete and directed
questions, whereas the non-structured questionnaire is often used as an
interview guide. It may consist of partially completed questions.
Closed v/s Open Questionnaire :
The questions that call for short check responses are known as the restricted or
closed form type. For example, they provide for marking a yes or no, a
short response or checking an item from a list of responses. Here the
respondent is not free to write responses of his own; he has to select from
the supplied responses. On the other hand, in case of an open-ended
questionnaire, the respondent is free to respond in his own words.
Many questionnaires include both closed and open type questions.
The researcher selects the type of questionnaire according to the needs of
the study.
Fact and Opinion:
In case of a fact questionnaire, the respondent is expected to give
information about facts without any reference to his opinion or attitude about
them.
But in case of an opinion questionnaire the respondent gives the information
about the facts along with his own opinion and attitude.
Planning the Use of Questionnaire :
The successful use of questionnaire depends on devoting the right balance
of effort to the planning stage, rather than rushing too early into
administering the questionnaire. Therefore, the researcher should have a
clear plan of action in mind and costs, prod uction, organization, time
schedule and permission should be taken care in the beginning. When
designing a questionnaire, the characteristics of a good questionnaire
should be kept in mind.
Characteristics of a Good Questionnaire:
Questionnaires should deal with important or significant topics to create
interest among respondents.
It should seek only that data which cannot be obtained from other sources.
 It should be as short as possible but should be comprehensive.
 It should be attractive.
 Directions should be clear and complete.
 It should be presented in good psychological order, proceeding from
general to more specific responses.
 Double negatives in questions should be avoided.
 Putting two questions in one question also should be avoided.
 It should avoid annoying or embarrassing questions.
 It should be designed to collect information which can be used
subsequently as data for analysis.
 It should consist of a written list of questions.
 The questionnaire should also be used appropriately.
When is it appropriate to use a questionnaire for research?
Different methods are better suited to different circumstances, and
questionnaires are no exception to this. Questionnaires are used at their most
productive:
When used with large numbers of respondents.
When what is required tends to be fairly straightforward information.
When there is a need for standardized data from identical questions.
When the time scale allows for delays.
When resources allow for the cost of printing and postage.
When respondents can be expected to be able to read and understand the
questions.
Designs of Questionnaire :
After construction of the questions on the basis of its characteristics, the
questionnaire should be designed with some essential routines like:
 Background information about the questionnaire
 Instructions to the respondent
 The allocation of serial numbers and
 Coding boxes
Background Information about The Questionnaire
Both from ethical and practical point of view, the researcher needs to
provide sufficient background information about the research and the
questionnaire. Each questionnaire should have a cover page, on which
some information appears about:
 The sponsor
 The purpose
 Return address and date
 Confidentiality
 Voluntary responses and
 Thanks
Instructions to the Respondent :
It is very important that instructions to respondents are presented at the
start of the questionnaire, indicating what is expected from the
respondents. Specific instructions should be given for each question
where the style of questions varies throughout the questionnaire. For
example – put a tick mark in the appropriate box, circle the relevant
number, etc.
The Allocation of Serial Numbers :
Whether dealing with small or large numbers, a good researcher needs to
keep good records. Each questionnaire therefore should be numbered.
Advantages of Questionnaire :
Questionnaires are economical. In terms of materials, money and time they
can supply a considerable amount of research data.
 It is easier to arrange.
 It supplies standardized answers.
 It encourages pre-coded answers.
 It permits wide coverage.
 It helps in conducting an in-depth study.
Disadvantages:
It is reliable and valid, but slow.
Pre-coded questions can deter respondents from answering.
Pre-coded questions can bias the findings towards the researcher.
Postal questionnaires offer little opportunity to check the truthfulness of
the answers.
It cannot be used with illiterates and small children.
Irrespective of the limitations, general consensus goes in favour of the use
of the questionnaire. Its quality should be improved and it should be
restricted to the situations for which it is suited.
Check Your Progress – III
Q.1 Distinguish between opinionnaire and questionnaire.
Q.2 Write short notes on:
(a) Closed and open questionnaire.
(b) Structured and non-structured questionnaire.
(c) Fact and Opinion.
The serial number helps to distinguish and locate a questionnaire if necessary.
It can also help to identify the date of distribution, the place and possibly the
person.
Coding Boxes :
When designing the questionnaire, it is necessary to prevent later
complications which might arise at the coding stage. Therefore, you
should note the following points:
 Locate coding boxes neatly on the right hand side of the page.
 Allow one coding box for each answer.
 Identify each column in the complete data file underneath the
appropriate coding box in the questionnaire.
Besides these, the researcher should also be very careful about the length
and appearance of the questionnaire, wording of the questions, order and
types of questions while constructing a questionnaire.
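By way of illustration, the sketch below in Python (standard library only; the question labels, codes and answers are assumptions made for the example) shows how the answers from one returned questionnaire could be converted into the numeric codes that would go into the coding boxes and then into a data file.

import csv, io

# Hypothetical coding scheme: each question maps answers to numeric codes
CODES = {
    "Q1_sex":      {"Male": 1, "Female": 2},
    "Q2_location": {"Urban": 1, "Rural": 2},
    "Q3_opinion":  {"Yes": 1, "No": 2, "Undecided": 3},
}

def code_response(serial_no, answers):
    """Return one coded row: serial number followed by the code for each question."""
    return [serial_no] + [CODES[q][answers[q]] for q in CODES]

row = code_response(17, {"Q1_sex": "Female", "Q2_location": "Rural", "Q3_opinion": "Yes"})

buffer = io.StringIO()
csv.writer(buffer).writerow(row)      # would normally be written to the data file
print(buffer.getvalue().strip())      # -> 17,2,2,1

Each column of the resulting data file then corresponds to one coding box on the questionnaire, which simplifies later tabulation.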
Criteria of Evaluating a Questionnaire :
You can evaluate your questionnaire whether it is a standard questionnaire
or not on the basis of the following criteria:
 It should provide full information pertaining to the area of research.
 It should provide accurate information.
 It should have a decent response rate.
 It should adopt an ethical stance and
 It should be feasible.
Like all tools, it also has some advantages and disadvantages based on
its uses.
5B.6 CHECKLIST:
A checklist is a type of informational job aid used to reduce failure by
compensating for potential limits of human memory and attention. It helps
to ensure consistency and completeness in carrying out a task. A basic
example is a 'to do' list. A more advanced checklist lays out tasks to
be done according to the time of day or other factors.
The checklist consists of a list of items with a place to check, or to mark
yes or no.
Purpose :
The main purpose of a checklist is to call attention to various aspects of an
object or situation, to see that nothing of importance is overlooked. For
example, if you have to go on an outing for a week, you have to list what
things you have to take with you. Before leaving home, if you check
your baggage against the list, there will be less chance of forgetting to take
any important thing, like a toothbrush etc. It ensures the completeness of
details of the data. Responses to the checklist items are largely a matter of
fact, not of judgment. It is an important tool in gathering facts for
educational surveys.
Uses :
Checklists are used for various purposes. As we have discussed, we
can check our requirements for a journey, a birthday list, the proforma for a
passport, submitting an examination form or admission form etc. In every
case, if we check before doing the work, there is less chance of
overlooking any important thing. As it is useful in our daily life, it is
also useful in the educational field in the following ways:
 To collect facts for educational surveys.
 To record behaviour in observational studies.
 To use in educational appraisal studies – of school buildings,
property, plant, textbooks, instructional procedures and outcomes etc.
 To rate personality.
 To know the interests of the subjects also. Kuder's interest inventory
and Strong‘s Interest Blank are also checklists.
Hints on Constructing Checklist :
 Items in the checklist may be continuous or divided into groups of
related items.
 Items should be arranged in categories and the categories in a logical
or psychological order.
 Terms used in the items should be clearly defined.
 Checklist should be continuous and comprehensive in nature.
 A pilot study should be undertaken to make it standardized.
 Checklist can be constructed in four different ways by arranging items
differently.
(1) In one of the arrangements all items found in a situation are to be
checked. For example, a subject may be asked to put a check mark ( ) in the
blank beside each activity undertaken in a school.
(2) In the second form, the respondent is asked to check with a 'yes' or
'no' or asked to encircle or underline the response to the given item.
For example, (1) Does your school have a house system?
Yes/No
(3) In this form, all the items are positive statements with checks ( ) to be
marked in a column at the right. For example, (1) The school functions
as a community centre ( ).
(4) In the fourth form, the respondent selects from listed alternatives. For
example, (1) The periodical tests are held – fortnightly, monthly,
quarterly, regularly.
The investigator has to select any one of the formats appropriate to his
problem and queries, or a combination of many, as required.
Analysis and Interpretation of Checklist Data :
The tabulation and quantification of checklist data is done from the
responses. Frequencies are counted; percentages and averages are calculated;
central tendencies, measures of variability and coefficients of correlation are
computed as and when necessary. In long checklists, where related items
are grouped together category-wise, the checks are added up to give total
scores for each category. The category-wise total scores can be compared
among themselves or with similar scores secured through other studies.
The conclusions from checklist data should be arrived at carefully and
judiciously, keeping in view the limitations of the tools and respondents.
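As a rough sketch of this tabulation in Python (standard library only; the categories and responses are hypothetical), checked items can be counted category-wise and expressed as percentages for comparison.

# Hypothetical checklist data: for each category, 1 = item checked, 0 = not checked
checklist = {
    "physical facilities": [1, 1, 0, 1, 0],
    "library services":    [1, 0, 0, 0, 1],
    "co-curricular work":  [1, 1, 1, 1, 0],
}

for category, marks in checklist.items():
    total = sum(marks)
    pct = 100 * total / len(marks)
    print(f"{category:20s} {total}/{len(marks)} items checked ({pct:.0f}%)")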
Merits:
 Students can measure their own behaviour with the help of checklist.
 Easy and simple to use and frame the tools.
 Wanted and unwanted behaviours can be included.
 Personal-social development can be checked.
Limitations :
 Only the presence or absence of the ability can be tested.
 Yes or no type judgement can only be given.
 How much cannot be tested through a checklist.
For example, you want to test the story-telling skill of a student. You can
check only whether the student has developed the skill or not, but you
cannot study how much he has developed it.
When we want to check 'yes' or 'no' for any ability, a checklist is used.
Check Your Progress - IV
Q.1 Prepare a checklist for any skill.
5B.7 SEMANTIC DIFFERENTIAL SCALE:
The semantic differential is a type of rating scale designed to measure the
connotative meaning of objects, events and concepts. The connotations are
used to derive the attitude towards the given object, event or concept.
Fig. 1: Modern Japanese version of the semantic differential, showing bipolar
adjective pairs such as sweet-bitter, fair-unfair, warm-cold, beautiful-ugly,
meaningful-meaningless, brave-cowardly and bright-dark.
The Kanji characters in the background stand for "God" and "Wind"
respectively, with the compound reading Kamikaze. (Adapted from
Dimensions of Meaning, Visual Statistics Illustrated at
VisualStatistics.net.)
Osgood‘s semantic differential was designed to measure the connotative
meaning of concepts. The respondent is asked to choose where his or her
position lies, on a scale between two bipolar adjectives (for example:
Adequate-Inadequate, Good-Evil or Valuable-Worthless). Semantic
differentials can be used to describe not only persons, but also the
connotative meaning of abstract concepts - a capacity used extensively in
affect control theory.
Theoretical Background
Nominalists and Realists :
Theoretical underpinnings of Charles E. Osgood‘s semantic differential
have roots in the medieval controversy between the nominalists and
realists. Nominalists asserted that only real things are entities and that
abstractions from these entities, called universals, are mere words.
realists held that universals have an independent objective existence either
in a realm of their own or in the mind of God. Osgood‘s theoretical work
also bears affinity to linguistics and general semantics and relates to
Korzybski‘s structural differential.
Use of Adjectives :
The development of this instrument provides an interesting insight into the
border area between linguistics and psychology. People have been
describing each other since they developed the ability to speak. Most
adjectives can also be used as personality descriptors. The occurrence of
thousands of adjectives in English is an attestation of the subtleties in
descriptions of persons and their behaviour available to speakers of
English. Roget's Thesaurus is an early attempt to classify most adjectives
into categories and was used within this context to reduce the number of
adjectives to manageable subsets, suitable for factor analysis.
Evaluation, Potency and Activity :
Osgood and his colleagues performed a factor analysis of large collections
of semantic differential scales and found three recurring attitudes that
people use to evaluate words and phrases: evaluation, potency, and activity.
Evaluation loads highest on the adjective pair 'good-bad'; the pair 'strong-weak'
defines the potency factor; and the adjective pair 'active-passive' defines the
activity factor. These three dimensions of affective meaning were found
to be cross-cultural universals in a study of dozens of cultures.
This factorial structure makes intuitive sense. When our ancestors
encountered a person, the initial perception had to be whether that person
represents a danger. Is the person good or bad? Next, is the person strong
or weak? Our reactions to a person markedly differ if the person is perceived
as good and strong, good and weak, bad and weak, or bad and strong.

Subsequently, we might extend our initial classification to include cases of
persons who actively threaten us or represent only a potential danger, and
so on. The evaluation, potency and activity factors thus encompass a
detailed descriptive system of personality. Osgood's semantic differential
measures these three factors. It contains sets of adjective pairs such as
warm-cold, bright-dark, beautiful-ugly, sweet-bitter, fair-unfair, brave-
cowardly, meaningful-meaningless.
The studies of Osgood and his colleagues revealed that the evaluative factor
accounted for most of the variance in scalings, and related this to the idea
of attitudes.
Usage :
The semantic differential is today one of the most widely used scales
in the measurement of attitudes. One of the reasons is the versatility of the
items. The bipolar adjective pairs can be used for a wide variety of
subjects, and as such the scale is nicknamed "the ever-ready battery of the
attitude researcher".
A. Semantic Differential Scale :
This is a seven point scale and the end points of the scale are associated
with bipolar labels.
Unpleasant   1   2   3   4   5   6   7   Pleasant
Submissive   1   2   3   4   5   6   7   Dominant
Suppose we want to know the personality of a particular person. We have
bipolar options such as Unpleasant-Pleasant and Submissive-Dominant.
Bipolar means two opposite extremes. An individual can score between 1 and 7
(or -3 and +3). On the basis of these responses, profiles are made. We can
analyse two or three products and, by joining these profiles, we get a
profile analysis. It could take any shape depending on the number of
variables.
Profile Analysis:
Mean and median are used for comparison. This scale helps to determine
overall similarities and differences among objects.
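The profile comparison mentioned above might be computed as in the sketch below in Python (standard library only; the concepts, adjective pairs and ratings are invented). The mean rating of each bipolar pair is worked out for two concepts and the gap between the profiles is summarised.

from statistics import mean

# Ratings of two concepts on 7-point bipolar scales by a small group of respondents
ratings = {
    "School A": {"unpleasant-pleasant": [6, 5, 6], "weak-strong": [5, 4, 5], "passive-active": [6, 6, 5]},
    "School B": {"unpleasant-pleasant": [3, 4, 3], "weak-strong": [4, 3, 4], "passive-active": [2, 3, 3]},
}

# Profile = mean rating on each adjective pair
profiles = {c: {pair: mean(v) for pair, v in pairs.items()} for c, pairs in ratings.items()}
for concept, profile in profiles.items():
    print(concept, profile)

# Overall dissimilarity: mean absolute difference between the two profiles
pairs = profiles["School A"].keys()
gap = mean(abs(profiles["School A"][p] - profiles["School B"][p]) for p in pairs)
print(f"mean profile difference = {gap:.2f}")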

When the Semantic Differential Scale is used to develop an image profile, it
provides a good basis for comparing images of two or more items. The big
advantage of this scale is its simplicity, while producing results comparable
with those of the more complex scaling methods. The method is easy and
fast to administer, but it is also sensitive to small differences in attitude,
highly versatile, reliable and generally valid.
Statistical Properties :
Five items, or 5 bipolar pairs of adjectives, have been proven to yield
reliable findings, which highly correlate with alternative measures of the
same attitude.
The biggest problem with this scale is that the properties of the level of
measurement are unknown. The most statistically sound approach is to
treat it as an ordinal scale, but it can be argued that the neutral response
(i.e. the middle alternative on the scale) serves as an arbitrary zero point, and
that the intervals between the scale values can be treated as equal, making
it an interval scale.
A detailed presentation on the development of the semantic differential is
provided in the monumental book, Cross -Cultural Universals of Affective
Meaning. David R. Heise's Surveying Cultures provides a contemporary
update with special attention to measurement issues when using
computerized graphic rating scales.
5B.8 PSYCHOLOGICAL TESTS:
Among the most useful and most frequently employed tools of educational
research psychological tests occupy a very significant position.
Psychological tests are designed to describe and measure a sample of
certain aspects of human behaviour or inner qualities. They yield objective
descriptions of some psychological aspects of an individual's personality
and translate them into quantitative terms. As we have mentioned earlier,
there are various kinds of psychological tests. In this unit we will discuss
'Aptitude Tests' and 'Inventories'.
Aptitude Tests :
Aptitude tests attempt to predict the capacities or the degree of
achievement that may be expected from individuals in a particular activity.
Aptitude is a means by which one can find the relative knowledge of a
person in terms of his intelligence and also his knowledge in general.
Purpose :
The purpose of an aptitude test is to assess a candidate's profile. An aptitude
test helps to check one's abilities and filters out good candidates. Creativity
and intelligence are revealed by the aptitude test. It always checks the
intelligence and quickness of the person in performance.
Importance of Aptitude Test:
Research data show that individually administered aptitude tests have the
following qualities:
 They are excellent predictors of future scholastic achievement.
 They provide ways for comparison of a child's performance with others
in the same situation.
 They provide a profile of strengths and weaknesses. They assess
differences among individuals.
Uses of Aptitude Test:
Aptitude tests are valuable in making programme and curricular decisions.
In general they have three major uses:
Instructional: Teachers can use aptitude test results to adapt their curricula
to match the level of students or to design assignments for students who
differ widely.
Administrative: Results of aptitude tests help in determining programmes
for college on the basis of the aptitude level of high-school students. They
can also identify students to be accelerated or given extra attention, and
help in predicting job-training performance.
Guidance: Results of aptitude tests help counsellors to help parents and
students. Parents develop realistic expectations of their child's
performance and students understand their own strengths and weaknesses.
Intelligence tests are also a kind of aptitude test as they describe and
measure the general ability which enters into the performance of every
activity and thus predict the degree of achievement that may be expected
from individuals in various activities.
Aptitude tests, however, have proved of great value for research in
educational and vocational guidance, for research in the selection of
candidates for a particular course of study or professional training, and for
research of the complex causal-relationship type.
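To illustrate the predictive use mentioned above, the sketch below in Python (standard library only; the scores are fabricated for the example) computes a Pearson correlation between aptitude-test scores and later achievement scores, which is the usual way predictive value is expressed.

from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

aptitude    = [52, 60, 45, 70, 66, 58, 40, 75]   # hypothetical aptitude scores
achievement = [55, 64, 50, 72, 60, 61, 42, 78]   # later achievement scores
print(f"r = {pearson_r(aptitude, achievement):.2f}")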
5B.9 INVENTORY:
An inventory is a list, record or catalog containing a list of traits, preferences,
attitudes, interests or abilities used to evaluate personal characteristics or
skills.
The purpose of an inventory is to make a list about a specific trait, activity or
programme and to check to what extent that ability is present. There are
types of inventories like:
 Interest Inventory and Personality Inventory

Interest Inventory:
Persons differ in their interests, likes and dislikes. Interests are a significant
element in the personality pattern of individuals and play an important role
in their educational and professional careers. The tools used for describing
and measuring the interests of individuals are interest inventories or
interest blanks. They are self-report instruments in which the individuals
note their own likes and dislikes. They are of the nature of standardised
interviews in which the subject gives an introspective report of his feelings
about certain situations and phenomena, which is then interpreted in terms
of interests.
The use of interest inventories is most frequent in the areas of educational
and vocational guidance and case studies. Distinctive patterns of interest
that go with success have been discovered through research in a number of
educational and vocational fields. Mechanical, computational, scientific,
artistic, literary, musical, social service, clerical and many other areas of
interest have been analysed in terms of activities. In terms of specific
activities, a person's likes and dislikes are sorted into various interest areas
and percentile scores calculated for each area. The area where a person's
percentile scores are relatively higher is considered to be the area of his
greatest interest, the area in which he would be the happiest and the most
successful.
As a part of educational surveys of many kinds, children‘s interest in
reading, in games, in dramatics, in other extracurricular activities and in
curricular work etc. are studied.
One kind of instrument most commonly used in interest measurement is
known as Strong's Vocational Interest Inventory. It compares the subject's
pattern of interests with the interest patterns of successful individuals in a
number of vocational fields. This inventory consists of 400 different
items. The subject has to tick one of the alternatives, i.e. L (like),
I (indifferent) or D (dislike), provided against each item. When the
inventory is standardised, the scoring keys and percentile norms are
prepared on the basis of the responses of a fairly large number of
successful individuals in a particular vocation. A separate scoring key is
therefore prepared for each separate vocation or subject area. The
subject's responses are scored with the scoring key of a particular vocation
in order to know his interest or lack of interest in the vocation concerned.
Similarly his responses can be scored with scoring keys standardised for
other vocational areas. In this way you can determine one's areas of
vocational interest. Besides interest inventories, there are also personality
inventories to measure personality. You can prepare inventories of any
ability to measure it.
Check Your Progress –V
Q.1 What are psychological tests? Explain the use of aptitude test as a
tool of research.
Q.2 What are inventories? Explain their role in research with examples.

5B.10 OBSERVATION:
Observation offers the researcher a distinct way of collecting data. It does
not rely on what people say they do, or what they say they think. It is more
direct than that. Instead, it draws on the direct evidence of the eye to
witness events first hand. It is a more natural way of gathering data.
Whenever direct observation is possible it is the preferable method to use.
The observation method is a technique in which the behaviour of research
subjects is watched and recorded without any direct contact. It involves the
systematic recording of observable phenomena or behaviour in a natural
setting.
Purpose :
The purpose of observation techniques are:
 To collect data directly.
 To collect substantial amount of data in short time span.
 To get eye witness first hand data in real like situation.
 To collect data in a natural setting.
Characteristics :
It is necessary to make a distinction between observation as a scientific
tool and the casual observation of the man in the street. An observation
with the following characteristics will be a scientific observation:
 Observation is systematic.
 It is specific.
 It is objective.
 It is quantitative.
 The record of observation should be made immediately
 Expert observer should observe the situation.
 It‘s result can be checked and verified.
Types of Observation :
On the basis of purpose, observation may be of varied types like:
Structured and Unstructured
Participant and Non -participant
Structured and Unstructured Observation:
In the early stage of an investigation, it is necessary to allow
maximum flexibility in observation to obtain a true picture of the
phenomenon as a whole. In the early stage, if we attempt to restrict the
observation to certain areas, there will be the risk of overlooking
some of the more crucial aspects. As the investigation proceeds, the investigator
identifies the significant aspects and observes some restricted aspects of the
situation to derive more rigorous generalizations. So in the first stage of
observation, the observation is wide and unstructured, and as the
investigation proceeds observation becomes restricted and structured.
Participant and Non -Participant Observation:
In participant observation, the observer becomes more or less one of the
group under observation and shares the situation as a visiting stranger, an
attentive listener, an eager learner or as a complete participant observer,
registering, recording and interpreting the behaviour of the group.
In non-participant observation, the observer observes through one-way
screens and hidden microphones. The observer remains aloof from the group.
He keeps his observation as inconspicuous as possible. The purpose of
non-participant observation is to observe the behaviour in a natural setting.
The subject will not shift his behaviour, as he will not be conscious that
someone is observing his behaviour.
The advantages and disadvantages of participant and non-participant
observation depend largely on the situation. Participant observation is
helpful in studying groups such as criminals, by participating with them for
some time. It gives a better insight into their life; therefore it has a built-in
validity test. Its disadvantages are that it is time consuming and, as the
observer develops relationships with the members, there is a chance of losing
neutrality, objectivity and accuracy in rating things as they are.
Non-participant observation is used with groups like infants, children or
abnormal persons. It permits the use of recording instruments and the
gathering of large quantities of data.
Therefore, some researchers feel that it is best for the observer to remain
only a partial participant and to maintain his status of scientific observer
apart from the group.
Steps of Effective Observation:
As a research tool, effective observation needs effective:
 Planning
 Execution
 Recording and
 Interpretation
Planning:
While planning to employ observation as a research technique the
following factors should be taken into consideration.
 Sample to be observed should be adequate.
 Units of behaviour to be observed should be clearly defined.
 Methods of recording should be simplified.
 Detailed instructions should be given to observers if more than one
observer is employed, to maintain consistency.
 Too many variables should not be observed simultaneously.
 Excessively long periods of observation without rest periods should be
avoided.
 Observers should be fully trained and well equipped.
 Records of observation must be comprehensive.
Execution :
A good observation plan leads to success only when followed with
skill and expert execution. Expert execution needs:
 Proper arrangement of special conditions for the subject.
 Assuming the proper physical position for observing.
 Focusing attention on the specific activities or units of behaviour under
observation.
 Observing discreetly for the length and number of periods and intervals
decided upon.
 Handling well the recording instruments to be used.
 Utilising the training received in terms of expertness.
Recording:
The two common procedures for recording observations are:
 Simultaneous
 Soon after the observation
Which method should be used depends on the nature of the group and the
type of behaviour to be observed. Both methods have their merits and
limitations. The simultaneous form of recording may distract the subjects,
while in recording after the observation the observer may fail to record the
complete and exact information. Therefore, for a systematic collection of
data, the various devices of recording should be used, such as the checklist,
rating scale and score card.
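For instance, a simple frequency record of the kind such devices produce could be tallied as in the sketch below in Python (standard library only; the behaviour categories and one-minute observation intervals are hypothetical).

from collections import Counter

# Behaviour coded once per one-minute interval during a 10-minute observation
observed = ["on-task", "on-task", "talking", "on-task", "out-of-seat",
            "on-task", "talking", "on-task", "on-task", "talking"]

tally = Counter(observed)
for behaviour, freq in tally.items():
    print(f"{behaviour:12s} {freq} intervals ({100 * freq / len(observed):.0f}%)")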

Interpretation:
Interpretation can be done directly by the observer at the time of his
observation. Where several observers are involved, the problem of
uniformity arises. Therefore, in such instances, the observer merely
records his observations and leaves the matter of interpretation to an
expert who is more likely to provide a unified frame of reference. It must,
of course, be recognized that the interpreter's frame of reference is
fundamental to any interpretation, and it might be advisable to insist on
agreement between interpreters of different backgrounds.
Limitations of Observation :
The limitations of observation are:
 Establishing validity is difficult.
 Subjectivity is also there.
 It is a slow and laborious process.
 It is costly both in terms of time and money.
 The data may be unmanageable.
 There is a possibility of bias.
These limitations can be minimized by systematic observation, as it
provides a framework for observation which all observers will use. It has
the following advantages.
Advantages of Observation :
 Data collected directly
 Systematic and rigorous
 Substantial amount of data can be collected in a relatively short time
span.
 Provides pre -coded data and r eady for analysis.
 Inter observer reliability is high.
However, observation is a scientific technique to the extent that it serves a
formulated research purpose, planned systematically rather than occurring
haphazardly, systematically recorded and related to more general
propositions and subjected to checks and controls with respect to validity,
reliability and precision.
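Inter-observer reliability is often reported as simple percentage agreement; the sketch below in Python (standard library only; the two observers' records are invented) shows one way it might be computed.

observer_1 = ["on-task", "talking", "on-task", "on-task", "out-of-seat", "on-task"]
observer_2 = ["on-task", "talking", "on-task", "talking", "out-of-seat", "on-task"]

# Count the intervals on which both observers recorded the same behaviour
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent_agreement = 100 * agreements / len(observer_1)
print(f"Inter-observer agreement = {percent_agreement:.0f}%")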
Check Your Progress – VI
Q.1 Explain the steps of observational techniques with their merits and
limitations.
Q.2 Write short notes on:
(a) Participant and non-participant observation.
(b) Structured and unstructured observation.
5B.11 INTERVIEW :
Interviews are an attractive proposition for the project researcher.
Interviews are something more than conversation. They involve a set of
assumptions and understandings about the situation which are not
normally associated with a casual conversation. Interviews are also referred
to as an oral questionnaire by some people, but the interview is indeed much
more than that. The questionnaire involves indirect data collection, whereas
interview data is collected directly from others in face-to-face contact. As you
know, people are more hesitant to write something than to talk. With a friendly
relationship and rapport, the interviewer can obtain certain types of
confidential information which respondents might be reluctant to put in writing.
Therefore the research interview should be systematically arranged; it does
not happen by chance. Interviews are not done by secretly recording
discussions as research data. The consent of the subject is taken for the
purpose of the interview. The words of the interviewee can be treated as 'on
the record' and 'for the record'. They should not be used for purposes other
than the research purpose. The discussion therefore is not arbitrary or at
the whim of one of the parties. The agenda for the discussion is set by the
researcher. It is dedicated to investigating a given topic.
Importance of Interview :
Whether it is large scale or small scale research, the nature of the
data collection depends on the amount of resources available. The interview is
particularly appropriate when the researcher wishes to collect data based
on:
o Emotions, experiences and feelings.
o Sensitive issues.
o Privileged information.
It is appropriate when dealing with young children, illiterates, and persons
with language difficulty or limited intelligence.
It supplies the detail and depth needed to ensure that the questionnaire
asks valid questions, while preparing a questionnaire.
It is a follow-up to a questionnaire and complements the questionnaire.
It can be combined with other tools in order to corroborate facts using a
different approach.
It is one of the normative survey methods, but it is also applied in
historical, experimental and case studies.
Types of Interview:
Interviews vary in purpose, nature and scope. They may be conducted for
guidance or research purposes. They may be confined to one individual or
extended to several people. The following discussions describe several
types of interview.
Structured Interview :
A structured interview involves tight control over the format of questions
and answers. It is like a questionnaire which is administered face to face
with a respondent. The researcher has a predetermined list of questions.
Each respondent is faced with identical questions. The choice of
alternative answers is restricted to a predetermined list. This type of
interview is rigidly standardized and formal.
Structured interviews are often associated with social surveys where
researchers are trying to collect large volumes of data from a wide range
of respondents.
Semi -Structured Interview :
In a semi-structured interview, the interviewer also has a clear list of issues
to be addressed and questions to be answered. There is some flexibility in
the order of the topics. In this type, the interviewee is given a chance to
develop his ideas and speak more widely on the issues raised by the
researcher. The answers are open-ended and more emphasis is on the
interviewee elaborating points of interest.
Unstructured Interview :
In the case of an unstructured interview, emphasis is placed on the
interviewee's thoughts. The researcher introduces a theme or topic and
then lets the interviewee develop his or her ideas and pursue his or her
train of thought. Allowing interviewees to speak their minds is a better
way of discovering things about complex issues. It gives opportunity for
in-depth investigations.
Single Interview :
This is a common form of semi-structured or unstructured interview. It
involves a meeting between one researcher and one informant. It is easy to
arrange this type of interview. It helps the researcher to locate specific
ideas with specific people. It is also easy for the interviewer to control the
situation.
Group Interview :
In the case of a group interview, more than one informant is involved. The
number involved is normally about four to six people. Here you may think
that it is difficult to get people together to discuss matters on one occasion,
and wonder how many voices can contribute to the discussion during any
one interview. But the crucial thing to bear in mind here is that a group
interview is not an opportunity for the researcher to put questions to a
sequence of individuals, taking turns around a table. 'Group' is crucial
here, because it tells us that those present in the interview will interact
with one another and that the discussion will operate at the level of the
group. They can present a wide range of information and varied
viewpoints.
According to Lewis –
Group interviews have several advantages over individual interviews. In
particular, they help to reveal consensus views, may generate richer
responses by allowing participants to challenge one another's views, may
be used to verify research ideas or data gained through other methods, and
may enhance the reliability of responses.
The disadvantage of this type of interview is that the views of 'quieter'
people do not come out. Certain members may dominate the talk. The
greatest disadvantage is that whatever opinions are expressed are taken as
acceptable to the group, even when individual members hold contrary
opinions. Private opinion is not given importance.
Focus Group Interview :
This is an extremely popular form of interview technique. It consists of a
small group of people, usually between six and nine in number. This is
useful for non-sensitive and non-controversial topics. The session usually
revolves around a prompt, a trigger, or some stimulus introduced by the
interviewer in order to 'focus' the discussion. The respondents are
permitted to express themselves completely, but the interviewer directs
the line of thought. In this case, importance is given to collective views
rather than the aggregate view. It concentrates on a particular event or
experience rather than on a general line of enquiry.
Requirements of a Good Interview :
As a tool of research, a good interview requires:
 Proper preparation,
 Skillful execution, and
 Adequate recording and interpretation.
Preparation for Interview :
The following factors need to be determined in advance of the actual
interview:
 Purpose and information nee ded should be clear.
 Which type of interview best suited for the purpose should be decided.
 A clear outline and framework should be systematically prepared.
 Planning should be done for recording responses.
Execution of the Interview :
 Rapport should be established.
 Desired information should be collected with a stimulating and
encouraging discussion.
 Recording devices should be used without distracting the interviewee.
Recording and Interpreting Responses :
 It is best to record responses through a tape recorder.
 If the responses are to be noted down, they should be noted either
simultaneously or immediately after the interview.
 Instead of recording responses, the researcher sometimes notes the
evaluation directly by interpreting the responses.
Advantages of Interview :
The interview technique has the following advantages:
Depth Information :
Interviews are particularly good at producing data which deal with topics
in depth and in detail. Subjects can be probed, issues pursued, and lines of
investigation followed over a relatively lengthy period.
Insights :
The researcher is likely to gain valuable insights based on the depth of the
information gathered and the wisdom of 'key informants'.
Equipment:
Interviews require only simple equipment and build on conversation skills
which researchers already have.
Information Priorities :
Interviews are a good method for producing data based on informants'
priorities, opinions and ideas. Informants have the opportunity to expand
their ideas, explain their views and identify what they regard as the crucial
factors.
Flexibility :
 Interviews are a flexible method of data collection.
 During the interview, adjustments to the line of enquiry can be made.
Validity :
Direct contact at the point of the interview means that data can be checked
for accuracy and relevance as they are collected.
High response rate :
Interviews are generally pre -arranged and scheduled for a convenient time
and location. This ensures a relatively high response rate.
Therapeutic:
Interviews can be a rewarding experience for the informant. Compared
with questionnaires, observation and experiments, there is a more personal
element to the method, and people tend to enjoy the rather rare chance to
talk about their ideas at length to a person whose purpose is to listen and
note the ideas without being critical.
Disadvantages of Interviews :
Notwithstanding the above advantages, the interview has the following disadvantages.
Time Consuming :
Analysis of data can be difficult and time consuming. Data preparation
and analysis is 'end-loaded' compared with, for instance, questionnaires,
which are pre-coded and where data are ready for analysis once they have
been collected. The transcribing and coding of interview data is a major
task for the researcher which occurs after the data have been collected.
Difficulty in data a nalysis :
This method produces non-standard responses. Semi-structured and
unstructured interviews produce data that are not pre-coded and have a
relatively open format.
Less Reliability :
Consistency and objectivity are hard to achieve. The data collected are, to
an extent, unique owing to the specific content and the specific individuals
involved. This has an adverse effect on reliability.
Interviewer Effect :
The identity of the researcher may affect the statements of the
interviewee. Interviewees may say what they do or what they prefer to do;
the two may not tally.
Inhibitions :
The tape recorder or video recorder may inhibit the informant. The
interview is an artificial situation where people are speaking for the record
and on the record, and this can be daunting for certain kinds of people.
Invasion of Privacy :
Interviewing can be an invasion of Privacy and may be upsetting for the
informant.
Resources :
The cost of the interviewer's time, of travel and of transcription can be
relatively high if the informants are geographically widespread.
On the basis of its merits and limitations, the interview technique is used
in many ways for research and non-research purposes. This technique was
used in the Commonwealth Teacher Training Study to identify the traits
most essential for success in teaching. Apart from being an independent
data collection tool, it may play an important role in the preparation of
questionnaires and checklists which are to be put to extensive use.
Check Your Progress - VII
Q.1 Explain different types of interview for the purpose of research.
Q.2 Write short notes on:
(a) Importance of Interview.
(b) Requisites of a good interview.
5B.12 LET US SUM UP :
You would recall that we have touched upon the following learning items
in this unit.
For the purpose of collecting new, relevant data for a research study, the
investigator needs to select proper instruments, termed tools and
techniques.
The major tools of research can be classified into the broad categories of
inquiry forms, observation, interview, social measures and psychological
tests.
The inquiry forms discussed in this unit are the rating scale, attitude scale,
opinionnaire, questionnaire, checklist and semantic differential scale.
Observation and interview are explained as techniques of data collection.
Among psychological tests, aptitude tests and inventories are discussed.
A rating scale is a technique designed or constructed to assess the
personality of an individual. It is very popular in applied psychology
testing, vocational guidance and counseling, as well as in basic research.
Rating scales measure the degree or amount of the indicated judgments.
Attitude scale is the device by which the feelings or beliefs of persons are
described and measured indirectly through securing their responses to a set
of favourable statements. Thurstone and Likert scale are commonly
adopted for attitude scaling.
Opinionnaire is a special form of inquiry. It is used by the researcher to
collect the opinions of a sample of population. It is usually used in
descriptive type research.
A questionnaire is a tool which is used frequently. The purpose is to
gather information from widely scattered sources. Data are collected in
written form through this tool.
Checklist is a selected list of words, Phrases, Sentences and paragraphs
following which an observer records a check mark to denote a presence or
absence of whatever is being observed. It calls for simple yes/no
judgments. The main purpose is to call attention to various aspects of an
object or situation, to see that nothing of importance is overlooked.
Semantic Differential Scale is a seven point scale and the end points of the
scale are associated with bipolar labels. This scale helps to determine
overall similarities and differences among objects.
Aptitude tests are psychological tests which attempt to predict the
capacities or the degree of achievement expected from individuals in a
particular activity. The purpose is to test a candidate's profile.
An inventory is a list or record containing traits, preferences, attitudes,
interests or abilities, used to evaluate personal characteristics or skills.
Strong's Vocational Interest Inventory is an example of an interest inventory.
The observation method is a technique in which the behaviour of research
subjects is watched and recorded without any direct contact. It deals with
the overt behaviour of persons in controlled or uncontrolled situations.
An interview is an oral form of the questionnaire in which the subject
supplies the needed information in a face-to-face situation. It is especially
appropriate for dealing with young children, illiterate persons, the dull and
the abnormal.
Unit End Exercises:
1. State the characteristics of a questionnaire.
2. What are the disadvantages of an Interview?
3. Prepare items using a rating scale, interview and questionnaire for a
research proposal.
Reference Books :
Siddhu, Kulbir Singh (1992). Methodology of Research in Education.
Sterling Publishers, New Delhi.
Sukhia, S. P. and Mehrotra, P. V. (1983). Elements of Educational
Research. Allied Publishers Private Limited, New Delhi.
Denscombe, Martyn (1999). The Good Research Guide. Viva Books
Private Limited, New Delhi.
6A
DATA ANALYSIS AND REPORT
WRITING - I
(QUANTITATIVE DATA ANALYSIS)
Unit Structure :
6A.0 Objectives
6A.1 Introduction
6A.2 Types of Measurement Scale
6A.3 Quantitative Data Analysis
6A.3.1 Parametric Techniques
6A.3.2 Non-Parametric Techniques
6A.3.3 Conditions to be satisfied for using parametric techniques
6A.3.4 Descriptive data analysis (Measures of central tendency,
variability, fiduciary limits and graphical presentation of data)
6A.3.5 Inferential data analysis
6A.3.6 Use of Excel in Data Analysis
6A.3.7 Concepts, use and interpretation of the following statistical
techniques: Correlation, t-test, z-test, ANOVA, Critical ratio
for comparison of percentages and chi-square (Equal
Probability and Normal Probability Hypothesis).
6A.4 Testing of Hypothesis
6A.5 Interpretation of Result
6A.0 OBJECTIVES :
After reading this unit, the student will be able to:
 Explain different types of measurement scales with appropriate
examples.
 State the conditions to be satisfied for using parametric techniques.
 List examples of parametric and non-parametric techniques.
 State the statistical measures of descriptive and inferential data analysis.
 Explain the concepts of different statistical techniques of data analysis
and their interpretation.
6A.1 INTRODUCTION:
Statistical data analysis depends on several factors such as the type of
measurement scale used, the sample size, sampling technique used and the
shape of the distribution of the data. These will be described in this unit.
6A.2 SCALES OF MEASUREMENT:
The level of measurement refers to the relationship among the values that
are assigned to the attributes for a variable. It is important to understand
the level of measurement as it helps you to decide how to interpret the data
from the variable concerned. Second, knowing the level of measurement
helps you to decide which statistical techniques of data analysis are
appropriate for the numerical values that were assigned to the variables.
Types of Scales of Measurement
There are typically four scales or levels of measurement that are defined :
a. Nominal
b. Ordinal
c. Interval
d. Ratio
a. Nominal Scale : It is the lowest level of measurement. A nominal
scale is simply a placing of data into categories, without any order or
structure. A simple example of a nominal scale is religion: the names of
religions are categories where no ordering is implied. Other examples are
gender, medium of instruction, school type and so on. In research
activities, a YES/NO scale is also an example of a nominal scale. In
nominal measurement the numerical values just "name" the attribute
uniquely. The statistical techniques of data analysis which can be used
with nominal scales are usually non-parametric.
b. Ordinal Scale : An ordinal scale is next up the list in terms of power of
measurement. In ordinal measurement, the attributes can be rank-ordered.
Here, distances between attributes do not have any meaning. For example,
on a survey you might code Educational Attainment as 0 = less than High
School; 1 = some High School; 2 = High School; 3 = Junior College;
4 = College degree; 5 = post-graduate degree. In this measure, higher
numbers mean more education. But is the distance from 0 to 1 the same as
from 3 to 4? Of course not. The simplest ordinal scale is a ranking. There
is no objective distance between any two points on such a subjective
scale. An ordinal scale only lets you interpret gross order and not the
relative positional distances. The statistical techniques of data analysis
which can be used with ordinal scales are usually non-parametric. These
would include Spearman's rank-order correlation and non-parametric
analysis of variance.
c. Interval Scale : In interval measurement, the distance between
attributes does have meaning. For example, when we measure temperature
(in Fahrenheit), the distance between 30 and 40 is same as distance
between 70 and 80. The interval between values is interpretable. Because
of this, it makes sense to compute an average of an interval variable,
whereas it does not make sense to do so for ordinal scales. The rating scale
is an interval scale, i.e. when you are asked to rate your job satisfaction on
a 5-point scale, from strongly dissatisfied to strongly satisfied, you are
using an interval scale. This means that we can interpret differences in the
distance along the scale. There is no absolute zero in the interval scale. For
example, if a student gets a score of zero on an achievement test, it does
not imply that his knowledge/ability in the subject concerned is zero: on
another, similar test in the same subject consisting of another set of
questions, the student could have got a higher score. Thus a score of zero
does not imply a complete lack of the trait being measured in the subject.
When variables are measured in the interval scale, parametric statistical
techniques of data analysis can be used. However, non -parametric
techniques can also be used with interval and ratio data.
d. Ratio Scale : Finally, in ratio scale, there is always an absolute zero that is
meaningful. This means that you can construct a meaningful fraction (or ratio)
with a ratio variable. Weight is a ratio variable. In educational research, most
"count" variables are ratio, for example, the number of students in a clas sroom.
This is because you can have zero students and because it is meaningful to say
that "...we had twice as many students in a classroom as compared to another
classroom."A ratio scale is the top level of measurement. When variables are
measured in the ratio scale, parametric statistical techniques of data
analysis can be used.
It is important to recognize that there is a hierarchy implied in the level of
measurement idea. In general, it is desirable to have a higher level of
measurement (e.g., interval or ratio) rather than a lower one (nominal or
ordinal).
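This hierarchy can be summarised programmatically. The following is a
minimal Python sketch (the variable names and example lists are purely
illustrative and not taken from any standard library) mapping each level
of measurement to example variables and the kind of statistics typically
applied at that level:

```python
# Hypothetical summary of the four levels of measurement and the
# statistics usually associated with each; examples are illustrative only.
measurement_levels = {
    "nominal":  {"examples": ["gender", "religion", "school type"],
                 "typical_statistics": ["frequencies", "mode", "chi-square"]},
    "ordinal":  {"examples": ["educational attainment rank", "class rank"],
                 "typical_statistics": ["median", "Spearman's rho", "Mann-Whitney U"]},
    "interval": {"examples": ["attitude scale score", "temperature (F)"],
                 "typical_statistics": ["mean", "SD", "Pearson's r", "t-test", "ANOVA"]},
    "ratio":    {"examples": ["number of students", "age in years"],
                 "typical_statistics": ["mean", "SD", "Pearson's r", "t-test", "ANOVA"]},
}

# Print the hierarchy from the lowest to the highest level of measurement.
for level, info in measurement_levels.items():
    print(f"{level:>8}: e.g. {info['examples'][0]} -> {', '.join(info['typical_statistics'])}")
```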
Check Your Progress - I
Q.1 State the different types of measurement scale.
Q.2 Give three examples where you can use the following scales of
measurement :
(a) Nominal Scale
(b) Ordinal Scale
(c) Interval Scale
(d) Ratio Scale

6A.3 QUANTITATIVE DATA ANALYSIS
6A.3.1 Parametric Techniques
Parametric statistics are based on assumptions about the distribution of
the population from which the sample is taken.
The information about the distribution of the population is known and it is
based on certain fixed parameters. It assumes normal distribution of the
attributes being studied. When the data deviate strongly from these
assumptions, using parametric procedures could lead to incorrect
conclusions.
The commonly used parametric techniques are the t-test, ANOVA and
Pearson's coefficient of correlation.
6A.3.2 Non -Parametric Techniques
Non-parametric statistics are based on no or few assumptions about the
population. This means that the data can be collected from a sample that
does not follow any specific distribution.
The information about the distribution of the population is unknown and
the parameters of the population distribution are not fixed. Hence it is
necessary to test the hypothesis for the population under consideration.
Interpretation of non-parametric procedures is more difficult as compared
to parametric procedures.
An example of a non-parametric technique is Spearman's rank correlation.
6A.3.3 Conditions to be satisfied for Using Parametric Techniques:
These are as follows :
1. The sample size is greater than 30.
2. Data are normally distributed.
3. Data are measured in interval or ratio scales.
4. Variances of different sub -groups are equal or nearly equal.
5. The sample is selected randomly.
6. Observations are independent.
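Conditions 2 and 4 above can be checked empirically before a parametric
test is chosen. The following is a minimal Python sketch, assuming scipy
is available and using two hypothetical score lists, of how the normality
and equality-of-variance conditions may be examined:

```python
# A sketch of pre-checks for parametric analysis; the data are hypothetical.
from scipy import stats

group_a = [52, 61, 58, 49, 67, 55, 60, 63, 57, 51, 64, 59]
group_b = [48, 55, 50, 62, 47, 53, 58, 44, 51, 56, 49, 54]

# Shapiro-Wilk test of normality for each group
# (p > 0.05 -> no evidence against normality).
w_a, p_a = stats.shapiro(group_a)
w_b, p_b = stats.shapiro(group_b)
print(f"Shapiro-Wilk: group A p = {p_a:.3f}, group B p = {p_b:.3f}")

# Levene's test of equality of variances
# (p > 0.05 -> variances may be treated as equal).
stat, p_var = stats.levene(group_a, group_b)
print(f"Levene: p = {p_var:.3f}")
```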
6A.3.4 Descriptive Data Analysis
Descripti ve statistics are used to present quantitative descriptions in a
manageable form. In a research study, we may have lots of measures. Or
we may measure a large number of people on one measure. Descriptive
statistics help us to simplify large amounts of data in a sensible way. Each
descriptive statistic reduces lots of data into a simpler summary. For
instance, consider a simple number used to summarize how well a batter is
performing in baseball, the batting average. This single number is simply
the number of hits divided by the number of times at bat (reported to three
significant digits). A batter who is hitting .333 is getting a hit one time in
every three at bats. One batting .250 is hitting one time in four. The single
number describes a large number of di screte events. For example, we may
describe the performance of students of a class in terms of their average
performance. Descriptive statistics provide a powerful summary that may
enable comparisons across people or other units.
Measures of Central Tende ncy : These include Mean, Median and Mode
which indicate the average value of the variable being studied (Mean), the
value above and below which lie 50% of the sample values (Median) and
the value which occurs the maximum number of times in the sample
(Mode). These help in determining the extent of normality of the
distribution of scores on the variable being studied.
Measures of Variability : These include the standard deviation, skewness
and kurtosis. The standard deviation indicates the average deviation of the
scores from the Mean. Skewness indicates whether the majority of the
scores lie to the left of the Mean with a tail to the right (positively
skewed), lie to the right of the Mean with a tail to the left (negatively
skewed), or are symmetrically (normally) distributed. Kurtosis indicates
whether the distribution of the scores is flat (platykurtic), peaked
(leptokurtic) or bell-shaped (mesokurtic).
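As an illustration, the following minimal Python sketch (the score list is
hypothetical) computes the measures of central tendency and variability
described above, using the standard statistics module and scipy:

```python
# Descriptive statistics for a hypothetical set of achievement scores.
import statistics
from scipy import stats

scores = [45, 52, 52, 58, 60, 61, 63, 65, 70, 74]

print("Mean    :", statistics.mean(scores))
print("Median  :", statistics.median(scores))
print("Mode    :", statistics.mode(scores))
print("SD      :", statistics.stdev(scores))   # sample standard deviation
print("Skewness:", stats.skew(scores))         # sign indicates direction of skew
print("Kurtosis:", stats.kurtosis(scores))     # 0 is approximately mesokurtic
```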
Fiduciary Limits : These indicate the interval (or the fiduciary limits)
within which the Mean of the population will lie at the 0.95 or 0.99 level
of confidence. The fiduciary limits or the confidence interval of the
population Mean is estimated based on the sample Mean. The sample
Mean is known as the 'statistic' and the population Mean is known as the
'parameter'. The estimation of the population Mean requires the use of the
Standard Error of the Mean. Similarly, the fiduciary limits or the
confidence interval of the population standard deviation can also be computed.
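A rough sketch of this computation, assuming a hypothetical sample
Mean, SD and N, is given below; 1.96 is the z value corresponding to the
0.95 level of confidence:

```python
# Fiduciary limits (confidence interval) of the population Mean,
# estimated from hypothetical sample values.
import math

sample_mean = 62.4
sample_sd = 8.5
n = 100

sem = sample_sd / math.sqrt(n)          # Standard Error of the Mean
lower = sample_mean - 1.96 * sem
upper = sample_mean + 1.96 * sem
print(f"0.95 fiduciary limits of the population Mean: {lower:.2f} to {upper:.2f}")
```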
Graphical Presentation of Data : This includes bar diagrams, pie charts
and line graph. The line graph is usually used to represent t he distribution
of the scores obtained on a variable with the objective of indicating the
shape of the distribution. Bar diagrams are used for making
comparisons of Mean scores on the variable being studied in various sub-
groups such as boys v/s girls, urban v/s rural, private - aided v/s private -
unaided v/s municipal schools, SSC v/s CBSE v/s ICSE v/s IGCSE
schools and so on. Pie charts are used to indicate proportion of different
sub-groups in the sample or the variance of a specific variable associated
with another variable.
Check Your Progress - II
1. Explain the meaning of parametric and non-parametric techniques
of data analysis.
2. State the conditions necessary for using parametric techniques of
data analysis.
3. Which are the measures of central tendency a nd variability? Why is it
necessary to compute these?
4. You want to (i) compare the Mean Academic Achievement of boys and
girls, (ii) show whether the Academic Achievement scores of students are
normally distributed or not and (iii) show the proportion of boys and girls
in the total sample. State the graphical techniques to be used in each of
these cases.
6A.3.5 Inferential Data Analysis
Descriptive data analysis only describes the data and the characteristics of
the sample in terms of statistics. Its findings cannot be generalised to the
larger population.
On the other hand, the findings of inferential analysis can be generalised
to the larger population.
6A.3.6 Use of Excel in Data Analysis : MS-Excel is an excellent tool
for analysing data using statistical techniques, including descriptive
statistics such as the Mean, Median, Mode, SD, Skewness and Kurtosis,
and inferential techniques including the t-test, ANOVA and correlation. It
also helps in presenting data graphically through bar diagrams, line graphs
and pie-charts.
6A. 3.7 Concepts, Use and Interpretation of Statistical
Techniques
A. Correlation : When the variables are in the interval or ratio scale,
correlation and regression coefficients are computed. The Pearson
product-moment correlation coefficient is a measure of linear association
between two variables in the interval or ratio scale. The measure, usually
symbolised by the letter r, varies from –1 to +1, with 0 indicating no linear
association. The word correlation is sometimes used in a non-specific way
as a synonym for association. Here, however, Pearson's product-moment
correlation coefficient is a measure of linear association computed for a
specific set of data on two variables. For example, if a researcher wants to
ascertain whether teachers' job satisfaction is related to their school
climate, Pearson's product-moment correlation coefficient could be
computed for teachers' scores on these two variables.
Interpretation of “r” : This takes into account four major aspects as
follows:
a. Level of significance (usually at 0.0 1 or 0.05 levels in educational
research).
b. Magnitude of r : In general, the following forms the basis of
interpreting the magnitude of the obtained r:
No. Value of “r” Magnitude
1 0.00-0.20 Negligible
2 0.21-0.40 Low
3 0.41-0.60 Moderate
4 0.61-0.80 Substantial
5 0.81-1.00 Very High
c. Direction of "r" : The obtained r could be positive, negative or zero. (i) A
positive r signifies that the relationship between two variables is direct, i.e.
if the value of one variable is high, the other is also expected to be high.
For example, if there is a substantial positive relationship between IQ and
Academic Achievement of students, it implies that the higher the IQ of
students, the higher is likely to be their Academic Achievement.
(ii) A negative r signifies that the relationship between two variables is
inverse, i.e. if the value of one variable is high, the other is expected to be
low. For example, if there is a substantial negative relationship between
Anxiety and Academic Achievement of students, it implies that the higher
the Anxiety of students, the lower is likely to be their Academic
Achievement.
(iii) The relationship between two variables could also be zero.
d. The Coefficient of Determination : It refers to the percentage of
variability in one variable that is associated with variability in the other
variable. It is computed using the formula 100r². The square of the
correlation coefficient, expressed as 100r², is another PRE (proportionate
reduction in error) measure of association.
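The following minimal Python sketch, using hypothetical paired scores
for the job satisfaction and school climate example, shows how r, its
significance level and the coefficient of determination may be obtained
with scipy:

```python
# Pearson's r and the coefficient of determination for hypothetical data.
from scipy import stats

job_satisfaction = [34, 40, 29, 45, 38, 42, 31, 37, 44, 39]
school_climate   = [50, 58, 41, 63, 55, 60, 44, 52, 61, 57]

r, p_value = stats.pearsonr(job_satisfaction, school_climate)
print(f"r = {r:.2f}, p = {p_value:.4f}")
print(f"Coefficient of determination = {100 * r**2:.1f}%")
```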
B. t-test : A t-test is used to compare the Mean scores obtained by two
groups on a single variable. It is also used when the F-ratio in ANOVA is
found to be significant and the researcher wants to compare the Mean
scores of different groups included in the ANOVA. It can be used, for
example, to compare the Mean Academic Achievement of two groups
such as (i) boys and girls or (ii) Experimental and Control groups. The
t-test was introduced in 1908 by William Sealy Gosset, a chemist working
in Ireland. His pen name was "Student".
The assumptions on which the t -test is used are as follows:
(a) Data are normally distributed. This can be ascertained by using
normality tests such as the Shapiro-Wilk and Kolmogorov-Smirnov tests.
(b) Equality of variances, which can be tested by using the F-test or the
more robust Levene's test, Bartlett's test or the Brown-Forsythe test.
(c) Samples may be independent or dependent, depending on the
hypothesis and the type of samples. For the inexperienced
researcher, the most difficult issue is often whether the samples are
independent or dependent. Independent samples are usually two
randomly selected groups unrelated to each other, such as boys and
girls or private-aided and private-unaided schools in causal-
comparative research. On the other hand, dependent samples are
either two groups matched (or a "paired" sample) on some variable
(for example, IQ or SES) or are the same people being tested twice
(called repeated measures as is done in an experimental design).
Dependent t-tests can also be used for matched-paired samples, where two
groups are matched on a particular variable. For example, if we examined
the IQ of twins, the two groups are matched on genetics. This would call
for a dependent t -test because it is a paired sample (one child/twin paired
with one another child/twin). However, when we compare 100 boys and
100 girls, with no relationship betwe en any particular boy and any
particular girl, we would use an independent samples test. Another
example of a matched sample would be to take two groups of students,
match each student in one group with a student in the other group based on
IQ and then com pare their performance on an achievement test.
Alternatively, we assign students with low scores and students with high
scores in two groups and assess their performance on an achievement test
independently. An example of a repeated measures t-test would be if one
group were pre-tested and post-tested, as is done in educational research
quite often especially in experiments. If a teacher wanted to examine the
effect of a new set of textbooks on student achievement, he could test the
class at the beginning of the treatment (pre -test) and at the end of the
treatment (post -test). A dependent t -test would be used, treating the pre -
test and post -test as matched variables (matched by student).
Types of t -test :
(a) Independent one-sample t-test : Suppose, on the basis of past
research, a researcher finds that the Mean Academic Achievement of
students in Mathematics is 50 with an SD of 12. If the researcher wants to
know whether this year's students' Academic Achievement in
Mathematics is typical, he takes μo = 50 as the population Mean and
S = 12 as the population SD. Suppose N = 36. In testing the null
hypothesis that the population Mean is equal to a specified value μo, one
uses the statistic t, the formula for which is as follows:
t = (M – μo) ÷ (S / √N)
where M is the sample Mean, S is the population standard deviation and
N is the sample size. The degrees of freedom used in this test are N − 1.
(b) Independent two-sample t-test : This is of the following three types:
(i) Equal sample sizes, equal variance : This test is used only when both
the samples are of the same size, i.e. N1 = N2 = N, and when it can be
assumed that the two distributions have the same variance. Its formula is
as follows:
Standard Error of the Difference between Means (SED) = √[(σ1² + σ2²) ÷ N]
t = (M1 – M2) ÷ SED
Where,
M1 = Mean of Group 1, M2 = Mean of Group 2, σ1 = SD of Group 1,
σ2 = SD of Group 2
(ii) Unequal sample sizes, equal variance : This test is used when the two
sample sizes are unequal but it can be assumed that the two distributions
have the same variance. The t statistic to test whether the Means are
different can be calculated as follows:
Standard Error of the Difference between Means (SED) = √(σ1²/N1 + σ2²/N2)
t = (M1 – M2) ÷ SED
Where,
M1 = Mean of Group 1, M2 = Mean of Group 2, σ1 = SD of Group 1,
σ2 = SD of Group 2
(iii) Unequal sample sizes, unequal variance : This test is used only when
the two sample sizes are unequal and the variances are assumed to be
different. The t statistic to test whether the Means are different can be
calculated as follows:
Standard Error of the Difference between Means (SED) = √(σ1²/N1 + σ2²/N2)
t = (M1 – M2) ÷ SED
However, the tabulated t (tt) against which the obtained t (to) is compared
in order to test its significance is calculated differently, using the
following formula:
Suppose for Sample 1, for df = N1 – 1, the tabulated t (tt) at the 0.05 level
of significance = x, and for Sample 2, for df = N2 – 1, the tabulated t (tt)
at the 0.05 level of significance = y. Let
SE1 = Standard Error of M1 and SE2 = Standard Error of M2. Then
Corrected tabulated t (tt) at the 0.05 level of significance
= [(SE1² × x) + (SE2² × y)] ÷ (SE1² + SE2²)
Where,
M1 = Mean of Group 1, M2 = Mean of Group 2, σ1 = SD of Group 1,
σ2 = SD of Group 2
(c) Dependent t-test for paired samples : This test is used when the
samples are dependent; that is, when there is only one sample that has
been tested twice (repeated measures) or when there are two samples that
have been matched or "paired", as is usually done in experimental
research. Its formula is as follows:
Standard Error of the Mean of Group 1 (SEM1) = σ1 ÷ √N1
Standard Error of the Mean of Group 2 (SEM2) = σ2 ÷ √N2
Standard Error of the Difference between Means (SED)
= √(SEM1² + SEM2² – 2r·SEM1·SEM2)
t = (M1 – M2) ÷ SED
Where,
M1 = Mean of Pre-test Scores, M2 = Mean of Post-test Scores,
σ1 = SD of Pre-test Scores, σ2 = SD of Post-test Scores,
r = Coefficient of Correlation between Pre-test and Post-test Scores
Alternatives to the t test
The t-test can be used to test the equality of the Means of two normal
populations with unknown, but equal, variance. To relax the normality
assumption, a non-parametric alternative to the t-test can be used; the
usual choices are (a) for independent samples, the Mann-Whitney U test
and (b) for related samples, either the binomial test or the Wilcoxon
signed-rank test. To test the equality of the Means of more than two
normal populations, an analysis of variance can be performed.
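For illustration, the following minimal Python sketch (with hypothetical
scores) runs an independent two-sample t-test and a dependent (paired)
t-test using scipy; passing equal_var=False to ttest_ind would give the
unequal-variance form discussed above:

```python
# Independent and paired t-tests on hypothetical achievement scores.
from scipy import stats

boys  = [56, 62, 59, 48, 71, 65, 58, 60, 63, 54]
girls = [61, 66, 58, 70, 64, 72, 59, 68, 65, 62]

# Independent two-sample t-test (equal variances assumed here)
t_ind, p_ind = stats.ttest_ind(boys, girls, equal_var=True)
print(f"Independent t = {t_ind:.2f}, p = {p_ind:.4f}")

# Dependent (paired) t-test, e.g. pre-test vs post-test of the same group
pre  = [40, 45, 38, 50, 42, 47, 44, 39, 48, 43]
post = [46, 50, 41, 57, 47, 52, 49, 42, 55, 49]
t_dep, p_dep = stats.ttest_rel(pre, post)
print(f"Paired t = {t_dep:.2f}, p = {p_dep:.4f}")
```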
z-test : It is used to compare two coefficients of correlation. For example,
a researcher studies the relationship between job satisfaction and school
climate among two groups, viz., male and female teachers. Further, he
may want to ascertain whether this relationship differs among male and
female teachers. In this case, he will have two coefficients of correlation,
r1 for male teachers and r2 for female teachers, for the variables of job
satisfaction and school climate. In such a case, the z-test is used, the
formula for which is as follows:
z = (z1 – z2) ÷ √[1/(N1 – 3) + 1/(N2 – 3)]
where z1 and z2 are the Fisher z-transformed values of r1 and r2.
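The computation can be sketched in Python as follows, with hypothetical
values of r1, r2, N1 and N2; math.atanh gives the Fisher z transformation
of a correlation coefficient:

```python
# z-test for comparing two correlation coefficients (hypothetical values).
import math

r1, n1 = 0.62, 120    # e.g. male teachers
r2, n2 = 0.48, 150    # e.g. female teachers

z1 = math.atanh(r1)   # Fisher's z transformation of r1
z2 = math.atanh(r2)
z = (z1 - z2) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
print(f"z = {z:.2f}")  # compare with 1.96 (0.05 level) or 2.58 (0.01 level)
```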
ANOVA : Analysis of variance (ANOVA) is used for comparing more
than two groups on a single variable. It is a collection of statistical models
and their associated procedures, in which the observed variance is
partitioned into components due to different explanatory variables. In its
simplest form, ANOVA gives a statistical test of whether the means of
several groups are all equal, and therefore generalizes Student's two-
sample t-test to more than two groups.
There are three conceptual classes of such models:
1. Fixed -effects model assumes that the data came from normal
populations which may differ only in their means. The fixed -effects model
of analysis of variance applies to situations in which the experimenter
applies several treatments to the subjects of the experiment to see if the
response variable‘s values change. This allows the experimenter to
estimate the ranges of response variable values that the treatment would
generate in the population as a whole.
2. Random -effects mo del assumes that the data describe a hierarchy of
different populations whose differences are constrained by the hierarchy.
Random effects models are used when the treatments are not fixed. This
occurs when the various treatments (also known as factor leve ls) are
sampled from a larger population. Because the treatments themselves are
random variables, some assumptions and the method of contrasting the
treatments differ from ANOVA.
3. Mixed-effects model describes situations where both fixed and random
effects are present.
Types of ANOVA : These are as follows :
(a) One-way ANOVA : It is used to test for differences among two or more
independent groups. Typically, however, the one-way ANOVA is used to test for
differences among three or more groups, since the two-group case can be covered
by Student's t-test. When there are only two means to compare, the t-test and the
F-test are equivalent, with F = t². For example, a researcher wants to compare
students‘ attitude towards the school on the basis of school types (SSC, CBSE
and ICSE). In this case, there is one dependent variable, namely, attitude towards
the school and three groups, namely, SSC, CBSE and ICSE schools. Here, the
one-way ANOVA is used to test for differences in students‘ attitude towards the
school among the three groups.
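A minimal Python sketch of such a one-way ANOVA, using hypothetical
attitude scores for the three school types, is given below:

```python
# One-way ANOVA comparing three hypothetical groups on one variable.
from scipy import stats

ssc  = [62, 58, 65, 70, 61, 66, 59, 64]
cbse = [68, 72, 66, 75, 70, 69, 73, 71]
icse = [64, 67, 63, 70, 66, 68, 65, 69]

f_ratio, p_value = stats.f_oneway(ssc, cbse, icse)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# If F is significant, follow-up t-tests (or post-hoc tests) can locate
# which pairs of groups differ.
```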
(b) One-way ANOVA for repeated measures : It is used when the
subjects are subjected to repeated measures. This means that the same
subjects are used for each treatment. Note that this method can be subject
to carryover effects. This technique is often used in experimental research
in which we want to compare three or more groups on one dependent
variable which is measured twice i.e. as pre -test and post -test.
(c) Two-way ANOVA : It is used when the researcher wants to study the effects
of two or more independent or treatment variables. It is also known as factorial
ANOVA. The most commonly used type of factorial ANOVA is the 2×2 (read as
"two by two", as you would a matrix) design, where there are two independent
variables and each variable has two levels or distinct values. Two -way ANOVA
can also be multi -level such as 3×3, etc. or higher order such as 2×2×2, etc. but
analyses with higher numbers of factors are rarely done by hand because the
calculations are lengthy. However, since the introduction of data analytic
software, the utilization of higher order designs and analyses has become quite
common. For example, a researcher wants to compare students‘ attitude towards
the school on the basis of (i) school types (SSC, CBSE and ICSE) and (ii)
gender . In this case, there is one dependent variable, namely, attitude towards the
school and two independent variables, viz., (i) school types including three
levels, namely, SSC, CBSE and ICSE schools and (ii) gender including two
levels, namely, boys and girls. Here, the two-way ANOVA is used to test for
differences in students‘ attitude towards the school on the basis of (i) school
types and (ii) gender. This is an example of 3×2 two -way ANOVA as there are
three levels of school types, namely, SSC, CBSE and ICSE schools and two
levels of gender, namely, boys and girls. it is known as two -way ANOVA as
it involves comparing one dependent variable (attitude towards the school)
on the basis of two independent variables, viz., (i) school types and (ii)
gender.
(d) MANOVA : When one wants to compare two or more independent
groups in which the sample is subjected to repeated measures such as pre-
test and post-test in an experimental study, one may perform a factorial
mixed-design ANOVA, i.e. Multivariate Analysis of Variance or
MANOVA, in which one factor is a between-subjects variable and the
other is a within-subjects variable. This is a type of mixed-effect model. It is
used when there is more than one dependent variable.
(e) ANCOVA : While comparing two groups on a dependent variable, if
it is found that they differ on some other variable such as their IQ, SES or
pre-test scores, it is necessary to remove these initial differences. This can
be done by using the technique of ANCOVA.
Assumptions of Using ANOVA
1. Independence of cases .
2. Normality of the distributions in each of the groups.
3. Equality or homogeneity of variances, known as homoscedasticity, i.e.
the variance of data in the groups should be the same. Levene's test for
homogeneity of variances is typically used to confirm
homoscedasticity. The Kolmogorov-Smirnov or the Shapiro-Wilk test
may be used to confirm normality. According to Lindman, the F-test is
unreliable if there are deviations from normality, whereas Ferguson and
Takane claim that the F-test is robust. The Kruskal-Wallis test is a
non-parametric alternative which does not rely on an assumption of
normality. These together form the common assumption that the errors
are independently, identically, and normally distributed for fixed-
effects models.
Critical Ratio for Comparison of Percentages : This technique is used
when the researcher wants to compare two percentages. Its formula is as
follows:
(i) For uncorrelated percentages :
CR = (P1 – P2) ÷ (SE of the difference between percentages)
Where,
P1 = Percentage occurrence of observed behaviour in Group 1,
P2 = Percentage occurrence of observed behaviour in Group 2,
P = (N1P1 + N2P2) ÷ (N1 + N2) = pooled percentage occurrence of
observed behaviour, Q = 1 – P, and
SE of the difference between percentages = √[PQ(1/N1 + 1/N2)]
(ii) For correlated percentages :
CR = (P1 – P2) ÷ (SE of the difference between percentages)
Where P1, P2, P and Q are as defined above, and
SE of the difference between percentages
= √[(SE of P1)² + (SE of P2)² – 2r × (SE of P1) × (SE of P2)]

The obtained CR is compared with the tabulated CR given in the
statistical table to test its significance.
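A minimal Python sketch of the critical ratio for uncorrelated
percentages, following the formula above with hypothetical enrolment
figures (the percentages are handled as proportions), is given below:

```python
# Critical ratio (CR) for two uncorrelated percentages; figures are hypothetical.
import math

n1, p1 = 200, 0.60    # e.g. 60% of 200 boys show the observed behaviour
n2, p2 = 180, 0.48    # e.g. 48% of 180 girls show the observed behaviour

p = (n1 * p1 + n2 * p2) / (n1 + n2)   # pooled proportion
q = 1 - p
se_diff = math.sqrt(p * q * (1 / n1 + 1 / n2))
cr = (p1 - p2) / se_diff
print(f"CR = {cr:.2f}")   # compare with 1.96 (0.05 level) or 2.58 (0.01 level)
```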
Chi-square (Equal Probability and Normal Probability Hypothesis) :
A chi-square test (also chi-squared or χ² test) is any statistical test in which
the test statistic has a chi-square distribution when the null hypothesis is
true, or any distribution in which the probability distribution of the test
statistic (assuming the null hypothesis is true) can be made to approximate
a chi-square distribution as closely as desired by making the sample size
large enough. Chi-square is a statistical test commonly used to compare
observed data with data we would expect to obtain according to a specific
hypothesis. For example, if a researcher expects parents‘ attitude towards
sex education to be provided in schools to be normally distributed, then he
might want to know about the "goodness of fit" between the observed and
expected results, i.e. whether the deviations (differences between observed
and expected) are the result of chance, or whether they are due to other
factors. The chi-square test tests the null hypothesis, which states that
there is no significant difference between the expected and observed
results. If it is assumed that the expected frequencies are equally
distributed in all the cells, the chi-square test is said to test the equal
probability hypothesis. On the other hand, if it is assumed that the
frequencies are expected to be distributed normally, the chi-square test is
said to test the normal probability hypothesis. The chi-square (χ²) test
measures the alignment between two sets of frequency measures. These
must be categorical counts and not percentages or ratio measures.
Thus, chi-square, χ² = ∑ (fo – fe)² ÷ fe, where
fo = observed frequency and fe = expected frequency.
It may be noted that the expected values may need to be scaled to be
comparable to the observed values. A simple test is that the total
frequency/count should be the same for observed and expected values. In a
table, the expected frequency, if not known, may be estimated as: fe =
(row total) × (column total) ÷ N, where N is the grand total of all the
frequencies. The obtained chi-square is compared with that given
in the Chi Square table to determine whether the comparison shows
significance.
In a table, the degrees of freedom are computed as follows: df = (R – 1) ×
(C – 1),
Where R = number of rows and C = number of columns.
Chi-square indicates whether there is a significant association between
variables, but it does not indicate just how significant and important this
is.
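A minimal Python sketch of a chi-square test under the equal probability
hypothesis, using hypothetical counts of parents' attitudes, is given below:

```python
# Chi-square goodness-of-fit test under the equal probability hypothesis.
from scipy import stats

observed = [48, 30, 22]                 # favourable, neutral, unfavourable
expected = [sum(observed) / 3] * 3      # equal expected frequency in each cell

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, df = {len(observed) - 1}, p = {p_value:.4f}")
```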
Check Your Progress – III
Suggest the appropriate statistical technique of data analysis in the
following cases:

(a) You want to find out the relationship between academic achievement and
motivation of students.
(b) You want to compare the academic achievement of boys and girls.
(c) You are conducting a survey of teachers‘ opinion about admission
criteria for junior college adm issions and want to know whether their
opinion is favourable or not.
(d) You want to compare the academic achievement of students from
private -aided, private -unaided and municipal schools.
(e) You want to compare the percentages of girls and boys enrolling for
secondary school.
6A.4 TESTING OF HYPOTHESIS :
There are a number of ways in which the testing of the hypothesis may
bear on a theory. According to Wallace, it can
(i) Lend confirmation to the theory by not disconfirming it;
(ii) Modify the theory by disconfirming it, but not at a crucial point; or
(iii) Overthrow the theory by disconfirming it at a crucial point in its
logical structure, or in its competitive value as compared with rival
theories.
Methods of Testing of Hypothesis
There are three major methods of t esting of hypothesis as follows:
1. Verification : The best test of a hypothesis is to verify whether the
inferences reached from the propositions are consistent with the
observed facts. Verification is of two types: (a) Direct Verification by
Observation or Experimentation, and (b) Indirect Verification by deducing
Verification by deducing consequences from the supposed cause and
comparing them with the facts of experience. This necessitates the
application of the principle of deduction.
2. Experimentum Crucis : This is known as crucial instance or
confirmatory test. When a researcher is confronted with two equally
competent but contradictory hypotheses, he needs one instance which
explains the phenomenon and helps in accepting any one of the
hypotheses. When this is done through an experiment, the experiment
is known as 'Experimentum Crucis'.
3. Consilience of Inductions : This refers to the power which a
hypothesis has of ‗explaining and determining cases of a kind
different from those which were contemplated in the formation of
hypothesis‘. In other words, the hypothesis is accepted and its value is
greatly enhanced when it is found to explain other facts also in
addition to those facts which it was initially designed to explain.
Errors in Testing of Hypothesis :
A researcher tests the null hypothesis using some statistical technique.
Based on the test of statistical significance, he/she accepts or rejects the
null hypothesis and thereby either rejects or accepts the research
hypothesis respectively.
If the null hypothesis is true and is accepted or when it is false and is
rejected, the decisions taken are true.
However, error in testing of hypothesis occurs under the following two
situations:
(i) If the null hypothesis (H 0) is true but is rejected and
(ii) If the null hypothesis (H 0) is fa lse but is accepted.
The former is the example of Type I error while the latter is the example
of Type II error in testing of hypothesis.
Type I error occurs when a true null hypothesis is rejected. It is also
known as Alpha (α) error.
Type II error occurs when a false null hypothesis is accepted. It is also
known as Beta (β) error.
This is shown in the following table :


Possible Situations    Accept H0                   Reject H0
H0 True                Correct Decision            Type I error (α error)
H0 False               Type II error (β error)     Correct Decision

When the sample size N is fixed, if we try to reduce Type I error, the
chances of making Type II error increase. Both types of errors cannot be
reduced simultaneously. More about this will be discussed in the section
on statistical analysis of data.
Check Your Progress – IV
(a) What are the major methods of hypothesis testing?
(b) What are the different types of errors in the hypothesis testing?
6A.5 INTERPRETATION OF DATA AND RESULT
The quantitative data interpretation method is used to analyze quantitative
data, which is also known as numerical data. This data type contains
numbers and is therefore analyzed with the use of numbers and not texts.
Quantitative data are of 2 main types, namely discrete and continuous
data. Continuous data is further divided into interval data and ratio data,
with all the data types being numeric.
Due to its natural existence as numbers, analysts do not need to employ
the coding technique on quantitative data before it is analyzed.
The process of analyzing quantitative data involves statistical modelling
techniques such as standard deviation, mean and median.
Some of the statistical methods used in analyzing quantitative data are
highlighted below:
 Mean
The mean is a numerical average for a set of data and is calculated by
dividing the sum of the values by the number of values in a dataset. It is
used to get an estimate of a large population from the dataset obtained
from a sample of the population.
For example, online job boards in the US use the data collected from a
group of registered users to estimate the salary paid to people of a
particular profession. The estimate is usually made using the average
salary submitted on their platform for each profession.
 Standard deviation
This technique is used to measure how well the responses align with or
deviates from the mean. It describes the degree of consistency within the
responses; together with the mean, it provides insight into data sets.
In the job boa rd example highlighted above, if the average salary of
writers in the US is $20,000 per annum, and the standard deviation is 5.0,
we can easily deduce that the salaries of the professionals are far away
from each other. This will raise other questions, like why the salaries
deviate from each other that much.
With this question, we may conclude that the sample contains people with
few years of experience, which translates to a lower salary, and people
with many years of experience, translating to a higher salary. However, it
does not contain people with mid -level experience.
 Frequency distribution
This technique is used to assess the demography of the respondents or the
number of times a particular response appears in research. It is particularly
useful for determining the degree of intersection between data points.
Some other interpretation processes of quantitative data include:
 Regression analysis
 Cohort analysis
 Predictive and prescriptive analysis
Tips for Collecting Accurate Data for Interpretation
 Identify the Required Data Type
Researchers need to identify the type of data required for particular
research. Is it nominal, ordinal, interval, or ratio data ?
The key to collecting the required data to conduct research is to properly
understand the research question. If the researcher can understand the
research question, then he can identify the kind of data that is required to
carry out the research.
For example, when collecting customer feedback, the best data type to use
is the ordinal data type. Ordinal data can be used to assess a customer's
feelings about a brand and is also easy to interpret.
 Avoid Biases
There are different kinds of biases a researcher might encounter when
collecting data for analysis. Although biases sometimes come from the
researcher, most of the biases encountered during the data collection
process are caused by the respondent.
There are 2 main biases that can be caused by the respondent, namely
response bias and non-response bias. Researchers may not be able to
eliminate these biases, but there are ways in which they can be avoided
and reduced to a minimum.
Response biases are caused by respondents intentionally giving wrong
answers to questions, while non-response bias occurs when the
respondents do not give answers to questions at all. Biases are capable of
affecting the process of data interpretation.
 Use Close Ended Surveys
Although open -ended surveys are capable of giving detailed information
about the questions and allowing respondents to fully express themselves,
it is not the best kind of survey for data interpretation. It requires a lot of
coding before the data can be analyzed.
Close -ended surveys , on the other hand, restrict the respondents' answers
to some predefined options, w hile simultaneously eliminating irrelevant
data. This way, researchers can easily analyze and interpret data.
However, close-ended surveys may not be applicable in some cases, like
when collecting respondents' personal information like name, credit card
details, phone number, etc.
Visualization Techniques in Data Analysis
One of the best practices of data interpretation is the visualization of the
dataset. Visualization makes it easy for a layman to understand the data,
and also encourages people to view the data, as it provides a visually
appealing summary of the data.
There are different techniques of data visualization, some of which are
highlighted below.
Bar Graphs
Bar graphs are graphs that interpret the relationship between 2 or more
variables using rectangular bars. These rectangular bars can be drawn
either vertically or horizontally, but they are mostly drawn vertically.
The graph contains the horizontal axis (x) and the vertical axis (y), with
the former representing the independent variable while the latter is the
dependent variable. Bar graphs can be grouped into different types,
depending on how the rectangular bars are placed on the graph.
Some types of bar graphs are highlighted below:
 Grouped Bar Graph
The grouped bar graph is used to show more information about variables
that are subgroups of the same group, with each subgroup's bar placed
side-by-side as in a histogram.
 Stacked Bar Graph
A stacked bar graph is a grouped bar graph with its rectangular bars
stacked on top of each other rather t han placed side by side.

 Segmented Bar Graph
Segmented bar graphs are stacked bar graphs where each rectangular bar
shows 100% of the dependent variable. It is mostly used when there is an
intersection between the variable categories.
Advantages of a Bar Graph
 It helps to summarize large data sets.
 Estimations of key values can be made at a glance.
 It can be easily understood.
Disadvantages of a Bar Graph
 It may require additional explanation.
 It can be easily manipulated.
 It doesn't properly describe the dataset.
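A minimal Python sketch of a grouped bar graph, using matplotlib
(assumed to be installed) and hypothetical mean scores, is given below:

```python
# Grouped bar graph comparing hypothetical mean scores of two sub-groups.
import matplotlib.pyplot as plt

subjects = ["Maths", "Science", "English"]
boys_means = [62, 58, 65]
girls_means = [66, 61, 70]

x = range(len(subjects))
width = 0.35

plt.bar([i - width / 2 for i in x], boys_means, width, label="Boys")
plt.bar([i + width / 2 for i in x], girls_means, width, label="Girls")
plt.xticks(list(x), subjects)
plt.ylabel("Mean achievement score")
plt.title("Grouped bar graph of mean scores by gender")
plt.legend()
plt.show()
```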
Pie Chart
A pie chart is a circular graph used to represent the percentage of
occurrence of a variable using sectors. The size of each sector is
dependent on the frequency or percentage of the corresponding variables.
There are different variants of the pie chart, but for the sake of this
discussion, we will restrict ourselves to only 3. For better illustration of
these types, let us consider the following example.
Pie Chart Example : There are a total of 50 students in a class, and out of
them, 10 students like Football, 25 students like Snooker, and 15 students
like Badminton.
 Simple Pie Chart
The simple pie chart is the most basic type of pie chart, which is used to
depict the general distribution of the data.
 Doughnut Pie Chart
Doughnut pie is a variant of the pie chart, with a blank center allowing for
additional information about the data as a whole to be included.
 3D Pie Chart
A 3D pie chart is used to give the chart a 3D look and is often used for
aesthetic purposes. It is usually difficult to read because of the distortion
of perspective due to the third dimension.
Advantages of a Pie Chart
 It is visually appealing.
 Best for comparing small data samples.
Disadvantages of a Pie Chart
 It can only compare small sample sizes.
 Unhelpful with observing trends over time.
Tables
Tables are used to represent statistical data by placing them in rows and
columns. They are one of the most common statistical visualization
techniques and are of 2 main types, namely simple and complex tables.
 Simple Tables
Simple tables summarize information on a single characteristic and may
also be called univariate tables. An example is a simple table showing the
number of employed people in a community with respect to their age group.
 Complex Tables
As its name suggests, complex tables summarize complex information and
present it in two or more intersecting categories. An example of a complex
table is one showing the number of employed people in a population with
respect to their age group and sex.
Advantages of Tables
 Can contain large data sets
 Helpful in comparing 2 or more similar things
Disadvantages of Tables
 They do not give detailed information.
 May be time-consuming.
Line Graph
Line graphs or charts are a type of graph that displays information as a
series of points, usually connected by a straight line. Some of the types of
line graphs are highlighted below.
 Simple Line Graphs
Simple line graphs show the trend of data over time, and may also be used
to compare categories. Let us assume we got the sales data of a firm for
each quarter and are to visualize it using a line graph to estimate sales for
the next year.
 Line Graphs with Markers
These are similar to line graphs but have visible markers illustrating the
data points
 Stacked Line Graphs
Stacked line graphs are line graphs where the points do not overlap, and
the graphs are therefore placed on top of each other. Consider that we got
the quarterly sales data for each product sold by the company and are to
visualize it to predict company sales for the next year.
Advantages of a Line Graph
 Great for visualizing trends and changes over time.
 It is simple to construct and read.
Disadvantage of a Line Graph
 It cannot compare different variables at a single place or time.
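To illustrate the quarterly-sales example mentioned above, a simple line graph with markers might be drawn as follows; the sales figures are invented and the sketch only demonstrates the idea.

```python
# A minimal sketch of a simple line graph with markers.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 150, 170, 160]           # hypothetical quarterly sales figures

plt.plot(quarters, sales, marker="o")  # marker="o" shows each data point
plt.xlabel("Quarter")
plt.ylabel("Sales")
plt.title("Quarterly sales (line graph with markers)")
plt.show()
```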
What are the Steps in Interpreting Data?
After data collection, you’d want to know the result of your findings.
Ultimately, the findings of your data will be largely dependent on the
questions you’ve asked in your survey or your initial study questions. Here
are the four steps for accurately interpreting data:
1. Gather the data
The very first step in interpreting data is having all the relevant data
assembled. You can do this by visualizing it first, either in a bar graph or
pie chart. The purpose of this step is to accurately analyze the data without
any bias.
Now is the time to remember the details of how you conducted the
research. Were there any flaws or changes that occurred when gathering
this data? Did you keep any observatory notes and indicators?
Once you have your complete data, you can move to the next stage.
2. Develop your findings
This is the summary of your observations. Here, you observe this data
thoroughly to find trends, patterns, or behavior. If you are researching
a group of people through a sample population, this is where you
analyze behavioral patterns. The purpose of this step is to compare these
deductions before drawing any conclusions. You can compare these
deductions with each other, similar data sets in the past, or general
deductions in your industry.
3. Derive Conclusions
Once you’ve developed your findings from your data sets, you can then
draw conclusions based on trends you’ve discovered. Your conclusions
should answer the questions that led you to your research. If they do not
answer these questions, ask why; it may lead to further research or
subsequent questions.
4. Give recommendations
For every research conclusion, there has to be a recommendation. This is
the final step in data interpretation because recommendations are a
summary of your findings and conclusions. For recommendations, it can
only go in one of two ways. You can either recommend a line of action or
recommend that further research be conducted.
Advantages and Importance of Data Interpretation
 Data interpretation is important because it helps make data-driven
decisions.
 It saves costs by revealing cost-saving opportunities.
 The insights and findings obtained from interpretation can be used to
spot trends in a sector or industry.
Conclusion
Data interpretation and analysis are important aspects of working with
data sets in any field of research and statistics. They both go hand in hand,
as the process of data interpretation involves the analysis of data.
The process of data interpretation is usually cumbersome, and should
naturally become more difficult with the vast amount of data that is being
churned out daily. However, with the accessibility of data analysis tools
and machine learning techniques, analysts are gradually finding it easier to
interpret data.
Data interpretation is very important, as it helps to acquire useful
information from a pool of irrelevant data while making informed
decisions. It is found useful for individuals, businesses, and researchers.
Suggested Readings
1. Guilford, J. P. and Fruchter, B. Fundamental Statistics in Psychology
and Education. Singapore: McGraw-Hill Book Company. 1981.
2. Best, J. and Kahn, J. Research in Education (9th ed.), New Delhi: Prentice
Hall of India Pvt. Ltd. 2006.
3. Garret, H. E. Statistics in Psychology and Education. New York:
Longmans Green and Co. 5th edition, 1958.
4. https://www.formpl.us/blog/data-interpretation
6B
DATA ANALYSIS AND REPORT
WRITING -II
(QUALITATIVE DATA ANALYSIS)

Unit Structure:
6B.0 Objectives
6B.1 Introduction
6B.2 Qualitative Data Analysis
 Data Reduction and Classification
 Analytical Induction
 Constant Comparison
6B.0 OBJECTIVES:
After reading this unit the student will be able to:
 Explain the meaning of qualitative data analysis.
 State the broad focus of qualitative research.
 State the specific research questions usually formulated in qualitative
research.
 State the principles and characteristics of qualitative data analysis.
 Explain the strategies of qualitative data analysis.
6B.1 INTRODUCTION:
Meaning: Qualitative data analysis is the array of processes and
procedures whereby a researcher provides explanations, understanding and
interpretations of the phenomenon under study on the basis of meaningful
and symbolic content of qualitative data. It provides ways of discerning,
examining, comparing and contrasting and interpreting meaningful
patterns and themes. Meaningfulness is determined by the specific goals
and objectives of the topic at hand wherein the same set of data can be
analysed and synthesised from multiple angles depending on the research
topic. It is based on the interpretative philosophy. Qualitative data are
subjective, soft, rich
and in-depth descriptions usually presented in the form of words. The
most common forms of obtaining qualitative data include semi-structured
and unstructured interviews, observations, life histories and documents.
The process of analysing them is difficult and rigorous.
Broad Focus of Qualitative Research : These include finding answers to
the following questions:
• What is the interpretation of the world from the participants'
perspective?
• Why do they have a particular perspective?
• How did they develop such a perspective?
• What are their activities?
• How do they identify and classify themselves and others?
• How do they convey their perspective of their situation?
• What patterns and common themes surface in participants' responses
dealing with specific items? How do these patterns shed light on the
broader study questions?
• Are there any deviations from these patterns? If so, are there any
factors that might explain these atypical responses?
• What stories emerge from these responses? How do these stories help
in illuminating the broader study questions?
• Do any of these patterns or findings suggest additional data that may
be required? Do any of the study questions need to be revised?
• Do the patterns that emerge substantiate the findings of any
corresponding qualitative analyses that have been conducted? If not,
what might explain these discrepancies?
Specific Research Questions Usually Formulated in Qualitative
Research
• What time did school start in the morning?
• What would students probably have to do before going to school?
• What was the weather like in the month of data collection?
• How did the students get from home to school?
• What were the morning activities?
• What did the students do in the recess?
• What were the activities after recess?
• What was the classroom environment like?
• What books and instructional materials were used?
• At what time did the school get over?
• What would students probably have to do after school got over?
• What kind of homework was given and how much time it required?
Before describing the process of qualitative data analysis, it is
necessary to describe the terms associated with the process.
Principles of Qualitative Data Analysis
These are as follows;
1 Proceeding systematically and rigorously (minimize human error).
2 Recording process, memos, journals, etc.
3 Focusing on responding to research questions.
4 Identifying appropriate level of interpretation suitable to a situation.
5 Simultaneous process of inquiry and analysis.
6 Seeking to explain or enlighten.
7 Evolutionary/emerging.
Characteristics of Qualitative Data Analysis:
According to Seidel, the process has the following characteristics :
a. Iterative and Progressive : The process is iterative and progressive
because it is a cycle that keeps repeating. For example, if you are thinking
about things, you also start noticing new things in the data. You then
collect and think about these new things. In principle the process is an
infinite spiral.
b. Recursive : The process is recursive because one part can call you
back to a previous part. For example, while you are busy collecting things,
you might simultaneously start noticing new things to collect.
c. Holographic : The process is holographic in that each step in the
process contains the entire process. For example, when you first notice
things, you are already mentally collecting and thinking about those
things.
6B.2 COMPONENTS OF QUALITATIVE DATA
ANALYSIS
According to Miles and Huberman, following are the major components
of qualitative data analysis:
(A) Data Reduction: "Data reduction refers to the process of selecting,
focusing, simplifying, abstracting, and transforming the data that appear in
written up field notes or transcriptions." First, the mass of data has to be
organized and somehow meaningfully reduced or reconfigured. These data
are condensed so as to make them more manageable. They are also
transformed so that they can be made intelligible in terms of the issues
being addressed. Data reduction often forces choices about which aspects
of the accumulated data should be emphasised, reduced or set aside
completely for the purposes of the topic at hand. Data in themselves do not
reveal anything and hence it is not necessary to present a large amount of
unassimilated and uncategorized data for the reader's consumption in order
to show that you are "perfectly objecti ve". In qualitative analysis, the
researcher uses the principle of selectivity to determine which data are to
be singled out for description. This usually involves some combination of
deductive and inductive analysis. While initial categorizations are shaped
by pre -established research questions, the qualitative researcher should
remain open to inducing new meanings from the data available. Data
reduction should be guided primarily by the need to address the salient
question(s) in a research. This necessitates selective winnowing/sifting
which refers to removing data from a group so that only the best ones
which are relevant for answering particular research questions are left.
This is difficult as not only qualitative data are very rich but also because
the person who analyses the data also often plays a direct, personal role in
collecting them. The process of data reduction starts with a focus on
distilling what the different respondents report about the activity, practice
or phenomenon under study to share knowledge. The information given by
various categories of sample is now compared – such as the information
given by experienced and new teachers or the information given by
teachers, principal, students and/or parents about central themes of the
research. In setting out these similarities and dissimilarities, it is important
not to so" flatten" or reduce the
data that they sound like close -ended survey responses. The researcher
should ensure that the richness of the data is not unfairly and unnecessarily
diluted. Apart from exploring the specific content of the respondents'
views, it is also a good idea to take note of the relative frequency with
which different issues are raised, as well as the intensity with which they
are expressed.
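One way of picturing selective winnowing and the tallying of issue frequency described above is the small Python sketch below. The coded segments, respondent categories and code labels are all invented, and the data structure is only one possible way of organising such material.

```python
# A minimal sketch of selective winnowing of coded segments, assuming the
# researcher has already stored (respondent category, code, text) records.
from collections import Counter

coded_segments = [
    ("teacher", "workload",  "There is too little time for lesson planning."),
    ("teacher", "resources", "We lack laboratory equipment."),
    ("student", "workload",  "Homework takes several hours every night."),
    ("parent",  "resources", "The library has very few reference books."),
    ("teacher", "canteen",   "The canteen menu rarely changes."),
]

relevant_codes = {"workload", "resources"}   # codes tied to the research question

# Winnow: keep only segments that address the salient question(s)
winnowed = [s for s in coded_segments if s[1] in relevant_codes]

# Relative frequency with which each issue is raised, by respondent category
frequency = Counter((category, code) for category, code, _ in winnowed)
for (category, code), n in frequency.items():
    print(f"{category:<8} {code:<10} raised {n} time(s)")
```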
(B) Data Display : Data display provides "an organized, compressed
assembly of information that permits conclusion drawing..." A display can
be an extended piece of text or a diagram, chart or matrix that provides a
new way of arranging and thinking about the more textually embedded
data. Data displays permit the researcher to extrapolate from the data
enough to begin to identify systematic patterns and interrelationships. At
the display stage, additional, higher order categories or themes may
emerge from the data that go beyond those first discovered during the
initial process of data reduction. Data display can be extremely helpful in
identifying whether a system is working effectively and how to change it.
The qualitative researcher needs to discern patterns among various
concepts so as to gain a clear understanding of the topic at hand. Data
could be displayed using a series of flow charts that map out any critical
paths, decision points, and supporting evidence that emerge from
establishing the data for each site. The researcher may (1) use the data
from subsequent sites to modify the original flow chart of the first site, (2)
prepare an independent flow chart for each site; and/or (3) prepare a single
flow chart for some events (if most sites adopted a generic approach) and
multiple flow charts for others.
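A data display of the matrix kind described above could be sketched with pandas as follows; the sites and theme counts are hypothetical and serve only to show how a case-by-theme matrix helps in spotting patterns.

```python
# A minimal sketch of a data display as a case-by-theme matrix.
# Sites and counts are invented for illustration.
import pandas as pd

display_matrix = pd.DataFrame(
    {
        "workload":   [5, 2, 4],
        "resources":  [1, 6, 3],
        "assessment": [2, 2, 5],
    },
    index=["Site A", "Site B", "Site C"],
)

print(display_matrix)                   # condensed display of coded data
print(display_matrix.idxmax(axis=1))    # dominant theme at each site
```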
(C) Conclusion Drawing and Verification : Conclusion drawing requires
a researcher to begin to decide what things mean. He does this by noting
regularities, patterns (differences/similarities), explanations, possible
configurations, causal flows, and propositions. This process involves
stepping back to consider what the analysed data mean and to assess their
implications for the questions at hand. Verification, integrally linked to
conclusion drawing, entails revisiting the data as many times as necessary
to cross-check or verify these emergent conclusions. Miles and Huberman
assert that "The meanings emerging from the data have to be tested for
their plausibility, their sturdiness, their 'confirmability', that is, their
validity". Validity in this context refers to whether the conclusions being
drawn from the data are credible, defensible, warranted, and able to
withstand alternative explanations. When qualitative data are used with the
intention of identifying dimensions/aspects of a concept for
designing/developing a quantitative tool, this step may be postponed.
Reducing the data and looking for relationships will provide adequate
information for developing other instruments.
Miles and Huberman describe several tactics of systematically examining
and re-examining the data including noting patterns and themes, clustering
cases, making contrasts and comparisons, partitioning variables and
subsuming particulars in the general which can be employed
simultaneously and iteratively for drawing conclusions in a qualitative
research. This process is facilitated if the theoretical or logical
assumptions underlying the research are stated clearly. They further
identify 13 tactics for testing or confirming findings, all of which address
the need to build systematic "safeguards against self-delusion" into the
process of analysis.
THE PROCEDURES OF QUALITATIVE DATA ANALYSIS
These are as follows:
1 Coding/indexing
2 Categorisation
3 Abstraction
4 Comparison
5 Dimensionalisation
6 Integration
7 Iteration
8 Refutation (subjecting inferences to scrutiny)
9 Interpretation (grasp of meaning - difficult to describe procedurally)
Steps of Qualitative Data Analysis
The Logico-Inductive process of data analysis is as follows:
• Analysis is logico -inductive.
• Data are mostly verbal.
• Observations are made of behaviours, situations, interactions, objects
and environment.
• Becoming familiar with the data.
• Data are examined in depth to provide detailed descriptions of the
setting, participants and activity (describing).
• Coding pieces of data.
• Grouping them into potential themes (classifying) which are identified
from observations through reading/memoing.
• Themes are clustered into categories.
• Categories are scrutinised to discover patterns.
• Explanations are made from patterns.
• Interpreting and synthesizing the organised data into general written
conclusions or understandings based on what is observed and are
stated verbally (interpreting).
• These conclusions are used to answer research questions.
Terms associated with Qualitative Data Analysis:
• Data : It is the information obtained in the form of words.
• Category : It is a classification of ideas and concepts. When concepts
in the data are examined and compared with one another and
connections are made, categories are formed. Categories are used to
organise similar concepts into distinct groups.
• Pattern : It is a link or the relationship between two or more categories
that further organises the data and that usually becomes the primary
basis of organising and reporting the outcomes of the study. Pattern
seeking means examining the data in as many ways as possible
through understanding the complex links between situations,
processes, beliefs and actions.
Qualitative data analysis is predominantly an inductive process of
organizing data into categories and identifying patterns (relationships) among
the categories.

Types of Codes Usually Used in Educational Research:
Seidel identifies three major types of codes in qualitative analysis of data:
1. Descriptive Coding : This is when coding is used to describe what is
in the data.
2. Objectivist Coding : According to Seidel and Kelle, an objectivist
approach treats code words as "condensed representation of the facts
described in the data". Given this assumption, code words can be treated
as substitutes for the text and the analysis can focus on the codes
instead of the text itself. You can then imitate traditional distributional
analysis and hypothesis testing for qualitative data. But first you must
be able to trust your code words. To trust a code word you need : 1) to
guarantee that every time you use a code word to identify a segment of
text that segment is an unambiguous instance of what that code word
represents, 2) to guarantee that you applied that code word to the text
consistently in the traditional sense of the concept of reliability, and 3)
to guarantee that you have identified every instance of what the code
represents. If the above conditions are met, then: 1) the codes are
adequate surrogates for the text they identify, 2) the text is reducible to
the codes, and 3) it is appropriate to analyze relationships among
codes. If you fall short of meeting these conditions then an analysis of
relationships among code words is risky business.
3. Heuristic Coding : In a heuristic approach, code words are primarily
flags or signposts that point to things in the data. The role of code
words is to help you collect the things you have noticed so you can
subject them to further analysis. Heuristic codes help you reorganize
the data and give you different views of the dat a. They facilitate the
discovery of things, and they help you open up the data to further
intensive analysis and inspection. The burdens placed on heuristic
codes are much less than those placed on objective codes. In a
heuristic approach code words more or less represent the things you
have noticed. You have no assurance that the things you have coded
are always the same type of thing, nor that you have captured every
possible instance of that thing in your coding of the data. This does not
absolve you of the responsibility to refine and develop your coding
scheme and your analysis of the data. Nor does it excuse you from
looking for counterexamples and confirming examples in the data. The
heuristic approach does say that coding the data is never enough. It is
the beginning of a process that requires you to work deeper and deeper
into your data. Further, heuristic code words change and evolve as the
analysis develops. The way you use the same code word changes over
time. Text coded at time one is not necessarily equivalent with text
coded at time two. Finally, heuristic code words change and transform
the researcher who, in turn, changes and transforms the code words as
the analysis proceeds.
Bogdan and Biklen (1998) provide common types of coding categories,
but emphasize that your hypotheses shape your coding scheme.
Setting/Context codes provide background information on the setting,
topic, or subjects.
1. Defining the Situation codes categorize the world view of respondents
and how they see themselves in relation to a setting or your topic.
2. Respondent Perspective codes capture how respondents define a
particular aspect of a setting. These perspectives may be summed up in
phrases they use, such as, "Say what you mean, but don't say it mean."
3. Respondents' Ways of Thinking about People and Objects codes
capture how they categorize and view each other, outsiders, and
objects. For example, a dean at a private school may categorize
students: "There are crackerjack kids and there are junk kids."
4. Process codes categorize sequences of events and changes over time.
5. Activity codes identify recurring informal and formal types of
behaviour.
6. Event codes, in contrast, are directed at infrequent or unique
happenings in the setting or lives of respondents.
7. Strategy codes relate to ways people accomplish things, such as how
instructors maintain students' attention during lectures.
8. Relationship and social structure codes tell you about alliances,
friendships, and adversaries as well as about more formally defined
relations such as social roles.
9. Method codes identify your research approaches, procedures,
dilemmas, and breakthroughs.
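To make the idea of a coding scheme concrete, the sketch below groups a few invented interview segments under some of the code families listed above. The abbreviations and the dictionary structure are assumptions made only for illustration, not a prescribed format.

```python
# A minimal sketch of applying a small coding scheme to interview segments.
# Codes and text are invented.
coding_scheme = {
    "SET": "Setting/Context",
    "ACT": "Activity",
    "STR": "Strategy",
}

coded_data = [
    {"code": "SET", "segment": "The school is in a rural block with 200 pupils."},
    {"code": "ACT", "segment": "Every Friday the staff hold a review meeting."},
    {"code": "STR", "segment": "I ask a question every two minutes to keep attention."},
]

# Group segments by code so each category can be examined together
by_code = {}
for item in coded_data:
    by_code.setdefault(item["code"], []).append(item["segment"])

for code, segments in by_code.items():
    print(coding_scheme[code], "->", len(segments), "segment(s)")
```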
Check Your Progress - I
(a) State the components of qualitative data analysis.
(b) Which are the different types of codes used in qualitative research in
education?
(c) What are the terms associated with qualitative data analysis?
(d) Explain the steps of qualitative data analysis.
STRATEGIES OF QUALITATIVE DATA ANALYSIS:
Some of these are as follows:
A. Analytical Induction : Analytic induction is a way of building
explanations in qualitative analysis by constructing and testing a set of
causal links between events, actions etc. in one case and the iterative
extension of this to further cases. It is research logic used to collect,
develop analysis and organize the presentation of research findings. It
refers to a systematic and exhaustive examination of a limited number of
cases in order to provide generalizations and identify similarities between
various social phenomena in order to develop contacts or ideas. Its formal
objective is causal explanation. It has its origin in the theory of symbolic
interaction which stipulates that a person's actions are built up and evolve
over time through processes of learning, trial-and-error and adjustment to
responses by others. This helps in searching for broad categories followed
by development of subcategories. If no relevant similarities can be
identified, then either the data needs to be re-evaluated and the definition
of similarities changed, or the category is too wide and heterogeneous and
should be narrowed down. In analytical induction, definitions of terms are
not identified/ determined at the beginning of research. They are rather,
considered hypotheses to be tested using inductive reasoning. It allows for
modification of concepts and relationships between concepts aimed at
representing reality of the situation most accurately .
According to Katz, "Analytic induction (AI) is a research logic used to
collect data, develop analysis, and organize the presentation of research
findings. Its formal objective is causal explanation, a specification of the
individually necessary and jointly sufficient conditions for the emergence
of some part of social life. AI calls for the progressive redefinition of the
phenomenon to be explained (the explanandum) and of explanatory
factors (the explanans), such that a perfect (sometimes called "universal")
relationship is maintained. Initial cases are inspected to locate common
factors and provisional explanations. As new cases are examined and
initial hypotheses are contradicted, the explanation is reworked in one or
both of two ways. The definition of the explanandum may be redefined so
that troublesome cases either become consistent with the explanans or are
placed outside the scope of the inquiry; or the explanations may be revised
so that all cases of the target phenomenon display the explanatory
conditions. There is no methodological value in piling up confirming
cases; the strategy is exclusively qualitative, seeking encounters with new
varieties of data in order to force revisions that will make the analysis
valid when applied to an increasingly diverse range of cases. The
investigation continues until the researcher can no longer practically
pursue negative cases."
Usually, three explanatory mechanisms are available for presenting the
findings in analytical induction as follows:
(a) Practicalities of action.
(b) Self-awareness and self-regard.
(c) Sensual base of motivation in desires, emotions or a sense of
compulsion to act.
The steps of analytical induction process are as follows :
a) Develop a hypothetical statement drawn from an individual instance.
b) Compare that hypothesis with alternative possibilities taken from other
instances. Thus the social system provides categories and
classifications, rather than being imposed upon the social system.
Progress in the social sciences is escalated further by comparing
aspects of a social system with similar aspects in alternative social
systems. The emphasis in the process is upon the whole, even though
elements are analysed as are relationships between those elements. It is
not necessary that the specific cases being studied are "average" or
representative of the phenomena.
According to Cressey, the steps of analytical induction process are as
follows :
a) A phenomenon is defined in a tentative manner.
b) A hypothesis is developed about it.
c) A single instance is considered to determine if the hypothesis is
confirmed.
d) If the hypothesis fails to be confirmed, either the phenomenon is
redefined or the hypothesis is revised so as to include the instance
examined.
e) Additional cases are examined, and if the new hypothesis is repeatedly
confirmed, some degree of certainty about the hypothesis is ensured.
f) Each negative case requires that the hypothesis be reformulated until
there are no exceptions.
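Cressey's steps can be read as an iterative procedure. The sketch below expresses that loop in Python; the functions explains and revise stand for the researcher's own judgement and are not part of any library, so this is only a schematic rendering of the logic, not a working method.

```python
# A schematic sketch of Cressey's analytic induction loop. `explains` and
# `revise` are placeholders for the researcher's judgement; with a poor
# `revise` the loop would not terminate, which mirrors the open-ended
# nature of reformulating hypotheses.
def analytic_induction(cases, hypothesis, explains, revise):
    changed = True
    while changed:                     # repeat until no case contradicts
        changed = False
        for case in cases:
            if not explains(hypothesis, case):
                # A negative case forces reformulation of the hypothesis
                # (or, in practice, redefinition of the phenomenon).
                hypothesis = revise(hypothesis, case)
                changed = True
    return hypothesis                  # holds for every case examined
```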
B. Constant Comparison : Many writers suggest ways of
approaching your data so that you can do the coding of the data with an
open mind and recognize noteworthy patterns in the data. Perhaps the
most famous are those made by the grounded theorists. This could be done
through the constant comparison method. This requires that every time you
select a passage of text (or its equivalent in video etc.) and code it, you
should compare it with all those passages you have already coded that
way, perhaps in other cases. This ensures that your coding is consistent
and allows you to consider the possibility either that some of the passages
coded that way do not fit as well and could therefore be better coded as
something else or that there are dimensions or phenomena in the passages
that might well be coded another way as well. But the potential for
comparisons does not stop there. You can compare the passage with those
coded in similar or related ways or even compare them with cases and
examples from outside your data set altogether. Previously coded text also
needs to be checked to see if the new codes created are relevant. Constant
comparison is a central part of grounded theory. Newly gathered data are
continually compared with previously collected data and their coding in
order to refine the development of theoretical categories. The purpose is to
test emerging ideas that might take the research in new and fruitful
directions. In the case of far out comparisons, the comparison is made with
cases and situations that are similar in some respects but quite different in
others and may be completely outside the study. For example, still
thinking about parental help, we might make a comparison with the way
teachers help students. Reflecting on the similarities and differences
between teaching and parental relationships might suggest other
dimensions to parental help, like the way that teachers get paid for their
work but parents do not.
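The constant comparison routine described above can be pictured schematically as follows. This is only a sketch: is_consistent stands for the researcher's judgement about whether two passages belong under the same code.

```python
# A minimal sketch of constant comparison: each newly coded passage is
# compared with the passages already assigned the same code.
def constant_comparison(new_passage, code, coded_passages, is_consistent):
    previous = coded_passages.setdefault(code, [])
    # Compare the new passage with every passage already coded this way
    inconsistent = [p for p in previous if not is_consistent(new_passage, p)]
    if inconsistent:
        # The passage may fit better under another code, or the category
        # may need a new dimension; return the clashes for re-examination.
        return inconsistent
    previous.append(new_passage)
    return []
```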
Ryan and Bernard suggest a number of ways in which those coding
transcripts can discover new themes in their data. Drawing heavily on
Strauss and Corbin (1990) they suggest these include:
a. Word repetitions : Look for commonly used words and words whose
close repetition may indicate emotions.
b. Indigenous categories (what the grounded theorists refer to as in vivo
codes) : It refers to terms used by respondents with a particular
meaning and significance in their setting.
c. Key-words -in-context : Look for the range of uses of key terms in the
phrases and sentences in which they occur.
d. Compare and contrast : It is essentially the grounded theory idea of
constant comparison. Ask 'what is this about?' and 'how does it differ
from the preceding or following statements?'
e. Social science queries : Introduce social science explanations and
theories, for example, to explain the conditions, actions, interaction
and consequences of phenomena.
f. Searching for missing information : It is essential to try to get an
idea of what is not being done or talked about, but which you would have
expected to find.
g. Metaphors and analogies : People often use metaphor to indicate
something about their key, central beliefs about things and these may
indicate the way they feel about things too.
h. Transitions : One of the discursive elements in speech which includes
turn-taking in conversation as well as the more poetic and narrative
use of story structures.
i. Connectors : It refers to connections between terms such as causal
('since', 'because', 'as', etc.) or logical ('implies', 'means', 'is one of', etc.).
j. Unmarked text : Examine the text that has not been coded at a theme
or even not at all.
k. Pawing (i.e. handling) : It refers to marking the text and eyeballing or
scanning the text. Circle words, underline, use coloured highlighters,
run coloured lines down the margins to indicate different meanings
and coding. Then look for patterns and significances.
l. Cutting and sorting : It refers to the traditional technique of cutting
up transcripts and collecting all those coded the same way into piles,
envelopes or folders or pasting them onto cards. Laying out all these
scraps and re -reading them, together, is an essential part of the process
of analysis.
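Two of the tactics above, word repetitions and key-words-in-context, lend themselves to simple automation. The sketch below uses only Python's standard library on an invented transcript; it illustrates the idea and is not part of the original text.

```python
# A minimal sketch of the "word repetitions" and "key-words-in-context"
# tactics. The transcript text is invented.
import re
from collections import Counter

transcript = ("The homework load is heavy. Parents help with homework every "
              "evening. Teachers say homework builds discipline.")

words = re.findall(r"[a-z']+", transcript.lower())
print(Counter(words).most_common(5))          # frequently repeated words

key_term = "homework"
contexts = [s.strip() for s in re.split(r"[.!?]", transcript)
            if key_term in s.lower()]
print(contexts)                               # sentences containing the key term
```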
C. Triangulation : According to Berg and Berg, triangulation is a term
originally associated with surveying activities, map making, navigation
and military practices. In each case, there are three known objects or
points used to draw sighting lines towards an unknown point or object.
Usually, these three sighting lines will intersect forming a triangle known
as the triangle of error. Assuming that the three lines are equal in error, the
best estimated place of the new point or object is at the centre of the
triangle. The word triangulation was first used in the social sciences as a
metaphor describing a form of multiple operationalisation or convergent
validation. Campbell and Fiske were the first to apply the navigational
term triangulation to research. The simile is quite appropriate because a
phenomenon under study in a qualitative research is much like a ship at
sea as the exact description of the phenomenon in a qualitative research is
unclear. They used the term triangulation to describe multiple data
collection strategies for measuring a single concept. This is known as data
triangulation. According to them, triangulation is a powerful way of
demonstrating concurrent validity, particularly in qualitative research.
Later on, Denzin introduced another metaphor, viz., 'line of action' which
characterises the use of multiple data collection strategies (usually three),
multiple theories, multiple researchers, multiple methodologies or a
combination of these four categories of researcher activities. This is aimed
at mutual confirmation of measures and validation of findings. The
purpose of triangulation is not restricted to combining different kinds of
data but to relate them so as to enhance the validity of the findings.
Triangulation is an approach to research that uses a combination of more
than one research strategy in a single investigation. Triangulation can be a
useful tool for qualitative as well as quantitative researchers. The goal in
choosing different strategies in the same study is to balance them so each
counterbalances the margin of error in the other.
Used with care, it contributes to the completeness and confirmation of
findings necessary in qualitative research investigations.
Choosing Triangulation as a Research Strategy
Qualitative investigators may choose triangulation as a research strategy to
assure completeness of findings or to confirm findings. As in the parable of
the blind men describing an elephant, the most accurate description of the
elephant comes from a combination of all three individuals' descriptions.
Researchers might also choose triangulation to
confirm findings and conclusions. Any single qualitative research strategy
has its limitations. By combining different strategies, researchers confirm
findings by overcoming the limitations of a single strategy. Uncovering
the same information from more than one vantage point helps researchers
describe how the findings occurred under different circumstances and
assists them to confirm the validity of the findings.
Types of Triangulation
1. Data Triangulation : Time, Space, Person
2. Method Triangulation : Design, Data Collection
3. Investigator Triangulation
4. Theory Triangulation
5. Multiple Triangulation, which uses a combination of two or more
triangulation techniques in one study.
Each of these is described in detail in the following paragraphs.
1. Data Triangulation
According to Denzin (1989) there are three types of data triangulation: (a)
time, (b) space, and (c) person.
(a) Time Triangulation : Here, the researcher(s) collect data about a
phenomenon at different points in time. However, studies based on
longitudinal designs are not considered examples of data triangulation for
time because they are intended to document changes over time.
Triangulation of data analysis in cross-sectional and longitudinal research
is an example of time triangulation.
(b) Space Triangulation : It consists of collecting data at more than one
site. At the outset, the researcher must identify how time or space relate to
the study and make an argument supporting the use of different time or
space collection points in the study. By collecting data at different points
in time and in different spaces, the researcher gains a clearer and more
complete description of decision making and is able to differentiate
characteristics that span time periods and spaces from characteristics
specific to certain times and spaces.
(c) Person Triangulation : According to Denzin, person triangulation has
three levels, viz., aggregate, interactive and collective. It is also known as
combined levels of triangulation. Here researchers collect data from more
than one level of person, that is, a set of individuals, groups, or collectives.
Researchers might also discover data that are dissimilar among levels. In
such a case, researchers would collect additional data to resolve the
incongruence. According to Smith, there are seven levels of 'person
triangulation' as follows:
i. The Individual Level.
ii. Group Analysis : The interaction patterns of individuals and groups.
iii. Organisational Units of Analysis : Units which have qualities not
possessed by the individuals making them up.
iv. Institutional Analysis : Relationships within and across the legal (For
example, Court, School), political (For example, Government),
economic (For example, Business) and familial (For example,
Marriage) institutions of the society.
v. Ecological Analysis : Concerned with spatial explanation.
vi. Cultural Analysis : Concerned with the norms, values, practices,
traditions and ideologies of a culture.
vii. Societal Analysis : Concerned with gross factors such as urbanisation,
industrialisation, education, wealth, etc.
2. Methods Triangulation
Methods triangulation can occur at the level of (a) design or (b) data
collection.
(a) Design Level Triangulation : Methods triangulation at the design
level has also been called between-method triangulation. Design methods
triangulation most often uses quantitative methods combined with
qualitative methods in the study design. There is simultaneous and
sequential implementation of both quantitative and qualitative methods.
Theory should emerge from the qualitative findings and should not be
forced by researchers into the theory they are using for the quantitative
portion of the study. The blending of qualitative and quantitative
approaches does not occur during either data generation or analysis.
Rather, researchers blend these approaches at the level of interpretation,
merging findings from each technique to derive a consistent outcome. The
process of merging findings "is an informed thought process, involving
judgment, wisdom, creativity, and insight and includes the privilege of
creating or modifying theory". If contradictory findings emerge or
researchers find negative cases, the investigators most likely will need to
study the phenomenon further. Sometimes triangulation design method
might use two different qualitative research methods. When researchers
combine methods at the design level, they should consider the purpose of
the research and make a logical argument for using each method.
(b) Data Collection Triangulation : Methods triangulation at the data
collection level has been called within-method triangulation. Using
methods triangulation at the level of data collection, researchers use two
different techniques of data collection, but each technique is within the
same research tradition. The purpose of combining the data collection
methods is to provide a more holistic and better understanding of the
phenomenon under study. It is not an easy task to use method
triangulation; it is often more time consuming and expensive to complete a
study using methods triangulation.
3. Investigator Triangulation
Investigator triangulation occurs when two or more researchers with
divergent backgrounds and expertise work together on the same study. To
achieve investigator triangulation, multiple investigators each must have
prominent roles in the study and their areas of expertise must be
complementary. All the investigators discuss their individual findings and
reach a conclusion, which includes all findings. Having a second research
expert examine a data set is not considered investigator triangulation. Use
of methods triangulation usually requires investigator triangulation
because few investigators are expert in more than one research method.
4. Theory Triangulation
Theory triangulation incorporates the use of more than one lens or theory
in the analysis of the same data set. In qualitative research, more than one
theoretical explanation emerges from the data. Researchers investigate the
utility and power of these emerging theories by cycling between data
generation and data analysis until they reach a conclusion.
5. Multiple Triangulation
It uses a combination of two or more preceding triangulation techniques in
one study.
Reducing Bias in Qualitative Data Analysis:
Bias can influence the results. The credibility of the findings can be
increased by:
a. Using multiple sources of data. Using data from different sources
helps in cross-checking the findings. For example, combine and compare
data from individual interviews with data from focus groups and an
analysis of written material on the topic. If the data from these different
sources point to the same conclusions, the findings are more reliable.
b. Tracking choices. The findings of the study will be more credible if
others understand how the conclusions were drawn. Keep notes of all
analytical decisions to help others follow the reasoning. Document reasons
for the focus, category labels created, revisions to categories made and any
observations noted concerning the data while reading and re -reading the
text.
c. Document the process used for data analysis. People often see and
read only what supports their interest or point of view. Everyone sees data
from his or her perspective. It is important to minimise this selectivity.
State how data was analysed clearly so that others can see how decisions
were made, how the analysis was completed and how the interpretations
were drawn.
d. Involving others. Getting feedback and input from others can help
with both analysis and interpretation. Involve others in the entire analysis
process, or in any one of the steps. Have several people or another person
review the data independently to identify themes and categories. Then
compare categories and resolve any discrepancies in meaning.

Drawbacks to be Avoided:
a. Do not generalise results. The goal of qualitative work is not to
generalise across a population. Rather, a qualitative data collection
approach seeks to provide understanding from the respondent's
perspective. It tries to answer the question "why". Qualitative data provide
for clarification, understanding and explanation, not for generalizing.
b. Choose quotes carefully. Use of quotes can not only provide valuable
support to data interpretation but is also useful in directly supporting the
argument or illustrating success. However, avoid using people's words out of
context or editing quotes to exemplify a point. Use quotes keeping in mind
the purpose for including quotes. Include enough of the text to allow the
reader to decide what the respondent is trying to convey.
c. Respect confidentiality and anonymity when using quotes. Even if the
person's identity is not noted, others might be able to identify the person
making the remark. Therefore, get people's permission to use their words.
d. Be aware of, state and deal with limitations. Every study has
limitations. Presenting the problems or limitations encountered when
collecting and analysing the data helps others understand the conclusions
more effectively.
Check Your Progress – II
Explain the meaning of the following terms :
(a) Analytical Induction:
(b) Constant Comparison:
(c) Triangulation:
SUGGESTED READINGS
Bogdan, R. B. and Biklen, S. K. Qualitative Research for Education: An
Introduction to Theory and Methods. Third Edition. Needham Heights,
MA: Allyn and Bacon. 1998.
Miles, M. B. and Huberman, M. A. Qualitative Analysis: An Expanded
Sourcebook. Thousand Oaks, CA: Sage. 1994.
Katz, J. "Analytic Induction," in Smelser and Baltes (eds), International
Encyclopedia of the Social and Behavioral Sciences. 2001.
Ragin, C. C. Constructing Social Research: The Unity and Diversity of
Method. Pine Forge Press, 1994.
Strauss, A. and Corbin, J. Basics of Qualitative Research: Grounded
Theory Procedures and Techniques. Newbury Park, CA: Sage. 1990.


6C
DATA ANALYSIS AND REPORT
WRITING -III
(RESEARCH REPORTING)
Unit Structure:
6C.0 Objectives
6C.1 Introduction
6C.2 Meaning and purpose of Research Report
6C.3 Types of Research Report
6C.3.1 Format
6C.3.2 Style
6C.3.3 Mechanism of report writing with reference to Dissertation and
thesis and papers.
6C.4 Bibliography
6C.5 Evaluation of Research Report
6C.0 OBJECTIVES :
After reading this unit, the student will be able to
 Decide the style, format and mechanisms of writing a research report.
 Write down bibliography correctly and comprehensively.
 Explain how to write a research report.
6C.1 INTRODUCTION:
Educational research is shared and communicated to others for
dissemination of knowledge. After completion of research activities, the
researcher has to report the entire activities that are involved in research
process systematically in writing. For clear and easy understanding of
readers, writing a good research report requires knowledge of the types of
research reporting, rules for writing and typing, format and style of
research reporting and the body of the report. However, scholarship,
precision of thought and originality of a researcher cannot be undermined
in producing a good research report.
6C.2 MEANING AND PURPOSE OF RESEARCH
REPORT
1. Meaning of Research Report:
The purpose of a research report is to convey to the interested persons the
whole result of the study in sufficient detail and to enable them to determine
for themselves the validity of the conclusions. As the culmination of the research
investigation, the research report contains a description of different stages
of the survey and the conclusions arrived at. Thus it is an end product of a
research activity which gives an account of a long journey on the path of
finding a new knowledge or modified knowledge.
Writing a research report is a technical task as it requires not only skill on
the part of the researcher but also considerable effort, patience and
penetration, an overall approach to the problem, data and analysis along
with grasp over language and greater objectivity, all springing from
considerable thought.
Writing a research report also involves adequate planning and a vast
amount of preparation. That apart, perfection of research report is also
attributed to coherence of thought, creativity and intelligence of the
researcher.
Although a definite standard criterion for the organisation is not possible,
a good report writer should always be conscious about the effective and
purposeful communication with the society by conveying the interested
persons the entire o utcome of the study so as to ensure each reader to
comprehend the data and to enable himself to cognize the validity of the
conclusions. Consideration of certain questions like who says ‘what is it
about’, ‘to whom’, ‘in what manner’ and ‘of what use’ will enable the
researcher in preparing a standard research report.
No uniform research report can be prepared to cater to the needs of
different categories of audiences. The report should always incorporate the
material which will be of interest to the target audience, be it an
investigator of fundamental research or applied research, practitioners,
policy formulators, funding agents or sponsors or even the general public.
To a report writer, the prima facie task may appear an easy affair. But in
real terms this is a herculean task as uncertainty about target group results
in ineffective communication.
2. Purpose of Research Report:
A good research report not only disseminates knowledge, but also presents
the findings for expansion of the horizon of knowledge. That apart, it also
checks the validity of the generalization and inspires others to carry on
related or allied problems.
The purpose of the research report may be discussed under the following
heads:
1. Transmission of Knowledge:
The knowledge that has been obtained on the basis of research needs
transmission for proper utilization of the resources invested. For that
reason, it is always advisable to prepare the report in a written manner
so that it can also provide knowledge to the layman in understanding
various social problems.
2. Presentation of Findings:
Society is more concerned with the finished product in terms of output of
research which has the input of immense money, human resources and
precious time. Therefore, the social utility of the research report lies in its
exposure to the laymen as well as its submission to the sponsoring agency
of the project.
Whereas people may acquire knowledge about various social problems in
the widest possible manner, the sponsoring agency may take the credit of
the conduct of a piece of successful research. Even interesting findings
may draw the attention of the world community through mass media. That
apart, it may also result in legislative or ameliorative measures.
3. Examining the Validity of the Generalizations:
Submission of the report enables the researchers to examine the validity
and the authenticity of the generalizations. For that purpose the report
must be prepared and presented in an organized form. Thereafter it can be
checked and the discrepancy, if any, in generalizations, practical or real
can be dispelled and the facts can be re -examined and reorganized.
4. Inspiration for Further Research:
Research report inspires others to undertake further research in the same
line or in any other inter -disciplinary fields. If the report appears to be
interesting and a novel one, it is more likely to draw the attention of the
social scientists.
Planning and Organisation of a Report:
At the outset, before the commencement of report writing, the researcher
needs accurate planning and organisation of study materials to be used
prudently. Simple accumulation of masses of data would not make proper
sense; only when such data are arranged in a logical and coherent manner
within the framework of an overall structure can they be construed to be
planned and organized.
When proper planning and organisation are made the following positive
outcomes are obtained:
(i) Ideas and data are screened, i.e. only those ideas and data having
relevance to the study are incorporated and the rest are left out;
(ii) The report is marked by greater synthesis of facts with clear -cut
explanation;
(iii) The output of research becomes easily intelligible to the readers;
(iv) Transitions in the passing of ideas are smoothened;
(v) Presents facts sequentially and maintains their unity; and
(vi) Provides the readers with a comprehensive report in a well integrated
manner.
6C.3 TYPES OF RESEARCH REPORT:
Research reports mainly take the form of a thesis, dissertation, journal
article and a paper to be presented at a professional meeting. Research
reports vary in format and style. For example, there are differences found
between a research report prepared as a thesis or dissertation and a research
report prepared as a manuscript for publication.
The dissertation and thesis are more elaborate and comprehensive, while
research papers prepared for journal articles and professional meetings are
more precise and concise.
6C.3.1 Format:
Format refers to the general pattern of organisation and arrangement of the
report. It is an outline that includes sections and subsections or chapters
and subchapters or headings and subheadings followed to write research
report. All research reports follow a format that is parallel to the steps
involved in conducting a study. The format of a research report is
generally well spelled out in contents. Different universities, institutions
and organizations publishing professional journals follow style manuals
prepared on their own. Some institutions follow style manuals prepared
by other professional bodies like the American Psychological
Associations, the University of Chicago and the Harvard Law Review
Association. The Publication Manual of the American Psychological
Association (APA), the Chicago Manual of Style, and A Uniform System
of Citation (USC) published by the Harvard Law Review Association are some of
the worth-mentioning style manuals that are followed by researchers to
follow format and style while writing research reports.
The APA format is widely followed because it eliminates formal
footnotes. It provides detailed information about research format for all
types of research reports on various behavioural and social science
disciplines. The CMS presents guidelines for use of quotations,
abbreviations, names and terms and distinctive treatment of words,
numbers, tables, mathematics in type and writing footnotes. Some
historians and ethnographers prefer to use the CMS and USC.
The common format used to write the research report of quantitative studies
for a degree requirement is as follows.
Preliminary pages
1. Title Page
a) Title
b) Degree requirement
c) University or institution’s name
d) Author’s name
e) Supervisor’s name
f) University Department
g) Year
2. Acknowledgements
3. Supervisor’s Certificate
4. Table of contents
5. List of Tables
6. List of Figures
Main Body of the Report
1. Chapter – I :Introduction
a. Theoretical Framework
b. Rationale of the study
c. Statement of the problem
d. Definitions of terms
e. Objectives
f. Hypothesis
g. Scope and Delimitations of the study
h. Significance of the study
2. Chapter II : Review of Related Literature
3. Chapter III : Methodology and Procedures
a. Design and Research method
b. Population and sample
c. Tools and techniques of data collection
d. Techniques of data analysis
4. Data Analyses
5. Results and Discussions
6. Conclusions and Recommendations
7. Bibliography
8. Appendices
The common format followed for qualitative research including historical
and analytical research is different from the format followed in
quantitative research. The common format used usually to write research
report of qualitative studies for degree requirements is as follows.
1. Preliminary pages (same as in quantitative research)
2. Introduction
a) General problem statement
b) Preliminary Research Review
c) Foreshadowed Problems
d) Significance of the study
e) Delimitations of the study
3. Design and Methodology
a) Site selection
b) Researcher’s Role
c) Purposeful / Theoretical Sampling
d) Data collection strategies
4. Qualitative Data analysis and Presentation
5. Presentation of Findings : An Analytical interpretation
6. Bibliography
7. Appendices
The common format followed for writing a research report as an article or
a paper for a journal and seminar is as follows:
1. Title and author’s name and address
2. Abstract
3. Introduction
4. Method
a) Sample
b) Tools
c) Procedure
5. Results
6. Discussions
7. References
Check Your Progress - I
1. What is the format of writing a dissertation?
6C.3.2 Style :
Style refers to the rules of spelling, capitalization, punctuations and typing
followed in preparing the report. A researcher has to follow some general
rules for writing and typing a research report. The rules that are applicable
both for quantitative and qualitative research report are as follows :
1. The research report should be presented in a creative, clear, concise
and comprehensive style. Literary style of writing is to be replaced by
scientific and scholarly style reflecting precise thinking. Descriptions
should be free from bias, ambiguity and vagueness. Ideas need to be
presented logically and sequentially so that the reader finds no
difficulty in reading.
2. The research report should be written in a clear, simple, dignified and
straight forward style, sentences should be grammatically correct.
Colloquial expressions, such as ‘write up’ for report and ‘put in’ for
insert, should be avoided. Even great ideas are sometimes best
explained in simple, short and coherent sentences. Slang, flippant
phrases and folksy style should be avoided.
3. Research report is a scientific document but not a novel or treatise. It
should not contain any subjective and emotional statements. Instead, it
should contain factual and objective statements.
4. Personal pronouns such as I and me, and active voice should be
avoided as far as possible. For example, instead of writing I randomly
selected 30 subjects, it is advisable to write thirty subjects were
selected randomly by the investigator.
5. Sexist language should be replaced by non -sexist language while
writing research report. Male or female nouns and pronouns (he and
she) should be avoided by using plurals. For example, write children
and their parents have been interviewed rather than child and his
parents were interviewed.
6. Instead of using titles and first names of the cited authors, last name is
needed. For example, instead of writing Professor John Dewey, write
Dewey.
7. Contracted forms of modal auxiliaries and abbreviations should be
avoided. For example, shouldn't, can't, couldn't should not be used.
However, abbreviations can be used to avoid repetition if the same
has been spelled out with the abbreviation in parentheses. For
example, the researcher can write NCERT if he/she has used NCERT in
parentheses in his/her earlier sentences, like National Council of
Educational Research and Training (NCERT). There are a few
exceptions to this rule for well-known abbreviations such as IQ.
8. Use of tense plays an important role in writing a research report. Past
tense or present perfect tense is used for the review of related literature
and description of the methodology, procedure, results and findings of the
study. Present tense is appropriate for discussing results and presenting
research conclusions and interpretations. Future tense, except in
research proposals, is rarely used.
9. Economy of expression is important for writing a research report.
Long sentences and long paragraphs should be avoided. Short, simple
words are better than long words. It is important that thought units and
concepts are ordered coherently to provide a reasonable progression
from paragraph to paragraph smoothly.
10. Fractions and numbers which are less than ten should be expressed in
words. For example, six schools were selected or fifty percent of
students were selected.
11. Neither standard statistical formulae nor computations are given in the
research report.
12. Research report should not be written hurriedly. It should be revised
many times before publication. Even typed manuscripts require to be
thoroughly proofread before final typing.
13. Typing is very important while preparing research report. Use of
computer and word processing programme has made the work easy.
However, following rules of typography require to be followed.
i) All material should be double spaced.
ii) A good quality bond paper, 8½ by 11 inches in size and 13 to 16
pound in weight, should be used.
iii) Only one side of the sheet is used in typing.
iv) The left margin should be 1½ inches. All other margins, i.e. the top,
the bottom and the right, should be 1 inch.
v) Times New Roman or Bookman Old Style with a 12-point font can
be used for typing words in English, and book titles can be italicized.
vi) Direct quotations not over three typewritten lines in length are
included in the text and enclosed in quotation marks. Quotations of
more than three lines are set off from the text in a double – spaced
paragraph and indented five spaces from the left margin without
quotation marks. However, original paragraph indentations are
retained. Page numbers are given in parentheses at the end of a direct
quotation.
Check Your Progress -II
1. State the style in which a research report needs to be written.
6C.3.3 Mechanisms of Writing Dissertation and Thesis:
In reality, the terms dissertation and thesis carry the same meaning. Thesis
is an English (UK) term whereas dissertation is an American term.
However, in India, the term thesis is used to denote work carried out for
Ph.D. degree whereas the term dissertation is used to denote the work
carried out for M. Ed. and M.Phil. degrees especially in the academic
discipline of Education. Both follow the format or outline stated earlier
under the format section. Thesis and dissertation should be complete and
comprehensive. The main sections of a dissertation and thesis are (i)
Preliminary pages, (ii) Main b ody of the report and (iii) Appendices.
i) Preliminary pages : The Preliminary pages include title page,
supervisor’s certificate, acknowledgement page, table of contents, list of
tables and figures. The title page usually includes the title of the report, the
author's name, the degree requirement, the name and location of the
college or university awarding the degree and the date or year of
submission of the report. Name, designation and institutional affiliation of
the guide are also written. The title of a dissertation and thesis should
clearly state the purpose of the study. The title should be typed in capital
letters, should be centred, in an inverted pyramid form and when two or
more lines are needed, should be double spaced. An example of the title
page is given in the following box –1.
BOX - 1

DEVELOPMENT OF SCIENCE CONCEPTS IN HEARING
IMPAIRED CHILDREN STUDYING IN SPECIAL SCHOOLS
AND INTEGRATED SETTINGS

Thesis submitted to the University of
Mumbai for the Degree of
DOCTOR OF PHILOSOPHY

in

ARTS (EDUCATION)

By

As per the requirement of some universities, theses include a certificate
of the supervisor under whose guidance or supervision the research work
was completed.
Most theses and dissertations include an acknowledgement page. This
page permits the researcher to express appreciation of persons who have
contributed significantly to the completion of the report. It is acceptable to
thank one's own guide or supervisor who helped at each stage of the
research work, teachers, students or principals who provided data for the
research and so on. Only those persons who helped significantly for
completion of the research work should be acknowledged.
The table of contents is an outline of the dissertation or thesis which
indicates the page on which each major section (chapter) and subsection
begins.
The list of tables and figures are given in a separate page that gives
number, title of each table and figure and page on which it can be found.
Entries listed in the table of contents should be identical to headings and
subheadings in the report, and table and figure titles should be the
same titles that are given to the actual tables and figures in the main body
of the report.
The main body of the report : The main body of the report includes
introduction, review of related literature, methodology and procedures,
results and discussion, conclusions and recommendations and appendices.
The introduction section includes a theoretical framework that introduces
the problem, significance of the study both from theoretical and practical
points of view, description of the problem, operational definition of the
terms, objectives, statement of hypotheses with rationale upon which each
hypothesis is based and sometimes delimitations of the study. The
problem requires to be stated as an interrogative statement or a series of
questions to which answers are to be sought by the researcher through
empirical investigation. Abstract terms and variables used in the problem
require to be operationally defined.
The problem should be so stated that it aims at finding the relationship
between two variables.
The hypotheses stated should be supported by the rationale deduced from
the previous research studies or experiences, with evidences. A hypothesis,
which is a tentative answer to the research question, should be stated
concisely and clearly so that it can be tested statistically or logically with
evidences. An example of an hypothesis based on the rationale is given in
the Box – 2.
Box - 2

Rationale - 1

Cognitive development progresses through four
successive stages from a Piagetian perspective, namely,
sensory-motor, pre-operational, concrete operational
and formal operational stages. Each stage starts at a
specific age and is characterized by the development of
some specific concepts. Hence, development of science
concepts is related to age. It is assumed that age plays a
vital role in the development of concepts. The hypothesis
derived from this rationale is:
The delimitation of the study should include such aspects as variables,
sample, area or site, rating tools and techniques to which the study has
been delimited.
The second chapter of the main body of the report includes review of
literature. In this chapter, the past research works relating to the present
study under report should be described and analysed. The description,
which includes the last name of the previous researcher with the year of study
in parenthesis, covers mainly the method and findings briefly and precisely.
Mere description of studies and findings has no meaning unless those
descriptions and findings are analysed critically to find out research gaps to
be bridged by the present study. Therefore, it is required that after
description of the previous research work, the research report should
contain a critical appraisal to find out the significance of the study.
The methodology and procedure section includes the description of
subjects, tools and instruments, design, procedures. The description of
subjects includes a def inition and description of population from which
sample is selected, description of the method used in selecting sample and
size of the sample, the description of population should indicate its size,
major characteristics such as age, grade level, ability level and socio -
economic status. The method and sampling techniques used for selection
of sample requires to be described in detail along with the size of the
sample.
The instruments and tools used for investigation require detailed
description. The description includes the functions of the instrument, its
validity, reliability and scoring procedures. If an instrument is developed
by the investigator, the description needs to be more detailed, specifying
the procedures followed for developing the tools, steps used for
determining validity and reliability, response categories, scoring pattern,
norms, if any, and guidelines for interpretations. A copy of the instrument
with scoring key and other pertinent information related to the instrument
are generally given in appendix of the dissertation and thesis but not given
in the main body of the report.
The description of the design is given in detail. It also includes a rationale
for selection of the design.
The results section describes the statistical techniques applied to data with
justification, preselected alpha levels and the result of each analysis.
Analysis of data is made under subheadings pertaining to each hypothesis.
Tables and figures need to present findings (in summary form) of
statistical analysis in vertical columns and horizontal rows. Graphs are
also given in the main body of the report. Tables and figures enable the
reader to comprehend and interpret data quickly. It is advisable to use
several tables rather than to use one table that is crowded. Good tables and
figures are self-explanatory. Each table should be presented on a single
page. Large tables or graphs should be reduced to manuscript page size
either by photostat or some other process of reproduction. The word table
or figure is centred between the page margins and typed in capital letters,
followed by the table or figure number in Arabic numerals. Tables and
figures are numbered continuously but separately for each chapter. The title
of the table and figure is placed in double space below the word table and
figure. The title of the table and figure should be brief and clear indicating
the nature of table presented. Column headings and row headings of a
table should be clearly labeled. If no data is available for a particular cell,
indicate the lack by a dash (-), rather than a zero (0).
Each table and figure is followed by description systematically in simple
language with statistical or mathematical language in parenthesis. An
example in given in Box –3.
The results that emerged should be followed by discussion and interpretation.
Each result is discussed in terms of the original hypothesis to which it
relates, and in terms of its agreement or disagreement with previous results
obtained by other researchers in other studies. Sometimes, the researcher uses
a separate section titled 'Discussions' where all the results that emerged are
explained either individually or jointly, both at micro level and macro level.
The conclusion and recommendation chapter includes description of the
major findings, discussion of the theoretical and practical implications of
the findings and recommendations for future research or future action. In
this section, a researcher is free to discuss any possible revisions and
additions to existing theory and to encourage studies designed to test
hypotheses suggested by the results. The researcher may also discuss the
implications of findings for educational practice and suggest studies that
can be replicated in other settings. The researcher may also suggest further
studies to be designed for investigating different dimensions of the problem
investigated.
The references or bibliography section of the dissertation and thesis
consists of a list of all the sources, arranged alphabetically by authors' last
names, that were cited in the report. Most of the authors' names are cited in
the introduction and review of related literature sections. Only primary
sources that are cited in the body of the dissertation and thesis are included
in the references.
Appendices are necessary in thesis and dissertation reports. Appendices
include information and data pertinent to the study which are not
important enough to be included in the main body of the report or are too
lengthy. Tests, questionnaires, cover letters, raw data and data analysis
sheets are included in the appendices. Sometimes, a subject index that
includes important concepts used in the main body of the report is also
given; the list of those concepts is arranged alphabetically with the page on
which each can be found. Appendices are named alphabetically, followed
by a short title relating to the theme. (For example, APPENDIX A: Non-
verbal Test on concept attainment in Biology.)
MECHANISM OF WRITING PAPERS :
Paper includes a research report prepared for publication in a journal
or for presentation in seminar and professional meeting. Paper
prepared for journal and seminar follows the same mechanism of
writing, style and format. The main purpose of a paper is to share
the ideas that emerged with other researchers, which is not possible through a
dissertation or thesis. The content and format of a paper and a thesis are
very similar except that the paper is much shorter. A lengthy thesis or
dissertation may again be turned into one or two papers or articles. The
research paper follows the format given below.
1. Title
2. Author’s name
3. Abstract (about 100 to 120 words)
4. Introduction
5. Method
6. Results
7. Discussion
8. Reference
Writing of the title of a research paper follows the same mechanism as is
followed in a dissertation or thesis. Below the title, the author's name and
address is given.
An abstract of the paper, consisting of 100 to 120 words and containing
mainly the objectives, methods and findings, is given before the main body of
the paper. Other preliminary pages of the dissertation and thesis are not
required in the research paper either to be published in a journal or
presented in a seminar.
The introduction section of a research paper consists of a brief description
of theoretical background, agreements and disagreement of previous
researchers on findings related to the topic under report, objectives and
hypotheses.
The method section deals with sample size and sampling, instruments and
tools, design and procedure of collecting data. In dissertation and thesis,
detailed description is necessary, whereas in a research paper the same is to
be written very precisely and comprehensively. The author has to exercise
judgment in determining which are the critical aspects of the study and
which aspects require more in depth description.
The result section of a research paper includes tables and figures including
graphs. However, the tables and figures require to be described in the light
of the hypothesis for its acceptance and rejection. The findings described
should be supported by statistical values and alpha levels with
mathematical signs of less than (<) or greater than (>) etc. in parenthesis.
For example, as it can be seen in table 1, high creative and low creative
teachers differed significantly on attitude towards class room teaching (t =
4.24; df = 121; p<.01), child centred practices (t = 2.14; df = 131; p<.05) ,
educational process (t = 3.38; df = 131; p<.01) and pupils (t = 2.87; df =
131; p < .01) in favour of high creative teachers, as the mean attitude towards
class room teaching of high creative teachers is greater than that of their
counterparts.
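The statistical values quoted in such sentences come straight from the analysis of the data. As a minimal illustration added here (it is not part of the original example), the following Python sketch shows how a t value, degrees of freedom and alpha level of the kind quoted above could be computed with SciPy and put into the conventional reporting form; the scores and group labels used in it are purely hypothetical.

    # Illustrative sketch only: computing and reporting an independent-samples t-test.
    # The scores below are invented for demonstration.
    from scipy import stats

    high_creative = [78, 82, 75, 88, 91, 80, 77, 85]   # hypothetical attitude scores
    low_creative = [65, 70, 62, 68, 72, 66, 71, 64]

    t, p = stats.ttest_ind(high_creative, low_creative)  # Student's t-test
    df = len(high_creative) + len(low_creative) - 2      # degrees of freedom

    # Express the result in the form used in papers: (t = ...; df = ...; p < ...)
    if p < .01:
        alpha = "p < .01"
    elif p < .05:
        alpha = "p < .05"
    else:
        alpha = "not significant"
    print(f"(t = {t:.2f}; df = {df}; {alpha})")

Such a sketch only produces the values; the researcher still has to state, in words, which group the difference favours and what it means.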
The discussion section is very important in a research paper. Each finding
is discussed in the light of its agreement and disagreement with the
previous findings, followed with justification based on previous theory and
the existing body of knowledge. In this section, the researchers are free to give
their critical judgment on new dimensions that emerged out of
the study in order to add something new to the existing body of knowledge
or for revision and modification of theory. Critical and analytic
description is highly essential in discussion.
Lastly, names of authors cited in the paper are given in alphabetical order,
beginning with the last name, in the references.
Check Your Progress -III
1. Explain the mechanisms of writing a dissertation.
6C.4 BIBLIOGRAPHY :
A bibliography is a list of all the sources which the researcher actually used
for writing a research report. Bibliography and references are sometimes
used interchangeably, but both are different from each other. References
consist of all documents including books, journal articles, technical
reports, computer programmes and unpublished works that are cited in the
main body of a research report, i.e. dissertation, thesis, journal article,
seminar paper, etc. References include mainly primary sources. A
bibliography, in contrast, contains everything that is either cited or not
cited in the body of the report but are used by the researcher. It includes
both primary and secondary sources. The common trend has been to use
bibliography in dissertation and thesis , and reference in journal articles
and papers.
The American Psychological Association (APA) Publication Manual, the
Chicago Manual of Style and the Uniform System of Citation of the Harvard
Law Review Association are available to guide a researcher in writing a
bibliography. However, the APA style is widely used to write the
bibliography in educational research. It provides guidelines for writing
different types of sources such as books, journal articles, unpublished
dissertations and theses, unpublished papers presented at meetings and
seminars, unpublished manuscripts, technical reports, etc.
Bibliography is arranged in alphabetical order by the last names of the first
authors. When no author is given, the first word of the title or the sponsoring
organisation is used to begin the entry. Each entry in the bibliography
starts at the left margin of the page, with subsequent lines double spaced
and indented. Extra space is not given between the entries.
Following illustrations provide information for writing different type of
entries in the bibliography.
1. Book:
Vaizey, J. (1967). Education in the modern world. New York : McGraw
Hill.
2. Book with multiple authors:
Barzun, J. & Graff, H. F. (1977), The modern researcher, New York :
Harcourt, Brace, Jovanovich.
3. Book in subsequent editi on:
Hallahan, D.P. & Kauffman, J.M. (1982), Exceptional children (2nd ed.),
Englewood Cliffs, NJ : Prentice Hall.
4. Editor as author:
Mitchell, J. V., Jr (Ed.), (1985). Mental measurement yearbook (9th ed.),
Highland Park, NJ : Gryphon Press.
5. No author given:
Prentice-Hall author's guide. (1978), Englewood Cliffs, NJ : Prentice-Hall.
6. Corporate or association author
American Psychological Association, (1983), Publication manual (3rd ed.),
Washington, DC : Author
7. Part of a series of books:
Terman, L.M. & Oden, M.H. (1947), Genetic studies of genius series :
Vol. 4. The gifted child grows up. Stanford, CA : Stanford University
Press.
8. Chapter in an edited book:
Kahn, J.V. (1984), Cognitive training and its relationship to the language
of profoundly retarded children. In J.M. Berg (Ed.), Perspectives and
progress in mental retardation. Baltimore : University Park Press, 211-219.
9. Journal article:
Seltzer, M.M. (1984) Correlates of community opposition to community
residences for mentally retarded persons. American Journal of Mental
Deficiency, 89, 1-8.
10. Magazine article:
Meer, J. (1984, August), Pet theories, Psychology Today, pp. 60-67.
11. Unpublished paper presented at a meeting:
Schmidt, M., Khan, J.V. & Nucci, L. (1984, May), Moral and social
conventional reasoning of trainable mentally retarded adolescents. Paper
presented at the annual meeting of the American Association on Mental
Deficiency, Minneapolis, MN.
12. Thesis or dissertation (unpublished):
Best, J.W. (1948), An analysis of certain selected factors underlying the
choice of teaching as a profession. Unpublished doctoral dissertation,
University of Wisconsin, Madison.
13. Unpublished manuscripts:
Kahn, J.V., Jones, C., & Schmidt, M. (1984). Effect of object preference
on sign learnability by severely and profoundly retarded children : A pilot
study. Unpublished manuscript, University of Illinois at Chicago.
Kahn, J.V., (1991). Using the Uzgiris and Hunt scales to understand sign
usage of children with severe and profound mental retardation, Manuscript
submitted for publication.

14. Chapter accepted for publication:
Kahn, J.V. (in press), Predicting adaptive behavior of severely and
profoundly mentally retarded children with early cognitive measures.
Journal of Mental Deficiency Research
15. Technical report:
Kahn, J.V., (1981) : Training sensory-motor period and language skills
with severely retarded children. Chicago, IL : University of Illinois at
Chicago. (ERIC Document Reproduction Service, No. ED 204 941).
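Since a bibliography is essentially a consistently formatted list arranged by the authors' last names, its entries can even be assembled mechanically. The short Python sketch below is an added illustration, not a prescription of the APA manual; it reuses two of the book entries shown above to demonstrate the alphabetical ordering and the Author (Year). Title. Place : Publisher pattern used for books.

    # Illustrative sketch only: formatting and alphabetising simple book entries.
    entries = [
        {"author": "Vaizey, J.", "year": 1967,
         "title": "Education in the modern world",
         "place": "New York", "publisher": "McGraw Hill"},
        {"author": "Barzun, J. & Graff, H. F.", "year": 1977,
         "title": "The modern researcher",
         "place": "New York", "publisher": "Harcourt, Brace, Jovanovich"},
    ]

    def book_entry(entry):
        # Author (Year). Title. Place : Publisher.
        return (f'{entry["author"]} ({entry["year"]}). {entry["title"]}. '
                f'{entry["place"]} : {entry["publisher"]}.')

    # The bibliography is arranged alphabetically by the authors' last names.
    for entry in sorted(entries, key=lambda e: e["author"]):
        print(book_entry(entry))

The same idea underlies reference-management software, although in practice a researcher will usually rely on such a tool or format the entries by hand.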
6C.5 EVALUATION OF RESEARCH REPORT:
Evaluation of a research report is essential to find out major
problems and shortcomings. Through a critical analysis, the student
may gain some insight into the nature of the research problem, the
methodology for conducting research, the process by which data are
analysed and conclusions are drawn, the format of writing the research
report and the style of writing. The following questions are suggested to
evaluate each component of a research report.
1. The Title and Abstract
Are the title and abstract clear and concise?
Do they promise no more than the study can provide?
2. The problem
 Is the problem stated clearly?
 Is the problem researchable?
 Is background information on a problem presented?
 Is the significance of the problem given?
 Are the variables defined operationally?
3. The Hypothesis
 Are hypotheses testable and stated clearly?
 Are hypotheses based on sound rationale?
 Are assumptions, limitations and delimitations stated?
4. Review of Related Literature
Is it adequately covered?
 Are most of the sources primary?
 Are important findings noted?
 Is it well organised?
 Is the literature given directly relevant to the problem?
 Have the references been critically analysed and the results of studies
compared and contrasted?
 Is the review well organised?
 Does it conclude with a brief summary and its implications fo r the
problem investigated?
5. Sample
Are the size and characteristics of the population studied described?
Is the size of the sample appropriate?
Is the method of selecting the sample clearly described?
6. Instruments and Tools
Are data gathering instruments described clearly?
Are the instruments appropriate for measuring the intended variable?
Are validity and reliability of the instruments discussed?
Are systematic procedures followed if the instrument was developed by
the researcher?
Are administration, scoring and interpretation procedures described?
7. Design and Procedure
 Is the design appropriate for testing the hypotheses?
 Are the procedures described in detail?
 Are control procedures described?
8. Results
 Is the statistical method appropriate?
 Is the level of significance given?
 Are tables and figures given?
 Is every hypothesis tested?
 Are the data in each table and figure described clearly?
 Are the results stated clearly?
9. Discussions
Is each finding discussed?
Is each finding discussed in term of its agreement and disagreement
with previous studies?
Are generalizations consistent with the results?

10. Conclusions and Recommendations
Are theoretical and practical implications of the findings discussed?
Are recommendations for further action made?
Are recommendations for further research made?
11. Summary
Is the problem restated?
Are the number and type of subjects and instruments described?
Are procedures described?
Are the major findings and conclusions described?
Check Your Progress -IV
1. What are the criteria of evaluating a research report?
Suggested Readings
Turabian, K. Manual for Writers of Term Papers, Theses, and
Dissertations. (7th ed.) Chicago: University of Chicago Press. 2007.
Cardasco, F. and Gatner, E. (1958). Research Report Writing. New York :
Barnes and Noble.


