Research is a systematic inquiry aimed at understanding a complex social phenomenon or process. The selection of research methods varies with the research problem. On the basis of process, there are two types of research: quantitative research and qualitative research.
Qualitative research:
The objective is to explore a phenomenon to gain understanding by diving deeper into the problem.
The quality of data rather than quantity is given importance.
It uses qualitative methods of data collection such as interviews, focus groups, observation, etc.
It is spiral in nature.
The researcher starts with observation and ends with a theoretical position based on the facts, data, and his/her perception.
It moves from specific observations to general theory; therefore, it is inductive in nature.
A subject is studied in depth.
The conclusions are descriptive rather than predictive.
It follows the interpretivist paradigm and sets aside positivist assumptions and statistical data analysis.
The behavioral aspect of people is studied. (thoughts, beliefs, attitude, values, etc)
Sampling is the selection of a subset (a statistical sample) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians aim for samples that represent the population in question.
It is used to quantify the problem by way of generating numerical data or data that can be transformed into usable statistics.
It is used to quantify attitudes, opinions, behaviours, and other defined variables – and generalize results from a larger sample population.
Quantitative Research uses measurable data to formulate facts and uncover patterns in research.
Quantitative data collection methods are much more structured than Qualitative data collection methods.
Quantitative data collection methods include various forms of surveys – online surveys, paper surveys, mobile surveys and kiosk surveys, face-to-face interviews, telephone interviews, longitudinal studies, website interceptors, online polls, and systematic observations.
Probability sampling best represents the population in quantitative research. So, the correct answer is A, B, and E only.
Sampling methods, with a description and process for each:
Simple random sampling
In a simple random sample, every member of the population has an equal chance of being selected.
Your sampling frame should include the whole population.
Stratified sampling
This sampling method is appropriate when the population has mixed characteristics, and you want to ensure that every characteristic is proportionally represented in the sample.
You divide the population into subgroups (called strata) based on the relevant characteristic (e.g. gender, age range, income bracket, job role).
Quota sampling
In this method, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Dimensional sampling is a variant of quota sampling.
Snowball sampling
If the population is hard to access, snowball sampling can be used to recruit participants via other participants.
The number of people you have access to “snowballs” as you get in contact with more people.
Systematic sampling
This method is similar to simple random sampling, but it is usually slightly easier to conduct.
Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.
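The interval-based selection described above can be sketched in Python. This is a minimal illustration, not a standard library routine; the numbered population of 100 members is hypothetical.

```python
import random

def systematic_sample(population, n):
    """Draw a systematic sample of size n: pick a random start,
    then take every k-th member, where k = len(population) // n."""
    k = len(population) // n          # sampling interval
    start = random.randrange(k)       # random starting point in the first interval
    return [population[start + i * k] for i in range(n)]

population = list(range(1, 101))      # members numbered 1 to 100
sample = systematic_sample(population, 10)
print(sample)                         # 10 members, each 10 positions apart
```

Note that only the starting point is random; once it is fixed, the rest of the sample is determined, which is why systematic sampling is easier to conduct than simple random sampling.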
Measurement is the process of assigning numbers to objects and events in accordance with a set of rules. To grasp the full impact of measurement, it is important to understand the concept of a measurement scale. There are several different kinds of scales: nominal, ordinal, interval, and ratio.
Scales of Measurement:
1) Nominal (Label or category):
With a nominal scale, numbers are assigned to objects or events simply for identification purposes.
For example, participants in various sports have numbers on their jerseys that quickly allow spectators, referees, and commentators to identify them. This identification is the sole purpose of the numbers.
Labeling or naming allows us to make qualitative distinctions or to categorize and then count the frequency of persons, objects, or things in each category.
2) Ordinal (Rank order):
An ordinal scale allows us to rank-order events.
Ordinal numbers are assigned to the order, such as first, second, third, and so on.
For example, grades such as “A,” “B,” “C,” “D,” and “F”; scores are given in terms of high, medium, and low; birth order in terms of firstborn, second-born, or later-born; a list of examination scores from highest to lowest; a list of job candidates ranked from high to low; and a list of the ten best-dressed persons.
Although most psychological scales are probably ordinal, psychologists assume that many of the scales have equal intervals and act accordingly.
The difference in the level of aggression between a score of 1 and a score of 2 is about the same as the difference in the level of aggression between a score of 2 and a score of 3, and so on.
Many researchers believe that these scales do approximate equality of intervals reasonably well, and it is unlikely that this assumption will lead to serious difficulties in interpreting our findings.
3) Interval (Rank order + Equal Intervals):
When we can specify both the order of events and the distance between events, we have an interval scale.
The distance between any two intervals on this type of scale is equal throughout the scale.
The central shortcoming of an interval scale is its lack of an absolute zero point, a location where the user can say that there is a complete absence of the variable being measured. This type of scale often has an arbitrary zero point, sometimes called an anchor point.
For example,
scores on intelligence tests are considered to be on an interval scale.
With intelligence test scores, the anchor point is set at a mean IQ value of 100 with a standard deviation (SD) of 15.
A score of 115 is just as far above the mean (one SD) as a score of 85 is below the mean (one SD). Because we have a relative zero point and not an absolute one, we cannot say that a person with an IQ of 120 is twice as intelligent as a person with an IQ of 60; the scale is simply not designed for such comparisons.
Some additional examples of interval scales are both the centigrade and Fahrenheit scales of temperature, altitude (zero is sea level rather than the center of the earth), and scores on a depression scale or an anxiety scale.
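The arbitrary-zero problem can be shown numerically. In this small illustration (the two temperature readings are assumed values), a Celsius ratio looks meaningful but is not, while the same readings converted to Kelvin, a ratio scale with an absolute zero, give the physically meaningful ratio.

```python
# Interval scales have an arbitrary zero, so ratios are not meaningful.
c1, c2 = 10.0, 20.0                   # two readings in degrees Celsius
print(c2 / c1)                        # 2.0, but 20 C is NOT "twice as hot" as 10 C

# Converting to Kelvin (a ratio scale with an absolute zero) shows why:
k1, k2 = c1 + 273.15, c2 + 273.15
print(k2 / k1)                        # about 1.035, the physically meaningful ratio
```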
4) Ratio (Rank order + Equal Intervals + Absolute zero)
A ratio scale has a number of properties that the others do not. With ratio scales, we can identify rank order, equal intervals, and equal ratios—two times as much, one-half as much.
Ratios can be determined because the zero points are absolute, a true anchor—the complete absence of a property.
Zero weight or height means the complete absence of weight or height. A 100-pound person has one-half the weight of a 200-pound person and twice the weight of a 50-pound person. We can say these things because we know that the starting points for these dimensions or measures are 0.
For example, you might measure a child’s aggressive behavior by counting the number of times that the child inflicts physical harm on another person during a one-week observation period. Clearly, 10 incidents would be twice as many as 5, and 0 incidents would represent the absence of the variable you are measuring.
Frequency counts that represent the number of times that a particular event occurred are a common example of measurement on a ratio scale. But be careful not to confuse this use of frequency with the use of frequency as a summary statistic for data measured on a nominal scale (how many times observations fit a particular category).
Decision tree to determine the appropriate scale of measurement: if the values are categories with no inherent order, the scale is nominal; if they can be ordered but the intervals between them are unequal, ordinal; if the intervals are equal but the zero point is arbitrary, interval; if there is also an absolute zero, ratio.
Hence, nominal < ordinal < interval < ratio is the correct order of measurement scales in increasing order of accuracy, precision, and number of permissible operations.
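The decision sequence for classifying a scale can be written as a small function. The boolean property names here are illustrative, not a standard API.

```python
def scale_of_measurement(ordered, equal_intervals, absolute_zero):
    """Classify a variable's scale of measurement from its three key properties."""
    if not ordered:
        return "nominal"        # categories only (e.g. eye colour)
    if not equal_intervals:
        return "ordinal"        # rank order only (e.g. letter grades)
    if not absolute_zero:
        return "interval"       # rank order + equal intervals (e.g. Celsius)
    return "ratio"              # rank order + equal intervals + true zero (e.g. weight)

print(scale_of_measurement(False, False, False))  # nominal
print(scale_of_measurement(True, False, False))   # ordinal
print(scale_of_measurement(True, True, False))    # interval
print(scale_of_measurement(True, True, True))     # ratio
```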
The distinction among scales becomes of particular importance when we conduct statistical analyses of data. Underlying statistical tests are various
assumptions, including those relating to the scale of measurement. In other words, the scale of measurement for a variable can determine the most appropriate type of statistical analysis of the data.
Examples of Variables for Each Scale of Measurement
Nominal (Label or category)
Type of disorder (schizophrenia, depression, anxiety)
The religious affiliation (none, Catholic, Hindu, Islam, Jewish, other)
Region of the country (Northeast, Midwest, Southwest, etc.)
Eye color (blue, brown, hazel, green)
The flavor of ice cream (vanilla, chocolate, strawberry)
Ordinal (Rank order)
College classification (junior, senior, etc)
Grade on test (A,B,C,D,F)
The national ranking of a sports team (1st, 2nd, 3rd, 4th, 5th, etc)
Reaction time (fastest, 2nd fastest, 3rd fastest, 4th fastest, etc.)
Age classification (Child, teen, young adult, adult, older adult)
Interval (Rank order + Equal Intervals)
Difference between the mean test score and each student's score
Score on the Beck Anxiety scale (score range from 0 to 44, but note that a score of 0 does not represent a complete absence of anxiety)
Score on a Likert-type scale (1, 2, 3, 4, 5, 6, 7)
Weight measured on a scale not calibrated to zero
Temperature measured in degrees Celsius or degrees Fahrenheit
Ratio (Rank order + Equal Intervals + Absolute zero)
Number of college credits completed
Number of correct answers on a test
Number of total points scored in a season by a sports team
Qualitative research is an inquiry process of understanding based on distinct methodological traditions of inquiry that explore a social or human problem.
The researcher builds a complex holistic picture, analyses words, reports detailed views of informants and conducts the study in a natural setting. It is a multi-method focus, involving an interpretive, naturalistic approach to its subject matter.
It also involves the studied use and collection of a variety of empirical materials (case study, personal experience, introspection, life story, interview, observational, historical, interactional, and visual texts) that describe routine and problematic moments and meanings in individuals' lives.
Types of Qualitative Research design
Many varieties of traditions of qualitative studies exist in the social sciences. Creswell (1998) categorized them in the context of their forms, terminologies, and focus as under:
Narrative research designs
Also known as biographical study, it involves the study of an individual and his or her experiences as narrated to the researcher or found in different sources. Biographical writings appear in different fields such as literature, history, anthropology, sociology, education, and psychology. Biographies are presented from different perspectives: literary, historical, anthropological, sociological, educational, and psychological, as well as interdisciplinary.
The focus of the biography remains on telling and inscribing the stories of others. It explores the history of a life, e.g. accounts of major achievements. There are different connotations linked with biographical study, viz. individual biographies, autobiography, life history, and oral history. In all these cases the researcher must take care to be objective in expression, with little researcher interpretation. It must be written in a scholarly way, with a strong historical background of the subject and chronological organization.
Phenomenological Study
A phenomenological study focuses on describing the meaning of the lived experiences of several individuals concerning a concept or phenomenon. Through a phenomenological approach, the researcher explores the structures of consciousness in human experiences. Here experiences comprise both outward appearance and inward consciousness based on memory, image, and meaning.
Grounded Theory Study
This kind of study aims at discovering or generating a theory. Here theory means an abstract analytical scheme of the phenomenon. In other words, a theory is understood as a plausible relationship, as any concept or set of concepts. In this case, the theory is discovered in the context of a particular situation. This situation is one in which individuals interact, take actions, or engage in a process in response to a phenomenon.
The researcher intends to explore how people act and react to a phenomenon. The process involved in data collection can be through continuous visits to the field, interviews with participants, in-depth observations of activities, etc. Through the grounded theory method, a theory is generated in the context of a phenomenon being studied.
Ethnography
Ethnography can be understood as a description and interpretation of a cultural or social group or system. Here the focus of the study remains on examining the patterns of behavior of a group, its customs, and ways of life. This method involves prolonged observation of events where the researcher becomes a part and parcel of the day-to-day lives of the people.
One to one interviews with the members of the group corroborated with participant observation can form the base of such a method. The researcher makes use of ethnography to study the meanings of behavior, language, and interactions of the culture sharing group.
Case Study
Case study as a method of research focuses on the in-depth study of a unit or case in totality. The case may be an individual, a program, an event, an institution, an activity, etc.
The case study method was originally used in medicine to examine the patient's previous development, his health and physical state from the beginning, and many other factors in the past, besides making a careful study of the patient's present condition and symptoms.
Hence, from the above-discussed points, we can conclude that phenomenological design is the most appropriate for depicting lived experiential realities.
Which method of qualitative research focuses on language and the meanings that are given to texts, for the purpose of creating and shaping knowledge and behaviour?
Discourse analysis
Narrative research
Trend analysis
Grounded theory
Answer (Detailed Solution Below)
Option 1 : Discourse analysis
Research in Education MCQ Question 5 Detailed Solution
Qualitative research focuses on the study of complex human and social problems in totality, unlike scientific methods that concentrate on the study of fragmented variables, situations, or events.
It places the main emphasis on the researcher who narrates and interprets phenomena in terms of meanings derived from people's experiences, events, etc. Hence, the human and subjective approach is highlighted.
The studies are conducted in a natural setting i.e., to observe the events without making any manipulations or controls on variables studied.
It involves a variety of data gathering techniques and approaches of qualitative nature viz., case study, interviews, dialogues, observations, personal experience, life story, visual data like photography, etc. These data are gathered from a variety of Qualitative Research sources.
Discourse analysis:
It is an umbrella term for a number of qualitative research approaches that examine the use of language (oral and written texts) in social contexts in helping researchers find answers to their research questions or problems.
It focuses on language and meanings that are given to texts, for the purpose of creating and shaping knowledge and behavior.
It is used in various social science disciplines including sociology, anthropology, social work, social and cognitive psychology, communication studies, socio-legal studies, education, management, and organization studies, each of which is subject to its own assumptions, dimensions of analysis, and methodologies. In discourse analysis, the objects of analysis are varied and may include coherent sequences of sentences, propositions, speech acts, turns, and gaps.
Various types of discourse are used across disciplines, including politics, the media, education, science, medicine, law, and business.
There are several approaches to discourse analysis, such as Speech Act Theory, Interactional Sociolinguistics, Ethnography of Communication, Pragmatics, Conversational Analysis, and Variation Analysis.
Devices for Discourse Analysis:
Cohesion: Cohesion refers to the ties and connections within texts that link different parts of sentences or larger units of discourse.
Coherence: Language users arrive at an interpretation of a text using the knowledge of the world they possess.
Parallelism: Parallelism means placing elements side by side. In some pieces of literature, comparisons or contrasts run side by side with each other; they also help to interpret the whole text.
Speech Events: Speech events are concerned with what people say in different settings; debates, interviews, discussions, and quizzes are different speech events.
Background Knowledge: Background knowledge can be very much helpful in interpreting any text.
Narrative research:
It can be defined as collecting and analyzing the accounts people tell to describe experiences and offer an interpretation.
This approach has been used in many disciplines to learn more about the culture, historical experiences, identity, and lifestyle, etc.
Trend analysis:
A relationship between two quantitative entities is established using trend analysis.
The future of this relationship is projected on the basis of the trend in the past, hence the name trend analysis.
Trend analysis is based on the idea that what has happened in the past gives traders an idea of what will happen in the future. There are three main types of trends: short-, intermediate- and long-term.
Grounded theory:
It involves the collection and analysis of data.
The analysis and development of theories happen after you have collected the data.
It was introduced by Glaser and Strauss in 1967 to legitimize qualitative research.
Hence, Discourse analysis focuses on language and meanings that are given to texts, for the purpose of creating and shaping knowledge and behaviour.
A “sample” is a miniature representation of, and selected from, a larger group or aggregate. The sample provides a specimen picture of a larger whole. This larger whole is termed the “population” or “universe”. In research, this term is used in a broader sense; it is a well-defined group that may consist of individuals, objects, characteristics of human beings, or even the behavior of inanimate objects, such as the throw of a die or the tossing of a coin.
Sampling is the selection of a representative small group (the sample) from the population. The method used for drawing a sample is significant for arriving at dependable results or conclusions. The various sampling methods can be broadly classified into two categories: probability sampling and non-probability sampling.
Simple or unrestricted random sampling:
Simple random sampling is a method of selecting a sample from a finite population in such a way that every unit of the population is given an equal chance of being selected.
In practice, you can draw a simple random sample unit by unit through the following steps:
Define the population
Decide the size of the sample or the number of units to be included in the sample.
Make a list of all the units in the population and number them from 1 to n.
Use either the ‘lottery method’ or ‘random number tables’ to pick the units to be included in the sample.
For example,
you may use the lottery method to draw a random sample by using a set of ‘n’ tickets, with numbers ‘1 to n’ if there are ‘n’ units in the population.
After shuffling the tickets thoroughly, the sample of required size, say x, is selected by picking the required x number of tickets.
The units which have the serial numbers occurring on these tickets will be considered selected.
The assumption underlying this method is that the tickets are shuffled so that the population can be regarded as arranged randomly.
Hence, the process of drawing samples in simple random sampling is: define the target population, decide the sample size, list all the units of the target population, and draw the sample by randomization.
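The lottery-method steps above can be sketched with the standard library's `random.sample`, which performs the shuffle-and-pick in one call. The 50-unit population is hypothetical.

```python
import random

def simple_random_sample(units, x):
    """Lottery method: number the units 1..n and draw x tickets
    without replacement, each unit having an equal chance."""
    tickets = list(range(1, len(units) + 1))   # one numbered ticket per unit
    drawn = random.sample(tickets, x)          # pick x tickets after a thorough shuffle
    return [units[t - 1] for t in drawn]       # units whose serial numbers were drawn

population = [f"unit_{i}" for i in range(1, 51)]   # a hypothetical population of 50
print(simple_random_sample(population, 5))
```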
A tool used for data collection must be reliable, that is, it must have the ability to consistently yield the same results when it is repeatedly administered to the same individuals under the same conditions.
For example, if an individual records his/her responses on various items of a questionnaire and thus provides a certain type of information, he/she should provide approximately the same responses when the questionnaire is administered on a second occasion. If an achievement test is administered to learners and then readministered after a gap of fifteen days, without any special coaching in the subject within those fifteen days, the learners should show a similar range of scores on readministration of the test.
Repeated measurement of an attribute, characteristic, or trait by a tool may provide different results. These may be due either to a real change in the individual's behavior or to the unreliability or inconsistency of the tool itself.
Reliability varies from sample to sample; a tool does not necessarily give the same or a fixed result in varied situations.
It is affected by the variance of scores.
Reliability is the correlation of the testing tool with itself: the two sets of measures obtained by use of the tool are correlated to measure the level of reliability.
If the variation in the results is due to a real change in behavior, the reliability of the tool is not to be doubted. However, if the variation is due to the tool itself, then the tool is to be discarded.
There are various procedures to assess the reliability of a tool. These include (i) the test-retest method, (ii) the alternate or parallel-form method, (iii) the split-half method, and (iv) the rational equivalence method.
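As a rough sketch of one of these procedures, the split-half method correlates scores on the odd-numbered and even-numbered items, then steps the correlation up to full test length with the Spearman-Brown formula. The item-score data below are invented for illustration.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row per person, one column per test item.
    Correlate odd-item totals with even-item totals, then apply
    the Spearman-Brown correction for the full-length test."""
    odd  = [sum(row[0::2]) for row in item_scores]   # half A: items 1, 3, ...
    even = [sum(row[1::2]) for row in item_scores]   # half B: items 2, 4, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)                 # Spearman-Brown step-up

scores = [[1, 1, 0, 1], [1, 0, 1, 1], [0, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0]]
print(round(split_half_reliability(scores), 3))
```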
Research is the systematic and objective analysis and recording of controlled observations that may lead to the development of generalization, principles, or theories, resulting in prediction and possibly ultimate control of events.
Research is an activity as characterized below:
An intellectual activity of a high order;
An investigation of a phenomenon, event or activity;
Aims to discover data and facts and their interpretations;
To arrive at conclusions to formulate new theories and laws or revise the already established theories and laws;
To communicate the results for peer review; and
To be accepted or rejected before adding this new knowledge to the already existing general pool of knowledge.
Purpose of Research
1) Experimentation:
the goal of experimental research conducted in a controlled setting is to test a theory or construct theoretical explanations.
Independent variables are manipulated or introduced, and all other (extraneous) variables are carefully controlled, so that the dependent variable can be measured and conclusions drawn about how the variables are related. Design details such as the introduction of variables, the measurement of dependent variables, and the random assignment of subjects into control and experimental groups guard against threats to validity.
2) Phenomenology:
A phenomenological study focuses on describing the meaning of the lived experiences of several individuals concerning a concept or phenomenon.
The purpose of the phenomenological approach is to illuminate the specific, to identify phenomena through how they are perceived by the actors in a situation.
Through a phenomenological approach, the researcher explores the structures of consciousness in human experiences. Here experiences comprise both outward appearance and inward consciousness based on memory, image, and meaning.
3) Participant observation:
The goal of participant observation research is to understand as fully as possible the situation being studied without disturbing that situation. Any data collecting that is compatible with that goal can be pursued.
It connects the researcher to the most basic of human experiences, discovering through immersion and participation the hows and whys of human behavior in a particular context.
The result of this discovery and systematization is that we not only make ourselves into acceptable participants in some venue but also generate data that can meaningfully add to our collective understanding of human experience.
4) Symbolic interaction:
The symbolic interaction perspective emphasizes the shifting, flexible, and creative manner in which humans use symbols. The process of adjustment and change involves individual interactions and larger-scale features such as norms and order.
Hence, the main purpose of experimentation with control and manipulation of variables is to formulate generalizations leading to theory building.
In the cluster sampling method, the sampling units are groups taken intact rather than individuals taken one by one.
Sampling:
Sampling is the selection of a subset (a statistical sample) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians aim for samples that represent the population in question.
The following table describes the characteristics of the given sampling methods, listing each type of sampling and its method:
Cluster sampling
Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole sample.
Instead of sampling individuals from each subgroup, you randomly select entire subgroups.
Simple random sampling
In a simple random sample, every member of the population has an equal chance of being selected.
Your sampling frame should include the whole population.
Systematic sampling
This method is similar to simple random sampling, but it is usually slightly easier to conduct.
Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.
Dimensional sampling
It is an extension of quota sampling.
The researcher takes into account several characteristics e.g. gender, age, income, residence, and education.
The researcher must ensure that there is at least one person in the study representing each of the chosen characteristics.
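As one sketch from the table above, cluster sampling's "select entire subgroups" step can be illustrated as follows. The school classes and their members are invented for the example.

```python
import random

def cluster_sample(clusters, k):
    """Randomly select k entire clusters; every member of a chosen
    cluster enters the sample (contrast with stratified sampling,
    which samples individuals *within* every subgroup)."""
    chosen = random.sample(list(clusters), k)        # pick k cluster names at random
    return [member for name in chosen for member in clusters[name]]

# Hypothetical population grouped into school classes (the clusters):
classes = {
    "class_A": ["a1", "a2", "a3"],
    "class_B": ["b1", "b2"],
    "class_C": ["c1", "c2", "c3", "c4"],
}
print(cluster_sample(classes, 2))   # all members of 2 randomly chosen classes
```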
Universe: The universe contains the whole population, its groups, and all varieties.
Population: A set of similar items or people who have common characteristics.
Sample set: A specific group of people chosen for collecting the sample. It consists of responsive and non-responsive samples; the actual sample is drawn from the responsive samples only.
Research is a systematic way of investigation, a process of discovering new knowledge. Research is also considered as searching for and gathering information, usually to answer a particular question. Each research study has its own specific purpose.
Case study method
A case study is a deep, detailed, and intensive study of a social unit;
It is a method of qualitative research;
It preserves the wholeness of the unit, i.e. it is an approach that views any social unit as a whole.
It helps to collect detailed information about the unit of study and gives clues to new ideas and further research.
As a tool of analysis, it helps to ascertain a number and variety of traits, qualities, and habits confined to a particular instance.
The Case Study method shows the way to deepen our perception and sharpen insights to understand biographies.
Hence, Developing an in-depth understanding of the case is the distinctive feature of case study research.
Reliability is the state of being consistent or dependable. It is the overall consistency of a test. A test is reliable if different trials give the same results or different parts of the test give the same results.
The reliability coefficient is a numerical term used to show how reliable the test is.
The more items there are in a scale designed to measure a particular concept, the more reliable the measurement (sum scale) will be.
Suppose you want to measure the height of 10 persons, using only a crude stick as the measurement device. Note that we are not interested in this example in the absolute correctness of measurement (i.e., in inches or centimeters), but rather in the ability to distinguish reliably between the 10 individuals in terms of their height. If you measure each person only once in terms of multiples of lengths of your crude measurement stick, the resultant measurement may not be very reliable. However, if you measure each person 100 times, and then take the average of those 100 measurements as the summary of the respective person's height, then you will be able to make very precise and reliable distinctions between people (based solely on the crude measurement stick).
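The crude-stick thought experiment can be simulated: averaging many noisy readings shrinks the measurement error roughly by the square root of the number of readings. The true height and the stick's error SD below are assumed values.

```python
import random
import statistics

random.seed(42)
true_height = 170.0                       # one person's true height, in cm (assumed)
noise = 5.0                               # the crude stick's measurement error SD (assumed)

def measure(n):
    """Average n noisy readings of the same person."""
    readings = [random.gauss(true_height, noise) for _ in range(n)]
    return statistics.mean(readings)

single = [measure(1) for _ in range(1000)]      # one reading per trial
averaged = [measure(100) for _ in range(1000)]  # 100 readings averaged per trial
print(round(statistics.stdev(single), 2))       # about 5 cm of error from one reading
print(round(statistics.stdev(averaged), 2))     # about 0.5 cm after averaging 100 readings
```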
Hence, the Assertion "Tests with a larger number of items have higher reliability" is correct. The reason "Each test item adds to test reliability" is not correct.
Additional Information
Types of reliability tests:
Test-retest reliability:
Test-retest reliability measures the consistency of results when you repeat the same test on the same sample at a different point in time.
This is used to measure attributes/traits or characteristics that are expected to stay constant in the sample.
The smaller the difference between the two results, the higher the test-retest reliability. For example: a test of personality.
Inter-rater reliability:
Inter-rater reliability (also called inter-observer reliability) measures the degree of agreement between different people observing or assessing the same thing.
This is used when data is collected by researchers assigning ratings, scores, or categories to one or more variables.
Parallel forms reliability:
Parallel forms reliability measures the correlation between two similar versions of a test.
This is used when two different assessment tools or sets of questions are designed to measure the same thing.
Internal consistency:
Internal consistency assesses the correlation between multiple items in a test that are intended to measure the same construct.
It is used to calculate internal consistency without repeating the test or involving other researchers, so it’s a good way of assessing reliability when you only have one data set.
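Internal consistency is commonly summarized by Cronbach's alpha (not named in the text above, but the standard statistic for this). A minimal sketch, with invented Likert-style response data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(item_scores[0])                       # number of items
    items = list(zip(*item_scores))               # transpose: one tuple per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Five respondents answering three items on a 1-5 scale (invented data):
responses = [[3, 4, 3], [2, 2, 3], [5, 5, 4], [4, 4, 4], [1, 2, 2]]
print(round(cronbach_alpha(responses), 3))        # 0.936: high internal consistency
```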
Sampling in research is the process of selecting units (e.g. people, organizations) from a population of interest so that, by studying the sample, one may fairly generalize the results back to the population from which the units were chosen. The objective of sampling is to derive the desired information about the population at the minimum cost or with the maximum reliability.
Blalock (1960) indicated that most sampling methods could be classified into two categories: Non-probability sampling methods and Probability sampling methods.
Non- probability Sampling Method
Probability Sampling Method
It is one in which there is no way of assessing the probability that an element, or group of elements, of the population will be included in the sample.
Non-probability sampling methods are those that provide no basis for estimating how closely the characteristics of the sample approximate the parameters of the population from which the sample had been obtained. This is because the non-probability samples do not use the techniques of random sampling.
Important techniques of non-probability sampling methods are Haphazard, Accidental, or Convenience Sampling, Quota Sampling, Purposive sampling, Snowball sampling.
Probability sampling methods are those that clearly specify the probability or likelihood of inclusion of each element or individual in the sample.
These are free of bias in selecting sample units.
They help in the estimation of sampling errors and evaluate sample results in terms of their precision, accuracy and efficiency, and hence, the conclusions reached from such samples are worth generalization and comparable to the similar populations to which they belong.
Major probability sampling methods are simple random sampling, stratified random sampling, and Cluster sampling,and Systematic sampling.
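The contrast between two common probability methods can be sketched in code, using a hypothetical student population (the names, strata, and sizes below are invented for illustration):

```python
import random

random.seed(1)  # reproducible draw
population = [f"student_{i:03d}" for i in range(200)]

# Simple random sampling: every unit has an equal chance of selection.
simple = random.sample(population, 20)

# Stratified random sampling: split the population into strata and draw
# proportionally from each, so every stratum is represented.
strata = {"urban": population[:120], "rural": population[120:]}
stratified = []
for name, members in strata.items():
    n = round(20 * len(members) / len(population))  # proportional allocation
    stratified.extend(random.sample(members, n))

print(len(simple), len(stratified))  # 20 20
```

Both draws specify each unit's inclusion probability in advance, which is exactly what lets sampling error be estimated.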
A university teacher has conducted a survey of achievement of students in chemistry through a self-made test. The distribution of scores has been studied in terms of mean and standard deviation for a sample of 100 students. The results are as follows:
Mean = 50, Standard deviation = 10, N = 100
Assuming that the distribution of scores is normal, what will be the Percentile Rank (PR) of a student whose score is 60?
70
75
80
84
Answer (Detailed Solution Below)
Option 4 : 84
Research in Education MCQ Question 14 Detailed Solution
Normal Probability Curve: It is a bell-shaped curve with its highest point at the mean, symmetrical about the vertical line drawn at the mean. It is used to read off the percentile, or the percentage of cases, corresponding to a given z-score.
Mean: The mean (average) is the sum of all values divided by the number of values.
Standard deviation: It measures how deviated or dispersed the numbers of a data set are with respect to the mean.
Percentile Rank: It is a measure of how many values fall below a given score; it is basically the rank of the score.
Z-score: The z-score, calculated with the formula given below, is used to interpret the percentile of a given score or value.
Important Points
To calculate: the percentile rank
Given: score(x) = 60, mean (x̄) = 50 and standard deviation = 10.
Formula: Z = \((X - \bar{X}) \over σ\)
Calculation:
Step 1:
Calculate z scores
z = (60 − 50)/10 = 1
Step 2:
Calculate the percentile rank:
From the Normal Probability Curve, the percentages of cases in the standard-deviation bands up to z = 1 are added:
Total percentage of cases up to the score 60 = 0.14% + 2.14% + 13.59% + 34.13% + 34.13% (the bands from the far left tail to the mean, plus the band from the mean to +1σ)
So the total percentage of cases up to the score 60 = 84.13%
The Percentile Rank (PR) of a student whose score is 60 is therefore approximately 84.
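The same percentile rank can be computed directly from the z-score with the standard normal cumulative distribution function, as a cross-check on the band-by-band addition above (a minimal sketch using only the standard library):

```python
import math

def percentile_rank(score, mean, sd):
    """Percentile rank of a score under a normal distribution."""
    z = (score - mean) / sd                       # z = (X - mean) / sigma
    # Cumulative proportion of cases below z on the standard normal curve
    below = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 100 * below

pr = percentile_rank(60, mean=50, sd=10)
print(round(pr, 2))  # 84.13
print(round(pr))     # 84
```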
A teacher finds that the distribution of scores for a self made test is positively skewed. What inference he / she should make about the difficulty level of this test?
The test is difficult
The test is easy
The test is moderate difficulty
The test is moderately of low difficulty
Answer (Detailed Solution Below)
Option 1 : The test is difficult
Research in Education MCQ Question 15 Detailed Solution
Descriptive statistics describe a trend, feature, characteristic, or phenomenon of the data.
Such data may or may not be normally distributed.
This involves measures of central tendency (mean, median, mode), charts of spread (histograms, bar graphs, etc.), range, variance, etc.
For example, the average height of grade 9 students in XYZ school.
When the population from which samples are drawn is not known to be normally distributed, the data can be analyzed with the help of non-parametric statistics.
Central tendency: Central tendency provides a descriptive representation of the entire data set. Mean, mode, and median are measures of central tendency.
Mean: The mean (average) is the sum of all values of the data set divided by the number of values.
Mode: It is the most repeated value in the data set.
Median: The median is the middlemost value of the data set.
Dispersion: This tells about the scattering of data around the mean; it gives information about how stretched or squeezed the distribution of data is. Range, standard deviation, percentile, etc. are used to measure dispersion.
Shape of a probability distribution:
In probability theory, a normal distribution is an arrangement of values in which most of the values lie in the middle and the rest are symmetrically distributed at the ends. This is seen in the Normal Probability Curve, a bell-shaped curve with its highest point at the mean and symmetrical about the vertical line drawn at the mean. In it, the mean, median, and mode are identical.
Normal distributions are often observed in nature. Therefore, in natural and social science experiments the random variables are assumed to be normally distributed.
The other common shapes are skewness and kurtosis.
Skewness:
The skewness is a measure of the asymmetry of the probability distribution assuming a unimodal distribution (assuming there is only one peak in the distribution of values) .
We can say that the skewness indicates how much our underlying distribution deviates from the normal distribution since the normal distribution has skewness 0. Generally, we have three types of skewness.
Symmetrical: When the skewness is close to 0 and the mean is almost the same as the median
Negative skew: the left tail of the histogram of the distribution is longer, the majority of the observations are concentrated on the right, and the median is greater than the mean. In this case, we can also use the terms “left-skewed” or “left-tailed”.
Important Points
If a test shows a left-skewed distribution, most of the students are high scorers who scored above average, and very few students scored very low marks. We can conclude that the test was very easy; the few weak students who scored low marks can be considered outliers, as they do not follow the general trend.
Positive skew: the right tail of the histogram of the distribution is longer, the majority of the observations are concentrated on the left, and the median is less than the mean. In this case, we can also use the terms “right-skewed” or “right-tailed”.
If a test shows a right-skewed distribution, most of the students are low scorers who scored below average, and very few students scored very high marks. We can conclude that the test was very difficult; the few good students who scored high marks can be considered outliers, as they do not follow the general trend.
Kurtosis
It is the flatness or peakedness of the histogram of a frequency distribution; it shows how peaked the central values of a data set are.
Types of Kurtosis: There are 3 types of kurtosis as far as statistics are concerned-
Mesokurtic: the tails of this distribution are similar to those of the normal distribution.
Leptokurtic: this distribution has very long, skinny tails, meaning a greater chance of outliers being present.
Platykurtic: this distribution has low tails stretched around the center, meaning most of the data points lie in close proximity to the mean.
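The link between skew direction and test difficulty can be illustrated with the moment-based skewness coefficient (the two score lists below are hypothetical):

```python
from statistics import mean

def skewness(data):
    """Third standardized moment: > 0 right-skewed, < 0 left-skewed, ~ 0 symmetric."""
    m = mean(data)
    n = len(data)
    sd = (sum((x - m) ** 2 for x in data) / n) ** 0.5
    return sum(((x - m) / sd) ** 3 for x in data) / n

# Difficult test: most students score low, a few score very high,
# so the right tail is longer and the skewness is positive.
hard_test = [12, 15, 14, 18, 16, 13, 17, 15, 40, 45]
# Easy test: most students score high, a few score very low (left-skewed).
easy_test = [88, 85, 86, 82, 84, 87, 83, 85, 60, 55]

print(skewness(hard_test) > 0)  # True  (positive skew -> difficult test)
print(skewness(easy_test) < 0)  # True  (negative skew -> easy test)
```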
For the purpose of highlighting the characteristics of a particular group of children, which of the following may be considered as the primary source ?
A review of research published in the last 5 years
A critical explanation of the psychological characteristics of children suffering from Autism
Findings of case studies of a group of children with Autism
List of books on special education
Answer (Detailed Solution Below)
Option 3 : Findings of case studies of a group of children with Autism
Research in Education MCQ Question 16 Detailed Solution
Data Collection is an important aspect of any type of research study. Data collection techniques allow us to systematically collect information about the subject of our study (people, objects, phenomena). The sources from where we get information are called data sources and these may comprise documents, humans, institutions as well as mass media like newspapers, radio, and television.
Sources of Data: They can be categorized based on their content into primary, secondary, and tertiary sources of data.
Primary
These sources contain original/firsthand material or fresh information that has been published, reported, or recorded for the first time.
They include historical and legal documents, collective questionnaires, journals, interviews, surveys, email communication, technical reports, dissertations, conference papers, patents, diaries, etc.
Secondary
These offer an analysis or restatement of the primary sources of information. They present the contents of primary documents in a condensed form and effectively list them for easy and quick retrieval.
They include biographies, indexing and abstracting periodicals, reviews, magazines, newspapers, textbooks, dictionaries, encyclopedias, yearbooks, directories, statistical sources, etc.
Tertiary
These sources are based on primary and secondary sources. Tertiary sources index, abstract, organize, or compile primary and secondary sources. They are usually not credited to an author.
They include sources such as bibliographies of bibliographies, guides to the literature, directories listing primary and secondary periodicals, etc.
Key Points
Review of research: a review of research surveys books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by doing so provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated.
The critical explanation is subjective writing because it expresses the writer's opinion or evaluation of a text.
Case studies: a case study is a research method involving an up-close, in-depth, and detailed examination of a particular case. It can be defined as an intensive study of a person, a group of people, or a unit, which aims to generalize over several units.
Special Education: education for children with physical or mental problems, who need to be taught in a different way is called special education.
Conclusion: Since a case study is an intensive, in-depth study of a particular case or group that can later be generalized over several units, option (3) is correct.
Note: A review of research tells us about the research gap in the field; a critical explanation is subjective and hence a biased opinion of a writer; and books on special education tell about the special needs of particular children. Only the case study tells intensively about the problems of, and solutions required by, particular children.
Using equivalent samples, a researcher obtained a significant correlation 95 times out of 100 trials. He/She decided to reject the null hypothesis. The alpha level would be :
.01
.02
.05
.001
Answer (Detailed Solution Below)
Option 3 : .05
Research in Education MCQ Question 17 Detailed Solution
t-test: The t-test is a statistical procedure used to determine whether the difference between two groups is significant; the difference is measured in terms of the means of the groups.
Hypothesis testing
It uses sample data to make inferences about the population.
It gives tremendous benefits by working on random samples, as it is practically impossible to measure the entire population.
Hypothesis testing is a procedure that assesses two mutually exclusive theories about the properties of a population.
For Hypothesis testing, the two hypotheses are as follows:
Null Hypothesis
Alternative hypothesis
There are two errors defined, both are for null hypothesis condition
A Type-I error corresponds to rejecting H0 (the null hypothesis) when H0 is actually true, and a Type-II error corresponds to accepting H0 when H0 is false. Hence four possibilities may arise:
The null hypothesis is true but the test rejects it (Type-I error). The probability of making a type I error is α, which is the level of significance you set for your hypothesis test.
The null hypothesis is false but the test accepts it (Type-II error). The probability of making a type II error is β, which depends on the power of the test.
The null hypothesis is true and the test accepts it (correct decision).
The null hypothesis is false and test rejects it (correct decision)
Significant difference: In statistics, a significant difference is one that is unlikely to have occurred by chance alone.
Given:
Error Type: Type I error
Experiment done: 100 times
Significant difference: 95 times
To find: alpha level (α) i.e. probability of type I error
Terms:
Formula:Probability = No of favourable outcome/Total no of outcome
Calculation:
Probability= No of favourable outcome/Total no of outcome
Since we want to find the probability of a Type I error, we take the total number of cases (= 100) and the number of cases in favour of a Type I error, i.e. the trials that were not significant (100 − 95 = 5).
Probability= No of favourable outcome(in favour of type I error) /Total no of outcome (cases)
= 5/100
= 0.05
Hence, the probability of committing a Type I error, i.e. the alpha level, was 0.05.
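The solution's arithmetic in a couple of lines:

```python
# Alpha is the probability of a Type I error: the proportion of trials in
# which a true null hypothesis would nevertheless be rejected.
total_trials = 100
significant = 95                       # trials with a significant correlation
alpha = (total_trials - significant) / total_trials
print(alpha)  # 0.05
```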
In psychological research, we often wish to determine the relationship between two variables for prediction purposes.
For example, you may be interested in knowing whether “the amount of study time” is related to “the student's academic achievement”. You simply find out the relationship between the two variables to determine whether they are associated (covary) or not.
The strength and direction of the relationship between the two variables are represented by a number, known as the correlation coefficient.
Correlation coefficient:
Its value can range from +1.0 through 0.0 to –1.0.
The coefficient of correlation is of three types: positive, negative, and zero.
A positive correlation
It indicates that as the value of one variable (X) increases, the value of the other variable (Y) will also increase.
Similarly, when variable X decreases, a decrease in Y too will take place.
Suppose, it is found that the more time the students spend on studying, the higher was their achievement score. Also, the less they studied, the lower was their achievement score. This type of association will be indicated by a positive number, and the stronger the association between studying and achievement, the closer the number would be to +1.0.
A correlation of +0.85 indicates a strong positive association between study time and achievement.
A negative correlation
It tells us that as the value of one variable (X) increases, the value of the other (Y) decreases.
For example, you may hypothesize that as the hours of study time increase, the number of hours spent on other activities will decrease. Here, you are expecting a negative correlation, ranging between 0 and –1.0.
Zero correlation:
It is also possible that sometimes no correlation may exist between the two variables. This is called a zero correlation.
Generally, it is difficult to find zero correlation but the correlations found may be close to zero, e.g., -.02 or +.03.
This indicates that no significant relationship exists between two variables or the two variables are unrelated.
Hence, if two variables X and Y have a significant negative correlation, then X and Y vary together: as one increases, the other decreases.
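The coefficient itself can be computed with Pearson's formula (the text does not name a specific coefficient; Pearson's r is the usual choice, and the data below are hypothetical):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

study_hours = [1, 2, 3, 4, 5]
achievement = [52, 58, 61, 70, 74]   # rises with study time -> positive r
other_hours = [9, 8, 7, 6, 5]        # falls as study time rises -> negative r

print(round(pearson_r(study_hours, achievement), 2))  # 0.99
print(round(pearson_r(study_hours, other_hours), 2))  # -1.0
```

The sign gives the direction of the relationship and the absolute value gives its strength.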
Criticism is the practice of judging the merits and faults of something. Criticism is often presented as something unpleasant, but there are friendly criticisms, amicably discussed, and some people find great pleasure in criticism.
1) External criticism:
External criticism refers to the authenticity of the document.
External criticism is concerned with establishing the authenticity or genuineness of data. It is also called lower criticism.
Once a document has been determined to be genuine (external criticism), researchers need to determine if the content is accurate (internal criticism).
External criticism is when historians check the validity of a source to verify whether or not it's authentic. This process is important regarding analyzing data because we have to question whether or not we can trust the data and use it as a reference point.
2) Internal criticism:
The credibility of the evidence is established by internal criticism.
Internal criticism concerns the contents of the document.
Internal criticism, aka positive criticism, is the attempt of the researcher to restore the meaning of the text. This is the phase of hermeneutics in which the researcher engages with the meaning of the text rather than the external elements of the document.
Internal criticism looks at the reliability of an authenticated source after it has been subjected to external criticism.
3) Meta-Analysis:
Meta-analysis is a statistical analysis that combines the results of multiple scientific studies.
Meta-analysis can be performed when there are multiple scientific studies addressing the same question, with each individual study reporting measurements that are expected to have some degree of error. The aim then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived.
Meta-analysis has the capacity to contrast results from different studies and identify patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies.
Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of medical treatment, in an effort to obtain a better understanding of how well the treatment works.
4) Trend Analysis:
Trend analysis is a statistical procedure performed to evaluate hypothesized linear and nonlinear relationships between two quantitative variables.
Typically, it is implemented either as an analysis of variance (ANOVA) for quantitative variables or as regression analysis.
A trend is a recurring pattern and trend analysis is the practice of collecting data in an attempt to spot that pattern.
When the user needs and behavior are changing rapidly, trend analysis is a method that can act as a window into the future demands of users.
Conclusion: From the above discussion, it is clear that Meta-Analysis is the review process that uses statistical methods to synthesize the results of independently conducted prior studies.
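As a minimal trend-analysis sketch, a least-squares slope can be fitted to a series over time, echoing the regression approach mentioned above (the user counts below are hypothetical):

```python
from statistics import mean

def trend_slope(ys):
    """Least-squares slope of a series against time (t = 0, 1, 2, ...)."""
    ts = list(range(len(ys)))
    mt, my = mean(ts), mean(ys)
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

# Hypothetical monthly active users: a positive slope suggests rising demand
users = [100, 112, 119, 131, 140, 153]
print(round(trend_slope(users), 1))  # 10.3 (users per month)
```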
An investigator obtained a correlation coefficient of +0.65 between two variables. The other investigator obtained a correlation coefficient of -0.65 between the same variables. Which of the two values indicates a higher degree of relationship?
+0.65 indicates a higher degree of relationship than that of -0.65.
-0.65 indicates a higher degree of relationship than that of +0.65.
Both indicate the same degree of relationship.
No such statement can be made about comparison.
Answer (Detailed Solution Below)
Option 3 : Both indicate the same degree of relationship.
Research in Education MCQ Question 20 Detailed Solution
In research, we often wish to determine the relationship between two variables for prediction purposes.
For example, you may be interested in knowing whether “the amount of study time” is related to “the student's academic achievement”. You simply find out the relationship between the two variables to determine whether they are associated (covary) or not.
The strength and direction of the relationship between the two variables are represented by a number, known as the correlation coefficient.
Correlation coefficient:
Its value can range from +1.0 through 0.0 to –1.0.
The coefficient of correlation is of three types: positive, negative, and zero.
A positive correlation
It indicates that as the value of one variable (X) increases, the value of the other variable (Y) will also increase.
Similarly, when variable X decreases, a decrease in Y too will take place.
Suppose, it is found that the more time the students spend on studying, the higher was their achievement score. Also, the less they studied, the lower was their achievement score. This type of association will be indicated by a positive number, and the stronger the association between studying and achievement, the closer the number would be to +1.0.
A correlation of +0.85 indicates a strong positive association between study time and achievement. On the other hand,
A negative correlation
It tells us that as the value of one variable (X) increases, the value of the other (Y) decreases.
For example, you may hypothesize that as the hours of study time increase, the number of hours spent on other activities will decrease. Here, you are expecting a negative correlation, ranging between 0 and –1.0.
Zero correlation:
It is also possible that sometimes no correlation may exist between the two variables. This is called a zero correlation.
Generally, it is difficult to find zero correlation but the correlations found may be close to zero, e.g., -.02 or +.03.
This indicates that no significant relationship exists between two variables or the two variables are unrelated.
Hence, the correlation coefficients +0.65 and −0.65 obtained by the two investigators indicate the same degree (strength) of relationship; the sign only shows the direction: for −0.65, as one variable increases, the other decreases.