Final Exam in Research 1

 

 

Article Title:

The Perception of the Other in International Relations:  Evidence for the Polarizing Effect of Entitativity

 

Authors:

 

 

Abstract:

In an international relations context, the mutual images held by actors affect their mutual expectations about the Other’s behavior and guide the interpretation of the Other’s actions. Here it is argued that the effect of these images is moderated by the degree of entitativity of the Other – that is, the extent to which it is perceived as a real entity. Two studies tested this hypothesis by manipulating the entitativity of the European Union (EU) among US citizens whose images of the EU varied along the enemy/ally dimension. Results of these studies yielded converging evidence in support of the hypothesized moderating effect of entitativity. Specifically, entitativity showed a polarizing effect on the relationship between the image of the EU and judgements of harmfulness of actions carried out by the EU.

 

Key Words:

entitativity, agency, polarization, image, intergroup relations, international relations

 

 

 

Underlined Words:

  • Hypotheses or hypothesis – refers to a suggested explanation of a phenomenon or a reasoned proposal suggesting a possible correlation between multiple phenomena. The term derives from the ancient Greek hypotithenai, meaning "to put under" or "to suppose". The scientific method requires that a scientific hypothesis be testable. Scientists generally base such hypotheses on previous observations or on extensions of scientific theories (2002). Hypotheses are causal explanations or propositions that have at least one independent and one dependent variable but have yet to be tested (2003).

    In the case of the research article at hand, the authors identified two hypotheses that framed the research activity, drawing on psychology and communication theories such as schema, image, and entitativity. The hypotheses are: (1) increasing the entitativity of an enemy country leads to a perception of increased harmfulness, and (2) increasing the entitativity of an ally country leads to a perception of increased friendliness. These hypotheses served as the main guidelines or framework for conducting the research activity following the scientific method under an empirical research design. By testing the validity and reliability of the hypotheses within a defined context, the authors were able to answer the presented research problem and thereby conclude the research activity.

  • Outgroup – is a set of individuals perceived to share common characteristics that differ in some degree from the perceived characteristics of one's own ingroup, that is, the group to which one belongs or is affiliated (1998). Researchers have focused their attention on the subjective perceptions that people hold in defining groups as groups, which has produced a bulk of academic material covering a variety of qualitative and quantitative investigations of the perceptual reality of social groups (2003). Specifically, the (1967) criteria for entitativity can be thought of as theories that individual perceivers hold about specific social groups, theories that endow the groups with social meaning and predictive value (1985; 1995) and hence further differentiate the outgroup from the ingroup.

    Entitativity can be defined as a theory of common origin underlying expected similarities of attitude or behavior on the part of group members, in which groups are seen as "real" groups because some aspects of member behavior are believed to arise from a common source (2003). Moreover, the roots of group entitativity can be further divided into "causes" (distal or proximal) and "reasons" (1999). In the selected research study, the American students who served as participants constituted the ingroup, the United States. The outgroup was the European Union (EU), whose entitativity was manipulated and measured based on the different perceptions of the students in the conducted experiments. Because the EU is becoming an important actor in the international arena, the relations between the United States (ingroup) and the EU (outgroup) became the interest of the study. Since the EU is a group about which the participants (students) had little knowledge, the entitativity manipulation was convenient to implement.

  • Manipulation – is the act of controlling or operating upon a person or group by unfair means to one's own advantage. In a psychological context, manipulation means influencing a person or a group of people in such a way that the manipulator gets what he or she wants, or makes a person believe something, in a calculating, indirect and somewhat dishonest way (1981). Experiments involve the deliberate manipulation of one or more independent variables, in which variations or treatments are introduced in order to determine whether the induced treatment(s) will affect the reaction of the group or individual being experimented upon (2004; 1961). Manipulation is utilized by researchers in an experiment in order to test the hypotheses presented. Checks on whether participants understood the manipulation instructions, called manipulation checks, are used in research to ensure that what is intended to be manipulated in an experiment is indeed manipulated (2004).

    In the article, the researchers identified multiple roots of entitativity and so decided to test their hypothesis using two different manipulations of entitativity, in order to ensure that entitativity itself, and not only one of its components, was responsible for the differing perceptions of the participants. Entitativity was manipulated using perceptual cues and was measured with a shorter version of the entitativity scale developed by (1999). The scale includes (a) common fate, (b) similarity, and (c) distinctiveness. The manipulation conducted for the purpose of the experiment focused on the interaction between the image of the EU held by the participants and the different levels of entitativity induced. Through the manipulations initiated during the data collection procedures, the authors presented their predictions or expectations for the research activity. These include:

  • Only those participants who viewed the EU as an enemy of the United States would judge the actions of the EU as more harmful to the United States in the high-entitativity condition than in the low-entitativity condition.
  • For those participants who viewed the EU as an ally, higher entitativity should lead to the perception of the EU as less harmful.
  • A stronger relationship between the image of the EU and perceived harmfulness of the actions of the EU among participants in the high-entitativity condition compared to those in the low-entitativity condition.

  • Randomly or random – a concept used to express lack of purpose, cause, order, or predictability in non-scientific parlance. A random process is a repeating process whose outcomes follow no describable deterministic pattern but do follow a probability distribution. The term randomness is often used in statistics to signify well-defined statistical properties, such as lack of bias or correlation (1986; 1998). Random selection or distribution techniques are used in research in order to ensure the representativeness of the participants of the study, which supports the validity and reliability of the data while keeping equal opportunity for representation (2001). This is one way to avoid research biases that could influence the answers of the participants and invalidate the results of the study.

    The researchers in the article asked fifty-seven students enrolled in an introductory psychology course to participate in the experiment in exchange for course credits. As the participants arrived at the lab in groups of four, they were randomly assigned to one of the two experimental conditions (high vs. low entitativity). Aside from the random distribution of the participants across the different treatments induced by the researchers, the participants also sat in separate cubicles and were unable to communicate with each other for the entirety of the experiment. They were asked to respond to four statements aimed at assessing the image of the EU, two of which assessed the EU as an "ally" while the remaining two assessed the EU as an "enemy". The succeeding conditions presented to the participants were likewise administered under random distribution. This enabled the researchers to collect data and information free from research biases that could spoil the entire research endeavor.

  • M = 3.97 (mean) – is an estimate of the "center" of a distribution of values (2000). To compute the mean, all the values are added up and the sum is divided by the number of values (2002). For the quantitative data analysis of the variables of the study, descriptive statistics were primarily used so as to present descriptions in manageable forms. As such, univariate analysis, which involves the evaluation of cases of a specific variable for a specific period of time (2003), was incorporated through statistical tools in the form of the frequency distribution. For a real-valued random variable X, the mean is the expectation of X; if the expectation does not exist, then the random variable has no mean (1986). For a data set, the mean is simply the sum of all the observations divided by the number of observations (1999). An alternative measure of dispersion is the mean deviation, equivalent to the average absolute deviation from the mean; it is less sensitive to outliers but less tractable when combining data sets (1969). The weighted mean is used if one wants to combine average values from samples of the same population with different sample sizes.

    In the article, the researchers used the mean to denote a general characteristic of the participants' responses regarding the entitativity of the EU. The researchers assessed the effectiveness of the entitativity manipulation by creating a composite score (M = 3.97, SD = 1.21) averaging the three items measuring entitativity. The mean served as the standard measure of the general perception of the participants, against which the subgroups of the experiment were compared in order to illustrate the variance of their answers, that is, the difference of each group's perception from the standard entitativity perception of all the participants (regardless of whether their responses indicated high or low perceived entitativity). This enabled the researchers to determine the general perceived entitativity of all the participants of the study as well as the degree of difference between the participants categorized as having high perceived entitativity and those with low perceived entitativity toward the EU. Furthermore, the mean perceived entitativity of the participants was also used as reference data for the succeeding statistical measures in order to strengthen the claims presented by the researchers and the quantitative results of the entire study. The value M = 3.97 describes the average degree of entitativity of the EU based on the responses of the participants of the study.
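
    As an illustration, the computation of a mean can be sketched in a few lines of Python; the ratings below are hypothetical, not the article's raw data:

```python
# Hypothetical entitativity ratings on a 1-7 scale; illustrative only.
ratings = [5, 3, 4, 6, 2, 4, 5]

# The mean is the sum of all observations divided by their number.
mean = sum(ratings) / len(ratings)
print(round(mean, 2))  # -> 4.14
```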


  • SD = 1.21 (standard deviation) – is the square root of the average of squared deviations from the mean, which describes how the observations differ. The standard deviation of a probability distribution, random variable, or population of values is defined as the square root of the variance (2003). Standard deviation is the most common measure of statistical dispersion, measuring how spread out the values in a data set are. If the data points are all close to the mean, then the standard deviation is close to zero; if many data points are far from the mean, then the standard deviation is far from zero; if all the data values are equal, then the standard deviation is zero. The standard deviation (σ) of a population can be estimated by a modified standard deviation (s) of a sample (1999).

    Since the mean is the unique value about which the sum of squared deviations is a minimum, the sum of squared deviations calculated from any other measure of central tendency will be larger than the one calculated from the mean (1999). This explains why the standard deviation and the mean are usually cited together in statistical reports.
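
    This minimizing property of the mean can be checked directly; the data below are made up for illustration:

```python
# Sum of squared deviations around the mean vs. around the median.
data = [2, 3, 3, 4, 9]
mean = sum(data) / len(data)            # 4.2
median = sorted(data)[len(data) // 2]   # 3

ss_mean = sum((x - mean) ** 2 for x in data)
ss_median = sum((x - median) ** 2 for x in data)
print(ss_mean < ss_median)  # -> True
```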


    In the article, the researchers assessed the effectiveness of the entitativity manipulation by creating a composite score (M = 3.97, SD = 1.21) averaging the three items measuring entitativity. As was implied in the earlier discussion of the mean, the mean is of limited use if it is computed only for the general perceived entitativity of all the participants, since the research problem and objectives of the researchers do not entail mere description of the quantitative data. Despite being designed in the empirical research paradigm, the study was also geared toward understanding the different intervening and confounding variables that affect the perceived entitativity of the participants, be it high or low. This was made possible by the concept of the standard deviation, which enabled the comparison of the different levels or degrees of perceived entitativity of the participants toward the EU. The value SD = 1.21 denotes the average spread, or typical difference from the mean, in the levels of perceived entitativity of the participants toward the EU.
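
    A minimal sketch of the sample standard deviation, again using hypothetical ratings rather than the article's data:

```python
import math

# Hypothetical 1-7 ratings; the sample SD (s) estimates the population sigma.
ratings = [5, 3, 4, 6, 2, 4, 5]
n = len(ratings)
mean = sum(ratings) / n

# Square root of the sum of squared deviations divided by n - 1.
s = math.sqrt(sum((x - mean) ** 2 for x in ratings) / (n - 1))
print(round(s, 2))  # -> 1.35
```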


  • Cronbach’s α = .77 (Cronbach’s α) – has an important use as a measure of the reliability of a psychometric instrument. It indicates the extent to which a set of test items can be treated as measuring a single latent variable. It was first named alpha by Cronbach (1951), as he had intended to continue with further instruments. It is the extension of an earlier version, the Kuder-Richardson Formula 20 (often shortened to KR-20), which is the equivalent for dichotomous items; (1945) developed the same quantity under the name lambda-2. Alpha is an unbiased estimator of reliability when the components are all parallel and can take values between minus infinity and 1. As a rule of thumb, a reliability of 0.70 or higher should be obtained on a substantial sample before using an instrument. Cronbach’s α operates under classical test theory, which claims that the reliability of test scores can be expressed as the ratio of true-score variance to total-score variance (2002).

    The researchers in the article stated that they assessed the effectiveness of the entitativity manipulation by creating a composite score (M = 3.97, SD = 1.21) averaging the three items measuring entitativity, using Cronbach’s α to test the reliability of the constructed research instrument. The research instrument in particular is the set of statements answered by the participants to measure their average entitativity perception toward the EU. Before one can claim that the answers of the participants are reliable, the question set or questionnaire used by the researchers must first be tested for reliability; in this case, the researchers utilized the Cronbach’s α test. As was mentioned earlier, α can take values between minus infinity and 1 as a measure of reliability, but the test is informative only if the resulting value is sufficiently high, indicating that the research instrument is reliable and can be used as a statistically sound means of measuring a particular variable. The researchers reported the composite score (M = 3.97, SD = 1.21) on the strength of this reliability test: the research instrument was statistically reliable at Cronbach’s α = .77, indicating that the computed results of the responses of the participants are likewise reliable.
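
    Cronbach's α can be computed from the item variances and the variance of the total scores. A sketch with invented scores for three items (the actual item-level data are not reported in the article):

```python
import statistics

# Rows: hypothetical participants; columns: the three entitativity items.
items = [
    [4, 5, 3],
    [2, 3, 2],
    [5, 6, 5],
    [3, 3, 4],
    [6, 5, 4],
]

k = len(items[0])                                    # number of items
item_vars = [statistics.variance(col) for col in zip(*items)]
totals = [sum(row) for row in items]                 # per-participant totals

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))
print(round(alpha, 2))  # -> 0.88
```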


  • t test – is a statistical computation used to test the correctness of a hypothesis. Unlike the z score, the t test can estimate the standard error from the sample data alone. The t test operates under the notions that (1) the sample mean is expected more or less to approximate the population mean, which permits hypothesis testing using the sample mean; (2) the standard error provides a measure of how well a sample mean approximates the population mean; and (3) inferences about the population can be quantified by converting a sample mean to a standardized score. For a z score, the value of the population standard deviation must be known to compute the standard error; the t test instead estimates it from the sample (1999).

    In the study of the entitativity of the EU, the composite score mentioned earlier provided the mean and standard deviation based on all the responses of the participants. The experiment, in order to answer the problem posed by the study, was designed around comparing two subgroups: the participants in the high-entitativity condition and those in the low-entitativity condition. Using the t test, the researchers were able to quantify the magnitude of the difference between the high- and low-entitativity subgroups relative to the standard perception of the entire set of participants. Following the logic of the t test, the researchers compared the overall entitativity score of the EU (M = 3.97, SD = 1.21), based on the responses of the entire study population, with the sample in the high-entitativity condition (M = 4.21) and the sample in the low-entitativity condition (M = 3.71). These results supported the first two hypotheses presented by the researchers. Comparing the sample means of the high-entitativity group with those of the low-entitativity group further confirmed the tendency of the third hypothesis of the research study. Furthermore, the t test was able to measure the size of the effect of the manipulations induced during the experiment.
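
    An independent-samples t test of this kind can be sketched as follows; the group scores are invented for illustration and are not the article's data:

```python
import math

# Hypothetical harmfulness judgments per condition; illustrative only.
high = [4.5, 4.0, 5.0, 4.2, 4.8]   # high-entitativity group
low = [3.8, 3.5, 4.0, 3.6, 3.7]    # low-entitativity group

n1, n2 = len(high), len(low)
m1, m2 = sum(high) / n1, sum(low) / n2

# Pooled variance combines the squared deviations of both groups;
# the degrees of freedom are n1 + n2 - 2.
ss1 = sum((x - m1) ** 2 for x in high)
ss2 = sum((x - m2) ** 2 for x in low)
sp2 = (ss1 + ss2) / (n1 + n2 - 2)

# t is the mean difference divided by its estimated standard error.
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"t({n1 + n2 - 2}) = {t:.2f}")  # -> t(8) = 3.83
```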


  • t (55) = 1.58 – interpretation of the t score can only be made by comparing it against the critical region of the appropriate t distribution. Imagining the bell-curve distribution, researchers locate the mean of the population’s responses at the highest point of the curve. The t score indicates how far the observed sample difference falls from that center, in units of standard error, and it is judged against a critical value that marks the boundary of the critical region (1999).

    In the statistical figure t (55) = 1.58, the value in parentheses is the degrees of freedom, which here equals 55 (the 57 participants minus 2), and 1.58 is the computed t statistic. The t statistic is compared against the critical value for that number of degrees of freedom: if it falls beyond the critical region, the null hypothesis regarding the experiment is rejected; otherwise it is retained. This statistic made possible the comparison of the differences between the subgroups that were conditioned under a particular controlled environment. Interpreted in the context of the study, this t score speaks to whether there are significant deviations from the average perception of the entire study population regarding the entitativity of the EU as an effect of the manipulations induced during the experiment, which randomly classified the participants into the high- and low-entitativity experimental groups.


  • p < .05 – upon computing the t score, the score should be evaluated further for significance in order to determine whether the ratio is large enough to say that the difference between the groups or subpopulations is not likely to have been a chance finding. As such, researchers need to set a risk level (called the alpha level). In most social research, the "rule of thumb" is to set the alpha level at .05, which means accepting a five-in-a-hundred chance of finding a statistically significant difference between the means when none actually exists. The degrees of freedom (df) for the test should likewise be computed, which for two groups is the total number of participants minus 2. This statistical procedure enables researchers to identify the significant changes induced by the manipulations in the study (1999).

    Given the alpha level, the df, and the t-value, researchers are able to determine whether the t-value is large enough to be significant. If it is, researchers can conclude that the means of the two groups are genuinely different. In the case of the research article, the researchers decided to set the significance threshold for testing the hypotheses at the 95% confidence level. This means that the probability of the calculated differences in scores between the subgroups arising by chance must be below 5%. This is necessary in the investigation of the entitativity of the EU in order to verify the statistical validity of the results of the study, and it contributed to strengthening the claims made by the researchers, particularly their assumptions and scientific predictions regarding the outcome of the conducted experiment.
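
    The decision rule itself can be sketched as a simple comparison; the observed t below is invented, and 2.306 is the textbook two-tailed .05 critical value for df = 8:

```python
# Compare an observed t statistic against the critical value for alpha = .05.
t_observed = 3.83
t_critical = 2.306   # two-tailed, alpha = .05, df = 8 (textbook value)

significant = abs(t_observed) > t_critical
print("reject the null hypothesis" if significant else "fail to reject")
```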


  • One-tailed tests – are directional hypothesis tests in which the critical region is located in only one tail of the standard bell-curve distribution. In these tests, the statistical hypotheses specify either an increase or a decrease in the population mean score. Unlike two-tailed tests, a one-tailed or directional hypothesis test is framed around the alternative hypothesis rather than the null hypothesis alone: the alternative hypothesis specifies the direction of the expected effect. The critical region is located by considering all the possible results that could be obtained if the null hypothesis were true; the test statistic is then calculated, from which the statistical decision is made. The critical factor in this decision is the size of the difference between the treated sample and the original study population: a large difference is evidence that the treatment worked, while a small difference is not sufficient evidence that the treatment had any effect (1999).

    A one-tailed test allows researchers to reject the null hypothesis when the difference between the sample and the population is relatively small, provided it lies in the predicted direction; a two-tailed test, on the other hand, requires a relatively large difference independent of direction. One of the main disadvantages of one-tailed tests is the higher tendency to commit a Type I error, since they require weaker evidence to reject the null hypothesis in the predicted direction and thus provide a less convincing demonstration that a treatment effect has occurred. Among the advantages of one-tailed tests is their sensitivity in detecting treatment effects. As such, researchers use one-tailed tests in situations where they do not want to overlook any possible significant outcome and where a Type I error is not very damaging (1999).


    Since the article being studied is exploratory in its approach to investigating the entitativity of the EU, and is likewise geared toward generating new research possibilities, the one-tailed hypothesis test was used in order to detect whether there was a significant effect induced by the different conditions in the experiment.
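
    The contrast between the one- and two-tailed decision rules can be sketched with textbook critical values for df = 20 (one-tailed .05 cutoff about 1.725, two-tailed about 2.086); the observed t is invented:

```python
# A directional prediction puts the whole 5% critical region in one tail,
# so the one-tailed cutoff is lower than the two-tailed one.
t_observed = 1.9
one_tailed_crit = 1.725   # alpha = .05, df = 20, one-tailed (textbook value)
two_tailed_crit = 2.086   # alpha = .05, df = 20, two-tailed (textbook value)

print(t_observed > one_tailed_crit)        # -> True: directional test rejects
print(abs(t_observed) > two_tailed_crit)   # -> False: two-tailed test does not
```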


  • r = .91 (Pearson r correlation) – measures the degree and direction of the linear relation between two variables, in which a perfect linear relation means that every change in the X variable is accompanied by a corresponding change in the Y variable. With X and Y always varying together, the covariability of X and Y together is identical to the variability of X and Y separately, which results in a correlation of 1.00. Correlation simply describes a relationship between two variables; it does not explain why the two variables are related and should not and cannot be interpreted as proof of a cause-effect relation between the two variables. Moreover, the value of a correlation can be affected greatly by the range of scores represented in the data. Furthermore, a correlation should not be interpreted as a proportion by using its numerical value to judge "how good" the relation is. The squared correlation (r²) measures the gain in accuracy obtained from using the correlation for prediction instead of just guessing (1999).

    In the article, the researchers used the Pearson r correlation to test the relation between the measures they utilized to gauge the image of the EU based on the responses of the participants. The value .91 means that the measured variables were highly correlated, emphasizing that the measures utilized by the researchers to vary the experimental conditions of the study were accurate for their purposes. Establishing this high correlation among the measures further indicates that the researchers provided validation and reliability for the subsequent use of the measures or variables being tested. This enabled the researchers to define the different sample subgroups used in the experiment in order to contrast the effect of high-entitativity perceptions against the perceptions made by the participants in the low-entitativity subgroup.
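
    The Pearson r computation can be sketched as follows; the paired scores are invented to illustrate a strong (here negative) linear relation, not to reproduce the article's value:

```python
import math

# Hypothetical paired scores; illustrative only.
x = [1, 2, 3, 4, 5, 6]                 # image of the EU (enemy -> ally)
y = [5.0, 4.6, 3.4, 3.9, 2.5, 2.2]     # judged harmfulness

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# r = covariability of X and Y / sqrt(variability of X * variability of Y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
print(round(r, 2))  # -> -0.95
```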


  • p < .001 – as was discussed earlier, researchers need to set a risk level (called the alpha level) before judging significance, with .05 as the usual rule of thumb in social research, and must compute the degrees of freedom (df) for the test, which for two groups is the total number of participants minus 2. This procedure enables researchers to identify the significant changes induced by the manipulations in the study (1999).

    This time, however, the threshold applied by the researchers to the correlation test was stricter than the one used when the hypotheses were evaluated for significance. Instead of the rule-of-thumb .05 level, the researchers reported significance at the .001 level, meaning the probability of the result arising by chance is less than one in a thousand. This reflects the nature of the particular investigation being carried out: the researchers made it a point to show convincingly that the two variables being measured are correlated with each other. These variables are those used by the researchers to measure the high-entitativity values, particularly the measures reflecting the harmfulness of the EU, which likewise served as treatments to vary the conditions in the conducted experiment.


  • β = -.68** (regression) – when there is a general linear relation between two variables X and Y, it is possible to construct a linear equation that allows one to predict the Y value corresponding to any known value of X. The technique for determining this equation is called regression. The best-fitting line is achieved by using the least-squares method to minimize the error between the predicted Y values and the actual Y values. The regression equation can then be used to compute a predicted Y value for any value of X. The accuracy of the prediction is measured by the standard error of estimate, which provides a measure of the average distance (or error) between the predicted Y value on the line and the actual data point (1999).

    Having measured the correlation of the variables used to indicate the entitativity of the EU at .91, the researchers used regression analysis to more accurately estimate the effect of each of the variables on the responses and perceptions of the participants of the study. Using regression analysis, particularly hierarchical multiple regression, the impact of image and entitativity and their interaction on perceived harmfulness was measured by entering the two main effects in the first step and the interaction term in the second step. The image of the EU was centered, that is, the mean was subtracted from each score, and its regression coefficient was β = -.68. This indicated that although the image of the EU was a reliable predictor of harmfulness, entitativity was not: the more the EU was perceived as an ally, the less it was perceived as harmful to the United States, and the interaction between these variables was noted as significant. In this light, the researchers were able to identify the specific variables that strongly affect the perception of the participants toward the EU, testing and eliminating assumed variables and factors that merely seemed significant in the study.
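
    The centering step and the least-squares fit can be sketched for the simple one-predictor case; the data are invented for illustration:

```python
# Simple least-squares regression on a centered predictor; data invented.
x = [1, 2, 3, 4, 5, 6]                 # predictor (e.g., image score)
y = [5.0, 4.6, 3.4, 3.9, 2.5, 2.2]     # outcome (e.g., harmfulness)

n = len(x)
mx = sum(x) / n
xc = [a - mx for a in x]               # centering: subtract the mean

# Least-squares slope; with a centered predictor the intercept is y-bar.
b1 = sum(a * b for a, b in zip(xc, y)) / sum(a * a for a in xc)
b0 = sum(y) / n

# Predicted Y for any centered X value.
pred = [b0 + b1 * a for a in xc]
print(round(b1, 3))  # -> -0.566
```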


  • β = -.34 – it was earlier discussed that it is possible to construct a linear equation that allows one to predict the Y value corresponding to any known value of X. Since the regression equation can be used to compute a predicted Y value for any value of X, the accuracy of the prediction is measured by the standard error of estimate, which provides a measure of the average distance (or error) between the predicted Y value on the line and the actual data point (1999).

    Although the image of the EU was a reliable predictor of harmfulness, entitativity was not: the more the EU was perceived as an ally, the less it was perceived as harmful to the United States. However, the interaction between these variables was noted as significant. Following (2003), the interaction between the image of the EU and entitativity was decomposed in two ways, by using one standard deviation above the mean (ally) and one standard deviation below the mean (enemy). The results indicated that for an ally image, high entitativity triggered the perception of less harmfulness, β = -.34, which was significant at t (53) = -2.02, p < .05. These figures emphasize that the relation between the high entitativity of the EU and its perceived degree of harmfulness to the United States depends on whether the participants consider the EU an ally or an enemy.


  • t (53) = 3.38 – again, it was earlier discussed that it is possible to construct a linear equation that allows one to predict the Y value corresponding to any known value of X, and that the accuracy of the prediction is measured by the standard error of estimate, which provides a measure of the average distance (or error) between the predicted Y value on the line and the actual data point (1999).

    Since the interaction between the image of the EU and the entitativity of the EU, according to the manipulated perception of the participants, was significant, the relation was decomposed. The previous entry presented the first of the two decompositions made by the researchers to further illuminate the issue at hand by detailing the results of the experiment through regression. It was found that for the participants in the high-entitativity subgroup who perceived the EU as an enemy, the results indicated that the EU was judged harmful to the United States, β = .55. This coefficient was judged significant by its test statistic, t (53) = 3.38, where 53 is the degrees of freedom.

     

     

  • p < .005 – as was discussed earlier, researchers need to set a risk level (called the alpha level). In most social research, the "rule of thumb" is to set the alpha level at .05, which means that five times out of a hundred you would find a statistically significant difference between the means by chance. The degrees of freedom (df) for the test should likewise be computed, which here is the total number of participants minus 2. This statistical procedure enables researchers to identify the significant changes induced by the manipulations in the study (1999).
  •  

    But this time the margin of error for the correlation test applied by the researchers is stricter than when the hypotheses were evaluated for significance. Instead of the rule-of-thumb .05 level, the researchers used a .005 level. This means that a result this extreme would be expected by chance only 0.5 times per 100 cases (5 in 1,000). This was due to the nature of the particular investigation being carried out: the researchers made it a point to show convincingly that the two variables being measured are correlated with each other.
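The decision rule can be sketched as a simple comparison of the obtained p-value against the chosen alpha level; the p-value below is hypothetical:

```python
# Significance decision sketch: a result is "significant" when its p-value
# falls below the chosen alpha level. The stricter .005 level used here
# rejects far fewer results than the conventional .05 rule of thumb.

def is_significant(p_value, alpha):
    return p_value < alpha

p = 0.02  # hypothetical p-value from a correlation test
print(is_significant(p, 0.05))   # True: passes the conventional level
print(is_significant(p, 0.005))  # False: fails the stricter level
```

The same result can therefore count as significant or not depending on the alpha level the researchers commit to in advance.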

     

     

  • Dependent variable – also known as the response variable or regressand – is a factor whose values in different treatment conditions are compared. The value of the dependent variable varies when the values of another variable – the independent variable – are varied. The independent variable is said to cause an apparent change in, or simply affect, the dependent variable. Researchers usually want to explain why the dependent variable has a given value (1981; 1973).
  •  

    Perceived harmfulness of the EU, as judged by the participants, is the dependent variable of the study. The independent variables the study recognized are the conditions of high and low entitativity as well as the participants' perception of the EU as an ally or as an enemy. These independent variables are said to dictate, direct, influence, and predict the perceived harmfulness of the EU as the dependent variable. As such, changes in the independent variables will signify corresponding changes in the dependent variable. As repeatedly mentioned by the researchers, the relationship between the independent variables and the resulting dependent variable constitutes the core interest of the research activity. In this case, the hypotheses presented by the researchers are clear statements of the assumed relations between the independent and dependent variables. And as the study showed, participants who believed the EU to be an enemy under the high-entitativity condition reported a high perception of harmfulness to the United States, while participants who believed the EU to be an ally under the high-entitativity condition reported a lower degree of perceived harmfulness.

     

     

  • Control – used in scientific experiments to prevent factors other than those being studied from affecting the outcome. Controls are needed to eliminate alternate explanations of experimental results. In other cases, an experimental control is used to prevent the effects of one variable from being drowned out by the known, greater effects of other variables (1981; 1973). In a designed experiment, researchers make it a point to differentiate the control group from the experimental group. The control group serves as the benchmark of the study, since the participants classified under this category are not subjected to a manipulation that will directly affect their responses to the experiment being conducted. On the other hand, the experimental group includes the participants of the study who were manipulated by the researchers to some extent in order for them to behave and portray different responses as the effect of the manipulation (1999).
  •  

    The participants in the first study of the research article being analyzed did not belong to any control group. All of them were assigned to conditions of low or high entitativity. The design of the experiment consisted of two experimental groups, defined by low and high entitativity. Moreover, the participants were likewise assessed regarding their perception of the image of the EU, as to whether they view the organization as an ally or an enemy. Despite the interesting results the researchers came up with, the study lacked a design to find out the image of the EU at varying levels or degrees of entitativity, as well as the perceived image of the EU among individuals who were not manipulated at all. In this light, the researchers proceeded to the second phase of the experiment in order to support their claim on the polarizing effect of entitativity by using a control group.

     

     

  • Analysis of Variance (ANOVA) – identifies sources of variability from one or more potential sources, sometimes referred to as "treatments" or "factors", and is widely used to determine the source of potential problems (2000; 2002). As such, this tool is helpful in tracing as well as measuring the causes of variations between and among the variables of the study. Most commonly, variance statistics are employed (a) for developing taxonomies or systems of classification, (b) to investigate useful ways to conceptualize or group items, (c) to generate hypotheses, and (d) to test hypotheses (1981; 1973).
  •  

    According to (1999), the single-factor independent-measures ANOVA uses data from two or more separate samples to test a hypothesis about two or more population means. The null hypothesis states that there are no differences among the population means. In the case of the two-factor independent-measures ANOVA, hypothesis testing about mean differences is conducted with data from an experiment with two independent variables. The independent variables are identified as factors A and B, and the ANOVA evaluates three separate hypotheses:

     

    1.    there are no mean differences among the levels of  factor A;

    2.    there are no mean differences between the levels of factors B; and

    3.    there is no interaction between factors A and B; that is, the effect of either factor does not depend on the levels of the other factor.

     

    The two-factor ANOVA uses much of the same notation as the single-factor ANOVA. In the experiment conducted by the researchers, two single-factor ANOVAs were used to detail the results of the study, with the high-entitativity group, the control group, and the low-entitativity group as the between-participants factor. The ANOVA indicated significant results that support the hypotheses and predictions stated by the researchers regarding the differing perceptions of the participants toward the EU. The ANOVA was conducted in order to support the researchers' claim on the polarizing effect of entitativity, a factor that may or may not differentiate perceptions of the EU.
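The single-factor ANOVA logic described above can be sketched in a few lines. The three groups mirror the high-entitativity, control, and low-entitativity conditions, but the scores (and the resulting F value) are made-up illustrative data, not the article's:

```python
from statistics import mean

# Single-factor independent-measures ANOVA sketch: F = MS_between / MS_within.
# The null hypothesis says all population means are equal; a large F is
# evidence against it. The scores below are invented for illustration only.
def one_way_anova_F(groups):
    grand = mean(x for g in groups for x in g)
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

high    = [5.1, 4.8, 5.3, 4.9]   # hypothetical entitativity ratings
control = [4.6, 4.7, 4.5, 4.8]
low     = [4.2, 4.0, 4.3, 4.1]
F = one_way_anova_F([high, control, low])
print(round(F, 2))  # compared against the critical F(2, 9) value for significance
```

With three groups and twelve scores, the degrees of freedom work out to (2, 9), mirroring how the article's F (2, 114) reflects three groups and 117 participants.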

     

     

  • Significant – in statistical tests, the term indicates that the result is different from what would be expected by chance. A significant result means that the null hypothesis has been rejected; that is, the data are in the critical region of the distribution and are not what one would expect to obtain if the null hypothesis were true (1999).
  •  

    As practiced in academic research, the significance level of experiments and other statistical analyses varies according to the number of participants in the study as well as the design of the test being conducted. Researchers employ risk levels to take into account the percentage of error that could likely be committed while doing the statistical analyses of the collected data. This serves as a precaution, as well as an assurance of validity and reliability, that the results of the study are legitimate and the product of logical and systematic investigations free of unintentional biases. Defining the significance level adds credibility to the results of the statistical tests that support the claims and arguments being presented by the researchers.

     

     

  • F (2, 114) = 10.45 – this statistical result identifies sources of variability from one or more potential sources, sometimes referred to as treatments, and measures the causes of variations between and among the variables of the study (2000; 2002). The null hypothesis states that there are no differences among the population means, and ANOVA uses data from two or more separate samples to test a hypothesis about two or more population means (1999).
  •  

    The statistical result F (2, 114) = 10.45 supports the predictions and hypotheses of the researchers regarding the variability indicated by the responses of the participants belonging to the different experimental and control groups, thus strengthening the claim of a polarizing effect of entitativity on the participants' perception of the EU. As expected, in the high-entitativity condition, participants judged the EU as more entitative (M = 4.94, SD = .67) than in the control condition (M = 4.66, SD = .65), which in turn scored higher than the low-entitativity condition (M = 4.24, SD = .69). These differences were significant across the treatment groups (high and low) as well as the control group (at p < .03 or less, one-tailed).
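As a minimal sketch of how descriptive values such as M = 4.94, SD = .67 are produced, the following computes a mean and sample standard deviation from hypothetical ratings (the data are invented for illustration, not taken from the article):

```python
from statistics import mean, stdev

# Reporting M and SD: the entitativity ratings of one condition are
# summarized by their mean (M) and sample standard deviation (SD).
# The ratings below are made-up illustrative data.
ratings = [4.0, 5.0, 5.5, 4.5, 6.0, 5.0]
M = mean(ratings)
SD = stdev(ratings)  # sample standard deviation (n - 1 denominator)
print(f"M = {M:.2f}, SD = {SD:.2f}")  # prints "M = 5.00, SD = 0.71"
```

Each condition's (M, SD) pair in the article is produced exactly this way from that condition's raw ratings.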

     

     

  • N = 39 (number of participants) – the population is the entire group of individuals that the researcher wishes to study, literally every single individual of interest. A sample, on the other hand, is a portion of the population that is selected for observation. A parameter is a measurement that describes a characteristic of the population, such as a population average, while a statistic describes a characteristic of a sample. Descriptive statistical methods summarize, organize, and simplify data, while inferential statistics consist of techniques that allow us to study samples and then make generalizations about the population from which they were selected. In random selection, every individual in the population has the same chance of being selected for the sample (1999).
  •  

    The statistical figure N = 39 denotes the number of participants in the experimental group characterized by high entitativity. This number sets the scope and limitations of the study, particularly of the generalizations the researchers present at the end of the experiment. As a basic characteristic of a study group – in this case an experimental group – it directly and indirectly affects the researchers' subsequent assumptions and conclusions. Since the figure N = 39 pertains to the individuals manipulated in the high-entitativity condition, their number likewise dictates the statistical significance, as well as the risk, of the experimental results derived from their answers. This is why it is always necessary to indicate the number of participants in a group: it assures readers of the validity and reliability of the researchers' claims.
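The statistic-versus-parameter distinction in the definition above can be illustrated with the two standard-deviation formulas. The 39 scores here are randomly generated stand-ins, not the article's data:

```python
import random
from statistics import pstdev, stdev

# With N = 39 treated as a sample, inferential work uses the sample SD
# (n - 1 denominator) as a statistic estimating the population SD;
# treating the 39 scores as a complete population would instead use the
# population SD (n denominator). Scores below are invented illustrations.
random.seed(1)
scores = [random.uniform(3.0, 6.0) for _ in range(39)]

sample_sd = stdev(scores)       # statistic: estimates the population SD
population_sd = pstdev(scores)  # SD treating these 39 scores as the whole population
print(len(scores), sample_sd > population_sd)  # n - 1 denominator makes it larger
```

The sample SD is always slightly larger than the population SD of the same scores, which is the correction inferential statistics applies when generalizing beyond the observed group.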

     

  • p < .03 – researchers need to set a risk level (called the alpha level). In most social research, the "rule of thumb" is to set the alpha level at .05, which means that five times out of a hundred you would find a statistically significant difference between the means by chance. The degrees of freedom (df) for the test should likewise be computed, which here is the total number of participants minus 2. This statistical procedure enables researchers to identify the significant changes induced by the manipulations in the study (1999).
  •  

    This is the conventional way of stating and specifying the alpha level used to test the variables investigated in a particular statistical test – in this case, the ANOVA. The figure states that the result of the experiment would occur by chance with a probability of less than .03. Since the result of the ANOVA is significant at this risk level, the claims of the study regarding the variability among the high-entitativity group, the control group, and the low-entitativity group are within the predicted values, with less than a .03 probability of the pattern arising by chance. The differences support the claims stated by the researchers regarding entitativity as a factor with a polarizing effect on perceptions of the EU.

     

     

  • [F (2, 114) = 2.10, n.s.] – again, this statistical result identifies sources of variability from one or more potential sources, sometimes referred to as treatments, and measures the causes of variations between and among the variables of the study (2000; 2002). The null hypothesis states that there are no differences among the population means, and ANOVA uses data from two or more separate samples to test a hypothesis about two or more population means (1999).
  •  

    But in this case, the analysis of variance utilized to determine the variability of the image of the EU in relation to the measured concept of entitativity was not significant. This was interpreted using the mean averages of the variables being investigated to come up with a composite score indicating the sample mean and standard deviation of the image of the EU across entitativity conditions. Comparing the statistical results – the sample means, the sample standard deviations, and the ratio of variances – indicated a nonsignificant finding. This means that the manipulation of the entitativity conditions (high and low) in the experimental groups was not a factor that influenced or dictated the image responses of the participants. In effect, the different degrees or levels of entitativity are negligible when it comes to their influence on the perceived image of the EU.

     

    At first glance this might seem to undermine the polarizing-effect claim, but the nonsignificant result concerns the image of the EU itself: the entitativity manipulation did not change whether participants saw the EU as an ally or an enemy. This is consistent with the design, since image and entitativity are treated as independent factors, and the hypothesized polarizing effect concerns judgments of harmfulness, which were tested subsequently through regression. The researchers' candor in reporting this nonsignificant test nonetheless adds to the statistical credibility of the published results.

     

     

  • Hierarchical multiple regression – multiple regression is used to account for (predict) the variance in an interval-level dependent variable, based on linear combinations of interval or dichotomous independent variables. This can establish that a set of independent variables explains a proportion of the variance in a dependent variable at a significant level, and can establish the relative predictive importance of the independent variables. Power terms can be added as independent variables to explore curvilinear effects, while cross-product terms can be added as independent variables to explore interaction effects (1993; 1997).
  •  

    The level importance is the b coefficient times the mean of the corresponding independent variable. The sum of the level-importance contributions for all the independent variables, plus the constant, equals the mean of the dependent variable. (1982) notes that the b coefficient may be conceived as the "potential influence" of the independent variable on the dependent variable, while level importance may be conceived as the "actual influence." Meanwhile, the beta weights are the regression (b) coefficients for standardized data. Beta is the average amount by which the dependent variable increases when the independent variable increases by one standard deviation and the other independent variables are held constant (1999).

     

    In the article, the researchers averaged the answers of the participants to the perceived-harmfulness items for the commercial-treaty and army issues into a single perceived-harmfulness index (M = 3.66, SD = 1.28). This served as the criterion for the hierarchical multiple regression applied to the data. The levels of entitativity were recoded into two variables, X1 and X2, and each was multiplied by the image-of-the-EU variable to come up with another two variables. The results of the test indicated that the levels of entitativity were not, on their own, predictors of the perceived harmfulness of the EU; it was the image of the EU held by the participants that influenced the perceived harmfulness of the organization to the United States. The use of hierarchical multiple regression at this stage of the study reflects the researchers' focus on identifying and validating the polarizing effect of the entitativity variable on the perceived harmfulness of the EU.
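The recoding step described above can be sketched as follows. The excerpt does not spell out the exact coding scheme, so the dummy coding below (high and low each get an indicator, control is the reference) is an assumption for illustration:

```python
# Hypothetical dummy-coding sketch for the hierarchical regression:
# the three entitativity conditions are recoded into X1 and X2, and each
# is multiplied by the image-of-the-EU score (IEU) to form the interaction
# terms entered at the second step. The coding scheme and the IEU score
# here are illustrative assumptions, not the article's exact coding.

def code_condition(condition, ieu):
    x1 = 1 if condition == "high" else 0
    x2 = 1 if condition == "low" else 0   # control group: X1 = X2 = 0
    return {"X1": x1, "X2": x2, "X1xIEU": x1 * ieu, "X2xIEU": x2 * ieu}

print(code_condition("high", 2.5))     # {'X1': 1, 'X2': 0, 'X1xIEU': 2.5, 'X2xIEU': 0.0}
print(code_condition("control", 2.5))  # all four predictors are zero
```

With this coding, a significant weight on an interaction term (such as X1×IEU) means the effect of the image of the EU differs between that entitativity condition and the control condition.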

     

     

  • Two variables – variables are measurements of behavior that provide data composed of numerical values. These numbers form the basis of the computations done for statistical analyses. Making observations of a dependent variable in a study will typically yield values or scores for each subject. While the raw scores are the original, unchanged set of scores obtained in the study, scores for a particular variable are represented by the letter X. When observations are made of two variables, there will be two scores for each subject, X and Y. The hypothetical concepts or constructs used in theories to describe the mechanisms of behavior are defined in observable and measurable terms called variables (1999).
  •  

    In the study at hand, the researchers furthered the interpretation and analysis of the data collected in stage 2 of the experiment by creating another set of variables from the existing ones. The hierarchical multiple regression called for combining the levels of entitativity defined in the experimental groups with the image of the EU, resulting in two new recoded variables. The regression analysis validated the initial finding that entitativity is not, by itself, a significant factor affecting the participants' perception of the harmfulness of the EU to the United States.

     

     

  • Although X2 did not interact significantly with IEU – in undertaking the hierarchical multiple regression, the researchers were able to come up with new sets of variables. This widened the analysis of the study through different manipulations of the available variables. As such, the researchers were able to identify and validate the variables that are significant to the results of the study.
  •  

    In this case, however, X2 – the product of multiplying an entitativity variable and the image of the EU – was found to be an insignificant factor in the perceptions of the participants. The phrase "did not interact significantly with IEU," the image of the EU, means that the variables that made up the X2 product term had little, negligible, or no effect at all on the resulting perceptions of the participants regarding the harmfulness of the EU to the United States.

     

     

  • linear effect – since the goal of regression is to make the process of prediction simpler and more precise, the regression is set to provide a simplified description of the relation between X and Y. This line identifies the center or central tendency of the relation just as the mean describes central tendency for a set of scores. The line does not have to be drawn in a graph; it can be presented in a simple equation. The advantage of a linear equation is that whenever you have a specific value for X, you can compute the precise value for Y without sketching the graph (1999). 
  •  

    The linear effect referred to in the experiment is the other result of the hierarchical multiple regression analysis conducted by the researchers. Even though X2 did not show a significant effect on the perceived harmfulness of the EU based on the responses of the participants, X1 did, in a linear fashion. In this case, the changes made in X1 during manipulation resulted in a significant and corresponding effect on the value of Y. As such, the researchers furthered their analysis of X1. The completion of these statistical tests defined the path followed by the researchers in their investigation of the polarizing effect of entitativity on the perceived harmfulness of the EU as a threat to the United States.

     

     

  • One standard deviation above the mean (ally) – since X1 had a significant effect on the perceptions of the participants, the researchers proceeded to explore this additional statistical result. The hypotheses presented by the researchers likewise served as a basis for deciding where along the distribution to evaluate the effect. As such, the effect of X1 on harmfulness was computed for two single values, one of which is one standard deviation above the mean, representing those who perceived the EU as an ally. The result supported the claims made by the researchers in the first phase of the research activity: when the image of the EU was that of an ally, higher levels of entitativity resulted in the perception of less harmfulness.
  •  

     

  • One standard deviation below the mean (enemy) – meanwhile, the other effect of X1 on harmfulness was computed using one standard deviation below the mean, representing the participants who perceived the EU as an enemy of the United States. As with the first computation on the significant value of X1, the result of the statistical test for these participants indicated that at higher levels or degrees of entitativity, the EU was perceived as more harmful.
  •  

     

  • Simple slopes – since the goal of regression is to make the process of prediction simpler and more precise, the regression is set to provide a simplified description of the relation between X and Y. This line identifies the center or central tendency of the relation just as the mean describes central tendency for a set of scores (1999).
  •  

    In this case, the regression analysis undertaken by the researchers resulted in simple slopes that follow the logic of regression and distribution. In the general linear equation, the value of b is called the slope, while the value of a is the Y-intercept because it determines the value of Y when X is 0. The slope determines how much the Y variable will change when X is increased by one point (1999). As such, the image of the EU predicted harmfulness at each of the three levels of entitativity, resulting in linear relationships between the variables investigated.
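The slope-and-intercept reading of the linear equation just described can be sketched as follows; the coefficients b and a are hypothetical values, not estimates from the article:

```python
# Linear (regression) equation sketch: Y = bX + a, where b is the slope
# and a is the Y-intercept. The coefficient values are hypothetical
# illustrations, not the article's estimates.
def predict_y(x, b, a):
    return b * x + a

b, a = -0.4, 6.0  # hypothetical slope and intercept
print(predict_y(0, b, a))                         # at X = 0, the prediction equals the intercept a
print(predict_y(1, b, a) - predict_y(0, b, a))    # increasing X by one point changes Y by b
```

This is exactly the property the definition states: the intercept fixes Y at X = 0, and each one-point increase in X changes the predicted Y by the slope b.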

     

     

  • Ordinal variable – in statistics, there are four scales of measurement: nominal, ordinal, interval, and ratio. A nominal scale of measurement labels observations so that they can fall into different categories; observations are labeled and organized. In an ordinal scale of measurement, observations are ranked in terms of size or magnitude. As the word ordinal implies, the investigator simply arranges the observations in rank order. In an interval scale of measurement, intervals between numbers reflect differences in magnitude; that is, it allows the investigator to measure differences in the size or amount of events. A ratio scale of measurement has a meaningful zero point, and thus ratios of numbers on the scale do reflect ratios of magnitude (1999).
  •  

    The ordinal variable being referred to in the article is the level or degree of entitativity. It specifically signifies the high, low, and absent (control) conditions of entitativity used during the hierarchical multiple regression analysis. The high, low, and control conditions constitute the rank ordering – the relative size or magnitude – of the entitativity variable. This treatment helped the researchers in providing explanations and in testing the different levels of entitativity.

     

     

  • β = -.32* – it was earlier discussed that it is possible to construct a linear equation that allows one to predict the Y value corresponding to any known value of X. Since the regression equation can be used to compute a predicted Y value for any value of X, the accuracy of the prediction is measured by the standard error of estimate, which provides a measure of the average distance (or error) between the predicted Y value on the line and the actual data point (1999).
  •  

    The value -.32 is a standardized regression weight, with the asterisk marking statistical significance. The presented numerical value highlights and further emphasizes that the image of the EU was a reliable predictor of harmfulness: at higher entitativity, participants who believed the EU to be an enemy indicated a higher perception of harmfulness.
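The standard error of estimate mentioned in the definition can be computed directly from predicted and actual values; the paired data below are invented for illustration:

```python
from math import sqrt

# Standard error of estimate sketch: the average distance between the
# predicted Y values on the regression line and the actual data points,
# SEE = sqrt(SS_residual / (n - 2)). The paired values are made-up data.
actual    = [3.0, 4.0, 5.0, 4.5, 6.0]
predicted = [3.2, 3.8, 5.1, 4.6, 5.8]

ss_residual = sum((y - yhat) ** 2 for y, yhat in zip(actual, predicted))
see = sqrt(ss_residual / (len(actual) - 2))
print(round(see, 3))
```

A smaller SEE means the regression line's predictions sit closer to the observed data points, which is what makes a predictor like the image of the EU "reliable."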

     

     

