The concept of experimental validity: external and internal validity and their connection

Validity serves the two main goals of any science: knowledge and power.

Today it is often argued that the concept of validity is a unitary one: an evaluation criterion or evaluation procedure is either valid or invalid, and nothing more. Methods of verification, however, are varied and numerous: some are theoretical or constructive, while others are pragmatic or simply empirical.

Validation techniques are very common and their number is growing rapidly, but there are only two fundamental, mutually interdependent types of validity: theoretical and practical. They correspond to the two fundamental goals of science, knowledge and power. On the one hand, science seeks to know reality, to explain and understand it, that is, to consider one aspect of reality (explaining it) in connection with other aspects (understanding them).

On the other hand, science seeks the ability to intervene in reality and to master it to some extent, whether by adapting to it or by modifying it so that it better suits our purposes. These are two different goals, but they are not independent of each other.

In psychometrics, validity is understood as the degree to which an instrument or procedure measures what it is intended to measure. In other words, a measurement method is valid insofar as it effectively reflects the psychological variable at which it is aimed. Validity is assessed through the measure's relationship with some criterion, for example, results on other variables or related tasks. In this sense, different types of validity can be distinguished: predictive validity, convergent validity, construct validity, and so on.

Validity is a judgment or assessment of how well a test measures what it is intended to measure in a given context. Specifically, it is a matter of making evidence-based judgments about the appropriateness of inferences drawn from test results.

Validity ensures that what is measured is what was intended, and not something else. A test is considered valid if it serves our purposes.

Validation is a process that allows a measurement “instrument” to be progressively refined, adjusted, and improved. Thus, a test on measurement theory will be valid if it actually measures, for example, the knowledge that students have of the subject and not (perhaps unintentionally) something else.

Finally, the term validity refers to the extent to which a test measures what it is intended to measure. In this sense, a test is suitable for measuring spatial reasoning, for example, if it measures this type of reasoning and not something else.

Validity in psychology

Validation is the process of gathering and evaluating evidence about validity. Both the test developer and the test user can play a role in validating a test for a specific purpose.

One of the ways measurement professionals have traditionally conceptualized validity is in three categories:

  • Content validity. Content validity determines whether a test is representative of all aspects of the construct: does the test fully cover what it aims to measure?
  • Criterion-related validity (concurrent or predictive) assesses how closely the results of a test correspond to the results of another measure of the same construct.
  • Construct validity concerns whether the measurement method actually captures the construct you want to measure.

Validity can be divided into two main types:

  1. Internal validity refers to the degree of confidence that the causal relationship being tested is credible and independent of other factors or variables.

One of the keys to understanding internal validity is recognizing that, in experimental research, it refers both to how well the study was conducted (research design, operational definitions used, how variables were measured, what was and was not measured, etc.) and to how confidently one can conclude that the change in the dependent variable was produced solely by the independent variable and not by extraneous factors.

In their classic book on experimental research, Campbell and Stanley (1966) identify and discuss 8 types of extraneous variables that, if not controlled, can compromise the internal validity of an experiment.

  • History

These are specific events experienced by subjects between the different measurements made in the experiment. Such experiences act as additional, unplanned independent variables. Studies that take repeated measurements of subjects over long periods of time are more likely to be affected by history than those that collect data over shorter periods or do not use repeated measures.

  • Maturation

These are natural (not experimenter-induced) changes that occur as a result of the normal passage of time. For example, the more time passes in a study, the more likely it is that subjects become tired or bored, or more or less motivated depending on hunger, thirst, and so on.

  • Testing

Many experiments pretest subjects, for example to ensure that all subjects start the study at approximately the same level. Taking the pretest itself may affect subjects' performance later in the study.

  • Instrumentation

Changing measurement methods (or their application) during a study affects what is measured.

  • Statistical regression

This occurs when research subjects are selected as participants because they scored extremely high or extremely low on some performance measure. Retesting such subjects almost always yields a different distribution of scores, and the mean of this new distribution will be closer to the population mean (a small simulation illustrating this appears after this list).

  • Selection

Subjects in comparison groups (e.g., control and experimental) should be functionally equivalent at the start of the study. If the comparison groups differ from each other at the start of the study, the results are biased.

  • Experimental mortality

Subjects drop out of the study. If one comparison group experiences a higher rate of subject exclusion/mortality than the other groups, then the observed differences between the groups become questionable.

  • Selection interaction

In some studies, the selection method interacts with one or more of the other threats (described above) to bias the results of the study.
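
Regression to the mean, listed above as a threat, can be made concrete with a small simulation. The following sketch (hypothetical numbers, written in Python with NumPy) selects subjects because of extreme scores on a noisy pretest and then retests them with no intervention at all; the retest mean of the selected group drifts back toward the population mean purely because of measurement noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_ability = rng.normal(100, 10, n)            # stable "true" scores
pretest = true_ability + rng.normal(0, 10, n)    # observed pretest = truth + noise
posttest = true_ability + rng.normal(0, 10, n)   # retest with new noise, no treatment applied

selected = pretest > 120                         # pick only subjects with extreme pretest scores

print("Population mean:         ", round(true_ability.mean(), 1))
print("Selected group, pretest: ", round(pretest[selected].mean(), 1))
print("Selected group, posttest:", round(posttest[selected].mean(), 1))
# The posttest mean of the extreme group falls back toward the population mean
# even though nothing happened to the subjects between the two measurements.
```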

  2. External validity refers to the extent to which the results of a study can be applied (generalized) to other situations, groups, or events.

The credibility of a study is largely determined by its experimental design. To ensure the validity of the instruments or tests you use, you must also consider the validity of the measurements themselves.

The extent to which the results of a study (whether the study is descriptive or experimental) can be generalized/applied to other people or settings reflects its external validity. In general, group studies that use randomization will initially have higher external validity than studies (e.g., case studies and single-subject experimental studies) that do not use random selection/assignment. Campbell and Stanley identified 4 factors that negatively affect the external validity of a study:

  • Interaction of selection and treatment

Interactions between how subjects were selected and treatment may occur. If subjects are not randomly selected from a population, their specific demographic/organismal characteristics may influence their performance, and the results of the study may not be applicable to the population or to another group that more accurately represents the characteristics of the population.

  • Preliminary testing

Pretesting may cause a stronger or weaker response to treatment than would have occurred had subjects not been pretested. In other words, to generalize the results of the study, the researcher would have to specify that the same type of pretest be conducted, since the pretest may serve as an additional, unintended independent variable.

  • Subject reactivity

Subjects' performance in some studies is more a product or response to the experimental conditions (e.g., the situation in which the study is conducted) than to the independent variable.

  • Multiple-treatment interference

Studies that use multiple intervention methods may have limited generalizability, because methods applied earlier in the study may have a cumulative effect on subjects' performance.

There is a difference between internal and external validity.

Internal validity is the degree of confidence that the causal relationship being tested is independent of other factors or variables.

External validity is the extent to which your results can be generalized to other contexts.

  • Increasing internal and external validity

In group research, the main methods used to achieve internal and external validity are randomization, the use of a research design and statistical analysis appropriate to the types of data collected and the questions the researcher(s) are trying to answer. Single-subject experimental studies almost always have high internal validity because the subjects serve as their own controls, but they are extremely low in external validity. Single-subject studies acquire external validity through the process of replication and extension, that is, repeating the study in different conditions, with a different subject, etc.
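
As an illustration of the randomization mentioned above, the sketch below (hypothetical participant list, Python standard library only) randomly assigns subjects to a control and an experimental group; this simple mechanism is what makes the groups functionally equivalent at the start of a group study.

```python
import random

# Hypothetical participant identifiers
participants = [f"subject_{i:02d}" for i in range(1, 21)]

random.seed(42)               # fixed seed only so the example is reproducible
random.shuffle(participants)  # random order removes systematic selection effects

half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("Experimental group:", experimental_group)
print("Control group:     ", control_group)
```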

Factors Threatening Internal Validity[3]

  • Change over time
    (dependence of subjects and the environment on the time of day or season; changes in the person himself, such as aging, fatigue, and distraction during long-term studies; changes in the motivation of the subjects and the experimenter, etc.; cf. natural development)
  • Sequence effect
  • Rosenthal (Pygmalion) effect
  • Hawthorne effect
  • Placebo effect
  • Audience effect
  • First impression effect
  • Barnum effect
  • Confounding
  • Sampling factors: incorrect selection
    (non-equivalence of groups in composition, causing systematic error in the results)
  • Statistical regression
  • Experimental attrition
    (uneven dropout of subjects from compared groups, leading to non-equivalence of groups in composition)
  • Natural development
    (the general property of living beings to change; cf. ontogeny)

And so on.

What is the validity of a psychological experiment?

Despite its importance, the concept of experimental validity has received little development since its inception. For this reason, the goal here is to provide a critical analysis, by means of philosophical analysis, along three axes:

  • the distinction between alternative hypotheses and experimental artifacts;
  • lists of threats to experimental validity;
  • perceived tension between internal and external validity.

If the concept of experimental validity has been insufficiently developed, this is due to limited consideration of both causal assumptions and uncertainty in the experimental context.

The reliability of a psychological experiment measures the consistency, testability, or repeatability of the study. If a study can be repeated and still produces the same results (either in a different group of participants or over a different period of time), then it is considered reliable.

For its part, validity in psychology (and beyond) measures the relative precision or accuracy of the conclusions drawn from a study; it reflects the accuracy and correctness of psychological research. To quantify the validity of a measure, it must be compared with a criterion.
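
As a minimal sketch of such a comparison (hypothetical scores, Python with NumPy), the validity coefficient is simply the correlation between the measure and the criterion; the same calculation underlies the concurrent and predictive validity described below.

```python
import numpy as np

# Hypothetical data: scores on the new test and on a criterion measure
new_test  = np.array([52, 61, 58, 70, 66, 75, 80, 68, 59, 73])
criterion = np.array([55, 60, 57, 72, 65, 78, 82, 66, 61, 70])

# Pearson correlation between test and criterion serves as the validity coefficient
r = np.corrcoef(new_test, criterion)[0, 1]
print(f"Validity coefficient r = {r:.2f}")   # values close to 1 indicate strong criterion validity
```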

There are different types of validity of a psychological experiment:

Test validity

Test validity is a measure of how much meaning can be attributed to a set of test results. In psychological and educational testing, where the accuracy and importance of results are paramount, test validity is especially important.

Test validity involves a number of components, including criterion validity, content validity, and construct validity. If a research project scores high in these areas, the overall validity of the test is high.

  • Criterion validity

Criterion validity determines how well a test corresponds to an external criterion, such as a specific skill set:

  • Concurrent validity measures a test against a reference test, and a high correlation indicates that the test has strong criterion validity.
  • Predictive validity is a measure of how well a test predicts future performance, for example whether a good GPA in high school predicts good results in college or university.
  • Content validity

Content validity determines how well a test compares to the real world. For example, a scholastic aptitude test should reflect what is actually taught in the classroom.

  • Construct validity

Construct validity is a measure of how well a test measures the theoretical construct it claims to measure. A test designed to measure depression should measure only that specific construct and not closely related constructs such as anxiety or stress.

Validity of the method

Validity, together with reliability, constitutes a fundamental property of psychometric methods and, more generally, of procedures for observing and recording psychological variables. In this sense, it also applies to experimental procedures, where a distinction is made between internal and external validity.

Method validation refers to the process of experimentation and evaluation to determine the performance characteristics of a method. A method is considered validated when the tester has confirmed, through objective evidence and evaluation of these experiments, that the method is fit for its intended use (fit for purpose).

Two of these performance characteristics are precision and accuracy.
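
As an illustration with hypothetical replicate measurements (not taken from any particular validation study), precision can be summarized as the spread of repeated measurements and accuracy as the bias of their mean relative to a known reference value:

```python
import statistics

reference_value = 10.0                        # hypothetical known "true" value
replicates = [9.8, 10.1, 9.9, 10.2, 10.0]     # hypothetical repeated measurements

mean_result = statistics.mean(replicates)
precision = statistics.stdev(replicates)          # spread of the repeated measurements
accuracy_bias = mean_result - reference_value     # systematic deviation from the reference

print(f"Mean result:                   {mean_result:.2f}")
print(f"Precision (sample SD):         {precision:.2f}")
print(f"Accuracy (bias vs. reference): {accuracy_bias:+.2f}")
```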

The outcome of validation is a decision regarding the controls that need to be put in place to ensure that the method remains valid.

There are various methodologies available for determining the content validity of a test or instrument. Some authors state that these include test results, expert opinion, cognitive interviews, and expert judgment. Others perform statistical analyses with various “formulas” (one such formula is sketched after the list below). Qualitative data are obtained through methods such as:

  • Expert commission

This is a methodology in which the validity of an instrument is determined by a panel of experts covering each of the scientific areas addressed in the assessment instrument; the panel must review, at a minimum, the consistency of the items with the course objectives, the difficulty of the items, and the cognitive abilities being assessed. This methodology is most commonly used for content validation.

  • Cognitive interview

This is a method in which participants think out loud while performing the required activity. The resulting verbal report is recorded for subsequent transcription and analysis.

For better results regarding content validity, it is suggested that more than one methodology be used so that they complement one another, thereby increasing the rigor of the process.
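
One widely used example of such a “formula” for content validity (not named in the text above, so it is offered here only as an illustration) is Lawshe's content validity ratio, computed per item from expert ratings:

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e is the number of experts
    who rate the item as 'essential' and N is the total number of experts on the panel."""
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical panel: 10 experts, 8 of whom rate a given item as essential
print(content_validity_ratio(8, 10))  # 0.6; compared against a critical value that depends on panel size
```

Items whose ratio falls below the critical value for the given panel size are usually revised or dropped.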


Types of experiments

Depending on the method

There are mainly three types of experiments:

  • Laboratory experiment
  • Field or natural experiment
  • Formative or psychological and pedagogical experiment

Depending on the level of awareness

Depending on the level of awareness, experiments can also be divided into

  • those in which the subject is given full information about the goals and objectives of the study,
  • those in which, for the purposes of the experiment, some information about it is withheld from the subject or distorted (for example, when the subject must not know the true hypothesis of the study, they may be told a false one),
  • and those in which the subject is unaware of the purpose of the experiment or even the fact of the experiment itself (for example, experiments involving children).