After a brief consideration of the development of meta-analyses as a joint discussion of results from a research area across development stages 0, 1, and 2, it is concluded that the present form 2.0 is unsuitable to serve as a basis for theory building. Further development of this tool into a meta-analysis 3.0 is necessary for this purpose, which requires the validity of the independent variables in the primary studies, the reduction of the error variance of the dependent variables, stability of the effects across the primary studies, and a quantitative comparison between observed and predicted effects in the primary studies. In the current meta-analyses 2.0, a concrete single-case approach creates the impression that mainly everyday ideas are investigated, which one would like to generalize to a population of other conditions. Furthermore, the results of the existing meta-analyses are either homogeneous and very small or heterogeneous. The procedure of a meta-analysis 3.0 is described in general and carried out hypothetically. The conclusion can be summarized as follows: meta-analysis 3.0 is indispensable as a tool for theorizing, and theorizing presupposes meta-analysis 3.0. The link in this interdependence is abduction as a research strategy.
Chapter 5: Shared Representations and Asymmetric Social Influence Processes in Small Groups. R. Scott Tindale, Christine M. Smith, Linda S. Thomas, Joseph Filkins, Susan Sheffey, Loyola ...
Volume 1. Contents: Preface. Part I: Introduction. J.H. Davis, Small-Group Research and the Steiner Questions: The Once and Future Thing. Part II: Social Aggregation and Combination Models. H.W. Crott, J. Werner, C. Hoffmann, A Probabilistic Model of Opinion Change Considering Distance Between Alternatives: An Application to Mock Jury Data. J.H. Davis, Group Decision Making and Quantitative Judgments: A Consensus Model. P.R. Laughlin, Group Decision Making and Collective Induction. R.S. Tindale, C.M. Smith, L.S. Thomas, J. Filkins, S. Sheffey, Shared Representations and Asymmetric Social Influence Processes in Small Groups. N.L. Kerr, R.J. MacCoun, G.P. Kramer, "When are N Heads Better (or Worse) Than One?": Biased Judgment in Individuals Versus Groups. T. Kameda, Procedural Influence in Consensus Formation: Evaluating Group Decision Making From a Social Choice Perspective. Part III: Social Information-Processing Models. G. Stasser, S.I. Vaughan, Models of Participation Du...
Collaborative work with Erich H. Witte and Klaus Boehnke on introducing the distribution approach to arriving at values at the culture level from value preferences of individuals. This approach is meant as an alternative to the current methodology of using average scores over individuals in a culture as evidence for the culture's value typology.
In the last decade there have been intense disputes about the scientific status of psychology, especially in connection with the replicability of prominent effects in textbooks. Very different statements about the reasons for this situation have been given (for instance, sample sizes that are too small, inferential-statistical considerations, faulty applications of evaluation methods, lack of theory building, etc.). All these reasons have their relative justification. What remains unresolved, however, is whether these deficiencies are really the fundamental reasons for the state of psychology as a science or just surface symptoms. Thus, one needs a diagnosis that can identify these fundamental reasons. For this purpose, systematicity theory is suitable, since it can address the scientific status of psychology, e.g., in distinction to everyday psychology, via nine aspects. These nine aspects are well founded and at the same time can give hints on how to improve the scientific status of psychology. Only by advancing a common and integrated development of psychology will it be possible to improve its scientific status. Individual improvements of rather arbitrarily selected aspects cannot achieve that. The purposeful connection of diagnosis and therapy will be demonstrated by the example of psychology in order to guide the scientific process in a focused way.
There are three fundamental criteria for selecting individuals for professorships. A general criterion relates to the state of the scientific discipline of psychology; it should be derived from criticism of the discipline and of rewards perceived as inadequate. According to this criterion, preference was given to those applicants who were least oriented towards the inappropriate criteria; the main goal is the positive development of the subject in the future. The second criterion refers to the needs of the institute as a smaller, concrete unit: there are tasks that a person must fulfill as a professor. By focusing on research, existing priorities can be identified and considered when filling the professorship. The selection serves the positive scientific development of the institution.
Every measurement is subject to error. To describe measurement error, there is the classical approach, which does without a concrete measurement instrument. After the application of a specific instrument, both approaches are compared, and conclusions are drawn for a more complex error theory. The principal limitation of human capabilities is assumed as the basis for error. From this principal limitation it follows that the anthropometric reliability of measurement instruments cannot be arbitrarily improved. Nevertheless, in order to control the measurement error of the scale values, quantization of continuous measurement values is proposed. Software was developed for this purpose. The simulations with this software show how, depending on the reliability of the concrete measuring instrument, the number of quanta and the reduction of the error dispersion affect the measurements. Only by quantization is a sufficiently stable relationship established between the scale value of the theoretical construct (trait) and the empirical measurements.
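A minimal sketch of this kind of simulation, assuming a classical test-theory model (observed = true + error, with the error variance fixed by the instrument's reliability) and equal-width quantization of the observed scores; the function name, parameter values, and quantization scheme are illustrative assumptions, not the authors' software:

import numpy as np

def simulate_quantized_measurement(n_persons=10_000, reliability=0.8,
                                   n_quanta=5, seed=1):
    """Classical-test-theory sketch: observed = true + error, with the error
    variance chosen so that var(true) / var(observed) = reliability.
    Observed scores are then mapped onto n_quanta equal-width categories."""
    rng = np.random.default_rng(seed)
    true = rng.standard_normal(n_persons)                   # latent trait, var = 1
    error_sd = np.sqrt((1.0 - reliability) / reliability)   # yields the target reliability
    observed = true + rng.normal(0.0, error_sd, n_persons)
    # Equal-width quantization of the observed continuum into n_quanta bins,
    # each bin represented by its midpoint.
    edges = np.linspace(observed.min(), observed.max(), n_quanta + 1)
    bin_idx = np.digitize(observed, edges[1:-1])
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    quantized = midpoints[bin_idx]
    return {
        "r_true_observed": np.corrcoef(true, observed)[0, 1],
        "r_true_quantized": np.corrcoef(true, quantized)[0, 1],
        "error_sd_observed": np.std(observed - true),
        "error_sd_quantized": np.std(quantized - true),
    }

if __name__ == "__main__":
    # Vary instrument reliability and number of quanta to see how both
    # affect the trait-measurement relation and the error dispersion.
    for rel in (0.6, 0.8, 0.9):
        for k in (3, 5, 9):
            print(rel, k, simulate_quantized_measurement(reliability=rel, n_quanta=k))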
The modern working world faces numerous developments and upheavals that are also changing the demands placed on leaders. Models that focus on dyadic interaction (the motivation of individual employees by the leader) still dominate. This contribution contrasts these person-centered approaches with a holistic perspective of synergetic leadership: leading teams as a microsystem. First, the foundations of synergetic leadership and its roots in systems theory are described. Following the transfer of the general characteristics of social systems to the leadership task, six management functions of synergetic leadership of microsystems are derived, and the integration of these management functions is explained.
In this study, East and West German employees are compared with regard to their personal and social identity, insofar as these can be captured by the concept of work virtues. Several misunderstandings between the two groups can be identified, which can be eliminated through clarification. Nevertheless, the social identity of East German employees is threatened, since in their self-image they tend to orient themselves towards West German employees and to distance themselves from their own colleagues. The results for the West Germans, by contrast, correspond to the theoretical assumptions of social identity theory and self-categorization theory. Finally, the dimension distinguishing individualistic from collectivistic cultures is pointed out and proposed as a complement to the intergroup theories.
Based on a sample of N = 102 persons, the routine meeting is analyzed as a management task. Its form of implementation is described, the goals that have been set are considered, and the differences between reality and expectation are assessed. Finally, the routine meetings are differentiated into three complex forms, which deal with information processing, the socio-emotional relationship, and expectations. From these results it is concluded that routine meetings are necessary but should be complemented by specific procedures in order to do justice to the variety of tasks they are intended to fulfill. The factor analysis of all items resulted in a structure of a genera...
This paper examines students' expectations and conceptions regarding the social psychology lecture they attended. Data were collected from 97 persons using a questionnaire containing an open-ended part, a closed-ended part, and a sociodemographic part. It emerged that concerns exist above all with regard to methodological and theoretical foundations. Furthermore, differences emerged in expectations regarding the degree program in general and the social psychology lecture in particular. For instance, great value is placed on practical insights and self-knowledge in one's training, whereas less of this is expected from the social psychology lecture. Finally, a comparison of means revealed significant differences in the motivation of psychology majors and minors. Whereas students majoring in psychology report greater examination pressure and a more strongly practical...
Starting from research on social prejudice and social identity theory, the partner choice of German and second-generation Turkish adolescents is analyzed. The study is based on a written survey of 100 German and 100 Turkish adolescents concerning their current partnership situation as well as criteria of partner choice from their own perspective (autostereotype) and from the presumed perspective of the other group (presumed autostereotype). The results show a strong sense of belonging to one's own national group as well as differences in the autostereotypes and between the autostereotypes and the presumed autostereotypes of the respective other group. Likewise, large differences can be found between the autostereotypes and the presumed autostereotypes from one perspective. The results in this domain of partner choice indicate neither an equality of the two cultures' views nor a detailed understanding of the respective...
The effect size measure narrowly concerns the causal influence, or correlation, between an experimental setting's independent and dependent variables. A given empirical effect's significance, by contrast, pivots on its statistical, theoretical, and practical aspects, that is, the effect's magnitude, how well a theory models it, and the utility we ascribe to it. Rather than mix these aspects, a sound evaluation must keep them apart. Once we evaluate theories by data, however, effect size measures that model participants' average reaction cannot stand proxy for participants' observed individual reactions. To facilitate the construction of valid, informative theories of intended phenomena, we present and exemplify three indices that assess a theory's quality in terms of its data similarity, reliability, and validity.
To justify the effort of developing a theoretical construct, a theoretician needs data that bear out a non-random effect of sufficiently high replication probability. To establish such effects statistically, researchers (rightly) rely on a t-test. But many pursue questionable strategies meant to lower the cost of data collection. Our paper reconstructs two such strategies. Both reduce the minimum sample size (NMIN) necessary under conventional error rates (α, β) to register a given effect size (d) as statistically significant. The first strategy increases the β-error rate; the second treats the control group as a constant, thereby collapsing a two-sample t-test into its one-sample version. As an example, a two-sample t-test for d = 0.50 given α = β = 0.05 requiring NMIN = 176 becomes a one-sample t-test given α = 0.05, β = 0.20 requiring NMIN = 27. Not only does this decrease the replication probability from (1-β) = 0.95 to (1-β) = 0.80; worse, the second strategy collapses a Neyman-Pearson test into a Fisher test, which cannot corroborate hypotheses meaningfully. The ubiquity of both strategies arguably makes them partial causes of the confidence crisis. But since research groups can collaborate to reach NMIN jointly, their individual resource limitations justify neither strategy.
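As a rough check on the sample sizes quoted in this abstract, a standard power analysis reproduces numbers of this order if one assumes one-sided tests (an assumption of this sketch, not a claim about the paper); a minimal example using statsmodels:

import math
from statsmodels.stats.power import TTestIndPower, TTestPower

# Two-sample t-test: d = 0.50, alpha = 0.05, power = 1 - beta = 0.95 (one-sided).
n_per_group = TTestIndPower().solve_power(effect_size=0.50, alpha=0.05,
                                          power=0.95, alternative="larger")
print("two-sample total N:", 2 * math.ceil(n_per_group))  # on the order of NMIN = 176

# One-sample t-test: d = 0.50, alpha = 0.05, power = 1 - beta = 0.80 (one-sided).
n_one_sample = TTestPower().solve_power(effect_size=0.50, alpha=0.05,
                                        power=0.80, alternative="larger")
print("one-sample N:", math.ceil(n_one_sample))  # on the order of NMIN = 27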
The effect size measures the causal influence, or correlation, between an experimental setting's independent and dependent variables. But an effect's overall significance pivots on its statistical, theoretical, and practical aspects, that is, the effect's magnitude, how well a theory predicts it, and the utility ascribed to it. Rather than mix these three aspects, a sound evaluation of empirical effects keeps them apart. When evaluating theories by data, moreover, effect sizes for participants' theoretically modelled average reaction cannot stand proxy for their observed individual reactions. To facilitate constructing valid, informative theories of an intended phenomenon, we present and exemplify two indices that assess a theory's quality as its data similarity and its validity.
To justify the effort of developing a theoretical construct, a theoretician needs empirical data that support a non-random effect of sufficiently high replication probability. To establish these effects statistically, researchers (rightly) rely on a t-test. But many pursue questionable strategies that lower the cost of data collection. Our paper reconstructs two such strategies. Both reduce the minimum sample size (NMIN) sufficing under conventional error rates (α, β) to register a given effect size (d) as a statistically significant non-random data signature. The first strategy increases the β-error; the second treats the control group as a constant, thereby collapsing a two-sample t-test into its one-sample version. (A two-sample t-test for d = 0.50 under α = β = 0.05 with NMIN = 176, for instance, becomes a one-sample t-test under α = 0.05, β = 0.20 with NMIN = 27.) Not only does this decrease the replication probability of data from (1-β) = 0.95 to (1-β) = 0.80; the second strategy in particular cannot corroborate hypotheses meaningfully. The ubiquity of both strategies arguably makes them partial causes of the confidence crisis. But as resource pooling would allow research groups to reach NMIN jointly, a group's individually limited resources justify neither strategy.
In individual idea generation, early ideas are usually less creative than later ones. Findings on brainstorming, divergent thinking, and creative cognition suggest that this serial order effect reflects tendencies to think in cognitively undemanding, conforming ways. The rationale of a new facilitation technique for idea generation, the elimination method (EM, Witte, 2009), is to use the serial order effect in a controlled way: first, conventional ideas are demanded.
The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of key effects that top-tier psychology journals report falls between 36 and 39% (objective vs. subjective rate; Open Science Collaboration, 2015). So the standard mode of applying null-hypothesis significance testing (NHST) fails to adequately separate stable from random effects. Therefore, NHST does not fully convince as a statistical inference strategy. We argue that the replicability crisis is "home-made" because more sophisticated strategies can deliver results whose successful replication is sufficiently probable. Thus, we can overcome the replicability crisis by integrating empirical results into genuine research programs. Instead of continuing to narrowly evaluate only the stability of data against random fluctuations (discovery context), such programs evaluate rival hypotheses against stable data (justification context).
The main intentions of this paper are to discuss the similarities and dissimilarities between different dynamic models of social influence in small groups and to develop a dynamic version of the group situation theory. As theoretical approaches, the following were chosen: a) the social transition scheme model (STS) developed by Kerr (1981, 1982), b) the social interaction sequence model (SIS) proposed by Stasser & Davis (1981), and c) the dynamic theory of social impact (DTSI) published by Nowak, Szamrej, and Latané (1990). These theories were compared with the group situation theory developed by Witte (1987, 1990) and now modified as a dynamic version. This group situation theory tried to explain the different meanings of a group decision for its members after the group discussion, and not only the change of opinions to reach a consensus. However, this qualitative change of the group situation in its normative components has to be modelled in the future.
Group situation theory has been developed as a general framework in order to integrate different theoretical approaches in the area of small group research. Its development is based on some fundamental assumptions combining normative influences with an information integration process of the individual group member. Under these basic assumptions as guidelines, social impact theory has been integrated into the group situation theory, which results in an extended group situation theory (EGST). After this extension, the similarities of this approach with the social decision scheme approach are discussed. The predictions are very similar, but the kinds of theories are different: social decision schemes are a family of descriptive models predicting the transformation of the group members' distribution after discussion using qualitative choices; EGST is a family of information integration processes explaining why a given change has happened after discussion, using also quantitative choices. As a next step, the Bales research tradition on interaction frequencies and their influence on group decisions has been integrated into EGST to find its specific place in a broader theoretical concept.
Notwithstanding the body of venerable and formidable criticism (e.g., Birnbaum, 1962; Bakan, 1966; Meehl, 1967; Morrison & Henkel, 1970), demand remains unwavering for inferential argument articulated via significance. To date there are several possible grounds for this persistence, perhaps foremost of these being a certain objectivity obtained from algorithmic qualification of results. Relieved of the burden of exact qualification, research requires only the null hypothesis prediction. In this paper we stand with previous objections to the current standard by arguing that its usage has undesirable effects on theory development and therefore should be modified so that prediction takes a more specific form. Having discussed this, and in view of other considerations, an alternative is put forward and discussed.
This study tries to define relevant terms. It outlines those components which influence the processes of motivation gains and losses in groups, namely the unit of research, the measure of performance, the concatenation operation, and the type of task. Because of its topicality, the Köhler effect is the focus of this study. This effect can be further differentiated into a) an additive, b) a conjunctive, c) a loss-avoiding, and d) a compensatory Köhler effect, depending on the baseline used (e.g., the average, the poorest, or the most capable group member) or on whether a Ringelmann effect can be expected.
In this article, two scientific approaches are conjoined: small group research and evolutionary theory. In the past 50 years, small group researchers have identified various deficits in group performance. Presently, how to improve group interaction is a focal point of their work. Meanwhile, social psychologists are paying more attention to evolutionary theory, and process losses in group performance may be evaluated differently from such a perspective. It appears that proximate performance losses could mean ultimate gains for the individual. A reduction in group performance should therefore be anticipated from a proximate perspective, because it represents an individual selection advantage from the ultimate view. As a means of intervention, group facilitation techniques are the key to proximate gains in group processes.
In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost in the ongoing replicability-crisis.
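The kind of data simulation meant here can be illustrated with a short sketch (not the authors' code; the effect size, group size, and α below are hypothetical choices) that estimates the false-positive rate under H0 and the power, i.e., the expected replication rate, under a true effect:

import numpy as np
from scipy import stats

def nhst_error_rates(d=0.5, n_per_group=20, alpha=0.05, n_sims=20_000, seed=1):
    """Estimate by simulation (a) the false-positive rate when H0 is true and
    (b) the power of a two-sample t-test when a true effect of size d exists."""
    rng = np.random.default_rng(seed)

    def significance_rate(true_d):
        hits = 0
        for _ in range(n_sims):
            control = rng.standard_normal(n_per_group)
            treatment = rng.standard_normal(n_per_group) + true_d
            p = stats.ttest_ind(treatment, control).pvalue
            hits += p < alpha
        return hits / n_sims

    return {"false_positive_rate": significance_rate(0.0),
            "power": significance_rate(d)}

# With only 20 participants per group, the power for d = 0.5 comes out low
# (roughly a third), which is why underpowered significant results replicate poorly.
print(nhst_error_rates())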
Commentary on: Rolf A. Zwaan, Alexander Etz, Richard E. Lucas, and M. Brent Donnellan (2017). Making Replication Mainstream. Behavioral and Brain Sciences, published online: 25 October 2017, pp. 1-50; forthcoming at https://www.cambridge.org/core/journals/behavioral-and-brain-sciences. Abstract:
Before replication becomes mainstream, its potential for generating theoretical knowledge had better be clear. Replicating statistically significant non-random data shows that an original study made a discovery; replicating a specified theoretical effect shows that an original study corroborated a theory. Yet only in the latter case is replication a necessary, sound, and worthwhile strategy.
We reanalyze the recent multilab preregistered study on ego-depletion by Hagger and Chatzisarantis (2016) as if their data were obtained under the research program strategy (Witte & Zenker, 2016a, 2016b). This strengthens Hagger and Chatzisarantis's (2016) main conclusion, because our reanalysis more directly corroborates the absence of a medium-sized, or a small-sized, ego-depletion effect (d = .50 under α = β = .05; d = .20 under α = β = .01). We explain how a smaller ego-depletion effect of d = .04 can be tested under similar conditions, having determined this value by maximum likelihood estimation, and compare the research program strategy to a standard meta-analytic integration.
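One way to read the maximum likelihood estimation mentioned here is as a fixed-effect (common-d) model across labs, whose ML estimate is the inverse-variance-weighted mean of the per-lab effect sizes; the per-lab numbers in this sketch are hypothetical placeholders, not the multilab data:

import numpy as np

# Hypothetical per-lab effect sizes and group sizes -- NOT the Hagger &
# Chatzisarantis (2016) data, just placeholders to show the computation.
d = np.array([0.10, -0.05, 0.08, 0.02, -0.01, 0.12, 0.00])
n1 = n2 = np.full_like(d, 60.0)

# Under a fixed-effect normal model with known sampling variances, the ML
# estimate of the common effect is the inverse-variance-weighted mean of the
# per-lab estimates; var(d_i) uses the usual large-sample approximation.
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
w = 1.0 / var_d
d_ml = np.sum(w * d) / np.sum(w)
se_ml = np.sqrt(1.0 / np.sum(w))
print(f"ML (fixed-effect) estimate: d = {d_ml:.3f} +/- {se_ml:.3f}")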
We reconstruct recent work on macro-social stress (Chou et al., 2016) as if it were an instance of a research strategy that tests point-alternative hypotheses within a full-fledged research program. Since this strategy is free of various deficits that beset dominant strategies (e.g., meta-analysis, Bayes-factor analysis), our article demonstrates one way in which the confidence crisis may be overcome.