Fitzgerald Skills
Children can do anything with technology; they are born with a chip in their brains that
makes them naturally attuned to all things wired. Along with this new physical endowment, they
are naturally able to navigate the Internet, seeking, finding, and applying information for all
their needs. They are invulnerable to the scams and low-quality information lurking there.
Absurd? Yes. However, this false belief – that children are somehow naturally more
able to use technology and are also somehow more Internet savvy – is a common one. Problems
with information quality, both on the Internet and in more traditional formats, are well documented.
Professors observe that entering college undergraduates “are ill prepared to function
in a technological and information-rich environment” (Quarton, 2003, p. 120). The Stanford
Web Credibility Project (Stanford Persuasive Technology Lab, 2004) documents gullibility in
the entire Internet-using population.
While it is true that many of today’s youngsters live in a wired world from birth and are
hence more comfortable and adept with technology than their elders, problems remain. These
problems are old problems: how to detect misinformation; how to tell when someone is lying;
how to counter the subliminal commercial messages that saturate much broadcast media today;
and how to evaluate the strength of an argument, among many others. While much could and
should be done to apply technology to improve web information quality, a major portion of the
task will continue to fall on users. Children and youth are traditionally considered among the
most vulnerable members of our society. This paper focuses on the skills that people, especially
young people, need to evaluate information.
With this focus, I can only provide a brief overview of a vast territory, synthesized
through the lens of the user’s cognitive perspective. The important problems of information
quality and the philosophical controversies concerning censorship and filtering are beyond the
scope of this paper. The first section describes and defines the basic process of evaluation,
synthesizing ideas from cognitive psychology and critical thinking theory. The second section
discusses the importance of goals and information context, and the third discusses the critical
beginning of the whole process. Finally, I list the special problems children have in evaluating
information, skills we all need to evaluate information, and remaining research questions.
Critical thinking
Evaluation is closely associated with critical thinking. Some writers such as Beyer
(1985), D’Angelo (1971), and Yinger (1980) seem to equate “critical thinking” with
“evaluation.” Most theorists, however, describe critical thinking as including evaluation among
several other higher order thinking processes such as problem solving, decision making, and
analysis (Cromwell, 1992; Ennis, 1989; Paul & Elder, 2001). Another key relevant theory is
Bloom’s Taxonomy of cognitive skills, which places evaluation at the top (or most complex level) of a
range of thinking activities (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). Because of these
ties between evaluation and critical thinking, much theory and research about critical thinking
informs an understanding of evaluation.
Within the critical thinking paradigm, evaluation is defined as the making of judgments
about the value, for some purpose, of ideas, works, solutions, methods, and so on. The target of
evaluation can be an object such as a piece of art, an idea, or a person. Most writers list component
processes such as finding inconsistencies, comparing and contrasting, and judging by criteria
(Ennis, 1987). When information is the object of evaluation, a person typically studies it for
reliability, quality, credibility, and personal usefulness. These qualities overlap in meaning, but
together they describe what a person considers when judging information, leading to the idea of
criteria discussed later.
Bloom et al. (1956) also acknowledge a “link with the affective behaviors” (p. 185), due
to the inclusion of values. This affective link is richly borne out in the empirical literature, often
leading to biases, to be discussed later.
Within the world of education, it is vital to note that standardized tests provide only
rudimentary measures of critical thinking and information literacy (Dunn, 2002; Partnership for
21st Century Skills). This shortcoming is understandable given the parameters of rapid,
mass assessment. Unfortunately, standardized testing tends to drive curriculum. Therefore,
difficult-to-assess skills like critical thinking and information literacy are often neglected
systematically in schools.
Metacognition
Although the relationship between metacognition and evaluation may not be readily
apparent, effective evaluation may not be possible without at least some thinking about one’s
own thinking. Flavell defines metacognition as “knowledge or cognition that takes as its object
or regulates any aspect of any cognitive endeavor. Its name derives from this ‘cognition about
cognition’ quality” (1981, p. 37). Brown, Bransford, Ferrara, and Campione (1983) identify two
major strands of research usually labeled “metacognition.” One concerns knowledge about
thinking, whereas the other concerns regulation of thinking and learning. Both strands are of
interest here. Procedural knowledge of the evaluation process and memories of successful past
thinking episodes form part of prior knowledge and will be discussed later. Metacognitive
regulation is vital because the thinker must often consciously choose strategies or meta-strategies
to apply in a given evaluative situation. Flavell calls the regulatory type of metacognition
“cognitive monitoring.” Simply recognizing the need to evaluate information is probably a
metacognitive event.
Epistemology
Another factor that probably affects the evaluation process is epistemology.
Epistemology is one branch of philosophy dealing with the nature of knowledge and sources of
knowledge. Belenky, Clinchy, Goldberger, and Tarule (1986) coined the phrase “ways of
knowing,” which aptly captures epistemology as the beliefs people hold about how we come to
know what we know.
A thorough discussion of the impact of epistemology on information evaluation can be
found in Fitzgerald (1999). The ideas a person holds about the origin of knowledge provide a
foundation for crucial elements of evaluation, such as the identification of authority and the
choice of criteria. A person who believes some knowledge to be unquestionable may neither
criticize new information about that knowledge nor consider new information that contradicts it.
On the other hand, the person who believes in the fluidity of all or most knowledge may be more
likely to consider new information in an evaluative light and to weigh how it may change knowledge
already in memory. Much of the theory and research surrounding information evaluation is
implicitly or explicitly based upon some assumptions about users’ epistemology. One notable
problem is in identifying which web sites contain “true” information. “True” and “false” may
vary according to the user. Arguably, information specialists have little right to make this
distinction for any given user, leading to important philosophical dilemmas. To avoid this
debate, I will state that these assumptions are important, that epistemologies may drift as people
mature, and that puzzling differences of opinion based upon similar evidence may well be traced
to epistemological differences. King and Kitchener (1994) are the most important and relevant
writers in this area.
Prior knowledge
Due to the advantages listed above, prior knowledge is probably a necessary but
insufficient condition for effective critical thinking (Ennis, 1989). However, Ennis asserts that
prior knowledge both helps and hampers critical thinking. Any of the five types of stored
information above may contain errors. Another highly significant problem is determining the
difference between knowledge and belief, and belief mistaken for knowledge easily assumes the
negative aspect of bias. People may become so convinced of their own expertise in a particular
area that contradictory incoming information is dismissed without consideration. This problem
links critical thinking to epistemology, and illustrates the uncertain demarcation between prior
knowledge, beliefs, and bias. A lengthy discussion of the much-researched prior knowledge
territory can be found in Fitzgerald (1999).
Strategies
One way to break down the complex process of evaluation is into a collection of
strategies. Most of these strategies, if not all, may be conceptualized as questions users pose to
themselves, or miniature operations (Fitzgerald, 2000). For example, one user might ask himself
while browsing a web site, “How is this site organized?” The answer he discovers might be “It
isn’t.” He might then ask “Is the information communicated in a professional, educated
manner?” The answer to this might be “yes.” The answers to these strategic questions would
figure into a summative judgment or decision later on, such as whether to perform other strategies
or to leave the site.
I have observed users employing meta-strategies as well, in which they conduct an
operation consisting of several related strategies. For example, the meta-strategy of verifying
web information through an outside source could be broken down into a series of questions, such
as “What authority might I consult that would have the expertise to agree or disagree with this
source? What does the outside source say? Does this outside information agree, in whole or in
part, with the questioned resource?” and so on. Appendix B displays a number of disparate
strategies gleaned from qualitative studies. Using one or more strategies, the evaluator
deliberates for a brief or sustained time, ending in a decision of some kind. These deliberation
and decision phases deserve a great deal of work and attention, but are beyond the scope of this
paper. (For more information, see Fitzgerald, 2000; Fitzgerald & Galloway, 2001.)
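To make this conceptualization more concrete, the sketch below (in Python, chosen purely for illustration) models strategies as self-posed questions and a meta-strategy as a bundle of related strategies whose answers feed a crude summative judgment. The class names, the example source description, and the all-or-nothing decision rule are illustrative assumptions of mine, not elements of the studies cited.

```python
# Illustrative sketch only: models strategies as self-posed questions whose
# answers later feed a summative judgment. All names and rules are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Strategy:
    """A single evaluative question a user poses about a source."""
    question: str
    check: Callable[[dict], bool]  # inspects a simple description of the source


@dataclass
class MetaStrategy:
    """An operation composed of several related strategies (e.g., a screening)."""
    name: str
    strategies: List[Strategy] = field(default_factory=list)

    def deliberate(self, source: dict) -> bool:
        # A crude summative judgment: the source "passes" only if every question does.
        return all(s.check(source) for s in self.strategies)


# Hypothetical source description and the two questions from the example above
source = {"organized": False, "professional_tone": True}
screening = MetaStrategy(
    name="basic screening",
    strategies=[
        Strategy("How is this site organized?", lambda s: s["organized"]),
        Strategy("Is the information communicated in a professional, educated manner?",
                 lambda s: s["professional_tone"]),
    ],
)
print(screening.deliberate(source))  # False: this answer would shape a later decision
```

Real deliberation is, of course, far less mechanical than this all-or-nothing rule; the sketch only shows how question-level answers accumulate toward a decision.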
An important meta-strategy is the deliberate choice and application of criteria. Criteria
can be thought of as yardsticks or standards, and many theorists have furnished lists of them for
many different situations, sometimes in the form of a rubric. Rubrics are popular because they
scaffold the selection and application of criteria. Criteria in an information search context might
include aspects of information quality such as objective content, sufficient depth, and clear
articulation (Eisenberg & Small, 1993; Taylor, 1986). Paul and Elder list “clarity, accuracy,
relevance, logicalness, breadth, precision, significance, completeness, fairness, depth” as
“standards” (2001, p. 50).
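As a concrete illustration of applying criteria through a rubric, the brief sketch below averages ratings over a subset of Paul and Elder's standards. The one-to-five ratings and the passing threshold are hypothetical values invented for illustration; real rubrics typically describe performance levels in words and may weight criteria differently.

```python
# Hypothetical rubric sketch: criterion names follow Paul and Elder's standards,
# but the 1-5 ratings and the passing threshold are invented for illustration.
CRITERIA = ["clarity", "accuracy", "relevance", "depth", "breadth", "fairness"]


def rubric_passes(ratings: dict, threshold: float = 3.0) -> bool:
    """Average the 1-5 ratings over the chosen criteria and compare to a threshold."""
    average = sum(ratings.get(c, 0) for c in CRITERIA) / len(CRITERIA)
    return average >= threshold


# One hypothetical set of ratings for a single web source
example = {"clarity": 4, "accuracy": 3, "relevance": 5,
           "depth": 2, "breadth": 3, "fairness": 4}
print(rubric_passes(example))  # True: the average of 3.5 meets the arbitrary threshold
```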
All three evaluation models listed earlier use strategies as components. The origin of
strategies is most likely prior knowledge, possibly triggered in memory through learned
associations. A major component of skill building for evaluative thinking would be for users to
learn a large number of strategies to be employed in appropriate situations. A side benefit of
thinking of evaluation this way is that strategies are proactive. Their mastery and deployment
are in the control of the user, once the user knows about them. See Appendix B for a list of
strategies.
Section summary
In summary, evaluation of web-based information relates closely to educational and
cognitive theories dealing with critical thinking. Major components of evaluation include
metacognition, epistemology, and prior knowledge. All of these underlie strategies that users
apply in information settings. As such, evaluation involves many sub-processes or strategies,
including the application of criteria.
Goals
Fritch (2003) contends: “…some information is not important enough to require careful
evaluation, but each individual must determine when this is true” (p. 327). Purposes (or
motivation) for using information include entertainment, fact collection, simple curiosity,
collecting information to inform consumer decisions, and an infinite number of others. The
cognitive strategies chosen and level of engagement depend largely upon this goal. In school
library media literature, Weisburg and Toor (1994) point out that a reader’s purpose helps to
establish source evaluation and selection criteria. In persuasion literature, Petty and Cacioppo
(1986) identify motivation as one of several key factors determining whether or not a person will
deliberate upon a problem and to what depth. To illustrate, a person browsing an online celebrity
magazine for entertainment will probably be less likely to evaluate displayed information than a
prospective car buyer searching for safety information. This illustration assumes that car buyers
are generally more motivated to establish facts than entertainment-seekers.
Partial support for this idea in another context is provided by accountability research. In
accountability studies, subjects are made responsible for the outcomes of their judgments in
some way. Judgment accuracy seems to increase as motivation to judge accurately increases,
demonstrating that people can critically evaluate information if they choose to do so (Simonson
& Nye, 1992). In several studies, subjects motivated to be accurate through expectation of
accountability exhibited decreased susceptibility to certain flawed judgment patterns
(Bodenhausen, Kramer, & Susser, 1994; Freund, Kruglanski, & Shpitzajzen, 1985; Kruglanski &
Freund, 1983; Tetlock, 1983).
Returning to the realm of web information evaluation, Flanagin and Metzger (2000)
found that greater motivation (and hence, engagement) affected attitudes toward the message and
tended to lead to verification behaviors. If motivation increases evaluation accuracy through the
deliberate avoidance of flawed thinking patterns, it is fair to assume that lack of motivation may
decrease evaluation accuracy.
Beyond the user’s goal, other contextual factors have been documented empirically as
influential. Time is one such contextual factor. It seems logical to
predict that less time spent deliberating will lead to lower-quality decisions, and research results
support this proposition (e.g., Garbarino & Edell, 1997; Kruglanski & Freund, 1983).
Information problem type, whether ill-structured or well-structured, is an often overlooked but
crucially important aspect of all evaluation situations. A more detailed discussion of contextual
factors can be found in Fitzgerald (1999).
Summary
Any analysis of an information evaluation situation must begin with the user’s goal.
Choice of criteria, strategies, and depth of engagement stem directly from this goal. Often
overlooked, but still important, contextual factors include available time, environmental
confusion, and the nature of the question itself.
Disposition
A precursor or necessary condition for evaluation may be what Ennis (1987) and Glaser
(1985) label “disposition.” Glaser defines disposition as the “attitude of being disposed to
consider in a thoughtful, perceptive manner the problems and subjects that come within the range
of one’s experiences” (p. 25). However, like Fritch, Siegel and Carey (1989) point out that
people cannot evaluate every message they encounter. Nor do they always evaluate information
when they probably should. In fact, much empirical research vividly describes incidents in
which people failed to evaluate fraudulent information (e.g., Aycock & Buchignani, 1995; Belli,
1989; Bird, 1996). In an updated information context, Fogg et al. (2003) found that the most
significant factors by far that participants considered in evaluating web sites were related to
appearance. Flanagin and Metzger (2000) found that people seldom took steps to verify web-
based information. It seems most likely that the strength of critical disposition varies among
individuals, but also that it varies within the same individual from situation to situation.
Several factors may work to increase a person’s disposition to think evaluatively in a
given situation. Goals and prior knowledge are probably vital to successful cognitive
performance. Memories of past experiences and content knowledge about a topic will equip an
individual with tools to notice inconsistencies. Another factor may be epistemological
orientation (King & Kitchener, 1994; Siegel & Carey, 1989). Certainly, numerous studies show
that people will be critical when implicitly asked to do so.
Signals themselves
A critical early component of evaluation is the “signal,” named by both Flavell (1981)
and Markman (1981) as the cognitive event initiating a metacognitive episode. Signals are the
specific thoughts that launch the evaluation process, or recognitions that something may be
wrong with the information. Unfortunately, even the best descriptions of these signals are vague
and mostly theoretical. Flavell likens them to urges: “implicit signals to pay close attention,
listen intently, read carefully, store or retrieve information intentionally, try to solve a problem”
(p. 43). He also refers to them as “feeling[s]” of “vague puzzlement” (p. 45). Dewey calls them
“a state of doubt, hesitation, perplexity, mental difficulty, in which thinking originates” (1933, p.
12). Siegel and Carey (1989) advocate sensitivity to “anomalies” (p. 2). Often, signals indicate
miscomprehension. However, the problem could be that the information itself is flawed, and
signals notify the reader of this possible flaw. The reader may then take steps to determine the
stimulus of the signal, or ignore it.
Questions remain about the specific source of signals and the mechanisms that generate
them. Unfortunately, signals could easily be confounded by the confusion and frustration that
users often feel when confronted with innocuous but disorganized or complex web sites, as Bilal
(2000) found with 43% of youngsters in her study of children using a well-known children’s
search engine.
Initialization
Given the paucity of research that describes the “switching on” of evaluation, I have
focused on this element in naturalistic studies (Fitzgerald, 2000; Fitzgerald & Galloway, 2001).
Without cuing participants that I was looking for critical behavior, I observed them using
information in projects of their own construction. In many instances, they would be immersed
in some information task, and suddenly they would say something critical about the information.
Probing of these incidents revealed that there were different cues that made people jump from a
browsing mode to a critical one. These evaluation initializations seemed to occur in one of two
ways. In the first form, participants noticed a specific quality or clearly identifiable anomaly in
the information and began evaluating because of the presence of this “problem marker.” For
example, one participant noticed several typographical errors on a web page and began to
evaluate the entire document more closely due to her discovery of these typos. Problem markers
varied markedly among participants. Typographical mistakes were important to some, and
ignored by others. For the most part, it seemed that participants had individualized sets of
problem markers that were meaningful only to them. However, the problem markers themselves
make sense as cues (but not proof) of the presence of information problems. Appendix C
presents a subset of the problem markers recognized by participants. Fogg et al. (2001) found
“commercial implication” and “amateurism” to be elements that hurt credibility, and these
correspond to the idea of problem markers.
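To illustrate the idea of problem markers mechanically, the short sketch below checks a passage of text against two hypothetical markers. The marker names echo those discussed above, but the detection rules and the sample text are invented; real users notice markers in far more individualized and tacit ways, and a marker is a cue, not proof, of an information problem.

```python
import re

# Hypothetical problem markers: cues (not proof) that closer evaluation may be warranted.
# The detection rules and sample text below are invented for illustration only.
PROBLEM_MARKERS = {
    "typographical errors": lambda text: bool(re.search(r"\brecieve\b|\bteh\b", text, re.I)),
    "commercial implication": lambda text: "buy now" in text.lower(),
}


def markers_present(text: str) -> list:
    """Return the names of the markers this hypothetical user would notice."""
    return [name for name, detect in PROBLEM_MARKERS.items() if detect(text)]


page = "You will recieve a free gift. Buy now!"
print(markers_present(page))  # both markers fire, cueing closer evaluation of the page
```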
In the second form, evaluation initialization occurred in response to a vague thought that
something was wrong with the information, with no clear idea of what that something was. For
example, one participant said she was “disappointed” after reading an article. It took several
questions to uncover the specific reason for this feeling of disappointment, which turned out to be
the presence of overgeneralizations. These thoughts correspond to the “cognitive signals”
described by Flavell (1981, p. 43). Besides disappointment, other similar “signals” included
distaste, disagreement, curiosity, surprise, and confusion.
Summary
Because people cannot evaluate all incoming messages, there must theoretically be an
“evaluation-on” switch. Perhaps some people are disposed to be “evaluation-on” proportionately
more of the time than others. Specific thoughts and urges have been recognized as characteristic
of the initial recognition that something is wrong in information. Research has also identified
specific cues or markers within information that people recognize as cues to start evaluating;
some are listed in Appendix C. Still, this area is mysterious and poorly mapped.
Children
Most of the literature above deals with adults. How are children different, if at all?
According to a recent Corporation for Public Broadcasting study (2002), 65 percent of
American children (aged 2-17) use the Internet. Tenopir (2003), in her summary of many virtual
library usage studies, reported that high school and college students now use the Internet
more than the library for research. Against the backdrop of these usage figures are several more
troubling findings. A Pew study (Fallows, 2005) found that young users (defined as under 30)
are more “trusting” than older users, although by a small margin. Tenopir also found that while
young adult students seemed to be evaluating Internet materials, faculty did not always approve
of their choice of criteria. In view of increasing patterns of Internet usage in young people, it is
important to summarize what more traditional literature says about the evaluative abilities of
children and youth:
• Ability to evaluate increases with age and education, with a notable developmental plateau
when formal education comes to an end (Dunn, 2002; Garett & Wulf, 1978; King &
Kitchener, 1994; Mackinnon, 1987).
• Children are “universal novices” (Brown & DeLoache, 1978, p. 14) and thus may generally
lack relevant domain knowledge (prior knowledge), along with all associated benefits
summarized earlier (Carey, 1985).
• Children differ from adults in terms of goals, being less able or likely to articulate them, to set
reasonable ones, or to pursue imposed goals (Flavell, 1981; Markman, 1981).
• Children are less able to pursue multiple goals at once (Markman, 1981).
• Children may focus on one particular puzzling or interesting aspect of information,
ignoring all others (Flavell, 1981).
• Children are socialized to trust and obey authority (King & Kitchener, 1994) and later are
disproportionately influenced by peers (Paul & Elder, 2001).
• Children have difficulty distinguishing between knowledge and belief (King & Kitchener,
1994).
• Pre-adolescents believe in the absolute reality of what they observe (Miller & Lipps, 1973;
Osherson & Markman, 1975).
• Poor readers add challenges of decoding and comprehension to all of the above (Garner,
1980; Paris & Myers, 1981).
• While experience and maturity are not the same thing, inexperienced Internet users are
particularly unlikely to verify the information they find (Metzger, Flanagin, Eyal, Lemus,
& McCann, 2003).
Skills
To conclude, I will summarize and translate much of the theory and research into a
proactive set of skills.
• Build literacy. Evaluation is based upon a good foundation of reading ability and
information literacy (AASL/AECT, 1998) or “ICT literacy,” that is, information and
communication technologies literacy (Partnership for 21st Century Skills). Notably, these
last two definitions of information literacy contain evaluation as part of an overall ability
to negotiate information seeking situations.
• Learn to define the information task and context with questions such as these:
o For my purpose, how credible must the site be?
o Even in casual information use, are there possibly harmful elements or outcomes
if the information is wrong?
o What criteria are appropriate to this context (e.g., Paul & Elder, 2001), and how
should they be applied?
• Raise awareness that misinformation exists and that people lie in many different ways for
many different reasons.
• Build prior knowledge by reading deeply about the topic (Stripling & Pitts, 1988).
• Identify personal emotions, values, and biases about the topic; consider their impact on
the current problem. Write them down. Seek information that supports the opposite
perspective; resist the urge to ignore these opposing items, especially for ill-defined
problems (controversial or open-ended questions with no known right or wrong answer).
• Know the typical cognitive signals and how to respond to them.
• Know the most common problem markers (Appendix C).
• Be able to identify the author’s purpose: what explanations or motivations underlie a
given web site?
• Be able to choose and execute a number of evaluative strategies. Many are listed in
Appendix B.
• Learn the skills of argument and logic; these help expose flawed arguments.
Questions
The most interesting questions about evaluation are the most difficult to attack through
standard research methodologies. A major weakness and challenge in most of the research that
has been done in the area of user evaluative behavior involves imposed questions. Both Quarton
(2003) and Gross (1999) have discussed the problems children have with research when topic
choice is not theirs. One of the most important of these is lack of motivation. Conversely, it
seems clear that motivation is one of the most important elements in the whole process. As
Dresang asserted in 1999, research should describe young Internet users in very natural
conditions, pursuing questions of their own. Only through these self-determined information
contexts can a real understanding of user behavior grow. Even more difficult, the question of
how people move to a critical state from a relaxed one is crucial but overlooked by most studies.
Almost daily, the media reports a successful scam or hoax. Why do people continue to
fall victim to these deceptions despite numerous public warnings? Why do tabloid publications,
notorious for printing inaccurate, unsubstantiated, and sensational information, continue to sell
issues? Why do email hoaxes, some of them almost as old as the Internet, continue to circulate?
As technology marches on, how do all of these factors apply to newer web formats, like the
author-constructed and self-policed blogs and wikis called “collaborative information assembly”
by Jesdanun (2004)? Scams and hoaxes are usually mild in import. However, they raise the
question: could an entire society be fooled about matters of importance? The answer to this
question, arguably at least, is yes. Successful, although small, deceptions reflect the possibility
that wholesale and tragic deceptions can occur. These are matters for adults, and we will
continue to argue the philosophical problems of protecting children from them. Still, it would
seem that re-educating an adult population and educating the young about the dangers of
misinformation remain essential projects for the 21st century.
References
Abelson, R.P. (1986). Beliefs are like possessions. Journal for the Theory of Social
Behaviour, 16 (3), 223-230.
American Association of School Librarians, & Association for Educational
Communications and Technology. (1998). Information power: Building partnerships for
learning. Chicago: American Library Association.
Ashcraft, M.H. (1994). Human memory and cognition (2nd ed.). New York:
HarperCollins.
Aycock, A., & Buchignani, N. (1995). The e-mail murders: Reflections on ‘dead’
letters. In S.G. Jones (Ed.), Cybersociety (pp. 184-231). Thousand Oaks, CA: SAGE.
Baker, L. (1979). Comprehension monitoring: Identifying and coping with text
confusions. Journal of Reading Behavior, 11 (4), 365-374.
Baker, L. (1984). Children’s effective use of multiple standards for evaluating their
comprehension. Journal of Educational Psychology, 76 (4), 588-597.
Belenky, M.F., Clinchy, B.M., Goldberger, N.R., & Tarule, J.M. (1986). Women’s ways
of knowing: The development of self, voice, and mind. New York: Basic Books.
Belli, R.F. (1989). Influences of misleading postevent information: Misinformation
interference and acceptance. Journal of Experimental Psychology: General, 118 (1), 72-85.
Beyer, B.K. (1985). Critical thinking: What is it? Social Education, 49 (4), 270-276.
Bilal, D. (2000). Children’s use of the Yahooligans! web search engine: I. Cognitive,
physical, and affective behaviors on fact-based search tasks. Journal of the American Society for
Information Science, 51 (7), 646-665.
Bird, S.E. (1996). CJ’s revenge: Media, folklore, and the cultural construction of AIDS.
Critical Studies in Mass Communication, 13 (1), 44-58.
Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., & Krathwohl, D.R. (1956).
Taxonomy of educational objectives: The classification of educational goals. New York: David
McKay.
Bodenhausen, G.V., Kramer, G.P., & Susser, K. (1994). Happiness and stereotypic
thinking in social judgment. Journal of Personality and Social Psychology, 66, 621-632.
Brown, A.L., Bransford, J.D., Ferrara, R.A., & Campione, J.C. (1983). Learning,
remembering, and understanding. In J.H. Flavell & E.M. Markman (Eds.), Handbook of child
psychology (Vol. III): Cognitive Development (pp. 77-166). New York: John Wiley & Sons.
Brown, A.L., & DeLoache, J.S. (1978). Skills, plans, and self-regulation. In R.S. Siegler
(Ed.), Children’s thinking: What develops (pp. 3-35). Hillsdale, NJ: Lawrence Erlbaum.
Carey, S. (1985). Are children fundamentally different kinds of thinkers and learners than
adults? In S.F. Chipman & J.W. Segal (Eds.), Thinking and learning skills, (Vol. 2, pp. 485-517).
Hillsdale, NJ: Lawrence Erlbaum.
Corporation for Public Broadcasting. (2002). Connected to the future: A report on
children’s Internet use from the Corporation for Public Broadcasting. Retrieved February 15,
2005, from the Corporation for Public Broadcasting site:
http://www.cpb.org/ed/resources/connected/
Cromwell, L.S. (1992). Assessing critical thinking. In C.A. Barnes (Ed.), Critical
thinking: Educational imperative (pp. 37-50). San Francisco: Jossey-Bass.
D’Angelo, E. (1971). The teaching of critical thinking. Amsterdam: B.R. Gruner.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the
educative process (rev. ed.). Boston: D.C. Heath.
Dresang, E.T. (1999). More research needed: Informal information-seeking behavior of
youth on the Internet. Journal of the American Society for Information Science, 50, 1123-1124.
Dunn, K. (2002). Assessing information literacy skills in the California State University:
A progress report. Journal of Academic Librarianship, 28(1/2), 26-35.
Eisenberg, M.B., & Small, R.V. (1993). Information-based education: An investigation
of the nature and role of information attributes in education. Information Processing &
Management, 29 (2), 263-275.
Ennis, R.H. (1987). A taxonomy of critical thinking dispositions and abilities. In J.B.
Baron & R.J. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 9-26). New
York: W.H. Freeman.
Ennis, R. (1989). Critical thinking and subject specificity: Clarification and needed
research. Educational Researcher, 18 (3), 4-10.
Fallows, D. (2005). Search engine users. Washington, DC: Pew Internet & American
Life Project. Retrieved March 10, 2005, from http://www.pewinternet.org.
Fitzgerald, M.A. (1999). Evaluating information: An information literacy challenge.
School Library Media Research, 2. Retrieved March 23, 2005, from
http://www.ala.org/aasl/SLMR/vol2/evaluating.html.
Fitzgerald, M.A. (2000). The cognitive process of information evaluation in doctoral
students: A collective case study. Journal of Education for Library and Information Science, 41
(3), 170-186.
Fitzgerald, M.A., & Galloway, C. (2001). Relevance judging, evaluation, and decision
making in virtual libraries: A descriptive study. Journal of the American Society for Information
Science & Technology, 52 (12), 989-1010.
Flanagin, A.J. & Metzger, M.J. (2000). Perceptions of Internet information credibility.
Journalism & Mass Communication Quarterly, 77 (3), 515-540.
Flavell, J.H. (1981). Cognitive monitoring. In W.P. Dickson (Ed.), Children’s oral
communication skills (pp. 35-60). New York: Academic Press.
Fogg, B.J., Soohoo, C., Danielson, D.R., Marable, L., Stanford, J., & Tauber, E.R. (2003).
How do people evaluate the credibility of web sites? A study with over 2,500 participants.
Conference on Designing for User Experiences, San Francisco. ACM Press.
Fogg, B.J., Swani, P., Treinen, M., Marshall, J., Laraki, O., Osipovich, A., Varma, C.,
Fang, N., Paul, J., Rangnekar, A., & Shon, J. (2001). What makes web sites credible? A report
on a large quantitative study. SIGCHI Conference on Human Factors in Computing Systems,
Seattle, WA. ACM Press.
Fogg, B.J. & Tseng, H. (1999, May). The elements of computer credibility. Paper
presented at the Conference on Human Factors and Computing Systems, Pittsburgh.
Freund, T., Kruglanski, A.W., & Shpitzajzen, A. (1985). The freezing and unfreezing of
impressional primacy: Effects of the need for structure and the fear of invalidity. Personality
and Social Psychology Bulletin, 11 (4), 479-487.
Fritch, J.W. (2003). Heuristics, tools, and systems for evaluating Internet information:
Helping users assess a tangled Web. Online Information Review, 27 (5), 321-327.
Garbarino, E.C. & Edell, J.A. (1997). Cognitive effort, affect, and choice. Journal of
Consumer Research, 24 (2), 147-158.
Garett, K. & Wulf, K. (1978). The relationship of a measure of critical thinking ability to
personality variables and to indicators of academic achievement. Educational and Psychological
Measurement, 38, 1181-1187.
Garner, R. (1980). Monitoring of understanding: An investigation of good and poor
readers’ awareness of induced miscomprehension of text. Journal of Reading Behavior, 12 (1),
55-63.
Gilovich, T. (1991). How we know what isn’t so: The fallibility of human reason in
everyday life. New York: Free Press.
Glaser, E.M. (1985). Critical thinking: Educating for responsible citizenship in a
democracy. National Forum: Phi Kappa Phi Journal, 65 (1), 24-27.
Gross, M. (1999). Imposed queries in the school library media center: A descriptive
study. Library and Information Science Research, 21 (4), 501-521.
Jesdanun, A. (2004, December 13). More information but less truth? When access is so
easy, answers can be elusive. Associated Press (MSNBC). Retrieved January 25, 2005, from
http://www.msnbc.msn.com/id/6645957/
King, P.M., & Kitchener, K.S. (1994). Developing reflective judgment: Understanding
and promoting intellectual growth and critical thinking in adolescents and adults. San
Francisco: Jossey-Bass.
Kruglanski, A.W., & Freund, T. (1983). The freezing and unfreezing of lay-inferences:
Effects on impressional primacy, ethnic stereotyping, and numerical anchoring. Journal of
Experimental Social Psychology, 19 (5), 448-468.
Mackinnon, J.L. (1987). Relations among patient management problems, critical
thinking abilities and professional knowledge levels attained by physical therapy students
(Doctoral dissertation, North Carolina State University, 1987). Dissertation Abstracts
International, 48-06A, 1441.
Markman, E.M. (1979). Realizing that you don’t understand: Elementary school
children’s awareness of inconsistencies. Child Development, 50, 643-655.
Markman, E.M. (1981). Comprehension monitoring. In W.P. Dickson (Ed.), Children’s
oral communication skills (pp. 61-84). New York: Academic Press.
Markman, E.M., & Gorin, L. (1981). Children’s ability to adjust their standards for
evaluating comprehension. Journal of Educational Psychology, 73 (3), 320-325.
McGregor, J.H. (1994). Cognitive processes and the use of information: A qualitative
study of higher-order thinking skills used in the research process by students in a gifted program.
In C.C. Kuhlthau (Ed.), School Library Media Annual (pp. 124-133). Englewood, CO: Libraries
Unlimited.
Metzger, M.J., Flanagin, A.J., Eyal, K., Lemus, D.R., & McCann, R.M. (2003).
Credibility for the 21st century: Integrating perspectives on source, message, and media
credibility in the contemporary media environment. In Kalbfleisch, P.J. (Ed.). Communication
Yearbook 27 (pp. 293-335). Mahwah, NJ: Lawrence Erlbaum Associates.
Miller, S.A., & Lipps, L. (1973). Extinction of conservation and transitivity of weight.
Journal of Experimental Child Psychology, 16 (3), 388-402.
Moskowitz, D. & Stroh, P. (1996). Expectation-driven assessments of political
candidates. Political Psychology, 17 (4), 695-712.
Osherson, D.N., & Markman, E. (1975). Language and the ability to evaluate
contradictions and tautologies. Cognition, 3 (3), 213-226.
Osman, M., & Hannafin, M. (1992). Metacognition research and theory: Analysis and
implications for instructional design. Educational Technology Research & Development, 40 (2),
83-99.
Paris, S.G., & Myers, M. (1981). Comprehension monitoring, memory, and study
strategies of good and poor readers. Journal of Reading Behavior, 13 (1), 5-22.
Partnership for 21st Century Skills. Learning for the 21st century: A report and mile
guide for 21st century skills. Retrieved February 21, 2005, from
http://www.21stcenturyskills.org/default.asp
Paul, R. & Elder, L. (2001). Critical thinking: Tools for taking charge of your learning
and your life. Upper Saddle River, NJ: Prentice Hall.
Petty, R.E., & Cacioppo, J.T. (1986). The elaboration likelihood model of persuasion. In
L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 123-205).
Orlando, FL: Academic Press.
Pitts, J.M. (1994). Personal understandings and mental models of information: A
qualitative study of factors associated with the information-seeking and use of adolescents
(Doctoral dissertation, Florida State University, 1994). Dissertation Abstracts International, 55-
01A, 4616.
Quarton, B. (2003). Research skills and the new undergraduate. Journal of Instructional
Psychology, 30 (2), 120-124.
Ross, L., Lepper, M.R., & Hubbard, M. (1975). Perseverance in self-perception and
social perception: Biased attributional processes in the debriefing paradigm. Journal of
Personality and Social Psychology, 32, 880-892.
Siegel, M., & Carey, R.F. (1989). Critical thinking: A semiotic perspective.
Bloomington, IN: ERIC Clearinghouse on Reading and Communication Skills.
Simonson, I., & Nye, P. (1992). The effect of accountability on susceptibility to
decision errors. Organizational Behavior and Human Decision Processes, 51 (3), 416-446.
Stanford Persuasive Technology Lab. (2004, April). Stanford Web Credibility Research:
(Mostly) Evidence-based Articles on Web Credibility. Retrieved March 22, 2005, from
http://credibility.stanford.edu/credlit.html
Stripling, B.K., & Pitts, J.M. (1988). Brainstorms and blueprints: Teaching library
research as a thinking process. Englewood, CO: Libraries Unlimited.
Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers?
Acta Psychologica, 47 (2), 143-148.
Taylor, R.S. (1986). Value-added processes in information systems. Norwood, NJ:
Ablex.
Tenopir, C. (2003). Use and users of electronic library resources: An overview and
analysis of recent research studies. Washington, DC: Council on Library and Information
Resources. Retrieved February 22, 2005, from www.clir.org/pubs/reports/pub120/pub120.pdf
Tetlock, P.E. (1983). Accountability and the perseverance of first impressions. Social
Psychology Quarterly, 46 (4), 285-292.
Wathen, C.N. & Burkell, J. (2002). Believe it or not: Factors influencing credibility on
the Web. Journal of the American Society for Information Science, 53 (2), 134-144.
Weisburg, H.K., & Toor, R. (1994). Learning, linking and critical thinking. Berkeley
Heights, NJ: Library Learning Resources.
Yinger, R.J. (1980). Can we really teach them to think? In R.E. Young (Ed.), Fostering
critical thinking (pp. 11-31). San Francisco: Jossey-Bass.