Note Taking and Note Sharing While Browsing Campaign Information
Scott P. Robertson
University of Hawaii
Information & Computer
Sciences Department
Ravi Vatrapu
University of Hawaii
Information & Computer
Sciences Department
George Abraham
Drexel University
College of Information
Science and Technology
{ scott.robertson, vatrapu }@hawaii.edu,
[email protected]
Abstract
Participants were observed while searching and
browsing the internet for campaign information in a
mock-voting situation in three online note-taking
conditions: No Notes, Private Notes, and Shared
Notes. Note taking significantly influenced the
manner in which participants browsed for
information about candidates. Note taking competed
for time and cognitive resources and resulted in less
thorough browsing. Effects were strongest when
participants thought that their notes would be seen by
others. Think-aloud comments indicated that
participants were more evaluative when taking notes,
especially shared notes. Our results suggest that
there could be design trade-offs between e-Democracy and e-Participation technologies.
1. Introduction
The internet has grown into an important political
information tool. Usage by candidates and citizens in
the United States has grown tremendously over the
last several election cycles [1, 2]. Smith & Rainie [3]
report that 46% of Americans have used the internet
to get news and information about the 2008 U.S.
presidential campaign. According to Kohut [4], 24%
of Americans (42% between the ages of 18-29) said
that they used the internet “regularly” to gain
campaign information.
Politicians and citizens have also begun to use a
wide variety of internet tools. Just as the Howard
Dean campaign gained credit for innovative use of
organizational internet tools, blogs, and online
referenda in 2004 [5,6], the current campaign of
Barack Obama is gaining a reputation as an innovator
on social networks such as MySpace and Facebook
[7], although all campaigns have effectively used
these social networks [8] to raise money, raise
awareness, and build constituencies [see also 3,4].
Hillary Clinton used web video to announce her
candidacy, and YouTube has assumed a central role
in the debates for the 2008 presidential election.
Evidence suggests that “wired” voters are exposed to
more points of view about candidates and issues than
voters who do not use the internet, and that internet
users do not narrow information consumption to their
own special interests [9].
Growth in use of internet-based information
sources and technologies is so rapid that theory
development and empirical study is lagging behind.
We have argued for a “design science” approach to
the study of technology-enhanced political
information behavior [10-12] which involves
continuous cycles of development and empirical
study of information systems for e-Democracy; in practice, however, we are discovering that these cycles must be quite rapid. Robertson, Wania, Abraham, & Park [12] presented data on a study of a drop-down interface to a search query engine and showed that the interface encouraged more issue-based consideration of the candidates. Here we
extend our study of this interface to include an
annotation component. Before discussing our study,
we briefly review issues related to note taking in
general and web annotation in particular.
1.1 Note Taking and Web Annotation
Debriefing sessions from our previous studies
have often revealed a desire among browsers of
political information to make point-by-point
comparisons of candidates, an activity that should be
enhanced by the ability to take notes. Also,
participation in political blogs and candidate-centered
social networking sites suggests that many web users
are eager to share their thoughts about political issues
with others and curious to view the thoughts of
others. We therefore studied several users of our
previously-designed drop-down search query
interface [12] under various web annotation
conditions.
Note taking is a way to select important pieces of
information from a larger set of items and transfer
that information to a local “external memory” for
later use. Note taking may also enhance retention or
understanding by helping learners focus their
attention and concentrate on important information.
Note taking is common in learning situations [13]
and many attempts have been made to develop
annotation systems that can be used during web
browsing [14]. Although note taking seems
intuitively helpful, empirical studies suggest that its
usefulness depends on a number of factors related to
the learning task and the structure of the notes
themselves [15]. A common finding is that note taking increases cognitive load and can interrupt attention in ways that are not inconsequential [16-18].
Notes can be private or shared. Shared notes are
often found in workgroup situations where
individuals can use them to communicate with each
other about what different members find to be
important and what individuals think other group
members should notice. Shared notes are often
associated with information artifacts (e.g. marginalia
and sticky-notes). The web has offered new
opportunities for shared annotation, and many shared
annotation systems in which web pages can have
notes associated with them have been developed [19,
20]. Again, although web annotation seems like a
promising direction for developers [13, 21], studies
of web-based annotation systems have shown only
marginal improvements in learning [22, 23]. Shared
annotation environments, however, might have the
consequence of creating communities of interest [24].
1.2 Current Study
In this study we concentrated on how note taking
might influence information browsing behavior when
participants are seeking information about political
candidates in order to make a voting decision. If note
taking requires greater cognitive effort that competes
with the learning task, then participants who are
taking notes should show less effective browsing
behavior. On the other hand, if note taking enhances
learning, then we should see more effective browsing
behavior. We were also interested in how private
notes intended for oneself might differ from shared
notes intended to be seen by others [25]. Shared notes
serve a more public purpose and might require
greater thought.
Figure 1: VotesBy.US Portal: The drop-down search interface allows users to select candidates
from one list and issues from a second list. Menu selections result in automatic Google searches.
Results are returned in a results list with tabbed categories (Web, News, Blog, Video, Book).
Our primary experimental purpose was to study annotation; however, we also added features to a developing voter-browser environment as part of an iterative design exercise. The added features, described below and pictured in Figure 1, were a visible query box, a topically organized issue list, and content-tabbed results pages.
2. Method
2.1 Participants
Thirteen participants were recruited, using
information flyers, from areas around Drexel
University in Philadelphia, PA. Data were collected from July 5 to August 7, 2007. Each participant was paid $35 for their time.
Participants ranged in age from 20 to 48 years, with an average age of 33.4 years. One participant reported a high school education, one reported a 2-year college degree, nine reported a 4-year college degree, and the remaining two reported graduate-level education.
Three participants self-reported as “Mixed Race,”
one participant self-reported as “Native American or
Alaskan,” another participant selected the category of
“Puerto Rican American (Commonwealth),” and the
remaining nine participants selected the category of
“White (non-Hispanic).”
Five participants reported being affiliated with the Democratic Party, three identified as Independents, one reported affiliation with the Green Party, and the remaining four selected the category of “Other.”
Ten of the thirteen participants self-reported having voted in a federal, state, and/or city election in the past. These ten participants further reported that they had cast a vote in the 2004 U.S. general election. Of these ten participants, seven reported that they voted in “most” elections while the remaining three reported voting in “all” elections.
When asked about how often they use the Internet
from home, nine participants reported “several times
a day,” one participant reported “once a day,” one
participant reported “once every few weeks” and the
remaining two participants reported using the Internet
less often than every few weeks. With respect to the
use of Internet at work, five participants reported
“several times a day,” three participants reported
“once a day,” one participant reported “once every
few weeks” and four participants reported using the
Internet less often than once every few weeks.
Figure 2. Google Notebook allowed users to make notes. In this example a participant has copied
text from a web page that they are browsing into a Notebook shown in the smaller window.
Regarding the use of the Internet for political information seeking, one participant reported “several times a day” and one reported “once a day.” Five participants reported using the Internet for political information seeking “once or twice a week,” four reported “once every few weeks,” and two reported looking up political information online less often than once every few weeks.
Participants were assigned randomly to one of
three note-taking conditions: No Notes, Private
Notes, or Shared Notes. Four participants (2 female, 2
male) were assigned to No Notes, four participants
were assigned to Private Notes, and five participants
(1 female, 4 male) were assigned to Shared Notes.
2.2 Materials and Procedure
All participants were given a scenario about a
mock-voting situation and instructions on how to use
a drop-down search interface (Figure 1) to search the
internet for campaign information. The scenario
asked subjects to imagine that they had just moved to
Louisiana where a gubernatorial election was coming
up. The participants were informed that there were
four candidates for Louisiana Governor: Bobby
Jindal, Walter Boasso, John Georges, and Foster
Campbell. These were actual candidates in an
upcoming election at the time the study was
conducted. Participants were told that they were
going to “vote for one candidate for the governor of
the state of Louisiana” and that they should use the
search interface to find out what they needed to know
in order to make a choice. The materials that participants discovered and browsed on the internet were real, current campaign materials.
Participants in the two annotation conditions were
instructed about taking notes with Google Notebook (Figure 2). Participants in the Shared Notes condition
were told that their notes would be available for other
users to see when those users were browsing the
same materials, whereas participants in the Private
Notes condition were told that their notes were for
their use only.
In order to search the internet, participants used an
interface with two drop-down selection menus, one
listing the candidates’ names and another listing a set
of issues (see Figure 1). Robertson et al. [12]
described the initial design of this “drop-down”
search interface and showed that it results in more
thorough and complete searching and browsing than
a free-form query box. Selections from the drop-down lists generated queries that were visible in a
query box and which were automatically sent to
Google. Selection of a candidate resulted in a search
query consisting of that candidate’s name and the
office (e.g. “Bobby Jindal Governor Louisiana”).
Selection of an issue resulted in a search query
consisting of the issue keyword (e.g. “taxes”). When
menu items were selected from both lists the result
was a combined query (e.g. “Bobby Jindal Governor
Louisiana taxes”).
An AJAX API to Google was utilized to display
search results on pages with the following content
categorization tabs: Web, News, Blog, Video, and
Book. Participants could page through results lists, or
look at the results lists under each tab, or open web
pages from the results lists.
While carrying out the tasks described in the
scenario, participants were encouraged to think aloud. Software was used to capture and integrate the search behavior and verbalizations of each participant.
An experimenter remotely tagged the capture file
while the participant was searching for information.
These tags were adapted from previous studies we
conducted on online political information seeking
behavior [12, 26]. Participants were given as long as
they wished to search and instructed that they should
tell the experimenter when they were ready to vote.
After voting, participants were given a recall survey
and an exit questionnaire.
3. Results
3.1 Time
Participants were allowed as much time as they
needed to complete the task. They made the choice of
when to stop browsing and vote. On average, participants spent 53.36 minutes browsing, and there was no significant difference in time spent across the three annotation conditions (see Figure 3).

Figure 3. Total Session Time in Minutes.
3.2 Confidence in the Final Vote
Participants rated their confidence in their final vote
on a Likert scale from 1 to 5, where higher values signified greater confidence. The average confidence rating was 2.61, and there was no significant
difference in confidence across the three annotation
conditions.
3.3 Searching and Information Browsing
We conducted an analysis of the screen recordings of participants’ activities. Morae Observer™ was used to code the participant sessions for the following events: search queries, website visits, returns to the search results, think-aloud comments, making annotations, and reviewing annotations. The resulting screen recordings, along with the marker data, were analyzed using Morae Manager™ 2.0.
We compared several searching and browsing
activities across the three annotation conditions. In
each case we conducted an overall ANOVA on the
means in the three annotation conditions, a planned
comparison of the No Notes condition with the
combined annotation conditions, and (if the ANOVA
was significant) a post-hoc comparison (Tukey HSD
test) of all pairs of means. Dependent measures that
we examined in this way were number of search
queries, number of websites visited, number of
returns to the results list, and number of think-aloud
comments made. Figure 4 shows the means for all of
these measures across the three annotation
conditions.
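As an illustration of this analysis plan, the sequence of tests could be run as in the sketch below. This is not the authors’ code, the per-participant counts are invented placeholders rather than the study data, and the planned contrast is approximated with an independent-samples t-test.

```python
# Sketch of the analysis plan: overall one-way ANOVA, a planned contrast of
# No Notes vs. the combined note-taking conditions, and Tukey HSD post-hocs.
# The numbers below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

no_notes      = np.array([41, 38, 44, 35])       # 4 participants
private_notes = np.array([29, 25, 28, 26])       # 4 participants
shared_notes  = np.array([20, 17, 19, 16, 19])   # 5 participants

# Overall ANOVA across the three annotation conditions.
F, p = stats.f_oneway(no_notes, private_notes, shared_notes)

# Planned comparison of No Notes against the two note-taking conditions combined
# (approximated here as an independent-samples t-test).
t, p_contrast = stats.ttest_ind(no_notes, np.concatenate([private_notes, shared_notes]))

# Post-hoc pairwise comparisons, reported only when the ANOVA is significant.
if p < .05:
    values = np.concatenate([no_notes, private_notes, shared_notes])
    groups = ["NoNotes"] * 4 + ["Private"] * 4 + ["Shared"] * 5
    print(pairwise_tukeyhsd(values, groups))
```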
3.3.1 Search Queries
Search queries were categorized as Candidate Name (selecting a candidate name from one of the drop-down lists without selecting an issue), Issue (selecting an issue from one of the drop-down lists without selecting a candidate), or Candidate+Issue (selecting both a candidate name and an issue to form a combined search). Participants made no Issue searches. A within-subjects comparison showed that participants made significantly more Candidate+Issue searches (mean=13.15) than Candidate Name searches (mean=4.23), t(12)=2.65, p<.05. This is consistent with our prior work [10, 11] showing that the drop-down interface encourages more complex queries about where candidates stand on various issues.
The number of Candidate Name search queries
that participants made differed significantly across
the three annotation conditions, with means=1.7, 5.5,
and 5.2 queries per participant for No Notes, Private
Notes, and Shared Notes conditions respectively,
F(2,10)=4.64, p<.05. The contrast test between No Notes and the combined note-taking conditions was significant, t(10)=-3.05, p<.01. Tukey HSD post-hoc
comparisons showed that the annotation conditions
did not differ from each other, but that both
annotation conditions differed from the No Notes
condition (p<.05 for No Notes versus Private Notes,
and p<.06 for No Notes versus Shared Notes).
The number of Candidate+Issue search queries that participants made differed significantly across the three annotation conditions, with means=21.2, 14.7, and 5.4 queries per participant for No Notes, Private Notes, and Shared Notes conditions respectively, F(2,10)=4.10, p<.05. The contrast test between No Notes and the combined note-taking conditions was significant, t(10)=-2.22, p<.05. Tukey HSD post-hoc comparisons showed that the No Notes condition differed significantly from the Shared Notes condition (p<.05).

Figure 4. Frequencies of Searching and Browsing Activities in the Three Annotation Conditions.
3.3.2 Websites Visited
The number of websites visited decreased across the
three annotation conditions, with means=39.5, 27.0,
and 18.2 websites per participant for No Notes,
Private Notes, and Shared Notes conditions
respectively.
The overall trend did not reach
significance at the .05 level, but could be considered
suggestive with such a small n, F(2,10)=2.75, p<.11.
The contrast between No Notes and the combined
note taking conditions was also suggestive,
t(10)=2.07, p<.07.
3.3.3 Returns to Results List
The number of returns to the results list decreased
across the three annotation conditions, with
means=33.0, 25.0, and 14.0 returns per participant for
No Notes, Private Notes, and Shared Notes
conditions respectively, F(2,10)=4.22, p<.05. The
contrast between No Notes and the combined note
taking conditions was significant, t(10)=2.27, p<.05.
Tukey HSD post-hoc comparisons showed that the
No Notes condition differed significantly from the
Shared Notes condition (p<.05).
3.3.4 Comments
The number of comments appeared to increase
when participants were taking notes, with
means=28.2, 39.5, and 39.6 comments per participant
for No Notes, Private Notes, and Shared Notes
conditions respectively, although this effect was not
significant.
3.3.5 Correlations
The number of Candidate+Issue queries, the number of returns to the results lists, and the number of websites visited were all highly positively correlated with each other (Table 1). The number of Candidate Name queries was negatively correlated with the number of Candidate+Issue queries, but this is an artifact of the interface: the usual method of making Candidate+Issue queries was to select a candidate first and then follow it with selections of several issues, which generates a single Candidate Name query for every set of Candidate+Issue queries.
The number of Candidate Name queries was also negatively correlated (though not significantly) with the number of returns to results and the number of websites visited. Though marginal, together these negative correlations suggest that Candidate Name searchers were not as thorough as Candidate+Issue searchers.

Table 1. Correlations between querying and browsing activities.

                     Candidate Name        Candidate+Issue      Returns to Results
Candidate+Issue      r(13)=-.70 (p<.01)
Returns to Results   r(13)=-.50 (p<.08)    r(13)=.65 (p<.01)
Websites Visited     r(13)=-.41 (p<.17)    r(13)=.55 (p<.05)    r(13)=.94 (p<.001)
Number of comments was positively correlated
with both confidence, r(13)=.56, p<.05, and session
time, r(13)=.53, p<.06, although confidence and
session time were not correlated with each other.
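A correlation of this kind can be computed as in the brief sketch below; the per-participant values are hypothetical placeholders, not the study data, and the variable names are ours.

```python
# Sketch of the correlational analysis: Pearson r between per-participant
# activity counts. Values are hypothetical placeholders (n = 13 in the study).
import numpy as np
from scipy.stats import pearsonr

returns_to_results = np.array([41, 35, 30, 28, 27, 25, 22, 20, 18, 15, 12, 10, 8])
websites_visited   = np.array([48, 40, 33, 30, 31, 27, 25, 22, 20, 17, 14, 12, 9])

r, p = pearsonr(returns_to_results, websites_visited)
print(f"r({len(returns_to_results)}) = {r:.2f}, p = {p:.3f}")
```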
3.4 Content of Think-Aloud Comments
Transcription of the think-aloud comments
resulted in 457 individual comments. Following [11,
12], the comments were coded into 10 categories by
two coders (RV and AJ) independently. Cohen’s
Kappa for assessing inter-coder reliability was initially 0.62, which corresponds to substantial agreement on the Landis and Koch scale [27] (the kappa computation is sketched after the category list below). The coders reconciled differences and eventually assigned each comment to a final category as follows (see Figure 5):
Goal (3%): A statement about what the
participant plans to do, e.g. “I am going to see if
there is anything noteworthy here” OR “I am
going to delve into some things in more detail.”
Action (9%): A statement describing what the
participant was doing, e.g. “I am trying to look at
the local news on this page” OR “I am looking at
his website.”
Question (3%): An interrogative statement, e.g.
“What does he say about war?” OR “Why is
Blanco here?”
Evaluative General (20%): A general evaluative remark not related to the ballot item, e.g. “I am going to stay away from blogs” OR “This looks out of date.”
Evaluative about an issue (18%): An evaluative comment about a political issue that cannot be determined to be positive or negative, e.g. “He is for single gender classrooms, which I don’t know is good or bad.” OR “He suggests instituting a tax on oil and gas.”
Positive about an issue (5%): A positive evaluative remark about a political issue, e.g. “I like how he worked for hurricane and stuff” OR “Strikes me better than the other candidate, talks about other important issues.”
Negative about an issue (8%): A negative evaluative remark about a political issue, e.g. “He voted yes on wire tapping, I don’t like that either” OR “He can’t even impress me with his own website.”
Fact Discovery (12%): A statement of a non-evaluative piece of information about one of the candidates, e.g. “It is an open seat, that’s what I thought” OR “Oh, he is a state senator.”
Issue (8%): A non-evaluative statement about a
particular political issue, e.g. “I am curious about
their stance on Illegal Immigration, but not
finding much” OR “He seems to be focused on
resolving the crime issue, more so than the
others.”
General Statement (14%): A non-evaluative
comment not specifically about a candidate, e.g.
“I am new to the state so don’t know a lot of
stuff” OR “I am beginning to understand this.”
By far the largest percentage of comments (51%) were evaluative in some way. Figure 5 shows the distribution of comments across the three annotation conditions. In general, when there are large discrepancies among the three annotation conditions, they tend to be in the direction of more commenting when taking notes, especially shared notes. The greatest variation across annotation conditions involves the Evaluative General and Evaluative Issue comments. In both cases, note taking increased commenting, and Shared Notes elicited twice as many evaluative comments about issues as Private Notes.

Figure 5: Frequencies of Think-Aloud Comment Categories in the Three Annotation Conditions
4. Discussion
4.1 Summary
Our results can be summarized as follows:
Participants did not take more time when they
took notes, which meant that they had to use less
time for searching and browsing in the annotation
conditions.
Participants performed more Candidate+Issue searches than simple Candidate Name searches. Participants never searched just on issues.
Candidate+Issue searches resulted in more activity and exposure to more information.
Taking notes, especially shared notes, resulted in
a reduction in number of searches.
Taking notes, especially shared notes, resulted in
fewer returns to examine results lists.
Taking notes, especially shared notes, resulted in
exposure to less information as evidenced by
number of websites visited.
Taking notes resulted in more reflection on action
as evidenced by number of comments.
Reflection on action, as evidenced by number of comments, was associated with greater confidence in the final vote.
Participants were primarily thinking about
evaluative issues while searching and browsing.
Taking notes, especially shared notes, increased
evaluative reflection.
Taken together, these results show that note
taking has a powerful influence on the type of
searching and browsing that people do when making
a voting decision. The fact that note taking reduces
the extent and thoroughness of searching and
browsing is perhaps a negative influence. However,
note taking does seem to increase evaluative thought
and so could also be shifting cognitive effort from
information foraging and gathering to information
analysis and synthesis.
4.2 Voter-Browser Design
A major goal of our series of experiments in this area [12, 26, 28, 29] is to use iterative prototyping, grounded in empirical data, to develop a browsing tool that helps voters. In previous research [12], we
developed the drop-down interface and demonstrated
its efficacy for increasing the depth and complexity
of searches beyond the typical candidate-name-only
query. In this study we have replicated our finding
with regard to increased issue-based searching, and
we intended to introduce a new annotation
component to the browser. Our results, however, give
us pause in suggesting that adding an annotation
feature is a good idea.
We also hoped to gain some understanding about
how annotation sharing might be integrated into a
voter browser. The introduction of sharing moves the
application into the realm of a socio-technical
system. Even in this impoverished situation where
the voters know little about the candidates and issues,
and where they do not know who will see their notes,
they behaved quite differently when they thought
their notes would be shared. How much more of an
impact on searching and browsing might there be in
“real” situations where the voters are more engaged
with the issues and where they are sharing their
notes/thoughts with a community of interest? Shared
notes have a communicative purpose that private
notes do not. Our results suggest that voters wish to
share their evaluative analyses. This is a significant
“added feature” to the task of gathering information
about how to vote. However, this is precisely the
added feature on which social networking sites and
blogs capitalize, and the use of these sites in political
discourse is increasing dramatically.
A second design feature that was added to the
drop-down interface in this study was the content
category tabs. These tabs organize results into Web,
News, Blog, Video, and Book categories (see Figure
1). While we did not concentrate on the tab feature in
this article, it is worth noting that participants did not
use it much and that some participants even explicitly
mentioned that they were going to “stay away from
blogs.” Reluctance to use the tab feature is yet
another indication that searching and browsing in
order to make a voting decision is a demanding task
from which users do not want many distractions.
We speculate that avoidance of blogs may have to
do with the perceived value of political dialog in final
decision making. Vatrapu, Robertson, & Dissanayake
[30] have pointed out that political blogs operate as
both public spheres and partisan spheres. Blog
information could be most useful for forming general
impressions or developing opinions over time in
social contexts, but less useful for actually deciding
something. Again, this is a contrast of deliberation
versus decision tasks, and may provide a meaningful
caution to designers interested in combining social
technologies with decision support tools. On the
other hand, many of our participants made
considerable use of Wikipedia, another social
technology, but one that is perceived as more
“objective.” Thus, when considering integration of
social technologies in a voter-browser, the type of discourse and the style of collaborative information management are important.
4.3 E-Democracy Versus E-Participation?
When generalized, the issues discussed above
raise a larger question of whether there will be
important tradeoffs between e-Democracy (involving information gathering and choice making) and e-Participation (involving discourse and social deliberation) technologies. Tradeoffs include design decisions for developers (e.g., will the addition of a chat feature decrease searching and browsing?), for users (e.g., will I get more out of using a candidate’s social networking site or their website?), and for theorists (e.g., does technology that emphasizes participation negatively impact information consumption? Or, conversely, does information overload negatively impact civic participation?).
While many researchers have noted dramatically
increased participation of politicians and voters,
especially younger voters, in social networking
contexts, we have yet to find out whether this will translate into voters being better informed or into actual voting.
Individuals vote; communities don’t. While voting
decisions are influenced by others and by one’s
socially constructed identity and culturally
constituted subjectivity, the nature of the secret ballot
is such that the individual is the dominant decision
maker when the ballot is cast. In previous research
[29], we discovered that simply integrating an online ballot with a political information browser was rejected by users, possibly because they do not feel that these are similar activities. The success of integrating social technologies with information browsers will depend on the degree to which information sharing and political discourse are considered to be different in nature from information
gathering and decision making.
5. Future Work
This study serves a second purpose as a pilot study for examining the integration of personal and social information management tools with a voter-browser. In terms of continued development of the
drop-down interface (which currently resides on the
web at http://www.VotesBy.US), we intend to
examine personalization of the drop-down items in
future research. We also intend to explore how
discourse and deliberation components such as chats
and blogs might impact use of the browser. The
current results suggest that these technologies will
have very significant impacts on browsing behavior,
and that the collaboration environment and style of
discourse will have an impact on their acceptance and
usefulness.
With the increased use of social networking tools
in this year’s election cycle in the United States, we
have also been studying users of Facebook’s political
applications. This is an example of an application that
is primarily social and deliberative and secondarily
informational.
Finally, we feel that the question of tradeoffs
between technologies for e-Democracy and e-Participation is a surprising and important one, and
we intend to explore this in more detail as well.
6. Acknowledgements
This material is based upon work supported by the
National Science Foundation under Grant No. IIS-0535036 to the first author. Any opinions, findings
and conclusions or recommendations expressed in
this material are those of the authors and do not
necessarily reflect the views of the National Science
Foundation.
7. References
[1] L. Rainie, J. Horrigan, and M. Cornfield, "The Internet
and Campaign 2004", Pew Internet and American Life
Project Report, Washington, DC, 2005. Available at
http://www.pewinternet.org/pdfs/PIP_2004_Campaign.pdf.
[2] A. P. Williams, and J. C. Tedesco, The Internet
Election: Perspectives on the Web in Campaign 2004,
Rowman & Littlefield Publishers, 2006.
[3] A. Smith, and L. Rainie, "The Internet and the 2008
election", Pew Internet and American Life Project Report,
Pew Research Center, Washington, DC. 2008. Available at:
http://www.pewinternet.org/pdfs/PIP_2008_election.pdf
[4] A. Kohut, "Social Networking and Online Videos Take Off: Internet’s Broader Role in Campaign 2008", Pew Internet and American Life Project Report, Pew Research Center, Washington, DC, 2008. Available at http://www.pewinternet.org/pdfs/Pew_MediaSources_jan08.pdf.
[5] M. Cornfield, "The Internet and Campaign 2004: A Look Back at the Campaigners", Pew Research Center, Washington, DC, 2005. Available at http://www.pewinternet.org/files/Cornfield_commentary.pdf.
[6] J. Trippi, The Revolution Will Not Be Televised:
Democracy, the Internet, and the Overthrow of Everything,
Regan Books, 2004.
[7] A. Sullivan, "Barack Obama is Master of the New
Facebook Politics", Sunday London Times, May 25, 2008.
Available at Times Online: http://www.timesonline.co.uk/tol/comment/columnists/andrew_sullivan/article3997523.ece.
[8] B. Williams, and G. Gulati, “The Political Impact of
Facebook: Evidence from the 2006 Midterm Elections and
2008 Nomination Contest”, Politics and Technology
Review, vol. 1, pp. 11-21, 2008.
[9] J. Horrigan, K. Garrett, and P. Resnick, "The Internet
and Democratic Debate", Pew Internet and American Life
Project Report, Washington, DC, 2004. Available at
http://www.pewinternet.org/pdfs/PIP_Political_Info_Report.pdf
[10] S. P. Robertson, “Voter-Centered Design: Toward a
Voter Decision Support System", ACM Transactions on
Computer-Human Interaction (TOCHI), vol. 12, no. 2, pp.
263-292, 2005.
[11] S. P. Robertson, "Design Research in Digital
Government: A Query Prosthesis for Voters", Proceedings
of dg.o2008: 9th Annual Conference on Digital
Government Research, New York: ACM Press, 2008.
[12] S. P. Robertson, C. E. Wania, G. Abraham, and S.J.
Park, “Drop-Down Democracy: Internet Portal Design
Influences Voters' Search Strategies”, Proceedings of the
41st Annual Hawaii International Conference on System
Sciences, 2008.
[13] C. C. Marshall, “Annotation: From Paper Books to the
Digital Library", Proceedings of the Second ACM
International Conference on Digital Libraries, pp. 131-140,
1997.
[14] L. Denoue, and L. Vignollet, “Personal Information
Organization Using Web Annotations", Proceedings of the
WebNet 2001 World Conference on the WWW and
Internet, pp. 279-283, 2001.
[15] L. B. Igo, K. A. Kiewra, and R. Bruning, “Individual
Differences and Intervention Flaws: A Sequential
Explanatory Study of College Students' Copy-and-Paste
Note Taking", Journal of Mixed Methods Research, vol. 2,
no. 2, pp. 149-168, 2008.
[16] R. E. Mayer, and R. Moreno, “Nine Ways to Reduce
Cognitive Load in Multimedia Learning", Educational
Psychologist, vol. 38, no. 1, pp. 43-52, 2003.
[17] T. Olive, R. T. Kellogg, and A. Piolat, "The Triple-Task Technique for Studying the Process of Writing", In T.
Olive and C. Levy, eds., Contemporary Tools and
Techniques for Studying Writing, pp. 31-59, Dordrecht:
Kluwer Academic Publishers, 2001.
[18] A. Piolat, T. Olive, and R. T. Kellogg, “Cognitive
Effort During Note Taking", Applied Cognitive
Psychology, vol. 19, no. 3, pp. 291-312, 2005.
[19] I. Glover, Z. Xu, and G. Hardaker, “Online
Annotation–Research and Practices", Computers &
Education, vol. 49, no. 4, pp. 1308-1320, 2007.
[20] P. L. Rau, S. H. Chen, and Y. T. Chin, “Developing
Web Annotation Tools for Learners and Instructors",
Interacting with Computers, vol. 16, no. 2, pp. 163-181,
2004.
[21] X. Fu, T. Ciszek, G. Marchionini, and P. Solomon,
“Annotating the Web: An Exploratory Study of Web Users’
Needs for Personal Annotation Tools", Proceedings of the
68th Annual Meeting of the American Society for
Information Science & Technology, 42. Charlotte, NC,
2005.
[22] W. Y. Hwang, C. Y. Wang, and M. Sharples, “A
Study of Multimedia Annotation of Web-Based Materials",
Computers & Education, vol. 48, no. 4, pp. 680-699, 2007.
[23] P. Nokelainen, M. Miettinen, J. Kurhila, P. Floreen,
and H. Tirri, “A Shared Document-Based Annotation Tool
to Support Learner-Centred Collaborative Learning",
British Journal of Educational Technology, vol. 36, no. 5,
pp. 757-770, 2005.
[24] J. J. Cadiz, A. Gupta, and J. Grudin, “Using Web
Annotations for Asynchronous Collaboration Around
Documents", Proceedings of the 2000 ACM Conference on
Computer Supported Cooperative Work, pp. 309-318,
2000.
[25] C. C. Marshall, and A. J. B. Brush, “Exploring the
Relationship Between Personal and Public Annotations",
Proceedings of the 2004 Joint ACM/IEEE Conference on
Digital Libraries, pp. 349-357, 2004.
[26] S. P. Robertson, C. E. Wania, and S. J. Park, “An
Observational Study of Voters on the Internet",
Proceedings of the 40th Annual Hawaii International
Conference on System Sciences, 2007.
[27] J. Landis, and G. Koch, “The Measurement of
Observer Agreement for Categorical Data", Biometrics,
vol. 33, no. 1, pp. 159-174, 1977.
[28] S. P. Robertson, “Digital Deliberation: Searching and
Deciding About How to Vote", Proceedings of dg.o2006:
7th Annual Conference on Digital Government Research,
New York: ACM Press, 2006.
[29] S. P. Robertson, P. Achananuparp, J. L. Goldman, S.J.
Park, N. Zhou, and M. J. Clare, “Voting and Political
Information Gathering on Paper and Online", Extended
Abstracts of CHI '05: Human Factors in Computing
Systems. New York: ACM Press, pp. 1753-1756, 2005.
[30] R. Vatrapu, S. P. Robertson, and W. Dissanayake,
“Are Political Weblogs Public Spheres or Partisan
Spheres?”, International Reports on Socio-Informatics, in
press.