Even Good Bots Fight
Milena Tsvetkova (1), Ruth García-Gavilanes (1), Luciano Floridi (1,2), Taha Yasseri (1,2)*
(1) Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK.
(2) Alan Turing Institute, London NW1 2DB, UK.
*Correspondence to: taha.yasseri@oii.ox.ac.uk.
Abstract: In recent years, there has been a huge increase in the number of bots online, ranging
from Web crawlers for search engines and chatbots for online customer service to spambots on
social media and content-editing bots in online collaboration communities. The online world has
turned into an ecosystem of bots. However, our knowledge of how these automated agents are
interacting with each other is rather poor. In this article, we analyze collaborative bots by
studying the interactions between bots that edit articles on Wikipedia. We find that, although
Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits and
these sterile “fights” may sometimes continue for years. Further, just like humans, Wikipedia
bots exhibit cultural differences. Our research suggests that even relatively “dumb” bots may
give rise to complex interactions, and this provides a warning to the Artificial Intelligence
research community.
In August 2011, Igor Labutov and Jason Yosinski, two PhD students at Cornell University, let a
pair of chat bots, called Alan and Sruthi, talk to each other online. Starting with a simple
greeting, the one-and-a-half-minute dialogue quickly escalated into an argument about what Alan
and Sruthi had just said, whether they were robots, and about God (1). The first ever
conversation between two simple artificial intelligence agents ended in a conflict.
A bot, or software agent, is a computer program that is persistent, autonomous, and
reactive (2, 3). Bots are defined by programming code that runs continuously and can be
activated by itself. They make and execute decisions without human intervention and perceive
and adapt to the context they operate in. Internet bots, also known as web bots, are bots that run
over the Internet. They appeared and proliferated soon after the creation of the World Wide Web
and, since then, they have been responsible for an increasingly larger proportion of Web
activities (4–6). For example, one study estimated that in 2009, bots generated 24% of all tweets
(7). A media analytics company found that 54% of the online ads shown in 2012 and 2013 were
viewed by bots rather than humans (8). And according to an online security company, bots
accounted for 48.5% of website visits in 2015 (9).
As the population of bots active on the Internet 24/7 grows exponentially, the
quantity and complexity of their interactions escalate as well. An increasing number of
decisions, options, choices, and services depend now on bots working properly, efficaciously,
and successfully. Yet, we know very little about the life and evolution of our digital minions. In
particular, predicting how bots’ interactions will evolve and play out even when they rely on
very simple algorithms is already challenging. Furthermore, as Alan and Sruthi demonstrated,
even if bots are designed to collaborate, conflict may occur inadvertently. Clearly, it is crucial to
understand what could affect bot-bot interactions in order to design cooperative bots that can
manage disagreement, avoid unproductive conflict, and fulfill their tasks in ways that are socially
and ethically acceptable.
There are many types of Internet bots (see Table 1). These bots form an increasingly
complex system of social interactions. Do bots interact with each other in ways that are
comparable to how we humans interact with each other? Bots are predictable automatons that do
not have the capacity for emotions, meaning-making, creativity, and sociality (10). Despite
recent advances in the field of Artificial Intelligence, the idea that bots can have morality and
culture is still far from reality. Today, it is natural to expect interactions between bots to be
relatively predictable and uneventful, lacking the spontaneity and complexity of human social
interactions. However, even in such simple contexts, our research shows that there may be more
similarities between bots and humans than one may expect. Focusing on one particular human-bot community, we find that conflict emerges even among benevolent bots that are designed to
benefit their environment and not fight each other, and that bot interactions reflect differences in
human cultures.
We study bots on Wikipedia, the largest free online encyclopedia. Bots on Wikipedia are
computer scripts that automatically handle repetitive and mundane tasks to develop, improve,
and maintain the encyclopedia. They are easy to identify because they operate from dedicated
user accounts that have been flagged and officially approved. Approval requires that the bot
follows Wikipedia’s bot policy.
Table 1. Categorization of Internet bots according to the intended effect of their operations and the kind
of activities they perform, including some familiar examples for each type. Benevolent bots are designed
to support human users or cooperate with them. Malevolent bots are designed to exploit human users and
compete negatively with them. In this study, we use data from editing bots on Wikipedia (benevolent bots
that generate content).

Collect information. Benevolent: Web crawlers; bots used by researchers. Malevolent: spam bots that collect e-mail addresses; Facebook bots that collect private information.

Execute actions. Benevolent: anti-vandalism bots on Wikipedia; auction-site bots; censoring and moderating bots on chats and forums; high-frequency trading algorithms. Malevolent: gaming bots; DDoS attack bots; viruses and worms; clickfraud bots that increase views of online ads and YouTube videos.

Generate content. Benevolent: editing bots on Wikipedia; Twitter bots that create alerts or provide content aggregation; @DeepDrumpf and poet-writing bots on Twitter. Malevolent: spam bots that disseminate ads; bot farms that write positive reviews and boost ratings on Apple App Store, YouTube, etc.

Emulate humans. Benevolent: customer service bots; AI bots, e.g. IBM's Watson. Malevolent: social bots involved in astroturfing on Twitter; social bots on the cheater dating site Ashley Madison.
Bots are important contributors to Wikipedia. For example, in 2014, bots completed
about 15% of the edits on all language editions of the encyclopedia (11). In general, Wikipedia
bots complete a variety of activities. They identify and undo vandalism, enforce bans, check
spelling, create inter-language links, import content automatically, mine data, identify copyright
violations, greet newcomers, and so on (12). Our analysis here focuses on editing bots, which
modify articles directly. We analyze the interactions between bots and investigate the extent to
which they resemble interactions between humans. In particular, we focus on whether bots
disagree with each other, how the dynamics of disagreement differ for bots versus humans, and
whether there are cultural differences between bots operating in different language editions of
Wikipedia.
To measure disagreement, we study reverts. A revert on Wikipedia occurs when an
editor, whether human or bot, undoes another editor’s contribution by restoring an earlier version
of the article. Reverts that occur systematically indicate controversy and conflict (13–15).
Reverts are technically easy to detect regardless of the context and the language, so they enable
analysis at the scale of the whole system.
Our data contain all edits in 13 different language editions of Wikipedia in the first ten
years after the encyclopedia was launched (2001-2010). The languages represent editions of
different size and editors from diverse cultures. We know which user completed the edit, when,
in which article, whether the edit was a revert and, if so, which previous edit was reverted. We
first identified which editors are humans, bots, or vandals. We isolated the vandals since their
short-lived disruptive activity exhibits different time and interaction patterns than the activity of
regular Wikipedia editors.
Bots constitute a tiny proportion of all Wikipedia editors but they stand behind a
significant proportion of all edits (Fig. 1A,B). There are large differences between
languages in terms of how active bots are. From previous research, we know that, in small and
endangered languages, bots are extremely active and do more than 50% of the edits, sometimes
up to 100% (12). Their tasks, however, are mainly restricted to adding links between articles and
languages. In large and active languages, the level of bot activity is much lower but also much
more variable.
Compared to humans, a smaller proportion of bots’ edits are reverts and a smaller
proportion get reverted (Fig. 1C,D). Since 2001, the number of bots and their activity has been
increasing but at a slowing rate (Fig. S1). In contrast, the number of reverts between bots has
been continuously increasing (Fig. 2A). This suggests that bot interactions are not
becoming more efficient. We also see that the proportion of mutual bot-bot reverts has remained
relatively stable, perhaps even slightly increasing over time, indicating that bot owners have not
learned to identify bot conflicts faster (Fig. 2B).
Fig. 1. The proportion of Wikipedia editors who are humans, vandals, and bots, and the type of editorial
activity in which they are involved. A language edition to the left has a higher total number of edits than
one to the right. (A) Bots comprise a tiny proportion of all Wikipedia users, usually less than 0.1% (not
visible in the figure). (B) However, bots account for a significant proportion of the editorial activity. The
level of bot activity significantly differs between different language editions of Wikipedia, with bots
generally more active in smaller editions. (C) A smaller proportion of bots’ edits are reverts compared to
humans’ edits. (D) A smaller proportion of bots’ edits get reverted compared to humans’ edits. Since by
our definition, vandals have all of their edits reverted, we do not show them in this figure.
In general, bots revert each other a lot: for example, over the ten-year period, bots on
English Wikipedia reverted another bot on average 105 times, which is significantly larger than
the average of 3 times for humans (Fig. S2; Table S1). However, bots on German Wikipedia
revert each other to a much lesser extent than other bots (24 times on average). Bots on
Portuguese Wikipedia, in contrast, fight the most, with an average of 185 bot-bot reverts per bot.
Fig. 2. The number of bot reverts executed by another bot and the proportion of unique bot-bot pairs that
have at least one reciprocated revert for the period 2001-2010. (A) Generally, the number of bot-bot
reverts has been increasing. (B) However, the proportion of reciprocated reverts has not been decreasing
(error bars correspond to standard error). This suggests that disagreement between bots is not becoming
less common.
The dynamics of disagreement differ significantly between bots and humans (Fig. 3).
Reverts between bots tend to occur at a slower rate and a conflict between two bots can take
place over longer periods of time, sometimes over years. In fact, bot-bot interactions have a
different characteristic time scale than human-human interactions (Fig. S3). The characteristic
average time between successive reverts for humans peaks at 2 minutes, 24 hours, or 1 year. In
comparison, bot-bot interactions have a characteristic average response time of 1 month. This
difference is likely because, first, bots systematically crawl articles and, second, bots are
restricted as to how often they can make edits (the Wikipedia bot policy usually requires spacing
of 10 seconds, or 5 for anti-vandalism activity, which is considered more urgent). In contrast,
humans use automatic tools that report live changes made to a pre-selected list of articles (16,
17); they can thus follow only a small set of articles and, in principle, react instantaneously to
any edits on those.
Bots also tend to reciprocate each other’s reverts to a greater extent (Fig. 3). In contrast,
humans tend to have highly unbalanced interactions, where one individual unilaterally reverts
another one (Fig. S4-S6; Table S2).
Fig. 3. The change in balance in bot-bot and human-human pairs in English Wikipedia in the period
2001-2010. Balance is defined as follows: starting from y_0 = 0, balance y_t = y_{t-1} + 1 if i reverts j at time t
and y_t = y_{t-1} - 1 if j reverts i at time t; the labels i and j are assigned so that y >= 0 for the majority of the ij
interaction time; finally, we here show balance as the square root of y. The figures show all interactions
involving more than five reverts between bots and a random sample of the interactions between humans.
Each pair is assigned a random color. (A, B) Compared to human-human interactions, bot-bot interactions
occur at a slower rate and are more balanced, in the sense that reverts go back and forth between the two
editors.
These results show that, although in quantitatively different ways, bots on Wikipedia
behave and interact as unpredictably and as inefficiently as humans do. The disagreements likely
arise from the bottom-up organization of the community, whereby human editors individually
create and run bots, without a formal mechanism for coordination with other bot owners. Delving
deeper into the data, we found that most of the disagreement occurs between bots that specialize
in creating and modifying links between different language editions of the encyclopedia. This is
plausible since coordination between editors speaking different languages is particularly hard.
The lack of coordination may be due to different language editions having slightly different
naming rules and conventions.
Wikipedia is perhaps one of the best examples of a populous and complex bot ecosystem
but this does not necessarily make it representative. As Table 1 demonstrates, we have
investigated a very small region of the botosphere on the Internet. The Wikipedia bot ecosystem
is gated and monitored and this is clearly not the case for systems of malevolent social bots, such
as social bots on Twitter posing as humans to spread political propaganda or influence public
discourse (18). Before being able to study the social interactions of these bots, we first need to
learn to identify them (6).
Our analysis shows that a system of simple bots may produce complex dynamics and
unintended consequences. In the case of Wikipedia, we see that benevolent bots that are designed
to collaborate may end up in continuous disagreement. This is both inefficient as a waste of
resources, and inefficacious, for it may lead to local impasse. Although such disagreements
represent a small proportion of the bots’ editorial activity, they nevertheless bring attention to the
complexity of designing artificially intelligent agents. Part of the complexity stems from the
common field of interaction – bots on the Internet, and in the world at large, do not act in
isolation, and interaction is inevitable, whether designed for or not. Part of the complexity stems
from the fact that there is a human designer behind every bot and that human artifacts embody
human culture. As bots continue to proliferate and become more sophisticated, social scientists
will need to devote more attention to understanding their culture and social life.
References and Notes:
1. Cornell Creative Machines Lab, AI vs. AI. Two chatbots talking to each other. YouTube
(2011), (available at https://www.youtube.com/watch?v=WnzlbyTZsQY).
2. S. Franklin, A. Graesser, “Is It an agent, or just a program?: A taxonomy for autonomous
agents” in Intelligent Agents III: Agent Theories, Architectures, and Languages, J. P. Müller,
M. J. Wooldridge, N. R. Jennings, Eds. (Springer, 1997), pp. 21–35.
3. L. Floridi, J. W. Sanders, On the morality of artificial agents. Minds Mach. 14, 349–379
(2004).
4. A. Leonard, Bots: The Origin of New Species (Hardwired, 1997).
5. J. Brown, P. Duguid, The Social Life of Information (Harvard Business Press, 2000).
6. E. Ferrara, O. Varol, C. Davis, F. Menczer, A. Flammini, The rise of social bots. Commun.
ACM. 59 (2016).
7. Sysomos, An in-depth look at the most active Twitter user data (2009), (available at
https://sysomos.com/inside-twitter/most-active-twitter-user-data).
8. R. Holiday, Fake traffic means real paydays. Observer (2014), (available at
http://observer.com/2014/01/fake-traffic-means-real-paydays/).
9. I. Zeifman, “2015 bot traffic report: Humans take back the Web, bad bots not giving any
ground” (Imperva Incapsula, 2015).
10. S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach (Pearson, Harlow, UK, ed.
3, 2009).
11. T. Steiner, Bots vs. wikipedians, anons vs. logged-ins (redux). OpenSym 2014, 1–7 (2014).
12. S. Niederer, J. van Dijck, Wisdom of the crowd or technicity of content? Wikipedia as a
sociotechnical system. New Media Soc. 12, 1368–1387 (2010).
13. A. Kittur, B. Suh, B. A. Pendleton, E. H. Chi, He says, she says: Conflict and coordination in
Wikipedia. SIGCHI 2007, 453–462 (2007).
14. U. Brandes, P. Kenis, J. Lerner, D. van Raaij, Network analysis of collaboration structure in
Wikipedia. WWW 2009, 731–740 (2009).
15. T. Yasseri, R. Sumi, A. Rung, A. Kornai, J. Kertész, Dynamics of conflicts in Wikipedia.
PLoS One 7, e38869 (2012).
16. A. Halfaker, J. Riedl, Bots and cyborgs: Wikipedia’s immune system. Computer 45, 79–82
(2012).
17. R. S. Geiger, A. Halfaker, When the levee breaks: Without bots, what happens to Wikipedia's
quality control processes? WikiSym 2013, 1–6 (2013).
18. N. Abokhodair, D. Yoo, D. W. McDonald, Dissecting a social botnet: Growth, content and
influence in Twitter. CSCW 2015, 839–851 (2015).
19. R. Sumi, T. Yasseri, A. Rung, A. Kornai, J. Kertész, Edit wars in Wikipedia. PASSAT
SocialCom 2011, 724–727 (2011).
20. M. Kivelä et al., Multilayer networks. J. Complex Networks 2, 203–271 (2014).
Acknowledgments: This project has received funding from the European Union’s Horizon 2020
research and innovation program under grant agreement No 645043. The authors thank
Wikimedia Deutschland e.V. and Wikimedia Foundation for the live access to the Wikipedia data
via Toolserver. The data reported in the paper are available at http://wwm.phy.bme.hu.
Author contributions: M.T. and T.Y. designed and performed research. R.G. and T.Y. collected
data. M.T. analyzed data. M.T., R.G., L.F., and T.Y. wrote and reviewed the manuscript.
Supplementary Materials for
Even Good Bots Fight
Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, Taha Yasseri
correspondence to: taha.yasseri@oii.ox.ac.uk
Materials and Methods
Data
Wikipedia is an ecosystem of bots. Some of the bots are “editing bots” that work on the
articles. They undo vandalism, enforce bans, check spelling, create inter-language links, import
content automatically, etc. Other bots are non-editing: these bots mine data, identify vandalism,
or identify copyright violations. Bots usually specialize in one activity.
In addition to bots, there are also certain automated services that editors use to streamline
their work. For example, there are automated tools such as Huggle and STiki, which produce a
filtered set of edits to review in a live queue. Using these tools, editors can instantly revert the
edit in question with a single click and advance to the next one. There are also user interface
extensions and in-browser functions such as Twinkle, rollback, and undo, which also allow
editors to revert with a single click. Another automated service that is relatively recent and much
more sophisticated is the Objective Revision Evaluation Service (ORES). It uses machine-learning
techniques to rank edits, with the ultimate goal of identifying vandals or low-quality contributions.
Our research focuses on editing bots. Our data contain who reverts whom, when, and in
what article. To obtain this information, we analyzed the Wikipedia XML Dumps
(https://dumps.wikimedia.org/mirrors.html) of the 13 language editions we study. The data
covers the period from the beginning of Wikipedia (January 15, 2001) until a final date between
February 2, 2010 and October 31, 2011, depending on when the data was collected for the particular
language edition. To detect restored versions of an article, a hash was calculated for the complete
article text following each revision and the hashes were compared between revisions (19).
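The hash-based revert detection can be sketched as follows. This is a minimal illustration, not the authors' code: `revisions`, the SHA-1 choice, and the tie-breaking rule (matching against the earliest revision with identical text) are assumptions.

```python
import hashlib

def detect_reverts(revisions):
    """Identify reverts by comparing hashes of full article texts.

    `revisions` is a chronological list of (editor, text) tuples, an
    illustrative stand-in for one article's revision history.
    Returns (reverter, reverted) editor pairs, where the reverted editor
    is the one who edited immediately after the restored version.
    """
    seen = {}      # hash -> index of the earliest revision with that text
    reverts = []
    for i, (editor, text) in enumerate(revisions):
        h = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if h in seen:
            j = seen[h]                  # index of the restored version
            if j + 1 < i:                # skip null edits (no edit undone)
                reverted_editor = revisions[j + 1][0]
                if reverted_editor != editor:   # ignore self-reverts
                    reverts.append((editor, reverted_editor))
        else:
            seen[h] = i
    return reverts
```

For example, if editor A writes a text, B changes it, and A restores the original, the function reports A reverting B.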
Wikipedia requires that human editors create separate accounts for bots and that the bot
account names clearly indicate the user is a bot, usually by including the word “bot”
(https://en.wikipedia.org/wiki/Wikipedia:Bot_policy). Hence, to identify the bots, we selected all
account names that contain different spelling variations of the word “bot.” We supplemented this
set with all accounts that have currently active bot status in the Wikipedia database but that may
not fit the above criterion (using https://en.wikipedia.org/wiki/Wikipedia:Bots/Status as of
August 6, 2015). We thus obtained a list of 6,627 suspected bots.
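The name-based screening step can be sketched with a simple regular expression. The exact variant list the selection used is not specified in the text, so the pattern below is an illustrative assumption.

```python
import re

# Flag account names that contain a spelling variation of "bot":
# the word at the end of the name, followed by a digit, or separated
# by common delimiters. The pattern is an illustrative assumption.
BOT_NAME = re.compile(r"bot([_ .-]|$|\d)", re.IGNORECASE)

def looks_like_bot(account_name):
    """Return True if the account name suggests a bot account."""
    return bool(BOT_NAME.search(account_name))
```

Names such as "ClueBot" match, while names that merely contain the letters b-o-t mid-word, such as "Abbott", do not; accounts flagged this way would still require the manual confirmation step described below.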
We then used the Wikipedia API to check the “User” page for each suspected bot account.
If the page contained a link to another account, we confirmed that the current account was a bot
and linked it to its owner. For pages that contained no links or more than one link to other accounts,
we manually checked the “User” and “User_talk” pages for the suspected bot account to see if it
is indeed a bot and to identify its owner. The majority of manually checked accounts were
vandals or humans, so we ended up with 1,549 bots, each linked to its human owner.
We additionally labeled human editors as vandals if they had all their edits reverted by
others. This rule meant that we also labeled as vandals newcomers who became discouraged and
left Wikipedia after all their initial contributions were reverted. Since we are interested in social
interactions emerging from repeated activity, we do not believe that this decision affects our
results.
Using the revert data, we created a directed two-layer multi-edge network, where ownership
couples the layer of human editors and the layer of bots (20). To build the network, we assumed
that a link goes from the editor who restored an earlier version of the article (the “reverter”) to
the editor who made the revision immediately after that version (the “reverted”). All links were
time-stamped. We collapsed multiple bots to a single node if they were owned by the same
human editor; these bots were usually accounts for different generations of the same bot with the
same function. In the network, reverts can be both intra- and inter-layer: they occur within the
human layer, within the bot layer, and in either direction between the human and bot layers. The
multi-layer network was pruned by removing self-reverts, as well as reverts between a bot and its
owner.
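The network construction described above can be sketched with plain dictionaries. The function below is a minimal illustration; the input shapes (`reverts` triples and a `bot_owner` map) are assumptions about how the parsed dump data might be represented.

```python
from collections import defaultdict

def build_revert_network(reverts, bot_owner):
    """Build a time-stamped, directed multi-edge revert network.

    `reverts` is an iterable of (reverter, reverted, timestamp) triples;
    `bot_owner` maps bot account -> owning human editor.
    Bots with the same owner are collapsed into one node, and self-reverts
    as well as reverts between a bot and its owner are pruned.
    """
    def node(editor):
        # Collapse different generations of the same bot into one node,
        # keyed by the owner's name; humans keep their own name.
        if editor in bot_owner:
            return ("bot", bot_owner[editor])
        return ("human", editor)

    edges = defaultdict(list)   # (source node, target node) -> timestamps
    for reverter, reverted, ts in reverts:
        u, v = node(reverter), node(reverted)
        if u == v:
            continue    # self-revert, incl. reverts among same-owner bots
        if {u[0], v[0]} == {"bot", "human"} and u[1] == v[1]:
            continue    # revert between a bot and its own owner
        edges[(u, v)].append(ts)
    return edges
```

Edges within the bot layer, within the human layer, and between the two layers are all retained, so intra- and inter-layer reverts can be counted from the same structure.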
Clustering analysis of interaction trajectories
We study interactions in dyads over time. We model the interaction trajectories in two-dimensional
space, where the x-axis measures time and the y-axis measures the cumulative
balance in reverts between the two editors. Starting from y_0 = 0, balance y_t = y_{t-1} + 1 if i reverts j
at time t and y_t = y_{t-1} - 1 if j reverts i at time t; the labels i and j are assigned so that y >= 0 for the
majority of the ij interaction time (see Fig. 3). We analyze three properties of the trajectories:
• Latency. We define latency as the mean log time in seconds between successive reverts:
µ(log10 Δt). Latency measures the average steepness of the interaction trajectories in Figure 3.
• Imbalance. We define imbalance as the final proportion of reverts between i and j that
were not reciprocated: |r_i – r_j| / (r_i + r_j), where r_i and r_j are the number of times i reverted j and j
reverted i, respectively. Imbalance measures the distance between the x-axis and the last point of
the interaction trajectories in Figure 3.
• Reciprocity. We define reciprocity as the proportion of observed turning points out of all
possible: (# turning points) / (r_i + r_j – 1), where r_i and r_j are the number of times i reverted j and j
reverted i, respectively. A turning point occurs when the user who reverts at time t is different
from the user who reverts at time t+1. Reciprocity measures the jaggedness of the interaction
trajectories in Figure 3.
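The three properties can be computed directly from a dyad's revert sequence. The sketch below assumes an illustrative encoding of the data as chronological (timestamp, reverter) pairs, with the reverter labeled "i" or "j".

```python
import math

def trajectory_properties(events):
    """Compute (latency, imbalance, reciprocity) for one dyad.

    `events` is a chronological list of (timestamp_seconds, reverter)
    pairs, where reverter is "i" or "j" -- an illustrative encoding of
    the revert sequence between two editors.
    """
    times = [t for t, _ in events]
    who = [w for _, w in events]
    # Latency: mean log10 time between successive reverts.
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    latency = sum(math.log10(g) for g in gaps) / len(gaps)
    # Imbalance: |r_i - r_j| / (r_i + r_j).
    r_i, r_j = who.count("i"), who.count("j")
    imbalance = abs(r_i - r_j) / (r_i + r_j)
    # Reciprocity: observed turning points over the maximum possible.
    turns = sum(1 for a, b in zip(who, who[1:]) if a != b)
    reciprocity = turns / (r_i + r_j - 1)
    return latency, imbalance, reciprocity
```

A perfectly alternating sequence yields reciprocity 1, while a one-sided sequence yields reciprocity 0 and imbalance 1.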
Analyzing the properties of the interaction trajectories suggests that bot-bot interactions
occur at a slower rate and are more balanced than human-human interactions (Fig. S3-S5). We
can quantify these findings more precisely by identifying different types of interaction
trajectories and counting how often they occur for bots and for humans, as well as for specific
languages. To this end, we use k-means clustering on the three properties of the trajectories
(latency, imbalance, and reciprocity) and on all bot-bot and human-human interactions longer
than five reverts (the results are substantively similar without the length restriction). The
algorithm suggests that the data are best clustered into four trajectory types:
• Fast unbalanced. These trajectories have low reciprocity and latency and high imbalance.
• Somewhat balanced. These trajectories have intermediate imbalance and reciprocity.
• Slow unbalanced. These trajectories have low reciprocity and high latency and
imbalance.
• Well balanced. These trajectories have low imbalance and high reciprocity.
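The clustering step can be sketched with a minimal Lloyd's k-means over the three trajectory properties (with k = 4 for the analysis above). The pure-Python function below is an illustrative stand-in; the text does not specify which implementation was used.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's k-means over feature triples such as
    (latency, imbalance, reciprocity). An illustrative sketch,
    not the implementation used in the analysis."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial centers
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        groups = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```

In practice one would also standardize the three features and try several random initializations, since k-means is sensitive to both.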
Looking at the prevalence of these four types of trajectories for bots and humans and across
languages, we confirm the previous observations: bot-bot interactions occur at a slower rate and
are more balanced, in the sense that reverts go back and forth between the two bots (Table S2).
Further, we find that bot-bot interactions are more balanced in smaller language editions of
Wikipedia. This could be due to the fact that bots are more active in smaller editions and hence,
interactions between them are more likely to occur. Less intuitively, however, this observation
also suggests that conflict between bots is more likely to occur when there are fewer bots and
when, common sense would suggest, coordination is easier.
Fig. S1. The number of bots, the number of edits by bots, and the proportion of edits done by bots
between 2001 and 2010. (A, B, C) Between 2003 and 2008 the number of bots and their activity have
been increasing. This trend, however, appears to have subsided after 2008, suggesting that the system
may have stabilized.
Fig. S2. For the majority of languages, bots are mainly reverted by other bots, as opposed to human
editors or vandals. English and the Romance languages in our data present exceptions, with less than 20%
of bot reverts done by other bots.
Fig. S3. Bot-bot interactions have a different characteristic time scale than human-human interactions. The
figures show the distribution of interactions for a particular latency, where we define latency as the mean
log time in seconds between successive reverts. (A) Bot-bot interactions have a characteristic latency of 1
month, as indicated by the peak in the figure. (B) Human-human interactions occur with a latency of 2
minutes, 24 hours, or 1 year.
Fig. S4. Bot-bot interactions are on average more balanced than human-human interactions. We define
imbalance as the final proportion of reverts between i and j that were not reciprocated. (A) A significant
proportion of bot-bot interactions have low imbalance. (B) The majority of human-human interactions are
perfectly unbalanced.
Fig. S5. At a smaller timescale, bots also reciprocate much more than humans do. We measure reciprocity
as the proportion of observed turning points out of all possible. (A) A significant proportion of bot-bot
interactions have intermediate or high values of reciprocity. (B) The majority of human-human
interactions are not reciprocated.
Fig. S6. Four types of interaction trajectories suggested by the k-means analysis. The left panels show a
sample of the trajectories, including bot-bot and human-human interactions and trajectories from all
languages. The right panels show the distribution of latency, imbalance, and reciprocity for each type of
trajectory. (A) Fast unbalanced trajectories have low reciprocity and latency and high imbalance. (B)
Somewhat balanced trajectories have intermediate imbalance and reciprocity. (C) Slow unbalanced
trajectories have low reciprocity and high latency and imbalance. (D) Well balanced trajectories have low
imbalance and high reciprocity.
Table S1. Descriptive statistics for the bot-bot layer and the human-human layer in the multi-layer
networks of reverts. (A, B) Bots revert each other to a great extent. They also reciprocate each other’s
reverts to a considerable extent. Their interactions are not as clustered as for human editors. Still, both for
bots and humans, more senior editors tend to revert less senior editors, as measured by node assortativity
by number of edits completed. Bots on German Wikipedia revert each other to a much lesser extent than
other bots. Bots on Portuguese Wikipedia revert each other the most.
Columns: number of nodes; degree (avg. reverts per node); proportion of dyads with at least one
reciprocated revert; assortativity by number of edits; avg. clustering; avg. clustering relative to
avg. clustering in a random network.

(A) Bot-bot
Language      Nodes     Degree   Recipr.   Assort.   Clust.   Clust./random
English         319     104.6     0.46     -0.02     0.43        19
Japanese        182     100.4     0.57     -0.13     0.58         9
Spanish         204      71.7     0.53     -0.06     0.57        12
French          225      59.3     0.47     -0.1      0.5         12
Portuguese      164     185       0.57     -0.12     0.64         8
German          178      24.1     0.43     -0.1      0.4         10
Chinese         151     103       0.59     -0.16     0.62         9
Hebrew          124      83.9     0.59     -0.11     0.59         7
Hungarian       116      66.8     0.54     -0.13     0.6          8
Czech           122      59       0.57     -0.18     0.56         8
Arabic          132     161.7     0.6      -0.05     0.6          7
Romanian        104      70.8     0.55     -0.11     0.6          7
Persian         106      63.8     0.5      -0.05     0.53         8

(B) Human-human
Language      Nodes     Degree   Recipr.   Assort.   Clust.   Clust./random
English     4127880      3.1      0.09     -0.05     0.04     72370
Japanese     193203      2.6      0.16     -0.05     0.02      2971
Spanish      508815      2.3      0.09     -0.11     0.1      33433
French       181395      2.6      0.09     -0.02     0.04      4011
Portuguese   262293      2.3      0.09     -0.14     0.12     19762
German       206734      2.2      0.11     -0.1      0.03      3703
Chinese       66470      3.2      0.18     -0.14     0.08      3377
Hebrew        70816      2.9      0.13     -0.13     0.2       7458
Hungarian     21036      2.4      0.11     -0.13     0.1       1265
Czech         23792      3.1      0.1      -0.18     0.19      2262
Arabic        39083      2.2      0.08     -0.17     0.11      2947
Romanian      16625      2.1      0.1      -0.2      0.11      1371
Persian       18657      3.6      0.16     -0.13     0.21      1972
Table S2. The prevalence of the four types of trajectories for bots and humans and for different
language editions of Wikipedia. Each cell gives the proportion of trajectories of that type. (A, B)
Bot-bot interactions occur at a slower rate and are more balanced, in the sense that reverts go
back and forth between the two bots. Further, bot-bot interactions are more balanced in smaller
language editions of Wikipedia.

(A) Bot-bot
Language      Fast         Somewhat   Slow         Well
              unbalanced   balanced   unbalanced   balanced
English          0.20        0.40        0.28        0.12
Japanese         0.03        0.44        0.31        0.23
Spanish          0.09        0.46        0.24        0.22
French           0.14        0.39        0.28        0.20
Portuguese       0.08        0.40        0.33        0.20
German           0.11        0.42        0.26        0.22
Chinese          0.04        0.45        0.29        0.22
Hebrew           0.04        0.43        0.23        0.29
Hungarian        0.09        0.38        0.22        0.31
Czech            0.01        0.48        0.22        0.29
Arabic           0.04        0.42        0.27        0.28
Romanian         0.08        0.43        0.24        0.25
Persian          0.07        0.45        0.20        0.28

(B) Human-human
Language      Fast         Somewhat   Slow         Well
              unbalanced   balanced   unbalanced   balanced
English          0.43        0.19        0.14        0.23
Japanese         0.51        0.23        0.15        0.12
Spanish          0.50        0.17        0.17        0.16
French           0.52        0.16        0.11        0.21
Portuguese       0.42        0.18        0.22        0.18
German           0.33        0.19        0.10        0.37
Chinese          0.40        0.26        0.16        0.18
Hebrew           0.34        0.22        0.23        0.21
Hungarian        0.43        0.18        0.17        0.21
Czech            0.30        0.15        0.40        0.15
Arabic           0.34        0.18        0.27        0.21
Romanian         0.48        0.17        0.21        0.14
Persian          0.29        0.20        0.14        0.36