Dr Timothy Graham, who is studying the phenomenon along with his colleague, Professor Axel Bruns, told iTWire that the research was still in an initial phase.
He provided iTWire with a list of 127 Twitter handles which he said had a high bot score. Among the characteristics that contribute to such a score are incomplete or very simplistic profile information; few or no connections with other accounts, or connections only with accounts suspected of being bots; or patterns of posting tweets that were unlikely to represent human activity.
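As a rough illustration only, the characteristics above could be combined into a weighted score in the following way. This sketch is not Botometer (which is a machine learning classifier trained on many more features); the feature names, weights and signal thresholds here are hypothetical, chosen simply to mirror the three characteristics the researchers describe.

```python
# Toy bot-score heuristic, loosely modelled on the characteristics described
# in the article. All feature names, weights and thresholds are hypothetical.

def bot_score(profile: dict) -> float:
    """Combine weighted heuristic signals into a score between 0 and 1."""
    signals = {
        # Incomplete or very simplistic profile information
        "sparse_profile": not profile.get("bio") or not profile.get("location"),
        # Few or no connections with other accounts
        "few_connections": profile.get("followers", 0) + profile.get("following", 0) < 10,
        # Posting patterns unlikely to represent human activity
        "inhuman_posting": profile.get("tweets_per_day", 0) > 100,
    }
    weights = {"sparse_profile": 0.3, "few_connections": 0.3, "inhuman_posting": 0.4}
    return sum(weights[name] for name, fired in signals.items() if fired)

# An account with an empty profile, almost no connections and hundreds of
# tweets a day trips every signal; a filled-in, well-connected, low-volume
# account trips none.
likely_bot = bot_score({"bio": "", "followers": 2, "following": 1,
                        "tweets_per_day": 400})
likely_human = bot_score({"bio": "Journalist", "location": "Sydney",
                          "followers": 500, "following": 300,
                          "tweets_per_day": 5})
```

A real classifier would learn such weights from labelled data rather than hard-coding them, which is the gap Botometer's machine learning approach fills.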
"We used the state-of-the-art Botometer machine learning classifier to analyse the accounts," Dr Graham said. "The accounts I've sent you have a high bot score, which means that they are very likely to have been automated in a bot-like way.
"There is also some complexity between bots and trolls, whereby trolls are controlled by humans (e.g., the Russian interference via the Internet Research Agency) but use automated scripts to retweet content in a bot-like way."
Dr Timothy Graham: "It doesn't mean they are bots in the conventional sense, i.e., a piece of code controlling an account with no human intervention, but more that the account is highly automated in some way."
Professor Bruns, who studied activity on Twitter during the 2013 and 2016 Federal Elections, told iTWire that his studies in those two years — plus the 2017 Queensland state election — focused more on the overall interaction patterns on Twitter around candidate accounts: who was getting attention, and what kind (were they being tweeted at, or were their messages being retweeted); what level of activity did the candidates themselves show in their tweeting; how did they interact with each other, and with the general public; and what themes and topics were being addressed in the tweets.
"This enabled us to assess the relative engagement with each party, and the evolution of topics over the course of the campaign," he said.
Two articles which he wrote about his studies during 2013 and 2016 were published in the online publication, The Conversation. In the 2013 poll, he and his team tracked about 117 ALP accounts, 100 Coalition accounts, 68 Greens accounts, and 112 independent and minor party accounts over the course of the campaign.
"During the campaign period from 5 August to 7 September 2013, ALP candidates out-tweeted their competitors by a substantial margin. They posted more than twice as many tweets as Coalition candidates, in spite of their broadly comparable number of accounts. The Greens, by contrast, were considerably more active on Twitter – outdoing even the Coalition, although they fielded only two thirds as many tweeting candidates," he wrote at the time.
At the end of the 2016 campaign, Professor Bruns observed that ALP accounts were considerably more active than Coalition accounts, but the latter received a considerably greater volume of mentions than those of Labor candidates.
Professor Axel Bruns says his studies of the 2013 and 2016 elections helped "assess the relative engagement with each party, and the evolution of topics over the course of the campaign".
Dr Graham said the study was prompted by the events of the 2016 US presidential election, when fake news was posted on social media in an effort to sway voters, "not only in terms of outcome but also the events that occurred in terms of foreign or national interference in manipulating voters' opinions on social media. The Mueller report goes into some detail about this".
He said misinformation in Australia was very likely to be different to other contexts, including the US. "Misinformation campaigns in the US largely were complex, and included a combination of strategies. One of these includes sowing discord and general unhappiness with the democratic process, potentially driving voters away from booths.
"By and large, however, it was more a strategic effort to 'astro turf' political discussions and magnify/spread support for [Donald] Trump and dissent against [Democratic nominee Hillary] Clinton. This occurred in multiple ways, and one was certainly bots and trolls on social media, particularly those with a Russian origin (the Internet Research Agency)."
Dr Graham studied Twitter activity during the US presidential election as well, beginning right after the first debate. For that study, he bought a dataset from Twitter through third-party data broker Uberlink. He and his fellow researchers analysed 1.5 million accounts for influence, political behaviour and botness.
He said that about 4.5% of accounts were believed to be bots, a proportion quite similar to what he has found so far in the current study, adding that bots on the social media platform were 2.5 times more influential than humans.
The current study was begun after the election was called on 11 April, and Dr Graham said he hoped to finalise it by August or September, a few months after the election.
Asked whether Twitter was aware of the study, he said: "I am actually not sure if Twitter is aware of the study – there are many thousands of Twitter-related research studies! We don't approach them for information. We use the Twitter API, which is a kind of data gateway that enables the public to access Twitter data through official channels."