We recently published Climate Strike Observation, which covered our setup work ahead of the global protests scheduled for the week of 20–29 September. This is a specific example of the first step in what has become a fairly well-structured analytical process.
When presented with a mystery, whether anticipated, currently happening, or in the past, the first thing we do is take an inventory of relevant assets. Much of our content is related to Twitter, but this is by no means our sole source. We have sources of Telegram data, as well as Google/Talkwalker Alerts, website scrapes, and document caches to which we have access. All of that is slowly being massaged into a workable environment based on Open Semantic Search.
Since Twitter is the most developed aspect, and the one where we run no risk of slipping on sources and methods, we’ll stick to it as the example.
The system, as it is employed today, can capture about sixty streams, which are based on text terms or numeric userids. Each stream has two indices: one for tweets, and one for the accounts that made them or were mentioned in them. Documents, Elastic’s term for items in an index, are uniquely identified with the “snowflake number” Twitter uses. Embedding Twitter’s numbers offers many benefits, both on and off the platform.
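One concrete benefit is that the ID itself encodes the creation time. Below is a minimal sketch of decoding that time from a snowflake ID, assuming the standard published layout (milliseconds since Twitter’s custom epoch in the high bits, above the worker and sequence bits):

```python
from datetime import datetime, timezone

TWITTER_EPOCH_MS = 1288834974657  # start of Twitter's snowflake epoch, in ms

def snowflake_to_datetime(snowflake_id: int) -> datetime:
    """Recover the creation time encoded in a Twitter snowflake ID.

    The high bits of the ID are milliseconds since Twitter's custom epoch,
    so a tweet's timestamp can be derived from the ID alone.
    """
    ms_since_epoch = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms_since_epoch / 1000, tz=timezone.utc)

# An arbitrary tweet ID decodes directly to its creation time.
print(snowflake_to_datetime(1176196205982544896))
```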
Let’s assume we want to know about an event in Canada. What resources do we have available?
- Canadian MPs stream – 2.5 million tweets from 270k accounts.
- Canada Proud streams – 950k tweets from over 70k accounts.
- Rebel Media stream – 430k tweets from 100k accounts.
- National Observer – 362k tweets from 116k accounts.
- Gab Canada – Gab users’ tweets & profiles tied to Canadian MPs.
- TrudeauMustGo – streaming anti-Trudeau terms.
- CAFife – stream related to Robert Fife’s recent Trudeau smear.
- TrudeauBots – a stream based on some automated accounts.
All but the last of these streams are active today. There are other perspectives available as well.
- campfollowers – every follower of every Canadian MP.
- canada350 – 8k followers of 350’s Canadian role account.
- collectusers – 55m profiles of mostly politically engaged accounts.
We have access to the staff of the National Observer, the host of The View Up Here podcast, some of his show’s guests, and other personal relationships. Each of the Five Eyes (FVEY) countries nominally speaks English, but these are five distinct, if mutually intelligible, dialects. Local knowledge is a must for any serious analysis.
As far as tools go, we have the Kibana interface for Elasticsearch, which means an analyst can create a visualization and share it in a variety of ways. We have Maltego, Gephi, and Graphistry for data visualization. Elasticsearch can be made to produce both CSV and GML files, the latter being a native graph format.
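As a rough sketch of the CSV side, assuming the standard elasticsearch Python client and purely illustrative index and field names:

```python
import csv

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Flatten matching tweets from a hypothetical tweet index into a CSV file
# that the downstream visualization tools can ingest.
query = {"query": {"match": {"text": "ClimateStrike"}}}
with open("climatestrike.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["id", "status_at", "screen_name", "text"])
    for hit in scan(es, index="canadian_mps_tweets", query=query):
        doc = hit["_source"]
        writer.writerow([hit["_id"], doc.get("status_at"),
                         doc.get("screen_name"), doc.get("text")])
```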
Summarizing: Tasked with an outstanding question, likely guided by local knowledge, we apply a variety of tools to extract meaning from the noise.
One of the most important functions Elasticsearch provides is the ability to precisely control time. We keep the following as search parameters (a query sketch follows the list):
- Twitter’s status_at timestamp for each tweet in a tweet index.
- The created_at timestamp for the account that tweeted.
- created_at for each account in a user index.
- status_at is each user’s last action.
- collected_at shows when we most recently saw each account.
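A minimal sketch of how those hard timestamps might be combined in a single query body (the field names are the ones listed above; the dates and the index conventions are purely illustrative):

```python
# A hypothetical query body: bracket the event window on status_at while
# excluding accounts created after an arbitrary cutoff on created_at.
query_body = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"status_at": {"gte": "2019-09-20", "lte": "2019-09-29"}}},
                {"range": {"created_at": {"lt": "2019-09-01"}}},
            ]
        }
    },
    "sort": [{"status_at": "asc"}],
}
```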
Those are hard timestamps, but there are many ‘softer’ forms of time. We can bracket events using phrases such as “not prior to date X” or “between event Y and event Z”. One example is inferring follower arrival date ranges from the fact that the Twitter API returns follower userids in reverse arrival order.
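A hedged sketch of that inference: if we hold two follower snapshots for the same account with known collection dates, any ID present only in the newer snapshot must have arrived between those dates, and the reverse arrival order preserves the relative order of the new arrivals. The function and snapshot names below are hypothetical.

```python
from datetime import datetime

def bracket_new_followers(old_ids, old_collected_at, new_ids, new_collected_at):
    """Infer arrival windows for new followers from two snapshots.

    Differencing the snapshots shows that each new follower arrived between
    the two collection dates; because the API returns IDs in reverse arrival
    order, the new arrivals also come back newest-first, preserving their
    relative order within that window.
    """
    seen_before = set(old_ids)
    return [(uid, (old_collected_at, new_collected_at))
            for uid in new_ids if uid not in seen_before]

# Hypothetical snapshots of one account's followers.
old = [111, 222, 333]                      # collected 1 September
new = [444, 555, 111, 222, 333]            # collected 25 September
print(bracket_new_followers(old, datetime(2019, 9, 1),
                            new, datetime(2019, 9, 25)))
```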
Our data visualization tools in descending order of use are:
- Kibana’s many Visualization type options.
- Gephi for handling large scale social network visualization.
- Maltego for more detailed SNA and infrastructure analysis.
- Graphistry is new to us and it mostly competes with Gephi.
Kibana directly handles time. Gephi will intake CSV or GML files that are produced using time-aware tools. Maltego generally doesn’t come out until we start tracking infrastructure or making detailed notes about a small set of entities. Graphistry can handle volumes of data that would smash Gephi flat, but at the cost of much less control over the graph’s appearance.
Summarizing: We approach the outstanding question with multiple tools that are applied to multiple streams of content.
Having completed this inventory, the very first question we ask is “Do we have anything that matters?” We stream multiple perspectives, ranging from the 336 MP accounts on a continuing basis down to a handful of hashtags related to a specific event. The hashtag-based stream almost always starts because someone noticed an event getting attention. We can often find the first few updates by applying the same terms to the continuing collection indices.
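A sketch of that first check against a continuing index, again assuming the elasticsearch Python client and illustrative index and field names:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Ask a continuing collection index for the earliest tweets matching an
# event hashtag, sorted oldest-first to surface the first few updates.
resp = es.search(index="canadian_mps_tweets", body={
    "query": {"match": {"text": "#LeaveAlliance"}},
    "sort": [{"status_at": "asc"}],
    "size": 10,
})
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("status_at"), doc.get("screen_name"), doc.get("text"))
```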
Once we have picked the best index to use, we begin to narrow down the window of time as well as the participants, using Kibana. This likely leads to some Saved Searches, which may be shared as a Kibana URL with others, or used as the basis for a Visualization, which can also be shared. Kibana has the ability to export CSV files for use in other tools.
There is a command-line tool that will accept sets of files containing numeric follower IDs, which can include a file of accounts that participated in a given stream. The output is a GML file containing not just names, but numeric data such as follower count, number of favorites, and so on.
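A rough approximation of what that merge step does, sketched with networkx rather than the actual tool (the file names, attribute names, and edge direction here are assumptions):

```python
import networkx as nx

def build_follower_graph(follower_files, metadata):
    """Merge follower ID files into one directed graph, one node per account.

    Each entry in follower_files maps a seed account to a file of the numeric
    IDs that follow it; metadata supplies whatever numeric attributes we hold
    (follower count, favourites, and so on) for any account we have profiled.
    """
    g = nx.DiGraph()
    for seed, path in follower_files.items():
        g.add_node(seed, seed=1, **metadata.get(seed, {}))  # flag contributors
        with open(path) as fh:
            for line in fh:
                uid = line.strip()
                if not uid:
                    continue
                g.add_node(uid, **metadata.get(uid, {}))
                g.add_edge(uid, seed)  # follower -> followed
    return g

# Hypothetical usage: two seed accounts plus a partial metadata lookup.
files = {"mp_account_a": "a_followers.txt", "mp_account_b": "b_followers.txt"}
meta = {"mp_account_a": {"followers_count": 12000, "favourites_count": 300}}
nx.write_gml(build_follower_graph(files, meta), "merged_followers.gml")
```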
Pulling the GML file into Gephi, we can arrange, filter, and label nodes on the basis of the numeric attributes. We can always highlight the accounts that contributed their followers, which is all but impossible using a CSV import. We run Louvain community detection to color-label communities in the graph. We run Force Atlas 2 to lay out the graph. We filter on the various attributes and then return for more layout work.
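The Louvain pass happens inside Gephi, but for completeness here is a hypothetical programmatic equivalent using the python-louvain package, which can tag communities in the GML before it ever reaches Gephi:

```python
import networkx as nx
import community as community_louvain  # the python-louvain package

# Tag each node with its Louvain community, so the same community labels are
# available for coloring in Gephi or any other tool that reads the GML.
g = nx.read_gml("merged_followers.gml")
partition = community_louvain.best_partition(nx.Graph(g))  # undirected copy
nx.set_node_attributes(g, partition, "community")
nx.write_gml(g, "merged_followers_communities.gml")
```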
The visualizations start as the means analysts use to separate signal from noise. They are complete when the analyst can use the graphics in a report on the question at hand. All through this process the analyst may be checking with the source of the question or the local expertise.
Summarizing: If the available information is deemed sufficient to answer a given question, a series of searches and visualizations is used to isolate the incident, culminating in some carefully honed views being used to illustrate a narrative.
Human observers equipped with only a small mobile device screen are at the mercy of algorithms. The only protection Twitter offers is against one-on-one harassment that crosses certain lines. While that is a welcome relief from what the platform was once like, it does nothing to address the disinformation problem.
And it may be impossible to untangle U.S. free speech, profiling, and marketing methods in order to get at the disinformation problem at the level on which it is dispensed.
We have the experience and facilities to untangle such things. There are very few who do. This really needs to be solved at a policy level, rather than playing whack-a-mole on a weekly basis.
This was a very meta post, but our investigation into the #LeaveAlliance hashtag trending yesterday was what set it in motion. Usually such work for SMIU is used internally, but this one was made open, so you can see the principles described in this post applied to a real-world problem.