Analyzing Twitter Streams

Our prior work on Twitter content has involved bulk collection of the following types of data:

  • Tweets, including raw text suitable for stylometry.
  • Activity timestamps for building temporal signatures.
  • Mentions including temporal data for conversation maps.
  • User ID data for profile searches.
  • Follower/following relationships, often using Maltego.

Early on this involved simply running multiple accounts in parallel, each working on its own set of tasks. What looked like quick results was mostly a matter of knowing what to collect and letting the collection run. Hardware upgrades around the start of 2019 permitted us to run sixteen accounts in parallel … then thirty-two … and finally sixty-four, which exceeded the bounds of our 100 Mbit internet service.

We had never done much with the Twitter streaming API until just two weeks ago, but our expanded ability to handle large volumes of raw data has made it a very interesting proposition. There are now ten accounts engaged in collection, each either tracking a mix of terms or following a list of hundreds of high-value accounts.
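
For reference, one of those collector accounts amounts to something like the sketch below, assuming tweepy 4.x against the v1.1 statuses/filter endpoint; the credentials, terms, IDs, and the handle_payload() hook are placeholders rather than our actual configuration:

    import tweepy

    def handle_payload(payload):
        """Stand-in for the downstream indexing pipeline."""
        print(payload["id_str"], payload["user"]["screen_name"])

    class CollectorStream(tweepy.Stream):
        """Hand each incoming status off for indexing."""

        def on_status(self, status):
            # status._json is the raw payload we fan out into documents later
            handle_payload(status._json)

    # Placeholder credentials; one set per collection account.
    stream = CollectorStream(
        "CONSUMER_KEY", "CONSUMER_SECRET",
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
    )

    # One account tracks a mix of terms ...
    stream.filter(track=["example term", "another term"])

    # ... while another follows a few hundred high-value account IDs instead:
    # stream.filter(follow=["111111111", "222222222"])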

Indexing Many Streams

What we get from streams at this time includes:

  • Tweet content.
  • RT’d tweet content.
  • Quoted tweet content.
  • Twitter user data for the source.
  • Twitter user data for accounts mentioned.
  • Twitter user data for accounts that are RT’d.
  • User-to-mentioned-account events, including timestamps.
  • User-to-RT’d-account events, including timestamps.
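
Roughly speaking, a single streamed payload fans out into those pieces along these lines. This is a sketch against the v1.1 JSON shape, with the defensive handling (extended_tweet, truncation, missing fields) left out:

    def fan_out(payload):
        """Split one streamed v1.1 status payload into the pieces listed above."""
        docs = {"tweets": [], "users": [], "interactions": []}

        docs["tweets"].append({"id": payload["id_str"],
                               "text": payload.get("text", ""),
                               "created_at": payload["created_at"]})
        docs["users"].append(payload["user"])

        # RT'd and quoted tweets carry their own text and source user.
        for key, kind in (("retweeted_status", "retweet"),
                          ("quoted_status", "quote")):
            if key in payload:
                inner = payload[key]
                docs["tweets"].append({"id": inner["id_str"],
                                       "text": inner.get("text", ""),
                                       "created_at": inner["created_at"]})
                docs["users"].append(inner["user"])
                docs["interactions"].append({"type": kind,
                                             "source": payload["user"]["id_str"],
                                             "target": inner["user"]["id_str"],
                                             "timestamp": payload["created_at"]})

        # Mentioned accounts become user documents plus mention events.
        for mention in payload.get("entities", {}).get("user_mentions", []):
            docs["users"].append({"id_str": mention["id_str"],
                                  "screen_name": mention["screen_name"]})
            docs["interactions"].append({"type": "mention",
                                         "source": payload["user"]["id_str"],
                                         "target": mention["id_str"],
                                         "timestamp": payload["created_at"]})
        return docs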

This data is currently accumulating in a mix of Elasticsearch indices. We recognize that we have at least three document types:

  • Tweets.
  • User data.
  • Interaction data.
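
Getting those three types into their own indices is not much more than a bulk helper call; a sketch assuming the elasticsearch Python client and hypothetical index names:

    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

    # Hypothetical index names; one index per document type.
    INDEX_FOR = {
        "tweets": "stream-tweets",
        "users": "stream-users",
        "interactions": "stream-interactions",
    }

    def index_docs(docs):
        """Bulk-index the output of the fan-out step into per-type indices."""
        actions = ({"_index": INDEX_FOR[doc_type], "_source": doc}
                   for doc_type, items in docs.items()
                   for doc in items)
        helpers.bulk(es, actions)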

Our current setup is definitely beta at this point. We probably need to pay more attention to the natural language processing of the tweets themselves, particularly as we expand into handling multiple European languages. User data could stand to have hashtags extracted from profiles, which we missed the first time around; otherwise this part seems pretty simple.
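
The hashtag extraction itself is cheap to bolt on; something like the sketch below would do, granted the naive pattern will not cover every edge case Twitter allows:

    import re

    # Naive pattern; will not cover every Unicode edge case Twitter allows.
    HASHTAG_RE = re.compile(r"#(\w+)")

    def profile_hashtags(user_doc):
        """Pull hashtags out of a user's profile description, lowercased."""
        description = user_doc.get("description") or ""
        return sorted({tag.lower() for tag in HASHTAG_RE.findall(description)})

    # profile_hashtags({"description": "Analyst. #OSINT #infosec"})
    # -> ['infosec', 'osint']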

The interaction data is where things become uncertain. It is good to have this content in Elasticsearch for the sake of filtering. It is unclear precisely how much we should permit to accumulate in these derivative documents; at this point they’re just the minimal data from each tweet that permits establishing the link between accounts involved. Do we also do this for hashtags?
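
If hashtags do get the same treatment, the derivative document stays just as thin as the account-to-account edges; field names here are illustrative rather than what is actually in the index:

    # A possible account-to-hashtag edge, mirroring the mention/RT edges.
    hashtag_edge = {
        "type": "hashtag",
        "source": "1234567890",        # tweeting account
        "target": "example",           # lowercased tag from entities.hashtags
        "tweet_id": "1111111111111111111",
        "timestamp": "Sat Jun 01 12:34:56 +0000 2019",
    }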

Once we have this, the next question is what to do with it. The search, sorting, and time slicing of Elasticsearch are nice, but this is really network data, and we want to visualize it.

Maltego is out of the running before we even start; its 10k-node maximum has been a barrier for a long time. Gephi is unusable on a 4K Linux display because the fonts are too small to read, and on a smaller display it will do just enough with a half-million-node network to leave one hanging with an analysis half finished.

The right answer(s) seem to be to get moving on Graphistry and Neo4j. An EVGA GTX 1060 turned up here a few weeks ago, displacing a GT 1030 that went to an associate. Given the uptime requirements for Elasticsearch, not much has happened toward Graphistry use beyond the physical install. It looks like Docker is a requirement, and that’s a synonym for “invasive nuisance”.
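
Once the Docker hurdle is cleared, pushing the interaction edges up is supposed to be about this much work. A sketch assuming the pygraphistry client pointed at a local instance, none of which is actually running here yet:

    import graphistry
    import pandas as pd

    # Placeholder registration against a local Graphistry instance.
    graphistry.register(api=3, server="localhost",
                        username="USER", password="PASS")

    # Interaction documents pulled back out of Elasticsearch, flattened to an edge list.
    edges = pd.DataFrame([
        {"source": "1234567890", "target": "9876543210", "type": "retweet"},
        {"source": "1234567890", "target": "5555555555", "type": "mention"},
    ])

    graphistry.bind(source="source", destination="target").edges(edges).plot()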

Neo4j has some visualization abilities, but its real attraction is native handling of graph storage and queries. Our associates who engage in analysis ask questions that are easily answered with Elasticsearch … and other questions that are utterly impossible to resolve with any tool we currently wield.
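
As an example of the second category: “which accounts retweet two or more of the same high-value sources” is trivial in a graph store and painful anywhere else. A sketch assuming the neo4j Python driver and a hypothetical RETWEETED relationship loaded from the interaction documents:

    from neo4j import GraphDatabase

    # Placeholder connection details.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "PASS"))

    # Accounts that retweet at least two of the same high-value sources; assumes
    # (:Account)-[:RETWEETED]->(:Account) edges built from the interaction
    # documents, with high-value sources flagged by a property.
    QUERY = """
    MATCH (a:Account)-[:RETWEETED]->(hv:Account {high_value: true})
    WITH a, collect(DISTINCT hv.screen_name) AS sources
    WHERE size(sources) >= 2
    RETURN a.screen_name AS account, sources
    ORDER BY size(sources) DESC
    LIMIT 25
    """

    with driver.session() as session:
        for record in session.run(QUERY):
            print(record["account"], record["sources"])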

Conclusion

Expanding capacity has permitted us to answer some questions … but on balance it has uncovered more mysteries than it has resolved. This next month is going to involve getting some standards in place for assessing incoming streams, and pressing on with both means of handling graph data, to see which one we can bring to bear first.