This tweet came into the collection thanks to a study of #Qanon we did earlier this year. The actual inception of our current cluster hardware appears to have been on January 29th of 2019. The very earliest it could have been created was December 19th of 2018 – the release date for Elasticsearch 6.5.4.
The cluster is resilient to the loss of any one machine, which received an unintended test last night when one of the servers was inadvertently shut down. Recovery of the services and virtual machines takes a couple of minutes, but there was not even an interruption in processing.
Today we began the process of upgrading to the June 20th, 2019 release of Elasticsearch 6.8.1. There are a number of reasons for doing this:
Index Life Cycle Management (6.6)
Cross Cluster Replication (6.6)
Elasticsearch 7 Upgrade Assistant (6.6)
Rolling Upgrade To Elasticsearch 7 (6.7)
Better Index Type Labeling (6.7)
Security Features Bundled for Community Edition (6.8)
Conversion From Ubuntu to Debian Linux
We are not jumping directly to Elasticsearch 7.x due to some fairly esoteric issues involving field formats and concerns regarding some of the Python libraries that we use. Ubuntu has been fine for both desktop and server use, but we recently began using the very fussy Open Semantic Search, and it behaves well with Debian. Best of all, the OVA of a working virtual machine with the Netwar System code installed and running is just 1.9 gig.
Alongside the production ready Elasticsearch based system we are including Neo4j with some example data and working code. The example data is a small network of Canadian Parliament members, and the code produces flat files suitable for import as well as native GML output for Gephi. We ought to be storing relationships to Neo4j as we see them in streams, but this is still new enough that we are not yet confident enough to ship it.
Some questions that have cropped up and our best answers as of today:
Is Open Semantic Search going to be part of Netwar System?
We are certainly going to be doing a lot of work with OSS and this seems like a likely outcome, given that it has both Elasticsearch and Neo4j connectors. The driver here is the desire to maintain visibility into Mastodon instances as communities shift off Twitter – we can use OSS to capture RSS feeds.
Will Netwar System still support Search Guard?
Yes, because their academic licensing permits things that the community edition of Elasticsearch does not. We are not going to integrate Search Guard into the OVA, however. There are a couple of reasons for that:
Doesn’t make sense on a single virtual machine.
Duplicate configs mean a bad actor would have certificate based access to the system.
Eager, unprepared system operators could expose much more than just their collection system if they try to use it online.
Netdata monitoring provides new users insight into Elasticsearch behavior, and we have not managed to make that work with SSL secured systems.
We are seeking a sensible free/paid break point for this system. It’s not clear where a community system would end and an enterprise system would begin.
Is there a proper FOSS license?
Not yet, but we are going to follow the customs of this area. A university professor should expect to be able to run a secure system for a team oriented class project without incurring any expense. Commercial users who want phone support will incur an annual cost. There will be value add components that will only be available to paying customers. Right now 100% of revenue is based on software as a service and we expect that to continue to be the norm.
So the BSD license seems likely.
When will the OVA be available?
It’s online this morning for internal users. If it doesn’t explode during testing today, a version with our credentials removed should be available Tuesday or Wednesday. Most of the work required to support http/https transparently was finished during the first quarter. Once it’s up we’ll post a link to it here and there will be announcements on Twitter and LinkedIn.
Online conflicts are evolving rapidly and escalating. Three years ago we judged that it was best to not release a conflict oriented tool, even one that is used purely for observation. Given the events since then, this notion of not proliferating seems … quaint.
So we released the Netwar System code, the companion ELK utilities, and this week we are going to revisit the Twitter Utils, a set of small scripts that are part of our first generation software, and which are still used for some day to day tasks.
When you live with a programming language and a couple of fairly complex distributed systems, there are troubles that arise which can be dispatched almost without thought. A new person attempting to use such a system might founder on one of these, so this post is going to memorialize what is required for a from scratch install on a fresh Ubuntu 18.04 installation.
We converted to Python 3 a while ago. The default install includes Python 3.6.7, but you also need pip and git.
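On a stock Ubuntu 18.04 box that is just a couple of packages:

apt update
apt install python3-pip git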
You can manually install those and they’ll all work, except for squish2, the name for our internal package that contains the code to “squish” bulky, low value fields out of tweets and user profiles. This requires special handling like so.
cd NetwarSystem/squish2
pip install -e .
If you have any errors related to urllib3, SSL, or XML, those might be subtle dependency problems. Post them as issues on Github.
There are a bunch of Elasticsearch related scripts in the ELKSG repository. You should clone them and then copy them into your path.
The ELK software can handle a simple install, or one with Search Guard. This is the simple setup, so add this final line to your ~/.profile so the scripts know where to find Elasticsearch.
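The variable name below is illustrative – use whatever your copy of the scripts expects – but the value is simply the host and port where Elasticsearch answers, here the localhost default:

export ELKHOST="http://localhost:9200"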
You need the following four pieces of software to get the system running in standalone mode.
Redis and Netdata are simple.
apt update
apt install redis
There is an install procedure for Netdata that is really slick. Copy one command, paste it into a shell, and it does the install and makes the service active on port 19999.
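At the time of writing the official one liner looked like this – check the Netdata site for the current version before pasting:

bash <(curl -Ss https://my-netdata.io/kickstart.sh)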
Elasticsearch and Neo4j require a bit more work to get the correct version. We used to install the Oracle JDK 8, but Oracle changed its licensing in late April of 2019. The OpenJDK install seems to make its dependents happy, but we are still trialing it on a new system; our production machines still have the last working official Oracle setup.
wget -O - https://debian.neo4j.org/neotechnology.gpg.key | apt-key add -
echo 'deb https://debian.neo4j.org/repo stable/' | tee -a /etc/apt/sources.list.d/neo4j.list
apt update
apt install neo4j=1:3.5.4
Note that the version mentioned there is just what happens to be in the Neo4j install instructions on the day this article was written. This is not sensitive the way Elasticsearch is.
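For Elasticsearch the equivalent dance uses the standard Elastic apt repository; the version pin below is illustrative, use whichever 6.x release you are targeting:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-6.x.list
apt update
apt install elasticsearch=6.8.1 kibana=6.8.1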
At this point you should have all four applications running. The one potential problem is Kibana, which may fail to start because it depends on Elasticsearch, which takes a couple minutes to come alive the first time it is run. Try these commands to verify:
systemctl status redis
systemctl status elasticsearch
systemctl status kibana
systemctl status neo4j
In terms of open TCP ports, try the following, which checks the access ports for Kibana, Redis, Neo4j, and Elasticsearch.
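Assuming the default ports – 9200 for Elasticsearch, 5601 for Kibana, 6379 for Redis, and 7474 plus 7687 for Neo4j – something like this does the job:

ss -ntlp | grep -E ':(9200|5601|6379|7474|7687)'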
We also need to adjust the file handles and process limits upward for Elasticsearch’s Lucene component and Neo4j’s worker threads. Add these lines to /etc/security/limits.conf; note that there are tab stops in the actual file, which look terrible on the blog. It’s simplest to just reboot to make these settings active.
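As a reference point, the elasticsearch and root nofile values below are the ones we use on other machines; the neo4j and nproc entries are illustrative:

elasticsearch - nofile 300000
elasticsearch - nproc 8192
neo4j - nofile 60000
root - nofile 300000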
If you’re running this software on your desktop, pointing a web browser at port 5601 will show Kibana and 7474 will show Neo4j. If you’re using a standalone or virtual machine, you’ll need to open some access. Here are three one liners with sed that will do that.
sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
sed -i 's/#server.host: \"localhost\"/server.host: 0.0.0.0/' /etc/kibana/kibana.yml
sed -i 's/#dbms.connectors.default_listen/dbms.connectors.default_listen/' /etc/neo4j/neo4j.conf
systemctl restart elasticsearch
systemctl restart kibana
systemctl restart neo4j
Elasticsearch doesn’t require a password in this configuration, but Neo4j does, and it’ll make you change it from the default of ‘neo4j’ the first time you log in to the system.
OK, point your browser at port 19999, and you should see this:
Notice the elasticsearch local and Redis local tabs at the lower right. You can get really detailed information on what Elasticsearch is doing, which is helpful when you are just starting to explore its capabilities.
Configuring Your First Twitter Account
You must have a set of Twitter application keys to take the next step. You’ll need to add the Consumer Key and Consumer Secret to the tw-auth command. Run it, paste the URL it offers into a browser, log in with your Twitter account, enter the seven digit PIN from the browser into the script, and it will create a ~/.twitter file that looks something like this.
You’ll need to enter the Neo4j password you set earlier. The elksg variable has to point to the correct host and port. The elksguser/elksgpass entries are just placeholders. If you got this right, this command will cough up your login shell name and Twitter screen name.
Next, you can check that your Elasticsearch commands are working:
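If you want a sanity check that does not depend on our scripts at all, the stock health endpoint should come back green or yellow on a single node:

curl -s 'http://localhost:9200/_cluster/health?pretty'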
Now is the time to get Elasticsearch ready to accept Twitter data. Mostly this involves making sure it recognizes timestamps. Issue these commands:
The first three ensure that timestamps work for the master user index, any tu* index related to a specific collection, and any tw* index containing tweets. The mylog command ensures the perflog index is searchable. The last command bumps the field limit on indices. Experienced Elasticsearch users will be scratching their heads on this one – we still have much to learn here, feel free to educate us on how to permanently handle that problem.
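Conceptually these amount to index templates that map the timestamp fields as dates and raise the field limit. A minimal sketch – the template name, the _doc type, and the 2000 limit are illustrative, while the date format is the one Twitter uses for created_at:

curl -XPUT 'http://localhost:9200/_template/tw-dates' -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["tw*"],
  "settings": { "index.mapping.total_fields.limit": 2000 },
  "mappings": {
    "_doc": {
      "properties": {
        "created_at": { "type": "date", "format": "EEE MMM dd HH:mm:ss Z yyyy" }
      }
    }
  }
}'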
If you want to see what these did, this command will show you a lot of JSON.
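One way to do that straight against the API is to list every installed template:

curl -s 'http://localhost:9200/_template?pretty'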
And now we’re dangerously close to actually getting some content in Elasticsearch. Try the following commands:
tw-friendquick NetwarSystem > test.txt
These should produce a file with around 180 numeric Twitter IDs for accounts that are followed by @NetwarSystem, load them into Redis for processing, and give you a count of how many are loaded. This is the big moment, so try this command next:
That command should spew a bunch of JSON as it runs. The preceding time command will tell you how long it took, a useful thing when performance tuning long running processes.
Now try this one:
You should get back two very long lines of text – one for the usertest index, showing about 180 documents, and the other for perflog, which will just have a few.
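A generic way to double check the document counts, independent of our scripts, is the cat API:

curl -s 'http://localhost:9200/_cat/indices/usertest,perflog?v'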
There, you’ve done it! Now let’s examine the results.
Your next steps require the Kibana graphical interface. Point your browser at port 5601 on your system. You’ll be presented with the Kibana welcome page. You can follow their tutorial if you’d like. Once you’ve done that, or skipped it, you will do the following:
Go to the Management tab
Select Index Patterns
Create an Index Pattern for the usertest index
There should be a couple of choices for time fields – one for when the user account was created, the other for the date of their last tweet. Once you’ve done this, go to the Discover tab, which should default to your newly created Index Pattern. Play with the time picker at the upper right, find the Relative option, and set it to 13 years. You should see a creation date histogram something like this:
Writing this post involved grinding off every burr we found in the Github repositories, which was an all day job, but we’ve come to the point where you have cut & pasted all you can. The next steps will involve watching videos about how to use Kibana, laying hands on a copy of Elasticsearch: The Definitive Guide, and installing Graphileon so you can explore the Neo4j data.
Our prior work on Twitter content has involved bulk collection of the following types of data:
Tweets, including raw text suitable for stylometry.
Activity time for the sake of temporal signatures.
Mentions including temporal data for conversation maps.
User ID data for profile searches.
Follower/following relationships, often using Maltego.
Early on this involved simply running multiple accounts in parallel, each working on their own set of tasks. Seemingly quick results were a matter of knowing what to collect and letting things happen. Hardware upgrades around the start of 2019 permitted us to run sixteen accounts in parallel … then thirty two … and finally sixty four, which exceeded the bounds of 100mbit internet service.
We had never done much with the Twitter streaming API until just two weeks ago, but our expanded ability to handle large volumes of raw data has made this a very interesting proposition. There are now ten accounts engaged in collecting either a mix of terms or following lists of hundreds of high value accounts.
What we get from streams at this time includes:
RT’d tweet content.
Quoted tweet content.
Twitter user data for the source.
Twitter user data for accounts mentioned.
Twitter user data for accounts that are RT’d.
User to mentioned account event including timestamp.
User to RT’d account event including timestamp.
This data is currently accumulating in a mix of Elasticsearch indices. We recognize that we have at least three document types: the tweets themselves, user profiles, and the interaction events between accounts.
Our current setup is definitely beta at this point. We probably need more attention on the natural language processing aspect of the tweets themselves, particularly as we expand into handling multiple European languages. User data could stand having hashtags extracted from profiles, which we missed the first time around; otherwise this seems pretty simple.
The interaction data is where things become uncertain. It is good to have this content in Elasticsearch for the sake of filtering. It is unclear precisely how much we should permit to accumulate in these derivative documents; at this point they’re just the minimal data from each tweet that permits establishing the link between accounts involved. Do we also do this for hashtags?
Once we have this, the next question is what do we do with it? The search, sorting, and time slicing of Elasticsearch is nice, but this is really network data, and we want to visualize it.
Maltego is out of the running before we even start; its 10k node maximum has been a barrier for a long time. Gephi is unusable on a 4k Linux display due to font sizing for readability, and on a smaller display it will do just enough with a half million node network to leave one hanging with an analysis half finished.
The right answer(s) seem to be to get moving on Graphistry and Neo4j. An EVGA GTX 1060 turned up here a few weeks ago, displacing a GT 1030 that went to an associate. Given the uptime requirements for Elasticsearch, not much has happened towards Graphistry use other than the physical install. It looks like Docker is a requirement, and that’s a synonym for “invasive nuisance”.
Neo4j has some visualization abilities but its real attraction is the native handling of storage and queries for graphs. Our associates who engage in analysis ask questions that are easily answered with Elasticsearch … and other questions that are utterly impossible to resolve with any tool we currently wield.
Expanding capacity has permitted us to answer some questions … but on balance it’s uncovered more mysteries than it has resolved. This next month is going to involve getting some standards in place for assessing incoming streams, and pressing on both means of handling graph data, to see which one we can bring to bear first.
Last month we announced the Netwar System Community Edition, the OVA for which is still not posted publicly. In our defense, what should have been a couple days with our core system has turned into a multifaceted month long bug hunt. A good portion could be credited to “unfamiliar with Search Guard”, but there is a hard kernel of “WTF, Twitter, WTF?!?” that we want to describe for other analysts.
Core System Configuration
First, some words about what we’ve done with the core system we use day to day. After much experimentation we settled on the following configuration for our Elasticsearch dependent environment.
HP Z620 workstations with dual eight core Xeons.
128 gig of ram.
Dual Seagate IronWolf two terabyte drives in a mirror.
Single Samsung SSD for system and ZFS cache.
Trio of VirtualBox VMs with 500 gig of storage each.
32 gig for host, ZFS ARC (cache) limited to 24 gig.
24 gig per VM, JVM limited to 12 to 16 gig.
There are many balancing acts in this, too subtle and too niche to dig into here. It should be noted that FreeBSD Mastery:ZFS is a fine little book, even if you’re using Linux. The IronWolf drives are helium filled gear meant for NAS duty. In retrospect, paying the 50% premium for IronWolf Pro gear would have been a good move and we’ll go that way as we outgrow these.
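For reference, on ZFS on Linux the ARC cap mentioned above is a module parameter set in bytes, so a line like this in /etc/modprobe.d/zfs.conf pins it at 24 gig:

options zfs zfs_arc_max=25769803776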
We’ve started with a pair of machines, defaulting to three shards per index and a single replica for each. The Elasticsearch datacenter zones feature proved useful; pulling the network cable on one machine triggers some internal recovery processes, but there is no downtime from the user’s perspective. We’re due for a third system with similar specifications; it will receive the same configuration, including a zone of its own, and we’ll move from one replica per index to two. This will be a painless shift to N+1 redundancy.
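For anyone reproducing this, the zone behavior comes from a node attribute plus an allocation awareness setting in elasticsearch.yml; the attribute value differs per machine and the name “zone” is just our label:

node.attr.zone: z1
cluster.routing.allocation.awareness.attributes: zone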
API Mysteries At Scale
Our first large scale project has been profiling the followers of 577 MPs in the U.K. Parliament. There are 20.6M follow relationships with 6.6M unique accounts. Extracting their profiles would require forty hours with our current configuration … but there are issues.
Users haven’t seen a Twitter #FailWhale in years, but as continuous users of the API we expect to see periods of misbehavior on about a monthly basis. February featured some grim adjustments, signs that Twitter is further clamping down on bots, which nipped our read only analytical activities. There are some features that seem to be permanently throttled now based on IP address.
When we arrived at what we thought was the end of the road, we had 6.26M profiles in Elasticsearch rather than the 6.6M we knew to exist, a discrepancy of about 350,000. We tested all 6.6M numeric IDs against the index and found just 325,000 negative responses. We treated that set as a new batch and the system captured 255,000, leaving only 70,000 missing. Repeating the process again with the 70,000 we arrived at a place where the problem was amenable to running a small batch in serial fashion.
Watching a batch of a thousand of these stragglers, roughly a quarter got an actual response, a quarter came back as suspended, and the remainder came back as page not found. The last response is expected when an account has renamed or self suspended, but we were using numeric ID rather than screen name.
And the API response to this set was NOT deterministic. Running the process again with the same data, the percentages were similar, but different accounts were affected.
A manual inspection of the accounts returned permits the formulation of a theory as to why this happens. We know the distribution of the creation dates of these accounts:
The bulk of the problematic accounts are dated between May and August of 2018. Recall that Twitter completed its acquisition of Smyte and shut down 70 million bots during that time frame. May in the histogram is the first month where account creation dates are level. A smaller set clustered around the same day in mid-December of 2012, another fertile time period for bot creation.
The affected accounts have many of the characteristics we associate with bots:
Steeply inverted following-to-followers ratio.
Complete lack of relationships to others.
Relatively few tweets.
Default username with eight trailing digits.
An account that was created and quickly abandoned will share these attributes. So our theory regarding the seeming problem with the API is as follows:
These accounts that cannot be accessed in a deterministic fashion using the API are in some sort of Smyte induced purgatory. They are not accessible, protected, empty of content, suspended, or renamed – the five conditions our code already recognizes. There is a new condition, likely “needs to validate phone number”, and accounts that have not done this are only likely of interest to their botnet operators, or researchers delving very deeply into the system’s behavior.
But What Does This MEAN?
Twitter has taken aggressive steps to limit the creation of bots. Accounts following MPs seem to have fairly evenly distributed creation dates, less the massive hump from early 2016 to mid 2018. We know botnet operators are liquidating collections of accounts that have been wiped of prior activity for as little as $0.25 each. There are reportedly offerings of batches of accounts considered to be ‘premium’, but what we know of this practice is anecdotal.
Our own experience is limited to maintaining a couple platoons of collection oriented accounts, and Twitter has erected new barriers, requiring longer lasting phone numbers, and sometimes voice calls rather than SMS.
This coming month we are going to delve into the social bot market, purchasing a small batch, which we will host on a remote VPS and attempt to use for collection work.
The bigger implication is this … Twitter’s implementation of Smyte is good, but it’s created a “hole in the ocean problem”, a reference to modern submarines with acoustic signatures that are less than the noise floor in their environment. If the affected accounts are all bots, and they’re just standing deadwood of no use to anyone, that’s good. But if they can be rehabilitated or repurposed, they are still an issue.
Seems like we have more digging to do here …
Mystery Partially Resolved …
So there was an issue, but it was on our side rather than with the API.
When a Twitter account gets suspended, its API tokens will still permit you to check its credentials. So a script like this reports all is well:
But if three of the sixty four accounts used in doing numeric ID to profile lookups have been suspended … 3/64 = 4.69% failure rate. That agrees pretty well with some of the trouble we observed. We have not had cause to process another large batch of numeric IDs yet, but when we do, we’ll check this theory against the results.
This site has been quiet the last five weeks, but many good things happened in the background. One of those good things has been progress on a small Netwar System demonstrator virtual machine, tentatively named the Community Edition.
What can you do with Netwar System CE? It supports using one or two Twitter accounts to record content on an ongoing basis, making the captured information available via the Kibana graphical front end to Elasticsearch. Once the accounts are authorized the system checks them every two minutes for any list that begins with “nsce-“, and accounts on those lists are recorded.
Each account used for recording produces a tw<name> index containing tweets and a tu<name> index containing the profiles of the accounts.
The tw* and tu* are index patterns that cover the respective content from all three accounts. The root account is the system manager and we assume users might place a set of API tokens on that account for command line testing.
This is a view from Kibana’s Discover tab. The timeframe can be controlled via the time picker at the upper right, the Search box permits filtering, the activity per date histogram appears at the top, and in this case we can see a handful of Brexit related tweets.
There are a variety of visualization tools within Kibana. Here we see a cloud of hashtags used by the collected accounts. The time picker can be adjusted to a certain time frame, search terms may be added so that the cloud reflects only hashtags used in conjunction with the search term, and there are many further refinements that can be made.
What does it take to run Netwar System CE? The following is a minimal configuration of a desktop or laptop that could host it:
8 gig of ram
solid state disk
four core processor
There are entry level Dell laptops on Amazon with these specifications in the $500 range.
The VM itself is very light weight – two cores, four gig of ram, and the OVA file for the VM is just over four gig to download.
As shipped, the system has the following limits:
Tracking via two accounts
Disk space for about a million tweets
Collects thirty Twitter accounts per hour per account
If you are comfortable with the Linux command line it is fairly straightforward to add additional accounts. If you have some minimal Linux administration capabilities you could add a virtual disk, relocate the Elasticsearch data, and have room for more tweets.
If you are seeking to do a larger project, you should not just multiply these numbers to determine overall capacity. An eight gig VM running our adaptive code can cover about three hundred accounts per hour and a sixty four gig server can exceed four thousand.
After Implementing Search Guard ten days ago I was finally pushed into using Elasticsearch 6. Having noticed that 6.5.0 was out I decided to wait until Search Guard, which seems to lag about a week behind, managed to get their update done.
The 6.5.0 release proved terribly buggy, but now here we are with 6.5.1, running tests in A Small Development Environment, and the results are impressive. The combination of this code and an upgrade from Ubuntu 16.04 to 18.04 has made the little test machine, which we refer to as ‘hotpot‘, as speedy as our three node VPS based cluster.
This is a solid long term average of fully collecting over eleven accounts per minute, but the curious thing is that it’s not obvious what resource is limiting throughput. Ram utilization eventually ratcheted up to 80% but the CPU load average has been not more than 20% the whole time.
There is still a long learning curve ahead, but what I think I see here is that an elderly four core i7, if it has a properly tuned zpool disk subsystem, will be able to support a group of eight users in constant collection mode.
And that makes this page of Kimsufi Servers intriguing. The KS-9 looks to be the sweet spot, due to the presence of SSDs instead of spindles. If our monthly hardware is $21 that puts us in a place where maybe a $99/month small team setup makes sense to offer.
There is much to be done with Search Guard before this can happen, but hopefully we’ll be ready at the start of 2019.
One of the perennial problems in this field is the antiquated notion of jurisdiction, as well as increasing pressure on Westphalian Sovereignty. JP and I touched on this during our November 5th appearance on The View Up Here. The topic is complex and visual, so this post offers some images to back up the audio there.
Regional Internet Registries
The top level administrative domains for the network layer of the internet are the five Regional Internet Registries. These entities were originally responsible for blocks of 32 bit IPv4 addresses and 16 bit Autonomous System numbers. Later we added 128 bit IPv6 addresses and 32 bit Autonomous System numbers as the original numbers were being exhausted.
When you plug your home firewall into your cable modem it receives an IP address from your service provider and a default route. That outside IP is globally unique, like a phone number, and the default route is where any non-local traffic is sent.
Did you ever stop to wonder where your cable modem provider gets their internet service? The answer is that there is no ‘default route’ for the world, they connect at various exchange points, and they share traffic there. The ‘default route’ for the internet is a dynamic set of not quite 700,000 blocks of IP addresses, known as prefixes, which originate from 59,000 Autonomous Systems.
The Autonomous System can be thought of as being similar to a telephone system country code. It indicates from a high level where a specific IP address prefix is located. The prefix can be thought of as an area code or city code; it’s a more specific location within the given Autonomous System.
There isn’t a neat global map for this stuff, but if you’re trying to make a picture, imagine a large bunch of grapes. The ones on the outside of the bunch are the hosting companies and smaller ISPs, who only touch a couple neighbors. The ones in the middle of the bunch touch many neighbors and are similar in position to the big global data carriers.
Domain Name Service
Once a new ISP has circuits from two or more upstream providers they can apply for an Autonomous System number and ask for IP prefixes. Those prefixes used to come straight from the RIRs, but any more you have to be a large provider to do that. Most are issued to smaller service providers by the large ones, but the net effect is the same.
Having addresses is just a start, the next step is finding interesting things to do. This requires the internet’s phone book – the Domain Name System. This is how we map names, like netwarsystem.com, to an IP address, like 18.104.22.168. There is also a reverse DNS domain that is meant to associate IP addresses with names. If you try to check that IP I just mentioned it’ll fail, which is a bit funny, as that’s not us, that’s kremlin[.]ru.
Domain Name Registrars & Root DNS Servers
How do you get a DNS name to use in the first place? Generally speaking, you have to pay a Registrar a fee for your domain name, there is some configuration done regarding your Start Of Authority, which is a fancy way of saying which name servers are responsible for your domain, then this is pushed to the DNS Root Servers.
There are nominally thirteen root servers. That doesn’t mean thirteen computers; it means twelve different organizations manage them (Verisign handles two), and their addresses are ‘anycast’, which means they originate from multiple locations, while the actual systems themselves are hidden from direct access. This is sort of a CDN for DNS data, and it exists due to the endless attacks directed at these systems.
Verisign’s two systems are in datacenters on every continent and have over a hundred staff involved in their ongoing operation.
Layers Of Protection
And then things start to get fuzzy, because people who are in conflict will protect both their servers and their access.
Our web server is behind the Cloudflare Content Distribution Network. There are other CDNs out there and they exist to accelerate content as well as protect origin servers from attack. We like this service because it keeps our actual systems secret. This would be one component of that Adversary Resistant Hosting that we don’t otherwise discuss here.
When accessing the internet it is wise to conceal one’s point of origin if there may be someone looking back. This is Adversary Resistant Networking, which is done with Virtual Private Networks, the Tor anonymizing network, misattribution services like Ntrepid, and other methods that require some degree of skill to operate.
Peeling The Onion
Once you understand how all the pieces fit together there are still complexity and temporal issues.
Networked machines can generate enormous amounts of data. We previously used Splunk and recently shifted to Elasticsearch, both of which are capable of handling tens of millions of datapoints per day, even on the limited hardware we have available to us. Both systems permit time slicing of data as well as many other ways to abstract and summarize.
Data visualization can permit one to see relationships that are impenetrable to a manual examination. We use Paterva‘s Maltego for some of this sort of work and we reach for Gephi when there are larger volumes to handle.
Some of the most potent tools in our arsenal are RiskIQ and Farsight. These services collect passive DNS resolution data, showing bindings between names and IP addresses when they were active. RiskIQ collects time series domain name registration data. We can examine SSL certificates, trackers from various services, and many other aspects of hosting in order to accurately attribute activity.
The world benefits greatly from citizen journalists who dig into all sorts of things. This is less than helpful when it comes to complex infrastructure problems. Some specific issues that have arisen:
People who are not well versed in the technologies used can manage to sound credible to the layman. There have been numerous instances where conspiracy theorists have made comical attribution errors, in particular geolocation data for IPs being used to assert correlations where none exists.
There is a temporal component that arises when facing any opponent with even a bit of tradecraft, and freely available tools don’t typically address it, so would-be investigators are left piecing things together, often without all of the necessary information.
Free access to quality tools like Maltego and RiskIQ is intentionally limited. RiskIQ in particular causes problems in the hands of the uninitiated – a domain hosted on a Cloudflare IP will have thousands of fellows, but the free system will only show a handful. There have been many instances of people making inferences based on that limited data that have no connection to objective reality.
We do not have a y’all come policy in this area, we specifically seek out those who have the requisite skills to do proper analysis, who know when they are out on a limb. When we do find such an individual who has a legitimate question, we can bring a great deal of analytical power to bear.
That specific scenario happened today, which triggered the authoring of this article. We may never be able to make the details public, but an important thing happened earlier, and the world is hopefully a little safer for it.
The early version of the Netwar System ran with a handful of Twitter accounts and a flat file system. Today we use a 64 gig Xeon with 48 Twitter accounts for internal studies and a trio of 16 gig VPSes for botnetsu.press, our semi-public service. The requirements for an R&D system exceed what a virtual machine can comfortably provide, unless you’ve got a Xeon grade desktop.
We happen to have a Dell m4600 laptop and eight unallocated Twitter accounts, so this has been built out as an R&D environment. The system has a four core i7, 16 gig of ram, and in addition to the system volume there is a 60 gig msata SSD and a 500 gig spindle in the disk carrier that fits in the CD/DVD bay. This is essentially a miniature of our larger Xeon system.
Disk performance has always been our problem with Elasticsearch, so the msata drive was split into cache and log space for a 465G ZFS partition.
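The pool and partition names below are illustrative, but the mechanics are just two zpool add commands, one for the log and one for the cache:

zpool add tank log /dev/sdb1
zpool add tank cache /dev/sdb2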
Once you’ve got them all installed you’ll see the following ports in use.
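Assuming nothing has been remapped from the defaults, that means Redis on 6379, Elasticsearch on 9200, Kibana on 5601, Neo4j on 7474 and 7687, and Netdata on 19999.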
A few caveats: first, be sure these are the final lines in /etc/security/limits.conf, or you will quickly learn to hate Elasticsearch.
elasticsearch - nofile 300000
root - nofile 300000
Next, examine the configurations for Elasticsearch and Kibana in /etc. You’ll want to ensure there is more than the default 2 gig for the JVM and modify the Kibana config so you can reach port 5601 from elsewhere.
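The heap lives in /etc/elasticsearch/jvm.options; the values below are illustrative for a 16 gig machine that is also running Neo4j and the rest of the stack:

-Xms6g
-Xmx6g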
We have come to the point where we must release configuration advice and some Python code in order for others to learn to use the system. We’re going to trust that the requisite system integration capabilities, analytical tradecraft, and team management skills are going to limit the number of players who can actually do this. There isn’t a specific Github repository for this just yet, but there will be in the coming days.
The initial wave of NPCs were taken out by Twitter, about 1,500 all together according to reporting. A small number lingered, somehow slipping past the filter, and now they are regrouping. A tweet regarding the initial outbreak collected several new likes, among them this group of five:
And their 740 closest friends are all pretty homogenous:
A fast serial collection of the 740 accounts they follow was undertaken. Their mentions reveal some accounts that are early adopters, survivors of the first purge, or otherwise influential. 735 of them came through collection; the missing ones were empty, locked, or suspended.
These accounts made 469,889 mentions of others. First we’ll look at 285,102 mentions of normal accounts, then we’ll see 184,787 mentions of Celebrity, Media, and Political accounts. Given that there are 67,000 accounts involved in this mention map, we’ll employ some methods we don’t normally use. This layout was done with OpenOrd rather than Force Atlas 2 and the name size denotes volume of mentions produced.
The large names here are based on Eigenvector centrality – they are likely popular members of the group, or in the case of Yotsublast, a popular content creator aligned with NPC messaging.
Usually we filter CMP – Celebrities, Media, and Politicians. These accounts are actively seeking attention so it is interesting to see who they reach out to in order to achieve that in these 184,787 mentions to about 18,500 others.
Attempting different splines with Eigenvector centrality leads to, after several tries, this mess.
Beyond the core at the bottom, Kathy Griffin, Alexandra Ocasio-Cortez, and Hillary Clinton are singled out for attention.
Mentions are directed but the best way to handle them at this scale seems to be treating them as undirected and using the KBrace filter. This is a manageable set of accounts to examine and the groupings make intuitive sense.
The 742 accounts were placed into our “slow cooker” but only 397 were visible. It isn’t clear why 350 were missed, but Twitter’s quality filter may have something to do with that.
Unlike the group of accounts in yesterday’s A Deadpool Of Bots, this wake/sleep cycle over the last ten days looks like humans making their own accounts to join in the fun. Given a good sized sample of tweets, an average adult will only consistently be inactive from 0200 – 0500, so those empty three hour windows, except for the first day, are a pretty convincing sign.
Their hashtag usage is entirely what one would expect.
Given the tight timeframe it was interesting to look at an area graph of their daily hashtag use for the last ten days.
As a society we have barely begun to adapt to automated propaganda, and now we’re facing a human wave playing at being automation. This is an interesting, helpful thing, as it provides a perfect contrast to what we explored in A Deadpool Of Bots.
Yesterday in chat someone pointed out a small set of accounts that followed this Dirty Dozen of known harassment artists.
The accounts all had the format <first name><last name><two digits>. We extracted two dozen from the followers of these twelve accounts, then ran their followers and found a total of 112 accessible accounts that had this same format. Suspecting a botnet, we created a mention map to see how they have been spending their time.
This image is immediately telling for those used to examining mention maps. There are too many communities (denoted by different colors) present for such a small group and the isolated or nearly isolated islands just don’t look like a human interaction pattern.
Adjusting from outbound degree to Eigenvector centrality, it was immediately clear what the focus of this group of accounts was. The next level of zoom into the names revealed two other cryptocurrency news sites and a leader in the field as the targets of these accounts.
Thinking that 112 was a small number, we extracted their 7,029 unique followers’ IDs and got their names. Nearly 600 matched the first/last/digits format, but there were other similarities as well. We placed all 7,029 in our “slow cooker”, set to capture all of their tweets.
We were expecting to find signs of a botnet, but it appears the entire set of accounts are part of the same effort. The 6,897 we managed to collect were all created in the same twelve week period. The gap between creation times and the steady production of about eighty accounts per day seems to indicate a small hand run operation in a country with cheap labor.
The network is transparently focused on cryptocurrency over the long haul. Adjusting the timeframe to the last thirty days moved the keywords around a bit but the word cloud is largely the same. The clue to how these accounts got into the mix is there in the lower right quadrant – #followme and #followback indicate a willingness to engage whomever from the world at large, in addition to their siblings.
The reason we have pursued this so far, when it looks like just a cryptocurrency botnet, is this clue.
Here are five bad actors with two other accounts created right in between them time wise. This is the most striking example, but there are others like it. And understand the HUMINT that triggered this – a group of people who do nothing but take down racist hate talkers all day feel besieged by a group that manages to immediately regenerate after losing an account.
Our Working Theory
What we think we are seeing here is a pool of low end crypto pump and dump accounts that were either created for or later sold to a ringleader in this radicalized right wing group.
Now that we have roughly 7,000 of them on record, we have to decide what to do. This is just such a blatant example of automation that Twitter might immediately take it down if they notice. The 6.5 million tweets we collected are utterly dull – the prize here is the user profile dataset. We’d need some mods to our software, but maybe we need to collect all the followers for this group of 7,000 and figure out what the actual boundaries of this botnet truly are.
This has been a tiresome encounter for those who make it their business to drive hate speech from Twitter, but this may be the light at the end of the tunnel. If one group is using pools of purchased accounts to put their foot soldiers back in play the minute they get suspended, others are doing this, too. No effort was made to conceal this one from even moderate analysis efforts. If we demonstrate this is a pattern and Twitter is forced to act, we may well find that a lot of the heat will go out of political discourse on that platform.