Several Upgrade Efforts

It was pointed out that stating we had Six Hours Of Downtime, then not saying much afterward, was creating the perception of an uncertain future. That is not at all the case; here is what has been happening.

The outage was caused by a power transient, and it was the first of a couple of problems that prompted some introspection regarding our design and implementation choices. Here we are a month later, and we have a few bits of wisdom to share:

  • Elasticsearch 6.5.4 was OK, but 6.8.2 is fine.
  • Headless Debian 9.9 is much smaller than Lubuntu.
  • Do not make fixed-size disks for your virtual machines (see the sketch after this list).
  • Definitely do not try to run ZFS in a VM on top of ZFS. Just don’t.
  • Three VMs on a single spindle pair is a PILOT configuration.
  • Give data VMs one more core than they seem to need.
  • A brief visit to swap seems to be OK.
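The fixed-size disk point deserves a concrete illustration. Below is a minimal sketch assuming a KVM/QEMU host; the hypervisor, image path, and size are assumptions for illustration, not a description of our actual hosts. A qcow2 image is allocated sparsely and only consumes space as the guest writes data, so a VM can be given a generous ceiling without committing the disk up front.

```python
# Sketch: create a sparse, grow-on-demand qcow2 image instead of a fully
# pre-allocated (fixed size) virtual disk. Path and size are hypothetical.
import subprocess

DISK_PATH = "/var/lib/libvirt/images/data-node-01.qcow2"  # hypothetical path
VIRTUAL_SIZE = "512G"  # ceiling the guest sees; space is claimed only as used

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", DISK_PATH, VIRTUAL_SIZE],
    check=True,
)
```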

Taking care of all these equipment and software upgrades has left us with a system that will respond to queries against our collection of 53 million Twitter profiles in one to three seconds, instead of multiple thirty-second waits before it finally does the job. While capturing fifty streams. And merging two large user profile indices.
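For those curious what "one to three seconds" means in practice, here is a rough sketch of the sort of timing check we run, using the elasticsearch-py client; the endpoint, index name, and query are placeholders rather than our production tooling.

```python
# Rough timing check against the profile index. Endpoint, index name, and
# query are placeholders; this is a sketch, not production tooling.
import time
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # hypothetical endpoint

start = time.monotonic()
resp = es.search(
    index="twitter-profiles",  # placeholder index name
    body={"query": {"match": {"description": "journalist"}}},
    size=25,
)
elapsed = time.monotonic() - start

print(f"{resp['hits']['total']} hits in {elapsed:.2f}s "
      f"(Elasticsearch reports took={resp['took']} ms)")
```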

That last bullet, the one about swap, requires some explanation: the Lucene component of Elasticsearch is known to make heavy use of “off-heap” memory. Data VMs have been set up with 8 gig of Java heap space and anywhere from 16 to 32 gig of RAM. No matter what, they all “stick their toes” into their swap pools, but never more than a few megabytes’ worth. Normally swap access is the last gasp before things spiral out of control, but these machines have been doing this for days under intentionally demanding conditions and we haven’t been able to bowl them over.
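Here is the sort of spot check we use to keep an eye on that behavior, again as a sketch: the node stats API reports JVM heap alongside OS-level swap, so one short script covers every data node. The endpoint is a placeholder.

```python
# Sketch: report JVM heap and OS swap usage per node via the node stats API.
# Endpoint is a placeholder.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # hypothetical endpoint
stats = es.nodes.stats(metric="jvm,os")

for node_id, node in stats["nodes"].items():
    heap_used = node["jvm"]["mem"]["heap_used_in_bytes"]
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    swap_used = node["os"]["swap"]["used_in_bytes"]
    print(f"{node['name']}: heap {heap_used >> 20}/{heap_max >> 20} MiB, "
          f"swap {swap_used >> 20} MiB")
```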

What is driving this dramatic increase in volume is our rapidly evolving stream handling capability and the growing number of analysts using the system. We currently have the following areas of operation:

  • Legislatures – stream members and their interactions for 12 countries.
  • European MPs, U.S. Governors, and U.S. Presidential candidates.
  • Campaigns – non-election related advocacy.
  • Disinformation – things that do not appear to be legitimate.
  • Threat Monitoring – likely trouble areas get proactively captured.

Threat Monitoring is, somewhat sadly, the busiest area thanks to America’s penchant for mass shootings. Seven analysts inhabit a task-oriented channel, and they always have more to examine than they have time for.

Disinformation has a strong overlap with Threat Monitoring, and whatever is on fire at the moment has been taking precedence over delving into the campaigns that create the preconditions for trouble. More hands and eyes will improve things in this area.

Campaigns & Legislatures have been purely the domain of the Social Media Intelligence Unit. There are reports that go out, but they don’t see the light of day publicly. We should probably pick an area periodically and publish the same sort of work here. In our copious spare time.

As we hinted above, we have added storage to our existing systems, but that is an interim measure. We are currently collecting hardware specifications and reviewing data center locations and rates. The cheapest is Hurricane Electric in Fremont, but the best, given what we do, might be Raging Wire in Sacramento. That’s the oldest and largest of Twitter’s four datacenters, for those not familiar with the name.

Legislatures and much of Campaigns are on their way to that datacenter. Disinformation has no public facing facet and will remain in house where we can keep an eye on it. Threat Monitoring is a mixed bag; some will stay in the office, some will involve tight integration with client systems.

Those who pay attention to the Netwar System GitHub will have noticed many changes over the last few days. We are approaching the point where we will declare the Elasticsearch 6.8.2 upgrade complete, and then maybe have another go at offering a Netwar System Community Edition VM. When it is ready there will be an announcement here.