Howdy from the Tech Team!

While the marketing folks and communications scientists are off doing their respective things, we thought you might enjoy a brief behind-the-scenes peek at the sorts of things we’re doing to make VERN awesome to use.

Specifically, we’re going to talk about VERN integration, brag a bit about speed, and walk through the system architecture.

Fair warning: This one is geared more towards folks who are nerds like us, so it might be a little dry if you’re not a nerd like us. That said, we do hope you’ll still find it interesting, and we’d love to hear any questions you have about the things we cover here.

Integration

When we started out on the project, we wanted to make VERN absolutely as easy to use as possible. We considered a few ways to make that happen, and we decided that the best solution for all sides of the equation was to present it as a REST API that speaks JSON. This way, the only barrier to entry is being able to create an account and then make requests over the internet. The entire API is a single endpoint (POST /analyze) for the time being, and there are only two known failure scenarios.

To make things even easier, we’re definitely going to release client libraries for the popular languages out there, but if we haven’t released one for your language yet, integration is still pretty easy. For example, let’s say we want to analyze the phrase “my sausages turned to gold.” Here’s how we’d make that request with curl:

curl -s -X POST \
  -H 'Authorization: my-vern-creds' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"text":"my sausages turned to gold"}' \
  https://vernapi.com/analyze

In return, you get back a JSON response that contains both the text that was analyzed and a collection of confidence scores:

{
  "text" : "my sausages turned to gold",
  "scores" : [
    {"name" : "sadness", "value" : 12.5},
    {"name" : "anger", "value" : 12.5},
    {"name" : "humor", "value" : 12.5}
  ]
}


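If you’d rather make that call from code than from a shell, here’s a rough sketch of what a hand-rolled client could look like in Go (the language our API happens to be written in). To be clear, this isn’t one of our official client libraries, and the type names and error handling below are just placeholders, but it shows how little ceremony is involved:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// Score mirrors one entry in the "scores" array shown above.
type Score struct {
    Name  string  `json:"name"`
    Value float64 `json:"value"`
}

// AnalyzeResponse mirrors the JSON body returned by POST /analyze.
type AnalyzeResponse struct {
    Text   string  `json:"text"`
    Scores []Score `json:"scores"`
}

func main() {
    // Build the same request body we sent with curl.
    body, err := json.Marshal(map[string]string{"text": "my sausages turned to gold"})
    if err != nil {
        log.Fatal(err)
    }

    req, err := http.NewRequest("POST", "https://vernapi.com/analyze", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "my-vern-creds")
    req.Header.Set("Accept", "application/json")
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Decode the confidence scores and print them.
    var result AnalyzeResponse
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        log.Fatal(err)
    }
    for _, s := range result.Scores {
        fmt.Printf("%s: %.1f\n", s.Name, s.Value)
    }
}
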
So, it’s a run-of-the-mill REST API, and that means we’re drastically sacrificing performance, right?

Speed

Not really. For various reasons, we chose Go as the implementation language for our API. We may do a piecemeal switch to Rust at some point, but for now, Go is the sweet spot for our development style and performance needs.

In our most recent round of stress testing, we used both siege and hey to hit the production environment with a total of 25,000 analyze requests per run from either 100 or 200 emulated concurrent users (depending on our mood). Here’s what we know after multiple test runs with both tools over the last week:

    • We’re handling an average of just above 300 requests per second under load.
    • The production infrastructure isn’t being bogged down by those requests at all, really.
    • The request rate appears to be network and CPU bound … on the client side. Or it’s solar flares.

At that rate, we’re pretty confident saying that as long as your internet connection is decent and the sun is behaving, you should see snappy response times from VERN.
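
We won’t reproduce our siege and hey configs here, but if you want to sanity-check throughput from your own corner of the internet, a quick-and-dirty sketch in Go might look something like the following. It isn’t our test harness, just an illustration, and it uses much smaller numbers than our 25,000-request runs (please keep it polite, since it hits the live endpoint):

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "sync"
    "time"
)

func main() {
    const (
        total       = 200 // far smaller than our 25,000-request runs
        concurrency = 10  // vs. the 100 or 200 emulated users we use
    )
    payload := []byte(`{"text":"my sausages turned to gold"}`)

    // A simple work queue: one token per request.
    jobs := make(chan struct{}, total)
    for i := 0; i < total; i++ {
        jobs <- struct{}{}
    }
    close(jobs)

    var wg sync.WaitGroup
    start := time.Now()
    for i := 0; i < concurrency; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range jobs {
                req, err := http.NewRequest("POST", "https://vernapi.com/analyze", bytes.NewReader(payload))
                if err != nil {
                    continue
                }
                req.Header.Set("Authorization", "my-vern-creds")
                req.Header.Set("Content-Type", "application/json")
                resp, err := http.DefaultClient.Do(req)
                if err != nil {
                    continue
                }
                resp.Body.Close()
            }
        }()
    }
    wg.Wait()

    elapsed := time.Since(start)
    fmt.Printf("%d requests in %v (%.0f req/s)\n", total, elapsed, float64(total)/elapsed.Seconds())
}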

Architecture

Where would we be without our friends? Heck, it took two people to write this article!

To stretch that metaphor a bit, VERN is a highly service-oriented system, he’s brought his friends along for the ride, and everybody in the clique has a single responsibility. That’s part of how we’ve managed to keep response times low while keeping the implementation maintainable. In fact, each of those services addresses its own internal components as services as well, which leaves the door open for those parts to be moved out of the main application at some point.
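
To make that “internal components as services” idea a little more concrete, here’s a hedged illustration. None of this is VERN’s actual code, and the Analyzer name is made up; the point is simply that the rest of the application only ever talks to an interface, so an implementation can move out of the main application later without its callers noticing:

package main

import "fmt"

// Analyzer is the only thing callers ever see.
type Analyzer interface {
    Analyze(text string) (map[string]float64, error)
}

// localAnalyzer is the "still lives in the main application" implementation.
type localAnalyzer struct{}

func (localAnalyzer) Analyze(text string) (map[string]float64, error) {
    // Real scoring logic omitted; placeholder values only.
    return map[string]float64{"sadness": 12.5, "anger": 12.5, "humor": 12.5}, nil
}

// If this component gets split out into its own service later, only this
// constructor changes (say, to return a small HTTP client instead).
func newAnalyzer() Analyzer { return localAnalyzer{} }

func main() {
    a := newAnalyzer()
    scores, _ := a.Analyze("my sausages turned to gold")
    fmt.Println(scores)
}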

So, who are VERN’s friends? ARTHUR and HORACE:

    • VERN knows how to read people’s emotions, so that’s what he does.
    • ARTHUR the Authenticator knows how to identify people, so he’s the doorman.
    • HORACE the Hoarder never loses anything, so he’s in charge of keeping records.

When you make a request to VERN, he asks ARTHUR whether the credentials you’ve provided are valid. If ARTHUR says you’re cool, VERN does his thing, then leaves a note for HORACE (as a background job) so the fact that he talked to you gets recorded.
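
To put that flow into (very) rough code: the sketch below is illustrative rather than our real handler, and the status codes and helper names are placeholders, but the shape of the analyze path looks something like this:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// score mirrors one entry in the response's "scores" array.
type score struct {
    Name  string  `json:"name"`
    Value float64 `json:"value"`
}

func analyzeHandler(w http.ResponseWriter, r *http.Request) {
    // 1. Ask ARTHUR whether the provided credentials are valid.
    if !arthurValidate(r.Header.Get("Authorization")) {
        http.Error(w, "invalid credentials", http.StatusUnauthorized)
        return
    }

    // 2. VERN does his thing.
    var req struct {
        Text string `json:"text"`
    }
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
    }
    scores := vernAnalyze(req.Text)

    // 3. Leave a note for HORACE as a background job so record keeping
    //    never holds up the response.
    go horaceRecord(req.Text, scores)

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]interface{}{"text": req.Text, "scores": scores})
}

// Stand-ins for the real services.
func arthurValidate(creds string) bool { return creds != "" }

func vernAnalyze(text string) []score {
    return []score{{"sadness", 12.5}, {"anger", 12.5}, {"humor", 12.5}}
}

func horaceRecord(text string, scores []score) {}

func main() {
    http.HandleFunc("/analyze", analyzeHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}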

We’re running all of these (and a database) on the same server in our dev and staging environments, but they’re on separate servers in the production environment to prevent one service from impacting another. To sweeten the deal a bit, we’ve got a load balancer for each service for the sake of scale-out simplicity.

Wrapping Up

We’ll have many more technical posts like this one, where we discuss the architecture and take input from our community. VERN is designed to be a tech-first company, so we’re focused on development and capabilities. That means we only roll out new emotions when the statistics show they’re significant across multiple frames, and it means we focus on the end user and try to anticipate what your needs may be. If you have any suggestions, or would like to participate in upcoming chats and meetups, sign up for our newsletter today.