Marco Savard

PostgreSQL vs ArangoDB

For nearly 2 years now, I have been responsible for implementing and supporting the business rules at the company I work for.

Recently, my boss asked me to find out how often a specific situation happened in the last month.

Now, for (nearly) those same 2 years, I’ve been using ArangoDB to investigate the input and output of the BRMS. The reason behind the decision to use ArangoDB is that it supports JSON natively, since it’s a document database built on JSON, and a GUI is bundled with the server that lets you query and explore the documents very easily.

I should add that the BRMS receives 50k calls per day across 2 servers, which is not much. All inputs and outputs are saved in the form of events that must be reassembled to get a complete picture of what happened for a specific call.
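To give an idea of the shape of the data, here is a minimal Go sketch of that reassembly step. The event fields are invented for illustration; the real events carry the full input or output payloads.

    package main

    import "fmt"

    // Event is a simplified stand-in for what gets logged;
    // the real events are bigger JSON documents.
    type Event struct {
        CallID  string // identifies the BRMS call the event belongs to
        Kind    string // e.g. "input" or "output"
        Payload string // the JSON body of the event
    }

    // reassemble groups events by call ID so we get the complete
    // picture (all inputs and outputs) for each call.
    func reassemble(events []Event) map[string][]Event {
        calls := make(map[string][]Event)
        for _, e := range events {
            calls[e.CallID] = append(calls[e.CallID], e)
        }
        return calls
    }

    func main() {
        events := []Event{
            {CallID: "42", Kind: "input", Payload: `{"q": "..."}`},
            {CallID: "42", Kind: "output", Payload: `{"a": "..."}`},
        }
        fmt.Println(len(reassemble(events)["42"])) // prints 2
    }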

So last week, I took a piece of Go code I had for calculating stats and added an option to actually save the logs into ArangoDB. It was fairly simple, since the code already did everything except save the data.
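The saving part boiled down to posting each document to ArangoDB’s HTTP document API. A minimal sketch of that step, assuming a database named brms, a collection named logs, and no authentication (all placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // saveDocument posts one JSON document to ArangoDB's HTTP
    // document API, which creates it in the given collection.
    func saveDocument(doc []byte) error {
        url := "http://localhost:8529/_db/brms/_api/document/logs"
        resp, err := http.Post(url, "application/json", bytes.NewReader(doc))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 300 {
            return fmt.Errorf("arangodb returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        doc := []byte(`{"callId": "42", "kind": "input"}`)
        if err := saveDocument(doc); err != nil {
            fmt.Println(err)
        }
    }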

Friday morning, I started the job to import the logs from one server into Arango. After a few hours (and a few corrections), the first batch was loaded: the collection contained 600,000 documents. While it was loading, I noticed that the CPU usage of the Arango daemon was high and that it was taking a LOT of memory.

Anyway, I didn’t dwell on it and just tried to execute the query I had prepared. Without luck. Every time I tried to execute it, the computer ran out of memory. Now, I had taken the time not only to prepare the query in AQL but also to properly index the ArangoDB collection so that the query would be performant. Or so I thought. But it seems I was wrong. I’ll have to take the time to investigate ArangoDB’s performance in that situation.
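For context, the setup looked roughly like the sketch below, again through the HTTP API. The indexed fields and the query are placeholders, not the real ones:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // post sends a JSON body to an ArangoDB HTTP API endpoint
    // and prints the status, which is enough for a one-off setup.
    func post(url, body string) {
        resp, err := http.Post(url, "application/json", bytes.NewBufferString(body))
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }

    func main() {
        base := "http://localhost:8529/_db/brms/_api"

        // Create a skiplist index on the fields the query filters on.
        post(base+"/index?collection=logs",
            `{"type": "skiplist", "fields": ["callId", "date"]}`)

        // Run the AQL query through the cursor API.
        post(base+"/cursor",
            `{"query": "FOR l IN logs FILTER l.date >= @from RETURN l", "bindVars": {"from": "2016-10-21"}}`)
    }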

But I needed my answer quickly, and I didn’t have time to investigate right away. So I added a piece of code to save the logs into PostgreSQL using its JSONB data type. I love PostgreSQL. But since JSON is not structured, it’s harder to work your way through it using SQL. ArangoDB had been a natural choice because each request generates a lot of information, so I was wondering how PostgreSQL would manage that kind of volume in the index.
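For the JSONB part, a minimal sketch of what the insert looks like, assuming the lib/pq driver and a table named logs with a single JSONB column (both placeholders):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // PostgreSQL driver
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/brms?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // PostgreSQL casts the text parameter to JSONB on insert,
        // validating the JSON at the same time.
        doc := `{"callId": "42", "kind": "input", "date": "2016-11-18"}`
        if _, err := db.Exec(`INSERT INTO logs (payload) VALUES ($1::jsonb)`, doc); err != nil {
            log.Fatal(err)
        }
    }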

I went forward anyway, changed the code, and created a table with proper indexes using the awesome GIN index type. I launched the jobs in parallel Saturday evening; they completed at 2 in the morning. Before I went to bed, I watched the CPU and memory, and everything was … normal. Like nothing was actually happening. I ran a query on the table and saw that it was filling up with requests, so everything was working. The computer just wasn’t overworked like it was with ArangoDB.
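For the record, the table and the GIN index look roughly like this (names are placeholders again). The point of the GIN index is that it covers every key/value pair inside the JSONB column, so containment queries don’t have to scan the whole table:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/brms?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // One JSONB column holds the whole event document; the GIN
        // index covers every key/value pair in it, which is what
        // makes containment queries (@>) fast.
        stmts := []string{
            `CREATE TABLE IF NOT EXISTS logs (id serial PRIMARY KEY, payload jsonb NOT NULL)`,
            `CREATE INDEX IF NOT EXISTS logs_payload_idx ON logs USING GIN (payload)`,
        }
        for _, s := range stmts {
            if _, err := db.Exec(s); err != nil {
                log.Fatal(err)
            }
        }
    }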

This morning, I took a look at the data: 1.2 million records. So I executed my query and it took… 3 seconds to run.
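To show what I mean, here is a sketch of the kind of JSONB containment query a GIN index accelerates; the filter values are invented, not my actual criteria:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/brms?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // @> is the JSONB containment operator; with a GIN index on
        // payload it becomes an index lookup instead of a scan of
        // all 1.2 million rows.
        var n int
        err = db.QueryRow(
            `SELECT count(*) FROM logs WHERE payload @> $1::jsonb`,
            `{"kind": "output", "status": "rejected"}`).Scan(&n)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("matches:", n)
    }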

OK, this is disturbing. I don’t know ArangoDB well. Not as well as I know PostgreSQL. Which I don’t actually know very well either. So what did I do wrong with ArangoDB?

21 Nov 2016 #Development #Database #PostgreSQL #ArangoDB