How to create a Cassandra container for testing, with a keyspace and the latest schema, via a single script call
In other words, we want a Cassandra container that already has the keyspace and the up-to-date schema (as defined by our migration scripts) after executing something like ./run_containerised_cassandra_locally.sh
Breaking this problem into sub-problems, the first thing we need is a DB migration tool. In the RDBMS world we have tools like Flyway or Liquibase. For Cassandra, with a little bit of digging, com.hhandoko:cassandra-migration can be found. It can either be incorporated into our codebase as a dependency, or exist as a standalone artefact (a jar file) to be run on demand by a script. The latter is what we are going to use here.
When we go to the releases page of that repo, we can see the version of the jar with dependencies. (Current version r).
We want to create a structure like the following:
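The screenshot of the directory structure is not reproduced in this copy of the post; assuming hypothetical file names, the layout described in the text looks roughly like this:

```
schema_migration/
├── cassandra_migration/                        <- the migration scripts
│   ├── V1__create_users_table.cql
│   └── V2__add_email_to_users.cql
├── cassandra-migration-jar-with-dependencies.jar
└── run.sh
local_cassandra/
├── dbScripts/
│   └── create_keyspace.cql
├── Dockerfile
├── env_vars.sh
├── intermediate_entrypoint.sh
└── run_containerised_cassandra_locally.sh
```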
Under schema_migration we place another directory which will hold our migration scripts. The naming should follow the pattern shown in the screenshot: starting with V1__ (mind the double underscore), followed by a description of what the script contains.
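As an illustration of the naming convention (the table and columns here are made up), a first migration script named V1__create_users_table.cql might contain:

```sql
-- V1__create_users_table.cql: first versioned migration
CREATE TABLE users (
    id       uuid PRIMARY KEY,
    username text,
    email    text
);
```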
Now, if we want to run the schema migration, we need to call the java jar file with some parameters. For the sake of convenience, we introduce the script run.sh that you can see in the screenshot above.
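The script itself only appeared as a screenshot; below is a minimal sketch of what run.sh could look like. The -D system property names follow the cassandra-migration project's standalone usage and should be double-checked against its README, and the environment variable names (CASSANDRA_HOST and friends) are assumptions for this sketch:

```shell
#!/bin/bash
# Run the schema migration against the target Cassandra instance.
# Note: the -D system properties must come before -jar.
java \
  -Dcassandra.migration.scripts.locations=filesystem:cassandra_migration \
  -Dcassandra.migration.cluster.contactpoints="${CASSANDRA_HOST}" \
  -Dcassandra.migration.cluster.port="${CASSANDRA_PORT}" \
  -Dcassandra.migration.keyspace.name="${CASSANDRA_KEYSPACE}" \
  -jar cassandra-migration-*-jar-with-dependencies.jar migrate
```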
We need to make it runnable with chmod +x run.sh.
And finally, before we run it, we need to export the variables that are passed as system properties to the migration utility. (For local development, where there are no really sensitive values, we can keep a file with the export commands and source it into our environment; an example follows.)
As a bonus, because we use the migration job as part of our pipeline, we have containerised the migration utility so we can easily use it on Jenkins. You can skip that part if you only want this for the local testing/development container.
Now we are ready to create the Cassandra container with our stuff :D
Now we are in the local_cassandra directory of the structure above. In the dbScripts directory, we just add a simple CQL command that will be used to create the keyspace once Cassandra is up and running.
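For example, dbScripts/create_keyspace.cql (the keyspace name and replication settings here are placeholders suitable only for local development) could be:

```sql
CREATE KEYSPACE IF NOT EXISTS my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
```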
Let's create the docker file now:
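The Dockerfile is not included in this copy of the post; a sketch consistent with the line numbers referenced below (the file names are assumptions) could be:

```dockerfile
FROM cassandra:3.11

COPY env_vars.sh /env_vars.sh
COPY intermediate_entrypoint.sh /intermediate_entrypoint.sh
COPY dbScripts/create_keyspace.cql /create_keyspace.cql

ENTRYPOINT ["/intermediate_entrypoint.sh"]
CMD ["cassandra", "-f"]
```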
On line 3 we copy a file with all the variables that the migration script will need. Since we have no sensitive secrets for the local environment, it just contains the following.
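For instance (all values are non-sensitive local defaults; the variable names are whatever run.sh expects, so adjust them to match):

```shell
# env_vars.sh: non-sensitive defaults for local development
export CASSANDRA_HOST=localhost
export CASSANDRA_PORT=9042
export CASSANDRA_KEYSPACE=my_keyspace
```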
On line 4 we copy the intermediate script into the container. I call it intermediate because it will run before the real entry point; it is like a wrapper. It is a modified version of a script we found on Stack Overflow (source).
It is definitely an interesting one. Let's analyse its functionality.
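The script itself was shown as a screenshot; here is a reconstruction consistent with the line numbers discussed below (the file names, the entrypoint path of the official Cassandra image, and the use of cqlsh to create the keyspace are assumptions of this sketch):

```shell
#!/bin/bash
# Wrapper around the official entrypoint: in the background, retry the
# keyspace creation until Cassandra accepts it, then drop a flag file.

source /env_vars.sh
(
  until cqlsh -f /create_keyspace.cql; do
    echo "Cassandra is not up yet, retrying in 2 seconds..."
    sleep 2
  done
  # Signal that Cassandra is up and the keyspace exists
  touch /tmp/cassandra_up.flag
) &

# Hand control to the real entrypoint with the original arguments
exec /usr/local/bin/docker-entrypoint.sh "$@"
```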
On line 5 we source the file from earlier so that it exports the variables for the migration script; this is also where the keyspace (schema) name value comes from.
Then it loops, waiting two seconds between attempts, until the keyspace creation returns successfully. When it does, it also creates a flag file at /tmp/cassandra_up.flag that we use to make sure Cassandra is up and the keyspace has been created. This file will trigger the schema migration step in the script run_containerised_cassandra_locally.sh. The & at the end of line 13 makes that block of code run asynchronously in the background.
On line 16 we proceed to the real Docker entry point, passing along all the arguments that were originally given to this script in the Dockerfile.
Finally, our script that will orchestrate all the above and create the Cassandra container.
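As with the other scripts, run_containerised_cassandra_locally.sh appeared only as a screenshot; a sketch matching the line numbers discussed below (the image and container names are assumptions) could be:

```shell
#!/bin/bash

# Clean up any leftover container so there are no name conflicts
docker stop localCassandra 2>/dev/null
docker rm localCassandra 2>/dev/null

docker build -t local-cassandra .
docker run -d --name localCassandra -p 9042:9042 local-cassandra

echo "Waiting for Cassandra to start and the keyspace to be created..."

# Poll for the flag file that the intermediate script creates inside
# the container once the keyspace exists
until docker exec localCassandra test -f /tmp/cassandra_up.flag; do
  printf '.'
  sleep 2
done
echo "Cassandra is up and the keyspace has been created."

# Source the migration variables once again and run the migration utility
source ./env_vars.sh
../schema_migration/run.sh
```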
Lines 3-5 clean up any already existing Cassandra container with the name localCassandra, so we won't have any conflicts.
Lines 7-8 run the container.
Lines 14-18 wait for the flag file to be created. This determines when the container is ready for the next step.
Finally, we run the Cassandra migration utility (the jar), after sourcing the variables once again.
Don't forget to make it executable!
Now we can have a Cassandra container with our schema and everything just by running ./run_containerised_cassandra_locally.sh