Well, it turns out that I am the one to have the honor of making the following announcement: you can now deploy stuff with CloudSigma using Terraform! Ain’t that awesome?! Allow me to say: yes, it is awesome. Terraform, a HashiCorp product, is a tool for building, changing, and versioning infrastructure safely and efficiently. You can easily deploy your infrastructure on CloudSigma using Terraform and, in this tutorial, I will show you how.
That said, we only recently started developing the provider. Development has reached v1 (v1.2.1 at the time of writing) and is ongoing, so we are constantly adding new features. Feel free to report any bugs or to open a pull request here.
You can check out Terraform’s mini-site here:
https://registry.terraform.io/providers/cloudsigma/cloudsigma/latest/docs.
In any case, in order to start using it, you need to:
- Install Terraform
- Set up your working environment
In order to install Terraform, you can follow the installation procedure here.
In my case, I use Fedora, so I just:
```shell
# become root
su -
# install necessary plugins for DNF
dnf install -y dnf-plugins-core
# add the repo
dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
# install it
dnf -y install terraform
# logout
logout
```
Keep in mind that the installation instructions vary widely across systems. Just follow the instructions for yours. OK, so we now have Terraform installed. Next, we are going to set up our working environment. If you’re using Terraform, it’s always smart to enable autocompletion for your shell. In my case, I just need to:
```shell
terraform -install-autocomplete
```
This will help me a lot when I run commands by letting me use the mythical <tab> key to complete commands.
I will be creating a directory for this:
```shell
mkdir tf-cs-my_project
cd $_
```
This will help me identify that this is a Terraform deployment at CloudSigma. A bit pragmatic, but it works for me. Now, I’ll create a file called: main.tf. It will contain my main deployment definition:
```hcl
terraform {
  required_providers {
    cloudsigma = {
      source  = "cloudsigma/cloudsigma"
      version = "1.2.1"
    }
  }
}

# variables
variable "cloudsigma_username" {}
variable "cloudsigma_password" {}
variable "location" {}

provider "cloudsigma" {
  username = var.cloudsigma_username
  password = var.cloudsigma_password
  location = var.location
}
```
After that, we need to provide our secrets to it. For this, I will create a file called: terraform.tfvars so it gets them from there.
I will be using git to track changes in these files, so it is very important that I do not forget to ignore this file later. Here are its contents:
```hcl
# credentials
cloudsigma_username = "my_email@mydomain.tld"
cloudsigma_password = "Some ridiculously hard to bruteforce password"

# settings
location = "sjc"
```
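As an aside, Terraform can also pick these values up from environment variables prefixed with `TF_VAR_`, which keeps secrets out of files on disk entirely. A quick sketch using the same placeholder values as above:

```shell
# Terraform maps TF_VAR_<name> environment variables to the input
# variables of the same name, so these cover cloudsigma_username,
# cloudsigma_password, and location without touching terraform.tfvars.
export TF_VAR_cloudsigma_username="my_email@mydomain.tld"
export TF_VAR_cloudsigma_password="Some ridiculously hard to bruteforce password"
export TF_VAR_location="sjc"
```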
Also, just ignore files we don’t want to track:
```shell
cat <<EOF > .gitignore
*.tfstate
*.backup
.terraform/
*.tfvars
*.lock.hcl
EOF
```
And just start tracking what we have:
```shell
git init
git add .
git commit -am 'first commit'
```
Now, we have everything we require to start creating virtual infrastructure, but we still need to init the Terraform “repo”:
```shell
terraform init
```
This will set things up to start using the definitions there. Now, for the first server definition, we will append the following to main.tf:
```hcl
resource "cloudsigma_server" "web" {
  cpu          = 2 * 2.5 * 1000
  memory       = 2 * 1024 * 1024 * 1024
  name         = "web"
  vnc_password = "my_vnc_pass"

  drive {
    uuid = cloudsigma_drive.data.id
  }

  network {
    type = "dhcp"
  }
}

resource "cloudsigma_drive" "data" {
  media          = "disk"
  name           = "web-data"
  size           = 100 * 1024 * 1024 * 1024
  clone_drive_id = "7a786142-5f3d-4e9f-9c64-30ad66afe1c3"
}
```
This requires a bit of an explanation. First, we define the CPU in MHz. Usually, we want to match our host’s actual CPU clock speed so that we can guarantee 100% time assignment to that core. It will let us use the core at its full potential. This number may vary across locations and hosts. It’s always good to ask your friendly neighborhood support agent for this particular number so that you can correctly match it.
So, the cpu line could be read as:
```
cpu = number_of_cores_required * host_cpu_clock * make_it_megahertz
```
As for the memory, in this case, we want 2 gibibytes’ worth. The memory gets defined in bytes, which is why we multiply by 1024 three times to turn gibibytes into bytes. Similarly, this line could be read like this:
```
memory = number_of_gibibytes_required * make_it_kibibytes * make_it_mebibytes * make_it_gibibytes
```
Yeah, kind of confusing, eh? Not really. That’s how APIs work. There is also a parameter I provided to the drive, called clone_drive_id. It tells Terraform to create the drive as a clone of the drive with that ID. In this case, that ID corresponds to our San Jose location’s CentOS 8.3 server drive. This means the drive will come with CentOS 8.3 and it will get resized to 100 gibibytes for me. Pretty cool, eh?
Just note that this ID changes for each location. You need to check out the “library” section and obtain the UUID for the drive you want to clone. The rest are kind of self-explanatory. The name stands for the server’s name. vnc_password stands for?.. you guessed it! Let’s move on.
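If the magic numbers in the cpu and memory lines bother you, one option is to name the factors with Terraform locals. This is purely illustrative; the values are the same ones used above:

```hcl
# Illustrative only: named factors for the arithmetic explained above.
locals {
  cores      = 2    # vCPU cores we want
  host_mhz   = 2500 # per-core host clock in MHz (ask support for your location's value)
  memory_gib = 2    # RAM in GiB
}

# Then, inside the server resource, the two lines become:
#   cpu    = local.cores * local.host_mhz
#   memory = local.memory_gib * 1024 * 1024 * 1024
```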
Now, in order to see if this definition is valid, we can just tell Terraform to validate it:
```shell
terraform validate
```
This only tells you if you didn’t screw up the syntax or indentation. For indentation fixes, we can ask Terraform to do it for us:
```shell
terraform fmt
```
In any case, the command we want to use to get an idea of what we are getting with that definition is the following:
```shell
terraform plan
```
It will give us a readable definition of what we asked Terraform to deploy. And, if we agree with what we see, we can just deploy:
```shell
terraform apply
```
When that succeeds (like in 5 seconds for me), you will have a brand new server called “web”, with a 100 GiB drive to play with. Pretty cool, no?
But, I would like one with a static IP. In order to add that, I need to manually subscribe to a static IP, so it can be assigned. Once you do that, you can just add the following within the server resource. Let’s say the IP I got is 104.36.19.217. Then, I would add:
```hcl
network {
  ipv4_address = "104.36.19.217"
  type         = "static"
}
```
So, it looks like this:
```hcl
resource "cloudsigma_server" "web" {
  cpu          = 2 * 2.5 * 1000
  memory       = 2 * 1024 * 1024 * 1024
  name         = "web"
  vnc_password = "my_vnc_pass"

  drive {
    uuid = cloudsigma_drive.data.id
  }

  network {
    ipv4_address = "104.36.19.217"
    type         = "static"
  }
}
```
And re-apply the definitions:
```shell
terraform apply
```
Done! You have static IPs. The same goes for attaching a vLAN for private networking:
```hcl
resource "cloudsigma_server" "web" {
  cpu          = 2 * 2.5 * 1000
  memory       = 2 * 1024 * 1024 * 1024
  name         = "web"
  vnc_password = "my_vnc_pass"

  drive {
    uuid = cloudsigma_drive.data.id
  }

  network {
    ipv4_address = "104.36.19.217"
    type         = "static"
  }

  network {
    vlan_uuid = "02b6e778-9fc7-45d7-b3a1-c77d557ad447"
  }
}
```
Yep, easy!
The complete example should look like this:
```hcl
terraform {
  required_providers {
    cloudsigma = {
      source  = "cloudsigma/cloudsigma"
      version = "1.2.1"
    }
  }
}

# variables
variable "cloudsigma_username" {}
variable "cloudsigma_password" {}
variable "location" {}

provider "cloudsigma" {
  username = var.cloudsigma_username
  password = var.cloudsigma_password
  location = var.location
}

resource "cloudsigma_server" "web" {
  cpu          = 2 * 2.5 * 1000
  memory       = 2 * 1024 * 1024 * 1024
  name         = "web"
  vnc_password = "my_vnc_pass"

  drive {
    uuid = cloudsigma_drive.data.id
  }

  network {
    ipv4_address = "104.36.19.217"
    type         = "static"
  }

  network {
    vlan_uuid = "02b6e778-9fc7-45d7-b3a1-c77d557ad447"
  }
}

resource "cloudsigma_drive" "data" {
  media          = "disk"
  name           = "web-data"
  size           = 100 * 1024 * 1024 * 1024
  clone_drive_id = "7a786142-5f3d-4e9f-9c64-30ad66afe1c3"
}
```
There are a few things that, at the time of writing, we cannot do, though. For example, we can create an SSH key resource, but we still cannot assign it to a server. That will only be true for a short while, though: we will get it working soon and you’ll be happy to try it out!
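For reference, defining the key itself already works. A minimal sketch using the provider’s SSH key resource; the resource label, key name, and public key material here are placeholders of mine:

```hcl
# Registers a public key with CloudSigma. Assigning it to a server
# is the part that does not work yet at the time of writing.
resource "cloudsigma_ssh_key" "deploy" {
  name       = "deploy-key"
  public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... me@laptop"
}
```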
Happy Computing!
- How to Deploy your Virtual Infrastructure at CloudSigma with Terraform - March 15, 2021