My Terraform Development Workflow

Brendan Thompson • 19 November 2021 • 9 min read

In this post, I will take you through my standard workflow when I am developing Terraform code. This workflow stays the same no matter what cloud or service I am coding against or how complex the code might get; I always start the same way!

I take a four-phased approach to my Terraform development. I will not always use all four phases; if it makes sense to stop at an earlier stage, I absolutely will, and 100% recommend doing that! For instance, if you're deploying a straightforward service - or even a single resource - it is not going to make sense to use domain-based files, and it absolutely does not make sense to produce a module.

The diagram below outlines the process we are going to follow.

Phase 1 - The Diagram

No matter what I am developing, I will always start with a diagram; sometimes, the diagram is just a scribble on a piece of paper, and other times it's an insanely complex OmniGraffle diagram with a multitude of components and services.

I find that by starting with a diagram, you can conceptualize where there are likely going to be points of coupling in the code or where it makes sense to split things apart into their own module or file. Creating a diagram becomes even more critical when working with complex architectures, as you need to understand the lay of the land before you turn it into code. Using this approach has the added advantage of getting your understanding of the project reviewed before you start coding; this helps reduce time spent refactoring code.

Phase 2 - The Single File

I find, in most things in life, that it is best to follow the KISS Principle; seldom - if ever - does it lead you astray. If you were to look at any documentation or tutorial on Terraform, you would likely see it telling you to structure your files like the below:

.
├── main.tf
├── variables.tf
├── outputs.tf
├── providers.tf
└── versions.tf

In this layout, all the code that builds resources lives within main.tf; input variables live within variables.tf, outputs in outputs.tf, provider declarations within providers.tf and finally, if you're running a modern version of Terraform, a versions.tf holds the constraints on the providers and the Terraform version you're using. I would agree with this as a starting pattern for any Terraform code: you can quickly tell where the relevant things are on the filesystem. However, it does not scale very well when your codebase becomes larger and more complex; we will touch on what to do at that point next. I would call this the Single File stage, as ostensibly there is only a single file where the "doing" code exists.
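As an illustration, a minimal versions.tf might look like the following; the provider and the version numbers here are assumptions for the sake of example, not a recommendation:

```hcl
terraform {
  # Constrain the Terraform CLI version this code has been tested against
  required_version = ">= 1.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.80"
    }
  }
}
```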

After creating my diagram, I will always start with this model, for a few reasons:

  1. It is easy to tell where everything is
  2. As you're writing the code, it is quick to identify where the coupling in the code is and where it makes sense to split it later down the track
  3. It can be lined up to your diagram to make sure all the components are present

And sometimes, the single file is enough! I tend to find the sweet spot for a Terraform file to be around the 150-300 lines of code mark. Once a file grows much beyond 300 LOC, it becomes rather painful to find things quickly and incredibly cumbersome for newcomers to the codebase to understand what is going on.
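To make the Single File stage concrete, here is a sketch of what a small main.tf might hold; the resource names and values are illustrative assumptions:

```hcl
# main.tf - all "doing" code lives in this one file at this stage
resource "azurerm_resource_group" "this" {
  name     = "rg-example"
  location = "australiaeast"
}

resource "azurerm_virtual_network" "this" {
  name                = "vnet-example"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  address_space       = ["10.0.0.0/16"]
}
```

With everything in one file like this, the coupling between resources is immediately visible, which is exactly what makes the later split easier.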

Phase 3 - The Domain Files

Now that we have our one giant main.tf file - I would expect it to have breached the 150-300 LOC mark - we can break the code up into separate files based on their domain. As I spend most of my time deploying cloud environments, I mostly think of domains like the following:

The above is not exhaustive and may not even be correct for the work you are doing; however, it does help articulate the point. Domains here refer to areas of concern: if we were to take the Network domain from the list above, it should contain the code for Virtual Networks, Subnets, Network Security Groups, Network Peering, Express Routes, User Defined Routing and so on. It is important to group like things in this phase; doing so will help you identify where there is repeated or redundant code.
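For instance, a network.tf domain file might gather the networking resources together like this; the resource names and address ranges are illustrative assumptions:

```hcl
# network.tf - everything in the Network domain lives here
resource "azurerm_virtual_network" "this" {
  name                = "vnet-example"
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "this" {
  name                 = "snet-example"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.this.name
  address_prefixes     = ["10.0.1.0/24"]
}
```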

I first look at my diagram and the single file, and create .tf files for the domains relevant to the code that has been - or is being - written. During this process, I generally look for improvement points or places to be more DRY; if I see anything to improve, this is likely the time to do it! Once I have a file for each domain, I move the code out of the single file and into the respective domain files.
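The resulting layout might look something like this; the domain file names are assumptions for the sake of example, and yours will follow your own domains:

```
.
├── network.tf
├── compute.tf
├── storage.tf
├── variables.tf
├── outputs.tf
├── providers.tf
└── versions.tf
```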

Phase 4 - The Modules

The final phase of the refactoring process is to create modules. If you have never written any Terraform modules before, then I highly recommend reading a post I wrote entitled: Terraform; to module, or not to module. That post is a guide on when you should or should not create a module. In my experience, I see a lot of Terraform beginners - and even veterans - make the mistake of creating modules simply for the sake of it; any time I see this, I ask them, "How is this module making our lives better?". This simple question helps focus the conversation on the viability of creating and using a module in the given space.

So, the process I follow for creating modules is to look through the codebase - and the diagram, of course - and identify points where code is coupled together and repeatable. A good candidate to think about is something like a web server.

On Azure, to deploy a web server, you're going to need a few things:

  1. A Resource Group
  2. A Virtual Network
  3. A Virtual Machine (VM)
  4. A Network Interface (NIC)
  5. A Hard Disk (HDD)
  6. A Public IP (PIP)
  7. A VM Scale Set or Availability Set (VMSS/AS)
  8. A Load Balancer (LB)

Now, not all of those items are going to come together to become the module. For instance, the resource group and the virtual network are likely to be passed into your module. So, in this instance, the module would include the VM, NIC, HDD, PIP, VMSS/AS, and LB. All of those components would be declared together in a module, providing a simple interface for consumption that can be used over and over again by just calling the module, instead of having to construct each of those resources every time.
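A sketch of what consuming such a module might look like; the module path and input names are assumptions, not a real published module:

```hcl
module "web_server" {
  source = "./modules/web-server"

  # Existing infrastructure is passed in rather than created by the module
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name

  # Module-specific inputs
  instance_count = 3
  vm_size        = "Standard_B2s"
}
```

The point of the simple interface is that callers only touch a handful of inputs, while the VM, NIC, HDD, PIP, VMSS/AS and LB wiring stays hidden inside the module.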

Now, armed with the above knowledge about an ideal scenario for creating a module - and having read the post I linked to - you can assess your codebase, identify areas where a module is appropriate, and create one.

Miscellaneous

There are a few other miscellaneous things that I like to do as part of this process.

Input Variables — when I am going through the development cycle, I tend to set default values for all of the input variables in the codebase; this reduces the amount of information I need to pass in on local executions of the Terraform code. Once you've completed the development process, though, it is imperative to remember to remove the default values where they do not make sense. This should be a manual process in which you carefully consider where it is appropriate to keep a default value.
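For example, a variable might carry a default during development that is later stripped; the variable name and value here are illustrative assumptions:

```hcl
variable "location" {
  description = "The Azure region to deploy resources into"
  type        = string

  # Convenient while developing locally; remove before release
  # if it does not make sense for consumers to have a default.
  default = "australiaeast"
}
```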

Local Configuration — more often than not, instead of using external configuration through YAML, JSON or tfvars, I will opt to use a locals {} block, or even multiple blocks. Again, this keeps the development cycle simple, and it is super easy to see what external config you will need to interact with. I have written a post about reducing code duplication, which talks about using YAML configuration.
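A small sketch of what that might look like; the values are illustrative assumptions standing in for what would eventually become external config:

```hcl
locals {
  # Configuration that would eventually move out to tfvars, YAML or JSON
  environment = "dev"
  location    = "australiaeast"

  common_tags = {
    environment = local.environment
    managed_by  = "terraform"
  }
}
```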

Scratch Provider — when working with data structures, new resources or data sources, and complex for_each loops, I will tend to use the scratch provider that I created a little while ago to help me understand - or even explain - the objects better. You can read a little more about the provider here.
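I won't reproduce the scratch provider's interface here; as a rough alternative sketch, you can get a similar effect by surfacing an intermediate data structure through an output and inspecting it with terraform console. The local value below is an illustrative assumption:

```hcl
locals {
  subnets = {
    app  = "10.0.1.0/24"
    data = "10.0.2.0/24"
  }
}

# Inspect the shape of a for_each input before wiring it to a resource
output "subnet_debug" {
  value = { for name, cidr in local.subnets : name => { cidr = cidr } }
}
```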

Closing Out

This post went through the four phases I use to develop Terraform code in my daily work and personal life. Yes, I really do use Terraform outside of work. Those phases again are:

  1. The Diagram
  2. The Single File
  3. The Domain Files
  4. The Modules

Hopefully, it has given you some pause for thought, and you're now reflecting on what your own workflow is. Having a consistent approach to Terraform development helps me produce clean, clear and maintainable code. If this helps anyone else do the same, I would love to hear from you; and if you have a completely different workflow, I would also love to hear from you!

Brendan Thompson

Principal Cloud Engineer

Azenix
