Tarmak 0.6 released

Written by Josh Van Leeuwen



Published on our Cloud Native Blog.

Update: Tarmak is under active development; check the releases page on GitHub for the latest version.

We are excited to announce the release of Tarmak 0.6! If you're unfamiliar with it, Tarmak is a CLI toolkit for provisioning and managing Kubernetes clusters on AWS with security-first principles. This release brings a host of great new features and improvements, including pre-built AMI images for worker nodes, new CLI commands, use of the Kubernetes Addon-manager and more.

Tarmak 0.6 major new features and changes

  • Worker node AMI images
  • Pre-built default AMI image
  • Calico Kubernetes backend
  • New CLI commands - cluster logs and environment destroy
  • Using Kubernetes Addon-manager
  • A built-in SSH client with a secure approach to advertising instance public keys

Worker Node AMI Images and Default Image

In this release we have a new image type that can be assigned to your worker instance pools: centos-puppet-agent-k8s-worker. This image type causes Tarmak to pre-install all the node components when building the AMI image, rather than installing them at boot time. As a result, the time from boot to the node reaching Ready status is greatly reduced, so your scaling groups gain capacity faster.
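To illustrate, an instance pool might select the new image type in tarmak.yaml along these lines. This is a hypothetical sketch: the field names here are illustrative only, so consult the Tarmak documentation for the exact configuration schema.

```yaml
# Illustrative sketch only -- field names may differ from the real schema.
instancePools:
- name: worker
  image: centos-puppet-agent-k8s-worker   # pre-baked image with node components installed
  minCount: 3
  maxCount: 5
```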

We have also created a public AMI image. If no privately built images are available for your cluster, Tarmak will fall back to Jetstack's published image instead. This change is great for new users, as they can get a cluster up and running faster without waiting for long image builds.

Calico Kubernetes Backend

We've added new options for how you deploy Calico into your clusters. Instead of etcd, the default Calico backend, you can now use Kubernetes as the backend via a toggle in the Tarmak configuration. Deploying a huge cluster? With this option you can also choose to deploy Typha, which reduces the load Calico places on the Kubernetes backend. This too is simply enabled and configured through the Tarmak configuration; the documentation explains how.
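A toggle of this kind could look roughly as follows in the cluster configuration. Again, this is only a sketch: the key names (calico, backend, typha) are assumptions for illustration, not the documented schema.

```yaml
# Illustrative sketch only -- check the Tarmak docs for the real keys.
kubernetes:
  calico:
    backend: kubernetes   # default backend is etcd
    typha:
      enabled: true       # offloads Calico's watch load on the Kubernetes backend
```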

New CLI Commands

In the unfortunate event that you're having issues with your cluster and need support, copying and pasting logs from components running on multiple machines is always a pain. It's time consuming, and the logs you missed always seem to be the ones most needed! To help with this, we've created a new command, cluster logs, that fetches all systemd logs from your targeted instance pools (vault, workers, control-plane etc.), bundles them into a reader-friendly file structure and compresses them into a tarball, ready to be shipped off to someone else over the net. This makes the support feedback loop more efficient and is a great quality-of-life improvement.

Another CLI addition is environment destroy. As it sounds, this is the big brother of cluster destroy: it destroys all clusters in the environment, including the hub. It's a command that has helped us a lot internally, and is another nice quality-of-life improvement. Do be careful, though, and make sure you really want to run it!
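Usage of the two new commands might look like this; the exact argument form (naming an instance pool after cluster logs) is our assumption from the description above, so check tarmak --help for the precise syntax.

```shell
# Fetch systemd logs from the targeted instance pool and bundle them
# into a tarball ready to share (argument form is illustrative).
tarmak cluster logs workers

# Destroy every cluster in the current environment, including the hub.
# Destructive: be sure before running this.
tarmak environment destroy
```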

Kubernetes Addon-manager

We are now using the Kubernetes Addon-manager, a controller-like service that runs on Tarmak's master nodes. The service constantly watches labelled resources in Kubernetes and compares them with local manifests inside a directory. If resources change or are removed from the local manifest set, the Addon-manager updates them in Kubernetes to keep the cluster in sync.
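The label the Addon-manager watches for is addonmanager.kubernetes.io/mode. A minimal example of a managed manifest (the ConfigMap itself is just a placeholder):

```yaml
# Resources managed by the Kubernetes Addon-manager carry this label.
# "Reconcile" means the on-disk manifest is the source of truth:
# in-cluster edits or deletions are reverted to match it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
```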

This has been working really well for Tarmak deployments, handling updates and migrations smoothly. For example, when you upgrade your cluster to 1.10 or higher, we now install CoreDNS in place of kube-dns, and the Addon-manager carries out the replacement. It also helps to seamlessly reconfigure Calico if its deployment has been changed in the Tarmak configuration, as described earlier.

SSH Overhaul and Instance Public Key Advertising

With this release, we've also made some big changes to how we create and manage our SSH connections. SSH is one of the core components of Tarmak: it enables connections to components such as wing (a small binary that sits on every node to report its state and apply configuration updates) and creates tunnels for initialising and communicating with Vault, as well as for reaching the Kubernetes API server when not using a public load balancer endpoint. Previously we used the OpenSSH client on your machine to create and manage these connections; this has now been replaced with a custom SSH client built on the standard Go SSH library. What does this mean for users? Connections should be much more reliable, and we can use them more efficiently. It has also enabled us to develop more sophisticated features, such as the log aggregation command mentioned earlier, and to mitigate problems caused by inconsistencies between the OpenSSH versions installed on different machines.

With this change we have also updated how we verify the public keys of the instances we SSH to, along with how we manage the local SSH hosts file. Now, when an instance boots, wing gathers the public keys, signs its AWS identity document with them and sends them all to an AWS Lambda function. Once the function has verified these keys, it tags the instance with them; after an instance has been tagged, its keys are never changed. Locally, Tarmak uses these tags to populate the local hosts file and to verify SSH connections to the instance. This change bolsters the security of connections to the cluster.
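The verification step boils down to comparing the key an instance presents at connect time against the key material advertised earlier. Here is a minimal, self-contained sketch of that idea in Go, Tarmak's own language. It is not Tarmak's actual implementation; the function names are ours, and we stand in for the EC2 tag with a plain string holding an OpenSSH-style SHA256 fingerprint.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint returns an OpenSSH-style SHA256 fingerprint of a raw
// public key blob, e.g. "SHA256:osXu...". OpenSSH uses unpadded
// base64 for these, hence RawStdEncoding.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

// verifyHostKey compares the key presented by an instance at connect
// time against the fingerprint advertised earlier (in Tarmak's case,
// stored as an instance tag after Lambda verification).
func verifyHostKey(advertised string, presented []byte) error {
	if got := fingerprint(presented); got != advertised {
		return fmt.Errorf("host key mismatch: got %s, want %s", got, advertised)
	}
	return nil
}

func main() {
	key := []byte("ssh-ed25519 AAAA-example-key-blob")
	advertised := fingerprint(key) // what the verified tag would hold
	if err := verifyHostKey(advertised, key); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("host key verified")
}
```

Pinning the fingerprint at first boot like this gives trust-on-first-advertisement semantics: once tagged, a changed host key is treated as a mismatch rather than silently accepted.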


Other changes include improved reliability when bootstrapping Vault instances, component updates and some bug fixes. You can read more in the CHANGELOG or on the GitHub release page.

Give the release a go, we look forward to hearing your feedback!
