Announcing HashiCorp Nomad 1.2 Beta

We are excited to announce that the beta release of HashiCorp Nomad 1.2 is now available. Nomad is a simple and flexible orchestrator used to deploy and manage containers and non-containerized applications. Nomad works across on-premises and cloud environments. It is widely adopted and used in production by organizations such as Cloudflare, Roblox, Q2, Pandora, and GitHub.

Let’s take a look at what’s new in Nomad and in the Nomad ecosystem, including:

  • System Batch jobs
  • User interface upgrades
  • Nomad Pack

»System Batch Jobs

Nomad 1.2 introduces a new job type called sysbatch, short for “System Batch”. These jobs are meant for short-lived, cluster-wide tasks. System Batch jobs are an excellent option for regularly upgrading software that runs on your client nodes, triggering garbage collection or backups on a schedule, collecting client metadata, or performing one-off client maintenance tasks.

Like System jobs, System Batch jobs work without an update stanza and will run on any node in the cluster that is not excluded via constraints. Unlike System jobs, System Batch jobs run only on clients that are ready at the time the job is submitted to Nomad.

Like Batch jobs, System Batch jobs are meant to run to completion, can be run on a scheduled basis, and support dispatch execution with per-run parameters.

If you want to run a simple sysbatch job, the job specification might look something like this:

job "sysbatchjob" {
  datacenters = ["dc1"]

  type = "sysbatch"

  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "sysbatch_job_group" {
    count = 1

    task "sysbatch_task" {
      driver = "docker"

      config {
        image = "busybox:1"

        command = "/bin/sh"
        args    = ["-c", "echo hi; sleep 1"]
      }
    }
  }
}

This will run a short-lived Docker task on every client node in the cluster that is running Linux.

To run this job at regular intervals, you would add a periodic stanza:

periodic {
  cron             = "0 0 */2 ? * *"
  prohibit_overlap = true
}

The stanza above instructs Nomad to re-run the sysbatch job every two hours, while prohibit_overlap prevents a new run from starting if the previous run is still in progress.

Additionally, sysbatch jobs can be parameterized and then invoked later using the dispatch command. These specialized jobs act less like regular Nomad jobs and more like cluster-wide functions.

Adding a parameterized stanza defines the arguments that can be passed into the job. For example, a sysbatch job that upgrades Consul to a different version might have a parameterized stanza that looks like this:

parameterized {
  payload       = "forbidden"
  meta_required = ["consul_version"]
  meta_optional = ["retry_count"]
}
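
Putting it all together, a complete parameterized sysbatch job might look like the following sketch. The job name and echo command are illustrative; Nomad exposes dispatch metadata to tasks as environment variables of the form NOMAD_META_<key>, which can also be interpolated in the job specification:

job "upgrade_consul" {
  datacenters = ["dc1"]
  type        = "sysbatch"

  parameterized {
    payload       = "forbidden"
    meta_required = ["consul_version"]
    meta_optional = ["retry_count"]
  }

  group "upgrade" {
    task "upgrade" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "/bin/sh"
        # The dispatched value is interpolated from the job's metadata
        args = ["-c", "echo Upgrading Consul to ${NOMAD_META_consul_version}"]
      }
    }
  }
}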

This sysbatch job could then be registered using the run command, and executed using the dispatch command:

$ nomad job run upgrade_consul
$ nomad job dispatch upgrade_consul -meta consul_version=1.11.0

»User Interface Upgrades

Traditional Batch jobs and System Batch jobs now feature an upgraded Job Status section, which includes two new statuses: Not Scheduled and Degraded.

Not Scheduled shows the client nodes that did not run a job. This could be due to a constraint that excluded the node based on its attributes, or because the node was added to the cluster after the job was run.

The Degraded state shows jobs in which one or more allocations did not complete successfully.

[Screenshot: the upgraded Job Status section for a sysbatch job]

Additionally, you can now view all the client nodes that batch and sysbatch jobs run on with the new Clients tab. This allows you to quickly assess the state of each job across the cluster.

[Screenshot: the new Clients tab for batch and sysbatch jobs]

»Nomad Pack (Tech Preview)

We are excited to announce the tech preview of Nomad Pack, a package manager for Nomad. Nomad Pack makes it easy to define reusable application deployments. This lets you quickly spin up popular open source applications, define deployment patterns that can be reused across teams within your organization, and discover job specifications from the Nomad community. Need a quick Traefik load balancer? There’s a Pack for that.

Each Pack is a group of resources that are meant to be deployed to Nomad together. In the Tech Preview, these resources must be Nomad jobs, but we expect to add volumes and ACL policies in a future release.

Let’s take a look at Nomad Pack, using the Nomad Autoscaler as an example.

Traditionally, users deploying the Nomad Autoscaler have needed to deploy and configure multiple jobs within Nomad: usually Grafana, Loki, the autoscaler itself, an APM, and a load balancer.

With Nomad Pack you can run a single command to deploy all the necessary autoscaler resources to Nomad, and optionally customize the deployment by passing in variable values.

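For instance, deploying the autoscaler pack and overriding one of its variables might look like the following sketch. The pack name and variable are illustrative, since registry contents may change during the tech preview:

$ nomad-pack run nomad_autoscaler

$ nomad-pack run nomad_autoscaler --var grafana_version=8.1.2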

This allows you to spend less time learning and writing Nomad job specs for each app you deploy. See the Nomad Pack repository for more details on basic usage.

By default, Nomad Pack uses the Nomad Pack Community Registry as its source for Packs. This registry provides a location for the Nomad community to share their Nomad configuration files, learn app-specific best practices, and get feedback and contributions from the broader community. Alternative registries and internal repositories can also be used with Nomad Pack. To view available packs, run the registry list command:

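# Lists packs available from the default and any added registries
$ nomad-pack registry list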

You can easily write and customize Packs for your organization’s specific needs using Go Template, a common templating language that is simple to write but can also express complex logic. Templates can be composed and reused across multiple packs, which allows organizations to more easily standardize Nomad configurations, codify best practices, and make changes across multiple jobs at once.
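
As a small illustration, a pack template interpolates pack variables into an ordinary Nomad job specification using Nomad Pack’s square-bracket template delimiters. The pack and variable names below are hypothetical:

job "[[ .my_pack.job_name ]]" {
  datacenters = ["dc1"]

  group "app" {
    # A pack variable fills in the value that differs between deployments
    count = [[ .my_pack.count ]]

    # ...
  }
}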

To learn more about writing your own packs and registries, see the Writing Custom Packs guide in the repository.

A Tech Preview release of Nomad Pack will be available in the coming weeks. The Nomad team is still validating the design and specifications around the tool and packs. While we don’t expect changes to the user flows that Nomad Pack enables, some details may change based on user feedback. Until the release, to use Nomad Pack you can build from the source code. Details can be found in the repository’s contributing guide.

As you use Nomad Pack and write your own packs, please don’t hesitate to provide feedback. Issues and pull requests are welcome on the GitHub repository and Pack suggestions and votes are encouraged via Community Pack Registry issues.

»What’s Next?

We encourage you to experiment with the new features in Nomad 1.2 and Nomad Pack, but we recommend against using Nomad 1.2 in a production environment until the official GA release. We are eager to see how the new features and projects enhance your Nomad experience. If you encounter an issue, please file a new bug report in GitHub and we’ll take a look.

Finally, on behalf of the Nomad team, I’d like to thank our amazing community. Your dedication, feature requests, pull requests, and bug reports help us make Nomad better. We are deeply grateful for your time, passion, and support.


Source: HashiCorp Blog

Announcing HashiCorp Waypoint 0.6

We are pleased to announce the general availability of HashiCorp Waypoint 0.6. Waypoint is an application deployment tool that aims to deliver a PaaS-like experience for Kubernetes, ECS, and other platforms. Kubernetes is one of the world’s most popular deployment platforms, but there’s still too much work to go from an empty Kubernetes cluster to production-ready application deployments.

In this release, we’ve shipped a Helm-based installation method so you can install Waypoint with familiar tools in a couple of commands. We now also support Helm as an application deployment option to make adopting Waypoint easier in existing environments.

For our YAML-free deployment options, we now support horizontal pod auto-scaling, sidecars, and Kubernetes Ingress. The result of all this work is that whether you’re writing a new application or optimizing an existing one, you can get up and running with Waypoint quickly, and it feels great on Kubernetes.

Here are some of the significant features in this release:

  • Helm-based server install: The Waypoint server can now be installed into Kubernetes clusters using an official Helm chart. Installations on other platforms remain unchanged.
  • Docker builds in Kubernetes: Waypoint can now build Docker images directly in Kubernetes pods, enabling a secure, self-hosted remote Docker build environment.
  • Helm-based application deployment: You can now deploy your applications from Helm charts. If you already use Helm, this lets you adopt Waypoint with almost no additional work.
  • Kubernetes resources in the UI: The Kubernetes resources created for a deployment (by Waypoint, Helm, or any other plugin) are now listed in the UI along with their health status.
  • Kubernetes auto-scaling deployments: Users can now configure a horizontal autoscaler for their Waypoint deployments in Kubernetes when using a metrics server.
  • Kubernetes Ingress for releases: Waypoint now supports releasing deployments through an ingress controller by configuring an ingress resource for your deployments.

This release includes many additional new features, workflow enhancements, general improvements, and bug fixes. The Waypoint 0.6 changelog contains a detailed list of all changes in this release.

»Helm-Based Server Install

Waypoint 0.6 comes with a new, official Helm chart that provides a Kubernetes-native way to install the Waypoint server. The Waypoint Helm chart also allows external tooling like Terraform, using the Helm provider, to install and configure Waypoint in your Kubernetes cluster.
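
For example, a minimal Terraform configuration using the Helm provider might look like this sketch (the release name is illustrative, and the Helm provider is assumed to be configured for the target cluster):

resource "helm_release" "waypoint" {
  name       = "waypoint"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "waypoint"
}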

Helm makes installing Waypoint on Kubernetes straightforward:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories

$ helm install waypoint hashicorp/waypoint

Once the installation has completed and the Waypoint server is up and running, you can log in and set up your local CLI with an authentication token. From the same machine that ran the helm install, run the following command to perform an initial login to the newly installed Waypoint server:

$ waypoint login -from-kubernetes

You can then run waypoint ui to open the web UI. Please see the Installing Waypoint for Kubernetes documentation for more details.

»Docker Builds in Kubernetes Pods

Waypoint now integrates with Kaniko to support building Docker images directly within unprivileged Kubernetes pods and enable remote Docker builds within a trusted environment. When paired with our GitOps workflow, this allows an entire build-to-deploy lifecycle powered by Waypoint.

[Diagram: Waypoint building Docker images inside Kubernetes pods with Kaniko]

Waypoint previously required a privileged execution environment that could be difficult to configure within hosted Kubernetes providers. Now, Waypoint works out of the box with all major Kubernetes providers. This requires no additional configuration and automatically happens for Waypoint build operations triggered by Git or the -remote flag.
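
In practice, this needs no build-specific configuration in the waypoint.hcl file. A standard Docker build stanza like the sketch below (the app name and registry host are illustrative) is built in-cluster when the operation runs remotely:

app "my-app" {
  build {
    use "docker" {}

    registry {
      use "docker" {
        # Where the built image is pushed (host is illustrative)
        image = "registry.example.com/my-app"
        tag   = gitrefpretty()
      }
    }
  }

  # deploy and release stanzas omitted
}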

We also added a guide to using Waypoint with externally built images. Many people adopting Waypoint already have a workflow to build container images, such as through CI. In this scenario, Waypoint doesn’t need to build the image, but it still needs to know the name of the resulting image so it can be used for deployment. This guide explains how to integrate externally built images into a Waypoint workflow.
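
As a sketch of that scenario, Waypoint’s docker-pull builder can reference an image built elsewhere instead of building one itself (the image name and tag are illustrative):

build {
  use "docker-pull" {
    image = "registry.example.com/my-app"
    tag   = "1.2.3"
  }
}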

»Helm-Based Application Deployment

Waypoint now supports deploying applications using Helm. If you already use Helm or prefer to use Helm, this is the perfect option for deploying to Kubernetes with Waypoint. For people adopting Waypoint with existing applications, this is a great way to get started with Waypoint without feeling like you’re going “all-in.”

Configuring Waypoint to deploy with Helm only requires a few lines of configuration:

app "my-app" {
  deploy {
    use "helm" {
      name  = "my-app"
      chart = "${path.app}/helm"

      set {
        name  = "image.repository"
        value = artifact.image
      }

      set {
        name  = "image.tag"
        value = artifact.tag
      }
    }
  }
}

One big benefit of using Helm with Waypoint is access to dynamic information such as the artifact image and tag, as noted above. This artifact may come from a Waypoint-run build step or an externally built image.

Another benefit of Helm is that it enables you to use any available Kubernetes resource since you can write any YAML resource description you want. While Waypoint provides an opinionated Kubernetes plugin that allows deployments without YAML and minimal configuration, this plugin comes at the cost of not supporting every feature of Kubernetes. Having access to Helm provides an additional first-class option.

And, you can always use different deployment plugins for other applications. One application may use Helm, another may use our opinionated Kubernetes plugin, and another may not be on Kubernetes at all. But within the Waypoint workflow, it is all uniform.

»Resource Listing in the UI

The Waypoint web UI now shows a listing of all of the platform resources created by a deployment and release. This works with all deployment plugins: every resource created is shown in the listing, regardless of which instantiation method is used in Kubernetes.

[Screenshot: Kubernetes resources and their health status listed in the Waypoint UI]

This allows users of Waypoint to quickly diagnose any issues. A common scenario we found in earlier versions of Waypoint was that the Waypoint deployment succeeded, but a configuration error caused the deployed application to be broken. For example, an environment variable to connect to the database might be missing. Now, users can see that while the Waypoint deployment succeeded, the launched resources may still have errors they need to resolve.

Future versions of Waypoint will continue to add additional functionality to the resource listing, such as the ability to view more details, see diffs between deployment versions, notify on status changes, and more.

»Kubernetes Auto-Scaling Deployments

In earlier versions of Waypoint, only a single pod was created when using the opinionated Kubernetes plugin. With 0.6, Waypoint gains the ability to scale an application horizontally (increase or decrease the number of pods) by setting the min_replicas and max_replicas fields within the autoscale stanza in a Waypoint configuration file.

A basic use case of this feature can be implemented as follows:

app "my-app" {
  build { ... }

  deploy {
    use "kubernetes" {
      cpu {
        request = "250m"
        limit   = "500m"
      }

      autoscale {
        min_replicas = 2
        max_replicas = 5
        cpu_percent  = 75
      }
    }
  }

  release { ... }
}

As you can see, configuring horizontal pod autoscaling requires only about eight lines of configuration. This highlights a huge benefit of using our opinionated Kubernetes plugin: for typical applications such as web services, you can avoid writing hundreds of lines of YAML and focus on just getting your application deployed.

If you require more flexibility and configuration, we always support Helm and kubectl apply as first-class deployment plugins, but you have to manually configure features such as pod autoscaling.
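
For reference, switching an application to the kubectl apply plugin is mostly a matter of pointing it at a directory of manifests, as in this sketch (the path is illustrative):

deploy {
  use "kubernetes-apply" {
    # Directory of Kubernetes manifests to apply
    path = "${path.app}/k8s"
  }
}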

»Kubernetes Ingress for Releases

Waypoint now has an additional way to configure and release your deployments in Kubernetes when using our opinionated Kubernetes deployment plugin. Users can now create an ingress resource for a release.

Unlike the other release options Waypoint supports, an ingress resource can be configured to match certain inbound traffic and route it through the ingress controller to an application’s deployment. The ingress resource uses an existing ingress controller to route traffic per release, rather than spinning up an additional load balancer, which may cost extra money and take longer to initialize.

Configuring an ingress resource for a release is as simple as defining an ingress stanza:

app "my-microservice" {
  build { ... }

  deploy {
    use "kubernetes" {
      probe_path = "/"
    }
  }

  release {
    use "kubernetes" {
      ingress "http" {
        path_type = "Prefix"
        path      = "/"
      }
    }
  }
}

»What’s Next for Waypoint?

There are many more features and improvements in Waypoint 0.6, but they are too numerous to detail in this post. For a complete listing of changes in Waypoint 0.6, please see the CHANGELOG.

One of our primary focuses for future releases is improving workflows around multiple environments (staging, production, etc.). Working with multiple environments in Kubernetes today is a very manual and error-prone process. We hope to bring significant automation and opinionated workflows to streamline it.

We hope you enjoy Waypoint 0.6!


Source: HashiCorp Blog