Qstack Application Orchestration - starter guide

About Kubernetes

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. The objective of Kubernetes is to abstract away the complexity of managing a fleet of containers, which represent packaged applications that include everything needed to run wherever they’re provisioned.

Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user's intentions.

Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features. It also allows you to make maximal use of your hardware.

About Qstack AO

Qstack’s AO module provides an easy-to-use interface for deploying and managing containerized applications on top of Kubernetes clusters, instead of requiring the use of Kubernetes’ native kubectl command-line interface. For more technical users, the Kubernetes API is also exposed in the AO module, and a kubectl console is included.

About containerized applications

A containerized “application” is a collection of microservices that together deliver a meaningful service to its users. Each microservice has a particular role and usually runs on a dedicated pod (a set of containers). Applications can be deployed via customizable YAML scripts, from a Docker repository (create from image), or from pre-packaged Helm charts, which are Kubernetes-ready applications maintained in an official Helm registry.



When an application is deployed from a YAML script, the application “image” (or container image file) is by default obtained from the Docker Hub repository, but it can also be pulled from any correctly configured user-defined Docker registry. Parameters for the URL can be set in the YAML script itself or in the AO UI. YAML scripts offer the potential of customizing an application, but they are usually limited to an individual application, e.g. an Nginx or Apache web server, as opposed to a set of apps that together deliver a larger application.
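As an illustrative sketch, a YAML deployment that pulls its image from a user-defined registry instead of Docker Hub simply uses a fully qualified image URL (the registry host, project, and names below are placeholder assumptions, not real endpoints):

```yaml
# Illustrative only: registry.example.com/myproject and my-nginx are placeholders
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        # Fully qualified image URL: <registry-host>/<project>/<image>:<tag>
        image: registry.example.com/myproject/nginx:1.7.9
```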


A Helm chart (or blueprint) is a set of templates that describe everything required to set up an application in Kubernetes. When deploying an application using a Helm chart, Helm converts the templates into the Kubernetes YAML files required to automatically deploy the necessary components. An example is the WordPress chart, which deploys frontend and database components and configures the passwords, disk allocations, and IP reservations required for a standalone application.
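For example, when launching the WordPress chart, the editable script corresponds to the chart’s values file. Below is a minimal sketch of commonly overridden values; the value names are assumptions based on the public WordPress Helm chart and may differ between chart versions:

```yaml
# Hypothetical values override for the WordPress chart
wordpressBlogName: "My Blog"
wordpressUsername: admin
wordpressPassword: "s3cret"
persistence:
  size: 10Gi                # disk allocation for the blog content
serviceType: LoadBalancer   # reserves an external IP for the frontend
```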

Creating a new application

As described above, a containerized application is a collection of microservices that together deliver a meaningful service, deployed via customizable YAML scripts or Helm charts, the latter maintained by Deis in cooperation with Google and others.

When creating a new application, the user needs to decide how to deploy it: through a YAML file, a Helm chart/blueprint, or from the native command-line interface (kubectl). The AO module simplifies application deployment by offering an easy-to-use UI that focuses on Helm charts and YAML files.

Option 1: Creating application with YAML file








  • Type or paste in the YAML file into the editor

  • Normally the “Namespace” is kept as default, unless you want to group resources into multiple namespaces. A namespace is like a prefix to the name of a resource. Namespaces help different projects, environments (e.g. dev and production), teams, or customers share the same cluster. It does this by preventing name collisions.
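As a sketch of how a namespace acts as a prefix, the same resource name can exist in both a “dev” and a “production” namespace without colliding (the names below are examples):

```yaml
# Two Services named "nginx" can coexist because they live in
# different namespaces (dev/nginx vs. production/nginx)
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: dev   # an identical "nginx" Service may exist in "production"
spec:
  ports:
  - port: 80
  selector:
    run: nginx
```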


Option 2: Create application with a blueprint (Helm chart)

  • Same as above, set the name and then select the application/blueprint you want to deploy

    Before launching the blueprint application (chart), you can configure the application script in the editor shown, e.g. change the blog name, password, etc. (in the case of a WordPress installation)

Application/Pod management 

When the application has been created, it is displayed in the AO overview window




  • Qstack supports exporting services of the LoadBalancer type, automatically creating a load balancer and allocating an IP address for each.

  • The overview window also displays the number of pods for each running application. This can easily be scaled by clicking on the +/- signs.


Clicking on any application name will bring you to the “application detail page” 


  • This page shows a number of configuration options that can be used to customize the application

    • “Pod limits”. Sets the minimum and maximum CPU and RAM for the pods

    • “Autoscaling”. Allows autoscaling of the application by setting a minimum and maximum number of pods. The target CPU % determines when a new pod is created or destroyed, i.e. when the load exceeds the target limit
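These settings map onto a Kubernetes HorizontalPodAutoscaler. A minimal sketch, where the deployment name and thresholds are example values:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx        # the application deployment to scale
  minReplicas: 1       # minimum number of pods
  maxReplicas: 5       # maximum number of pods
  targetCPUUtilizationPercentage: 80   # target CPU %: pods are added above, removed below
```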

    • “Images”. Can be used to update a Docker container image or to pin a particular release. Qstack will then update the image according to its settings in the image repository. By default, Qstack’s AO fetches and updates containers from Docker Hub

    • “Show YAML”. The YAML file for the deployment can be displayed and copied.  

The lower part of the window displays the number of pods running, on which node in the cluster they are running and several other details.

Here the user can display logs for each pod, display the YAML file for the pod and delete a pod 


Cluster management

The AO overview page has a “Show details” button for the underlying cluster



  • Click the “Show details” button to open the Cluster management page (see below)

  • Kubernetes version: shows the current Kubernetes version used in Qstack

  • Node size: shows the selected cluster size when created (service offering)

  • Template: the underlying OS running on the nodes in the cluster

  • Running:

    • Show config: displays the configuration of the cluster

    • Download config: downloads the config file, which can be used to access the cluster with CLI tools like kubectl (see below)

    • Delete cluster: to completely delete/remove the cluster (along with all existing applications)

  • Name: name of nodes in the cluster

  • IP addresses: the IP addresses of the nodes in the cluster. The master node has a public IP, while the worker nodes only have private IPs.

  • Zone: the zone to which the cluster’s nodes belong

  • Ready, Health, Metrics: shows health status of the nodes

  • Action: to display the YAML file

Other ways of creating and managing applications

Managing applications with YAML files from the command line is also possible using kubectl (which needs to be installed on the client machine)

  • Use kubectl. This is the command-line interface for the cluster. To start working with kubectl, first download and install the kubectl CLI tool, e.g. into a directory called “kubernetes”. The kubectl tool can be downloaded from https://kubernetes.io/docs/user-guide/prereqs/

  • You can then download the “cluster config file” (shown above) and copy it into another directory, e.g. one called “.kube”. You have then created a connection to the cluster

  • Start using the kubectl CLI tools, see https://kubernetes.io/docs/user-guide/ 
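The steps above can be sketched as the following terminal session; the paths and file names are examples, assuming a Unix-like client machine:

```shell
# 1. Put the downloaded kubectl binary on the PATH
mkdir -p ~/kubernetes && export PATH=$PATH:~/kubernetes

# 2. Copy the downloaded cluster config into the default kubectl location
mkdir -p ~/.kube
cp ~/Downloads/cluster-config ~/.kube/config

# 3. Verify the connection to the cluster
kubectl get nodes
kubectl get pods --all-namespaces
```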

General info and terminology

Qstack’s new Application Orchestration (AO) module allows DevOps teams to easily deploy applications on top of Kubernetes clusters. At the center, a Kubernetes cluster contains pods: groups of Docker-based containers that can run any container-ready application deployed through YAML scripts or preconfigured Helm charts.

Instead of the traditional way of running apps on hosts, including virtual machines, container-based applications leverage OS-level virtualization and support portability, scalability and self-healing capabilities. The Kubernetes clustering provides a layer for managing or orchestrating multiple different services into a single coherent application.

The AO module is especially powerful when it comes to deploying and managing applications on top of the Kubernetes cluster. When a new application is deployed, the user can set the number of replicated pods to enhance reliability and high availability.


A cluster is a group of physical or virtual nodes tied together for deploying scalable container based applications onto.

  • A new cluster will be created for each user the first time the user deploys a containerized application

  • Cluster size is 3 nodes by default, but the size can be scaled manually when creating a new application or afterwards.

  • One of the nodes in the cluster becomes the “master” node, managing the other “worker” nodes and supporting administration tasks, including the API and CLI interfaces


Pods are a group of one or more containers, their shared storage, and options about how to run them. Each pod gets its own IP address.

  • The AO module will automatically and dynamically scale the number of pods within the cluster, depending on the load.

  • Users can either upload their own YAML-based scripts for deploying a new application or use any of the preconfigured Helm charts directly from the official Helm repository

  • Each application typically comprises several pods, where each pod runs a part of the application that solves a well-defined task. Each pod can be scaled independently, either manually or automatically by Kubernetes


YAML file example

Below is an example of a simple YAML configuration file that deploys an nginx web server on 3 pods (replicas).

Explanations are given as inline # comments.

apiVersion: extensions/v1beta1
kind: Deployment                        # sets “Deployment” as the type
metadata:
  labels:
    run: nginx                          # labels for this deployment (overlap is allowed)
  name: nginx                           # sets the name for the deployment
spec:
  replicas: 3                           # the number of replicated pods created in the cluster
  selector:
    matchLabels:
      run: nginx                        # which pods this deployment comprises
  template:
    metadata:
      labels:
        run: nginx                      # the labels to apply to this pod
    spec:                               # defines how the pod is built
      containers:                       # container 1 of 1
      - name: nginx                     # name of the container
        image: nginx:1.7.9              # fetches the image from the repository
        imagePullPolicy: IfNotPresent   # pull the image only if it is not already present
        ports:                          # the ports the container is listening on
        - containerPort: 80             # exposes the container port used to connect to the nginx server
---
apiVersion: v1                          # v1 is the version for the service
kind: Service                           # provides a single stable name and address for a set of pods
metadata:
  name: nginx                           # name of the service
  namespace: default                    # the “prefix” name for the resource; “default” if no other is selected
spec:
  ports:
  - port: 80                            # the externally exposed port
    protocol: TCP
    targetPort: 80                      # the target port on the pods
  selector:
    run: nginx                          # the labels the pods have that this service exposes
  type: LoadBalancer                    # creates a load balancer







