
# Gitpod

Always ready-to-code.

## Installer

The best way to get started with Gitpod is to use our recommended and default installation method, described in our documentation. That method wraps this installer in a UI that helps you manage, update and configure Gitpod in a streamlined way. This document describes how to use the installer directly.

The installer is an internal tool and, as such, is not expected to be used by anyone outside Gitpod.

## Requirements

You will need `kubectl`, configured for access to your Kubernetes cluster. Or, open a Gitpod workspace.

The process to install Gitpod is:

1. Generate a base config
2. Amend the config for your own use case
3. Validate the config and the cluster
4. Render the Kubernetes YAML
5. `kubectl apply` it

## Quickstart

### Download the Installer on Linux

Releases can be downloaded from GitHub Releases. Select your desired binary, then download and install it:

1. Download the latest release:

   ```shell
   curl -fsSLO https://github.com/gitpod-io/gitpod/releases/latest/download/gitpod-installer-linux-amd64
   ```

2. Validate the binary (optional). First, download the checksum file:

   ```shell
   curl -fsSLO https://github.com/gitpod-io/gitpod/releases/latest/download/gitpod-installer-linux-amd64.sha256
   ```

   Then validate the binary against the checksum file:

   ```shell
   echo "$(<gitpod-installer-linux-amd64.sha256)" | sha256sum --check
   ```

   If valid, the output is:

   ```
   gitpod-installer-linux-amd64: OK
   ```

3. Install the binary:

   ```shell
   sudo install -o root -g root gitpod-installer-linux-amd64 /usr/local/bin/gitpod-installer
   ```

4. Check that the version you installed is up to date:

   ```shell
   gitpod-installer version
   ```

### Generate the base config

```shell
gitpod-installer config init -c gitpod.config.yaml
```

### Customise your config

There are many things you can change in your config; they are described in the Config Guide.

For the purposes of a quickstart, just change the domain to one of your own.
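For illustration, the line to change looks like this (a minimal excerpt; the value is a placeholder):

```yaml
# gitpod.config.yaml (excerpt) - replace with a domain you control
domain: gitpod.example.com
```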

### Validate

```shell
# Checks the validity of the configuration YAML
gitpod-installer validate config --config gitpod.config.yaml
```

Any errors here must be fixed before deploying. See Config for more details.

```shell
# Checks that your cluster is ready to install Gitpod
gitpod-installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml
```

Any errors here must be fixed before deploying. See Cluster Dependencies for more details.

### Render the YAML

```shell
gitpod-installer render --config gitpod.config.yaml > gitpod.yaml
```

### Deploy

```shell
kubectl apply -f gitpod.yaml
```

After a few minutes, your Gitpod installation will be available on the specified domain.
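While the deployment is coming up, you can follow progress with standard kubectl (a generic sketch, not specific to Gitpod; adjust the namespace to wherever you rendered Gitpod):

```shell
# Watch pods until they all report Running or Completed
kubectl get pods --namespace default --watch
```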

## Uninstallation

The Installer generates a ConfigMap with the metadata of every Kubernetes object generated by the Installer. This can be retrieved to remove Gitpod from your cluster.

```shell
# Piping the retrieved manifest into kubectl deletes the objects automatically
kubectl get configmaps gitpod-app -o jsonpath='{.data.app\.yaml}' \
  | kubectl delete -f -
```

**Important**: this may leave certain objects in your Kubernetes cluster, including Secrets generated from internal certificates and PersistentVolumeClaims. These will need to be deleted manually.
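To find such leftovers after uninstalling, you can list the relevant resource types (a generic inspection command; adjust the namespace as needed):

```shell
# List Secrets and PersistentVolumeClaims that may remain after uninstall
kubectl get secrets,pvc --namespace default
```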

Batch jobs are not included in this ConfigMap by design. These have `ttlSecondsAfterFinished` defined in their spec and so will be deleted shortly after the jobs have run.

## Advanced topics

### Post-processing the YAML

Here be dragons.

Whilst you are welcome to post-process your YAML should the need arise, it is not recommended and is entirely unsupported. Do so at your own risk.

The Gitpod Installer is designed as a way of providing you a robust and well-tested framework for installing Gitpod to your own infrastructure. There may be times when this framework doesn't meet your individual requirements. In these situations, you should post-process the generated YAML.

As an example, this changes the `proxy` service to type `ClusterIP` instead of `LoadBalancer` using yq:

```shell
yq eval-all --inplace \
  '(select(.kind == "Service" and .metadata.name == "proxy") | .spec.type) |= "ClusterIP"' \
  gitpod.yaml
```

Similarly, if you are doing a Workspace-only install (specifying `Workspace` as the `kind` in config), you might want to change the service type of `ws-proxy` to `ClusterIP` instead of the default `LoadBalancer`. You can post-process the YAML to change that:

```shell
yq eval-all --inplace \
  '(select(.kind == "Service" and .metadata.name == "ws-proxy") | .spec.type) |= "ClusterIP"' \
  gitpod.yaml
```

### Error validating StatefulSet.status

```
error: error validating "gitpod.yaml": error validating data: ValidationError(StatefulSet.status): missing required field "availableReplicas" in io.k8s.api.apps.v1.StatefulSetStatus; if you choose to ignore these errors, turn validation off with --validate=false
```

Depending upon your Kubernetes implementation, you may receive this error. This is due to a bug in the underlying StatefulSet dependency, which is used to generate the OpenVSX proxy (see #8529).

To fix this, you will need to post-process the rendered YAML to remove the `status` field:

```shell
yq eval-all --inplace \
  'del(select(.kind == "StatefulSet" and .metadata.name == "openvsx-proxy").status)' \
  gitpod.yaml
```

## What is installed

  • All Gitpod components

  • Container registry*

  • MySQL database*

  • Minio object storage*

\* By default, these dependencies are installed if the `inCluster` setting is `true`. External dependencies can be used in their place.

## Config

Not every parameter is discussed in this table, just the ones that are likely to need changing. The full config structure is available in `config.go`.

| Property | Required | Description | Notes |
| --- | --- | --- | --- |
| `domain` | Y | The domain to deploy to | This will need to be changed on every deployment |
| `kind` | Y | Installation type to run - for most users, this will be `Full` | Available options are:<br/>• `Meta`: to install the tools that make up the front-end facing side of Gitpod<br/>• `Workspace`: to install the components that make up the Gitpod Workspaces<br/>• `Full`: to install the complete setup, i.e. both `Meta` and `Workspace` |
| `metadata.region` | Y | Location for your `objectStorage` provider | If using Minio, set to `local` |
| `workspace.runtime.containerdRuntimeDir` | Y | The location of containerd on the host machine | Common values are:<br/>• `/run/containerd/io.containerd.runtime.v2.task/k8s.io` (K3s)<br/>• `/var/lib/containerd/io.containerd.runtime.v2.task/k8s.io` (AWS/GCP)<br/>• `/run/containerd/io.containerd.runtime.v1.linux/k8s.io`<br/>• `/run/containerd/io.containerd.runtime.v1.linux/moby` |
| `workspace.runtime.containerdSocket` | Y | The location of the containerd socket on the host machine | |
| `workspace.runtime.fsShiftMethod` | Y | File system | Can be `shiftfs` |
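Putting the required properties together, a config fragment might look like the following sketch. The `containerdSocket` path is an assumption for illustration (a common containerd default); use the value that matches your nodes.

```yaml
# Illustrative fragment of gitpod.config.yaml - values are examples only
domain: gitpod.example.com
kind: Full
metadata:
  region: local   # "local" when using Minio
workspace:
  runtime:
    # K3s path shown; see the table above for other distributions
    containerdRuntimeDir: /run/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock   # assumed default path
    fsShiftMethod: shiftfs
```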

### Auth Providers

Gitpod must be connected to a Git provider. This can be done via the dashboard on first load, or by providing `authProviders` configuration as a Kubernetes secret.

#### Setting via config

1. Update your configuration file:

   ```yaml
   authProviders:
     - kind: secret
       name: public-github
   ```

2. Create a secret file:

   ```yaml
   # Save this as public-github.yaml
   id: Public-GitHub
   host: github.com
   type: GitHub
   oauth:
     clientId: xxx
     clientSecret: xxx
     callBackUrl: https://$DOMAIN/auth/github.com/callback
     settingsUrl: xxx
   ```

3. Create the secret:

   ```shell
   kubectl create secret generic --from-file=provider=./public-github.yaml public-github
   ```

## In-cluster vs External Dependencies

Gitpod requires certain services in order to function correctly. The Installer provides all of these in-cluster, but they can be configured to use services external to the cluster.

To use an in-cluster dependency, set `inCluster` to `true`.

### Container Registry

```yaml
containerRegistry:
  inCluster: false
  external:
    url: <url of registry>
    certificate:
      kind: secret
      name: container-registry-token
```

The `container-registry-token` secret must contain a `.dockerconfigjson` key - this can be created by using the `kubectl create secret docker-registry` command.
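For example, the secret could be created like this (hypothetical registry URL and credentials; substitute your own):

```shell
# Creates a secret containing a .dockerconfigjson key
kubectl create secret docker-registry container-registry-token \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<password>
```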

#### Using Amazon Elastic Container Registry (ECR)

Gitpod is compatible with any registry that implements the Docker Registry HTTP API V2 specification. Amazon ECR does not implement this spec fully: the spec expects that, if an image is pushed to a repository that doesn't exist, the registry creates the repository before uploading the image. Amazon ECR does not do this - if the repository doesn't exist, the push fails with an error.

To configure Gitpod to use Amazon, you will need to use the in-cluster registry and configure it to use S3 as the backend storage:

```yaml
containerRegistry:
  inCluster: true
  s3storage:
    bucket: <name of bucket>
    certificate:
      kind: secret
      name: s3-storage-token
```

The secret is expected to have two keys:

- `s3AccessKey`
- `s3SecretKey`
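A sketch of creating that secret with kubectl (placeholder values):

```shell
kubectl create secret generic s3-storage-token \
  --from-literal=s3AccessKey=<access-key> \
  --from-literal=s3SecretKey=<secret-key>
```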

### Database

Gitpod requires an instance of MySQL 5.7 for data storage.

The default encryption keys are:

```json
[{"name":"general","version":1,"primary":true,"material":"4uGh1q8y2DYryJwrVMHs0kWXJlqvHWWt/KJuNi04edI="}]
```

#### Google Cloud SQL Proxy

If using a GCP SQL instance, a Cloud SQL Proxy connection can be used:

```yaml
database:
  inCluster: false
  cloudSQL:
    instance: <PROJECT_ID>:<REGION>:<INSTANCE>
    serviceAccount:
      kind: secret
      name: cloudsql-token
```

The `cloudsql-token` secret must contain the following key/value pairs:

- `credentials.json` - GCP Service Account key with the `roles/cloudsql.client` role
- `encryptionKeys` - database encryption key; use the default value above if unsure
- `password` - database password
- `username` - database username
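As a sketch, the secret could be assembled like this (the local key file name and the placeholder values are illustrative; the `encryptionKeys` value is the default from above):

```shell
# service-account-key.json is a hypothetical local file containing the GCP key
kubectl create secret generic cloudsql-token \
  --from-file=credentials.json=./service-account-key.json \
  --from-literal=encryptionKeys='[{"name":"general","version":1,"primary":true,"material":"4uGh1q8y2DYryJwrVMHs0kWXJlqvHWWt/KJuNi04edI="}]' \
  --from-literal=username=<username> \
  --from-literal=password=<password>
```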

#### External Database

For all other connections, use an external database configuration:

```yaml
database:
  inCluster: false
  external:
    certificate:
      kind: secret
      name: database-token
```

The `database-token` secret must contain the following key/value pairs:

- `encryptionKeys` - database encryption key; use the default value above if unsure
- `host` - IP or URL of the database
- `password` - database password
- `port` - database port, usually `3306`
- `username` - database username

### Object Storage

Gitpod supports the following object storage providers:

#### GCP

```yaml
metadata:
  region: <gcp-region-code, eg europe-west2>
objectStorage:
  inCluster: false
  cloudStorage:
    project: <PROJECT_ID>
    serviceAccount:
      kind: secret
      name: gcp-storage-token
```

The `gcp-storage-token` secret must contain the following key/value pair:

- `service-account.json` - GCP Service Account key with the `roles/storage.admin` and `roles/storage.objectAdmin` roles

#### S3

This is currently only tested with AWS. Other S3-compatible providers should work, but there may be compatibility issues - please raise a ticket if you have problems with another provider.

```yaml
metadata:
  region: <aws-region-code, eg eu-west-2>
objectStorage:
  inCluster: false
  s3:
    endpoint: s3.amazonaws.com
    credentials:
      kind: secret
      name: s3-storage-token
```

The `s3-storage-token` secret must contain the following key/value pairs:

- `accessKeyId` - username that has access to the S3 account
- `secretAccessKey` - password that has access to the S3 account

In AWS, `accessKeyId`/`secretAccessKey` are the credentials of an IAM user with the `AmazonS3FullAccess` policy.

## Cluster Dependencies

In order for the deployment to work successfully, certain dependencies need to be installed.

### Kernel and Runtime

Your Kubernetes nodes must run Linux kernel v5.4.0 or above and have a containerd runtime.
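You can check a node's kernel version with a standard command (run on the node itself, or via a debug pod):

```shell
# Prints the kernel release, e.g. 5.15.0-...; must be v5.4.0 or above
uname -r
```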

### Affinity Labels

Your Kubernetes nodes must have the following labels applied to them:

- `gitpod.io/workload_meta`
- `gitpod.io/workload_ide`
- `gitpod.io/workload_services`
- `gitpod.io/workload_workspace_regular`
- `gitpod.io/workload_workspace_headless`

It is recommended to have a minimum of two node pools, grouping the meta and ide nodes together and the workspace nodes together. An example of applying a label is shown below.
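A label can be applied with standard kubectl, as in this sketch. The `=true` value is an assumption for illustration, and `<node-name>` is a placeholder for one of your nodes:

```shell
# Label a node for meta and ide workloads (label keys from the list above)
kubectl label node <node-name> \
  gitpod.io/workload_meta=true \
  gitpod.io/workload_ide=true
```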

### TLS certificates

A certificate secret must exist, named as per `certificate.name` in your config YAML, with `tls.crt` and `tls.key` in the secret data. How this certificate is generated is entirely your choice - we suggest cert-manager for simplicity, however any certificate authority can be used by creating a Kubernetes secret.

The certificate must be associated with the following domains (where `$DOMAIN` is the value of `domain` in your config):

- `$DOMAIN`
- `*.$DOMAIN`
- `*.ws.$DOMAIN`

See the FAQs for help with creating a TLS certificate using cert-manager.

### cert-manager

cert-manager MUST be installed in your cluster. To secure communication between the various components, the application creates internal certificates using the cert-manager `Certificate` and `Issuer` Custom Resource Definitions.

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade \
  --atomic \
  --cleanup-on-fail \
  --create-namespace \
  --install \
  --namespace='cert-manager' \
  --reset-values \
  --set installCRDs=true \
  --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
  --wait \
  cert-manager \
  jetstack/cert-manager
```

## FAQs

### Why are you writing your own Installer instead of using Helm/Kustomize/etc?

The Installer is a complete replacement for our Helm charts. Over time, the charts had grown too complex to support effectively and were a barrier to entry for new users - the base config was many hundreds of lines long and had no effective validation. By contrast, the Installer's config is under 50 lines long and can be fully validated before running.

Also, by baking the container image versions into each release of the Installer, we reduce the potential for variance, making it easier to support the community.

### How do I use Cert Manager to create a TLS certificate?

Please see cert-manager.io for full documentation. This should be considered a quickstart guide.

There are two steps to creating a public TLS certificate using cert-manager.

1. Create the Issuer/ClusterIssuer

   As the certificate is a wildcard certificate, you must use the DNS01 Challenge Provider. Please consult the cert-manager documentation for instructions. This can be either an `Issuer` or a `ClusterIssuer`.

2. Create the certificate

   Replace `$DOMAIN` with your domain. This example assumes you have created a `ClusterIssuer` called `gitpod-issuer` - please change this if necessary.

   This certificate is called `https-certificates` - please use that name in your Gitpod installer config.

   ```yaml
   apiVersion: cert-manager.io/v1
   kind: Certificate
   metadata:
     name: https-certificates
   spec:
     secretName: https-certificates
     issuerRef:
       name: gitpod-issuer
       kind: ClusterIssuer
     dnsNames:
       - $DOMAIN
       - "*.$DOMAIN"
       - "*.ws.$DOMAIN"
   ```

### How do I use my own TLS certificate?

If you don't wish to use cert-manager to create a TLS certificate with a public certificate authority, you can bring your own.

To do this, generate your certificate as you normally would, then create a secret with the certificate stored under `tls.crt` and the key under `tls.key`.

The DNS names must be `$DOMAIN`, `*.$DOMAIN` and `*.ws.$DOMAIN`, where `$DOMAIN` is your domain.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: https-certificates
data:
  tls.crt: xxx
  tls.key: xxx
```
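Equivalently, you can create the same secret directly from the certificate files using standard kubectl, which base64-encodes the data for you:

```shell
# tls.crt and tls.key are your certificate and private key files
kubectl create secret tls https-certificates \
  --cert=./tls.crt \
  --key=./tls.key
```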

### How can I install to a Kubernetes namespace?

By default, Gitpod is installed to the `default` Kubernetes namespace. To install to a different namespace, pass a `--namespace` flag to the render command:

```shell
gitpod-installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
```

The `validate cluster` command also accepts a namespace, allowing you to run the checks against that namespace:

```shell
gitpod-installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml --namespace gitpod
```

IMPORTANT: this does not create the namespace, so you will need to create it separately. This is so that uninstalling Gitpod does not remove any Kubernetes objects, such as your TLS certificate or connection secrets.

```shell
kubectl create namespace gitpod
```