
Installation Requirements

This topic describes the requirements for installing applications with Replicated KOTS. It includes requirements for installing KOTS in existing clusters and in clusters created with Replicated Embedded Cluster or Replicated kURL.

note

This topic does not include any requirements specific to the application. Ensure that you meet any additional requirements for the application before installing.

Supported Browsers

The following table lists the browser requirements for the Replicated KOTS Admin Console with the latest version of KOTS.

| Browser | Support |
|---|---|
| Chrome | 66+ |
| Firefox | 58+ |
| Opera | 53+ |
| Edge | 80+ |
| Safari (Mac OS only) | 13+ |
| Internet Explorer | Unsupported |

Kubernetes Version Compatibility

Each release of KOTS maintains compatibility with the current Kubernetes version and the two most recent minor versions at the time of its release. This includes support for all patch releases of each compatible Kubernetes version.

Kubernetes versions 1.25 and earlier are end-of-life (EOL). For more information about Kubernetes versions, see Release History in the Kubernetes documentation.

Replicated recommends using a version of KOTS that is compatible with Kubernetes 1.26 and higher.

| KOTS Versions | Kubernetes Compatibility |
|---|---|
| 1.117.0 and later | 1.31, 1.30, 1.29 |
| 1.109.1 to 1.116.1 | 1.30, 1.29, 1.28 |
| 1.105.2 to 1.109.0 | 1.29, 1.28 |

Existing Cluster Requirements

To install KOTS in an existing cluster, your environment must meet the following minimum requirements.

Minimum System Requirements

To install the Admin Console on an existing cluster, the cluster must meet the following requirements:

  • Admin Console minimum requirements: Existing clusters that have LimitRanges specified must support the following minimum requirements for the Admin Console:

    • CPU resources and memory: The Admin Console pod requests 100m CPU resources and 100Mi memory.

    • Disk space: The Admin Console requires a minimum of 5GB of disk space on the cluster for persistent storage, including:

      • 4GB for S3-compatible object store: The Admin Console requires 4GB for an S3-compatible object store to store application archives, support bundles, and snapshots that are configured to use a host path or NFS storage destination. By default, KOTS deploys MinIO to satisfy this object storage requirement. During deployment, MinIO is configured with a randomly generated AccessKeyID and SecretAccessKey, and is exposed only as a ClusterIP on the overlay network.

        note

        You can optionally install KOTS without MinIO by passing --with-minio=false with the kots install command. This installs KOTS as a StatefulSet using a persistent volume (PV) for storage. For more information, see Installing Without Object Storage.

      • 1GB for rqlite PersistentVolume: The Admin Console requires 1GB for an rqlite StatefulSet to store version history, application metadata, and other small amounts of data needed to manage the application(s). During deployment, the rqlite component is secured with a randomly generated password, and is exposed only as a ClusterIP on the overlay network.

  • Supported operating systems: The following are the supported operating systems for nodes:

    • Linux AMD64
    • Linux ARM64
  • Available StorageClass: The cluster must have an existing StorageClass available. KOTS creates the required stateful components using the default StorageClass in the cluster. For more information, see Storage Classes in the Kubernetes documentation.

  • Kubernetes version compatibility: The version of Kubernetes running on the cluster must be compatible with the version of KOTS that you use to install the application. This compatibility requirement does not include any specific and additional requirements defined by the software vendor for the application.

    For more information about the versions of Kubernetes that are compatible with each version of KOTS, see Kubernetes Version Compatibility above.

  • OpenShift version compatibility: For Red Hat OpenShift clusters, the version of OpenShift must use a supported Kubernetes version. For more information about supported Kubernetes versions, see Kubernetes Version Compatibility above.

  • Port forwarding: To support port forwarding, Kubernetes clusters require that the SOcket CAT (socat) package is installed on each node.

    If the package is not installed on each node in the cluster, you see the following error message when the installation script attempts to connect to the Admin Console: unable to do port forwarding: socat not found.

    To check if the package that provides socat is installed, run which socat. If the package is installed, the which socat command prints the full path to the socat executable, for example, /usr/bin/socat.

    If the output of the which socat command is socat not found, then you must install the package that provides the socat command. The name of this package can vary depending on the node's operating system.
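As an illustrative preflight, the StorageClass and socat checks described above can be combined into a short script. This is a sketch, not an official preflight: it assumes kubectl is configured against the target cluster, and the socat package name varies by distribution.

```shell
#!/bin/sh
# Preflight sketch for an existing-cluster KOTS install: checks for an
# available StorageClass and for socat on this node.

# StorageClass check; skipped gracefully when kubectl or a cluster is
# unreachable from this machine.
kubectl get storageclass 2>/dev/null \
  || echo "kubectl unavailable or no cluster reachable from this machine"

# socat check; without it, Admin Console port forwarding fails with
# "unable to do port forwarding: socat not found".
if path=$(command -v socat 2>/dev/null); then
  echo "socat found at ${path}"
else
  # Package name varies by node OS; "socat" is typical (assumption).
  echo "socat not found; install the package that provides it"
fi
```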

RBAC Requirements

The user that runs the installation command must have at least the minimum role-based access control (RBAC) permissions that are required by KOTS. If the user does not have the required RBAC permissions, then an error message displays: Current user has insufficient privileges to install Admin Console.

The required RBAC permissions vary depending on whether the user installs KOTS with cluster-scoped access or namespace-scoped access:

Cluster-scoped RBAC Requirements (Default)

By default, KOTS requires cluster-scoped access. With cluster-scoped access, a Kubernetes ClusterRole and ClusterRoleBinding are created that grant KOTS access to all resources across all namespaces in the cluster.

To install KOTS with cluster-scoped access, the user must meet the following RBAC requirements:

  • The user must be able to create workloads, ClusterRoles, and ClusterRoleBindings.
  • The user must have cluster-admin permissions to create namespaces and assign RBAC roles across the cluster.
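These requirements can be checked before installing with kubectl auth can-i. This is a sketch under the assumption that kubectl is configured against the target cluster; it is guarded so it degrades to a message where kubectl is unavailable.

```shell
#!/bin/sh
# Verify the current user can create the cluster-scoped objects that a
# default (cluster-scoped) KOTS install needs. "yes" for each check means
# the install should not hit the insufficient-privileges error.
for check in "create clusterroles" "create clusterrolebindings" "create namespaces"; do
  kubectl auth can-i $check 2>/dev/null \
    || echo "cannot verify: ${check} (kubectl unavailable or permission denied)"
done
```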

Namespace-scoped RBAC Requirements

KOTS can be installed with namespace-scoped access rather than the default cluster-scoped access. With namespace-scoped access, a Kubernetes Role and RoleBinding are automatically created that grant KOTS permissions only in the namespace where it is installed.

note

Depending on the application, namespace-scoped access for KOTS is required, optional, or not supported. Contact your software vendor for application-specific requirements.

To install or upgrade KOTS with namespace-scoped access, the user must have one of the following permission levels in the target namespace:

  • Wildcard permissions (Default): By default, when namespace-scoped access is enabled, KOTS attempts to automatically create the following Role to acquire wildcard (* * *) permissions in the target namespace:

    apiVersion: "rbac.authorization.k8s.io/v1"
    kind: "Role"
    metadata:
      name: "kotsadm-role"
    rules:
    - apiGroups: ["*"]
      resources: ["*"]
      verbs: ["*"]

    To support this default behavior, the user must also have * * * permissions in the target namespace.

  • Minimum KOTS RBAC permissions: In some cases, it is not possible to grant the user * * * permissions in the target namespace. For example, an organization might have security policies that prevent this level of permissions.

    If the user installing or upgrading KOTS cannot be granted * * * permissions in the namespace, then they can instead request the minimum RBAC permissions required by KOTS. Using the minimum KOTS RBAC permissions also requires manually creating a ServiceAccount, Role, and RoleBinding for KOTS, rather than allowing KOTS to automatically create a Role with * * * permissions.

    To use the minimum KOTS RBAC permissions to install or upgrade:

    1. Ensure that the user has the minimum RBAC permissions required by KOTS. The following lists the minimum RBAC permissions:

      - apiGroups: [""]
        resources: ["configmaps", "persistentvolumeclaims", "pods", "secrets", "services", "limitranges"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["apps"]
        resources: ["daemonsets", "deployments", "statefulsets"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["batch"]
        resources: ["jobs", "cronjobs"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["networking.k8s.io", "extensions"]
        resources: ["ingresses"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: [""]
        resources: ["namespaces", "endpoints", "serviceaccounts"]
        verbs: ["get"]
      - apiGroups: ["authorization.k8s.io"]
        resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
        verbs: ["create"]
      - apiGroups: ["rbac.authorization.k8s.io"]
        resources: ["roles", "rolebindings"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["pods/log", "pods/exec"]
        verbs: ["get", "list", "watch", "create"]
      - apiGroups: ["batch"]
        resources: ["jobs/status"]
        verbs: ["get", "list", "watch"]
      note

      The minimum RBAC requirements can vary slightly depending on the cluster's Kubernetes distribution and the version of KOTS. Contact your software vendor if you have the required RBAC permissions listed above and you see an error related to RBAC during installation or upgrade.

    2. Save the following ServiceAccount, Role, and RoleBinding to a single YAML file, such as rbac.yaml:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        labels:
          kots.io/backup: velero
          kots.io/kotsadm: "true"
        name: kotsadm
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        labels:
          kots.io/backup: velero
          kots.io/kotsadm: "true"
        name: kotsadm-role
      rules:
      - apiGroups: [""]
        resources: ["configmaps", "persistentvolumeclaims", "pods", "secrets", "services", "limitranges"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["apps"]
        resources: ["daemonsets", "deployments", "statefulsets"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["batch"]
        resources: ["jobs", "cronjobs"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["networking.k8s.io", "extensions"]
        resources: ["ingresses"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: [""]
        resources: ["namespaces", "endpoints", "serviceaccounts"]
        verbs: ["get"]
      - apiGroups: ["authorization.k8s.io"]
        resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
        verbs: ["create"]
      - apiGroups: ["rbac.authorization.k8s.io"]
        resources: ["roles", "rolebindings"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["pods/log", "pods/exec"]
        verbs: ["get", "list", "watch", "create"]
      - apiGroups: ["batch"]
        resources: ["jobs/status"]
        verbs: ["get", "list", "watch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        labels:
          kots.io/backup: velero
          kots.io/kotsadm: "true"
        name: kotsadm-rolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: kotsadm-role
      subjects:
      - kind: ServiceAccount
        name: kotsadm
    3. If the application contains any Custom Resource Definitions (CRDs), add the CRDs to the Role in the YAML file that you created in the previous step with as many permissions as possible: ["get", "list", "watch", "create", "update", "patch", "delete"].

      note

      Contact your software vendor for information about any CRDs that are included in the application.

      Example

      rules:
      - apiGroups: ["stable.example.com"]
        resources: ["crontabs"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    4. Run the following command to create the RBAC resources for KOTS in the namespace:

      kubectl apply -f RBAC_YAML_FILE -n TARGET_NAMESPACE

      Replace:

      • RBAC_YAML_FILE with the name of the YAML file that contains the ServiceAccount, Role, and RoleBinding that you created.
      • TARGET_NAMESPACE with the namespace where the user will install KOTS.
note

After manually creating these RBAC resources, the user must include both the --ensure-rbac=false and --skip-rbac-check flags when installing or upgrading. These flags prevent KOTS from checking for or attempting to create a Role with * * * permissions in the namespace. For more information, see Prerequisites in Online Installation in Existing Clusters.
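The namespace-scoped steps above can be sketched end to end as follows. The namespace and application slug values are placeholders, rbac.yaml is the file from step 2, and each command is guarded so the script degrades to a message where the CLIs are unavailable.

```shell
#!/bin/sh
NAMESPACE="app-namespace"   # placeholder target namespace
APP_SLUG="app-slug"         # placeholder application slug

# 1. Check whether the default wildcard Role could be created; "yes" means
#    the default namespace-scoped install works without manual RBAC.
kubectl auth can-i '*' '*' --namespace "$NAMESPACE" 2>/dev/null \
  || echo "wildcard permissions not confirmed in ${NAMESPACE}"

# 2. Apply the manually created ServiceAccount, Role, and RoleBinding.
kubectl apply -f rbac.yaml --namespace "$NAMESPACE" 2>/dev/null \
  || echo "could not apply rbac.yaml (kubectl unavailable or insufficient permissions)"

# 3. Install with the flags that stop KOTS from checking for, or creating,
#    a wildcard Role in the namespace.
if command -v kots >/dev/null 2>&1; then
  kots install "$APP_SLUG" --namespace "$NAMESPACE" --ensure-rbac=false --skip-rbac-check
else
  echo "kots CLI unavailable here; run where the KOTS CLI is installed"
fi
```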

Embedded Cluster Requirements

To install with the Embedded Cluster installer, your environment must meet the following requirements.

System Requirements

  • Linux operating system

  • x86-64 architecture

  • systemd

  • At least 2GB of memory and 2 CPU cores

  • The filesystem at /var/lib/embedded-cluster must have 40Gi or more of total space and be less than 80% full

    note

    The directory used for data storage can be changed by passing the --data-dir flag with the Embedded Cluster install command. For more information, see Change the Default Data Directory in Installing with Embedded Cluster.

  • (Online installations only) Access to replicated.app and proxy.replicated.com or your custom domain for each

  • Embedded Cluster is based on k0s, so all k0s system requirements and external runtime dependencies apply. See System requirements and External runtime dependencies in the k0s documentation.
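A quick way to sanity-check a host against these minimums is a short script like the following. It is a sketch: the CPU and memory reads are Linux-only (via /proc), and the default data directory is assumed.

```shell
#!/bin/sh
# Host sanity checks against the Embedded Cluster minimums above
# (2 CPU cores, 2GB memory, 40Gi at /var/lib/embedded-cluster).

# CPU and memory (Linux-only; /proc is absent on other platforms).
if [ -r /proc/cpuinfo ]; then
  echo "cpu cores: $(grep -c ^processor /proc/cpuinfo)"
  echo "memory kB: $(awk '/MemTotal/ {print $2}' /proc/meminfo)"
else
  echo "/proc not available; run on the target Linux host"
fi

# Disk space and usage at the data directory (falls back to / when the
# directory does not exist yet, e.g. before first install).
dir=/var/lib/embedded-cluster
[ -d "$dir" ] || dir=/
df -h "$dir"
```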

Port Requirements

Embedded Cluster requires that the following ports are open and available:

  • 2379/TCP *
  • 2380/TCP
  • 4789/UDP
  • 6443/TCP
  • 7443/TCP
  • 9091/TCP
  • 9099/TCP *
  • 9443/TCP
  • 10248/TCP *
  • 10249/TCP
  • 10250/TCP
  • 10256/TCP
  • 10257/TCP *
  • 10259/TCP *
  • 30000/TCP ***
  • 50000/TCP * ** ***

* These ports are used only by processes running on the same node. Ensure that there are no other processes using them. It is not necessary to create firewall openings for these ports.

** Required for air gap installations only.

*** By default, the Admin Console and Local Artifact Mirror (LAM) run on ports 30000 and 50000, respectively. If these ports are occupied, you can select different ports during installation. For more information, see Change the Admin Console and LAM Ports.
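To confirm these ports are free before installing, a loop like the following can be run on the node. This is a sketch that assumes the ss utility from iproute2 is present (Linux); it prints only the ports that already have a listener.

```shell
#!/bin/sh
# Report any Embedded Cluster port that already has a listener on this node.
# ss is from iproute2; where it is absent the check is skipped silently.
for port in 2379 2380 4789 6443 7443 9091 9099 9443 \
            10248 10249 10250 10256 10257 10259 30000 50000; do
  if ss -lntu 2>/dev/null | grep -q ":${port} "; then
    echo "port ${port} already in use"
  fi
done
echo "port scan complete"
```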

kURL Requirements

To install with kURL, your environment must meet the following requirements.

Minimum System Requirements

  • 4 CPUs or equivalent per machine

  • 8GB of RAM per machine

  • 40GB of disk space per machine

  • TCP ports 2379, 2380, 6443, 6783, and 10250 open between cluster nodes

  • UDP port 8472 open between cluster nodes

    note

    If the Kubernetes installer specification uses the deprecated kURL Weave add-on, UDP ports 6783 and 6784 must be open between cluster nodes. Reach out to your software vendor for more information.

  • Root access is required

  • (Rook Only) The Rook add-on version 1.4.3 and later requires block storage on each node in the cluster. For more information about how to enable block storage for Rook, see Block Storage in Rook Add-On in the kURL documentation.
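The between-node port requirements above can be spot-checked from one node with nc, as in the sketch below. HOST is a placeholder for a peer node's address, and nc availability is assumed.

```shell
#!/bin/sh
# From one node, check that a peer node (HOST is a placeholder) accepts
# connections on the TCP ports kURL requires between cluster nodes.
for port in 2379 2380 6443 6783 10250; do
  if nc -z -w 2 HOST "$port" 2>/dev/null; then
    echo "port ${port} reachable"
  else
    echo "port ${port} not reachable (or nc unavailable)"
  fi
done
```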

Additional System Requirements

You must meet the additional kURL system requirements when applicable:

  • Supported Operating Systems: For supported operating systems, see Supported Operating Systems in the kURL documentation.

  • kURL Dependencies Directory: kURL installs additional dependencies in the directory /var/lib/kurl and the directory requirements must be met. See kURL Dependencies Directory in the kURL documentation.

  • Networking Requirements: Networking requirements include firewall openings, host firewall rules, and port availability. See Networking Requirements in the kURL documentation.

  • High Availability Requirements: If you are operating a cluster with high availability, see High Availability Requirements in the kURL documentation.

  • Cloud Disk Performance: For a list of cloud VM instance and disk combinations that are known to provide sufficient performance for etcd and pass the write latency preflight, see Cloud Disk Performance in the kURL documentation.

Private Registry Requirements

This section describes the requirements for using a private image registry for KOTS installations.

About Using a Private Registry

A private image registry is required for air gap installations. For air gap installations in existing clusters, you must provide credentials for a compatible private registry during installation.

For air gap installations in kURL clusters, the kURL installer automatically uses the registry add-on to meet the private registry requirement. For more information, see Registry Add-on in the kURL documentation.

Private registry settings can be changed at any time. For more information, see Using Private Registries.

Compatible Registries

KOTS has been tested for compatibility with the following registries:

  • Docker Hub

    note

    To avoid the November 20, 2020 Docker Hub rate limits, use the kots docker ensure-secret CLI command. For more information, see Avoiding Docker Hub Rate Limits.

  • Quay

  • Amazon Elastic Container Registry (ECR)

  • Google Container Registry (GCR)

  • Azure Container Registry (ACR)

  • Harbor

  • Sonatype Nexus
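The Docker Hub rate-limit mitigation noted above can be run as in the following sketch. The credential values and namespace are placeholders, and the command is guarded so it degrades to a message where the kots CLI is unavailable.

```shell
#!/bin/sh
# Create or refresh the secret KOTS uses for authenticated Docker Hub pulls,
# so image pulls are not subject to anonymous rate limits.
kots docker ensure-secret \
  --dockerhub-username EXAMPLE_USER \
  --dockerhub-password EXAMPLE_PASSWORD \
  --namespace default 2>/dev/null \
  || echo "kots CLI unavailable here; run where the KOTS CLI is installed"
```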

Firewall Openings for Online Installations

The domains for the services listed in the table below must be accessible from servers performing online installations. No outbound internet access is required for air gap installations.

For services hosted at domains owned by Replicated, the table below includes a link to the list of IP addresses for the domain at replicatedhq/ips in GitHub. Note that the IP addresses listed in the replicatedhq/ips repository also include IP addresses for some domains that are not required for installation.

For third-party services hosted at domains not owned by Replicated, the table below lists the required domains. Consult the third-party's documentation for the IP address range for each domain, as needed.

| Host | Embedded Cluster | Existing Clusters | kURL Clusters | Description |
|---|---|---|---|---|
| Docker Hub | Not Required | Required | Required | Some dependencies of KOTS are hosted as public images on Docker Hub. The required domains for this service are index.docker.io, cdn.auth0.com, *.docker.io, and *.docker.com. |
| replicated.app | Required | Required | Required | Upstream application YAML and metadata are pulled from replicated.app. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to replicated.app. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for replicated.app, see replicatedhq/ips in GitHub. |
| proxy.replicated.com | Required | Required* | Required* | Private Docker images are proxied through proxy.replicated.com. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for proxy.replicated.com, see replicatedhq/ips in GitHub. |
| registry.replicated.com | Required** | Required** | Required** | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to registry.replicated.com. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for registry.replicated.com, see replicatedhq/ips in GitHub. |
| kots.io | Not Required | Required | Not Required | Requests are made to this domain when installing the Replicated KOTS CLI. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. |
| github.com | Not Required | Required | Not Required | Requests are made to this domain when installing the Replicated KOTS CLI. For information about retrieving GitHub IP addresses, see About GitHub's IP addresses in the GitHub documentation. |
| k8s.kurl.sh, s3.kurl.sh | Not Required | Not Required | Required | kURL installation scripts and artifacts are served from kurl.sh. An application identifier is sent in a URL path, and bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for k8s.kurl.sh, see replicatedhq/ips in GitHub. The IP addresses for s3.kurl.sh are the same as those for the kurl.sh domain. |
| amazonaws.com | Not Required | Not Required | Required | tar.gz packages are downloaded from Amazon S3 during installations with kURL. For information about dynamically scraping the IP ranges to allowlist for accessing these packages, see AWS IP address ranges in the AWS documentation. |

* Required only if the application uses the Replicated proxy registry. Contact your software vendor for more information.

** Required only if the application uses the Replicated registry. Contact your software vendor for more information.