Kubernetes Clusters on Hyperstack - How-to Guide
Hyperstack supports API-based, on-demand provisioning of Kubernetes clusters, providing a fast and efficient way to deploy and manage containerized applications. Whether you're developing complex AI models, running large-scale data pipelines, or managing cloud-native workloads, Hyperstack On-Demand Kubernetes is designed to simplify your infrastructure needs.
Similar to other major cloud providers, you only need to specify the Kubernetes version, node type, and a few basic parameters via API—Hyperstack handles the rest. NVIDIA GPU optimization, high-speed networking, and persistent storage options are built-in, ensuring clusters are ready for data-intensive AI tasks from the moment they are launched.
This document provides step-by-step instructions on using the Hyperstack API to create, connect to, and scale a Kubernetes cluster. By following these guidelines, you can leverage Kubernetes for orchestration, scalability, and application management with minimal effort.
Hyperstack Kubernetes is currently in Beta, and you may experience bugs or performance issues as we refine it.
Features of Hyperstack Kubernetes
- Single API Request: Deploy a complete Kubernetes cluster with a single API call, provisioning key components like the master node, load balancer, bastion VM, and worker nodes.
- AI-Optimized Kubernetes: Built for Hyperstack’s infrastructure with custom NVIDIA-optimized drivers, enhancing performance for AI applications.
- NVIDIA GPU Optimization: Each Kubernetes cluster comes pre-configured with NVIDIA-optimized drivers, accelerating AI/ML workloads and enabling seamless processing of large datasets and complex computations.
- Streamlined Workflows: Hyperstack’s intuitive APIs simplify deployment and automation, allowing users to efficiently manage and scale Kubernetes clusters.
- Effortless Deployment: Launch fully configured Kubernetes clusters with minimal setup. Hyperstack’s backend automates complex provisioning, ensuring clusters are operational within minutes.
- High-Speed Networking: Hyperstack Kubernetes clusters leverage low-latency, high-speed networking, ideal for distributed AI applications requiring fast data throughput.
- Role-Based Access Control (RBAC): Secure and manage access with RBAC, ensuring team members have the appropriate permissions for collaborative AI projects.
- Intuitive Management Interface: Soon, you will be able to manage clusters through Hyperstack’s user-friendly UI or API, enabling easy monitoring and adjustment of configurations, node counts, and more.
Hyperstack Kubernetes Architecture
Hyperstack Kubernetes clusters are designed with a robust architecture to ensure scalability, security, and high availability. Each cluster consists of multiple node types, each serving a specific role in managing workloads and maintaining cluster operations.
Bastion Node
The bastion node acts as a secure gateway for administrative access to the cluster. It provides a controlled entry point for SSH access, reducing direct exposure of critical components to external threats.
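Because only the bastion (and load balancer) expose a public IP, you would typically reach the other nodes through the bastion as an SSH jump host. The sketch below is illustrative only: the bastion public IP, the worker's internal IP, and the default ubuntu login are assumptions to be replaced with your own values.
# Hedged sketch: connect to a worker node via the bastion acting as an SSH jump host.
# BASTION_PUBLIC_IP, WORKER_INTERNAL_IP, and the "ubuntu" user are placeholders/assumptions.
ssh -i ~/.ssh/example-key -J ubuntu@BASTION_PUBLIC_IP ubuntu@WORKER_INTERNAL_IP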
Master Node
The master node runs the control plane, which manages the cluster’s state and orchestrates workloads. It is responsible for scheduling, maintaining application lifecycle states, and ensuring overall cluster stability.
Worker Nodes
Worker nodes handle the actual execution of workloads by running containerized applications. Each worker node is equipped with Kubernetes components such as Kubelet and a container runtime, allowing it to process scheduled tasks efficiently.
Load Balancer Nodes
Load balancer nodes distribute incoming traffic across worker nodes to ensure efficient resource utilization and high availability. They play a crucial role in managing network traffic and optimizing cluster performance.
This architecture enables Hyperstack Kubernetes to support AI, machine learning, and high-performance computing workloads efficiently, providing a seamless and scalable container orchestration environment.
How to create a Kubernetes cluster
1. Create a Kubernetes cluster
POST https://infrahub-api.nexgencloud.com/v1/core/clusters
To create a Kubernetes cluster, specify its configuration details by filling in the required fields in the request payload. These fields include the cluster name, node type, Kubernetes version, hardware configuration, and other relevant parameters. Once the payload is complete, send the request to the /core/clusters
API using the POST method to deploy the cluster with the specified configuration.
Cluster creation can take 5-20 minutes, depending on the cluster size and the number of clusters being created. You can check the status of the cluster by calling the 'Retrieve Cluster Details' API. If your cluster is not in the ACTIVE state after 60 minutes, please delete the cluster using the Delete Cluster API and try again.
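For reference, deleting a stuck cluster uses the Delete Cluster endpoint listed in the API table at the end of this guide, for example:
curl -X DELETE "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY"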
Request body parameters
name string
Required
The name assigned to the Kubernetes cluster.
Cluster names can only contain alphanumeric characters, hyphens, underscores, and spaces, with a maximum length of 20 characters.
Leading or trailing spaces and special characters are not allowed.
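To avoid a failed request caused by a name that is taken or invalid, you can check the name first with the Verify Cluster Name Availability API (see the API table at the end of this guide), for example:
curl -X GET "https://infrahub-api.nexgencloud.com/v1/core/clusters/name-availability/example-cluster" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY"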
environment_name string
Required
The name of the environment where the cluster will be created.
To learn how to create a new environment, click here.
keypair_name string
Required
The name of the SSH key used to securely access your cluster.
To learn how to create a new SSH key, click here.
image_name string
optional
Name of the operating system image to be installed on your cluster.
Use the GET /core/images
API to retrieve a list of images offered by Hyperstack.
Recommended image: Ubuntu Server 22.04 LTS R535 CUDA 12.2
Only images with the k8s
label can be used for worker nodes.
kubernetes_version string
Required
The version of Kubernetes to be used for the cluster. See a list of supported versions by calling the List Cluster Versions API.
Recommended version: 1.27.8
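For example, to retrieve the list of supported versions:
curl -X GET "https://infrahub-api.nexgencloud.com/v1/core/clusters/versions" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY"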
master_flavor_name string
Required
The flavor of the master node determines its hardware configuration, including the CPUs, RAM, and disk storage capacity.
The master node requires a CPU-only flavor. Select the size that best suits your workload. Please note that there are no running costs associated with the resources consumed by the master node.
We recommend using a CPU flavor size of n1-cpu-medium
or larger.
Click here to see the available CPU-only flavors and their hardware configurations
Flavor Name | CPU Cores | CPU Sockets | RAM (GB) | Root Disk (GB) | Ephemeral Disk (GB) | Region |
---|---|---|---|---|---|---|
n1-cpu-small | 4 | 2 | 4 | 100 | 0 | NORWAY-1, CANADA-1 |
n1-cpu-medium | 8 | 2 | 8 | 100 | 0 | NORWAY-1, CANADA-1 |
n1-cpu-large | 16 | 2 | 16 | 200 | 0 | NORWAY-1, CANADA-1 |
node_flavor_name string
Required
The flavor used for the worker nodes, which determines their hardware configuration, including the GPU model and quantity, CPUs, RAM, and disk storage capacity.
Call the GET /core/flavors
API to retrieve a list of available flavors.
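For example, to retrieve the available flavors:
curl -X GET "https://infrahub-api.nexgencloud.com/v1/core/flavors" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY"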
node_count integer
Required
The number of worker nodes in the cluster. Must be between 1 and 20.
curl -X POST "https://infrahub-api.nexgencloud.com/v1/core/clusters" \
-H "accept: application/json"\
-H "api_key: YOUR API KEY"\
-H "content-type: application/json" \
-d '{
"name": "example-cluster",
"environment_name": "CANADA-1",
"keypair_name": "example-key",
"image_name": "Ubuntu Server 22.04 LTS R535 CUDA 12.2",
"kubernetes_version": "1.27.8",
"master_flavor_name": "n1-cpu-large",
"node_flavor_name": "n3-A100x1",
"node_count": 2
}'
- To authenticate Infrahub API requests, add an api_key header containing your API key to the request.
{
"status": true,
"message": "Success",
"cluster": {
"id": 1,
"name": "example-cluster",
"environment_name": "example-environment",
"kubernetes_version": "1.27.8",
"api_address": "",
"kube_config": "",
"status": "CREATING",
"status_reason": String,
"node_count": 3,
"node_flavor": {
"name": "n3-A100x1",
"cpu": 28,
"ram": 120.0,
"disk": 100,
"ephemeral": 750,
"gpu": "A100-80G-PCIe",
"gpu_count": 1
},
"keypair_name": "example-ssh-key",
"created_at": "2024-06-17T09:33:46",
}
}
Save the cluster ID returned, as it will be used in the next step to 'Check cluster status'.
Response
Returns the status of cluster deployment and the cluster
object containing the details of the created cluster. The CREATING
status in the response indicates a successful deployment of the Kubernetes cluster.
If you encounter an error due to insufficient permissions, follow these steps:
1: Check Account Role: Confirm that your account has a Role assigned. Without proper permissions, you won’t be able to perform certain actions.
2: Admin Actions Required: Only Admins can assign Roles. If you’re not the Admin, ask them to:
- Go to “My Organization” in the Hyperstack WebUI.
- Select “Create a new User role”, name it (e.g., “Operator”), and enable all permissions.
3: Assign Role:
- In “My Organization”, find your account.
- Click “Change Role” and assign the newly created “Operator” role to your account.
See more details on User Roles here.
2. Check cluster status
The final step in deploying a cluster is to verify that its status is ACTIVE
and ready for use by calling the Retrieve Cluster Details API as follows:
GET https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}
The operational status of the deployed cluster can be checked using the Retrieve Cluster Details API, which provides information about the deployed cluster, including the status
field indicating its operational state. Send the request to the /core/clusters/{id}
API using the GET method, replacing the {id}
in the path with the cluster ID obtained in the previous step.
Path parameters
id integer
Required
The ID of the cluster for which we are retrieving details.
This is obtained from the id
field in the response of the Create Cluster API call from the previous step.
Attributes
status boolean
Indicates the result of the request to retrieve cluster details. true signifies success, while false indicates that an error was encountered.
message string
A description of the status of the request.
cluster object
The cluster object contains the configuration and specification details of the cluster.
Click here for descriptions of the fields within the cluster object.
id integer
The unique identifier assigned to the cluster.
name string
The name of the Kubernetes cluster.
environment_name string
The name of the environment where the cluster is deployed.
kubernetes_version string
The version of Kubernetes running on the cluster.
kube_config string
A string representing the kubeconfig file contents required for connecting to the cluster.
status_reason string
A message providing information about the status of the cluster, especially in case of errors.
status string
The current status of the Kubernetes cluster.
Possible values:
- CREATING: The cluster is being created.
- ACTIVE: The cluster is active and operational.
- ERROR: An error occurred during cluster creation or operation. Check the status_reason field for more details.
node_count number
The number of nodes (virtual machines) in the cluster.
node_flavor object
Details about the flavor (hardware configuration) of the nodes in the cluster.
keypair_name string
The name of the key pair used for SSH access to the nodes.
created_at datetime
The date and time when the cluster was created.
curl -X GET "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}" \
-H "accept: application/json"\
-H "api_key: YOUR API KEY"
{
"status": true,
"message": "Success",
"cluster": {
"id": 1,
"name": "example-cluster",
"environment_name": "example-environment",
"kubernetes_version": "1.27.8",
"api_address": "",
"kube_config": "",
"status": "ACTIVE",
"status_reason": String,
"node_count": 3,
"node_flavor": {
"name": "n3-A100x1",
"cpu": 28,
"ram": 120.0,
"disk": 100,
"ephemeral": 750,
"gpu": "A100-80G-PCIe",
"gpu_count": 1
},
"keypair_name": "example-ssh-key",
"created_at": "2024-06-17T09:33:46",
}
}
An ACTIVE
status
indicates that your cluster has been successfully deployed and is ready for use. If the status is CREATING
, the cluster is still being deployed—please wait a few minutes and try again.
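If you prefer not to re-run the request by hand, the minimal sketch below polls the status every minute until the cluster leaves the CREATING state. It assumes curl and jq are installed; {id} and YOUR_API_KEY are placeholders.
# Poll the cluster status every 60 seconds until it is no longer CREATING.
while true; do
  STATUS=$(curl -s "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}" \
    -H "accept: application/json" \
    -H "api_key: YOUR_API_KEY" | jq -r '.cluster.status')
  echo "Cluster status: $STATUS"
  [ "$STATUS" != "CREATING" ] && break
  sleep 60
done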
3. Connecting to your cluster
To manage your Kubernetes cluster with the Kubeconfig file, follow these steps:
- Retrieve the kubeconfig file from the cluster details and save it to a file (e.g. kubeconfig.yaml). Replace {id} with the cluster ID and [API_KEY] with your API key.

  # Retrieve the kubeconfig file (base64 encoded)
  B64_KUBECONFIG=$(curl --location 'https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}' --header 'api_key: [API_KEY]' | jq -r '.cluster.kube_config')
  # Decode and save the kubeconfig to a file
  echo "$B64_KUBECONFIG" | base64 -d > kubeconfig.yaml

  You can also get the kubeconfig through Terraform (currently in alpha). You will need the output variable below. For a full example, see instructions here.

  output "kube_config" {
    value = base64decode(hyperstack_core_cluster.my_k8s.kube_config)
  }

- Set the KUBECONFIG environment variable to the path of the kubeconfig file:

  export KUBECONFIG=kubeconfig.yaml

- Access the Kubernetes cluster using kubectl:

  kubectl get nodes
If successful, the output should be similar to the following:
NAME STATUS ROLES AGE VERSION
kube-example-cluster-default-worker-0 Ready worker 25m v1.27.8
kube-example-cluster-default-worker-1 Ready worker 25m v1.27.8
kube-example-cluster-master Ready control-plane 25m v1.27.8
If you encounter any issues connecting to your Kubernetes cluster, refer to the official Kubernetes troubleshooting guide for detailed troubleshooting steps.
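As an additional sanity check for GPU workloads, you can confirm that the worker nodes advertise GPU capacity to Kubernetes. This assumes the NVIDIA device plugin is running on the cluster and exposes the nvidia.com/gpu resource.
# List GPU capacity advertised by the nodes (requires the NVIDIA device plugin).
kubectl describe nodes | grep -i "nvidia.com/gpu"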
Scaling a Kubernetes cluster
Optimize your Kubernetes cluster's performance for your workload using our scaling APIs, which allow you to add or remove worker nodes from existing clusters.
Retrieve cluster node details
GET https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes
View details of the nodes in your Kubernetes cluster by including the cluster ID in the request path. You can obtain the ID of the cluster by calling the List Clusters API. For complete API documentation including descriptions of the response fields, click here.
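For example, replacing {id} with your cluster ID:
curl -X GET "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY"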
Adding a Cluster Node
POST https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes
Add one or more worker nodes to an existing Kubernetes cluster by specifying the cluster ID in the request path and setting the count in the request body. The role field must be worker, as only worker nodes can currently be added dynamically. For full API details, click here.
Parameters:
- Replace {id} in the path with the ID of the cluster to which you want to add node(s).
- Replace {# of nodes} in the count field with an integer representing the number of worker nodes to add. Must be between 1 and 20.
- The role field must be set to "worker".
Example Request:
curl -X POST "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes" \
-H "accept: application/json" \
-H "api_key: YOUR_API_KEY" \
-H "content-type: application/json" \
-d '{
"role": "worker",
"count": {# of nodes}
}'
Deleting a cluster node
DELETE https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes/{node_id}
Removes a node from a Kubernetes cluster. Include the cluster ID and node ID in the path to delete the specified node. You can obtain the node_id
by calling the Retrieve Node Details API. For full API documentation, click here.
Before deleting a node using the Infrahub API, remove it from Kubernetes to prevent orphaned resources:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data # Remove running workloads from the node
kubectl delete node <node-name> # Remove node from Kubernetes API
Once completed, you can proceed with deleting the node using this API.
Note: Any data stored on a deleted node will be permanently lost and cannot be recovered.
Parameters:
- Replace {id} in the path with the ID of the Kubernetes cluster from which you want to delete the node.
- Replace {node_id} in the path with the ID of the node you want to delete.
Example Request:
curl -X DELETE "https://infrahub-api.nexgencloud.com/v1/core/clusters/{id}/nodes/{node_id}" \
-H "accept: application/json"\
-H "api_key: YOUR API KEY"
All Kubernetes Cluster APIs
Click on the name of the cluster APIs below to view detailed documentation.
Endpoint Name | URL | Description |
---|---|---|
Create Cluster | POST /core/clusters | Creates a Kubernetes cluster. |
List Clusters | GET /core/clusters | Retrieves a list of your clusters along with their details. |
Retrieve Cluster Details | GET /core/clusters/{id} | Retrieves the details of a specified cluster. |
Delete Cluster | DELETE /core/clusters/{id} | Deletes a specified cluster. |
List Cluster Versions | GET /core/clusters/versions | Retrieves a list of compatible Kubernetes versions. |
Verify Cluster Name Availability | GET /core/clusters/name-availability/{cluster_name} | Verifies the availability of the specified cluster name. |
Retrieve Cluster Events | GET /core/clusters/{id}/events | Retrieves a list of events for a specified cluster. |
Cluster Node APIs (Scaling)
APIs for managing the scaling of Kubernetes clusters.
Endpoint Name | URL | Description |
---|---|---|
Create Node | POST /core/clusters/{id}/nodes | Adds node(s) to an existing cluster. |
Retrieve Node Details | GET /core/clusters/{id}/nodes | Retrieves the details of a cluster's nodes. |
Delete Node | DELETE /core/clusters/{id}/nodes/{node_id} | Removes a node from a specified cluster. |
Tips and Tricks for Working with Hyperstack Kubernetes
Below is a list of tips and tricks to help you maximize your experience with Hyperstack Kubernetes clusters.
Enabling the Kubernetes Dashboard
By default, the Kubernetes Dashboard is not enabled on Hyperstack Kubernetes clusters. To enable it, follow the steps outlined here.
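If you only need a quick look before following that guide, the upstream Kubernetes Dashboard can generally be installed from its standard manifest. The commands below are a generic sketch (the manifest version shown is an assumption); the linked Hyperstack guide remains the recommended procedure.
# Generic upstream installation of the Kubernetes Dashboard (version assumed).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Forward the dashboard service to your local machine (https://localhost:8443).
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443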
Whitelisting IP Addresses for Third-Party Services
To whitelist IP addresses for third-party services, add the IP addresses of the Kubernetes worker nodes to the whitelist. You can retrieve the worker node IP addresses by using the following commands from within the bastion node:
# List nodes
kubectl get nodes
# Open a debug terminal in a worker node (see example command below)
kubectl debug node/kube-cluster-1729305379-default-worker-0 -it --image=busybox
# Retrieve the IP address
wget -qO- ifconfig.me
# Expected output:
# 38.80.122.72
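Alternatively, if the nodes register their addresses with the Kubernetes API, you can list them directly from your workstation; whether the EXTERNAL-IP column is populated depends on how the nodes are configured.
# List node names and their internal/external IP addresses.
kubectl get nodes -o wide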
Firewall settings for the nodes
By default, we have configured all network settings to ensure secure access to the nodes. This setup includes the following:
- The bastion node and load balancer are accessible from the public internet via their public IP addresses.
- The master node and worker nodes are not accessible from the public internet through their public IP addresses.
- Worker nodes are accessible only from the master node and the bastion node through the internal network.
- The master node is accessible from the load balancer node through the internal network.
If you wish to modify any of these settings, please keep the following in mind:
- By default, all ports on the worker nodes are open to facilitate inter-node communication. Enabling a public IP address for the worker nodes will expose all ports to the public internet. To restrict access, configure the firewall settings on the worker nodes before enabling a public IP address. You can find instructions on configuring firewall settings here.
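As a rough illustration of such a firewall configuration, the sketch below uses ufw on a worker node, assuming an Ubuntu image with ufw available and a hypothetical 10.0.0.0/24 internal subnet; verify that your CNI, kubelet, and any NodePort ranges remain reachable before enabling it.
# Hedged sketch only: adjust the subnet and ports to your environment before enabling.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 10.0.0.0/24   # keep internal cluster traffic (bastion, master, CNI) open
sudo ufw allow 22/tcp             # optionally keep SSH reachable
sudo ufw enable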