
Add/update docs for asset_dir and kubeconfig usage

* Original tutorials favored including the platform (e.g.
google-cloud) in modules (e.g. google-cloud-yavin). Prefer
naming conventions where each module / cluster has a simple
name (e.g. yavin) since the platform is usually redundant
* Retain the example cluster naming themes per platform
Dalton Hubble 2019-12-05 22:56:42 -08:00
parent 2837275265
commit d9c7a9e049
22 changed files with 185 additions and 106 deletions

View File

@ -4,9 +4,15 @@ Notable changes between versions.
## Latest
* Manage clusters without using a local `asset_dir` ([#595](https://github.com/poseidon/typhoon/pull/595))
* Change `asset_dir` to be optional. Default to "" to skip writing assets locally
* Keep cluster assets (TLS materials, kubeconfig) only in Terraform state, which supports different [remote backends](https://www.terraform.io/docs/backends/types/remote.html) and optional encryption at rest (a backend sketch follows this list)
* Allow `terraform apply` from stateless automation systems
* Improve asset unpacking on controllers to remove unused materials
* Obtain a kubeconfig from Terraform module outputs
* Update CoreDNS from v1.6.2 to v1.6.5 ([#588](https://github.com/poseidon/typhoon/pull/588))
* Add health `lameduck` option to wait before shutdown
* Add CPU requests to control plane static pods ([#589](https://github.com/poseidon/typhoon/pull/589))
* Add CPU requests for control plane static pods ([#589](https://github.com/poseidon/typhoon/pull/589))
* May provide slight edge case benefits and aligns with upstream
* Replace usage of `template_dir` with `templatefile` function ([#587](https://github.com/poseidon/typhoon/pull/587))
* Require Terraform version v0.12.6+ (action required)
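
Since cluster assets now live only in Terraform state, automation that runs `terraform apply` statelessly will usually pair this with a remote backend. A minimal sketch, assuming an S3 backend (the bucket and key names are hypothetical; any supported backend works):

```tf
terraform {
  backend "s3" {
    # hypothetical state bucket and key; any supported remote backend may be used
    bucket  = "example-terraform-state"
    key     = "clusters/yavin.tfstate"
    region  = "us-east-1"
    encrypt = true # optional encryption at rest
  }
}
```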

View File

@ -47,7 +47,7 @@ A preview of Typhoon for [Fedora CoreOS](https://getfedora.org/coreos/) is avail
Define a Kubernetes cluster by using the Terraform module for your chosen platform and operating system. Here's a minimal example:
```tf
module "google-cloud-yavin" {
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.3"
# Google Cloud
@ -58,12 +58,17 @@ module "google-cloud-yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/yavin"
# optional
worker_count = 2
worker_preemptible = true
}
# Obtain cluster kubeconfig
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
}
```
Initialize modules, plan the changes to be made, and apply the changes.
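
Those steps are the standard Terraform workflow; as a quick sketch:

```sh
$ terraform init    # fetch modules and providers
$ terraform plan    # review the planned changes
$ terraform apply   # create the cluster
```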
@ -79,7 +84,7 @@ Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud example creates a `yavin.example.com` DNS record to resolve to a network load balancer across controller nodes.
```sh
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.3

View File

@ -99,6 +99,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -99,6 +99,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -86,6 +86,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -70,6 +70,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -71,6 +71,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -69,6 +69,7 @@ variable "ssh_fingerprints" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {

View File

@ -31,7 +31,7 @@ resource "google_dns_record_set" "some-application" {
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.aws-tempest.ingress_dns_name}."]
rrdatas = ["${module.tempest.ingress_dns_name}."]
}
```
@ -64,7 +64,7 @@ resource "google_dns_record_set" "some-application" {
name = "app.example.com."
type = "A"
ttl = 300
rrdatas = [module.azure-ramius.ingress_static_ipv4]
rrdatas = [module.ramius.ingress_static_ipv4]
}
```
@ -120,7 +120,7 @@ resource "google_dns_record_set" "some-application" {
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.digital-ocean-nemo.workers_dns}."]
rrdatas = ["${module.nemo.workers_dns}."]
}
```
@ -158,7 +158,7 @@ resource "google_dns_record_set" "app-record-a" {
name = "app.example.com."
type = "A"
ttl = 300
rrdatas = [module.google-cloud-yavin.ingress_static_ipv4]
rrdatas = [module.yavin.ingress_static_ipv4]
}
resource "google_dns_record_set" "app-record-aaaa" {
@ -169,6 +169,6 @@ resource "google_dns_record_set" "app-record-aaaa" {
name = "app.example.com."
type = "AAAA"
ttl = 300
rrdatas = [module.google-cloud-yavin.ingress_static_ipv6]
rrdatas = [module.yavin.ingress_static_ipv6]
}
```

View File

@ -72,7 +72,7 @@ Write Container Linux Configs *snippets* as files in the repository where you ke
[AWS](/cl/aws/#cluster), [Azure](/cl/azure/#cluster), [DigitalOcean](/cl/digital-ocean/#cluster), and [Google Cloud](/cl/google-cloud/#cluster) clusters allow populating a list of `controller_clc_snippets` or `worker_clc_snippets`.
```
module "digital-ocean-nemo" {
module "nemo" {
...
controller_count = 1
@ -92,7 +92,7 @@ module "digital-ocean-nemo" {
[Bare-Metal](/cl/bare-metal/#cluster) clusters allow different Container Linux snippets to be used for each node (since hardware may be heterogeneous). Populate the optional `clc_snippets` map variable with any controller or worker name keys and lists of snippets.
```
module "bare-metal-mercury" {
module "mercury" {
...
controller_names = ["node1"]
worker_names = [
@ -141,7 +141,7 @@ Container Linux Configs (and the CoreOS Ignition system) create immutable infras
Typhoon chooses variables to expose with purpose. If you must customize clusters in ways that aren't supported by input variables, fork Typhoon and maintain a repository with customizations. Reference the repository by changing the username.
```
module "digital-ocean-nemo" {
module "nemo" {
source = "git::https://github.com/USERNAME/typhoon//digital-ocean/container-linux/kubernetes?ref=myspecialcase"
...
}

View File

@ -17,13 +17,13 @@ module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.14.3"
# AWS
vpc_id = module.aws-tempest.vpc_id
subnet_ids = module.aws-tempest.subnet_ids
security_groups = module.aws-tempest.worker_security_groups
vpc_id = module.tempest.vpc_id
subnet_ids = module.tempest.subnet_ids
security_groups = module.tempest.worker_security_groups
# configuration
name = "tempest-pool"
kubeconfig = module.aws-tempest.kubeconfig
kubeconfig = module.tempest.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
@ -82,15 +82,15 @@ module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.16.3"
# Azure
region = module.azure-ramius.region
resource_group_name = module.azure-ramius.resource_group_name
subnet_id = module.azure-ramius.subnet_id
security_group_id = module.azure-ramius.security_group_id
backend_address_pool_id = module.azure-ramius.backend_address_pool_id
region = module.ramius.region
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_id = module.ramius.backend_address_pool_id
# configuration
name = "ramius-low-priority"
kubeconfig = module.azure-ramius.kubeconfig
kubeconfig = module.ramius.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
@ -149,12 +149,12 @@ module "yavin-worker-pool" {
# Google Cloud
region = "europe-west2"
network = module.google-cloud-yavin.network_name
network = module.yavin.network_name
cluster_name = "yavin"
# configuration
name = "yavin-16x"
kubeconfig = module.google-cloud-yavin.kubeconfig
kubeconfig = module.yavin.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional

View File

@ -47,7 +47,7 @@ Terraform [modules](https://www.terraform.io/docs/modules/usage.html) allow a co
Clusters are declared in Terraform by referencing the module.
```tf
module "google-cloud-yavin" {
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes"
cluster_name = "yavin"
...

View File

@ -79,7 +79,6 @@ module "tempest" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/tempest"
# optional
worker_count = 2
@ -118,9 +117,9 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
...
module.aws-tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143)
module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143)
Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
```
@ -129,10 +128,19 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.16.3
@ -177,7 +185,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/con
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
#### DNS Zone
@ -200,6 +207,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/tempest" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
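
Since `asset_dir` is now optional (above), clusters that still want local copies of generated assets can opt back in. A sketch reusing this tutorial's example values:

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.16.3"

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKEC0"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional: also write generated assets (contains secrets) to a local directory
  asset_dir = "/home/user/.secrets/clusters/tempest"
}
```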

View File

@ -76,7 +76,6 @@ module "ramius" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/ramius"
# optional
worker_count = 2
@ -115,9 +114,9 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
...
module.azure-ramius.null_resource.bootstrap: Still creating... (6m50s elapsed)
module.azure-ramius.null_resource.bootstrap: Still creating... (7m0s elapsed)
module.azure-ramius.null_resource.bootstrap: Creation complete after 7m8s (ID: 3961816482286168143)
module.ramius.null_resource.bootstrap: Still creating... (6m50s elapsed)
module.ramius.null_resource.bootstrap: Still creating... (7m0s elapsed)
module.ramius.null_resource.bootstrap: Creation complete after 7m8s (ID: 3961816482286168143)
Apply complete! Resources: 86 added, 0 changed, 0 destroyed.
```
@ -126,10 +125,19 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
resource "local_file" "kubeconfig-ramius" {
content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.16.3
@ -175,7 +183,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/c
| dns_zone | Azure DNS zone | "azure.example.com" |
| dns_zone_group | Resource group where the Azure DNS zone resides | "global" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/ramius" |
!!! tip
Regions are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.
@ -211,6 +218,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/ramius" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |

View File

@ -159,7 +159,7 @@ provider "ct" {
Define a Kubernetes cluster using the module `bare-metal/container-linux/kubernetes`.
```tf
module "bare-metal-mercury" {
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.3"
# bare-metal
@ -171,7 +171,6 @@ module "bare-metal-mercury" {
# configuration
k8s_domain_name = "node1.example.com"
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/mercury"
# machines
controllers = [{
@ -223,12 +222,12 @@ $ terraform plan
Plan: 55 to add, 0 to change, 0 to destroy.
```
Apply the changes. Terraform will generate bootstrap assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
Apply the changes. Terraform will generate bootstrap assets and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
```sh
$ terraform apply
module.bare-metal-mercury.null_resource.copy-controller-secrets.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-worker-secrets.0: Still creating... (10s elapsed)
module.mercury.null_resource.copy-controller-secrets.0: Still creating... (10s elapsed)
module.mercury.null_resource.copy-worker-secrets.0: Still creating... (10s elapsed)
...
```
@ -253,11 +252,11 @@ Machines will network boot, install Container Linux to disk, reboot into the dis
Wait for the `bootstrap` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
```
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)
module.mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
@ -276,19 +275,28 @@ To watch the bootstrap process in detail, SSH to the first controller and journa
```
$ ssh core@node1.example.com
$ journalctl -f -u bootstrap
podman[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
podman[1750]: Waiting for static pod control plane
rkt[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
rkt[1750]: Waiting for static pod control plane
...
podman[1750]: serviceaccount/calico-node unchanged
rkt[1750]: serviceaccount/calico-node unchanged
systemd[1]: Started Kubernetes control plane.
```
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.16.3
@ -335,7 +343,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| os_version | Version for a Container Linux derivative to PXE and install | "1632.3.0" |
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
@ -343,6 +350,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/mercury" |
| download_protocol | Protocol iPXE uses to download the kernel and initrd. iPXE must be compiled with [crypto](https://ipxe.org/crypto) support for https. Unused if cached_install is true | "https" | "http" |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux or Flatcar images into the cache | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |

View File

@ -64,7 +64,7 @@ provider "ct" {
Define a Kubernetes cluster using the module `digital-ocean/container-linux/kubernetes`.
```tf
module "digital-ocean-nemo" {
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.16.3"
# Digital Ocean
@ -74,7 +74,6 @@ module "digital-ocean-nemo" {
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
asset_dir = "/home/user/.secrets/clusters/nemo"
# optional
worker_count = 2
@ -111,11 +110,11 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
module.digital-ocean-nemo.null_resource.bootstrap: Still creating... (30s elapsed)
module.digital-ocean-nemo.null_resource.bootstrap: Provisioning with 'remote-exec'...
module.nemo.null_resource.bootstrap: Still creating... (30s elapsed)
module.nemo.null_resource.bootstrap: Provisioning with 'remote-exec'...
...
module.digital-ocean-nemo.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.digital-ocean-nemo.null_resource.bootstrap: Creation complete (ID: 7599298447329218468)
module.nemo.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.nemo.null_resource.bootstrap: Creation complete (ID: 7599298447329218468)
Apply complete! Resources: 54 added, 0 changed, 0 destroyed.
```
@ -124,10 +123,19 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.16.3
@ -171,7 +179,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital
| region | Digital Ocean region | "nyc1", "sfo2", "fra1", "tor1" |
| dns_zone | Digital Ocean domain (i.e. DNS zone) | "do.example.com" |
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/nemo" |
#### DNS Zone
@ -212,6 +219,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/nemo" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Droplet type for controllers | "s-2vcpu-2gb" | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |

View File

@ -70,7 +70,7 @@ Additional configuration options are described in the `google` provider [docs](h
Define a Kubernetes cluster using the module `google-cloud/container-linux/kubernetes`.
```tf
module "google-cloud-yavin" {
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.3"
# Google Cloud
@ -81,7 +81,6 @@ module "google-cloud-yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/yavin"
# optional
worker_count = 2
@ -118,11 +117,11 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (10s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (10s elapsed)
...
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.google-cloud-yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)
module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)
Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
```
@ -131,10 +130,19 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.3
@ -180,7 +188,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Container Linux [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep coreos`.
@ -205,6 +212,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/yavin" |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |

View File

@ -72,7 +72,7 @@ Additional configuration options are described in the `aws` provider [docs](http
Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf
module "aws-tempest" {
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.16.3"
# AWS
@ -82,7 +82,6 @@ module "aws-tempest" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/tempest"
# optional
worker_count = 2
@ -121,9 +120,9 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
...
module.aws-tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)
module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)
Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
```
@ -132,10 +131,19 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.16.3
@ -177,7 +185,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fed
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
#### DNS Zone
@ -200,6 +207,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/tempest" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |

View File

@ -162,7 +162,7 @@ provider "ct" {
Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernetes`.
```tf
module "bare-metal-mercury" {
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.16.3"
# bare-metal
@ -175,7 +175,6 @@ module "bare-metal-mercury" {
# configuration
k8s_domain_name = "node1.example.com"
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/mercury"
# machines
controllers = [{
@ -224,14 +223,14 @@ $ terraform plan
Plan: 55 to add, 0 to change, 0 to destroy.
```
Apply the changes. Terraform will generate bootstrap assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
Apply the changes. Terraform will generate bootstrap assets and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
```sh
$ terraform apply
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed)
module.mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'...
module.mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'...
module.mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed)
module.mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed)
...
```
@ -256,11 +255,11 @@ Machines will network boot, install Fedora CoreOS to disk, reboot into the disk
Wait for the `bootstrap` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
```
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)
module.mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
@ -279,10 +278,19 @@ systemd[1]: Started Kubernetes control plane.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.16.3
@ -326,7 +334,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| os_version | Fedora CoreOS version to PXE and install | "30.20190716.1" |
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
@ -334,6 +341,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/mercury" |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Fedora CoreOS images into the cache | false | true |
| install_disk | Disk device where Fedora CoreOS should be installed | "sda" (not "/dev/sda" like Container Linux) | "sdb" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |

View File

@ -46,7 +46,7 @@ A preview of Typhoon for [Fedora CoreOS](https://getfedora.org/coreos/) is avail
Define a Kubernetes cluster by using the Terraform module for your chosen platform and operating system. Here's a minimal example.
```tf
module "google-cloud-yavin" {
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.3"
# Google Cloud
@ -57,11 +57,16 @@ module "google-cloud-yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/yavin"
# optional
worker_count = 2
}
# Obtain cluster kubeconfig
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
}
```
Initialize modules, plan the changes to be made, and apply the changes.
@ -77,7 +82,7 @@ Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud example creates a `yavin.example.com` DNS record to resolve to a network load balancer across controller nodes.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.3

View File

@ -12,12 +12,12 @@
Typhoon provides tagged releases to allow clusters to be versioned using ordinary Terraform configs.
```
module "google-cloud-yavin" {
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.8.6"
...
}
module "bare-metal-mercury" {
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.3"
...
}
@ -65,7 +65,7 @@ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
Delete or comment the Terraform config for the cluster.
```
- module "bare-metal-mercury" {
- module "mercury" {
- source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes"
- ...
-}
@ -93,7 +93,7 @@ kubectl apply -f mercury -R
Once you're confident in the new cluster, delete the Terraform config for the old cluster.
```
- module "google-cloud-yavin" {
- module "yavin" {
- source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes"
- ...
-}
@ -261,7 +261,7 @@ terraform apply
# add kubeconfig to new workers
terraform state list | grep null_resource
terraform taint -module digital-ocean-nemo null_resource.copy-worker-secrets.N
terraform taint -module digital-ocean-nemo null_resource.copy-worker-secrets[N]
terraform apply
```
@ -307,7 +307,7 @@ sudo ln -sf ~/Downloads/terraform-0.12.0/terraform /usr/local/bin/terraform12
For existing Typhoon v1.14.2 or v1.14.3 clusters, edit the Typhoon `ref` to the first SHA that introduced Terraform v0.12 support (`3276bf587850218b8f967978a4bf2b05d5f440a2`). The aim is to minimize the diff and convert to using Terraform v0.12.x. For example:
```tf
module "bare-metal-mercury" {
module "mercury" {
- source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.14.3"
+ source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=3276bf587850218b8f967978a4bf2b05d5f440a2"
...
@ -316,7 +316,7 @@ For existing Typhoon v1.14.2 or v1.14.3 clusters, edit the Typhoon `ref` to firs
With Terraform v0.12, Typhoon clusters no longer require the `providers` block (unless you actually need to pass an [aliased provider](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances)). A regression in Terraform v0.11 made it necessary to explicitly pass aliased providers in order for Typhoon to continue to enforce constraints (see [terraform#16824](https://github.com/hashicorp/terraform/issues/16824)). Terraform v0.12 resolves this issue.
```tf
module "bare-metal-mercury" {
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=3276bf587850218b8f967978a4bf2b05d5f440a2"
- providers = {

View File

@ -86,6 +86,7 @@ variable "ssh_authorized_key" {
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {