Rio is a MicroPaaS for Kubernetes designed to run using minimal resources. Rio provides automatic DNS and HTTPS, load balancing, routing, metrics and more. Use it to remove the chore of creating and managing a secure IT infrastructure.
k3s is a lightweight, certified Kubernetes distribution capable of running on constrained hardware, making it ideal for local, edge and IoT deployments. K3s was originally developed for Rio but proved useful enough to stand on its own.
Today I’m going to show you how to easily set up k3s and Rio on a MacBook running Manjaro Linux and use them to create a self-hosted, git-based continuous delivery pipeline to serve your own website.
If you’re not yet familiar with Kubernetes, no problem. Please let this gentle introduction serve as your practical guide. When you’re finished you’ll have a better understanding of the concepts and tools used in container orchestration and a shiny new website you can use to demonstrate your skills.
Requirements
This tutorial was written for Manjaro Linux. If you’re using Windows or macOS you can follow along from a virtual machine or a dual-boot configuration.
Alternatively, you may adapt these instructions for use on an ODROID, a Vultr VPS or even a low-cost Raspberry Pi (4GB RAM model).
This guide assumes some command-line skills and a working knowledge of git.
Install and Run K3s
To run k3s on Manjaro, open Terminal and use Pamac to build it from the AUR:
pamac build k3s-bin
You should see output like:
Building k3s-bin...
==> Making package: k3s-bin 0.8.0-1 (Rab 21 Agu 2019 11:19:26 WITA)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Downloading k3s-0.8.0-x86_64...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 594 0 594 0 0 536 0 --:--:-- 0:00:01 --:--:-- 536
100 40.2M 100 40.2M 0 0 413k 0 0:01:39 0:01:39 --:--:-- 546k
-> Found k3s.service
==> Validating source files with sha256sums...
k3s-0.8.0-x86_64 ... Passed
k3s.service ... Passed
==> Removing existing $srcdir/ directory...
==> Extracting sources...
==> Entering fakeroot environment...
==> Starting package()...
==> Tidying install...
-> Removing libtool files...
-> Purging unwanted files...
-> Removing static library files...
-> Stripping unneeded symbols from binaries and libraries...
-> Compressing man and info pages...
==> Checking for packaging issues...
==> Creating package "k3s-bin"...
-> Generating .PKGINFO file...
-> Generating .BUILDINFO file...
-> Generating .MTREE file...
-> Compressing package...
==> Leaving fakeroot environment.
==> Finished making: k3s-bin 0.8.0-1 (Rab 21 Agu 2019 11:21:23 WITA)
==> Cleaning up...
Resolving dependencies...
Checking inter-conflicts...
Checking keyring... [1/1]
Checking integrity... [1/1]
Loading packages files... [1/1]
Checking file conflicts... [1/1]
Checking available disk space... [1/1]
Installing k3s-bin (0.8.0-1)... [1/1]
Running post-transaction hooks...
Reloading system manager configuration... [1/2]
Arming ConditionNeedsUpdate... [2/2]
Transaction successfully finished.
When the build finishes successfully, confirm the installation by starting the k3s service with systemctl and checking its status:
sudo systemctl start k3s && \
sudo systemctl status k3s
You should see output like:
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/usr/lib/systemd/system/k3s.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2019-08-21 13:28:07 WITA; 16ms ago
Docs: https://k3s.io
Main PID: 4287 (k3s-server)
Tasks: 166
Memory: 178.1M
CGroup: /system.slice/k3s.service
├─ 2386 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/91c2d5c541a55a90242f1
d708b10c136d92bfa998b24257d0346099b87e11b1e -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd570
7882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─ 2402 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/bc5a42028a74ccc6af2c7
644dad12e2e41840ab05d0b7eb778b8999e4f24c0d9 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd570
7882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─ 2407 /pause
├─ 2433 /pause
├─ 2537 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/3b32bf8df8f5b89e1cc1f
bbe93a7247dfbb0253f0bea2703ecac8a5df835dec5 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd570
7882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─ 2554 nginx: master process nginx
├─ 2576 nginx: worker process
├─ 4287 /usr/bin/k3s server
├─22639 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/daec735dd8f34195800cc
cafb5f286f14a9e557cdd360ea79f2924d0b26a7dc5 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd570
7882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─22656 /pause
├─22712 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/0ba7bd05a5d73ff1c78af
184aa3e96f4e5d53aa16311d581480b8cd136a21c8c -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd570
7882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─22752 /coredns -conf /etc/coredns/Corefile
├─22829 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/0cd1d50906fc26d12df0fdf0f0fe5d77392e521c2f53fa0f948fc489b4fce528 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─22848 /pause
├─22916 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/c30c75ca60f55dcaa630714cb49e1f228d6ed40456e581f086a432549019e047 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─22944 /traefik --configfile=/config/traefik.toml
├─23055 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/46e92d737312a17b37c1a7c60467965a7ac57ba23299d2bb02e459db032549e7 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─23072 /pause
├─23100 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/b440d7149b8ea6099b37f6cbe934e127b9f18436150d698b68618a7cc5d6fcc8 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
├─23118 /bin/sh /usr/bin/entry
├─23151 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/d808568a6d652cf1b04d4f965ae0c04da36e16f8fdd404d09df8dcbd3bc7f35d -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9/bin/containerd
└─23168 /bin/sh /usr/bin/entry
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590452 4287 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590505 4287 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590554 4287 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590587 4287 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590615 4287 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: E0821 13:28:07.590658 4287 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
Agu 21 13:28:07 jos-pc k3s[4287]: time="2019-08-21T13:28:07.642558953+08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Agu 21 13:28:07 jos-pc k3s[4287]: time="2019-08-21T13:28:07.642579260+08:00" level=info msg="Run: k3s kubectl"
Agu 21 13:28:07 jos-pc k3s[4287]: time="2019-08-21T13:28:07.642594637+08:00" level=info msg="k3s is up and running"
Agu 21 13:28:07 jos-pc systemd[1]: Started Lightweight Kubernetes.
Look for Active: active (running) and note any issues in the logs. When you’re finished reviewing the output, press q to return to the command prompt.
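If you’d like k3s to start automatically at boot, enable the unit as well (optional for this tutorial):

sudo systemctl enable k3s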
Then run the following to validate:
sudo k3s kubectl get node
You should see output like:
NAME STATUS ROLES AGE VERSION
jos-pc Ready master 4s v1.14.5-k3s.1
This is your Kubernetes master, the node responsible for managing your cluster. It may be a single node, as shown above, or replicated for improved availability (horizontal scaling) and redundancy (fault tolerance).
When your master is Ready your cluster is running and ready to do work. Let’s give it something to do by using it to create a MicroPaaS for managing stateless apps.
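As the service logs showed, k3s wrote a kubeconfig to /etc/rancher/k3s/k3s.yaml. If you happen to have a standalone kubectl installed you can point it at the cluster instead of prefixing every command with k3s; the file is root-readable by default, hence the sudo:

sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl get node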
Deploy Rio into Kubernetes
With the cluster up and running, download and install the Rio CLI using the installation script provided in the Rio Quick Start:
curl -sfL https://get.rio.io | sh -
Note: rio-bin wasn’t available from the AUR via Pamac at the time of writing. To check whether it’s available now, run pamac search rio-bin at the command prompt.
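Once the script completes, confirm the rio binary landed on your PATH before proceeding:

which rio && rio --help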
Then run sudo rio install to deploy Rio into your k3s cluster. Installation will take several minutes. You should see output like:
Defaulting cluster CIDR to 10.43.0.1/16
Deploying Rio control plane....
| Waiting for all the system components to be up. Not ready: [autoscaler build-
/ Waiting for all the system components to be up. Not ready: [autoscaler build-
- Waiting for all the system components to be up. Not ready: [autoscaler build-
\ Waiting for all the system components to be up. Not ready: [autoscaler build-
| Waiting for all the system components to be up. Not ready: [autoscaler build-
Detecting if clusterDomain is accessible...
ClusterDomain is reachable. Run `rio info` to get more info.
Controller logs are available from `rio systemlogs`
Welcome to Rio!
Run `rio run https://github.com/rancher/rio-demo` as an example
Then run sudo rio info to check component status. You should see output like:
Rio Version: v0.3.2 (a6eeebe4)
Rio CLI Version: v0.3.2 (a6eeebe4)
Cluster Domain: wowa2o.on-rio.io
Cluster Domain IPs: 192.168.1.82
System Namespace: rio-system
System Components:
Autoscaler status: running
BuildController status: running
CertManager status: running
Grafana status: running
IstioCitadel status: running
IstioPilot status: running
IstioTelemetry status: running
Kiali status: running
Prometheus status: running
Registry status: running
Webhook status: running
And check Pods with sudo k3s kubectl get po -n rio-system for output like:
NAME READY STATUS RESTARTS AGE
activator-657468fc79-6qc25 2/2 Running 0 21m
autoscaler-d7dcb6bf7-bz4sg 2/2 Running 0 21m
build-controller-7449fbc8bc-h7kpd 2/2 Running 0 21m
cert-manager-7dffb75d8d-whqxs 1/1 Running 0 21m
controller-7b569d8785-q7s2v 1/1 Running 0 21m
grafana-5c6f979f59-4pfqz 2/2 Running 0 21m
istio-citadel-79589dc8bc-b7r95 1/1 Running 0 21m
istio-gateway-n84bn 2/2 Running 0 21m
istio-pilot-f69cf4f5b-rj2xv 2/2 Running 0 21m
istio-telemetry-7dcb5c78cb-44tzs 2/2 Running 0 21m
kiali-7b998f55c6-r5h67 2/2 Running 0 21m
prometheus-5cd76fdb66-bh4gg 1/1 Running 0 21m
registry-7c9d85f977-xl2mk 2/2 Running 0 21m
registry-proxy-klw7d 1/1 Running 0 21m
rio-controller-f96644854-cr6cf 1/1 Running 0 22m
socat-qbbtl 1/1 Running 0 21m
svclb-istio-gateway-v0-fbh46 2/2 Running 0 21m
webhook-855c97bd7-659k9 1/1 Running 0 21m
If you see the expected output, you’ve just created your own MicroPaaS capable of building, testing, deploying, scaling, and versioning stateless apps using a lightweight Kubernetes cluster suitable for both edge and IoT.
Before we learn how to use Rio to create a git-based continuous delivery pipeline, let’s take a brief look at some of the monitoring tools made available upon install.
Explore Rio Monitoring Tools
Rio comes with monitoring tools which may be used to visualize what’s happening inside the cluster. Run sudo rio --system ps to get the ENDPOINT URLs:
Name CREATED ENDPOINT REVISIONS SCALE WEIGHT DETAIL
rio-system/cert-manager 21 hours ago v0 1 100%
rio-system/kiali 21 hours ago https://kiali-rio-system.wowa2o.on-rio.io:9443 v0 1 100%
rio-system/istio-pilot 21 hours ago v0 1 100%
rio-system/istio-gateway 21 hours ago v0 0/1 100%
rio-system/istio-citadel 21 hours ago v0 1 100%
rio-system/grafana 21 hours ago https://grafana-rio-system.wowa2o.on-rio.io:9443 v0 1 100%
rio-system/istio-telemetry 21 hours ago v0 1 100%
rio-system/prometheus 21 hours ago v0 1 100%
rio-system/webhook 21 hours ago https://webhook-rio-system.wowa2o.on-rio.io:9443 v0 1 100%
rio-system/build-controller 21 hours ago v0 1 100%
rio-system/registry 21 hours ago v0 1 100%
rio-system/controller 21 hours ago v0 1 100%
rio-system/activator 21 hours ago v0 1 100%
rio-system/autoscaler 21 hours ago v0 1 100%
Then navigate to the endpoint URLs for Kiali and Grafana in a browser.
One of Rio’s goals is to automate IT infrastructure to enable more focus on app development. And as we can see that’s absolutely the case here: not only are the tools already configured for us, they’re served securely over HTTPS with automatic DNS.
Use admin as both the username and password when prompted to log in.
Feel free to experiment with those a bit before moving on. In the next section we’re going to use them to inspect what happens when we configure Rio for continuous delivery.
Continuous Delivery
One of Rio’s key advantages is its out-of-the-box ability to perform continuous delivery. Before you delete your Travis CI account – not that you can – let’s test out Rio’s CD directly from our host machine. Usage instructions are documented in Rio’s Continuous Delivery docs, currently hosted on GitHub.
Note: By the time you read this the Rio docs linked will almost certainly be out of date. Visit the Rio website for a link to the latest docs.
From the Continuous Delivery section of the Rio docs:
Rio supports configuration of a Git-based source code repository to deploy the actual workload. It can be as easy as giving Rio a valid Git repository URL.
Valid git repository URLs may use hosted solutions like Bitbucket, GitHub and GitLab. But I’ll be using a URL from my self-hosted Gitea server instead.
Reading on, the Rio docs state the repo should have a Dockerfile in the root directory of the repository. So let’s create that first.
Create Dockerfile
If you’re not familiar with Docker I have some related material you can use to get up to speed. But if you don’t have time for that right now, I’ve created a demo repo with a purpose-built Dockerfile you can use to move ahead.
Use Git to clone, fork or mirror the demo repo to your destination of choice:
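For example, to clone the demo repo (the same URL we’ll hand to Rio later):

git clone https://codeberg.org/vhs/sugarloaf.git
cd sugarloaf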
And observe the contents of the Dockerfile in the repo root directory:
FROM cibuilds/hugo AS assetbuilder
ARG AFTERDARK_VERS=9.1.0
COPY . /tmp/site
RUN ["rm","-rf","/tmp/site/themes/after-dark/*"]
ADD https://registry.npmjs.org/after-dark/-/after-dark-$AFTERDARK_VERS.tgz /var/tmp
RUN tar --strip-components=1 -xzf /var/tmp/after-dark-$AFTERDARK_VERS.tgz -C /tmp/site/themes/after-dark
RUN ["hugo","-d","/var/www","-s","/tmp/site"]
CMD ["hugo","serve","--disableLiveReload","--buildDrafts","--bind","0.0.0.0","--port","8080"]
The Dockerfile uses a static site generator called Hugo to build and run an After Dark website using the source code located in the demo repo.
You don’t need to have Docker installed to use it. But if you do, you can build the Docker image yourself by running docker build . from the repo root directory:
Sending build context to Docker daemon 10.14MB
Step 1/8 : FROM cibuilds/hugo AS assetbuilder
---> cb296dda4b02
Step 2/8 : ARG AFTERDARK_VERS=9.1.0
---> Using cache
---> 38f36cd52099
Step 3/8 : COPY . /tmp/site
---> cd9fe94146aa
Step 4/8 : RUN ["rm","-rf","/tmp/site/themes/after-dark/*"]
---> Running in 11d671a923c6
Removing intermediate container 11d671a923c6
---> 6adb001c7458
Step 5/8 : ADD https://registry.npmjs.org/after-dark/-/after-dark-$AFTERDARK_VERS.tgz /var/tmp
Downloading 3.593MB/3.593MB
---> f450789544ac
Step 6/8 : RUN tar --strip-components=1 -xzf /var/tmp/after-dark-$AFTERDARK_VERS.tgz -C /tmp/site/themes/after-dark
---> Running in cf8b7027ce19
Removing intermediate container cf8b7027ce19
---> 8987d2b8792d
Step 7/8 : RUN ["hugo","-d","/var/www","-s","/tmp/site"]
---> Running in bb21a8531a4e
Building sites … WARN 2019/08/23 05:25:41 In the next Hugo version (0.58.0) we will change how $home.Pages behaves. If you want to list all regular pages, replace .Pages or .Data.Pages with .Site.RegularPages in your home page template.
                   | EN
+------------------+----+
  Pages            |  8
  Paginator pages  |  0
  Non-page files   |  0
  Static files     | 15
  Processed images |  0
  Aliases          |  1
  Sitemaps         |  1
  Cleaned          |  0

Total in 26 ms
Removing intermediate container bb21a8531a4e
---> c1e3980c4ab3
Step 8/8 : CMD ["hugo","serve","--disableLiveReload","--buildDrafts","--bind","0.0.0.0","--port","8080"]
---> Running in 19e17532f633
Removing intermediate container 19e17532f633
---> b3b60eacd7dd
Successfully built b3b60eacd7dd
Then run it locally using port 8080:
docker run -d -p 8080:8080 $(docker images -q | head -n 1)
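With the container running, confirm it’s serving the site (this assumes nothing else on your machine is bound to port 8080):

curl -I http://localhost:8080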
And, finally, stop it with docker stop $(docker ps -qn 1).
With your Dockerfile created and your demo repo available at a valid git repository URL, you’re ready to run your service using Rio.
Run Service with Rio
Execute rio run as shown here with your own URL:
sudo rio run https://codeberg.org/vhs/sugarloaf.git
Then run sudo rio revision. You should see output like:
Name IMAGE CREATED SCALE ENDPOINT WEIGHT DETAIL
default/focused-swanson5:v0 9 seconds ago 1 https://focused-swanson5-default.wowa2o.on-rio.io:9443
As noted in the Rio docs, IMAGE will be empty until the build completes, at which point the service will become active. Check ready state using rio ps:
Name CREATED ENDPOINT REVISIONS SCALE WEIGHT DETAIL
default/focused-swanson5 17 seconds ago https://focused-swanson5-default.wowa2o.on-rio.io:9443 v0 1 100% v0: not ready; v0 waiting on build
Notice the DETAIL column says ready state is v0: not ready; v0 waiting on build.
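Rather than re-running the command by hand while the build proceeds, you can poll it with a simple watch loop:

watch -n 5 'sudo rio ps'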
While you wait, open the monitoring tools described earlier and take another look around. Notice your new application now appears in Kiali on the Overview tab as well as under Applications, Workloads and Services.
By the time you’re finished, your build should be complete. Verify by running rio revision to find an IMAGE, and check the DETAIL column output by rio ps as well. If the workload completed as expected, ps will output something like:
Name CREATED ENDPOINT REVISIONS SCALE WEIGHT DETAIL
default/focused-swanson5 2 minutes ago https://focused-swanson5-default.wowa2o.on-rio.io:9443 v0 1 100%
And the Kiali Applications view will move from health not available to showing a green checkmark.
At that point the new service is ready. But if you try to hit the endpoint you’ll see it’s not very useful yet: it returns a blank page. Let’s remedy that.
Fixing Our Service
To fix the service we need to git push an update to the Dockerfile:
diff --git a/Dockerfile b/Dockerfile
index 7db2f01..944dabd 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,8 +1,9 @@
FROM cibuilds/hugo AS assetbuilder
+EXPOSE 8080
ARG AFTERDARK_VERS=9.1.0
COPY . /tmp/site
RUN ["rm","-rf","/tmp/site/themes/after-dark/*"]
ADD https://registry.npmjs.org/after-dark/-/after-dark-$AFTERDARK_VERS.tgz /var/tmp
RUN tar --strip-components=1 -xzf /var/tmp/after-dark-$AFTERDARK_VERS.tgz -C /tmp/site/themes/after-dark
RUN ["hugo","-d","/var/www","-s","/tmp/site"]
-CMD ["hugo","serve","--disableLiveReload","--buildDrafts","--bind","0.0.0.0","--port","8080"]
+CMD ["hugo","serve","--disableLiveReload","--buildDrafts","--bind","0.0.0.0","--port","8080","--source","/tmp/site"]
Do this now by reverting the last commit and pushing your changes:
git revert 5cb1a58 && git push origin master
Since Rio already knows about the repo it will poll for changes and rebuild the service automatically. Wait a few seconds then run rio revision:
Name IMAGE CREATED SCALE ENDPOINT WEIGHT
default/focused-swanson5:v24d9f 9 seconds ago 1 https://focused-swanson5-v24d9f-default.wowa2o.on-rio.io:9443 0
default/focused-swanson5:v0 default-focused-swanson5:bfe363c591043a21ceb924bddf97ffb789fa08db 2 hours ago 1 https://focused-swanson5-v0-default.wowa2o.on-rio.io:9443 100
Notice a new service revision was created following the git push to master, versioned using the SHA of the pushed commit, 24d9f in this case.
As before, the IMAGE isn’t available yet because the build is still running. As a result 100 percent of the WEIGHT remains allocated to v0 of the service. Once the build finishes the image will become available and the WEIGHT will redistribute.
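You don’t have to wait for the automatic shift, either. Rio’s CLI includes a weight command for splitting traffic between revisions manually; the exact syntax may vary between versions, so check rio weight --help first. Something like:

sudo rio weight focused-swanson5:v24d9f=50%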
Once the weight shifts, open the new ENDPOINT in the browser to view the result.
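Or check it from the terminal with curl, using the new revision’s endpoint from the rio revision output above:

curl -sI https://focused-swanson5-v24d9f-default.wowa2o.on-rio.io:9443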
The service is working now, all thanks to Rio’s git-based continuous delivery.
Troubleshooting
If you’re unable to access an ENDPOINT, run the post-install validations once again and verify the Rio system components are running and pods are ready.
Look for signs of trouble such as:
NAME READY STATUS RESTARTS AGE
activator-657468fc79-xb22t 0/2 ImagePullBackOff 64 4d11h
autoscaler-d7dcb6bf7-57xgs 0/2 ImagePullBackOff 5 4d11h
build-controller-7449fbc8bc-2vxhq 1/2 Running 6 4d11h
cert-manager-7dffb75d8d-bxc6s 0/1 ImagePullBackOff 2 4d11h
controller-7b569d8785-2xsn4 1/1 Running 3 4d11h
grafana-5c6f979f59-sgb47 0/2 ImagePullBackOff 5 4d11h
istio-citadel-79589dc8bc-nzmpc 1/1 Running 3 4d11h
istio-gateway-ghftc 0/2 ErrImagePull 4 4d11h
istio-pilot-f69cf4f5b-m72pr 0/2 ErrImagePull 4 4d11h
istio-telemetry-7dcb5c78cb-4f4zf 1/2 ImagePullBackOff 5 4d11h
kiali-7b998f55c6-4clrp 1/2 Running 6 4d11h
prometheus-5cd76fdb66-qk6s8 1/1 Running 3 4d11h
registry-7c9d85f977-jwxmc 1/2 Running 6 4d11h
registry-proxy-lltb5 1/1 Running 3 4d11h
rio-controller-f96644854-sbsjs 0/1 ImagePullBackOff 2 4d11h
socat-wx4t6 1/1 Running 3 4d11h
svclb-istio-gateway-v0-8cpl5 2/2 Running 6 4d11h
webhook-855c97bd7-r82lz 0/1 ImagePullBackOff 2 4d11h
And debug as necessary.
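A good first step is to describe a failing pod and review the events at the bottom of the output. Substitute a pod name from your own listing; the one below comes from the example above:

sudo k3s kubectl describe pod -n rio-system istio-gateway-ghftc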
If you’re unable to run k3s kubectl, verify your k3s service is running and healthy using sudo systemctl status k3s, use sudo systemctl restart k3s to restart the service, and use journalctl to dig into host system logs.
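For example, to view the most recent k3s entries in the journal:

sudo journalctl -u k3s -e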
Finally, you can check Rio system logs using sudo rio systemlogs to identify potential problems if you’re unable to connect to your services.
Summary
In this post I showed you how to perform Git-based continuous delivery using Rio, a MicroPaaS designed for constrained resource environments.
By completing this tutorial you learned:
- How to install k3s, a lightweight Kubernetes distribution
- How to install Rio, a MicroPaaS designed at Rancher Labs
- How to use some of Rio’s data visualization tools
- How to use Rio to perform continuous delivery with git
- Some basic debugging techniques for Manjaro, Kubernetes and Rio
- And you now have an After Dark site with HTTPS and CI
As tools like Rio continue to advance, developers can say goodbye to many of the headaches standing in the way of delivering great apps, not tomorrow but today.
Still bewildered by Kubernetes? Check out The Children's Illustrated Guide to Kubernetes by the Cloud Native Computing Foundation.
Hope you enjoyed.