Raspberry Pi K3S Kubernetes Multi Master High Availability Cluster with MySQL

In my previous blog, I showed how to install a Kubernetes cluster on Raspberry Pis. It worked very well until recently, when one of the SD cards failed. The cause was heavy read/write activity on the cards from etcd and logging.

In this blog, you’ll see how to set up a Rancher K3S Kubernetes cluster with an external MySQL/PostgreSQL database.

(more…)

Rancher K3S “nameserver limits exceeded”


You’ve created a Rancher K3S cluster, and for some reason your ingress URLs are not working while the logs are flooded with “Nameserver limits exceeded” errors.

The following errors are logged in syslog:

pi-wrkr01 k3s[354088]: I0612 12:04:55.339233  354088 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a19e2669d074f2faa869ec29f9ced0656c3dbd80cb65f0ae6ed4dafb2f60f9fb"
Jun 12 11:43:23 k3s-pi-wrkr01 k3s[34016]: E0612 11:43:23.391005   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:43:55 k3s-pi-wrkr01 k3s[34016]: E0612 11:43:55.391080   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:44:47 k3s-pi-wrkr01 k3s[34016]: E0612 11:44:47.390679   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:44:58 k3s-pi-wrkr01 k3s[34016]: E0612 11:44:58.391809   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:45:19 k3s-pi-wrkr01 k3s[34016]: W0612 11:45:19.016239   34016 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 12 11:45:53 k3s-pi-wrkr01 k3s[34016]: E0612 11:45:53.390227   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:46:02 k3s-pi-wrkr01 k3s[34016]: E0612 11:46:02.391550   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:47:03 k3s-pi-wrkr01 k3s[34016]: E0612 11:47:03.391764   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"
Jun 12 11:47:22 k3s-pi-wrkr01 k3s[34016]: E0612 11:47:22.391179   34016 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.15.1 4.2.2.2 8.8.8.8"

Fix:

Do not configure more than three DNS nameservers on the node. Kubernetes allows at most three nameservers in the resolv.conf it consumes, so any extra entries are dropped and the warning above is logged.

A sample working resolv.conf with exactly three nameserver entries:

# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 192.168.15.1
nameserver 4.2.2.2
nameserver 8.8.8.8
search .

Refer to the Kubernetes source for more information:

https://github.com/kubernetes/kubernetes/blob/c970a46bc1bcc100bbbfabd5c12bd4c5d87f8aea/pkg/apis/core/validation/validation.go#L2944-L2953
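
To quickly check how many nameservers a node actually ends up with, here is a minimal sketch (assuming the kubelet reads the node’s /etc/resolv.conf; adjust the path if yours differs):

# count_nameservers.py - count nameserver entries in resolv.conf
RESOLV_CONF = "/etc/resolv.conf"  # assumption: the resolv.conf the kubelet consumes

with open(RESOLV_CONF) as f:
    nameservers = [line.split()[1] for line in f if line.strip().startswith("nameserver")]

print(f"{len(nameservers)} nameservers: {nameservers}")
if len(nameservers) > 3:
    print("More than 3 nameservers - Kubernetes drops the extras and logs the warning above.")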

Solarwinds IPAM CRUD/Update using OrionSDK

A note to myself for future use. I put this together while troubleshooting Solarwinds IPAM IP reservation automation.

import orionsdk
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from pprint import pprint

# Build a requests Session that retries transient HTTP failures (500, 502, 504)
def retry_session(retries=3,
                  backoff_factor=0.3,
                  status_forcelist=(500, 502, 504)):
    session = requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist)
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

# Thin wrapper that opens a SWIS (SolarWinds Information Service) connection with retries
class VPNSolarWinds:
    def __init__(self, **kwargs):
        try:
            self.swis = orionsdk.SwisClient(kwargs["host"],
                                            kwargs["user"],
                                            kwargs["password"],
                                            session=retry_session())   # verify="server.pem",
        except Exception as e:
            print("connectionError: {}".format(e))
            
obj = VPNSolarWinds(user="admin",password="password@123",host="10.10.10.10") #sample lab solarwinds ipam tool


query = """ SELECT 
                ipn.subnetid,
                ipn.IPAddress, 
                ipn.Status, 
                ipn.Alias, 
                ipn.MAC, 
                ipn.DnsBackward, 
                ipn.DhcpClientName, 
                ipn.SysName, 
                ipn.Description, 
                ipn.Contact, 
                ipn.Location, 
                ipn.SysObjectID, 
                ipn.Vendor, 
                ipn.VendorIcon, 
                ipn.MachineType, 
                ipn.Comments, 
                ipn.ResponseTime, 
                ipn.LastBoot, 
                ipn.LastSync, 
                ipn.LastCredential, 
                ipn.AllocPolicy, 
                ipn.SkipScan, 
                ipn.LeaseExpires, 
                ipn.DnsBy, 
                ipn.MacBy, 
                ipn.StatusBy, 
                ipn.SystemDataBy,
                ipn.Uri 
            FROM IPAM.IPNode ipn JOIN IPAM.Subnet sbn 
            ON ipn.subnetid=sbn.subnetid 
            WHERE sbn.DisplayName='{subnet_name}' AND ipn.IPAddress='{ip}'""".format(subnet_name="some-subnet",ip="10.20.20.20")
# Added "Uri" to the above column names            


output = obj.swis.query(query)
uri = output['results'][0]['Uri']
# obj.swis.update(uri, Status='Reserved')
obj.swis.update(uri, Status='Used')
# Re-query to confirm the status after the update
output_after = obj.swis.query(query)
status = output_after['results'][0]['Status']
print("Status:", status)

# pprint(output_after['results'])
Reference: https://github.com/solarwinds/OrionSDK/wiki/IPAM-4.5.x-API#crud-operations-for-ip-address
# IPAM IP reservation status values
# Value  Name
# 0      Unknown
# 1      Used
# 2      Available
# 4      Reserved
# 8      Transient
# 16     Blocked
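
For convenience, that status table can live in code. A minimal sketch (the mapping mirrors the table above; set_ip_status is a hypothetical helper built on the same swis.update call used earlier):

# Status values from the OrionSDK IPAM documentation (table above)
IPAM_STATUS = {
    "Unknown": 0,
    "Used": 1,
    "Available": 2,
    "Reserved": 4,
    "Transient": 8,
    "Blocked": 16,
}

def set_ip_status(swis, uri, status_name):
    # Hypothetical helper: validate the name against the table, then update the IPNode entity
    if status_name not in IPAM_STATUS:
        raise ValueError(f"Unknown IPAM status: {status_name}")
    swis.update(uri, Status=status_name)

# Example usage: set_ip_status(obj.swis, uri, "Reserved")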

Automate K8S manifest files using Python and Jinja2

Don’t type, Use Template

Have you ever thought about automating K8S manifest (YAML) files to simplify building DEV, TEST, and PROD environments?

Modern-day tools do the heavy lifting by automagically building the desired infrastructure in minutes rather than days, weeks, or even months. However, the human factor of making errors while writing these manifests/YAML files still exists!

During my storage admin days, we used to copy-paste variables from the requestor or platform teams’ requirements into an Excel file. Using Excel formulae and “drag” magic, the commands to run on the enterprise-class storage arrays were generated in bulk. I kept wondering how to apply similar wisdom in the SRE world…

Say Hello to Jinja2 templates

Jinja2 is a modern templating language for Python developers, modelled after Django’s templates. It is commonly used to create HTML, XML, or other markup returned to the user in an HTTP response, but Jinja2 templates work for many other use cases as well. In this blog, a Jinja2 template is used with Python to build manifests/YAML files.

Installing Jinja2 is pretty easy using pip or easy_install

pip install jinja2
 
easy_install jinja2

Jinja2 Templates

Jinja2 templates contain placeholders which are replaced by values when the template is rendered. The values are passed to the template through the render method, and they can be strings, dictionaries, or even values used inside control structures such as if and for. A tiny rendering example follows the delimiter list below.

Delimiters

{% ... %} delimits statements

{{ ... }} delimits expressions that are printed to the template output

{# ... #} delimits comments that are not included in the rendered output

# ... ## marks line statements
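
Putting the first two delimiters together, here is a tiny sketch of rendering a template from a string (hypothetical variable names, just to show the mechanics):

from jinja2 import Template

# {{ }} prints an expression, {% %} wraps statements, {# #} is a comment
tmpl = Template(
    "{# environments to render #}"
    "{% for env in environments %}environment: {{ env }}\n{% endfor %}"
)

print(tmpl.render(environments=["dev", "pre", "prd"]))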

An example of a Jinja2 template manifest file (templates/pv_pvc.yml):

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ pv.name }}
  labels:
    type: {{ pv.type }}
spec:
  storageClassName: {{ pv.class }}
  capacity:
    storage: {{ pv.capacity }}
  accessModes:
    - {{ pv.mode }}
  hostPath:
    path: {{ pv.path }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bitwarden
  name: bitwarden
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
---

In the above manifest/YAML file (pv_pvc.yml), the first half defines the PersistentVolume, where the values are template variables of the form ‘pv.<variable name>’.

Property values (properties.csv):

name,type,class,capacity,mode,path,environment
bitwarden.sample,local,manual,800Mi,ReadWriteOnce,'/mnt/sample/bitwarden', sample
bitwarden.pre,local,manual,1000Mi,ReadWriteOnce,'/mnt/pre/bitwarden', pre
bitwarden.prd,local,nfs,2000Mi,ReadWriteOnce,'/mnt/prd/bitwarden', prd
bitwarden.dev,local,manual,800Mi,ReadWriteOnce,'/mnt/dev/bitwarden', dev

In the above properties.csv file, the first line is the header and the second is a sample row. The next three lines are meant for the DEV, TEST, and PROD environments.

Python Jinja2 Code with documentation

from jinja2 import Environment, FileSystemLoader

# Read properties.csv, skipping the header (index 0) and the sample row (index 1)
lists = []
with open('properties.csv') as csvfile:
    for index, line in enumerate(csvfile):
        if index not in (0, 1):
            data_line = line.rstrip().split(',')
            lists.append(data_line)

print(lists)

# Load the Jinja2 template from the templates/ directory
file_loader = FileSystemLoader("templates")
env = Environment(loader=file_loader)
template = env.get_template("pv_pvc.yml")

# Hard-coded rows kept for reference; normally the values come from properties.csv
dev = ["bitwarden.dev", "local", "manual", "800Mi", "ReadWriteOnce", "'/mnt/dev/bitwarden'", "dev"]
uat = ["bitwarden.pre", "local", "manual", "1000Mi", "ReadWriteOnce", "'/mnt/pre/bitwarden'", "pre"]
prod = ["bitwarden.prd", "local", "nfs", "2000Mi", "ReadWriteOnce", "'/mnt/prd/bitwarden'", "prd"]
# lists = [dev, uat, prod]

pv = {}
for index, element in enumerate(lists):
    pv["name"] = element[0]
    pv["type"] = element[1]
    pv["class"] = element[2]
    pv["capacity"] = element[3]
    pv["mode"] = element[4]
    pv["path"] = element[5]
    # template.render replaces the Jinja2 template variables with the row values
    output = template.render(pv=pv)
    # Write one manifest/YAML file per environment, named after the last column
    try:
        with open(f"{element[-1]}.yml", "w+") as yml_file:
            yml_file.write(output)
    except Exception as error:
        print(f"Unable to write the file {element[-1]}.yml due to the error {error}")

Results – python process_yamls.py

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bitwarden.dev
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 800Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: '/mnt/dev/bitwarden'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bitwarden
  name: bitwarden
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
---

dev.yml (DEV)

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bitwarden.pre
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1000Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: '/mnt/pre/bitwarden'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bitwarden
  name: bitwarden
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
---

pre.yml (TEST)

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bitwarden.prd
  labels:
    type: local
spec:
  storageClassName: nfs
  capacity:
    storage: 2000Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: '/mnt/prd/bitwarden'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bitwarden
  name: bitwarden
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
---

prd.yml (PROD)

Three files are generated based on the values provided in the properties.csv file. In the example code above, I’ve hard-coded the values for simplicity and to keep the code easy to understand. This can be improved and applied to more complex manifests/YAMLs like ConfigMaps, Deployments, DaemonSets, ReplicaSets, etc… A small sketch of one such improvement follows.
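
For instance, reading the CSV with csv.DictReader lets the header row drive the template variable names, so nothing is hard-coded (a minimal sketch assuming the same properties.csv and pv_pvc.yml used above):

import csv
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("pv_pvc.yml")

with open("properties.csv") as csvfile:
    for row in csv.DictReader(csvfile):
        # Each row becomes a dict keyed by the CSV header: name, type, class, capacity, ...
        rendered = template.render(pv=row)
        with open(f"{row['environment'].strip()}.yml", "w") as yml_file:
            yml_file.write(rendered)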

I’d like to thank Gangireddy, who came up with this idea of automating manifests to take BAU work off SRE/DevOps engineers’ plates!

If you enjoyed this post, I’d be very grateful if you’d help it spread by emailing it to a friend, or sharing it on your social platforms. Thank you!

What am I missing here? Let me know in the comments and I’ll add it in!

Happy New Year 2021

Wishing you all a happy and prosperous New Year 2021.

Image courtesy: one of my all-time favourite followers 🥰

Thank you for stopping by. I’m no longer going to continue this blog site 😕

Please visit my new blog address

https://Wordpress.vinaybabu.in

Configure backend databases for Rundeck

This is a step-by-step guide to configuring a database backend for Rundeck to replace the default H2, an embedded database. H2 is great for testing and experimentation, but it is not suited to production instances. A Black Duck scan run against the default setup flags the H2 DB as one of the vulnerabilities.

(more…)