Tuesday, 24 June 2025

Zot Registry retry issue during pushes

The Helm chart for Zot Registry doesn't ship with any nginx ingress annotations, so big layers that exceed the default proxy-body-size get interrupted by the ingress controller pods. These interruptions result in retries on layer pushes. Applying the annotations below solves the problem, along with a few additional precautions.


nginx.ingress.kubernetes.io/client-body-buffer-size: 1m
nginx.ingress.kubernetes.io/client-body-timeout: "900"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-buffering: "off"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "900"
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
nginx.ingress.kubernetes.io/proxy-send-timeout: "900"
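
If you manage the ingress through the chart itself, these annotations typically go under the chart's ingress values. A minimal sketch, assuming the zot chart exposes an ingress.annotations map (verify the exact keys against your chart version's values.yaml):

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
    # ...plus the rest of the annotations listed above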

Monday, 23 June 2025

MacBook Pro T2 100 Mbps limit issue

Hello dear reader, you are on a page that I wrote for my own future use; it's a self-note for myself.

I encountered a network performance bottleneck on one of my homelab machines, an old MacBook Pro (2019), where the connection was limited to 100 Mbps. After investigation, I discovered the system was using a Fast Ethernet adapter instead of an available Gigabit Ethernet adapter. Here's how I upgraded the network speed from 100 Mbps to 1 Gbps.

First, I checked what network interfaces were available:

ip link show
cat /sys/class/net/*/speed 2>/dev/null | head -5
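
The wildcard read above doesn't tell you which speed belongs to which interface, so a small convenience loop (my own addition, not part of the original session) pairs them up; interfaces that are down just print an empty speed:

for d in /sys/class/net/*; do
    printf '%s: %s Mbps\n' "$(basename "$d")" "$(cat "$d/speed" 2>/dev/null)"
done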

This revealed multiple USB Ethernet adapters with different capabilities. To identify the hardware properly:

lsusb | grep -i ether
ethtool enx00e04c36076b

The investigation showed two USB adapters:
- Realtek RTL8152 Fast Ethernet Adapter (100Mbps) - currently active
- ASIX AX88179 Gigabit Ethernet (1000Mbps) - available but unused

After connecting the cable to the gigabit adapter, I verified it was detected:

ip link set enx000000000272 up
ethtool enx000000000272 | grep -E 'Speed|Link detected'

The output confirmed 1000Mb/s speed and link detection. Now I needed to configure the system to use this adapter as the primary interface.

I checked the current netplan configuration and created a backup:

ls /etc/netplan/
cp /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak

Then I updated the netplan configuration to include both adapters with proper routing priorities:

cat > /etc/netplan/50-cloud-init.yaml << 'EOF'
network:
  version: 2
  ethernets:
    enx00e04c36076b:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 200
    enx000000000272:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 100
EOF

The key here is the route-metric values - lower numbers have higher priority. The gigabit adapter gets metric 100 (primary) while the fast ethernet gets metric 200 (backup).

Applied the configuration and verified the results:

netplan apply
sleep 10
ip addr show enx000000000272
ip route show

The routing table now showed the gigabit adapter as the default route with metric 100, while the 100Mbps adapter remained as backup with metric 200.

Final verification confirmed the upgrade was successful:

ethtool enx000000000272 | grep -E 'Speed|Duplex|Link detected'
ping -c 3 8.8.8.8

The results showed:
- Speed: 1000Mb/s
- Link detected: yes
- Network connectivity working properly

This solution provides several benefits:
- 10x network performance improvement (100Mbps → 1000Mbps)
- High availability (both adapters remain configured)
- Automatic failover to the backup adapter if the primary fails (quick test below)
- Configuration persists across reboots
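
To sanity-check the failover claim, a quick test (assuming the same interface names as above) is to take the gigabit link down and watch the default route fall back:

ip link set enx000000000272 down   # take the primary (gigabit) link down
ip route show default              # the metric-200 adapter should now be the default
ip link set enx000000000272 up     # bring the primary back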

If you need to troubleshoot similar issues, useful commands include:

# Map interfaces to USB devices
for i in /sys/class/net/enx*/device; do
    echo "Interface: $(basename $(dirname $i))"
    cat $i/uevent | grep -E 'PRODUCT|DRIVER'
    echo
done

# Load specific drivers if needed
modprobe ax88179_178a
dmesg | tail -20 | grep -E 'enx|eth|usb'

To revert changes if needed:

cp /etc/netplan/50-cloud-init.yaml.bak /etc/netplan/50-cloud-init.yaml
netplan apply



Safely add a new kubeconfig into main kubeconfig

Hello dear reader, you are on a page that I wrote for my own future use and it's a self-note for myself.

I built a simple method for myself after losing my kubeconfig while merging new kubeconfigs. It should normally be easy and safe, but it can be harsh if you need to deal with a set of Kubernetes clusters, some of which have been installed with their default values since the very first day of bootstrapping.

Rancher's "local" context name is one example of this situation. If you need to deal with two Rancher clusters that were installed with their defaults, you will very likely overwrite one cluster with the other, or end up with only one, since they both have the same context name.

For such cases, I replace "local" with something else in the new kubeconfig (note that this sed replaces every occurrence of "local" in the file, so double-check the result):

sed -i -e 's/local/beast/g' beast__kube_config.yaml

then I merge everything into a new file, something like ~/.kube/config_new:

KUBECONFIG=/home/veysel/.kube/config:/home/veysel/.kube/beast__kube_config.yaml kubectl config view --flatten > ~/.kube/config_new

Then I check whether everything looks okay with kubectx:

KUBECONFIG=/home/veysel/.kube/config_new kubectx
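
If kubectx isn't installed, plain kubectl can do the same check:

KUBECONFIG=/home/veysel/.kube/config_new kubectl config get-contexts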

Then I use this new file for a while, just in case, before replacing the default ~/.kube/config with it:

export KUBECONFIG=/home/veysel/.kube/config_new

Then, once I decide everything is okay, I replace the default kubeconfig with the new one. You can still keep a backup if you want:

cp /home/veysel/.kube/config /home/veysel/.kube/config_bak
mv /home/veysel/.kube/config_new /home/veysel/.kube/config



Thanks to Claude for the automation script below. Please note that the script hasn't been tested; I will replace this sentence whenever I have time to test it.

#!/bin/bash

# Simple kubeconfig merge script

# Check if config file provided
if [ -z "$1" ]; then
    echo "Usage: $0 <new_kubeconfig_file> [context_to_replace] [new_context_name]"
    echo "Example: $0 beast__kube_config.yaml local beast"
    exit 1
fi

NEW_CONFIG="$1"
OLD_CONTEXT="${2:-local}"
NEW_CONTEXT="${3:-$(basename "$NEW_CONFIG" .yaml)}"

# Replace context name in the new config
echo "Replacing '$OLD_CONTEXT' with '$NEW_CONTEXT' in $NEW_CONFIG..."
sed -i -e "s/$OLD_CONTEXT/$NEW_CONTEXT/g" "$NEW_CONFIG"

# Merge configs
echo "Merging configs..."
KUBECONFIG=~/.kube/config:$NEW_CONFIG kubectl config view --flatten > ~/.kube/config_new

# Test the new config
echo "Testing new config..."
KUBECONFIG=~/.kube/config_new kubectx

# Ask to proceed
echo -n "Does everything look good? (y/n): "
read -r response

if [ "$response" = "y" ]; then
    # Backup old config
    cp ~/.kube/config ~/.kube/config_bak
    
    # Replace with new config
    mv ~/.kube/config_new ~/.kube/config
    echo "Done! Old config backed up to ~/.kube/config_bak"
else
    echo "Aborted. New config saved as ~/.kube/config_new"
    echo "You can test it with: export KUBECONFIG=~/.kube/config_new"
fi
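
Assuming the script is saved as merge-kubeconfig.sh (the file name is arbitrary), usage would look like this:

chmod +x merge-kubeconfig.sh
./merge-kubeconfig.sh beast__kube_config.yaml local beast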

Thursday, 19 September 2024

Fixing lens error: unknown command "oidc-login" for "kubectl"

Here is a simple snippet for everyone who faces this issue :)

First of all, don't forget to set the environment variable

export PATH="$PATH:$HOME/.krew/bin"

and use the full kubectl path as I did below.


    - name: your_oidc
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
            - oidc-login
            - get-token
            - --oidc-issuer-url=https://your-oidc-issuer
            - --oidc-client-id=kubernetes
            - --grant-type=password
            - --username=veysel.sahin
            - --password=YOURPASSWORD
          command: /usr/local/bin/kubectl
          env: null
          interactiveMode: IfAvailable
          provideClusterInfo: false
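
If the oidc-login plugin itself is missing, it can be installed through krew (assuming krew is already set up):

kubectl krew install oidc-login
kubectl oidc-login --help    # quick check that the plugin resolves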


Tuesday, 5 May 2020

Block RFC1918 ranges from the external interface

We have to find the external interface first. You can find it with the ip route get 8.8.8.8 command; the 5th field of the resulting row is your external interface, and you can extract the name with awk as shown below. We export the interface name to make the process easier. With the commands I prepared below, you can block the RFC1918 subnets (plus the CGNAT range 100.64.0.0/10 and the link-local range 169.254.0.0/16) from going out through the external interface. You can also replace the $INET_IFACE variable with a hard-coded name such as eth0. I found my external interface with this command: ip route get 8.8.8.8 | awk -- '{printf $5}'

export INET_IFACE=$(ip route get 8.8.8.8 | awk -- '{printf $5}')
iptables -A FORWARD -o $INET_IFACE -d 10.0.0.0/8 -j REJECT 
iptables -A FORWARD -o $INET_IFACE -d 172.16.0.0/12 -j REJECT 
iptables -A FORWARD -o $INET_IFACE -d 192.168.0.0/16 -j REJECT
iptables -A FORWARD -o $INET_IFACE -d 100.64.0.0/10 -j REJECT
iptables -A FORWARD -o $INET_IFACE -d 169.254.0.0/16 -j REJECT

Hard-coded way:

iptables -A FORWARD -o eth0 -d 10.0.0.0/8 -j REJECT 
iptables -A FORWARD -o eth0 -d 172.16.0.0/12 -j REJECT 
iptables -A FORWARD -o eth0 -d 192.168.0.0/16 -j REJECT
iptables -A FORWARD -o eth0 -d 100.64.0.0/10 -j REJECT
iptables -A FORWARD -o eth0 -d 169.254.0.0/16 -j REJECT
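
To verify the rules afterwards, list the FORWARD chain with counters:

iptables -L FORWARD -n -v --line-numbers

Note that plain iptables rules don't survive a reboot on their own; use iptables-save together with your distro's persistence mechanism if you want them to be permanent.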

Monday, 27 January 2020

Trust a self-signed certificate on Linux Mint


Add the root (and intermediate, if any) certificate to the system trust store:


sudo mkdir /usr/local/share/ca-certificates/extra
sudo cp root.cert.pem /usr/local/share/ca-certificates/extra/root.cert.crt
sudo update-ca-certificates

Important tip: you have to select the certificate at the top of the list that opens after the "sudo update-ca-certificates" command.
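
To check that the certificate is actually trusted afterwards, openssl can verify a certificate against the updated system bundle (server.cert.pem below is a placeholder for whatever certificate you want to check):

openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt server.cert.pem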

Tuesday, 22 January 2019

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Hi, you have to unmask the docker service with the commands given below if you are getting an error like "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" during an etcd snapshot-save:
rke --debug etcd snapshot-save

Unmask commands:
systemctl unmask docker.service
systemctl unmask docker.socket
systemctl start docker.service


Friday, 26 October 2018

Solving Rancher 2.x default storageclass failure on the GUI

Hi everyone,
If you are using Rancher 2.x for Kubernetes with nfs-provisioner for your storageclass and you cannot set nfs as the default storageclass from the GUI, you have to set it via kubectl.

You can do this by running this command in your main system.
kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

If you are getting an error like "cannot connect to host:8080", you must run kubectl with the --kubeconfig param pointing to your kube_config_cluster.yml.

example usage:
 kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' --kubeconfig=~/kube_config_cluster.yml

You can also set kube_config_cluster.yml as the KUBECONFIG environment variable with
export KUBECONFIG=/path/to/kube_config_cluster.yml
or add it to the /etc/environment file for persistent usage.
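
Afterwards you can confirm the change; the default storageclass is marked with "(default)" in the output:

kubectl get storageclass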

Thursday, 25 October 2018

Failed to list *v1alpha1.Certificate: the server could not find the requested resource (get certificates.certmanager.k8s.io)

Hi everyone, if you are getting an error like "Failed to list *v1alpha1.Certificate: the server could not find the requested resource (get certificates.certmanager.k8s.io)" in the kube-system cert-manager pod's logs, you just have to install the cert-manager catalog app into the kube-system namespace in the System project.

Wednesday, 24 October 2018

kubernetes in container dns resolution problems with flannel vxlan backend

You can change the network backend type from vxlan to host-gw if you are using an OVH or Kimsufi server, you are not sure about the server's vxlan support, and your Docker containers' outgoing connections fail because of name resolution. I am using Rancher and Kubernetes, and I changed the net-conf.json key in the canal-config ConfigMap.

Change net-conf.json from

{
  "Network": "10.42.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

to

{
  "Network": "10.42.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
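
In a Rancher-managed cluster the edit can be done with kubectl. I'm assuming here that the ConfigMap is named canal-config in the kube-system namespace and that the pods carry the k8s-app=canal label (both may differ in your setup); the canal pods need a restart to pick up the change:

kubectl -n kube-system edit configmap canal-config
kubectl -n kube-system delete pod -l k8s-app=canal   # the daemonset recreates them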
