### What happened?
We are self-hosting GH runners on Kubernetes and were trying to build an image with devspace build.
However, this did not work, because devspace detected that it was running in-cluster:
```
Run devspace build \
  devspace build \
    --tag "my-tag" \
    --skip-push=false \
    --debug
  shell: /usr/bin/bash -e {0}
08:36:30 info Using namespace 'actions-runners'
08:36:30 info Using kube context 'incluster'
08:36:30 fatal please make sure you have an existing valid kube config. You might want to check one of the following things:
* Make sure you can use 'kubectl get namespaces' locally
* If you are using Loft, you might want to run 'devspace create space' or 'loft create space'
```
We are using self-hosted runners hosted in a Kubernetes cluster, so devspace seems to detect that the runners are inside a cluster and tries to use the "incluster" context.
We tried to work around the issue by setting KUBECONFIG=/dev/null and by removing images.my-image.buildKit.inCluster (which we do not set in our devspace.yaml) with a profile patch, but to no avail.
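For reference, the patch we tried looked roughly like the sketch below (the profile name is illustrative; the syntax follows devspace v6 profile patches):

```yaml
profiles:
  - name: ci
    patches:
      - op: remove
        path: images.my-image.buildKit.inCluster
```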
What helped was unsetting some of the KUBERNETES_ variables:
```yaml
- run: |
    unset KUBERNETES_SERVICE_HOST
    unset KUBERNETES_SERVICE_PORT
    devspace build \
      --tag "${{ steps.check_release.outputs.simple_api_version }}" \
      --skip-push=false
```

### What did you expect to happen instead?
We expected devspace to stick to the documented behavior of using the local daemon when buildKit.inCluster is not specified and to ignore the Kubernetes context, without needing a workaround, like so:
```
Run devspace build \
  devspace build \
    --tag "my-tag" \
    --skip-push=false
  shell: /usr/bin/bash -e {0}
warn Unable to create new kubectl client: kube config is invalid
build:my-image Rebuild image ghcr.io/my-orga/my-repo/my-image because tag is missing
build:my-image Building image 'ghcr.io/my-orga/my-repo/my-image:my-tag' with engine 'buildkit'
... building ensues ...
```
This is also the behavior when running devspace build on GitHub hosted runners or locally without a Kubernetes config/context.
### How can we reproduce the bug? (as minimally and precisely as possible)
Running the following config in a container of a Pod on Kubernetes will probably reproduce the bug; "emulating" an in-cluster environment (setting the appropriate env vars) may work as well.
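To emulate the in-cluster environment without a real cluster, setting the two service env vars on the build step should be enough. An untested sketch of such a GitHub Actions step (the values are arbitrary, only non-emptiness should matter):

```yaml
- run: devspace build --tag "my-tag" --skip-push=false
  env:
    KUBERNETES_SERVICE_HOST: "10.96.0.1"
    KUBERNETES_SERVICE_PORT: "443"
```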
My devspace.yaml:
```yaml
images:
  my-image:
    image: ghcr.io/my-orga/my-repo/my-image
    tags: ["${GIT_SHA}"]
    dockerfile: ./Dockerfile
    buildKit:
      args: ["--platform", "linux/amd64,linux/arm64"]
    context: .
...
```

### Local Environment:
- DevSpace Version: devspace version 6.3.20
- Operating System: linux
- ARCH of the OS: AMD64

### Kubernetes Cluster:
- Cloud Provider: other
- Kubernetes Version: 1.35
### Anything else we need to know?
Nope.