User "system:serviceaccount:hm-prefect:prefect-worker" cannot create resource "jobs" in API group "batch" in the namespace "default"

Originally asked on Stack Overflow. Here is a copy. :smiley:


As Prefect work pools and workers are now generally available as of 2.11.0, I am trying to switch from the Prefect Agent to the Prefect Worker.

I deployed the Prefect Server with:


helm upgrade \
  prefect-server \
  prefect-server \
  --install \
  --repo=https://prefecthq.github.io/prefect-helm \
  --namespace=hm-prefect \
  --create-namespace \
  --values=prefect-server-values.yaml

prefect-server-values.yaml:

server:
  image:
    repository: docker.io/prefecthq/prefect
    prefectTag: 2.11.0-python3.11-kubernetes
  publicApiUrl: https://prefect.mydomain.com/api
Then I deployed the Prefect Worker with:

helm upgrade \
  prefect-worker \
  prefect-worker \
  --install \
  --repo=https://prefecthq.github.io/prefect-helm \
  --namespace=hm-prefect \
  --create-namespace \
  --values=prefect-worker-values.yaml

prefect-worker-values.yaml:

worker:
  image:
    repository: docker.io/prefecthq/prefect
    prefectTag: 2.11.0-python3.11-kubernetes
  apiConfig: server
  config:
    workPool: hm-kubernetes-pool
  serverApiConfig:
    apiUrl: http://prefect-server.hm-prefect.svc:4200/api
➜ helm list -n hm-prefect
NAME          	NAMESPACE 	REVISION	UPDATED                             	STATUS  	CHART                    	APP VERSION
prefect-server	hm-prefect	1       	2023-07-31 17:07:50.401888 -0700 PDT	deployed	prefect-server-2023.07.27	2.11.1
prefect-worker	hm-prefect	1       	2023-07-31 17:18:57.586027 -0700 PDT	deployed	prefect-worker-2023.07.27	2.11.1

➜ kubectl get deployment -n hm-prefect
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
prefect-server   1/1     1            1           17m
prefect-worker   1/1     1            1           6m2s

And I can see the Prefect Worker in the UI.


Then I generated the deployment YAML file with:

➜ prefect deployment build src/main.py:print_platform --name=print-platform --infra-block=kubernetes-job/print-platform-kubernetes-job-block --apply --pool=hm-kubernetes-pool
Found flow 'print-platform'
Deployment YAML created at '/Users/hongbo-miao/Clouds/Git/hongbomiao.com/hm-prefect/workflows/print-platform/print_platform-deployment.yaml'.
Deployment storage None does not have upload capabilities; no files uploaded.  Pass --skip-upload to suppress this warning.
Deployment 'print-platform/print-platform' successfully created with id '7f7603ca-697c-4dca-9bcb-28a889165fe8'.

Here is the generated file print_platform-deployment.yaml content:

###
### A complete description of a Prefect Deployment for flow 'print-platform'
###
name: print-platform
description: null
version: e4da5dae95465f73a0e3e0bece1555bb
# The work queue that will handle this deployment's runs
work_queue_name: default
work_pool_name: hm-kubernetes-pool
tags: []
parameters: {}
schedule: null
is_schedule_active: true
infra_overrides: {}

###
### DO NOT EDIT BELOW THIS LINE
###
flow_name: print-platform
manifest_path: null
infrastructure:
  type: kubernetes-job
  env: {}
  labels: {}
  name: null
  command: null
  image: ghcr.io/hongbo-miao/hm-prefect-print-platform:latest
  namespace: hm-prefect
  service_account_name: null
  image_pull_policy: Always
  cluster_config: null
  job:
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels: {}
    spec:
      template:
        spec:
          parallelism: 1
          completions: 1
          restartPolicy: Never
          containers:
          - name: prefect-job
            env: []
  customizations: []
  job_watch_timeout_seconds: null
  pod_watch_timeout_seconds: 60
  stream_output: true
  finished_job_ttl: null
  _block_document_id: 1f5b585c-581d-4ca4-adfa-c69dc5319941
  _block_document_name: print-platform-kubernetes-job-block
  _is_anonymous: false
  block_type_slug: kubernetes-job
  _block_type_slug: kubernetes-job
storage: null
path: /opt/prefect/flows
entrypoint: src/main.py:print_platform
parameter_openapi_schema:
  title: Parameters
  type: object
  properties: {}
  required: null
  definitions: null
timestamp: '2023-08-01T00:32:45.975410+00:00'
triggers: []

Next, I tried to run it with:

➜ prefect deployment run print-platform/print-platform
Creating flow run for deployment 'print-platform/print-platform'...
Created flow run 'onyx-fennec'.
└── UUID: 065326e7-1d3e-455a-86fb-b15d553af5bd
└── Parameters: {}
└── Scheduled start time: 2023-07-31 17:32:50 PDT (now)
└── URL: https://prefect.mydomain.com/flow-runs/flow-run/065326e7-1d3e-455a-86fb-b15d553af5bd

However, this gave me an error:

Worker 'KubernetesWorker 180550e0-fe47-4a0d-998d-b772d53e14b0' submitting flow run '065326e7-1d3e-455a-86fb-b15d553af5bd'
Creating Kubernetes job...

Failed to submit flow run '065326e7-1d3e-455a-86fb-b15d553af5bd' to infrastructure.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect_kubernetes/worker.py", line 628, in _create_job
    job = batch_client.create_namespaced_job(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/batch_v1_api.py", line 210, in create_namespaced_job
    return self.create_namespaced_job_with_http_info(namespace, body, **kwargs)  # noqa: E501
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/batch_v1_api.py", line 309, in create_namespaced_job_with_http_info
    return self.api_client.call_api(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
                    ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 391, in request
    return self.rest_client.POST(url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 276, in POST
    return self.request("POST", url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 235, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '7871421d-254d-4d72-9a30-a7ff3306822b', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': 'e5d21bfa-f8ff-4689-965a-2c8efc99569b', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'f86dde2c-b36e-4c12-a44c-31e36a8ecf05', 'Date': 'Tue, 01 Aug 2023 00:32:51 GMT', 'Content-Length': '321'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"jobs.batch is forbidden: User \"system:serviceaccount:hm-prefect:prefect-worker\" cannot create resource \"jobs\" in API group \"batch\" in the namespace \"default\"","reason":"Forbidden","details":{"group":"batch","kind":"jobs"},"code":403}



During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 834, in _submit_run_and_capture_errors
    result = await self.run(
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect_kubernetes/worker.py", line 506, in run
    job = await run_sync_in_worker_thread(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 91, in run_sync_in_worker_thread
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect_kubernetes/worker.py", line 637, in _create_job
    message += ": " + exc.body["message"]
                      ~~~~~~~~^^^^^^^^^^^
TypeError: string indices must be integers, not 'str'

Completed submission of flow run '065326e7-1d3e-455a-86fb-b15d553af5bd'
Reported flow run '065326e7-1d3e-455a-86fb-b15d553af5bd' as crashed: Flow run could not be submitted to infrastructure

Inside, this line seems to be the issue:

{โ€œkindโ€:โ€œStatusโ€,โ€œapiVersionโ€:โ€œv1โ€,โ€œmetadataโ€:{},โ€œstatusโ€:โ€œFailureโ€,โ€œmessageโ€:โ€œjobs.batch is forbidden: User "system:serviceaccount:hm-prefect:prefect-worker" cannot create resource "jobs" in API group "batch" in the namespace "default"โ€,โ€œreasonโ€:โ€œForbiddenโ€,โ€œdetailsโ€:{โ€œgroupโ€:โ€œbatchโ€,โ€œkindโ€:โ€œjobsโ€},โ€œcodeโ€:403}

I am not sure why it tries to create the job in the namespace default instead of hm-prefect (presumably the work pool's configuration, rather than the infra block's namespace: hm-prefect, is what the worker uses). Any ideas? Thanks!
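
For reference, the denial itself can be reproduced outside of Prefect with kubectl's built-in authorization check (a diagnostic sketch, assuming kubectl access to the same cluster; I would expect "no" for default and "yes" for hm-prefect, where the Helm chart created the worker's Role):

# Can the worker's service account create Jobs in each namespace?
kubectl auth can-i create jobs.batch \
  --as=system:serviceaccount:hm-prefect:prefect-worker \
  --namespace=default
kubectl auth can-i create jobs.batch \
  --as=system:serviceaccount:hm-prefect:prefect-worker \
  --namespace=hm-prefect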

Hi!
You can try adding this in your worker's values:
namespaceOverride: "hm-prefect"

To list all the available values (more than those shown in the documentation here):
helm show values prefect/prefect-worker
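
For context, a minimal sketch of where that key would sit, assuming the chart exposes namespaceOverride at the top level of its values (as the suggestion above implies):

# prefect-worker-values.yaml (sketch)
namespaceOverride: "hm-prefect"
worker:
  config:
    workPool: hm-kubernetes-pool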

Thanks @rizziemma! My Prefect Worker itself is already in the namespace hm-prefect. The issue is with the job it creates.

@Stoat answered me on Stack Overflow: kubernetes - Prefect: User "system:serviceaccount:hm-prefect:prefect-worker" cannot create resource "jobs" in API group "batch" in the namespace "default" - Stack Overflow

However, I ran into a new issue, which I posted in UPDATE 1. Here is a copy:

UPDATE 1:

Based on @Stoat's feedback, I did:

➜ prefect init
? Would you like to initialize your deployment configuration with a recipe? [Use arrows to move; enter to select; n to select none]
โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ    โ”ƒ Name         โ”ƒ Description                                                                                   โ”ƒ
โ”กโ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚    โ”‚ s3           โ”‚ Store code within an S3 bucket                                                                โ”‚
โ”‚ >  โ”‚ docker       โ”‚ Store code within a custom docker image alongside its runtime environment                     โ”‚
โ”‚    โ”‚ docker-s3    โ”‚ Store code within S3 and build a custom docker image for runtime                              โ”‚
โ”‚    โ”‚ docker-azure โ”‚ Store code within an Azure Blob Storage container and build a custom docker image for runtime โ”‚
โ”‚    โ”‚ azure        โ”‚ Store code within an Azure Blob Storage container                                             โ”‚
โ”‚    โ”‚ docker-gcs   โ”‚ Store code within GCS and build a custom docker image for runtime                             โ”‚
โ”‚    โ”‚ docker-git   โ”‚ Store code within a git repository and build a custom docker image for runtime                โ”‚
โ”‚    โ”‚ local        โ”‚ Store code on a local filesystem                                                              โ”‚
โ”‚    โ”‚ git          โ”‚ Store code within git repository                                                              โ”‚
โ”‚    โ”‚ gcs          โ”‚ Store code within a GCS bucket                                                                โ”‚
โ””โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
    No, I'll use the default deployment configuration.
                         Required inputs for 'docker' recipe
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ Field Name โ”ƒ Description                                                          โ”ƒ
โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ image_name โ”‚ The image name, including repository, to give the built Docker image โ”‚
โ”‚ tag        โ”‚ The tag to give the built Docker image                               โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
image_name: ghcr.io/hongbo-miao/hm-prefect-print-platform
tag: latest
---------------
Created project in /Users/hongbo-miao/Clouds/Git/hongbomiao.com/hm-prefect/workflows/print-platform with the following new files:
prefect.yaml

I removed the build and push sections as my Docker image is already built. Here is my updated prefect.yaml:

name: print-platform
prefect-version: 2.11.1
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /opt/prefect/print-platform
deployments:
- name: print-platform
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: src/main.py:print_platform
  parameters: {}
  work_pool:
    name: hm-kubernetes-pool
    work_queue_name: null
    job_variables:
      image: ghcr.io/hongbo-miao/hm-prefect-print-platform
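
As an aside, the default Kubernetes work pool template exposes a namespace job variable, so presumably the job's namespace could also be pinned per deployment (a sketch of the work_pool section above; the namespace key is based on the worker's base job template and is an assumption, not something verified in this thread):

  work_pool:
    name: hm-kubernetes-pool
    work_queue_name: null
    job_variables:
      image: ghcr.io/hongbo-miao/hm-prefect-print-platform
      namespace: hm-prefect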

To avoid the interactive prompts, this is how I deploy:

➜ prefect --no-prompt deploy
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Deployment 'print-platform/print-platform' successfully created with id 'e5bb4249-3a9f-4d62-bee2-fc9dce69fbd8'. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

View Deployment in UI: https://prefect.hongbomiao.com/deployments/deployment/e5bb4249-3a9f-4d62-bee2-fc9dce69fbd8


To execute flow runs from this deployment, start a worker in a separate terminal that pulls work from the 'hm-kubernetes-pool' work pool:

        $ prefect worker start --pool 'hm-kubernetes-pool'

To schedule a run for this deployment, use the following command:

        $ prefect deployment run 'print-platform/print-platform'

Next, I ran:

➜ prefect deployment run print-platform/print-platform
Creating flow run for deployment 'print-platform/print-platform'...
Created flow run 'charming-chimpanzee'.
└── UUID: 1f83d2ee-2584-424e-96ff-11e236ff7f1b
└── Parameters: {}
└── Scheduled start time: 2023-08-01 13:57:20 PDT (now)
└── URL: https://prefect.hongbomiao.com/flow-runs/flow-run/1f83d2ee-2584-424e-96ff-11e236ff7f1b
Worker 'KubernetesWorker 59a0fab6-b9c8-4668-b626-9a5cc0311250' submitting flow run '1f83d2ee-2584-424e-96ff-11e236ff7f1b'
Creating Kubernetes job...
Failed to submit flow run '1f83d2ee-2584-424e-96ff-11e236ff7f1b' to infrastructure.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 403, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1053, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
                       ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0xffffac3a3c10>: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 834, in _submit_run_and_capture_errors
    result = await self.run(
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect_kubernetes/worker.py", line 506, in run
    job = await run_sync_in_worker_thread(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 91, in run_sync_in_worker_thread
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/prefect_kubernetes/worker.py", line 628, in _create_job
    job = batch_client.create_namespaced_job(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/batch_v1_api.py", line 210, in create_namespaced_job
    return self.create_namespaced_job_with_http_info(namespace, body, **kwargs)  # noqa: E501
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/batch_v1_api.py", line 309, in create_namespaced_job_with_http_info
    return self.api_client.call_api(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
                    ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 391, in request
    return self.rest_client.POST(url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 276, in POST
    return self.request("POST", url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 169, in request
    r = self.pool_manager.request(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/request.py", line 78, in request
    return self.request_encode_body(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/request.py", line 170, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/poolmanager.py", line 376, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='10.43.0.1', port=443): Max retries exceeded with url: /apis/batch/v1/namespaces/default/jobs (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffffac3a3c10>: Failed to establish a new connection: [Errno 113] No route to host'))
Completed submission of flow run '1f83d2ee-2584-424e-96ff-11e236ff7f1b'
Reported flow run '1f83d2ee-2584-424e-96ff-11e236ff7f1b' as crashed: Flow run could not be submitted to infrastructure

This time the error is a little bit different. However, I am still lost.
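
For completeness, the new failure looks like the worker pod cannot reach the in-cluster Kubernetes API at 10.43.0.1:443 (the address in the traceback). That can be probed from inside the pod (a sketch; it assumes the worker image ships Python, which the prefecthq/prefect images do):

# Exec into the worker deployment's pod and try to open a TCP connection
# to the API server address seen in the traceback.
kubectl exec -it -n hm-prefect deploy/prefect-worker -- \
  python -c "import socket; socket.create_connection(('10.43.0.1', 443), timeout=5); print('reachable')"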

I thought that the worker didn't retrieve the correct namespace, but I understand that it's actually the work pool configuration that overrides it!
I have the same worker configuration now but with Prefect Cloud, so I guess the new issue is linked to your Prefect Server instance. Sorry I can't help further on this topic :frowning:

No problem, thanks @rizziemma! :smiley:

Hi @Hongbo-Miao, I had the same problem, and it was because the workspace was set to 'default'. After I switched it to 'prefect', it all worked!

Thanks @johnkangw! :smiley: I will check soon and post back!