Hi all,
I looked at the Prefect 3 quickstart guide and the flow.serve() function. It is a great alternative to work pools and workers when the requirements are more static.
Let’s assume I want to use this approach in Kubernetes and run a Deployment with this code:
hello_world.py
from prefect import flow, task
from time import sleep


@task(log_prints=True)
def say_hello(name: str):
    print(f"Hello, {name}!")


@flow
def hello_universe(names: list[str]):
    for name in names:
        say_hello(name)
    print("Sleeping for 100 seconds...")
    sleep(100)


if __name__ == "__main__":
    # create your first deployment to automate your flow
    hello_universe.serve(name="your-first-deployment")
I could then run the pod’s container with python hello_world.py.
My issue is that in k8s, pods can be restarted for many reasons: code changes, cluster upgrades, bin packing, …
Locally, I tried killing the script while a flow run was in progress, and the run was marked as Cancelled.
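For reference, this is how I triggered the test run that ended up cancelled (using Prefect’s run_deployment client helper; the parameter values are just examples):

from prefect.deployments import run_deployment

# Trigger a run of the served deployment without blocking;
# "hello-universe" is the dashed flow name Prefect derives
# from hello_universe.
run_deployment(
    name="hello-universe/your-first-deployment",
    parameters={"names": ["alice", "bob"]},
    timeout=0,  # return immediately instead of waiting for completion
)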
The issue is that with this approach in production, every flow run that is in flight when a pod restarts will end up in the Cancelled state. Is there a way to circumvent this, or a way to fix it?
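One idea I had was to intercept SIGTERM in the entrypoint myself before calling serve, but I don’t know whether Prefect’s runner installs its own signal handling that would override this. A rough sketch of what I mean (the handler logic is only an illustration, not a documented Prefect pattern):

import signal
import sys

from prefect import flow


@flow
def hello_universe(names: list[str]):
    ...


def on_sigterm(signum, frame):
    # Illustration only: react to the pod eviction before exiting.
    # What I would actually want here is "let in-flight runs finish
    # or get rescheduled instead of being marked Cancelled".
    print("Received SIGTERM, shutting down runner...")
    sys.exit(0)


if __name__ == "__main__":
    signal.signal(signal.SIGTERM, on_sigterm)
    hello_universe.serve(name="your-first-deployment")

Even if something like this worked, Kubernetes only waits terminationGracePeriodSeconds before sending SIGKILL, so long-running flows would still be killed eventually.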
I searched the documentation and asked Claude, but I did not find relevant information on this topic.
Thanks