Some early questions

Hi there, I’m working with a team in Austria on a satellite imagery pipeline project and we are just getting started with Prefect (straight to v2). So far we’re liking it a lot! There are some things we are still a bit unsure of, though. I appreciate that everyone must be super busy with the rollout, but we would be grateful for any insights.

1) Friendly error messages when tasks fail

So far I’ve seen that when a task fails, it displays a traceback. Is there a way to show a custom message instead?

2) Agent allocation

I’ve looked at the worker queue and agent docs and am unsure of how to best utilize our resources. It seems each agent/worker is tied to a specific queue? Say we have 10 workers (capable of doing the same tasks), and 2 queues, Q1 and Q2. When Q1 and Q2 have lots of work, we would like 6 workers to service Q1 and 4 workers to service Q2. When Q1 or Q2 are empty, the remaining workers should service the other queue. Is this possible to achieve?
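To make the request concrete, here is the allocation policy we’re after sketched in plain Python (no Prefect APIs involved; the 6/4 split and queue depths are just our example numbers):

```python
def allocate_workers(total, q1_depth, q2_depth, q1_share=6, q2_share=4):
    """Split `total` workers between two queues: a fixed 6/4 split while
    both queues have work, with idle capacity spilling over to the busy queue."""
    if q1_depth == 0 and q2_depth == 0:
        return 0, 0
    if q1_depth == 0:
        return 0, total          # Q1 empty: everyone services Q2
    if q2_depth == 0:
        return total, 0          # Q2 empty: everyone services Q1
    n1 = total * q1_share // (q1_share + q2_share)
    return n1, total - n1

# Both queues busy -> 6/4 split; one queue empty -> all 10 on the other
print(allocate_workers(10, 100, 100))  # (6, 4)
print(allocate_workers(10, 0, 100))    # (0, 10)
```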

3) Task output/results in the dashboard

I can see that task parameters are available in the dashboard, but not results - are there plans around this, or should we devise our own strategy?

Many thanks,


Hi Luke, grüß Dich! :wave:

Yes! You can return a custom state as shown here:

from prefect import task, flow
from prefect.orion.schemas.states import Completed, Failed

@task
def always_fails_task():
    raise ValueError("I fail successfully")

@task
def always_succeeds_task():
    print("I'm fail safe!")
    return "success"

@flow
def return_state_manually():
    # Submit both tasks; the flow's final state is determined by the
    # state we return, not by the failed task run.
    x = always_fails_task.submit()
    y = always_succeeds_task.submit()
    if y.result() == "success":
        return Completed(message="I am happy with this result")
    else:
        return Failed(message="How did this happen!?")

if __name__ == "__main__":
    return_state_manually()

Understood. We’ll simplify that pattern in today’s GA release and provide more guidance on resource allocation in future recipes.

Starting in the GA release, you can add or modify parameters directly from the UI.

The task run results are stored locally in your execution layer.

Great, that all sounds good, thank you Anna!

Hi Anna,
I’ve been keeping half an eye on recent developments, and I’m wondering whether there are any new recommendations for achieving what we are looking for, e.g. a pool of 10 workers that can service multiple queues as demand changes?

That’s already possible. Check out the Prefect Deployments FAQ, which shows how you can set this up today, e.g. by starting 10 agents polling from the same work queue. An alternative is Kubernetes, using those workers as your Kubernetes data plane.
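To illustrate why a shared work queue balances itself, here is a plain-Python sketch (threads standing in for agents; no Prefect APIs, and the agent count and run count are illustrative): whichever agent is free simply takes the next run, so capacity flows to wherever the work is.

```python
import queue
import threading

work_queue = queue.Queue()
for i in range(20):
    work_queue.put(i)           # 20 scheduled runs waiting on one queue

done_by = {}                    # run id -> agent that picked it up
lock = threading.Lock()

def agent(name):
    # Each agent polls the same queue; an idle agent immediately
    # grabs the next run, so no run waits while capacity is free.
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            return
        with lock:
            done_by[item] = name

threads = [threading.Thread(target=agent, args=(f"agent-{n}",)) for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(done_by))  # 20 -- every queued run was picked up by some agent
```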