Do I need multiple agent processes in order to set different concurrency limits for deployments with different tags?

View in #prefect-community on Slack

Jacobo_Blanco @Jacobo_Blanco: Hey folks!

We have a single agent running on EC2 with a few tags attached, some of which carry a flow concurrency limit (e.g. etl_jobs => limit: 5, ml_jobs => limit: 2, and so on).

Am I reading the Prefect 2.0 docs correctly that we would need to run multiple agents on our existing EC2 instance if we want different flow concurrency limits for different groups of flows?

We would have an “ETL Job” work queue with its own limit and a dedicated agent, another work queue for “ML Job” and a related agent.

@Anna_Geller: Let’s start with the different types of concurrency limits; this topic explains them:

It looks like you want to limit concurrency at the work-queue level based on specific deployment tags. You could attach multiple tags that require this limit to a single work queue:

prefect work-queue create your_queue -t etl_jobs
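
If you want a single work queue to cover several deployment tags, the -t flag can be repeated (the second tag name below is just illustrative):

prefect work-queue create your_queue -t etl_jobs -t reporting_jobs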

this generates a UUID, and then you can set:

prefect work-queue set-concurrency-limit WORK_QUEUE_ID 5
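
If you don’t have the work queue UUID handy, you should be able to list your queues and confirm the limit with:

prefect work-queue ls
prefect work-queue inspect WORK_QUEUE_ID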

and then you start an agent process for this work queue:

prefect agent start WORK_QUEUE_ID

Then you do exactly the same thing for the ml_jobs tag with a limit of 2.
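
Concretely, that second set of commands would look roughly like this (the queue name and ML_WORK_QUEUE_ID are placeholders):

prefect work-queue create ml_queue -t ml_jobs
prefect work-queue set-concurrency-limit ML_WORK_QUEUE_ID 2
prefect agent start ML_WORK_QUEUE_ID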

I believe you understood and implemented everything correctly, but what confused you is that you thought of the agent as sort of “the thing that executes and runs everything”. A more helpful mental model would be (see the sketch after this list):

  • the EC2 instance is your agent
  • the result of prefect agent start WORK_QUEUE_ID is a single lightweight agent process
  • a work queue is a deployment filter mechanism: it ensures that a specific agent process polls only for the relevant deployments and that the concurrency limits for those deployments are communicated to the agent
  • each lightweight agent process polls only for the relevant deployments (as defined by work-queue tags, flow runner types, or specific deployment IDs) and submits only as many flow runs as it’s allowed to (as specified by the work-queue concurrency limits)
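
As a rough sketch of what this could look like on a single EC2 instance (the queue IDs are placeholders, and backgrounding with nohup is just one way to keep both processes running):

# two lightweight agent processes on the same machine, one per work queue
nohup prefect agent start ETL_WORK_QUEUE_ID > etl_agent.log 2>&1 &
nohup prefect agent start ML_WORK_QUEUE_ID > ml_agent.log 2>&1 &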

The way this is currently implemented makes it easier to troubleshoot and is more adaptable to changes in your future workflow requirements. For example, this allows you to:

  • stop the agent polling for the ML-specific work queue without affecting your ETL jobs
  • easily move the agent process to different infrastructure, e.g. to Kubernetes, without affecting any scheduled runs - they remain in the work queue and can be picked up once an agent polling for this work queue ID is started again
  • inspect scheduled runs for ETL or ML jobs separately (see the example commands below)
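
For example, pausing the ML queue or previewing upcoming runs per queue should be possible with commands along these lines (the IDs are placeholders):

prefect work-queue pause ML_WORK_QUEUE_ID
prefect work-queue resume ML_WORK_QUEUE_ID
prefect work-queue preview ETL_WORK_QUEUE_ID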

But I’m curious to hear your feedback on those abstractions. Do you find it confusing that the work queue and the agent are two separate objects? Or do you see any downside to having multiple agent processes?

Jacobo_Blanco @Jacobo_Blanco: Hey! Thanks for the very detailed answer - I think this clarifies some things. I do have some follow-up questions, and my apologies, I should’ve started with what I’m trying to do rather than jumping into specifics.

One pattern that emerges very often for us is singleton flows (for clarity: only one run of a given flow can ever be active at any one time). The other hidden constraint is that, due to the Change Management/Security/Auditing requirements we have in place, it’s not a simple procedure to log into the EC2 instance and execute commands.

First assumption check: I assume that the agent commands have to be executed on the instance, is that right?

@Anna_Geller: correct, they must be executed on the instance

re singleton flows: I believe we will add a feature that allows having only one active flow run of each specific flow without having to explicitly set a work-queue concurrency limit of 1 for it
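
Until then, the workaround would be roughly a dedicated work queue with a concurrency limit of 1 per singleton flow, something like this (the tag and queue names are hypothetical):

prefect work-queue create flow_x_queue -t flow_x_singleton
prefect work-queue set-concurrency-limit FLOW_X_QUEUE_ID 1
prefect agent start FLOW_X_QUEUE_ID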

Jacobo_Blanco @Jacobo_Blanco: If that is the case then this covers the biggest pain point. It would be very unproductive to have to create 50 queues and agents for 50 singleton flows.

Thanks for the quick response.

@Anna_Geller: Add toggle for automatic concurrency limit of one run per deployment · Issue #5623 · PrefectHQ/prefect · GitHub