So it seems like we are not over-using logs here. Also, this seems like a reasonable scenario: doing real-time sync from an operational database (Mongo) to Snowflake, and each Task here represents a collection in Mongo.
The question is whether we can somehow raise the limit of the Prefect Cloud log endpoint to support our specific use case. Also, maybe this limit should just be set higher to support such use cases.
thanks so much for sharing, glad the post was useful! A 429 indicates rate limiting. What is your pricing tier on Prefect Cloud? You can either:
reach out to sales@prefect.io to increase that limit by upgrading to a higher pricing tier
limit the number of logs - perhaps you don’t need to log everything, and some logs can be turned into DEBUG? That way they stay hidden at the default INFO level, and you only enable them (by lowering the log level to DEBUG) when needed - see the sketch after this list
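Just to illustrate the split, here is a minimal sketch in Prefect 2 style - the `sync_collection` task, the collection names, and the document loop are hypothetical placeholders, not your actual pipeline:

```python
from prefect import flow, task, get_run_logger

@task
def sync_collection(collection_name: str):
    """Hypothetical task standing in for one Mongo collection's sync."""
    logger = get_run_logger()
    # High-level progress stays at INFO: a handful of logs per task run.
    logger.info("Starting sync for collection %s", collection_name)
    for doc_id in ("a1", "b2", "c3"):  # stand-in for the real change stream
        # Per-document chatter goes to DEBUG, so it is only emitted when
        # the logging level is lowered to DEBUG.
        logger.debug("Synced document %s from %s", doc_id, collection_name)
    logger.info("Finished sync for collection %s", collection_name)

@flow
def sync_all_collections():
    # One task per collection, submitted to run in parallel.
    for name in ("users", "orders", "events"):
        sync_collection.submit(name)
```

The idea is that the per-document DEBUG calls, which are the bulk of the volume in a spiky workload, never reach the Cloud log endpoint while the level is set to INFO.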
Regarding reaching out to sales, is this really limited by the pricing tier? I didn’t see any mention of it on the pricing page. Again, just to be clear, we are not talking about the total number of log rows being sent. We are talking about a momentary peak of too many log requests per second, which is reached from time to time. It’s quite limiting, as you are blind to what happens inside each task, and it blocks many of the use cases for running many Tasks in parallel.
Regarding DEBUG, do you mean the DEBUG log level in the Prefect logger? If so, will shooting a log with a level of DEBUG cause it to be displayed in the Cloud UI?
yup, this is throttled if you are, e.g., on the free tier, while we can allocate more resources to handle such spikes on higher pricing tiers - this seems fair tbh, since we need more infra for such spiky workloads
yup, exactly - then set prefect config set PREFECT_LOGGING_LEVEL=INFO as your default so the DEBUG logs stay suppressed, and switch the level to DEBUG only when you need that extra detail in the UI
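For completeness, the toggle could look like this with the standard Prefect CLI (the comments describe the intended workflow, not output from the tool):

```
# Day to day: keep the threshold at INFO so logger.debug(...) calls
# are suppressed and never hit the Cloud log endpoint
prefect config set PREFECT_LOGGING_LEVEL=INFO

# While investigating a specific run: lower the threshold so DEBUG
# logs are emitted and show up in the Cloud UI too
prefect config set PREFECT_LOGGING_LEVEL=DEBUG

# Afterwards: remove the override and fall back to the profile default
prefect config unset PREFECT_LOGGING_LEVEL
```

You can also set PREFECT_LOGGING_LEVEL as an environment variable for a single run instead of changing your profile.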