Optimizing Large Data Searches to Minimize Workload Units

I’m hitting performance and workload-unit limits when running searches over my entire “Data Type” collection. I’m exploring best practices for handling very large datasets (thousands of records) in Bubble. ChatGPT suggested using a backend workflow—specifically “Schedule API workflow on a list”—instead of client-side searches, but I’m struggling with the setup.

What’s the recommended way to configure and invoke Bubble’s “Schedule API workflow on a list” for bulk processing without blowing through my workload-unit quota?
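
In case it helps to see what I’m attempting: here’s a rough external sketch of the batching pattern as I understand it, written in TypeScript against Bubble’s Data API and Workflow API. This is just how I imagine it working, not a tested setup—the app URL, token variable, the `record` data type, and the `process_batch` endpoint are all placeholders, and `process_batch` would need to be exposed under Backend Workflows as a public API workflow.

```typescript
// Sketch (untested): page through a Bubble Data Type via the Data API and
// hand each page of unique ids to a backend workflow, instead of running
// one huge client-side search. All names below are placeholders.

const APP = "https://my-app.bubbleapps.io";   // placeholder: your app URL
const TOKEN = process.env.BUBBLE_API_TOKEN!;  // placeholder: an API token with Data API access
const PAGE_SIZE = 100;                        // the Data API caps `limit` at 100

type BubblePage<T> = {
  response: { results: T[]; cursor: number; count: number; remaining: number };
};

// Fetch one page of a data type, starting at the given cursor offset.
async function fetchPage<T>(dataType: string, cursor: number): Promise<BubblePage<T>> {
  const url = `${APP}/api/1.1/obj/${dataType}?cursor=${cursor}&limit=${PAGE_SIZE}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${TOKEN}` } });
  if (!res.ok) throw new Error(`Data API error: ${res.status}`);
  return res.json() as Promise<BubblePage<T>>;
}

// Walk the whole collection one page at a time, posting each page's ids
// to the (placeholder) "process_batch" backend workflow endpoint.
async function processAll(dataType: string): Promise<void> {
  let cursor = 0;
  while (true) {
    const page = await fetchPage<{ _id: string }>(dataType, cursor);
    const ids = page.response.results.map((r) => r._id);

    await fetch(`${APP}/api/1.1/wf/process_batch`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ ids }),
    });

    if (page.response.remaining === 0) break; // no pages left
    cursor += page.response.results.length;   // advance by what we received
  }
}

processAll("record").catch(console.error);    // "record" is a placeholder data type
```

Is something like this the right direction, or is it better to keep everything inside Bubble and let “Schedule API workflow on a list” handle the iteration itself?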

Any guidance or examples would be greatly appreciated. Thanks!


Hello AMarjeet, how are you doing? I saw your post on the general Bubble forum. Let’s proceed.