Queue-based ML inference service.
An asynchronous ML inference service for text classification workloads. InferFlow separates request intake from execution through background job queues and durable job state, tracking each job's lifecycle explicitly. It supports retry-safe processing and status polling for reliable, high-concurrency workflows, and serves as the execution foundation for automation pipelines and MLOps projects.