The issue arises because the Dapr component configuration (e.g., pubsub.yaml) referencing Azure Service Bus is not correctly applied during deployment to Azure Container Apps, causing Dapr to default to Redis.
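As a minimal sketch of what a correctly registered component might look like, assuming the Azure Container Apps Dapr component schema and placeholder names (component name, secret name, app scope, and connection string are all made up here):

```yaml
# pubsub.yaml -- Azure Container Apps Dapr component schema (not the
# Kubernetes-style Component manifest). Names and values are placeholders.
componentType: pubsub.azure.servicebus.topics   # older Dapr versions use pubsub.azure.servicebus
version: v1
metadata:
  - name: connectionString
    secretRef: sb-connection-string
secrets:
  - name: sb-connection-string
    value: "<your-service-bus-connection-string>"
scopes:
  - my-app        # the Dapr app-id(s) allowed to load this component
```

Registering it against the environment with something like az containerapp env dapr-component set --name <ENV> --resource-group <RG> --dapr-component-name pubsub --yaml pubsub.yaml, and making sure the component name matches the one your code requests, should stop Dapr from falling back to the default Redis component described above.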
This is noticeable by inspecting the type definitions: PubSub extends PubSubEngine, and the PubSubEngine class declares asyncIterableIterator<T>(triggers: string | readonly string[]): PubSubAsyncIterableIterator<T>. In the npm package "graphql-subscriptions", both asyncIterator and asyncIterableIterator appear, so the documentation has probably not been updated yet.
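A small sketch of the newer method in a resolver map, assuming a graphql-subscriptions version that exposes asyncIterableIterator (the schema fields and trigger name here are made up):

```typescript
import { PubSub } from "graphql-subscriptions";

const pubsub = new PubSub();

const resolvers = {
  Subscription: {
    messageAdded: {
      // Newer releases expose asyncIterableIterator; older ones used asyncIterator.
      subscribe: () => pubsub.asyncIterableIterator(["MESSAGE_ADDED"]),
    },
  },
  Mutation: {
    addMessage: async (_: unknown, args: { body: string }) => {
      const message = { body: args.body };
      // Publishing pushes the payload to every active subscriber of the trigger.
      await pubsub.publish("MESSAGE_ADDED", { messageAdded: message });
      return message;
    },
  },
};
```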
Considering the behavior of Pub/Sub, the only way is to use a dead-letter queue, as you have mentioned. The retry policy is either retry immediately or retry after an exponential backoff delay; you cannot configure a linear delay with either option. The exponential backoff is handled by the API and is subject to change over time, as per this SO answer.
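A sketch of wiring both pieces together, assuming Google Cloud Pub/Sub (whose console offers exactly those two retry options) and the @google-cloud/pubsub Node client; the project, topic, and subscription names are placeholders:

```typescript
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

async function createSubscriptionWithDeadLetter() {
  // Placeholder names -- substitute your own project, topics, and subscription.
  const deadLetterTopic = "projects/my-project/topics/orders-dead-letter";

  await pubsub.topic("orders").createSubscription("orders-sub", {
    // Messages that keep failing are forwarded here after maxDeliveryAttempts tries.
    deadLetterPolicy: {
      deadLetterTopic,
      maxDeliveryAttempts: 5,
    },
    // Exponential backoff between redelivery attempts; only the minimum and
    // maximum bounds are configurable -- there is no linear-delay option.
    retryPolicy: {
      minimumBackoff: { seconds: 10 },
      maximumBackoff: { seconds: 600 },
    },
  });
}

createSubscriptionWithDeadLetter().catch(console.error);
```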
In your Dapr component configuration (pubsub.yaml), check that you have set the isSessionEnabled property to "true" for your Azure Service Bus component. When a request is received, it will attempt to consume a message from the session-enabled Azure Service Bus queue and process it according to the defined logic.
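A sketch of where that metadata entry would sit in pubsub.yaml, assuming the pubsub.azure.servicebus.queues component type (the component name and connection string are placeholders; the isSessionEnabled key is the property referred to above):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.azure.servicebus.queues
  version: v1
  metadata:
    - name: connectionString
      value: "<your-service-bus-connection-string>"
    - name: isSessionEnabled
      value: "true"
```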
Google provides Pub/Sub, and there are some fully managed Kafka offerings out there that you can run in the cloud or on-prem. Cloud vs. on-prem: I think this is a real difference between them, because Pub/Sub is only offered as part of the GCP ecosystem, whereas Apache Kafka can be used both as a cloud service and on-prem (doing the ...
PubSub is a design pattern for how pieces of a system communicate. It's like the subway/airplane system (think JetBlue, Delta Airlines, the NYC subway system, etc.). Common tools to handle PubSub today are Kafka and Redis. Some backend engineers can build an entire career out of designing well-architected, reliable PubSub systems.
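A bare-bones sketch of the pattern itself (an in-memory toy, not Kafka or Redis): publishers emit to a named topic without knowing who is listening, and subscribers register interest without knowing who publishes.

```typescript
type Handler<T> = (message: T) => void;

// A toy in-memory broker: publishers and subscribers only share a topic name.
class Broker {
  private topics = new Map<string, Handler<unknown>[]>();

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const handlers = this.topics.get(topic) ?? [];
    handlers.push(handler as Handler<unknown>);
    this.topics.set(topic, handlers);
  }

  publish<T>(topic: string, message: T): void {
    for (const handler of this.topics.get(topic) ?? []) {
      handler(message);
    }
  }
}

// Usage: the publisher never references its subscribers directly.
const broker = new Broker();
broker.subscribe<string>("arrivals", (m) => console.log("display board:", m));
broker.subscribe<string>("arrivals", (m) => console.log("announcement:", m));
broker.publish("arrivals", "A train arrives at platform 2");
```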
Running CONFIG GET client-output-buffer-limit returns "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60". The 33554432 after pubsub is the maximum buffer size for pub/sub clients, and the 8388608 is a soft limit that may not be exceeded for more than 60 seconds. So, my answer below changes if you raise the limit using a command like the following:
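The snippet cuts off before the command itself; one possible way to raise (or disable) the pub/sub limits at runtime looks like this (the byte values are illustrative, not the ones from the original answer):

```
# Raise the pub/sub limits: <hard-limit bytes> <soft-limit bytes> <soft-limit seconds>
redis-cli CONFIG SET client-output-buffer-limit "pubsub 67108864 16777216 120"

# Or remove the limits for pub/sub clients entirely (use with care)
redis-cli CONFIG SET client-output-buffer-limit "pubsub 0 0 0"
```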
I'm curious to understand the implementation of GCP's Pub/Sub. Although the name suggests it follows the publish-subscribe design pattern, it seems closer to AWS SQS (a queue) than to AWS SNS (which uses the publish-subscribe model).