- ActiveMQ
- ActiveMQ Artemis
- Apache Kafka
- Apache Pulsar
- AWS CloudWatch
- AWS DynamoDB
- AWS DynamoDB Streams
- AWS Kinesis Stream
- AWS SQS Queue
- Azure Application Insights
- Azure Blob Storage
- Azure Data Explorer
- Azure Event Hubs
- Azure Log Analytics
- Azure Monitor
- Azure Pipelines
- Azure Service Bus
- Azure Storage Queue
- Cassandra
- CouchDB
- CPU
- Cron
- Datadog
- Elasticsearch
- Etcd
- External
- External Push
- Google Cloud Platform Pub/Sub
- Google Cloud Platform Stackdriver
- Google Cloud Platform Storage
- Graphite
- Huawei Cloudeye
- IBM MQ
- InfluxDB
- Kubernetes Workload
- Liiklus Topic
- Loki
- Memory
- Metrics API
- MongoDB
- MSSQL
- MySQL
- NATS JetStream
- NATS Streaming
- New Relic
- OpenStack Metric
- OpenStack Swift
- PostgreSQL
- PredictKube
- Prometheus
- RabbitMQ Queue
- Redis Lists
- Redis Lists (supports Redis Cluster)
- Redis Lists (supports Redis Sentinel)
- Redis Streams
- Redis Streams (supports Redis Cluster)
- Redis Streams (supports Redis Sentinel)
- Selenium Grid Scaler
- Solace PubSub+ Event Broker
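
Each of the scalers above is referenced by its `type` in the `triggers` section of a ScaledObject. As a minimal sketch, the following manifest wires the Prometheus scaler to a workload; the deployment name, server address, query, and threshold are placeholder assumptions, not values from this page:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: example-deployment        # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus              # one of the scalers listed above
      metadata:
        serverAddress: http://prometheus.monitoring:9090  # assumed Prometheus endpoint
        query: sum(rate(http_requests_total[2m]))         # assumed metric query
        threshold: "100"
```

Other scalers follow the same pattern, differing only in the `type` value and the scaler-specific `metadata` fields.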
KEDA logging and telemetry
The first place to look if something isn’t behaving correctly is the logs generated by KEDA. After deploying, you should have a pod with two containers running within the namespace (keda by default).
You can view the KEDA operator pod via kubectl:

```sh
kubectl get pods -n keda
```
You can view the logs for the keda-operator container with the following command:

```sh
kubectl logs -n keda {keda-pod-name} -c keda-operator
```
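
Since the pod runs two containers, the second container's logs can also be useful when metrics aren't being served. As a sketch, assuming the metrics adapter container uses the default name keda-metrics-apiserver (verify the actual container names with `kubectl describe pod`):

```sh
kubectl logs -n keda {keda-pod-name} -c keda-metrics-apiserver
```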
Reporting issues
If you are having issues or hitting a potential bug, please file an issue in the KEDA GitHub repo with details, logs, and steps to reproduce the behavior.