Engine Too Many Alerts
The Engine Too Many Alerts issue appears when your Security Engine generates an abnormally high volume of alerts—more than 250,000 in a 6-hour period. This usually indicates a misconfigured scenario, false positives, or an ongoing large-scale attack.
What Triggers This Issue
- Trigger condition: More than 250,000 alerts in 6 hours
- Criticality: High
- Impact: May indicate false positives, performance issues, or a real attack
Common Root Causes
- Misconfigured or overly sensitive scenario: A scenario with thresholds set too low or matching too broadly can trigger excessive alerts.
- Log duplication: The same log file is being read multiple times due to acquisition misconfiguration.
- Actual large-scale attack: A genuine distributed attack (DDoS, brute force campaign) targeting your infrastructure.
- Parser creating duplicate events: A parser issue causing the same log line to generate multiple events.
How to Diagnose
Check alert volume by scenario
Identify which scenarios are generating the most alerts:
# On host
sudo cscli alerts list -l 100
# Docker
docker exec crowdsec cscli alerts list -l 100
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli alerts list -l 100
Look for patterns:
- Is one scenario dominating the alert count?
- Are the same IPs repeatedly triggering alerts?
- Are alerts legitimate threats or false positives?
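To drill into a suspected pattern, you can filter the alert list directly. A small sketch; the scenario name and IP below are placeholders to swap for values from your own output:
# Alerts from a single scenario
sudo cscli alerts list --scenario crowdsecurity/http-probing -l 100
# Alerts triggered by a single source IP
sudo cscli alerts list --ip 192.0.2.10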
Check metrics for scenario overflow
# On host
sudo cscli metrics show scenarios
# Docker
docker exec crowdsec cscli metrics show scenarios
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli metrics show scenarios
Look for scenarios with extremely high "Overflow" counts or "Current count" numbers.
Check for log duplication
Review acquisition configuration to ensure log files aren't listed multiple times:
# On host
sudo cat /etc/crowdsec/acquis.yaml
sudo ls -la /etc/crowdsec/acquis.d/
# Docker
docker exec crowdsec cat /etc/crowdsec/acquis.yaml
# Kubernetes
kubectl get configmap -n crowdsec crowdsec-config -o yaml | grep -A 20 acquis
Also check metrics for duplicate acquisition sources:
sudo cscli metrics show acquisition
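Duplicate sources can also be spotted by extracting every log path referenced across the acquisition files and checking for repeats. A rough sketch, assuming a host install with logs under /var/log:
sudo grep -rhoE "/var/log/[^[:space:]]+" /etc/crowdsec/acquis.yaml /etc/crowdsec/acquis.d/ 2>/dev/null | sort | uniq -d
Any path printed here is declared more than once and will be read (and alerted on) twice.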
How to Resolve
For misconfigured scenarios
Put the problematic scenario in simulation mode
In simulation mode the scenario still records alerts, flagged as simulated, but no longer produces decisions, so you can investigate without blocking traffic:
# On host
sudo cscli simulation enable crowdsecurity/scenario-name
# Docker
docker exec crowdsec cscli simulation enable crowdsecurity/scenario-name
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli simulation enable crowdsecurity/scenario-name
Then reload:
sudo systemctl reload crowdsec
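You can confirm which scenarios are currently simulated, and turn simulation off again once the root cause is fixed:
sudo cscli simulation status
sudo cscli simulation disable crowdsecurity/scenario-name
sudo systemctl reload crowdsec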
Tune the scenario threshold
If the scenario is triggering too easily, you can create a custom version with adjusted thresholds. See the scenario documentation for details on customizing scenarios.
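As a rough sketch of the approach (the scenario name, file names, and values are illustrative): copy the hub scenario to a local file, give the copy a unique name, raise its capacity or slow its leakspeed, then reload:
# Copy the hub version to a local, editable file
sudo cp /etc/crowdsec/hub/scenarios/crowdsecurity/ssh-bf.yaml /etc/crowdsec/scenarios/ssh-bf-custom.yaml
# In the copy, set a unique "name:" and adjust "capacity:" / "leakspeed:" to your traffic
sudo nano /etc/crowdsec/scenarios/ssh-bf-custom.yaml
# Remove the original (cscli scenarios remove crowdsecurity/ssh-bf) or leave it in simulation so both copies don't fire
sudo systemctl reload crowdsec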
Use whitelists
If specific IPs or patterns are causing false positives, create a whitelist. See Parser Whitelists or Profiles.
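A minimal parser-whitelist sketch, assuming the noise comes from known internal addresses (the file name, reason, and IPs are placeholders):
sudo tee /etc/crowdsec/parsers/s02-enrich/local-whitelists.yaml > /dev/null <<'EOF'
name: local/whitelists
description: "Whitelist trusted internal sources"
whitelist:
  reason: "internal monitoring and scanners"
  ip:
    - "192.0.2.10"
  cidr:
    - "10.0.0.0/8"
EOF
sudo systemctl reload crowdsec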
For log duplication
Remove duplicate entries from your acquisition configuration:
- Edit acquisition files: /etc/crowdsec/acquis.yaml or files in /etc/crowdsec/acquis.d/
- Ensure each log source appears only once
- Restart CrowdSec: sudo systemctl restart crowdsec
For legitimate large-scale attacks
If you're experiencing a real attack:
- Verify your remediation components are working to block attackers
- Check that decisions are being applied: cscli decisions list
- Consider increasing timeout durations in profiles if attackers are returning (see the sketch after this list)
- Subscribe to Community Blocklist for proactive blocking of known malicious IPs
- Monitor your infrastructure for the attack's impact
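Ban durations live in the profiles configuration. A sketch of the change on a host install (the 24h value is an example, not a recommendation):
sudo nano /etc/crowdsec/profiles.yaml
# In the relevant profile, raise the duration of the ban decision, e.g.:
#   decisions:
#     - type: ban
#       duration: 4h   # -> 24h to keep returning attackers blocked longer
sudo systemctl reload crowdsec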
For parser issues
If a parser is creating duplicate events:
- Use cscli explain to test parsing: sudo cscli explain --log "<sample log line>" --type <type> (a worked example follows this list)
- Check if the log line generates multiple events incorrectly
- Review parser configuration or report the issue to the CrowdSec Hub
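A worked example of the explain flow, using a made-up nginx access-log line (swap in a line and type from your own logs):
sudo cscli explain --log '192.0.2.10 - - [10/Jan/2025:12:00:00 +0000] "GET /admin HTTP/1.1" 404 162' --type nginx
# Or replay an entire file through the pipeline
sudo cscli explain --file /var/log/nginx/access.log --type nginx
The output shows each parser stage the line passes through and which scenarios it pours into; a single line feeding the same scenario more than once points to a parser problem.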
Verify Resolution
After making changes:
- Restart or reload CrowdSec: sudo systemctl restart crowdsec
- Monitor alert generation for 30 minutes: watch -n 30 'cscli alerts list | head -20'
- Check metrics: sudo cscli metrics show scenarios
- Verify alert volume has returned to normal levels (a quick check is sketched below)
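A rough way to confirm the volume is dropping is to count recent alerts (the --since filter accepts durations such as 30m or 1h; the table header adds a few lines to the count):
sudo cscli alerts list --since 1h -l 0 | wc -l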
Performance Impact
Excessive alerts can impact performance:
- High memory usage: Each active scenario bucket consumes memory
- Database growth: Large numbers of alerts increase database size
- API latency: Bouncers may experience slower decision pulls
If performance is degraded, consider:
- Cleaning old alerts: cscli alerts delete --all (after investigation)
- Reviewing database maintenance: see the Database documentation (automatic flushing can also be tuned, as sketched below)
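Alert retention can also be capped automatically through the flush settings in /etc/crowdsec/config.yaml so the database stops growing. A sketch of the relevant keys (values are illustrative):
# db_config:
#   flush:
#     max_items: 5000   # keep at most this many alerts
#     max_age: 7d       # drop alerts older than this
sudo nano /etc/crowdsec/config.yaml
sudo systemctl restart crowdsec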
Related Issues
- Security Engine Troubleshooting - General Security Engine issues
- LP No Logs Parsed - If parsing is creating unusual events
Getting Help
If you need assistance analyzing alert patterns: