Incident model feedback
Fine-tune your monitoring sensitivity to reduce alert noise and improve data quality insights.
Optimize your data quality monitoring by adjusting alert sensitivity based on your team's experience and data patterns. Decube's machine learning incident model learns from your feedback to deliver more accurate and actionable alerts.
Understanding Incident Model Sensitivity
Decube's incident detection uses confidence intervals (CI) to determine when to trigger alerts. The confidence interval represents how certain the system must be before flagging an anomaly as an incident.
Default Setting: 90% confidence interval (CI = 0.90)
System must be 90% confident an anomaly is significant before alerting
Balances alert accuracy with early detection
Can be adjusted based on your team's preferences
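Conceptually, a confidence-interval check gates alerts like the minimal sketch below. This is an illustration only, assuming a simple normal model over recent values; Decube's actual incident model is proprietary and learns from feedback, so the function name and the normal-distribution assumption here are ours, not the product's.

```python
from statistics import NormalDist, mean, stdev

def is_incident(history, observed, ci=0.90):
    """Flag `observed` as an incident if it falls outside the two-sided
    `ci` confidence interval of the historical values (normal model)."""
    mu, sigma = mean(history), stdev(history)
    # Two-sided interval: at ci=0.90 the bounds are the 5th and 95th percentiles.
    z = NormalDist().inv_cdf(0.5 + ci / 2)
    lower, upper = mu - z * sigma, mu + z * sigma
    return not (lower <= observed <= upper)

history = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_incident(history, 120))              # True: large spike is flagged
print(is_incident(history, 101))              # False: within the normal range
print(is_incident(history, 103.5, ci=0.99))   # False: a higher CI is less sensitive
```

The last call shows why raising the confidence interval reduces alert volume: a value that breaches the 90% interval can still sit comfortably inside the 99% interval.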
When to Adjust Sensitivity
Increase Sensitivity (More Alerts) When:
Missing critical issues that should be detected
Data quality is paramount for business operations
Team can handle higher alert volume effectively
Early detection outweighs false positives
Decrease Sensitivity (Fewer Alerts) When:
Experiencing alert fatigue from too many notifications
False positives are common in your data patterns
Team resources are limited for incident response
Data patterns have natural variability
How to Adjust Model Sensitivity
Step 1: Access Incident Details
Navigate to Data Quality > Incidents and select any incident you want to provide feedback on.

Step 2: Provide Feedback
Rate the Alert Quality:
Click the 👍 thumbs up if the alert was helpful and accurate
Click the 👎 thumbs down if the alert was unnecessary or inaccurate
Step 3: Adjust Sensitivity (For Auto-Threshold Monitors)
For monitors with automatic thresholding enabled, you'll see a sensitivity adjustment slider after providing negative feedback.
Step 4: Configure Sensitivity Level
Increase Sensitivity (Move slider RIGHT):
Triggers more alerts for smaller anomalies
Better for critical data that requires strict monitoring
Higher chance of false positives
Decrease Sensitivity (Move slider LEFT):
Reduces alert noise by requiring larger anomalies
Better for data with natural variability
Lower chance of catching subtle issues
Step 5: Apply Changes
Once you move the slider and submit feedback, the new sensitivity setting takes effect immediately for future monitor scans.
Sensitivity Scale Reference
Sensitivity Range:
Maximum CI (slider: -5) = 0.99 confidence (least sensitive, fewest alerts)
Default CI (slider: 0) = 0.90 confidence (balanced approach)
Minimum CI (slider: 5) = 0.80 confidence (most sensitive, most alerts)
Each slider step changes the confidence interval by 0.02. Example: slider position +1 = 0.88 confidence interval (one step more sensitive than the 0.90 default), slider position -1 = 0.92 (one step less sensitive). Position -5 is capped at 0.99 rather than 1.00.
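The scale above can be expressed as a small helper. This is a sketch of the documented mapping, not Decube's internal implementation; the 0.99 cap at position -5 is inferred from the reference table (a uniform 0.02 step would otherwise give 1.00).

```python
def slider_to_ci(position, default_ci=0.90, step=0.02, cap=0.99):
    """Map a sensitivity slider position (-5..+5) to a confidence interval.
    Moving right (+) lowers the CI (more sensitive, more alerts);
    moving left (-) raises it, capped at 0.99."""
    if not -5 <= position <= 5:
        raise ValueError("slider position must be between -5 and +5")
    return round(min(cap, default_ci - step * position), 2)

print(slider_to_ci(0))    # 0.9  (default, balanced)
print(slider_to_ci(5))    # 0.8  (most sensitive, most alerts)
print(slider_to_ci(-5))   # 0.99 (least sensitive, fewest alerts)
```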

Best Practices for Feedback
🎯 Strategic Feedback Approach
Start Conservative:
Monitor initial alert patterns for 1-2 weeks before adjusting
Identify recurring false positives vs. legitimate issues
Adjust sensitivity gradually rather than making dramatic changes
Document decisions for team reference
Team Coordination:
Assign feedback responsibility to experienced team members
Review feedback patterns regularly as a team
Coordinate sensitivity changes to avoid conflicting adjustments
Monitor impact of sensitivity changes on alert volume
⚡ Optimization Tips
For High-Volume Data:
Start with lower sensitivity (higher CI)
Gradually increase sensitivity based on missed issues
Consider grouped-by monitoring for dimension-specific thresholds
For Critical Data:
Start with higher sensitivity (lower CI)
Accept some false positives initially
Fine-tune based on operational experience
For Seasonal Data:
Expect sensitivity adjustments around seasonal patterns
Review and adjust quarterly or seasonally
Document seasonal patterns for team awareness
Troubleshooting Common Issues
Too Many False Positives
Decrease sensitivity (move slider left)
Review data patterns for natural variability
Consider grouped-by monitoring for segment-specific thresholds
Evaluate monitor frequency - may be too aggressive
Missing Important Issues
Increase sensitivity (move slider right)
Review threshold settings for manual monitors
Check monitor frequency - may be too infrequent
Consider additional monitor types for comprehensive coverage
Inconsistent Feedback Results
Coordinate team feedback to avoid conflicting adjustments
Review data quality patterns that may have changed
Consider seasonal or business cycle impacts
Document adjustment rationale for future reference
Need Help? Contact [email protected] for guidance on optimizing your incident model sensitivity for your specific data patterns and business requirements.