Incident model feedback

Fine-tune your monitoring sensitivity to reduce alert noise and improve data quality insights.

Optimize your data quality monitoring by adjusting alert sensitivity based on your team's experience and data patterns. Decube's machine learning incident model learns from your feedback to deliver more accurate and actionable alerts.

Understanding Incident Model Sensitivity

Decube's incident detection uses confidence intervals (CI) to determine when to trigger alerts. The confidence interval represents how certain the system must be before flagging an anomaly as an incident.

Default Setting: 90% confidence interval (CI = 0.90)

  • System must be 90% confident an anomaly is significant before alerting

  • Balances alert accuracy with early detection

  • Can be adjusted based on your team's preferences
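To make the confidence-interval idea concrete, here is a minimal sketch of how an anomaly check against a confidence band works in principle. This is illustrative only: the function name, the simple z-interval, and the sample data are all assumptions for the example, not Decube's actual model, which learns from your feedback and is more sophisticated than a plain statistical band.

```python
import statistics

def is_incident(history, new_value, ci=0.90):
    """Illustrative check: flag new_value as an incident if it falls
    outside a confidence band built from historical values.
    NOT Decube's real model -- a simplified z-interval for intuition."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    # Two-sided z-scores for common confidence levels (assumed lookup table)
    z = {0.80: 1.28, 0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[ci]
    lower, upper = mean - z * stdev, mean + z * stdev
    return not (lower <= new_value <= upper)

# Hypothetical daily row counts hovering around 100
history = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_incident(history, 130))  # far outside the band -> True
print(is_incident(history, 101))  # within normal variation -> False
```

Note how a higher `ci` widens the band: at `ci=0.99` the system demands a larger deviation before alerting, which is exactly the "fewer alerts" direction described below.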


When to Adjust Sensitivity

Increase Sensitivity (More Alerts) When:

  • Missing critical issues that should be detected

  • Data quality is paramount for business operations

  • Team can handle higher alert volume effectively

  • Early detection outweighs false positives

Decrease Sensitivity (Fewer Alerts) When:

  • Experiencing alert fatigue from too many notifications

  • False positives are common in your data patterns

  • Team resources are limited for incident response

  • Data patterns have natural variability


How to Adjust Model Sensitivity

Step 1: Access Incident Details

Navigate to Data Quality > Incidents and select any incident you want to provide feedback on.

Incident details page with feedback options

Step 2: Provide Feedback

Rate the Alert Quality:

  • Click the 👍 thumbs up if the alert was helpful and accurate

  • Click the 👎 thumbs down if the alert was unnecessary or inaccurate

Step 3: Adjust Sensitivity (For Auto-Threshold Monitors)

For monitors with automatic thresholding enabled, you'll see a sensitivity adjustment slider after providing negative feedback.

Sensitivity adjustment is available for:

  • Freshness monitors with automatic thresholds

  • Volume monitors with automatic thresholds

  • Field Health tests with automatic threshold selected

Not available for: Custom SQL monitors and monitors with manually configured thresholds

Step 4: Configure Sensitivity Level

Increase Sensitivity (Move slider RIGHT):

  • Triggers more alerts for smaller anomalies

  • Better for critical data that requires strict monitoring

  • Higher chance of false positives

Decrease Sensitivity (Move slider LEFT):

  • Reduces alert noise by requiring larger anomalies

  • Better for data with natural variability

  • Lower chance of catching subtle issues
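The slider directions above can be thought of as moving the required confidence level: right (more sensitive) lowers the confidence the model demands before alerting, left (less sensitive) raises it. The sketch below uses a hypothetical five-position scale and hypothetical CI values to show that relationship; Decube's internal scale may differ.

```python
def slider_to_ci(position):
    """Map a slider position to a confidence-interval threshold.
    Positions and CI values are hypothetical, for illustration:
    -2 = least sensitive ... 0 = default ... +2 = most sensitive."""
    mapping = {-2: 0.99, -1: 0.95, 0: 0.90, 1: 0.85, 2: 0.80}
    return mapping[position]

# Moving the slider RIGHT lowers the required confidence,
# so smaller anomalies trigger alerts (more sensitive).
print(slider_to_ci(2))   # 0.8
# Moving it LEFT raises the bar, reducing alert noise.
print(slider_to_ci(-2))  # 0.99
```

The key takeaway: sensitivity and confidence interval move in opposite directions, which is why the Best Practices section below phrases "lower sensitivity" as "higher CI" and vice versa.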

Step 5: Apply Changes

Once you move the slider and submit feedback, the new sensitivity setting takes effect immediately for future monitor scans.


Sensitivity Scale Reference

Sensitivity adjustment slider for fine-tuning alert behavior

Best Practices for Feedback

🎯 Strategic Feedback Approach

Start Conservative:

  1. Monitor initial alert patterns for 1-2 weeks before adjusting

  2. Identify recurring false positives vs. legitimate issues

  3. Adjust sensitivity gradually rather than making dramatic changes

  4. Document decisions for team reference

Team Coordination:

  • Assign feedback responsibility to experienced team members

  • Review feedback patterns regularly as a team

  • Coordinate sensitivity changes to avoid conflicting adjustments

  • Monitor impact of sensitivity changes on alert volume

⚡ Optimization Tips

For High-Volume Data:

  • Start with lower sensitivity (higher CI)

  • Gradually increase sensitivity based on missed issues

  • Consider grouped-by monitoring for dimension-specific thresholds

For Critical Data:

  • Start with higher sensitivity (lower CI)

  • Accept some false positives initially

  • Fine-tune based on operational experience

For Seasonal Data:

  • Expect to adjust sensitivity as seasonal patterns shift

  • Review and adjust quarterly or seasonally

  • Document seasonal patterns for team awareness


Troubleshooting Common Issues

Too Many False Positives

  1. Decrease sensitivity (move slider left)

  2. Review data patterns for natural variability

  3. Consider grouped-by monitoring for segment-specific thresholds

  4. Evaluate monitor frequency - may be too aggressive

Missing Important Issues

  1. Increase sensitivity (move slider right)

  2. Review threshold settings for manual monitors

  3. Check monitor frequency - may be too infrequent

  4. Consider additional monitor types for comprehensive coverage

Inconsistent Feedback Results

  1. Coordinate team feedback to avoid conflicting adjustments

  2. Review data quality patterns that may have changed

  3. Consider seasonal or business cycle impacts

  4. Document adjustment rationale for future reference


Need Help? Contact [email protected] for guidance on optimizing your incident model sensitivity for your specific data patterns and business requirements.
