# Incident model feedback

Optimize your data quality monitoring by adjusting alert sensitivity based on your team's experience and data patterns. Decube's machine learning incident model learns from your feedback to deliver more accurate and actionable alerts.

## Understanding Incident Model Sensitivity

Decube's incident detection uses confidence intervals (CI) to determine when to trigger alerts. The confidence interval represents how certain the system must be before flagging an anomaly as an incident.

**Default Setting**: 90% confidence interval (CI = 0.90)

* System must be 90% confident an anomaly is significant before alerting
* Balances alert accuracy with early detection
* Can be adjusted based on your team's preferences
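
To make the confidence-interval idea concrete, here is a minimal, hypothetical sketch of how a CI threshold can gate alerting. This is an illustration only, not Decube's actual model; the normal-distribution assumption, the function name, and the numbers are ours:

```python
from statistics import NormalDist

def should_alert(value, mean, stdev, ci=0.90):
    """Flag `value` as an incident only if it falls outside the
    two-sided confidence interval around the expected mean.
    Hypothetical illustration -- not Decube's actual model."""
    half_width = NormalDist().inv_cdf((1 + ci) / 2) * stdev
    return abs(value - mean) > half_width

# The same deviation passes at the default CI but alerts at a lower CI:
print(should_alert(115, mean=100, stdev=10, ci=0.90))  # False (within interval)
print(should_alert(115, mean=100, stdev=10, ci=0.80))  # True (outside interval)
```

The higher the CI, the wider the "normal" band, so fewer anomalies trigger alerts.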

***

## When to Adjust Sensitivity

### Increase Sensitivity (More Alerts) When:

* **Missing critical issues** that should be detected
* **Data quality is paramount** for business operations
* **Team can handle higher alert volume** effectively
* **Early detection outweighs false positives**

### Decrease Sensitivity (Fewer Alerts) When:

* **Experiencing alert fatigue** from too many notifications
* **False positives are common** in your data patterns
* **Team resources are limited** for incident response
* **Data patterns have natural variability**

***

## How to Adjust Model Sensitivity

### Step 1: Access Incident Details

Navigate to **Data Quality > Incidents** and select any incident you want to provide feedback on.

The incident details view shows the feedback controls: a thumbs-up/thumbs-down rating and, for incidents with auto-thresholding enabled, a sensitivity slider. The steps below walk through each control.

### Step 2: Provide Feedback

**Rate the Alert Quality:**

* Click the **👍 thumbs up** if the alert was helpful and accurate
* Click the **👎 thumbs down** if the alert was unnecessary or inaccurate

### Step 3: Adjust Sensitivity (For Auto-Threshold Monitors)

For monitors with automatic thresholding enabled, you'll see a sensitivity adjustment slider after providing negative feedback.

{% hint style="info" %}
**Sensitivity adjustment is available for:**

* Freshness monitors with automatic thresholds
* Volume monitors with automatic thresholds
* Field Health tests with automatic threshold selected

**Not available for:** Custom SQL monitors, manually configured thresholds
{% endhint %}

### Step 4: Configure Sensitivity Level

**Increase Sensitivity** (Move slider RIGHT):

* Triggers more alerts for smaller anomalies
* Better for critical data that requires strict monitoring
* Higher chance of false positives

**Decrease Sensitivity** (Move slider LEFT):

* Reduces alert noise by requiring larger anomalies
* Better for data with natural variability
* Lower chance of catching subtle issues
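
To see how the slider direction translates into alert volume, here is a hypothetical sketch that counts how many points in a sample series fall outside the interval at a strict versus a sensitive CI. The series, statistics, and function are made up for illustration:

```python
from statistics import NormalDist

def count_alerts(values, mean, stdev, ci):
    """Count how many points fall outside the two-sided confidence
    interval -- a toy model of monitor sensitivity, not Decube's."""
    half_width = NormalDist().inv_cdf((1 + ci) / 2) * stdev
    return sum(abs(v - mean) > half_width for v in values)

daily_row_counts = [100, 103, 97, 115, 88, 101, 121, 99, 95, 130]
print(count_alerts(daily_row_counts, mean=100, stdev=10, ci=0.99))  # 1 (strict)
print(count_alerts(daily_row_counts, mean=100, stdev=10, ci=0.80))  # 3 (sensitive)
```

Moving the slider right (lower CI) widens the net and catches more points; moving it left (higher CI) keeps only the largest deviations.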

### Step 5: Apply Changes

Once you've rated the incident and adjusted the slider, submit your feedback to apply the changes.

<figure><img src="https://1779874722-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTw0qpCVzfrIXqS4FEg4T%2Fuploads%2Fgit-blob-2d363d73551fe7e67d18034934234bd5fc387b5f%2Fimage.png?alt=media" alt=""><figcaption></figcaption></figure>

<figure><img src="https://1779874722-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTw0qpCVzfrIXqS4FEg4T%2Fuploads%2Fgit-blob-9797fd965d4a6339e381f181734639a90bd547bf%2Fimage.png?alt=media" alt=""><figcaption></figcaption></figure>

Once you move the slider and submit feedback, the new sensitivity setting takes effect immediately for future monitor scans.

***

## Sensitivity Scale Reference

{% hint style="success" %}
**Sensitivity Range:**

* **Maximum CI** (slider: -5) = 0.99 confidence (least sensitive, fewest alerts)
* **Default CI** (slider: 0) = 0.90 confidence (balanced approach)
* **Minimum CI** (slider: 5) = 0.80 confidence (most sensitive, most alerts)

**Each slider step** = 0.02 confidence interval change

**Example:** Slider position +1 = 0.88 confidence interval (one step more sensitive); slider position -1 = 0.92 (one step less sensitive)
{% endhint %}
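
The scale above can be expressed as a small mapping. This is a hypothetical helper for reasoning about the numbers (the product applies the mapping internally), and it assumes the left end of the scale is capped at 0.99 rather than 1.00, matching the maximum shown above:

```python
def slider_to_ci(position: int) -> float:
    """Map a slider position (-5..+5) to a confidence interval,
    per the scale above: 0 -> 0.90, each step -> 0.02 CI.
    Assumes the upper end caps at 0.99 (hypothetical helper)."""
    if not -5 <= position <= 5:
        raise ValueError("slider position must be between -5 and +5")
    # Moving RIGHT (positive) lowers the CI (more alerts);
    # moving LEFT (negative) raises it (fewer alerts).
    return min(round(0.90 - 0.02 * position, 2), 0.99)

print(slider_to_ci(0))   # → 0.9 (default)
print(slider_to_ci(5))   # → 0.8 (most sensitive)
print(slider_to_ci(-5))  # → 0.99 (least sensitive, capped)
```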

<figure><img src="https://1779874722-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTw0qpCVzfrIXqS4FEg4T%2Fuploads%2Fgit-blob-2ed0a5c440f388cb20c9eb85910578b8618f957c%2FSCR-20240304-gvmv.png?alt=media" alt=""><figcaption><p>Sensitivity adjustment slider for fine-tuning alert behavior</p></figcaption></figure>

***

## Best Practices for Feedback

### 🎯 Strategic Feedback Approach

**Start Conservative:**

1. **Monitor initial alert patterns** for 1-2 weeks before adjusting
2. **Identify recurring false positives** vs. legitimate issues
3. **Adjust sensitivity gradually** rather than making dramatic changes
4. **Document decisions** for team reference

**Team Coordination:**

* **Assign feedback responsibility** to experienced team members
* **Review feedback patterns** regularly as a team
* **Coordinate sensitivity changes** to avoid conflicting adjustments
* **Monitor impact** of sensitivity changes on alert volume

### ⚡ Optimization Tips

**For High-Volume Data:**

* Start with **lower sensitivity** (higher CI)
* Gradually **increase sensitivity** based on missed issues
* Consider **grouped-by monitoring** for dimension-specific thresholds

**For Critical Data:**

* Start with **higher sensitivity** (lower CI)
* **Accept some false positives** initially
* **Fine-tune based on** operational experience

**For Seasonal Data:**

* **Expect sensitivity adjustments** around seasonal patterns
* **Review and adjust** quarterly or seasonally
* **Document seasonal patterns** for team awareness

***

## Troubleshooting Common Issues

### Too Many False Positives

1. **Decrease sensitivity** (move slider left)
2. **Review data patterns** for natural variability
3. **Consider grouped-by monitoring** for segment-specific thresholds
4. **Evaluate monitor frequency** - may be too aggressive

### Missing Important Issues

1. **Increase sensitivity** (move slider right)
2. **Review threshold settings** for manual monitors
3. **Check monitor frequency** - may be too infrequent
4. **Consider additional monitor types** for comprehensive coverage

### Inconsistent Feedback Results

1. **Coordinate team feedback** to avoid conflicting adjustments
2. **Review data quality patterns** that may have changed
3. **Consider seasonal or business cycle impacts**
4. **Document adjustment rationale** for future reference

***

**Need Help?** Contact <support@decube.io> for guidance on optimizing your incident model sensitivity for your specific data patterns and business requirements.
