# Use Cases

## User Management Automation

### Automated User Onboarding

**Scenario**: Automatically create Decube users when new employees join your organization.

**APIs Used**: Control API - Users

* `POST /user` - Create new users
* `GET /users` - List existing users to avoid duplicates

**Example Workflow**:

1. HR system triggers when new employee is added
2. Check whether the user already exists using `GET /users`
3. Create the new user account using `POST /user`
4. Decube sends the new user a welcome email with login instructions
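Steps 2 and 3 can be sketched in Python with `requests`. The endpoint paths come from the list above, but the base URL, auth header, and request/response field names are assumptions, so confirm them against the API reference before use:

```python
import requests

# Hypothetical values: check the real base URL and auth header
# in the Authentication Guide.
BASE_URL = "https://api.decube.io"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def find_user(existing_users, email):
    """Return the matching record from a GET /users listing, or None."""
    return next((u for u in existing_users if u.get("email") == email), None)

def onboard_user(email, name):
    """Create the user only if no account with that email exists yet."""
    resp = requests.get(f"{BASE_URL}/users", headers=HEADERS)
    resp.raise_for_status()
    if find_user(resp.json(), email):
        return None  # already onboarded, avoid a duplicate
    resp = requests.post(f"{BASE_URL}/user", headers=HEADERS,
                         json={"email": email, "name": name})
    resp.raise_for_status()
    return resp.json()
```

Checking for an existing account first keeps the HR trigger idempotent if the same hire event fires twice.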

### User Lifecycle Management

**Scenario**: Deactivate users when employees leave the organization.

**APIs Used**: Control API - Users

* `DELETE /user` - Deactivate user accounts
* `GET /user` - Verify user status before deactivation
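A minimal offboarding sketch in the same style, assuming a `status` field on the user record plus a hypothetical base URL and auth header:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def is_active(user):
    """Decide whether a user record still needs deactivation
    (the "active" status value is an assumption)."""
    return user.get("status") == "active"

def offboard_user(user_id):
    """Verify status with GET /user, then deactivate via DELETE /user."""
    resp = requests.get(f"{BASE_URL}/user", headers=HEADERS,
                        params={"user_id": user_id})
    resp.raise_for_status()
    if not is_active(resp.json()):
        return False  # already deactivated, nothing to do
    requests.delete(f"{BASE_URL}/user", headers=HEADERS,
                    json={"user_id": user_id}).raise_for_status()
    return True
```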

## Data Asset Discovery and Management

### Automated Asset Catalog Updates

**Scenario**: Keep your data catalog synchronized with metadata changes from your data infrastructure.

**APIs Used**: Data API - Assets

* `POST /assets/search` - Find existing assets
* `PATCH /assets` - Update asset metadata and descriptions
* `GET /assets` - Retrieve current asset details

**Example Workflow**:

1. Data pipeline completion triggers metadata update
2. Search for assets using `POST /assets/search`
3. Update asset descriptions and ownership using `PATCH /assets`
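The workflow might look like this in Python. The search body, the `results` key, and the `PATCH` payload fields (`description`, `owner`) are assumptions rather than documented names:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def build_patch(asset_id, description=None, owner=None):
    """Assemble a PATCH /assets body with only the fields being changed."""
    changes = {k: v for k, v in
               {"description": description, "owner": owner}.items()
               if v is not None}
    return {"asset_id": asset_id, **changes}

def update_asset_metadata(asset_name, description=None, owner=None):
    """Find matching assets, then patch each one."""
    resp = requests.post(f"{BASE_URL}/assets/search", headers=HEADERS,
                         json={"query": asset_name})
    resp.raise_for_status()
    for asset in resp.json().get("results", []):
        requests.patch(
            f"{BASE_URL}/assets", headers=HEADERS,
            json=build_patch(asset["id"], description, owner),
        ).raise_for_status()
```

Sending only the changed fields avoids clobbering metadata that other processes maintain.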

### Data Asset Discovery

**Scenario**: Build custom dashboards or integrations that surface relevant data assets to users.

**APIs Used**: Data API - Assets

* `POST /assets/search` - Search assets by type, tags, or ownership
* `GET /assets` - Get detailed asset information
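For a dashboard integration, a small helper can assemble only the filters a user actually set. The filter field names here (`type`, `tags`, `owner`) are assumptions to be checked against the search endpoint's schema:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def build_search_filters(asset_type=None, tags=None, owner=None):
    """Combine only the filters the caller supplied."""
    filters = {}
    if asset_type:
        filters["type"] = asset_type
    if tags:
        filters["tags"] = list(tags)
    if owner:
        filters["owner"] = owner
    return filters

def search_assets(**criteria):
    """POST /assets/search with the assembled filter body."""
    resp = requests.post(f"{BASE_URL}/assets/search", headers=HEADERS,
                         json=build_search_filters(**criteria))
    resp.raise_for_status()
    return resp.json()
```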

## Glossary Management

### Automated Glossary Synchronization

**Scenario**: Maintain consistent business terminology across multiple systems.

**APIs Used**: Data API - Glossary

* `GET /catalog/glossary/list` - List existing terms and categories
* `POST /catalog/glossary` - Create new terms and categories
* `PATCH /catalog/glossary` - Update existing definitions
* `POST /catalog/glossary/documentation` - Attach documentation to terms

**Example Workflow**:

1. Business stakeholders update terms in external system
2. Sync process retrieves updated definitions
3. Create or update glossary terms in Decube
4. Attach rich documentation to terms
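One way to sketch the sync: list current terms, then split incoming terms into creates and updates. The `name` field on terms and the `terms` response key are assumptions:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
GLOSSARY = f"{BASE_URL}/catalog/glossary"

def plan_sync(existing_terms, incoming_terms):
    """Split incoming terms into creates and updates, matched by name."""
    known = {t["name"] for t in existing_terms}
    creates = [t for t in incoming_terms if t["name"] not in known]
    updates = [t for t in incoming_terms if t["name"] in known]
    return creates, updates

def sync_glossary(incoming_terms):
    """Create new terms and update definitions that already exist."""
    resp = requests.get(f"{GLOSSARY}/list", headers=HEADERS)
    resp.raise_for_status()
    creates, updates = plan_sync(resp.json().get("terms", []), incoming_terms)
    for term in creates:
        requests.post(GLOSSARY, headers=HEADERS, json=term).raise_for_status()
    for term in updates:
        requests.patch(GLOSSARY, headers=HEADERS, json=term).raise_for_status()
```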

## Data Lineage Tracking

### Manual Lineage Documentation

**Scenario**: Document data transformations and dependencies that aren't automatically detected.

**APIs Used**: Data API - Lineage

* `POST /catalog/lineage/manual_lineage` - Create lineage relationships
* `GET /catalog/lineage/manual_lineage` - List existing lineage connections
* `DELETE /catalog/lineage/manual_lineage` - Remove outdated lineage

**Example Workflow**:

1. Data engineer completes new transformation pipeline
2. Create manual lineage connections between source and target datasets
3. Document the transformation logic in lineage metadata
4. Update lineage when pipelines change
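The lineage steps could be scripted as below. The `source`, `target`, and `description` payload fields are assumed names, not taken from the API reference:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
LINEAGE = f"{BASE_URL}/catalog/lineage/manual_lineage"

def lineage_payload(source_id, target_id, note=""):
    """Body for POST /catalog/lineage/manual_lineage (field names assumed)."""
    return {"source": source_id, "target": target_id, "description": note}

def record_pipeline_lineage(edges):
    """edges: iterable of (source_id, target_id, note) tuples,
    one per source-to-target dependency the pipeline creates."""
    for source, target, note in edges:
        requests.post(
            LINEAGE, headers=HEADERS,
            json=lineage_payload(source, target, note),
        ).raise_for_status()
```

Keeping the transformation note in the payload documents *why* the edge exists, which helps when pruning outdated lineage later with the `DELETE` endpoint.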

## Access Control and Security

### Group-Based Permission Management

**Scenario**: Automate user access provisioning based on organizational roles.

**APIs Used**: Data API - ACL Groups

* `GET /acl/group/list` - List available groups
* `POST /acl/group/add_user` - Add users to appropriate groups
* `POST /acl/group/remove_user` - Remove users when roles change
* `GET /acl/group` - Verify group memberships

**Example Workflow**:

1. Employee role changes in internal system
2. Determine appropriate Decube groups based on new role
3. Add user to new groups and remove from old groups
4. Verify permissions are correctly applied
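The add/remove step reduces to a set difference between current and desired memberships. A sketch, with the request field names (`group`, `user_id`) assumed:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def diff_groups(current_groups, desired_groups):
    """Work out which memberships to add and which to remove."""
    current, desired = set(current_groups), set(desired_groups)
    return sorted(desired - current), sorted(current - desired)

def apply_role_change(user_id, current_groups, desired_groups):
    """Add the user to new groups and remove them from stale ones."""
    to_add, to_remove = diff_groups(current_groups, desired_groups)
    for group in to_add:
        requests.post(f"{BASE_URL}/acl/group/add_user", headers=HEADERS,
                      json={"group": group, "user_id": user_id}
                      ).raise_for_status()
    for group in to_remove:
        requests.post(f"{BASE_URL}/acl/group/remove_user", headers=HEADERS,
                      json={"group": group, "user_id": user_id}
                      ).raise_for_status()
```

Computing the diff first means unchanged memberships are never touched, which keeps audit logs quiet during routine role changes.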

## Data Quality Reporting

### Automated Data Quality Scorecard Generation

**Scenario**: Generate regular data quality reports for compliance and governance purposes.

**APIs Used**: Data API - Data Quality Scorecard

* `POST /data_quality_scores/report/generate` - Request report generation
* `GET /data_quality_scores/report/result` - Poll for report completion and download results

**Use Cases**:

* Generate periodic data quality reports for compliance and governance
* Export quality metrics for external dashboards and analytics
* Track data quality trends over time across different data sources
* Create automated alerts based on data quality thresholds
* Audit data quality performance by data owner, schema, or asset type

**Example Workflow**:

{% @mermaid/diagram content="flowchart TD
A\[Submit report generation request] --> B\[POST /data\_quality\_scores/report/generate]
B --> C{Job queued}
C --> D\[Receive job\_id]
D --> E\[Poll for results]
E --> F\[GET /data\_quality\_scores/report/result?job\_id]
F --> G{Check status}
G -->|in\_progress| E
G -->|success| H\[Download JSON report]
G -->|failed| I\[Handle error]" %}

1. Scheduled job triggers data quality report request
2. Submit request with desired filters (time range, data sources, dimensions)
3. Receive job\_id for tracking the asynchronous report generation
4. Poll the results endpoint until report is ready
5. Download the complete JSON report with quality scores and metrics
6. Process results for dashboards, alerts, or compliance documentation
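The submit-and-poll loop in the diagram might be implemented as follows, assuming `job_id` and `status` fields shaped as the workflow describes (`in_progress`, `success`, `failed`):

```python
import time
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def is_terminal(status):
    """Polling stops once the job reports success or failure."""
    return status in ("success", "failed")

def generate_quality_report(filters, poll_interval=10, timeout=600):
    """Submit a report request, then poll until it finishes or times out."""
    resp = requests.post(f"{BASE_URL}/data_quality_scores/report/generate",
                         headers=HEADERS, json=filters)
    resp.raise_for_status()
    job_id = resp.json()["job_id"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(f"{BASE_URL}/data_quality_scores/report/result",
                            headers=HEADERS, params={"job_id": job_id})
        resp.raise_for_status()
        body = resp.json()
        if is_terminal(body.get("status")):
            return body
        time.sleep(poll_interval)
    raise TimeoutError(f"report {job_id} not ready after {timeout}s")
```

An explicit deadline keeps scheduled jobs from hanging forever if report generation stalls.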

## Monitors: Monitoring & Alerting Automation

### Discover & Visualize Monitors

1. A data catalog UI or external tool calls `POST /monitors/search` for a given `asset`.
2. Show available scheduled and on‑demand monitors to users (name, type, incident level, last run status).
3. Link to `GET /monitors?monitor_id={id}` to display the full configuration and a recent execution summary.

### Trigger On‑Demand Monitors after Upstream Change

1. Upstream pipeline emits an event after data load/transform.
2. Integration calls `POST /monitors/trigger` with the list of monitor ids or an `asset` identifier to run relevant monitors immediately.
3. Poll `GET /monitors/{monitor_id}/status` to wait for completion.
4. On completion, fetch details with `GET /monitors/{monitor_id}/history` and surface failures to downstream alerting or orchestration systems.
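A trigger-and-wait sketch of this integration, assuming a `monitor_ids` request field, a `running` status value, and a `runs` key in the history response:

```python
import time
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def failed_runs(history):
    """Pick out failed executions from a monitor history listing."""
    return [run for run in history if run.get("status") == "failed"]

def run_monitor_and_report(monitor_id, poll_interval=15):
    """Trigger a monitor, wait for completion, return its failed runs."""
    requests.post(f"{BASE_URL}/monitors/trigger", headers=HEADERS,
                  json={"monitor_ids": [monitor_id]}).raise_for_status()
    while True:
        resp = requests.get(f"{BASE_URL}/monitors/{monitor_id}/status",
                            headers=HEADERS)
        resp.raise_for_status()
        if resp.json().get("status") != "running":
            break
        time.sleep(poll_interval)
    resp = requests.get(f"{BASE_URL}/monitors/{monitor_id}/history",
                        headers=HEADERS)
    resp.raise_for_status()
    return failed_runs(resp.json().get("runs", []))
```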

### Temporarily Pause Scheduled Monitors for Maintenance

1. Schedule maintenance window or detect noisy false positives.
2. Call `POST /monitors/enable-disable` with `{ "monitor_id": <id>, "enabled": false }` to disable the scheduled monitor.
3. Re-enable when maintenance completes and verify the `enabled` flag via `GET /monitors?monitor_id={id}`.
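The pause-then-verify round trip could look like this; the request body comes from step 2 above, while the `enabled` field in the `GET /monitors` response is an assumption:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def toggle_payload(monitor_id, enabled):
    """Body for POST /monitors/enable-disable, per the workflow above."""
    return {"monitor_id": monitor_id, "enabled": enabled}

def set_monitor_enabled(monitor_id, enabled):
    """Flip the monitor's enabled flag, then verify it actually changed."""
    requests.post(f"{BASE_URL}/monitors/enable-disable", headers=HEADERS,
                  json=toggle_payload(monitor_id, enabled)).raise_for_status()
    resp = requests.get(f"{BASE_URL}/monitors", headers=HEADERS,
                        params={"monitor_id": monitor_id})
    resp.raise_for_status()
    return resp.json().get("enabled") == enabled
```

Verifying the flag after the write is what makes the maintenance automation safe to trust: a `False` return means the monitor is not in the state the scheduler assumes.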

### Audit & Reporting

1. Regular job calls `POST /monitors/search` to enumerate monitors for a tenant or team.
2. For each monitor, call `GET /monitors/{monitor_id}/history` to collect run results and incident counts.
3. Aggregate results into dashboards, compliance reports, or SLA measurement.
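The audit job can be sketched as a search followed by a per-monitor history rollup. The `monitors`, `runs`, `status`, and `incident_count` response fields are assumptions:

```python
import requests

BASE_URL = "https://api.decube.io"  # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def summarize(runs):
    """Roll a monitor's run history up into pass/fail/incident counts."""
    summary = {"passed": 0, "failed": 0, "incidents": 0}
    for run in runs:
        key = "passed" if run.get("status") == "success" else "failed"
        summary[key] += 1
        summary["incidents"] += run.get("incident_count", 0)
    return summary

def audit_monitors(search_body):
    """Enumerate monitors, then collect a summary per monitor id."""
    resp = requests.post(f"{BASE_URL}/monitors/search",
                         headers=HEADERS, json=search_body)
    resp.raise_for_status()
    report = {}
    for monitor in resp.json().get("monitors", []):
        resp = requests.get(f"{BASE_URL}/monitors/{monitor['id']}/history",
                            headers=HEADERS)
        resp.raise_for_status()
        report[monitor["id"]] = summarize(resp.json().get("runs", []))
    return report
```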

***

For more information, see:

* [API Overview](https://docs.decube.io/public-api/overview)
* [Authentication Guide](https://docs.decube.io/public-api/api-keys)
* [Data API Reference](https://docs.decube.io/public-api/overview/index)
* [Control API Reference](https://docs.decube.io/public-api/overview/control-api)
