Azure Data Lake Storage (ADLS)

Azure Data Lake Storage (ADLS) is a scalable and secure data lake solution from Microsoft Azure designed to handle the vast amounts of data generated by modern applications.

Supported Capabilities

Catalog

  • Data Profiling ✅

  • Data Preview ✅

Minimum Requirement

To connect your ADLS to Decube, the following information is required:

  • Tenant ID

  • Client ID

  • Client Secret

Potential Data Egress

Under the SaaS deployment model, data must be transferred from the storage container to the Data Plane to inspect files, retrieve schema information, and perform data quality monitoring. See our Security & Infrastructure overview and Data Policy on how we treat your data. If this is not preferable, you may opt for a Self-Hosted deployment model or bring your own Azure Function (see Azure Function for Metadata).

Credentials Needed

Setup on Microsoft Azure

  1. On the Azure Home Page, go to Azure Active Directory. The Tenant ID can be copied from the Basic information section.

  2. Go to App registrations.

  3. Click on New registration.

  4. Click Register.

  5. Save the Application (client) ID and Directory (tenant) ID.

  6. Click Add a certificate or secret.

  7. Go to Client secrets and click + New client secret.

  8. Click +Add.

  9. Copy and save the Value for the client secret.
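
Before entering these values into decube, you can optionally sanity-check them outside the platform. The Python sketch below is an illustration only (not part of decube) and uses the azure-identity SDK; the placeholder strings stand in for the tenant ID, client ID, and secret value you saved above.

# Sanity-check sketch: verify that the app registration can authenticate.
# Requires: pip install azure-identity
# The placeholder values below are the credentials saved in the steps above.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<Directory (tenant) ID>",
    client_id="<Application (client) ID>",
    client_secret="<client secret Value>",
)

# Requesting a token scoped to Azure Storage confirms that the registration
# and secret are valid; it does not yet prove access to any storage account.
token = credential.get_token("https://storage.azure.com/.default")
print("Token acquired, expires at:", token.expires_on)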

Assigning Role to Credentials

  1. From Azure Services, find and click on Storage Accounts. You should see the option for Access control (IAM) on the left sidebar.

  2. Click on Access Control (IAM) -> click on + Add -> click on Role assignments.

  3. Find the role called Storage Blob Data Reader, click on it, and click Next.

  4. On the next page, search for the name of the application that you just created in Microsoft Entra ID.

  5. Assign it to the role.
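
Role assignments can take a few minutes to propagate. If you want to verify the setup end to end, the sketch below (again, an illustration rather than part of decube) uses the azure-storage-file-datalake SDK to list the paths the service principal can now read; the storage account and container names are placeholders.

# Verification sketch: confirm the Storage Blob Data Reader role grants read access.
# Requires: pip install azure-identity azure-storage-file-datalake
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = ClientSecretCredential(
    tenant_id="<Directory (tenant) ID>",
    client_id="<Application (client) ID>",
    client_secret="<client secret Value>",
)

# Placeholder storage account and container names.
service = DataLakeServiceClient(
    account_url="https://<storage account name>.dfs.core.windows.net",
    credential=credential,
)
file_system = service.get_file_system_client("<container name>")

# Listing paths succeeds only once the role assignment is in effect.
for path in file_system.get_paths():
    print(path.name)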

Path Specs

Building Path Spec for ADLS

  • Identify the storage account you want to connect to decube.

  • Take note of:

    • Storage account name

    • Container name

    • Folder path

Follow this schema when building a path spec:

"abfs://{container name}@{storage account name}.dfs.core.windows.net/{folder path}"
example
"abfs://first@decubetestadls.dfs.core.windows.net/second/*.*"// Some code

Path Specs - Examples

Example 1 - Individual file as Dataset

Bucket structure:

test-bucket
├── employees.csv
├── departments.json
└── food_items.csv

Path specs config to ingest employees.csv and food_items.csv as datasets:

path_specs:
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/*.csv

This will automatically ignore the departments.json file. To include it, use *.* instead of *.csv.

Example 2 - Folder of files as Dataset (without Partitions)

Bucket structure:

test-bucket
└──  offers
     ├── 1.csv
     └── 2.csv

Path specs config to ingest folder offers as dataset:

path_specs:
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/{table}/*.csv

{table} represents the folder for which the dataset will be created.

Example 3 - Folder of files as Dataset (with Partitions)

Bucket structure:

test-bucket
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── returns
    └── year=2021
        └── month=2
            └── 1.parquet

Path specs config to ingest folders orders and returns as datasets:

path_specs:
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/{table}/*/*/*.parquet

Example 4 - Folder of files as Dataset (with Partitions), and Exclude Filter

Bucket structure:

test-bucket
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── tmp_orders
    └── year=2021
        └── month=2
            └── 1.parquet

Path specs config to ingest folder orders as dataset but not folder tmp_orders:

path_specs:
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/{table}/*/*/*.parquet
      exclude:
        - **/tmp_orders/**

Example 5 - Advanced - Either Individual file OR Folder of files as Dataset

Bucket structure:

test-bucket
├── customers
│   ├── part1.json
│   ├── part2.json
│   ├── part3.json
│   └── part4.json
├── employees.csv
├── food_items.csv
├── tmp_10101000.csv
└──  orders
    └── year=2022
        └── month=2
            ├── 1.parquet
            ├── 2.parquet
            └── 3.parquet

Path specs config:

path_specs:
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/*.csv
      exclude:
        - **/tmp_10101000.csv
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/{table}/*.json
    - include: abfs://test-container@test-storage-account.dfs.core.windows.net/{table}/*/*/*.parquet

The above config has three path_specs and will ingest the following datasets:

  • employees.csv - Single File as Dataset

  • food_items.csv - Single File as Dataset

  • customers - Folder as Dataset

  • orders - Folder as Dataset and will ignore file tmp_10101000.csv

Valid path_specs.include

abfs://test-container@test-storage-account.dfs.core.windows.net/foo/tests/bar.csv # single file table
abfs://test-container@test-storage-account.dfs.core.windows.net/foo/tests/*.* # multiple file level tables
abfs://test-container@test-storage-account.dfs.core.windows.net/foo/tests/{table}/*.parquet #table without partition
abfs://test-container@test-storage-account.dfs.core.windows.net/tests/{table}/*/*.csv #table where partitions are not specified
abfs://test-container@test-storage-account.dfs.core.windows.net/tests/{table}/*.* # table where no partitions as well as data type specified
abfs://test-container@test-storage-account.dfs.core.windows.net/{dept}/tests/{table}/*.parquet # specifying keywords to be used in display name

Valid path_specs.exclude

- */tests/**
- abfs://test-container@test-storage-account.dfs.core.windows.net/hr/**
- */tests/*.csv
- abfs://test-container@test-storage-account.dfs.core.windows.net/foo/*/my_table/**

Supported file types

  • CSV (*.csv)

  • TSV (*.tsv)

  • JSON (*.json)

  • JSONL (*.jsonl)

  • Parquet (*.parquet)

  • Avro (*.avro) [beta]

Table format:

  • Apache Iceberg [beta]

  • Delta table [beta]

Schemas for Parquet and Avro files are extracted as provided.

Schemas for schemaless formats (CSV, TSV, JSON) are inferred. For CSV, TSV, and JSONL files, we consider the first 100 rows by default. JSON file schemas are inferred from the entire file (given the difficulty of extracting only the first few objects of the file), which may impact performance.
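
As a rough illustration of this kind of row-sampled inference (not decube's actual implementation), pandas can infer column types from the first 100 rows of a CSV; the file name is a placeholder taken from Example 1.

# Illustrative sketch only, not decube's implementation: infer a CSV schema
# from a 100-row sample, mirroring the sampling behaviour described above.
# Requires: pip install pandas
import pandas as pd

sample = pd.read_csv("employees.csv", nrows=100)  # placeholder file name from Example 1
print(sample.dtypes)  # column types inferred from the sampled rows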

Path Specs (path_specs) is a list of Path Spec (path_spec) objects, where each individual path_spec represents one or more datasets. The include path (path_spec.include) represents the formatted path to the dataset. This path must end with *.* or *.[ext] to represent the leaf level. If *.[ext] is provided, then only files with the specified extension type will be scanned. ".[ext]" can be any of the supported file types. Refer to Example 1 above for more details.

All folder levels need to be specified in the include path. You can use /*/ to represent a folder level and avoid specifying the exact folder name. To map a folder as a dataset, use the {table} placeholder to represent the folder level for which the dataset is to be created. Refer to Examples 2 and 3 above for more details.

Exclude paths (path_spec.exclude) can be used to ignore paths that are not relevant to the current path_spec. This path cannot have named variables ({}). An exclude path can use ** to represent multiple folder levels. Refer to Example 4 above for more details.
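
To make the include/exclude behaviour concrete, the Python sketch below (an illustration, not decube's actual matcher) applies the Example 4 patterns with fnmatch: the {table} placeholder is resolved to a wildcard for matching and then read back as the dataset name.

# Illustrative sketch only: how the Example 4 include/exclude patterns select
# files and how the {table} folder names the dataset. fnmatch's "*" matches
# across "/" here, which is loose but enough to show the behaviour.
from fnmatch import fnmatch

prefix = "abfs://test-container@test-storage-account.dfs.core.windows.net/"
include = prefix + "{table}/*/*/*.parquet"
exclude = "**/tmp_orders/**"

files = [
    prefix + "orders/year=2022/month=2/1.parquet",
    prefix + "orders/year=2022/month=2/2.parquet",
    prefix + "tmp_orders/year=2021/month=2/1.parquet",
]

for path in files:
    included = fnmatch(path, include.replace("{table}", "*"))
    excluded = fnmatch(path, exclude)
    if included and not excluded:
        table = path[len(prefix):].split("/")[0]  # the folder mapped by {table}
        print(f"dataset '{table}' <- {path}")
# Prints only the two files under orders/; tmp_orders/ is excluded.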
