AWS S3

Connect your S3 to see your S3 datasets and files within the Catalog.

Supported Capabilities

Catalog

  • Data Profiling ✅
  • Data Preview ✅

Minimum Requirement

To connect your AWS S3 to Decube, we will need the following information:

Choose an authentication method:

a. AWS Identity:

  • Select AWS Identity

  • Customer AWS Role ARN

  • Region

  • Path Specs

  • Data source name

b. AWS Access Key:

  • Access Key ID

  • Secret Access Key

  • Region

  • Path Specs

  • Data source name

Connection Options:

a. AWS Roles

This section walks you through creating a Customer AWS Role within your AWS account with the permissions Decube needs to access your data sources. If you prefer to script these steps, a minimal boto3 sketch follows the permission policy below.

  • Step 1: Go to your AWS Account > IAM Module > Roles

  • Step 2: Click on Create role

  • Step 3: Choose Custom trust policy

  • Step 4: Specify the following as the trust policy, replacing <DECUBE-AWS-IDENTITY-ARN> and <EXTERNAL-ID> with the values from Generating a Decube AWS Identity.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<DECUBE-AWS-IDENTITY-ARN>"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<EXTERNAL-ID>"
                }
            }
        }
    ]
}

  • Step 5: Click Next to proceed to attach a policy.

  • Step 6: Click Attach Policies, then Create Policy, and choose the JSON editor. Input the following policy (replacing {bucket-name} with the bucket you want Decube to access), press Next, enter a policy name of your choice, and press Create Policy.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
				"s3:GetObject",
				"s3:ListBucket",
				"s3:ListAllMyBuckets"
			],
			"Resource": [
				"arn:aws:s3:::{bucket-name}",
				"arn:aws:s3:::{bucket-name}/*"
			]
		}
	]
}
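
If you prefer to script the role setup rather than click through the console, the following is a minimal boto3 sketch of the same steps. The role and policy names are illustrative, and the placeholders (<DECUBE-AWS-IDENTITY-ARN>, <EXTERNAL-ID>, my-bucket) must be replaced with your own values.

import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing Decube's AWS identity to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "<DECUBE-AWS-IDENTITY-ARN>"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "<EXTERNAL-ID>"}},
    }],
}

# S3 read permissions scoped to the bucket you want Decube to catalog.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket", "s3:ListAllMyBuckets"],
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    }],
}

role = iam.create_role(
    RoleName="decube-s3-access",  # illustrative role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
policy = iam.create_policy(
    PolicyName="decube-s3-read",  # illustrative policy name
    PolicyDocument=json.dumps(s3_policy),
)
iam.attach_role_policy(RoleName="decube-s3-access", PolicyArn=policy["Policy"]["Arn"])

print("Customer AWS Role ARN:", role["Role"]["Arn"])  # value to enter in Decube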

b. AWS IAM User

  • Step 1: Log in to the AWS Console and proceed to IAM > Users > Create user (a scripted boto3 equivalent is sketched after these steps).

  • Step 2: Click Attach Policies, then Create Policy, and choose the JSON editor. Input the following policy (replacing {bucket-name} with the bucket you want Decube to access), press Next, enter a policy name of your choice, and press Create Policy.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
				"s3:GetObject",
				"s3:ListBucket",
				"s3:ListAllMyBuckets"
			],
			"Resource": [
				"arn:aws:s3:::{bucket-name}",
				"arn:aws:s3:::{bucket-name}/*"
			]
		}
	]
}

  • Step 3: Search for the policy you just created, select it, and press Next.

  • Step 4: Press Create user

  • Step 5: Navigate to the newly created user and click on Create access key

  • Step 6: Choose Application running outside AWS

  • Step 7: Save the provided access key and secret access key. You will not be able to retrieve these keys again.
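
A rough boto3 equivalent of the console steps above might look like the following. The user and policy names are illustrative, and the policy document is the same S3 read policy shown in Step 2 (replace my-bucket with your bucket name).

import json
import boto3

iam = boto3.client("iam")

s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket", "s3:ListAllMyBuckets"],
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    }],
}

iam.create_user(UserName="decube-s3-datalake")  # illustrative user name
policy = iam.create_policy(
    PolicyName="decube-s3-read",  # illustrative policy name
    PolicyDocument=json.dumps(s3_policy),
)
iam.attach_user_policy(UserName="decube-s3-datalake", PolicyArn=policy["Policy"]["Arn"])

# The secret access key is only returned once, so store both values securely.
keys = iam.create_access_key(UserName="decube-s3-datalake")["AccessKey"]
print("Access Key ID:", keys["AccessKeyId"])
print("Secret Access Key:", keys["SecretAccessKey"])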

AWS KMS

If the bucket you intend to connect to Decube is encrypted with a customer-managed KMS key, you will need to add the AWS IAM user created above to the key policy statement. A scripted sketch of this edit follows the steps below.

  1. Login to AWS Console and proceed to AWS KMS > Customer-managed keys.

  2. Find the key that was used to encrypt the AWS S3 bucket.

  3. On the Key policy tab, click Edit.

  4. Add the IAM user to the key policy. Assuming the user created is decube-s3-datalake:

a. If there is no existing policy attached to the key, use the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow decube to use key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<AWSAccountID>:user/{decube-s3-datalake}"
                ]
            },
            "Action": "kms:Decrypt",
            "Resource": "*"
        }
    ]
}

b. If there is an existing policy, append this section to the Statement array:

{
    "Statement": [
        {
            "Sid": "Allow decube to use key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<AWSAccountID>:user/{decube-s3-datalake}"
                ]
            },
            "Action": "kms:Decrypt",
            "Resource": "*"
        }
    ]
}
  5. Save Changes.
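
If you would rather apply this change programmatically, here is a minimal boto3 sketch that appends the statement to the existing key policy. The key ID and account ID are placeholders, and "default" is the policy name KMS uses for key policies.

import json
import boto3

kms = boto3.client("kms")
key_id = "<KMS-KEY-ID>"  # the key used to encrypt the S3 bucket

# Fetch the current key policy and append the statement for the decube IAM user.
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "Allow decube to use key",
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::<AWSAccountID>:user/decube-s3-datalake"]},
    "Action": "kms:Decrypt",
    "Resource": "*",
})

kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))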

Path Specs

Path Specs (path_specs) is a list of Path Spec (path_spec) objects, where each individual path_spec represents one or more datasets. The include path (path_spec.include) is the formatted path to the dataset. This path must end with *.* or *.[ext] to represent the leaf level. If *.[ext] is provided, only files with the specified extension type will be scanned. ".[ext]" can be any of the supported file types. Refer to example 1 below for more details.

All folder levels need to be specified in the include path. You can use /*/ to represent a folder level and avoid specifying the exact folder name. To map a folder as a dataset, use the {table} placeholder to represent the folder level for which the dataset is to be created. Refer to examples 2 and 3 below for more details.

Exclude paths (path_spec.exclude) can be used to ignore paths that are not relevant to the current path_spec. These paths cannot contain named variables ({}). An exclude path can use ** to represent multiple folder levels. Refer to example 4 below for more details.

Refer to example 5 if your bucket has a more complex dataset representation.

Additional points to note

  • Folder names should not contain {, }, *, or / characters.

  • The named variable {folder} is reserved for internal use; please do not use it as a named variable.

Path Specs - Examples

Example 1 - Individual file as Dataset

Bucket structure:

test-bucket
├── employees.csv
├── departments.json
└── food_items.csv

Path specs config to ingest employees.csv and food_items.csv as datasets:

path_specs:
    - include: s3://test-bucket/*.csv

This will automatically ignore the departments.json file. To include it, use *.* instead of *.csv.

Example 2 - Folder of files as Dataset (without Partitions)

Bucket structure:

test-bucket
└──  offers
     ├── 1.csv
     └── 2.csv

Path specs config to ingest folder offers as dataset:

path_specs:
    - include: s3://test-bucket/{table}/*.csv

{table} represents the folder for which the dataset will be created.
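
To make the {table} placeholder concrete, here is a small illustrative Python snippet (a sketch, not decube's matching logic) showing how object keys under the bucket are grouped into a dataset named after the folder:

import fnmatch

include = "s3://test-bucket/{table}/*.csv"
keys = ["offers/1.csv", "offers/2.csv", "readme.txt"]

# Substitute a wildcard for {table} so fnmatch can match the object keys.
pattern = include.replace("s3://test-bucket/", "").replace("{table}", "*")

datasets = {}
for key in keys:
    if fnmatch.fnmatch(key, pattern):
        table = key.split("/")[0]  # the folder captured by {table}
        datasets.setdefault(table, []).append(key)

print(datasets)  # {'offers': ['offers/1.csv', 'offers/2.csv']}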

Example 3 - Folder of files as Dataset (with Partitions)

Bucket structure:

test-bucket
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── returns
    └── year=2021
        └── month=2
            └── 1.parquet

Path specs config to ingest folders orders and returns as datasets:

path_specs:
    - include: s3://test-bucket/{table}/*/*/*.parquet

Example 4 - Folder of files as Dataset (with Partitions), and Exclude Filter

Bucket structure:

test-bucket
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── tmp_orders
    └── year=2021
        └── month=2
            └── 1.parquet

Path specs config to ingest folder orders as dataset but not folder tmp_orders:

path_specs:
    - include: s3://test-bucket/{table}/*/*/*.parquet
      exclude:
        - **/tmp_orders/**

Example 5 - Advanced - Either Individual file OR Folder of files as Dataset

Bucket structure:

test-bucket
├── customers
│   ├── part1.json
│   ├── part2.json
│   ├── part3.json
│   └── part4.json
├── employees.csv
├── food_items.csv
├── tmp_10101000.csv
└──  orders
    └── year=2022
        └── month=2
            ├── 1.parquet
            ├── 2.parquet
            └── 3.parquet

Path specs config:

path_specs:
    - include: s3://test-bucket/*.csv
      exclude:
        - **/tmp_10101000.csv
    - include: s3://test-bucket/{table}/*.json
    - include: s3://test-bucket/{table}/*/*/*.parquet

The config above has 3 path_specs and will ingest the following datasets:

  • employees.csv - Single File as Dataset

  • food_items.csv - Single File as Dataset

  • customers - Folder as Dataset

  • orders - Folder as Dataset

The file tmp_10101000.csv is excluded and will be ignored.

Valid path_specs.include

s3://my-bucket/foo/tests/bar.csv # single file table
s3://my-bucket/foo/tests/*.* # multiple file level tables
s3://my-bucket/foo/tests/{table}/*.parquet #table without partition
s3://my-bucket/foo/tests/{table}/*/*.csv #table where partitions are not specified
s3://my-bucket/foo/tests/{table}/*.* # table where no partitions as well as data type specified
s3://my-bucket/{dept}/tests/{table}/*.parquet # specifying keywords to be used in display name

Valid path_specs.exclude

- */tests/**
- s3://my-bucket/hr/**
- */tests/*.csv
- s3://my-bucket/foo/*/my_table/**

Supported file types

  • CSV (*.csv)

  • TSV (*.tsv)

  • JSON (*.json)

  • JSONL (*.jsonl)

  • Parquet (*.parquet)

  • Avro (*.avro) [beta]

Table formats:

  • Apache Iceberg [beta]

  • Delta table [beta]

Schemas for Parquet and Avro files are extracted as provided.

Schemas for schemaless formats (CSV, TSV, JSON) are inferred. For CSV, TSV, and JSONL files, we consider the first 100 rows by default. JSON file schemas are inferred from the entire file (given the difficulty of extracting only the first few objects of the file), which may impact performance.
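
As a rough illustration of sampling-based inference (not decube's implementation), the snippet below reads only the first 100 rows of a CSV stored in S3 and prints the column types that pandas infers; the bucket and key names are placeholders.

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="test-bucket", Key="employees.csv")  # placeholder bucket/key

# Infer column types from the first 100 rows only, mirroring the default described above.
sample = pd.read_csv(obj["Body"], nrows=100)
print(sample.dtypes)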
