Once you have configured the Databricks access permissions, you can establish a connection between Atlan and your Databricks instance. (If you are also using AWS PrivateLink or Azure Private Link for Databricks, you will need to set that up first, too.)
To crawl metadata from your Databricks instance, review the order of operations and then complete the following steps.
Select the source
To select Databricks as your source:
- In the top right corner of any screen, navigate to New and then click New Workflow.
- From the list of packages, select Databricks Assets, and click Setup Workflow.
Provide credentials
Choose your extraction method:
- In Direct extraction, Atlan connects to your database and crawls metadata directly. Next, select an authentication method:
- In JDBC, you will need a personal access token and HTTP path for authentication.
- In AWS Service, you will need a client ID and client secret for AWS service principal authentication.
- In Azure Service, you will need a tenant ID, client ID, and client secret for Azure service principal authentication.
- In Offline extraction, you will need to first extract metadata yourself and make it available in S3.
JDBC
To enter your Databricks credentials:
- For Host, enter the hostname, AWS PrivateLink endpoint, or Azure Private Link endpoint for your Databricks instance.
- For Port, enter the port number of your Databricks instance.
- For Personal Access Token, enter the access token you generated when setting up access.
- For HTTP Path, enter one of the following:
  - A path starting with `/sql/1.0/warehouses` to use the Databricks SQL warehouse.
  - A path starting with `sql/protocolv1/o` to use the Databricks interactive cluster.
- Click Test Authentication to confirm connectivity to Databricks using these details.
- Once successful, at the bottom of the screen click Next.
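If you want to sanity-check the host, HTTP path, and personal access token before entering them in Atlan, a minimal sketch using the `databricks-sql-connector` Python package might look like the following. This is independent of Atlan's workflow, and the host, path, and token values are placeholders:

```python
# Optional sanity check of the JDBC-style connection details.
# Requires: pip install databricks-sql-connector
# The host, HTTP path, and token below are placeholders; substitute your own values.
from databricks import sql

with sql.connect(
    server_hostname="abc-1234567890123456.cloud.databricks.com",  # Host (no https://)
    http_path="/sql/1.0/warehouses/1234567890abcdef",             # HTTP Path of a SQL warehouse
    access_token="dapiXXXXXXXXXXXXXXXXXXXXXXXX",                  # Personal Access Token
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT current_catalog()")
        print(cursor.fetchone())  # Connectivity works if this prints a catalog name
```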
AWS service principal
To enter your Databricks credentials:
- For Host, enter the hostname or AWS PrivateLink endpoint for your Databricks instance.
- For Port, enter the port number of your Databricks instance.
- For Client ID, enter the client ID for your AWS service principal.
- For Client Secret, enter the client secret for your AWS service principal.
- Click Test Authentication to confirm connectivity to Databricks using these details.
- Once successful, at the bottom of the screen click Next.
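To confirm the AWS service principal's client ID and client secret are valid before entering them, one approach is to request an OAuth token from the workspace's machine-to-machine token endpoint. This sketch is independent of Atlan, and the workspace host and credential values are placeholders:

```python
# Optional check that the service principal credentials can mint an OAuth token.
import requests

workspace_host = "https://abc-1234567890123456.cloud.databricks.com"  # placeholder host
response = requests.post(
    f"{workspace_host}/oidc/v1/token",
    auth=("your-client-id", "your-client-secret"),  # Client ID / Client Secret
    data={"grant_type": "client_credentials", "scope": "all-apis"},
)
response.raise_for_status()
print("token acquired, expires in", response.json()["expires_in"], "seconds")
```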
Azure service principal
To enter your Databricks credentials:
- For Host, enter the hostname or Azure Private Link endpoint for your Databricks instance.
- For Port, enter the port number of your Databricks instance.
- For Client ID, enter the application (client) ID for your Azure service principal.
- For Client Secret, enter the client secret for your Azure service principal.
- For Tenant ID, enter the directory (tenant) ID for your Azure service principal.
- Click Test Authentication to confirm connectivity to Databricks using these details.
- Once successful, at the bottom of the screen click Next.
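To confirm the Azure service principal's tenant ID, client ID, and client secret before entering them, one approach is to request a Microsoft Entra ID (Azure AD) token for the Azure Databricks resource using the client credentials flow. The sketch below is independent of Atlan; the tenant ID and credential values are placeholders:

```python
# Optional check that the Azure service principal credentials are valid.
import requests

tenant_id = "00000000-0000-0000-0000-000000000000"  # placeholder Directory (tenant) ID
response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "your-application-client-id",      # Application (client) ID
        "client_secret": "your-client-secret",          # Client Secret
        "scope": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default",  # Azure Databricks resource
    },
)
response.raise_for_status()
print("token acquired for Azure Databricks")
```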
Offline extraction method
Atlan supports offline extraction for fetching metadata from Databricks using Atlan's databricks-extractor tool. You will need to extract the metadata yourself first and then make it available in S3.
To enter your S3 details:
- For Bucket name, enter the name of your S3 bucket.
- For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include `output/databricks-example/catalogs/success/result-0.json`, `output/databricks-example/schemas/{{catalog_name}}/success/result-0.json`, `output/databricks-example/tables/{{catalog_name}}/success/result-0.json`, and so on. (See the sketch after this list for one way to verify the files.)
- (Optional) For Bucket region, enter the name of the S3 region.
- When complete, at the bottom of the screen, click Next.
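Before running the workflow, you may want to confirm the extracted metadata files are actually visible under the bucket and prefix you entered. A minimal sketch using `boto3` (the bucket name and prefix are placeholders matching the examples above) could look like this:

```python
# Optional check that the extracted metadata files exist under the bucket prefix.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

s3 = boto3.client("s3")
response = s3.list_objects_v2(
    Bucket="your-offline-extraction-bucket",     # placeholder bucket name
    Prefix="output/databricks-example/",         # placeholder bucket prefix
)
for obj in response.get("Contents", []):
    # Expect keys like catalogs/success/result-0.json, schemas/.../result-0.json, etc.
    print(obj["Key"])
```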
Configure the connection
To complete the Databricks connection configuration:
- Provide a Connection Name that represents your source environment. For example, you might want to use values like `production`, `development`, `gold`, or `analytics`.
- (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection — not even admins.
- (Optional) To prevent users from querying any Databricks data, change Allow SQL Query to No.
- (Optional) To prevent users from previewing any Databricks data, change Allow Data Preview to No.
- (Optional) To prevent users from running large queries, change Max Row Limit or keep the default selection.
- At the bottom of the screen, click the Next button to proceed.
Configure the crawler
Before running the Databricks crawler, you can further configure it.
JDBC extraction method
The JDBC extraction method uses JDBC queries to extract metadata from your Databricks instance. This was the original extraction method provided by Databricks. This extraction method is only supported for personal access token authentication.
You can override the defaults for any of these options:
- To select the assets you want to include in crawling, click Include Metadata. (This will default to all assets if none are specified.)
- To select the assets you want to exclude from crawling, click Exclude Metadata. (This will default to no assets if none are specified.)
- To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field (see the example after this list).
- For View Definition Lineage, keep the default Yes to generate upstream lineage for views based on the tables they reference, or click No to skip generating view lineage.
- For Advanced Config, keep Default for the default configuration or click Advanced to further configure the crawler:
- To enable or disable schema-level filtering at source, click Enable Source Level Filtering and select True to enable it or False to disable it.
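As an illustration of the Exclude regex for tables & views field, the pattern below skips any table or view whose name ends in `_tmp` or `_staging`. The naming convention is hypothetical, and exactly which string the crawler matches the pattern against is not covered here, so treat this purely as a regex example:

```python
# A hypothetical exclude pattern: skip names ending in _tmp or _staging.
import re

exclude_pattern = re.compile(r".*(_tmp|_staging)$")
for name in ["orders", "orders_tmp", "customers_staging"]:
    print(name, "excluded" if exclude_pattern.match(name) else "crawled")
```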
REST API extraction method
The REST API extraction method uses Unity Catalog to extract metadata from your Databricks instance. This extraction method is supported for all three authentication options: personal access token, AWS service principal, and Azure service principal.
- This method is only supported by Unity Catalog-enabled workspaces.
- If you are enabling Unity Catalog on an existing workspace, you also need to upgrade your tables and views to Unity Catalog.
While REST APIs are used to extract metadata, JDBC queries are still used for querying purposes.
You can override the defaults for any of these options:
- Under Extraction method, select REST API.
- To select the assets you want to include in crawling, click Include Metadata. (This will default to all assets, if none are specified.)
- To select the assets you want to exclude from crawling, click Exclude Metadata. (This will default to no assets if none are specified.)
- To import tags from Databricks to Atlan, change Import Tags to Yes. Note that you must have a Unity Catalog-enabled workspace to import Databricks tags in Atlan.
- For SQL warehouse, click the dropdown to select the SQL warehouse you have configured.
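For context, the REST API extraction method relies on Unity Catalog endpoints such as listing catalogs, schemas, and tables. The sketch below shows the kind of call involved, using placeholder host and token values; Atlan performs the actual extraction itself once the workflow runs:

```python
# Illustrative Unity Catalog REST call: list catalogs in the workspace.
import requests

workspace_host = "https://abc-1234567890123456.cloud.databricks.com"  # placeholder host
response = requests.get(
    f"{workspace_host}/api/2.1/unity-catalog/catalogs",
    headers={"Authorization": "Bearer <personal-access-token-or-oauth-token>"},
)
response.raise_for_status()
for catalog in response.json().get("catalogs", []):
    print(catalog["name"])
```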
Run the crawler
To run the Databricks crawler, after completing the steps above:
- To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
- You can either:
- To run the crawler once immediately, at the bottom of the screen, click the Run button.
- To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.
Once the crawler has completed running, you will see the assets in Atlan's asset page! 🎉