How to crawl Databricks

Once you have configured the Databricks access permissions, you can establish a connection between Atlan and your Databricks instance. (If you are also using AWS PrivateLink or Azure Private Link for Databricks, you will need to set that up first, too.)

To crawl metadata from your Databricks instance, review the order of operations and then complete the following steps.

Select the source

To select Databricks as your source:

  1. In the top right corner of any screen, navigate to New and then click New Workflow.

  2. From the list of packages, select Databricks Assets, and click Setup Workflow.

Provide credentials

Choose your extraction method:

JDBC

To enter your Databricks credentials:

  1. For Host, enter the hostname, AWS PrivateLink endpoint, or Azure Private Link endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Personal Access Token, enter the access token you generated when setting up access.
  4. For HTTP Path, enter the HTTP path of your Databricks SQL warehouse or interactive cluster.
  5. Click Test Authentication to confirm connectivity to Databricks using these details.
  6. Once successful, at the bottom of the screen click Next.
🚨 Careful! Make sure your Databricks instance (SQL warehouse or interactive cluster) is up and running; otherwise, the Test Authentication step will time out.
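
If you want to verify the same details outside Atlan before clicking Test Authentication, the sketch below uses the databricks-sql-connector Python package to open a connection and run a trivial query. The hostname, HTTP path, and token values are hypothetical placeholders; replace them with your own.

    # A connectivity check outside Atlan, using hypothetical values.
    # Requires the databricks-sql-connector package (pip install databricks-sql-connector).
    from databricks import sql

    HOST = "adb-1234567890123456.7.azuredatabricks.net"   # hypothetical hostname
    HTTP_PATH = "/sql/1.0/warehouses/abcdef1234567890"    # hypothetical SQL warehouse path
    TOKEN = "dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX"            # your personal access token

    # Open a connection and run a trivial query: roughly what Test Authentication does.
    with sql.connect(server_hostname=HOST, http_path=HTTP_PATH, access_token=TOKEN) as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchone())   # prints (1,) if the warehouse or cluster is reachable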

AWS service principal

To enter your Databricks credentials:

  1. For Host, enter the hostname or AWS PrivateLink endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Client ID, enter the client ID for your AWS service principal.
  4. For Client Secret, enter the client secret for your AWS service principal.
  5. Click Test Authentication to confirm connectivity to Databricks using these details.
  6. Once successful, at the bottom of the screen click Next.
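
If you want to confirm the service principal credentials independently of Atlan, the sketch below requests an OAuth token from the workspace's token endpoint using the client credentials flow. The hostname, client ID, and client secret are hypothetical placeholders.

    # A check, with hypothetical values, that the service principal can obtain
    # an OAuth token from the Databricks workspace token endpoint.
    import requests

    HOST = "dbc-a1b2c3d4-e5f6.cloud.databricks.com"   # hypothetical workspace hostname
    CLIENT_ID = "your-service-principal-client-id"
    CLIENT_SECRET = "your-service-principal-client-secret"

    resp = requests.post(
        f"https://{HOST}/oidc/v1/token",
        auth=(CLIENT_ID, CLIENT_SECRET),
        data={"grant_type": "client_credentials", "scope": "all-apis"},
    )
    resp.raise_for_status()
    print("token acquired, expires in", resp.json().get("expires_in"), "seconds")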

Azure service principal

To enter your Databricks credentials:

  1. For Host, enter the hostname or Azure Private Link endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Client ID, enter the application (client) ID for your Azure service principal.
  4. For Client Secret, enter the client secret for your Azure service principal.
  5. For Tenant ID, enter the directory (tenant) ID for your Azure service principal.
  6. Click Test Authentication to confirm connectivity to Databricks using these details.
  7. Once successful, at the bottom of the screen click Next.
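
To confirm the Azure service principal credentials independently of Atlan, a minimal sketch is shown below: it requests a Microsoft Entra ID token for the Azure Databricks resource using the client credentials flow. The tenant ID, client ID, and client secret are hypothetical placeholders.

    # A check, with hypothetical values, that the Azure service principal can obtain
    # a Microsoft Entra ID token for the Azure Databricks resource.
    import requests

    TENANT_ID = "your-directory-tenant-id"
    CLIENT_ID = "your-application-client-id"
    CLIENT_SECRET = "your-client-secret"
    # 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the well-known Azure Databricks resource ID.
    SCOPE = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default"

    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": SCOPE,
        },
    )
    resp.raise_for_status()
    print("token acquired, expires in", resp.json().get("expires_in"), "seconds")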

Offline extraction method

Atlan also supports an offline extraction method for Databricks metadata. This method uses Atlan's databricks-extractor tool: you extract the metadata yourself and then make the resulting files available in an S3 bucket.

To enter your S3 details:

  1. For Bucket name, enter the name of your S3 bucket.
  2. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include output/databricks-example/catalogs/success/result-0.json, output/databricks-example/schemas/{{catalog_name}}/success/result-0.json, output/databricks-example/tables/{{catalog_name}}/success/result-0.json, and so on.
  3. (Optional) For Bucket region, enter the name of the S3 region.
  4. When complete, at the bottom of the screen, click Next.
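
Before running the workflow, you may want to confirm that the extracted files are actually present under the prefix you entered. A minimal sketch using boto3, with hypothetical bucket and prefix names:

    # A sanity check, with hypothetical names, that the extracted metadata files
    # are present under the bucket prefix before running the workflow.
    import boto3

    BUCKET = "my-metadata-bucket"            # hypothetical bucket name
    PREFIX = "output/databricks-example/"    # hypothetical bucket prefix

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for obj in resp.get("Contents", []):
        print(obj["Key"])   # expect keys like .../catalogs/success/result-0.json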

Configure the connection

To complete the Databricks connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might want to use values like production, development, gold, or analytics.
  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection — not even admins.
  3. (Optional) To prevent users from querying any Databricks data, change Allow SQL Query to No.
  4. (Optional) To prevent users from previewing any Databricks data, change Allow Data Preview to No.
  5. (Optional) To prevent users from running large queries, lower the Max Row Limit or keep the default selection.
  6. At the bottom of the screen, click the Next button to proceed.

Configure the crawler

Before running the Databricks crawler, you can further configure it.

JDBC extraction method

The JDBC extraction method uses JDBC queries to extract metadata from your Databricks instance. This was the original extraction method supported for Databricks, and it is only available for personal access token authentication.

You can override the defaults for any of these options:

  • To select the assets you want to include in crawling, click Include Metadata. (This will default to all assets if none are specified.)
  • To select the assets you want to exclude from crawling, click Exclude Metadata. (This will default to no assets if none are specified.)
  • To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field. (An example pattern is sketched after this list.)
  • For View Definition Lineage, keep the default Yes to generate upstream lineage for views based on the tables they reference, or click No to exclude view lineage from crawling.
  • For Advanced Config, keep Default for the default configuration or click Advanced to further configure the crawler:
    • To enable or disable schema-level filtering at source, click Enable Source Level Filtering and select True to enable it or False to disable it.
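
As an illustration of the exclude regex (the pattern itself, not how Atlan applies it internally), the hypothetical example below skips tables and views whose names start with tmp_ or end with _backup:

    # A hypothetical exclude pattern: skip any table or view whose name starts
    # with "tmp_" or ends with "_backup". Adjust to your own naming convention.
    import re

    EXCLUDE_REGEX = r"^tmp_.*|.*_backup$"

    for name in ["tmp_orders_stage", "orders", "customers_backup", "customers"]:
        excluded = re.search(EXCLUDE_REGEX, name) is not None
        print(f"{name}: {'excluded' if excluded else 'crawled'}")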

REST API extraction method

The REST API extraction method uses Unity Catalog to extract metadata from your Databricks instance. This extraction method is supported for all three authentication options: personal access token, AWS service principal, and Azure service principal.

While REST APIs are used to extract metadata, JDBC queries are still used for querying purposes.
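
For context, the sketch below shows the kind of Unity Catalog REST call involved, listing catalogs with a hypothetical hostname and a personal access token:

    # A sketch of a Unity Catalog REST call that lists catalogs in the workspace.
    import requests

    HOST = "adb-1234567890123456.7.azuredatabricks.net"   # hypothetical hostname
    TOKEN = "dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX"            # personal access token

    resp = requests.get(
        f"https://{HOST}/api/2.1/unity-catalog/catalogs",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    for catalog in resp.json().get("catalogs", []):
        print(catalog["name"])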

You can override the defaults for any of these options:

  • Under Extraction method, select REST API.
  • To select the assets you want to include in crawling, click Include Metadata. (This will default to all assets if none are specified.)
  • To select the assets you want to exclude from crawling, click Exclude Metadata. (This will default to no assets if none are specified.)
  • To import tags from Databricks to Atlan, change Import Tags to Yes. Note that you must have a Unity Catalog-enabled workspace to import Databricks tags into Atlan.
    • For SQL warehouse, click the dropdown to select the SQL warehouse you have configured.
💪 Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.

Run the crawler

To run the Databricks crawler, after completing the steps above:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. You can either:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.

Once the crawler has finished running, you will see the assets on Atlan's assets page! 🎉
