How to crawl Databricks

Once you have configured the Databricks access permissions, you can establish a connection between Atlan and your Databricks instance.

To crawl metadata from your Databricks instance, complete the following steps.

Select the source

To select Databricks as your source:

  1. In the top right corner of any screen, navigate to New and then click New Workflow.

  2. From the list of packages, select Databricks Assets, and click Setup Workflow.

Provide credentials

To enter your Databricks credentials:

  1. For Host, enter the host of your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Personal Access Token, enter the access token you generated when setting up access.
  4. For HTTP Path, enter one of the following:
    • A path starting with /sql/1.0/endpoints to use the Databricks SQL endpoint.
    • A path starting with sql/protocolv1/o to use the Databricks interactive cluster.
  5. Click Test Authentication to confirm connectivity to Databricks using these details. (To verify the same details outside Atlan, see the sketch after this list.)
  6. Once successful, click Next at the bottom of the screen.
🚨 Careful! Make sure your Databricks instance (SQL endpoint or interactive cluster) is up and running, otherwise the Test Authentication step will time out.
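
If you want to sanity-check these details outside Atlan before clicking Test Authentication, you can do so with the databricks-sql-connector Python package. This is only a minimal sketch; the hostname, HTTP path, and token below are placeholders, and the connector uses HTTPS on port 443 by default.

    from databricks import sql  # pip install databricks-sql-connector

    # Placeholder values; replace with your own instance details.
    connection = sql.connect(
        server_hostname="dbc-a1b2c3d4-e5f6.cloud.databricks.com",
        http_path="/sql/1.0/endpoints/1234567890abcdef",  # or sql/protocolv1/o/... for a cluster
        access_token="dapiXXXXXXXXXXXXXXXXXXXX",
    )

    # A trivial query: if this succeeds, the host, path, and token are all valid.
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())

    connection.close()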

Configure the connection

To complete the Databricks connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might want to use values like production, development, gold, or analytics.
  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection, not even admins.
  3. (Optional) To prevent users from querying any Databricks data, change Allow SQL Query to No.
  4. (Optional) To prevent users from previewing any Databricks data, change Allow Data Preview to No.
  5. At the bottom of the screen, click the Next button to proceed.

Configure the crawler

Before running the Databricks crawler, you can further configure it.

You can override the defaults for any of these options:

  • Change the extraction method under Extraction method (see options below).
  • Select assets you want to include in crawling in the Include Metadata field. (If none are specified, all assets will be included.)
  • Select assets you want to exclude from crawling in the Exclude Metadata field. (If none are specified, no assets will be excluded.)
  • To have the crawler ignore temporary tables based on a naming convention, specify a regular expression in the Temporary table regex field.
πŸ’ͺ Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.

JDBC extraction method

The JDBC extraction method uses JDBC queries to extract metadata from your Databricks instance. This was the original extraction method available for Databricks, and is still the recommended one.
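
Atlan's exact queries are not documented here, but conceptually JDBC extraction walks the catalog with SQL metadata commands. A minimal sketch, reusing the connection object from the credentials example above:

    # Enumerate schemas and their tables via SQL metadata commands.
    # Atlan's actual extraction queries may differ.
    with connection.cursor() as cursor:
        cursor.execute("SHOW SCHEMAS")
        schemas = [row[0] for row in cursor.fetchall()]
        for schema in schemas:
            cursor.execute(f"SHOW TABLES IN {schema}")
            for row in cursor.fetchall():
                print(schema, row[1])  # column order: database, tableName, isTemporary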

REST API extraction method

The REST API extraction method uses Unity Catalog to extract metadata from your Databricks instance.

While the REST API is used to extract metadata, JDBC queries are still used for querying data.
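
For context, Unity Catalog exposes metadata through REST endpoints such as /api/2.1/unity-catalog/catalogs. Here is a hypothetical sketch of listing catalogs with the requests library; the host and token are placeholders, and this is not the code Atlan runs.

    import requests  # pip install requests

    HOST = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"  # placeholder
    TOKEN = "dapiXXXXXXXXXXXXXXXXXXXX"                        # placeholder

    # List catalogs from the Unity Catalog REST API.
    response = requests.get(
        f"{HOST}/api/2.1/unity-catalog/catalogs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    for catalog in response.json().get("catalogs", []):
        print(catalog["name"])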

Run the crawler

To run the Databricks crawler, after completing the steps above:

  • To run the crawler once, immediately, click the Run button at the bottom of the screen.
  • To schedule the crawler to run hourly, daily, weekly, or monthly, click the Schedule & Run button at the bottom of the screen.

Once the crawler has finished running, you will see the assets on Atlan's assets page! πŸŽ‰
