How to crawl Google BigQuery


Once you have configured the Google BigQuery user permissions, you can establish a connection between Atlan and Google BigQuery.

To crawl metadata from Google BigQuery, complete the following steps.

Select the source

To select Google BigQuery as your source:

  1. In the top right corner of any screen, click New and then click New Workflow.
  2. From the list of packages, select BigQuery Assets and click Setup Workflow.

Provide credentials

To enter your Google BigQuery credentials:

  1. For Project Id, enter the value of project_id from the JSON for the service account you created.
  2. For Service Account Json, paste in the entire JSON for the service account you created.
  3. For Service Account Email, enter the value of client_email from the JSON for the service account you created.
  4. At the bottom of the form, click the Test Authentication button to confirm connectivity to Google BigQuery using these details.
  5. Once authentication succeeds, click the Next button at the bottom of the screen.
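The Project Id and Service Account Email values both come straight out of the service-account key file. As a sketch, here is how you might pull them from the JSON with Python; the key file contents below are a made-up example, but the project_id and client_email fields are standard in the JSON that Google Cloud generates for a service-account key:

```python
import json

# Hypothetical service-account key file contents (abbreviated).
# "project_id" and "client_email" are standard fields in the key JSON
# that Google Cloud generates when you create a service-account key.
service_account_json = """{
    "type": "service_account",
    "project_id": "my-bigquery-project",
    "client_email": "atlan-crawler@my-bigquery-project.iam.gserviceaccount.com"
}"""

key = json.loads(service_account_json)
print(key["project_id"])    # value to enter for Project Id
print(key["client_email"])  # value to enter for Service Account Email
```

The entire file, unmodified, is what you paste into the Service Account Json field.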

Configure the connection

To complete the Google BigQuery connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might want to use values like production, development, gold, or analytics.
  2. (Optional) To change who can manage this connection, update the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, nobody, not even admins, will be able to manage the connection.
  3. (Optional) To prevent users from querying any Google BigQuery data, change Allow SQL Query to No.
  4. (Optional) To prevent users from previewing any Google BigQuery data, change Allow Data Preview to No.
  5. At the bottom of the screen, click the Next button to proceed.

Configure the crawler

Before running the Google BigQuery crawler, you can further configure it.

You can override the defaults for any of these options:

  • In the Include Metadata field, select the assets you want to include in crawling. (If none are specified, all assets are included by default.)
  • In the Exclude Metadata field, select the assets you want to exclude from crawling. (If none are specified, no assets are excluded by default.)
  • To have the crawler ignore temporary tables based on a naming convention, specify a regular expression in the Temporary table regex field.
💪 Did you know? If a folder or project appears in both the include and exclude filters, the exclude filter takes precedence.
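The two filtering behaviors above can be sketched in a few lines of Python. The table names, pattern, and project names here are hypothetical, purely to illustrate how a temporary-table regex and the exclude-over-include precedence behave:

```python
import re

# Hypothetical naming convention: temporary tables start with "tmp_".
temp_table_regex = re.compile(r"^tmp_.*")

tables = ["orders", "tmp_daily_load", "customers"]
crawled = [t for t in tables if not temp_table_regex.match(t)]
print(crawled)  # tmp_daily_load is ignored by the crawler

# If a project appears in both filters, exclude wins.
include = {"analytics", "sales"}
exclude = {"sales"}
selected = include - exclude
print(sorted(selected))  # only "analytics" is crawled
```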

Run the crawler

To run the Google BigQuery crawler, after completing the steps above:

  • To run the crawler once, immediately, click the Run button at the bottom of the screen.
  • To schedule the crawler to run hourly, daily, weekly, or monthly, click the Schedule & Run button at the bottom of the screen.

Once the crawler has finished running, you will see your Google BigQuery assets on Atlan's asset page! 🎉
