How to crawl PostgreSQL

Once you have configured the PostgreSQL user permissions, you can establish a connection between Atlan and PostgreSQL. (If you are using a private network for PostgreSQL, you will need to set that up first, too.)

To crawl metadata from PostgreSQL, review the order of operations and then complete the following steps.

Select the source

To select PostgreSQL as your source:

  1. In the top right of any screen, navigate to New and then click New Workflow.
  2. From the list of packages, select Postgres Assets and click Setup Workflow.

Provide credentials

Choose your extraction method:

Direct extraction method

To enter your PostgreSQL credentials:

  1. For Host, enter the host for your PostgreSQL instance.
  2. For Port, enter the port number of your PostgreSQL instance.
  3. For Authentication, choose the method you configured when setting up the PostgreSQL user:
    • For Basic authentication, enter the Username and Password you configured in PostgreSQL.
    • For IAM User authentication, enter the AWS Access Key, AWS Secret Key, and database Username you configured.
    • For IAM Role authentication, enter the AWS Role ARN of the new role you created and the database Username you configured. (Optional) Enter the AWS External ID only if you have not configured an external ID in the role definition.
  4. For Database, enter the name of the database to crawl.
  5. Click Test Authentication to confirm connectivity to PostgreSQL using these details.
  6. When successful, at the bottom of the screen click Next.
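The credential fields above map directly onto a standard libpq-style PostgreSQL connection string, which is roughly what Test Authentication exercises behind the scenes. As a minimal sketch (the host, user, and database names below are hypothetical, not values from Atlan):

```python
def build_dsn(host, port, user, password, dbname):
    """Assemble a libpq-style keyword connection string from the
    workflow's Host, Port, Username, Password, and Database fields."""
    return f"host={host} port={port} user={user} password={password} dbname={dbname}"

# Hypothetical example values for illustration only.
dsn = build_dsn("pg.example.com", 5432, "atlan_user", "secret", "analytics")
print(dsn)
```

A client library such as psycopg2 would accept a string in this form (for example, `psycopg2.connect(dsn)`) to verify connectivity the same way the Test Authentication button does.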

S3 extraction method

Atlan also supports the S3 extraction method for fetching metadata from PostgreSQL. This method uses Atlan's metadata-extractor tool to fetch metadata. You will need to first extract the metadata yourself and then make it available in S3.

To enter your S3 details:

  1. For S3 bucket name, enter the name of your S3 bucket. If you are re-using Atlan's S3 bucket, you can leave this blank.
  2. For S3 prefix, enter the S3 prefix under which all the metadata files exist. These include database.json, columns-<database>.json, and so on.
  3. For S3 region, enter the name of the S3 region.
  4. When complete, at the bottom of the screen click Next.
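The crawler looks for the extractor's output files directly under the S3 prefix you supply. This sketch shows the kind of keys that layout implies, using only the two file names mentioned above (the prefix and database names are hypothetical, and the real extractor produces additional files not listed here):

```python
def expected_keys(prefix, databases):
    """Compute the S3 object keys the crawler expects under the given
    prefix: one database.json plus one columns-<database>.json per
    database. (Illustrative subset only; the extractor emits more files.)"""
    keys = [f"{prefix}/database.json"]
    for db in databases:
        keys.append(f"{prefix}/columns-{db}.json")
    return keys

print(expected_keys("metadata/postgres", ["sales"]))
```

Checking that objects exist at keys like these before clicking Next can save a failed crawler run.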

Configure the connection

To complete the PostgreSQL connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.
  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection, not even admins.
  3. At the bottom of the screen, click Next to proceed.

Configure the crawler

Before running the PostgreSQL crawler, you can further configure it. (These options are only available when using the direct extraction method.)

You can override the defaults for any of these options:

  • To select the assets you want to exclude from crawling, click Exclude Metadata. (If none are specified, no assets are excluded.)
  • To select the assets you want to include in crawling, click Include Metadata. (If none are specified, all assets are included.)
  • To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field.
  • For Advanced Config, keep Default for the default configuration or click Custom to configure the crawler:
    • For Enable Source Level Filtering, click True to enable schema-level filtering at source or click False to disable it.
    • For Use JDBC Internal Methods, click True to enable JDBC internal methods for data extraction or click False to disable it.
💪 Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.
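The filtering rules above — exclude filter and exclude regex win over the include filter, and everything is crawled by default — can be sketched as a small decision function. This is an illustrative model of the documented behavior, not Atlan's actual implementation; the parameter names are hypothetical:

```python
import re

def should_crawl(name, include=None, exclude=None, exclude_regex=None):
    """Decide whether an asset is crawled, modeling the documented rules:
    the exclude filter and exclude regex take precedence over the
    include filter, and with no filters set, everything is crawled."""
    if exclude and name in exclude:
        return False
    if exclude_regex and re.fullmatch(exclude_regex, name):
        return False
    if include is not None:
        return name in include
    return True  # default: crawl all assets

print(should_crawl("tmp_orders", exclude_regex=r"tmp_.*"))      # excluded by regex
print(should_crawl("orders", include={"orders"}, exclude={"orders"}))  # exclude wins
```

Note how an asset listed in both filters is skipped, matching the tip above.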

Run the crawler

To run the PostgreSQL crawler after completing the steps above:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. You can either:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.

Once the crawler has finished running, you will see the assets on Atlan's asset page! 🎉
