Once you have configured the PostgreSQL user permissions, you can establish a connection between Atlan and PostgreSQL. (If you are using a private network for PostgreSQL, you will need to set that up first, too.)
To crawl metadata from PostgreSQL, review the order of operations and then complete the following steps.
Select the source
To select PostgreSQL as your source:
- In the top right of any screen, navigate to New and then click New Workflow.
- From the list of packages, select Postgres Assets and click Setup Workflow.
Provide credentials
Choose your extraction method:
- In Direct extraction, Atlan connects to your database and crawls metadata directly.
- In Offline extraction, you first extract the metadata yourself and make it available in S3.
Direct extraction method
To enter your PostgreSQL credentials:
- For Host, enter the host for your PostgreSQL instance.
- For Port, enter the port number of your PostgreSQL instance.
- For Authentication, choose the method you configured when setting up the PostgreSQL user:
- For Basic authentication, enter the Username and Password you configured in PostgreSQL.
- For IAM User authentication, enter the AWS Access Key, AWS Secret Key, and database Username you configured.
- For IAM Role authentication, enter the AWS Role ARN of the new role you created and the database Username you configured. (Optional) Enter the AWS External ID only if you have not configured an external ID in the role definition.
- For Database, enter the name of the database to crawl.
- Click Test Authentication to confirm connectivity to PostgreSQL using these details.
- When successful, at the bottom of the screen click Next.
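If Test Authentication fails, a quick way to rule out network problems is to check that the host and port are reachable at all before revisiting credentials. The sketch below uses Python's standard library only; the hostname is a placeholder, and 5432 is simply PostgreSQL's default port:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; replace with your PostgreSQL instance's host and port.
print(is_reachable("db.example.internal", 5432))
```

A True result only confirms network reachability; authentication and permissions are still validated separately by Test Authentication.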
Offline extraction method
Atlan also supports the offline extraction method for fetching metadata from PostgreSQL. This method uses Atlan's metadata-extractor tool to fetch metadata. You will need to first extract the metadata yourself and then make it available in S3.
To enter your S3 details:
- For Bucket name, enter the name of your S3 bucket or Atlan's bucket.
- For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include `database.json`, `columns-<database>.json`, and so on.
- When complete, at the bottom of the screen click Next.
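When uploading the extracted files yourself, the crawler expects them directly under the bucket prefix. The sketch below builds the expected object keys for the two file names mentioned above; the prefix and database names are hypothetical, and the full file set produced by Atlan's metadata-extractor tool includes additional files:

```python
def s3_keys(prefix: str, databases: list[str]) -> list[str]:
    """Build expected S3 object keys for extracted metadata files under a prefix."""
    prefix = prefix.rstrip("/")  # tolerate a trailing slash in the prefix
    keys = [f"{prefix}/database.json"]
    for db in databases:
        keys.append(f"{prefix}/columns-{db}.json")
    return keys

# Hypothetical prefix and database name:
print(s3_keys("metadata/postgres", ["sales"]))
# ['metadata/postgres/database.json', 'metadata/postgres/columns-sales.json']
```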
Configure the connection
To complete the PostgreSQL connection configuration:
- Provide a Connection Name that represents your source environment. For example, you might use values like `production`, `development`, `gold`, or `analytics`.
- (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection — not even admins.
- At the bottom of the screen, click Next to proceed.
Configure the crawler
Before running the PostgreSQL crawler, you can further configure it. (These options are only available when using the direct extraction method.)
You can override the defaults for any of these options:
- To select the assets you want to exclude from crawling, click Exclude Metadata. If none are specified, no assets will be excluded.
- To select the assets you want to include in crawling, click Include Metadata. If none are specified, all assets will be included.
- To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field.
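For example, a pattern such as `^(tmp_|test_).*` would make the crawler ignore tables and views whose names start with `tmp_` or `test_`. You can sanity-check a pattern before saving it; the naming convention and table names below are made up for illustration:

```python
import re

# Hypothetical convention: skip temporary and test tables.
exclude = re.compile(r"^(tmp_|test_).*")

tables = ["orders", "tmp_orders_backup", "test_load", "customers"]
kept = [t for t in tables if not exclude.match(t)]
print(kept)  # ['orders', 'customers']
```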
- For Advanced Config, keep Default for the default configuration or click Custom to configure the crawler:
- For Enable Source Level Filtering, click True to enable schema-level filtering at source or click False to disable it.
- For Use JDBC Internal Methods, click True to enable JDBC internal methods for data extraction or click False to disable it.
Run the crawler
To run the PostgreSQL crawler, after completing the steps above:
- To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
- You can either:
- To run the crawler once immediately, at the bottom of the screen, click the Run button.
- To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.
Once the crawler has finished running, you will see the assets on Atlan's assets page! 🎉