How to crawl MySQL

Once you have configured the MySQL user permissions, you can establish a connection between Atlan and MySQL. (If you are also using a private network for MySQL, you will need to set that up first, too.)

To crawl metadata from MySQL, review the order of operations and then complete the following steps.

Select the source

To select MySQL as your source:

  1. In the top right of any screen, navigate to New and then click New Workflow.
  2. From the list of packages, select MySQL Assets and click Setup Workflow.

Provide credentials

Choose your extraction method:

Direct extraction method

To enter your MySQL credentials:

  1. For Host Name, enter the host for your MySQL instance.
  2. For Port, enter the port number of your MySQL instance.
  3. For Authentication, choose the method you configured when setting up the MySQL user:
    • For Basic authentication, enter the Username and Password you configured in MySQL.
    • For IAM User authentication, enter the AWS Access Key, AWS Secret Key, and database Username you configured.
    • For IAM Role authentication, enter the AWS Role ARN of the new role you created and the database Username you configured. (Optional) Enter the AWS External ID only if you have not configured an external ID in the role definition.
  4. Click Test Authentication to confirm connectivity to MySQL using these details.
    💪 Did you know? If you get an error like Error: 1129: Host ... is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts', ask your database admin to run the FLUSH HOSTS; command on the RDS instance, and then try again.
  5. When successful, at the bottom of the screen click Next.
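If Test Authentication fails, it can help to first confirm basic network reachability from a machine on the same network. The following is a minimal sketch in Python that only checks TCP connectivity to the host and port you entered above; it does not validate MySQL credentials, and the host name shown is a hypothetical example.

```python
import socket

def mysql_port_reachable(host: str, port: int = 3306, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    This only confirms network reachability (DNS, firewalls, security
    groups); it does not check MySQL credentials or permissions.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example:
# mysql_port_reachable("mysql.example.internal", 3306)
```

If this returns False, resolve the network issue (security groups, private network setup) before retrying Test Authentication.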

S3 extraction method

Atlan also supports the S3 extraction method for fetching metadata from MySQL. This method uses Atlan's metadata-extractor tool. You will first need to extract the metadata yourself and then make it available in S3.

To enter your S3 details:

  1. For S3 bucket name, enter the name of your S3 bucket. If you are reusing Atlan's S3 bucket, you can leave this blank.
  2. For S3 prefix, enter the S3 prefix under which all the metadata files exist. These include databases.json, columns-<database>.json, and so on.
  3. (Optional) For S3 region, enter the name of the S3 region.
  4. When complete, at the bottom of the screen click Next.
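Before running the workflow, you can sanity-check that the expected files exist under your S3 prefix. The sketch below builds the keys for only the two file patterns named above (databases.json and columns-<database>.json); the exact set of files the metadata-extractor produces may differ by version, and the prefix and database names are hypothetical examples.

```python
def expected_metadata_keys(prefix: str, databases: list[str]) -> list[str]:
    """Build the S3 keys the crawler expects under the given prefix.

    Covers only the two file patterns named in this guide:
    databases.json and columns-<database>.json.
    """
    base = prefix.strip("/")
    keys = [f"{base}/databases.json"]
    keys += [f"{base}/columns-{db}.json" for db in databases]
    return keys

print(expected_metadata_keys("mysql/extracts", ["sales", "hr"]))
# ['mysql/extracts/databases.json', 'mysql/extracts/columns-sales.json', 'mysql/extracts/columns-hr.json']
```

You can then list the objects under the prefix in your S3 console (or with your usual AWS tooling) and confirm each expected key is present.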

Configure the connection

To complete the MySQL connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.
  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, nobody will be able to manage the connection, not even admins.
  3. (Optional) To prevent users from querying any MySQL data, change Allow SQL Query to No.
  4. (Optional) To prevent users from previewing any MySQL data, change Allow Data Preview to No.
  5. At the bottom of the screen, click Next to proceed.

Configure the crawler

Before running the MySQL crawler, you can further configure it. (Some of the options may only be available when using the direct extraction method.)

You can override the defaults for any of these options:

  • To select the assets you want to include in crawling, click Include Metadata. (If none are specified, all assets will be included.)
  • To select the assets you want to exclude from crawling, click Exclude Metadata. (If none are specified, no assets will be excluded.)
  • To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field.
  • To enable or disable schema-level filtering at source, click Enable Source Level Filtering and select the relevant option.
💪 Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.
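To see how an exclude regex behaves, here is a small sketch in Python using a hypothetical naming convention (scratch tables prefixed with tmp_ or _). The crawler applies your pattern to table and view names in a similar spirit, though its exact regex dialect may differ, so test your pattern against real names before relying on it.

```python
import re

# Hypothetical convention: ignore scratch tables prefixed with tmp_ or _.
EXCLUDE_PATTERN = re.compile(r"^(tmp_|_)")

tables = ["orders", "tmp_orders_backup", "_staging_area", "customers"]
crawled = [t for t in tables if not EXCLUDE_PATTERN.search(t)]
print(crawled)  # ['orders', 'customers']
```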

Run the crawler

After completing the steps above, to run the MySQL crawler:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. You can either:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.

Once the crawler has finished running, you will see the assets on Atlan's assets page! 🎉
