How to crawl Redpanda Kafka

Once you have configured the Redpanda Kafka permissions, you can establish a connection between Atlan and Redpanda Kafka.

πŸ’ͺ Did you know? Atlan currently supports the offline extraction method for fetching metadata from Redpanda Kafka. This method uses Atlan's kafka-extractor tool to fetch metadata.

After you have extracted the metadata and uploaded it to S3, review the order of operations and then complete the following steps to crawl metadata from Redpanda Kafka.

Select the source

To select Redpanda Kafka as your source:

  1. In the top right of any screen in Atlan, navigate to +New and click New workflow.
  2. From the Marketplace page, click Redpanda Kafka Assets.
  3. In the right panel, click Setup Workflow.

Provide credentials

With offline extraction, you first need to extract the metadata yourself and make it available in S3.

To enter your S3 details:

  1. For Extraction method, Offline is the default selection.
  2. For Bucket name, enter the name of your S3 bucket.
  3. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include topics.json, topic-configs.json, and so on.
  4. For Bucket region, enter the name of the S3 region.
  5. When complete, at the bottom of the screen, click Next.
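If you upload the extracted metadata with the AWS SDK for Python, a minimal sketch might look like the following. It simply places the kafka-extractor output files under the bucket, prefix, and region you enter in the steps above; the bucket name, prefix, region, and file list are placeholder values for illustration, not values Atlan requires.

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# Uploads the extracted metadata files to the S3 bucket and prefix that you
# later enter in the Atlan workflow configuration.
import boto3

bucket_name = "my-metadata-bucket"        # hypothetical bucket name
bucket_prefix = "redpanda/extraction-1"   # hypothetical prefix holding the metadata files
bucket_region = "us-east-1"               # hypothetical region

s3 = boto3.client("s3", region_name=bucket_region)

# All extracted metadata files, for example topics.json and topic-configs.json,
# must sit under the same prefix so the crawler can find them.
for file_name in ["topics.json", "topic-configs.json"]:
    s3.upload_file(
        Filename=file_name,                  # local path to the extracted file
        Bucket=bucket_name,
        Key=f"{bucket_prefix}/{file_name}",  # object key under the bucket prefix
    )
```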

Configure the connection

To complete the Redpanda Kafka connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.
  2. (Optional) To change the users who are able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, no one will be able to manage the connection β€” not even admins.
  3. Navigate to the bottom of the screen and click Next to proceed.

Configure the crawler

Before running the Redpanda Kafka crawler, you can further configure it.

On the Metadata page, you can override the defaults for any of these options:

  • To exclude topics from crawling, enter a regular expression in Exclude topics regex. (If no regex is specified, no assets are excluded.)
  • To include topics in crawling, enter a regular expression in Include topics regex. (If no regex is specified, all assets are included.)
πŸ’ͺ Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.
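As a quick illustration of how the two filters interact, the following sketch applies an include and an exclude pattern to a handful of topic names, with the exclude pattern winning when both match, as described above. The patterns and topic names are hypothetical examples, not defaults.

```python
# Illustrative sketch of include/exclude precedence (patterns are hypothetical).
import re

include_regex = re.compile(r"^orders\..*")    # crawl only topics starting with "orders."
exclude_regex = re.compile(r".*\.internal$")  # but skip anything ending in ".internal"

topics = ["orders.created", "orders.audit.internal", "payments.settled"]

for topic in topics:
    included = bool(include_regex.fullmatch(topic))
    excluded = bool(exclude_regex.fullmatch(topic))
    # Exclude takes precedence over include when a topic matches both filters.
    crawled = included and not excluded
    print(f"{topic}: {'crawled' if crawled else 'skipped'}")
```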

Run the crawler

To run the Redpanda Kafka crawler, after completing the steps above:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. You can either:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule & Run button.

Once the crawler has completed running, you will see the assets on Atlan's asset page! πŸŽ‰
