How to crawl Confluent Kafka

Once you have configured the Confluent Kafka permissions, you can establish a connection between Atlan and Confluent Kafka.

To crawl metadata from Confluent Kafka, review the order of operations and then complete the following steps.

Select the source

To select Confluent Kafka as your source:

  1. In the top right of any screen in Atlan, navigate to +New and click New Workflow.
  2. From the Marketplace page, click Confluent Kafka Assets.
  3. In the right panel, click Setup Workflow.

Provide credentials

Direct extraction method

To enter your Confluent Kafka credentials:

  1. For Bootstrap servers, enter the hostname(s) of your Confluent Kafka broker(s). For multiple hostnames, separate each entry with a semicolon (;).
  2. For API Key, enter the API key you copied.
  3. For API Secret, enter the API secret you copied.
  4. For Security protocol, click Plaintext to connect to Confluent Kafka through a non-encrypted channel or click SSL to connect via a Secure Sockets Layer (SSL) channel.
  5. Click the Test Authentication button to confirm connectivity to Confluent Kafka.
  6. Once authentication is successful, navigate to the bottom of the screen and click Next.
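The credential fields above correspond to a standard Kafka client configuration. As a rough sketch only (the hostnames and key values are placeholders, and the SASL/PLAIN mapping is an assumption based on how Confluent Cloud API keys are commonly used, not a statement of how Atlan connects internally), the fields might translate like this:

```python
def build_kafka_config(bootstrap_servers, api_key, api_secret, use_ssl=True):
    """Translate Atlan-style connection fields into a Kafka client config dict.

    Atlan accepts multiple hostnames separated by semicolons; Kafka clients
    expect a single comma-separated bootstrap.servers string.
    """
    hosts = [h.strip() for h in bootstrap_servers.split(";") if h.strip()]
    return {
        "bootstrap.servers": ",".join(hosts),
        # Assumption: Confluent Cloud API keys authenticate via SASL/PLAIN.
        "security.protocol": "SASL_SSL" if use_ssl else "SASL_PLAINTEXT",
        "sasl.mechanism": "PLAIN",
        "sasl.username": api_key,
        "sasl.password": api_secret,
    }

conf = build_kafka_config(
    "broker-1.example.com:9092;broker-2.example.com:9092",
    api_key="MY_KEY",
    api_secret="MY_SECRET",
)
print(conf["bootstrap.servers"])
```

A config like this could then be passed to a Kafka client (for example, `confluent_kafka.admin.AdminClient`) to verify connectivity outside of Atlan.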

Offline extraction method

Atlan also supports an offline extraction method for fetching metadata from Confluent Kafka. This method uses Atlan's kafka-extractor tool to extract the metadata. You will first need to extract the metadata yourself and then make it available in S3.

To enter your S3 details:

  1. For Bucket name, enter the name of your S3 bucket.
  2. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include topics.json, topic-configs.json, and so on.
  3. For Bucket region, enter the name of the S3 region.
  4. (Optional) If you're using your own Azure Blob Storage container for offline extraction, enter the name of your storage account for Storage Account and your SAS token for Blob SAS Token.
  5. When complete, at the bottom of the screen, click Next.
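The extracted metadata files need to sit under the bucket prefix you enter above. As an illustrative sketch (the `s3_keys` helper, the file list beyond `topics.json` and `topic-configs.json`, and the bucket and prefix names are all placeholders for this example), here is how the prefix and filenames combine into S3 object keys:

```python
# Metadata files named in the step above; the full set Atlan expects may differ.
METADATA_FILES = ["topics.json", "topic-configs.json"]

def s3_keys(bucket_prefix, filenames):
    """Join the bucket prefix with each metadata filename, normalizing slashes."""
    prefix = bucket_prefix.strip("/")
    return [f"{prefix}/{name}" for name in filenames]

keys = s3_keys("atlan/kafka-extracts/", METADATA_FILES)
print(keys)  # e.g. ['atlan/kafka-extracts/topics.json', ...]

# Uploading the files could then use boto3, for example:
#   import boto3
#   s3 = boto3.client("s3", region_name="us-east-1")  # your bucket region
#   for key in keys:
#       s3.upload_file(key.split("/")[-1], "my-bucket", key)
```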

Configure the connection

To complete the Confluent Kafka connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.
  2. (Optional) To change the users who are able to manage this connection, change the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, no one will be able to manage the connection, not even admins.
  3. Navigate to the bottom of the screen and click Next to proceed.

Configure the crawler

Before running the Confluent Kafka crawler, you can further configure it.

On the Metadata page, you can override the defaults for any of these options:

  • For Skip internal topics, keep the default option Yes to skip internal Kafka topics or click No to crawl them.
  • To exclude assets from crawling, enter a regular expression in Exclude topics regex. (If none is specified, no assets are excluded.)
  • To include only specific assets in crawling, enter a regular expression in Include topics regex. (If none is specified, all assets are included.)
💪 Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence.
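The filter semantics described above can be sketched in a few lines. This is an illustration of the stated rules only, not Atlan's actual implementation; in particular, whether patterns match full topic names or substrings is an assumption (full matches are used here):

```python
import re

def should_crawl(topic, include_regex=None, exclude_regex=None):
    """Mimic the crawler filter rules: exclude takes precedence over include.

    With no exclude pattern, no topics are excluded; with no include
    pattern, all topics are included.
    """
    if exclude_regex and re.fullmatch(exclude_regex, topic):
        return False  # exclude filter wins, even if include also matches
    if include_regex:
        return bool(re.fullmatch(include_regex, topic))
    return True  # default: include all assets

print(should_crawl("orders", include_regex="orders|payments"))  # True
print(should_crawl("orders", include_regex="orders", exclude_regex="orders"))  # False
```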

Run the crawler

After completing the steps above, to run the Confluent Kafka crawler:

  • To run the crawler once, immediately, click Run at the bottom of the screen.
  • To schedule the crawler to run hourly, daily, weekly, or monthly, click Schedule & Run at the bottom of the screen.

Once the crawler has completed running, you will see the assets on Atlan's assets page! 🎉
