How to crawl Confluent Kafka

Atlan crawls metadata from your Confluent Kafka cluster, allowing you to discover, classify, and govern your Kafka topics and schemas. This guide walks you through the steps to configure and run the Confluent Kafka crawler in Atlan.

Prerequisites

Before you begin, complete the following prerequisites:

  • Confluent Kafka setup: Ensure you have configured the required Confluent Kafka permissions so that Atlan can establish a connection to Confluent Kafka.
  • Order of operations: Review the order of operations to understand the sequence of tasks for crawling metadata.
  • Access to Atlan workspace: You must have the required permissions in Atlan to create and manage a connection.

Select the source

To select Confluent Kafka as your source:

  1. In the top right of any screen in Atlan, navigate to +New and click New Workflow.
  2. From the Marketplace page, click Confluent Kafka Assets.
  3. In the right panel, click Setup Workflow.

Provide credentials

In Direct extraction, Atlan connects to Confluent Kafka and crawls metadata directly.

In Offline extraction, you first extract the metadata yourself and then make it available in S3.

Direct extraction method

To enter your Confluent Kafka credentials:

  1. For Bootstrap servers, enter the hostname(s) of your Confluent Kafka broker(s). Separate multiple hostnames with a comma (,) or a semicolon (;).
  2. For API Key, enter the API key you copied.
  3. For API Secret, enter the API secret you copied.
  4. For Security protocol, click SASL_PLAINTEXT to connect to Confluent Kafka over a non-encrypted channel, or click SASL_SSL to connect over an encrypted Secure Sockets Layer (SSL) channel.
  5. Click the Test Authentication button to confirm connectivity to Confluent Kafka. (To verify the same credentials outside Atlan, see the sketch after these steps.)
  6. Once authentication is successful, navigate to the bottom of the screen and click Next.
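
If you want to sanity-check the same credentials outside Atlan, a minimal connectivity sketch using the confluent-kafka Python client could look like the following. The bootstrap server, API key, and API secret values are placeholders; substitute the values you entered above.

```python
# Minimal connectivity check with the confluent-kafka Python client.
# All values below are placeholders; use your own cluster details.
from confluent_kafka.admin import AdminClient

conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder host
    "security.protocol": "SASL_SSL",   # or SASL_PLAINTEXT for a non-encrypted channel
    "sasl.mechanisms": "PLAIN",        # Confluent Cloud API keys authenticate via SASL/PLAIN
    "sasl.username": "<API_KEY>",      # the API key you copied
    "sasl.password": "<API_SECRET>",   # the API secret you copied
}

admin = AdminClient(conf)
metadata = admin.list_topics(timeout=10)  # raises KafkaException on connection/auth failure
print(f"Connected; visible topics: {len(metadata.topics)}")
```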

Offline extraction method

Atlan also supports an offline extraction method for fetching metadata from Confluent Kafka. This method uses Atlan's kafka-extractor tool; you first extract the metadata yourself and then make it available in S3.

To enter your S3 details:

  1. For Bucket name, enter the name of your S3 bucket.
  2. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include topics.json, topic-configs.json, and so on. (A quick way to check that these files are in place is sketched after these steps.)
  3. Based on your cloud platform, enter the following details:
    • If using AWS, for Role ARN, enter the ARN of the AWS role to assume. This role ARN will be used to copy the files from S3.
    • If using Microsoft Azure, enter the name of your Azure Storage Account and, for Blob SAS Token, enter the SAS token for that account.
    • If using Google Cloud Platform, no further configuration is required.
  4. When complete, at the bottom of the screen, click Next.
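
Before running the workflow, you may want to confirm that the extracted files are actually present under the prefix. Here is a minimal sketch using boto3 (this assumes AWS; the bucket name and prefix are placeholders):

```python
# Sanity check that the extracted metadata files exist under the S3 prefix.
# "my-metadata-bucket" and "kafka/extracted/" are placeholder values.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="my-metadata-bucket", Prefix="kafka/extracted/")
keys = [obj["Key"] for obj in resp.get("Contents", [])]
print(keys)  # expect entries such as topics.json and topic-configs.json
```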

Configure the connection

To complete the Confluent Kafka connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, use values like production, development, gold, or analytics.
  2. (Optional) To change the users who can manage this connection, update the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, no one will be able to manage the connection — not even admins.
  3. Navigate to the bottom of the screen and click Next to proceed.

Configure the crawler

Before running the Confluent Kafka crawler, you can further configure it.

On the Metadata page, you can override the defaults for any of these options:

  • For Skip internal topics, keep the default option Yes to skip internal Kafka topics (for example, __consumer_offsets), or click No to crawl them.
  • To exclude assets from crawling, click Exclude topics regex and enter a regular expression. If none is specified, no assets are excluded.
  • To include only specific assets in crawling, click Include topics regex and enter a regular expression. If none is specified, all assets are included.
💪 Did you know? If an asset matches both the include and exclude filters, the exclude filter takes precedence, as illustrated in the sketch below.
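
For intuition on how these filters combine, here is a small, self-contained illustration. The topic names and patterns are hypothetical, and Atlan applies its own matching server-side; this only models the precedence rule.

```python
# Hypothetical illustration of include/exclude regex interaction.
import re

topics = ["orders.created", "orders.audit", "payments.settled"]

include = re.compile(r"^orders\..*")  # include everything under orders.*
exclude = re.compile(r".*\.audit$")   # ...but exclude audit topics

# The exclude filter takes precedence over the include filter.
crawled = [t for t in topics if include.match(t) and not exclude.match(t)]
print(crawled)  # -> ['orders.created']
```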

Run the crawler

To run the Confluent Kafka crawler:

  • To run the crawler immediately (a one-time run), click the Run button at the bottom of the screen.
  • To schedule the crawler to run hourly, daily, weekly, or monthly, click the Schedule & Run button at the bottom of the screen.

Once the crawl completes, your assets appear in Atlan! 🎉
