How to crawl Apache Kafka

Atlan crawls metadata from your Apache Kafka cluster, allowing you to discover, classify, and govern your Kafka topics and schemas. This guide walks you through the steps to configure and run the Apache Kafka crawler in Atlan.

Prerequisites

Before you begin, complete the following prerequisites:

  • Apache Kafka setup: Ensure you have configured the required Apache Kafka permissions so that Atlan can establish a connection between Atlan and Apache Kafka.
  • Order of operations: Review the order of operations to understand the sequence of tasks for crawling metadata.
  • Access to Atlan workspace: You must have the required permissions in Atlan to create and manage a connection.

Select the source

To select Apache Kafka as your source:

  1. In Atlan, click New, and from the menu, select New Workflow.
  2. From the Marketplace page, click Apache Kafka Assets.
  3. Click Setup Workflow in the right panel to proceed with configuration.

Provide credentials

Direct extraction method

To enter your Apache Kafka credentials:

  1. For Bootstrap servers, enter the hostname(s) of your Apache Kafka broker(s), for example, broker-1.example.com:9092. For multiple hostnames, separate each entry with a comma (,) or semicolon (;).
  2. For Authentication, Atlan provides the following authentication methods:
    • No Authentication: If your Apache Kafka cluster does not require authentication, Atlan can connect without any credentials.
    • Basic Authentication (SASL/PLAIN): Uses a username and password with the SASL PLAIN mechanism for authentication.
    • SCRAM Authentication (SASL/SCRAM): Uses a username and password with a SASL SCRAM mechanism (SCRAM-SHA-256 or SCRAM-SHA-512) for secure authentication.
    If you select Basic or SCRAM authentication, also provide the following:
    • For Username, enter the username for your Apache Kafka brokers.
    • For Password, enter the password for the username.
  3. For Security protocol, select Plaintext or SSL for No Authentication, or SASL_PLAINTEXT or SASL_SSL for Basic and SCRAM authentication.
  4. For SASL Mechanism (SCRAM authentication only), select SCRAM-SHA-256 or SCRAM-SHA-512 to match your Kafka cluster's configuration.
  5. Click Test Authentication to confirm connectivity. (For an equivalent check with a standalone Kafka client, see the sketch after these steps.)
  6. Once authentication is successful, click Next.
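
If the authentication test fails, it can help to verify the same settings outside Atlan with a standalone client. Below is a minimal sketch using the open-source kafka-python package (not a tool Atlan requires); the broker addresses, username, and password are placeholders you should replace with your own values.

# Independent sanity check of the same credentials Atlan will use.
# Assumes: pip install kafka-python; placeholder hosts and credentials.
from kafka import KafkaAdminClient

admin = KafkaAdminClient(
    bootstrap_servers=["broker-1.example.com:9092", "broker-2.example.com:9092"],
    security_protocol="SASL_SSL",       # or SASL_PLAINTEXT, SSL, PLAINTEXT
    sasl_mechanism="SCRAM-SHA-512",     # use PLAIN for basic authentication
    sasl_plain_username="atlan-crawler",
    sasl_plain_password="your-password",
)
print(admin.list_topics())              # succeeds only if authentication works
admin.close()

If this listing succeeds but Atlan's test fails, the problem is more likely network reachability from Atlan than the credentials themselves.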

Offline extraction method

Atlan also supports an offline extraction method for fetching metadata from Apache Kafka. This method uses Atlan's kafka-extractor tool to extract the metadata. You will first need to run the extraction yourself and then make the output files available in S3.
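
For example, once the extractor has produced files such as topics.json and topic-configs.json, a minimal boto3 sketch for staging them in S3 could look like the following; the bucket name and prefix are placeholders, and this assumes AWS credentials are already configured locally.

# Hypothetical upload of extracted metadata files to S3.
# Assumes: pip install boto3; placeholder bucket name and prefix.
import boto3

s3 = boto3.client("s3")
bucket = "my-metadata-bucket"      # your S3 bucket name
prefix = "kafka/extracts"          # the bucket prefix you enter in Atlan

# Place every extracted metadata file under the shared prefix.
for filename in ["topics.json", "topic-configs.json"]:
    s3.upload_file(filename, bucket, f"{prefix}/{filename}")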

To enter your S3 details:

  1. For Bucket name, enter the name of your S3 bucket.
  2. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include topics.json, topic-configs.json, and so on.
  3. Based on your cloud platform, enter the following details:
    • If using AWS, for Role ARN, enter the ARN of the AWS role to assume. This role ARN will be used to copy the files from S3.
    • If using Microsoft Azure, enter the name of your Azure storage account, and for Blob SAS Token, enter the SAS token.
    • If using Google Cloud Platform, no further configuration is required.
  4. When complete, at the bottom of the screen, click Next.

Configure the connection

To complete the Apache Kafka connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.
  2. (Optional) To change the users who are able to manage this connection, update the users or groups listed under Connection Admins.
    🚨 Careful! If you do not specify any user or group, no one will be able to manage the connection — not even admins.
  3. Navigate to the bottom of the screen and click Next to proceed.

Configure the crawler

Before running the Apache Kafka crawler, you can further configure it.

On the Metadata page, you can override the defaults for any of these options:

  • For Skip internal topics, keep the default option Yes to skip internal Apache Kafka topics (such as __consumer_offsets) or click No to crawl them.
  • To exclude specific Apache Kafka assets from crawling, click Exclude topics regex and enter a regular expression matching the topics to skip. If no pattern is specified, no assets are excluded.
  • To include only specific Apache Kafka assets in crawling, click Include topics regex and enter a regular expression matching the topics to crawl. If no pattern is specified, all assets are included.
💪 Did you know? If an asset appears in both the include and exclude filters, the exclude filter takes precedence, as the sketch below illustrates.
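
The following Python sketch illustrates that precedence logic. It is an illustration only, not Atlan's actual implementation, and it assumes patterns are matched against full topic names.

# Illustration of include/exclude precedence for topic filters.
import re

def should_crawl(topic, include_regex=None, exclude_regex=None):
    # The exclude filter takes precedence over the include filter.
    if exclude_regex and re.fullmatch(exclude_regex, topic):
        return False
    # With no include pattern, every non-excluded topic is crawled.
    if include_regex:
        return re.fullmatch(include_regex, topic) is not None
    return True

print(should_crawl("orders.v1", include_regex=r"orders\..*"))   # True
print(should_crawl("orders.v1", include_regex=r"orders\..*",
                   exclude_regex=r".*\.v1"))                    # False: exclude wins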

Run the crawler

After completing the steps above, to run the Apache Kafka crawler:

  1. Click Preflight checks to verify configuration.
  2. Choose one of the following options:
    • To run the crawler once immediately, click Run.
    • To schedule the crawler, click Schedule & Run.

Once the crawler has completed running, you will see the assets on Atlan's asset page! 🎉
