How to crawl on-premises Kafka

Once you have set up the kafka-extractor tool, you can extract metadata from your on-premises Kafka instances by completing the following steps.

Run kafka-extractor

Crawl all Kafka connections

To crawl all Kafka connections using the kafka-extractor tool:

  1. Log in to the server where Docker Compose is installed.
  2. Change to the directory containing the compose file.
  3. Run Docker Compose: sudo docker-compose up
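
For example, assuming the compose file lives in /opt/kafka-extractor (a hypothetical path; use the directory where you placed your compose file), the full sequence might look like:

# Hypothetical directory; replace with the location of your compose file.
cd /opt/kafka-extractor
# Crawl every connection defined in the compose file.
sudo docker-compose up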

Crawl a specific connection

To crawl a specific Kafka connection using the kafka-extractor tool:

  1. Log in to the server where Docker Compose is installed.
  2. Change to the directory containing the compose file.
  3. Run Docker Compose: sudo docker-compose up <connection-name>

(Replace <connection-name> with the name of the connection as defined in the services section of the compose file.)
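
For example, assuming the compose file defines a service named kafka-example (the same hypothetical name used in the S3 example below), you could run:

# List the service names defined in the compose file.
sudo docker-compose config --services
# Crawl only the kafka-example connection.
sudo docker-compose up kafka-example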

(Optional) Review generated files

The kafka-extractor tool generates a set of folders containing JSON files for each service. For example:

  • topics
  • topic-configs
  • consumer-groups
  • consumer-groups-members
  • and many others

You can inspect these files to verify that the extracted metadata is acceptable before providing it to Atlan.
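
For example, to spot-check the output from the command line (assuming the files were written to a local output/kafka-example folder, as in the upload example below, and that the jq utility is available):

# List the generated folders.
ls output/kafka-example
# Pretty-print one of the generated JSON files (hypothetical file name).
jq . output/kafka-example/topics/example-topic.json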

Upload generated files to S3

To provide Atlan access to the extracted metadata, you will need to upload the metadata to an S3 bucket. 

To upload the metadata to S3:

  1. Ensure that all files for a particular connection have the same prefix.
  2. Upload the files to the S3 bucket using your preferred method. Include all files from the output folder generated after running Docker Compose.

For example, to upload all files using the AWS CLI:

aws s3 cp output/kafka-example s3://my-bucket/metadata/kafka-example --recursive
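
To confirm that the files landed under the expected prefix, you can list the uploaded objects:

aws s3 ls s3://my-bucket/metadata/kafka-example --recursive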

Crawl metadata in Atlan

Once you have extracted metadata on-premises and uploaded the results to S3, you can run the Kafka crawler in Atlan to ingest the metadata.

Be sure to select S3 as the Extraction method when setting up the crawler.
