Author : MD TAREQ HASSAN | Updated : 2021/06/29

Prerequisites

In case you want to delete an already running Kafka cluster and start fresh:
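A minimal sketch, assuming the cluster resources were created from the files used below (`strimzi-kafka-deployment.yaml`, `my-topic.yaml`) in the `kafka` namespace; adjust names to your setup:

```shell
# delete the topic and the Kafka cluster custom resources
kubectl -n kafka delete -f my-topic.yaml
kubectl -n kafka delete -f strimzi-kafka-deployment.yaml

# or, for a completely fresh start, delete the whole namespace
kubectl delete namespace kafka
```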

Max Message Size

Set the limit at the broker level or at the topic level (do either one; topic level is preferred).

Broker configuration: strimzi-kafka-deployment.yaml

# ... ... ...

  config:
    message.max.bytes: n # e.g. 1024 * 1024 * 10 (10 MB), maximum message size allowed by the broker
    replica.fetch.max.bytes: n # e.g. 1024 * 1024 * 10 (10 MB), for replication inside the cluster

# ... ... ...
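To apply the broker-level change (the Strimzi cluster operator detects the change and rolls the brokers automatically):

```shell
kubectl -n kafka apply -f strimzi-kafka-deployment.yaml

# watch the operator restart the broker pods one by one
kubectl -n kafka get pods -w
```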

Topic configuration: my-topic.yaml

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 2
  replicas: 2
  config:
    max.message.bytes: n # e.g. 1024 * 1024 * 10 (10 MB)
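To apply the topic and verify that the topic operator picked up the config:

```shell
kubectl -n kafka apply -f my-topic.yaml

# inspect the resulting KafkaTopic, including spec.config
kubectl -n kafka get kafkatopic my-topic -o yaml
```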

Producer configuration

Program.cs

// ... ... ...

// note: librdkafka (used by the Confluent .NET client) calls the producer-side limit
// "message.max.bytes"; it is the equivalent of the Java client's "max.request.size"
const string KEY_MESSAGE_MAX_BYTES = "message.max.bytes";

var configProperties = new Dictionary<string, string>
{
	[KEY_MESSAGE_MAX_BYTES] = (1024 * 1024 * 10).ToString(), // 10 MB per message; config values are strings
};

var producerConfig = new ProducerConfig(configProperties);

// ... ... ...
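A minimal end-to-end producer sketch, assuming the Confluent.Kafka NuGet package; the bootstrap address and port are illustrative, and the strongly typed `MessageMaxBytes` property is equivalent to the dictionary key above:

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class LargeMessageProducer
{
    static async Task Main()
    {
        var producerConfig = new ProducerConfig
        {
            BootstrapServers = "10.10.0.75:9094", // illustrative external bootstrap address
            MessageMaxBytes = 1024 * 1024 * 10,   // 10 MB, same setting as "message.max.bytes"
        };

        using var producer = new ProducerBuilder<Null, string>(producerConfig).Build();

        // an illustrative ~9 MB payload (must stay under the 10 MB limit including overhead)
        var payload = new string('x', 1024 * 1024 * 9);
        var result = await producer.ProduceAsync(
            "my-topic", new Message<Null, string> { Value = payload });
        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}
```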

Consumer configuration

Program.cs

// ... ... ...

var brokerList = "10.10.0.75"; // external bootstrap (Strimzi kafka running in AKS)

var consumerConfig = new ConsumerConfig
{
	BootstrapServers = brokerList,
	GroupId = "my-topic-group", 
	MaxPartitionFetchBytes = 1024 * 1024 * 10,
	FetchMaxBytes = 1024 * 1024 * 10, // * n (if required)
	
	// ... ... ...
};

// ... ... ...
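A minimal consume loop to go with the config above (illustrative sketch, assuming the Confluent.Kafka package and the `consumerConfig` built earlier):

```csharp
using System;
using Confluent.Kafka;

// consumerConfig built as shown above
using var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build();
consumer.Subscribe("my-topic");

try
{
    while (true)
    {
        var cr = consumer.Consume(TimeSpan.FromSeconds(1));
        if (cr == null) continue; // no message within the timeout

        Console.WriteLine($"Received {cr.Message.Value.Length} bytes at {cr.TopicPartitionOffset}");
    }
}
finally
{
    consumer.Close(); // commit final offsets and leave the group cleanly
}
```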

See: https://stackoverflow.com/a/67721976/4802664

Network buffer size

# ... ... ...

  config:
    replica.socket.receive.buffer.bytes: 65536
    socket.request.max.bytes: 104857600
    # default values of the buffers for sending and receiving messages might be too small for the required throughput
    socket.send.buffer.bytes: 1048576
    socket.receive.buffer.bytes: 1048576
		
# ... ... ...
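To double-check what the broker actually picked up, the effective config can be inspected from inside a broker pod (the pod name assumes a cluster called `my-cluster`):

```shell
kubectl -n kafka exec -it my-cluster-kafka-0 -- \
  bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 --describe --all
```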

See: https://strimzi.io/blog/2021/06/08/broker-tuning/