Kafka Notes

Kafka cluster (3 nodes) + ZooKeeper cluster (3 nodes) integrated into the project; performance under observation.

Environment:

1. OS: Windows 7 Ultimate, 64-bit, 16 GB RAM
2. The three Kafka broker nodes are installed on three separate Win7 machines
3. The ZooKeeper cluster is a pseudo-cluster: all three nodes run on one Win7 machine

Kafka config file:

server.properties

# ID of this broker; every broker in the cluster must have a distinct ID

broker.id=0

# Listener; the port here should match `port`

listeners=PLAINTEXT://:9092

# Broker port

port=9092

# Broker hostname; use the host's IP address

host.name=192.168.1.102

# Hostname and port advertised to producers and consumers for connecting

advertised.host.name=192.168.1.102

advertised.port=9092

# The number of threads handling network requests

num.network.threads=8

# The number of threads doing disk I/O; should be at least the number of data disks

num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server

socket.send.buffer.bytes=1048576

# The receive buffer (SO_RCVBUF) used by the socket server

socket.receive.buffer.bytes=1048576

# The maximum size of a request that the socket server will accept (protection against OOM)

socket.request.max.bytes=104857600

queued.max.requests=16

fetch.purgatory.purge.interval.requests=100

producer.purgatory.purge.interval.requests=100

# Automatically rebalance partition leadership

auto.leader.rebalance.enable=true

# Replication configurations

num.replica.fetchers=4

replica.fetch.max.bytes=1048576

replica.fetch.wait.max.ms=500

replica.high.watermark.checkpoint.interval.ms=5000

replica.socket.timeout.ms=30000

replica.socket.receive.buffer.bytes=65536

replica.lag.time.max.ms=10000

############################# Log Basics #############################

# Directory where the message log files are stored

log.dirs=D:/kafka_2.11-0.9.0.0/logs

# Default number of partitions per topic; partitions are usually specified at topic creation, so 1 is fine here

num.partitions=1

message.max.bytes=1000000

auto.create.topics.enable=true

log.index.interval.bytes=4096

log.index.size.max.bytes=10485760

log.flush.scheduler.interval.ms=2000

log.roll.hours=168

controller.socket.timeout.ms=30000

controller.message.queue.size=10

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.

# This value is recommended to be increased for installations with data dirs located in RAID array.

num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# The number of messages to accept before forcing a flush of data to disk

log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush

log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# Message retention: delete records older than this many minutes (note: minutes, not hours)

log.retention.minutes=2

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining

# segments don't drop below log.retention.bytes.

#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according 

# to the retention policies

log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.

# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.

log.cleaner.enable=false
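One interaction worth noting in the retention settings above: with `log.retention.minutes=2` but `log.retention.check.interval.ms=300000`, the retention checker only runs every 5 minutes, so an expired segment can outlive its nominal retention by up to one check interval (and since retention deletes whole segments, the active segment is further bounded by `log.segment.bytes` / `log.roll.hours`). A quick back-of-envelope sketch:

```python
# Worst-case lag before an expired log segment is actually deleted: the
# retention checker runs once per check interval, so a segment expiring just
# after a check survives until the next run.
retention_ms = 2 * 60 * 1000        # log.retention.minutes=2
check_interval_ms = 300_000         # log.retention.check.interval.ms

worst_case_minutes = (retention_ms + check_interval_ms) / 60_000
print(worst_case_minutes)  # 7.0
```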

############################# Zookeeper #############################

# ZooKeeper connection string: the IP:port of the three zk nodes installed in the previous section

zookeeper.connect=192.168.1.102:2181,192.168.1.102:2182,192.168.1.102:2183

#zookeeper.connect=192.168.1.102:2181

# Timeout in ms for connecting to zookeeper

zookeeper.connection.timeout.ms=6000

zookeeper.sync.time.ms=2000
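Since each of the three brokers needs its own `server.properties` differing at least in `broker.id` (and `host.name`), a small sanity check can catch copy-paste mistakes before startup. This is a sketch; the inline config strings below stand in for the real files on each node:

```python
# Sketch: verify that broker.id values across the three nodes' server.properties
# are unique, since duplicate IDs prevent brokers from joining the cluster.

def parse_properties(text):
    """Parse a Java-style .properties string into a dict, skipping comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# In practice these would be read from each node's config file, e.g.
# open("D:/kafka_2.11-0.9.0.0/config/server.properties").read()
node_configs = [
    "broker.id=0\nport=9092\nhost.name=192.168.1.102",
    "broker.id=1\nport=9092\nhost.name=192.168.1.103",
    "broker.id=2\nport=9092\nhost.name=192.168.1.104",
]

broker_ids = [parse_properties(cfg)["broker.id"] for cfg in node_configs]
assert len(broker_ids) == len(set(broker_ids)), "duplicate broker.id found"
```

(The hostnames for nodes 1 and 2 are placeholders; only .102 appears in the notes above.)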

ZooKeeper config file:

zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=D:/software/zookeeper-2181/data

dataLogDir=D:/software/zookeeper-2181/logs

clientPort=2181

server.0=192.168.1.102:2777:3777

server.1=192.168.1.102:2888:3888

server.2=192.168.1.102:2999:3999
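For the `server.N` lines above to take effect, each pseudo-cluster instance's `dataDir` must also contain a `myid` file whose content is that instance's N. A small sketch to generate them (the directory names mirror the zoo.cfg paths, but a temporary directory stands in for `D:/software/...` here):

```python
import os
import tempfile

def write_myid(server_id, data_dir):
    """Create dataDir/myid containing the ID that matches server.N in zoo.cfg."""
    os.makedirs(data_dir, exist_ok=True)
    path = os.path.join(data_dir, "myid")
    with open(path, "w") as f:
        f.write(str(server_id))
    return path

# The real dataDirs would be D:/software/zookeeper-2181/data, -2182/data,
# -2183/data; a temp directory is used for illustration.
base = tempfile.mkdtemp()
myid_paths = [
    write_myid(n, os.path.join(base, "zookeeper-%d" % (2181 + n), "data"))
    for n in range(3)
]
contents = [open(p).read() for p in myid_paths]
```

Each of the three zoo.cfg copies also needs its own `clientPort` (2181/2182/2183) and `dataDir`, while the `server.N` lines stay identical across all three.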