This article walks through Kafka 1.0.0 code examples in detail: a simple producer and a simple consumer written against the Java client API. The quality of the examples is reasonably high, so they are shared here as a reference; after reading, you should have a working understanding of the basics.
KafkaProducerDemo writes 100 string messages to the topic test7:

package kafka.demo;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProducerDemo {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "bigdata01:9092,bigdata02:9092,bigdata03:9092");
        // Wait for the partition leader to acknowledge each write
        props.put("acks", "1");
        // Use the default partitioning strategy (hash of the key; round-robin when the key is null)
        props.put("partitioner.class", "org.apache.kafka.clients.producer.internals.DefaultPartitioner");
        // Serializers for the record key and value
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        String topic = "test7";
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 1; i <= 100; i++) {
            String line = i + " this is a test";
            ProducerRecord<String, String> record = new ProducerRecord<>(topic, line);
            producer.send(record);
        }
        producer.close();
    }
}

KafkaConsumerDemo subscribes to test7 as part of the consumer group group_test7 and prints each record together with the partition it came from:

package kafka.demo;

import java.util.Arrays;
import java.util.Properties;
import java.util.Random;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "bigdata01:9092,bigdata02:9092,bigdata03:9092");
        props.put("group.id", "group_test7");
        // Deserializers for the record key and value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Commit offsets automatically...
        props.put("enable.auto.commit", "true");
        // ...every 2 seconds (the property is auto.commit.interval.ms, not "intervals")
        props.put("auto.commit.interval.ms", "2000");
        // Where to start when the group has no committed offset, or the committed
        // offset is out of range: "latest", "earliest" or "none" (the default is "latest")
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test7"));
        Random random = new Random();
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("partition:" + record.partition() + " " + record.value());
                }
                // Occasionally interrupt the loop to demonstrate wakeup():
                // the next poll() then throws WakeupException. In real code,
                // wakeup() is normally called from a different thread to shut
                // the consumer down safely.
                if (random.nextInt(10) > 5) {
                    consumer.wakeup();
                }
            }
        } catch (WakeupException e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}
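The consumer above relies on auto-commit, which can re-deliver records that were polled but not yet fully processed if the consumer dies between two commits. Below is a minimal sketch of a manual-commit variant, assuming the same brokers, topic, and group.id as the demo; the class name ManualCommitConsumerDemo is ours, not from the original example.

package kafka.demo;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "bigdata01:9092,bigdata02:9092,bigdata03:9092");
        props.put("group.id", "group_test7");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Take offset management away from the client's background auto-commit
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test7"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("partition:" + record.partition() + " " + record.value());
                }
                // Commit only after the whole batch has been processed,
                // so a crash replays at most one batch of records
                consumer.commitSync();
            }
        } finally {
            consumer.close();
        }
    }
}

commitSync() blocks until the broker acknowledges the offsets; commitAsync() is the non-blocking alternative when a small window of duplicate delivery is acceptable.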
That covers the Kafka 1.0.0 producer and consumer code examples. Hopefully the walkthrough above is a helpful reference and a starting point for learning more. If you found the article useful, feel free to share it so more people can see it.