Found the answer to my own question.
Serialization

The snippet below illustrates how to serialize an instance of Foo to Thrift-compatible bytes (using the Thrift Compact protocol). To use the Binary protocol instead, replace TCompactProtocol with TBinaryProtocol.
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TCompactProtocol.h>
using apache::thrift::protocol::TCompactProtocol;
using apache::thrift::transport::TMemoryBuffer;
...
...
boost::shared_ptr<TMemoryBuffer> buffer(new TMemoryBuffer());
boost::shared_ptr<TCompactProtocol> protocol(new TCompactProtocol(buffer));
uint8_t *serialized_bytes = nullptr;
uint32_t num_bytes = 0;
// 'foo' is an instance of Foo; write it into the memory buffer
foo->write(protocol.get());
// getBuffer() points serialized_bytes at the buffer's internal storage
buffer->getBuffer(&serialized_bytes, &num_bytes);
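For reference, the Binary-protocol variant mentioned above is a one-type swap (a sketch; Foo is assumed to be the Thrift-generated class from your IDL):

```cpp
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TBinaryProtocol.h>

using apache::thrift::protocol::TBinaryProtocol;
using apache::thrift::transport::TMemoryBuffer;

// Same flow as above; only the protocol type changes
boost::shared_ptr<TMemoryBuffer> buffer(new TMemoryBuffer());
boost::shared_ptr<TBinaryProtocol> protocol(new TBinaryProtocol(buffer));

uint8_t *serialized_bytes = nullptr;
uint32_t num_bytes = 0;
foo->write(protocol.get());  // 'foo' is an instance of the generated Foo
buffer->getBuffer(&serialized_bytes, &num_bytes);
```

The transport and buffer handling are identical for both protocols; only the wire encoding of the bytes differs, so the producer and consumer must agree on which protocol to use.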
Sending to the Kafka cluster

The snippet below illustrates how to send the Thrift-compatible bytes to a Kafka cluster.

NOTE: The Kafka client library used below is librdkafka (https://github.com/edenhill/librdkafka).
#include "rdkafkacpp.h"
std::string errstr;
// Create global configuration
RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
conf->set("metadata.broker.list", "localhost:9092", errstr);
conf->set("api.version.request", "true", errstr);
// Create kafka producer
RdKafka::Producer *producer = RdKafka::Producer::create(conf, errstr);
// Create topic-specific configuration
RdKafka::Topic *topic = RdKafka::Topic::create(producer, "topic_name", nullptr, errstr);
auto partition = 1;  // produce to partition 1 (use RdKafka::Topic::PARTITION_UA for automatic partitioning)
// Sending the serialized bytes to Kafka cluster
auto res = producer->produce(
topic, partition,
RdKafka::Producer::RK_MSG_COPY /* Copy payload */,
serialized_bytes, num_bytes,
NULL, NULL);
if (res != RdKafka::ERR_NO_ERROR) {
std::cerr << "Failed to publish message: " << RdKafka::err2str(res) << std::endl;
} else {
std::cout << "Published message of " << num_bytes << " bytes" << std::endl;
}
// Wait up to 10 seconds for outstanding messages to be delivered
producer->flush(10000);
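Note that produce() is asynchronous: a success return only means the message was enqueued locally. To confirm actual delivery to the broker, a delivery-report callback can be registered on the global configuration before creating the producer (a sketch; the class name is illustrative):

```cpp
#include <iostream>
#include "rdkafkacpp.h"

// Invoked once per message with the final delivery status
class DeliveryReport : public RdKafka::DeliveryReportCb {
 public:
  void dr_cb(RdKafka::Message &message) override {
    if (message.err() != RdKafka::ERR_NO_ERROR) {
      std::cerr << "Delivery failed: " << message.errstr() << std::endl;
    } else {
      std::cout << "Delivered " << message.len() << " bytes to partition "
                << message.partition() << std::endl;
    }
  }
};

// Registration (before RdKafka::Producer::create):
//   DeliveryReport dr;
//   conf->set("dr_cb", &dr, errstr);
//
// Callbacks fire from producer->poll(0), which should be called
// periodically; flush() also serves delivery reports while it waits.
```

Without polling (or a final flush()), delivery reports are never dispatched and enqueued messages may be silently lost on shutdown.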