## Schema evolution in Avro, Protocol Buffers and Thrift

http://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html

So you have some data that you want to store in a file or send over the network. You may find yourself going through several phases of evolution:

1. Using your programming language’s built-in serialization, such as Java serialization, Ruby’s marshal, or Python’s pickle. Or maybe you even invent your own format.
2. Then you realise that being locked into one programming language sucks, so you move to using a widely supported, language-agnostic format like JSON (or XML if you like to party like it’s 1999).
3. Then you decide that JSON is too verbose and too slow to parse, you’re annoyed that it doesn’t differentiate integers from floating point, and think that you’d quite like binary strings as well as Unicode strings. So you invent some sort of binary format that’s kinda like JSON, but binary (1, 2, 3, 4, 5, 6).
4. Then you find that people are stuffing all sorts of random fields into their objects, using inconsistent types, and you’d quite like a schema and some documentation, thank you very much. Perhaps you’re also using a statically typed programming language and want to generate model classes from a schema. Also you realize that your binary JSON-lookalike actually isn’t all that compact, because you’re still storing field names over and over again; hey, if you had a schema, you could avoid storing objects’ field names, and you could save some more bytes!

Once you get to the fourth stage, your options are typically Thrift, Protocol Buffers or Avro. All three provide efficient, cross-language serialization of data using a schema, and code generation for the Java folks.

In real life, data is always in flux. The moment you think you have finalised a schema, someone will come up with a use case that wasn’t anticipated, and wants to “just quickly add a field”. Fortunately Thrift, Protobuf and Avro all support schema evolution: you can change the schema, you can have producers and consumers with different versions of the schema at the same time, and it all continues to work. That is an extremely valuable feature when you’re dealing with a big production system, because it allows you to update different components of the system independently, at different times, without worrying about compatibility.

The example I will use is a little object describing a person. In JSON I would write it like this:

```json
{
    "userName": "Martin",
    "favouriteNumber": 1337,
    "interests": ["daydreaming", "hacking"]
}
```


This JSON encoding can be our baseline. If I remove all the whitespace it consumes 82 bytes.
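As a quick sanity check, a few lines of Python reproduce that figure (the record includes the `"userName": "Martin"` field from the full example in the article):

```python
import json

# The full person record from the article.
record = {
    "userName": "Martin",
    "favouriteNumber": 1337,
    "interests": ["daydreaming", "hacking"],
}

# separators=(",", ":") strips the whitespace json.dumps inserts by default.
compact = json.dumps(record, separators=(",", ":"))
print(len(compact.encode("utf-8")))  # 82
```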

#### Protocol Buffers

The Protocol Buffers schema for the person object might look something like this:

```protobuf
message Person {
    required string user_name        = 1;
    optional int64  favourite_number = 2;
    repeated string interests        = 3;
}
```

Protocol Buffers describes the schema in an IDL. Each field carries a unique tag that identifies it on the wire, so `= 1`, `= 2`, `= 3` are not assignments but tag declarations. Every field is also labelled `optional`, `required`, or `repeated`.

When we encode the data above using this schema, it uses only 33 bytes.
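To see where the 33 bytes come from, here is a hand-rolled sketch of the Protobuf wire format for our record (a simplified illustration, not the real protobuf library):

```python
def varint(n):
    # Protobuf's base-128 varint: 7 bits per byte, MSB set on all but the last.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)
        if not n:
            return bytes(out)

def key(tag, wire_type):
    # A field's key: the tag number shifted left 3 bits, ORed with the wire type.
    return varint((tag << 3) | wire_type)

def string_field(tag, s):
    data = s.encode("utf-8")
    return key(tag, 2) + varint(len(data)) + data  # wire type 2 = length-delimited

def int_field(tag, n):
    return key(tag, 0) + varint(n)                 # wire type 0 = varint

record = (string_field(1, "Martin")           # user_name = 1
          + int_field(2, 1337)                # favourite_number = 2
          + string_field(3, "daydreaming")    # interests = 3 (repeated:
          + string_field(3, "hacking"))       #   one key per element)

print(len(record))  # 33 bytes, matching the article
```

Because only the tag number appears on the wire, a parser that doesn't know a tag can skip the value (the wire type tells it how many bytes to skip) — which is exactly what makes adding fields safe.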

To add a field later, you just give it a fresh, previously unused tag number, and nothing breaks.

#### Thrift

Thrift is a much bigger project than Avro or Protocol Buffers, as it’s not just a data serialization library, but also an entire RPC framework.
It also has a somewhat different culture: whereas Avro and Protobuf standardize a single binary encoding, Thrift embraces a whole variety of different serialization formats (which it calls “protocols”).


Thrift IDL looks a lot like the Protobuf schema. The differences: field tags are written `1:` rather than `= 1`, and lists use an explicit `list<string>` type instead of `repeated`. All the encodings share the same schema definition, in Thrift IDL:

```thrift
struct Person {
  1: string userName,
  2: optional i64 favouriteNumber,
  3: list<string> interests
}
```

The BinaryProtocol encoding is very straightforward, but also fairly wasteful (it takes 59 bytes to encode our example record).
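To illustrate why it is wasteful, here is a hand-rolled sketch of the BinaryProtocol layout for our record (the type codes are Thrift's standard TType values; this is a simplified illustration, not the real Thrift library):

```python
import struct

# Each field: a 1-byte type code, a 2-byte big-endian field id, then the value.
# Strings carry a 4-byte length prefix; a struct ends with a stop byte (0x00).
T_STOP, T_I64, T_STRING, T_LIST = 0, 10, 11, 15

def field_header(ftype, fid):
    return struct.pack(">bh", ftype, fid)

def tstring(s):
    data = s.encode("utf-8")
    return struct.pack(">i", len(data)) + data

rec = (field_header(T_STRING, 1) + tstring("Martin")
       + field_header(T_I64, 2) + struct.pack(">q", 1337)
       + field_header(T_LIST, 3)
       + struct.pack(">bi", T_STRING, 2)   # element type + element count
       + tstring("daydreaming") + tstring("hacking")
       + bytes([T_STOP]))

print(len(rec))  # 59 bytes, matching the article
```

Full-width 4-byte string lengths and a fixed 8-byte i64 are where the extra bytes go; CompactProtocol replaces these with varints.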

The CompactProtocol encoding is semantically equivalent, but uses variable-length integers and bit packing to reduce the size to 34 bytes.

#### Avro

Avro schemas can be written in two ways, either in a JSON format:

```json
{
    "type": "record",
    "name": "Person",
    "fields": [
        {"name": "userName",        "type": "string"},
        {"name": "favouriteNumber", "type": ["null", "long"]},
        {"name": "interests",       "type": {"type": "array", "items": "string"}}
    ]
}
```


…or in an IDL:

```avdl
record Person {
    string               userName;
    union { null, long } favouriteNumber;
    array<string>        interests;
}
```

Notice that there are no tag numbers in the schema! So how does it work?

Here is the same example data encoded in just 32 bytes.
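With no tags and no per-field type information, the Avro encoding is just the values back-to-back, in schema order. A hand-rolled sketch (illustrative, not the real Avro library) reproduces the 32 bytes:

```python
def zigzag_varint(n):
    # Avro encodes all integers as zigzag-mapped base-128 varints.
    n = (n << 1) ^ (n >> 63)  # zigzag: small negatives stay small
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)
        if not n:
            return bytes(out)

def avro_string(s):
    data = s.encode("utf-8")
    return zigzag_varint(len(data)) + data

rec = (avro_string("Martin")                     # userName: no tag, no type byte
       + zigzag_varint(1) + zigzag_varint(1337)  # union branch 1 ("long"), then the value
       + zigzag_varint(2)                        # array block: 2 items follow
       + avro_string("daydreaming") + avro_string("hacking")
       + zigzag_varint(0))                       # end of array blocks

print(len(rec))  # 32 bytes, matching the article
```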

Avro is the newest of the three and so far has relatively few users, mainly in the Hadoop ecosystem. Its design is quite distinctive compared with Thrift and Protocol Buffers:

2. There are no field tags, so fields are identified by name alone. Avro does support renaming a field, but all readers must be told about the new name first:

Because fields are matched by name, changing the name of a field is tricky. You need to first update all readers of the data to use the new field name, while keeping the old name as an alias (since the name matching uses aliases from the reader’s schema). Then you can update the writer’s schema to use the new field name.

3. Data is decoded by reading fields in the exact order the schema defines them, so an optional field needs special handling, e.g. the `union { null, long }` in the example:

if you want to be able to leave out a value, you can use a union type, like union { null, long } above. This is encoded as a byte to tell the parser which of the possible union types to use, followed by the value itself. By making a union with the null type (which is simply encoded as zero bytes) you can make a field optional.

4. The schema can be written and shipped as plain JSON, whereas Thrift and PB consume the schema only by compiling the IDL into generated code. Avro can therefore power generic clients and servers: when the schema changes, you only update the JSON, with no recompilation.

5. The writer's schema and the reader's schema need not match exactly; the Avro parser uses resolution rules to translate data between them:

So how does Avro support schema evolution?
Well, although you need to know the exact schema with which the data was written (the writer’s schema), that doesn’t have to be the same as the schema the consumer is expecting (the reader’s schema). You can actually give two different schemas to the Avro parser, and it uses resolution rules to translate data from the writer schema into the reader schema.

6. Adding and removing fields is straightforward:

You can add a field to a record, provided that you also give it a default value (e.g. null if the field’s type is a union with null). The default is necessary so that when a reader using the new schema parses a record written with the old schema (and hence lacking the field), it can fill in the default instead.

Conversely, you can remove a field from a record, provided that it previously had a default value. (This is a good reason to give all your fields default values if possible.) This is so that when a reader using the old schema parses a record written with the new schema, it can fall back to the default.
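A toy illustration of these resolution rules (the `resolve` helper is hypothetical, not Avro's real API): fields are matched by name, fields unknown to the reader are dropped, and fields missing from the writer's data fall back to the reader's declared default:

```python
def resolve(record, writer_fields, reader_fields):
    # writer_fields: field names present in the writer's schema.
    # reader_fields: the reader's schema, each field optionally with a default.
    out = {}
    for f in reader_fields:
        name = f["name"]
        if name in writer_fields:
            out[name] = record[name]      # field known to both sides
        else:
            out[name] = f["default"]      # reader added it; must supply a default
    return out

writer_fields = ["userName", "favouriteNumber"]
reader_fields = [{"name": "userName"},
                 {"name": "interests", "default": []}]  # new field, with default

old_record = {"userName": "Martin", "favouriteNumber": 1337}
print(resolve(old_record, writer_fields, reader_fields))
# {'userName': 'Martin', 'interests': []}
```

Note that `favouriteNumber` is silently dropped (the reader's schema doesn't mention it) and `interests` is filled from the default, exactly the two directions of evolution described above.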

This leaves us with the problem of knowing the exact schema with which a given record was written.
The best solution depends on the context in which your data is being used:

• In Hadoop you typically have large files containing millions of records, all encoded with the same schema. Object container files handle this case: they just include the schema once at the beginning of the file, and the rest of the file can be decoded with that schema.
• In an RPC context, it’s probably too much overhead to send the schema with every request and response. But if your RPC framework uses long-lived connections, it can negotiate the schema once at the start of the connection, and amortize that overhead over many requests.
• If you’re storing records in a database one-by-one, you may end up with different schema versions written at different times, and so you have to annotate each record with its schema version. If storing the schema itself is too much overhead, you can use a hash of the schema, or a sequential schema version number. You then need a schema registry where you can look up the exact schema definition for a given version number.
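A minimal schema-registry sketch along these lines (the hashing scheme here is illustrative; Avro actually defines its own canonical-form fingerprinting):

```python
import hashlib
import json

# Registry mapping a short schema fingerprint to the full schema definition.
registry = {}

def register(schema):
    # Canonicalise the JSON so semantically identical schemas hash the same.
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    fp = hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
    registry[fp] = schema
    return fp  # store this fingerprint alongside each record

schema = {"type": "record", "name": "Person",
          "fields": [{"name": "userName", "type": "string"}]}
fp = register(schema)
print(registry[fp]["name"])  # Person
```

A reader that encounters a record tagged with `fp` looks the writer's schema up in the registry, then feeds both schemas to the parser.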


At first glance it may seem that Avro’s approach suffers from greater complexity, because you need to go to the additional effort of distributing schemas.
However, I am beginning to think that Avro’s approach also has some distinct advantages:

• Object container files are wonderfully self-describing: the writer schema embedded in the file contains all the field names and types, and even documentation strings (if the author of the schema bothered to write some). This means you can load these files directly into interactive tools like Pig, and it Just Works™ without any configuration.
• As Avro schemas are JSON, you can add your own metadata to them, e.g. describing application-level semantics for a field. And as you distribute schemas, that metadata automatically gets distributed too.
• A schema registry is probably a good thing in any case, serving as documentation and helping you to find and reuse data. And because you simply can’t parse Avro data without the schema, the schema registry is guaranteed to be up-to-date. Of course you can set up a protobuf schema registry too, but since it’s not required for operation, it’ll end up being on a best-effort basis.

posted on 2013-05-14 16:32 by fxjwind