Transporting data
Once we have defined our data model and can refer to it with an instance of a SerializationFramework, we can use it with the Zeno framework.
One of the Zeno framework’s most powerful capabilities is the ability to control, from a single machine, the in-memory data set of any number of additional machines. Zeno accomplishes this by producing and consuming blob files. For a high-level description of how this works at Netflix, see the Netflix Tech Blog.
We will be creating a process which runs in an infinite loop. On each pass through the loop, we read all of the data we want to propagate to downstream servers and transform it into Java objects conforming to our object model. We then use the FastBlobStateEngine to write “blob” files to a persistence store, from which downstream servers will pick them up.
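The production loop described above can be sketched roughly as follows. This is a minimal sketch, assuming Zeno’s FastBlobStateEngine / FastBlobWriter API; `loadAllData()`, `writeToBlobStore()`, `MyObject`, the `"MyObject"` top-level type name, and `CYCLE_MILLIS` are hypothetical placeholders standing in for your own data source, persistence store, and object model:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

import com.netflix.zeno.fastblob.FastBlobStateEngine;
import com.netflix.zeno.fastblob.io.FastBlobWriter;
import com.netflix.zeno.serializer.SerializerFactory;

public class BlobProducer {

    private static final long CYCLE_MILLIS = 60_000L;  // hypothetical cycle interval

    public void run(SerializerFactory serializerFactory) throws Exception {
        // The state engine is created once and reused across cycles.
        FastBlobStateEngine stateEngine = new FastBlobStateEngine(serializerFactory);

        while (true) {
            // Read the current source-of-truth data and add each object
            // to the state engine under its top-level serializer name.
            for (MyObject obj : loadAllData()) {        // hypothetical data source
                stateEngine.add("MyObject", obj);
            }

            // Prepare the engine's internal state for serialization.
            stateEngine.prepareForWrite();

            // Serialize a snapshot blob and push it to the persistence store.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            new FastBlobWriter(stateEngine).writeSnapshot(new DataOutputStream(baos));
            writeToBlobStore(baos.toByteArray());       // hypothetical store upload

            // Reset for the next cycle, then wait before repeating.
            stateEngine.prepareForNextCycle();
            Thread.sleep(CYCLE_MILLIS);
        }
    }
}
```

In a real producer you would typically also write delta blobs between snapshots so that consumers can apply small incremental updates; the pages linked below walk through that in detail.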
Zeno does not require the use of a persistent file store; however, using one to propagate this data gives us fault tolerance. If the data origination server goes down for any reason, the serialized representation of our POJO instances remains available from the file store. The data simply becomes “stale” until a new data origination server comes back online.
The following pages provide example code to accomplish this goal:
