🎓 Take the Serialization 101 Course & View Benchmark Reports
Our official documentation site serves as a comprehensive guide for senior software developers and data scientists, covering serialization theory, C# and Python language specifics, and deep-dive benchmark results.
This repository contains the source code for benchmarking serialization libraries across .NET (C#) and Python, using identical test data, dual-mode APIs, and comparable metrics to enable fair cross-language performance analysis.
**C# (.NET):**

- 38 serializers, including Json.NET, protobuf-net, MessagePack, MemoryPack, FlatSharp, and more.
- Dual-mode testing: string and stream APIs.
- Detailed CSV reporting with timing (ticks) and size (bytes).
- Jupyter Notebook analysis included.

**Python:**

- Serializers across JSON, binary, schema-based, and Python-native groups.
- Dual-mode testing: bytes and stream APIs.
- Extended metrics: throughput, latency, memory allocation, output size, and type fidelity.
- Dockerized execution with `uv` for reproducible environments.
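The dual-mode testing described above can be sketched in plain Python using the standard-library `pickle` module as a stand-in (the benchmark harness's own APIs are not shown here): one mode serializes to an in-memory bytes object, the other writes into a file-like stream.

```python
import io
import pickle

data = {"name": "Ada", "scores": [1, 2, 3]}

# Bytes mode: serialize straight to an in-memory bytes object.
blob = pickle.dumps(data)

# Stream mode: serialize into a file-like stream instead.
buf = io.BytesIO()
pickle.dump(data, buf)

# Both modes produce equivalent payloads and round-trip the data.
assert blob == buf.getvalue()
assert pickle.loads(blob) == data
```

Benchmarking both modes separately matters because stream APIs can avoid an intermediate buffer copy, which shows up in the memory-allocation metrics.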
```shell
cd c-sharp
./scripts/run-benchmarks.sh smoke
```

```shell
cd python
./scripts/run-benchmarks.sh smoke
```

Both benchmarks use the same conceptual test data types, defined in `schemas/benchmark_data.proto` and configured via `schemas/test_data_config.json`.
For details on how test data is generated and configured, see the Test Data Configuration.
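As an illustration only, a per-type configuration file of this kind often maps each test data type to its generation parameters. The keys and values below are hypothetical, not the actual contents of `schemas/test_data_config.json`; see the Test Data Configuration docs for the real schema.

```json
{
  "Person": { "count": 1000, "seed": 42 },
  "StringArray": { "count": 1000, "item_length": 64 }
}
```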
- Person: Nested POCO with enums, strings, and arrays.
- Integer: Primitive throughput baseline.
- Telemetry: Numeric arrays testing binary efficiency.
- SimpleObject: Minimal overhead.
- StringArray: GC/memory pressure test.
- EDI_835: Deeply nested real-world document.
- ObjectGraph: Circular references (only native serializers expected to pass).
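The ObjectGraph case above is the interesting one: cyclic data defeats most interchange formats by design. A minimal sketch of why, using standard-library `pickle` and `json` as representative native and text serializers:

```python
import json
import pickle

# Build a tiny object graph with a circular reference.
node = {"name": "root"}
child = {"name": "child", "parent": node}
node["child"] = child  # node -> child -> node: a cycle

# A Python-native serializer (pickle) preserves the cycle:
restored = pickle.loads(pickle.dumps(node))
assert restored["child"]["parent"] is restored

# A text format like JSON rejects it outright:
try:
    json.dumps(node)
except ValueError as exc:
    print("JSON failed as expected:", exc)
```

This is why only native serializers are expected to pass this case; for the others, a recorded failure is itself a valid benchmark result.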
Results are written to:
- `logs/csharp/benchmark-log.csv`
- `logs/csharp/benchmark-errors.csv`
- `logs/python/benchmark-log.csv`
- `logs/python/benchmark-errors.csv`
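Once the logs exist, they can be aggregated with nothing more than the standard library. The snippet below uses illustrative rows and assumed column names (`serializer`, `data_type`, `ticks`, `bytes`); the real CSV headers may differ, so adjust the field names to match the actual log files.

```python
import csv
import io
from statistics import mean

# Illustrative sample standing in for logs/csharp/benchmark-log.csv;
# column names here are assumptions, not the repo's actual schema.
sample = io.StringIO(
    "serializer,data_type,ticks,bytes\n"
    "MessagePack,Person,120,85\n"
    "MessagePack,Person,110,85\n"
    "Json.NET,Person,480,210\n"
)

rows = list(csv.DictReader(sample))

# Group timing samples by serializer and report the mean ticks.
by_serializer = {}
for row in rows:
    by_serializer.setdefault(row["serializer"], []).append(int(row["ticks"]))

for name, ticks in sorted(by_serializer.items()):
    print(f"{name}: mean {mean(ticks):.1f} ticks")
```

The same pattern scales to the error logs, and the bundled Jupyter Notebook performs a fuller version of this analysis.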
Licensed under the MIT License.