
Add example for efficient handling of large weather datasets #194

Open

AjayKumar0403 wants to merge 1 commit into 52North:main from AjayKumar0403:data-pipeline-example

Conversation

@AjayKumar0403

Description

This PR adds an example demonstrating how to efficiently handle large weather datasets in the Weather Routing Tool.

What is included

  • Loading datasets using chunking (Dask)
  • Subsetting and interpolation of data
  • Saving processed data in NetCDF and Zarr formats
  • Basic performance comparison between storage formats
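The chunked-loading idea in the list above can be sketched in a few lines. This is a minimal illustration using plain NumPy rather than the actual example code: processing a large array one chunk at a time keeps only one chunk resident in memory, which is the same pattern Dask automates when a dataset is opened with a `chunks=` argument. The array here is a synthetic stand-in for a weather variable; names and sizes are hypothetical.

```python
import numpy as np

def chunked_mean(arr, chunk_size):
    """Compute the mean of a large 1-D array one chunk at a time.

    Only a single chunk is materialized per iteration, bounding peak
    memory use regardless of the total array size. Dask's dask.array
    applies this idea automatically and in parallel.
    """
    total = 0.0
    count = 0
    for start in range(0, arr.shape[0], chunk_size):
        chunk = arr[start:start + chunk_size]  # a view, no copy
        total += chunk.sum(dtype=np.float64)
        count += chunk.size
    return total / count

# Synthetic stand-in for a large weather variable (e.g. temperature in K).
data = np.linspace(250.0, 310.0, 1_000_000)

# The chunked result matches the all-at-once result.
print(chunked_mean(data, chunk_size=100_000))
print(data.mean())
```

The same subsetting, interpolation, and NetCDF/Zarr output steps listed above are handled in practice by xarray on top of Dask-chunked arrays; the sketch only shows why chunking keeps memory use flat.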

Motivation

While working with weather datasets, I observed that handling large NetCDF files can lead to increased memory usage and slower processing.

This example provides a simple reference for using chunked processing and alternative storage formats to improve scalability and performance.

Expected Benefit

  • Helps users work with large datasets more efficiently
  • Demonstrates scalable data handling techniques
  • Improves usability for real-world weather data scenarios

Note

This PR does not modify any core functionality and only adds an example and documentation.

@AjayKumar0403 force-pushed the data-pipeline-example branch from 85af470 to 21b1c9a on March 31, 2026 at 15:38
