I am currently using the Wan2.2-Fun-5B-InP model. I noticed that the base Wan2.1/Wan2.2 series models are typically trained on high-resolution datasets (e.g., 1280x1280).
Could you please clarify whether the Wan2.2-Fun-5B-InP variant has undergone additional post-training or fine-tuning specifically on low-resolution datasets, or on datasets with varying aspect ratios?