Hi, I'm fine-tuning this model for my problem, where I need to edit document text. However, even after fine-tuning (200 images, 20 epochs), the images generated by the model still look handwritten, whereas I need them to resemble printed text, as in my ground truth. Is there a way to adapt this approach to generate proper printed (not handwritten or scene) text? Should I try training from scratch on my images?
Some generated images:


The text should resemble the Arial font here (the background and the colors are pretty spot on, though).