Hi, I am testing the node with a text of around 3,000 characters, using various chunk lengths and concatenation methods. With one of them (600 chars + crossfade) I observed that the model dropped the last two words of a sentence in the audio. I don't know whether the drop occurred at the end of a chunk, but I would appreciate it if you could invest some time in the following possible features for enhancing the node:
- Adding a new boolean parameter called something like "save_individual_chunks"
either
- Adding a new pipe output to the node called "audio_chunks" that would take a Save Audio (FLAC) node as its endpoint and save each chunk as soon as it is generated, by using the "audio" property of the Save Audio (FLAC) node, adding an incremental suffix (00001 or 0001) and saving all the chunks as FLAC files in the folder indicated by the path in that property
or
- Adding a new property to the Chatterbox node called "audio_chunks_path" and then incrementally saving the chunks as FLAC (or whatever audio format is available), naming the files after the workflow with an incremental index appended (00001 or similar)
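To make the naming scheme I have in mind concrete, here is a minimal sketch; the function and parameter names are just illustrative, not the node's actual API:

```python
import os

def chunk_paths(audio_chunks_path, workflow_name, num_chunks, ext="flac", pad=5):
    """Build incrementally-suffixed output paths for generated audio chunks,
    e.g. my_workflow_00001.flac, my_workflow_00002.flac, ...
    All names here are hypothetical examples of the proposed behaviour."""
    return [
        os.path.join(audio_chunks_path, f"{workflow_name}_{i:0{pad}d}.{ext}")
        for i in range(1, num_chunks + 1)
    ]

# Example: three chunks for a workflow called "tts_test"
for p in chunk_paths("output/chunks", "tts_test", 3):
    print(p)
```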
This would help me in many ways: both to find out which chunk was affected by a possible error, and, in the case of a larger text, to regenerate only the affected chunk.
Thanks a lot