Hi, I'm digging into the codebase to better understand the paper and found something I was hoping you could clarify.
In Appendix C.3 of the paper (arXiv:2502.02216v2), it's mentioned:
> Surprisingly, our experiments demonstrate that the constraint-free variant could always generate semantically correct tokenized SENTs... Therefore, we did not use any transition constraints during the inference in our experiments.
However, in the `HFSequenceModel.generate` method, it looks like a `LabeledGraph` `LogitsProcessor` is instantiated and used when `self.tokenizer.labeled_graph` is `True`:
```python
if self.tokenizer.labeled_graph:
    logits_processor = LogitsProcessorList([
        LabeledGraph(self.tokenizer, batch_size, input_ids.device),
    ])
```

(AutoGraph/autograph/models/seq_models.py, lines 282 to 285 at 3dd8170)
Since the configs for the molecular datasets (e.g. QM9, MOSES) set `labeled_graph: true`, it seems this processor would be active for those experiments.
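To make my question concrete: my mental model of what such a processor does is roughly the sketch below, which masks the logits of any token that is not a valid successor of the previous token. The `TransitionConstraint` class and its `allowed_next` table are entirely hypothetical and for illustration only; this is not the repo's `LabeledGraph` implementation (a real version would also subclass `transformers.LogitsProcessor`).

```python
import torch


class TransitionConstraint:
    """Hypothetical transition-constraint processor (illustrative only).

    Follows the Hugging Face LogitsProcessor calling convention
    (input_ids, scores) -> scores, masking tokens that are not
    permitted successors of each sequence's previous token.
    """

    def __init__(self, allowed_next, vocab_size):
        # allowed_next: dict mapping a token id to the ids allowed after it;
        # tokens absent from the dict are unconstrained.
        self.allowed_next = allowed_next
        self.vocab_size = vocab_size

    def __call__(self, input_ids, scores):
        # Start from an all -inf mask, then zero out the allowed columns.
        mask = torch.full_like(scores, float("-inf"))
        for row, seq in enumerate(input_ids):
            prev = int(seq[-1])
            allowed = self.allowed_next.get(prev, range(self.vocab_size))
            mask[row, list(allowed)] = 0.0
        return scores + mask


# Toy usage: after token 0, only tokens 1 and 2 are permitted.
proc = TransitionConstraint({0: [1, 2]}, vocab_size=4)
scores = torch.zeros(1, 4)
out = proc(torch.tensor([[0]]), scores)
print(out[0].tolist())  # → [-inf, 0.0, 0.0, -inf]
```

If the `LabeledGraph` processor does something along these lines during generation, it would seem to be an active constraint rather than constraint-free decoding.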
Could you clarify this? Was this `LogitsProcessor` indeed used for the labeled graph experiments, with the statement in the appendix referring only to the simpler, unattributed graph experiments?
Thanks for the great work!