From 5736c7df1bcd185a1e943ef807c8a002d09195f6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=B6=A6=E5=BF=83?=
Date: Fri, 22 Mar 2024 19:51:50 +0100
Subject: [PATCH 1/2] fix: github-mathjax rendering issue

When I view the README, there is an error box "Missing or unrecognized
delimiter for \left", as mentioned in
https://github.com/orsharir/github-mathjax/issues/16#issuecomment-510267343

Not sure if it's the same on your side.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fa57070..a41a7f5 100644
--- a/README.md
+++ b/README.md
@@ -183,7 +183,7 @@ I utilised the AdamW optimiser and conducted hyperparameter tuning on the learn
 
 I fine-tuned for at most 10 epochs and utilise an early stopping strategy as follows:
 
-Let $P_i$ be the model parameter after the $i$-th epoch of the fine-tuning ($i \in \left\{ 0, .., 9\right\}$), and $L_i$ the loss of the model on the evaluation dataset with parameter $P_i$. If there exists an $i$ that satisfies $L_i < L_{i+1} < L_{i+2}$, then the model parameter $P_i$ that satisfies this condition with the smallest $i$ is taken as the final result. Otherwise, the parameter of the last epoch is taken as the final result.
+Let $P_i$ be the model parameter after the $i$-th epoch of the fine-tuning $i \in \lbrace 0, ..., 9\rbrace$, and $L_i$ the loss of the model on the evaluation dataset with parameter $P_i$. If there exists an $i$ that satisfies $L_i < L_{i+1} < L_{i+2}$, then the model parameter $P_i$ that satisfies this condition with the smallest $i$ is taken as the final result. Otherwise, the parameter of the last epoch is taken as the final result.
 
 ### My Model

From ca65e038747b36ae95ee99db0b6cdb1bed4a4a9e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=B6=A6=E5=BF=83?=
Date: Fri, 22 Mar 2024 19:55:10 +0100
Subject: [PATCH 2/2] update: restore parentheses around the index set

The previous commit dropped the parentheses around
$i \in \lbrace 0, ..., 9\rbrace$; this restores them while keeping the
MathJax-compatible \lbrace ... \rbrace delimiters.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a41a7f5..ecd691c 100644
--- a/README.md
+++ b/README.md
@@ -183,7 +183,7 @@ I utilised the AdamW optimiser and conducted hyperparameter tuning on the learn
 
 I fine-tuned for at most 10 epochs and utilise an early stopping strategy as follows:
 
-Let $P_i$ be the model parameter after the $i$-th epoch of the fine-tuning $i \in \lbrace 0, ..., 9\rbrace$, and $L_i$ the loss of the model on the evaluation dataset with parameter $P_i$. If there exists an $i$ that satisfies $L_i < L_{i+1} < L_{i+2}$, then the model parameter $P_i$ that satisfies this condition with the smallest $i$ is taken as the final result. Otherwise, the parameter of the last epoch is taken as the final result.
+Let $P_i$ be the model parameter after the $i$-th epoch of the fine-tuning ($i \in \lbrace 0, ..., 9\rbrace$), and $L_i$ the loss of the model on the evaluation dataset with parameter $P_i$. If there exists an $i$ that satisfies $L_i < L_{i+1} < L_{i+2}$, then the model parameter $P_i$ that satisfies this condition with the smallest $i$ is taken as the final result. Otherwise, the parameter of the last epoch is taken as the final result.
 
 ### My Model
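For reference, the early-stopping rule that both patches reformat can be expressed as a short function. This is a minimal sketch, not code from the patched repository: `select_epoch` is a hypothetical helper, and it assumes the evaluation losses $L_0, \ldots, L_9$ have already been collected in a list.

```python
def select_epoch(losses: list[float]) -> int:
    """Return the epoch whose parameters P_i are kept.

    Rule from the README: take the smallest i with
    losses[i] < losses[i+1] < losses[i+2]; if no such i exists,
    keep the parameters of the last epoch.
    """
    for i in range(len(losses) - 2):
        if losses[i] < losses[i + 1] < losses[i + 2]:
            return i
    return len(losses) - 1


# Example (made-up losses): the loss rises for two consecutive epochs
# after epoch 3, so the parameters P_3 are taken as the final result.
losses = [0.90, 0.70, 0.50, 0.40, 0.45, 0.52, 0.48, 0.46, 0.44, 0.43]
print(select_epoch(losses))  # -> 3
```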