1) Scripts usage
Enric Ribera Borrell edited this page Jun 8, 2022
sample uncontrolled trajectories
python src/sampling/sample_not_controlled.py \
--d 3 \
--alpha-i 1. \
--beta 1. \
--dt 0.01 \
--K 1000
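The flags above suggest an Euler-Maruyama discretization of an overdamped Langevin SDE. A minimal sketch of what such a sampler might look like, assuming a double-well potential V(x) = α(x²−1)² and the target set {x ≥ 1} (both are assumptions, not taken from the repo):

```python
import numpy as np

def grad_V(x, alpha=1.0):
    # gradient of the assumed double-well potential V(x) = alpha * (x^2 - 1)^2
    return 4.0 * alpha * x * (x**2 - 1.0)

def sample_uncontrolled(K=1000, dt=0.01, beta=1.0, alpha=1.0, x0=-1.0,
                        n_steps_max=50_000, target=1.0, seed=0):
    """Euler-Maruyama for dX = -grad V(X) dt + sqrt(2/beta) dW,
    stopping each trajectory when it first reaches {x >= target}."""
    rng = np.random.default_rng(seed)
    x = np.full(K, x0)
    fht = np.full(K, np.nan)          # first hitting times (nan = not arrived)
    alive = np.ones(K, dtype=bool)    # trajectories still running
    for n in range(n_steps_max):
        dW = rng.normal(scale=np.sqrt(dt), size=alive.sum())
        x[alive] += -grad_V(x[alive], alpha) * dt + np.sqrt(2.0 / beta) * dW
        hit = alive & (x >= target)
        fht[hit] = (n + 1) * dt
        alive &= ~hit
        if not alive.any():
            break
    return fht
```

The K trajectories are advanced in one vectorized array; the boolean mask avoids stepping trajectories that have already arrived.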
solve the associated HJB boundary value problem using finite differences
python src/hjb/compute_hjb_solution.py \
--d 2 \
--alpha-i 5. \
--beta 1. \
--h-hjb 0.005
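One common route to this BVP (not necessarily what compute_hjb_solution.py does) is to linearize the HJB equation with the Cole-Hopf transform ψ = exp(−β Φ) and solve the resulting linear two-point BVP by finite differences. A 1D sketch under assumed conventions (double-well potential, running cost f = 1, reflecting left boundary, absorbing target boundary):

```python
import numpy as np

def solve_hjb_1d(beta=1.0, alpha=5.0, a=-2.0, b=1.0, h=0.01):
    """finite-difference solve of the linearized HJB equation
        (1/beta) psi'' - V'(x) psi' - psi = 0,  psi(b) = 1,  psi'(a) = 0,
    then Phi = -(1/beta) log psi and u* = -sqrt(2) Phi' (one convention)."""
    x = np.arange(a, b + h / 2, h)
    N = x.size
    dV = 4.0 * alpha * x * (x**2 - 1.0)      # V'(x) for V = alpha (x^2-1)^2
    A = np.zeros((N, N))
    rhs = np.zeros(N)
    A[0, 0], A[0, 1] = 1.0, -1.0             # Neumann (reflecting) at a
    for i in range(1, N - 1):                # central differences inside
        A[i, i - 1] = 1.0 / (beta * h**2) + dV[i] / (2.0 * h)
        A[i, i] = -2.0 / (beta * h**2) - 1.0
        A[i, i + 1] = 1.0 / (beta * h**2) - dV[i] / (2.0 * h)
    A[N - 1, N - 1] = 1.0                    # Dirichlet at the target b
    rhs[N - 1] = 1.0
    psi = np.linalg.solve(A, rhs)
    Phi = -np.log(psi) / beta                # value function
    u_opt = -np.sqrt(2.0) * np.gradient(Phi, h)
    return x, psi, Phi, u_opt
```

By the discrete maximum principle ψ stays in (0, 1], so the logarithm is well defined.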
sample metadynamics trajectories
python src/sampling/sample_metadynamics.py \
--d 4 \
--alpha-i 5. \
--beta 1. \
--meta-type cum \
--weights-type const \
--omega-0 1. \
--dt-meta 0.01 \
--K-meta 1 \
--sigma-i-meta 0.5 \
--delta-meta 1 \
--seed 1
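The metadynamics flags (--omega-0, --sigma-i-meta, --delta-meta, --weights-type const) suggest the standard scheme: deposit a Gaussian of fixed weight at the current position every few steps and add the gradient of the accumulated bias to the drift. A minimal 1D sketch, again assuming the double-well potential (the repo may differ in details):

```python
import numpy as np

def grad_V(x, alpha=5.0):
    # gradient of the assumed double-well potential V(x) = alpha * (x^2 - 1)^2
    return 4.0 * alpha * x * (x**2 - 1.0)

def metadynamics(dt=0.01, beta=1.0, alpha=5.0, omega_0=1.0, sigma=0.5,
                 delta=100, n_steps=5000, x0=-1.0, seed=1):
    """overdamped Langevin plus an adaptively grown bias potential:
    a Gaussian of constant weight omega_0 is deposited every `delta` steps."""
    rng = np.random.default_rng(seed)
    means, weights = [], []

    def grad_bias(x):
        # gradient of sum_j w_j exp(-(x - m_j)^2 / (2 sigma^2))
        g = 0.0
        for m, w in zip(means, weights):
            g += -w * (x - m) / sigma**2 * np.exp(-0.5 * (x - m)**2 / sigma**2)
        return g

    x = x0
    for n in range(n_steps):
        if n % delta == 0:
            means.append(x)
            weights.append(omega_0)
        drift = -grad_V(x, alpha) - grad_bias(x)
        x += drift * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
    return np.array(means), np.array(weights)
```

The deposited (means, weights) pairs define the bias potential that the meta-controlled sampling scripts below reuse.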
sample optimally controlled trajectories (control from the HJB solution)
python src/sampling/sample_optimal_controlled.py \
--d 2 \
--alpha-i 5. \
--beta 1. \
--h-hjb 0.005 \
--dt 0.01 \
--K 1000
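Controlled sampling adds a drift term built from the control and typically also accumulates the Girsanov terms needed to reweight samples back to the original dynamics. A sketch of that loop for a generic control callable u (the double-well potential and the exact reweighting convention are assumptions):

```python
import numpy as np

def grad_V(x, alpha=5.0):
    # gradient of the assumed double-well potential V(x) = alpha * (x^2 - 1)^2
    return 4.0 * alpha * x * (x**2 - 1.0)

def sample_controlled(u, K=1000, dt=0.01, beta=1.0, alpha=5.0, x0=-1.0,
                      n_steps_max=10_000, target=1.0, seed=0):
    """Euler-Maruyama for the controlled SDE
        dX = (-grad V(X) + sqrt(2) u(X)) dt + sqrt(2/beta) dW,
    accumulating the stochastic and quadratic Girsanov terms so an
    importance-sampling estimator can be formed afterwards."""
    rng = np.random.default_rng(seed)
    x = np.full(K, x0)
    fht = np.full(K, np.nan)
    stoch = np.zeros(K)    # running int u(X) dW
    quad = np.zeros(K)     # running int |u(X)|^2 dt
    alive = np.ones(K, dtype=bool)
    for n in range(n_steps_max):
        ux = u(x[alive])
        dW = rng.normal(scale=np.sqrt(dt), size=alive.sum())
        x[alive] += (-grad_V(x[alive], alpha) + np.sqrt(2.0) * ux) * dt \
                    + np.sqrt(2.0 / beta) * dW
        stoch[alive] += ux * dW
        quad[alive] += ux**2 * dt
        hit = alive & (x >= target)
        fht[hit] = (n + 1) * dt
        alive &= ~hit
        if not alive.any():
            break
    return fht, stoch, quad
```

With u ≡ 0 this reduces exactly to the uncontrolled sampler; a control pushing toward the target makes trajectories arrive much faster.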
Controlled sampling with the bias potential from metadynamics. The control is represented by a Gaussian ansatz with uniformly distributed centers.
sample controlled trajectories (control from the metadynamics algorithm)
python src/sampling/sample_meta_controlled.py \
--d 1 \
--alpha-i 5. \
--beta 1. \
--meta-type cum \
--weights-type const \
--omega-0 1. \
--dt-meta 0.01 \
--K-meta 1 \
--sigma-i-meta 0.5 \
--sigma-meta 1 \
--seed 1 \
--distributed uniform \
--theta meta \
--dt 0.01 \
--K 1000
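The Gaussian ansatz behind the --distributed uniform, --m-i and --sigma-i flags can be read as a control that is a linear combination of Gaussians on a uniform grid of centers. A minimal sketch (the names and the exact parametrization are assumptions):

```python
import numpy as np

def gaussian_ansatz_control(theta, centers, sigma_i=0.5):
    """control u_theta(x) = sum_j theta_j * exp(-(x - mu_j)^2 / (2 sigma_i^2)),
    a linear combination of Gaussians with fixed, uniformly placed centers."""
    def u(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        phi = np.exp(-0.5 * (x[:, None] - centers[None, :])**2 / sigma_i**2)
        return phi @ theta
    return u

centers = np.linspace(-2.0, 2.0, 50)                      # m_i = 50 centers
u_null = gaussian_ansatz_control(np.zeros(50), centers)   # theta null => u = 0
```

Initializing θ from the metadynamics bias (--theta meta) amounts to fitting these coefficients to the deposited Gaussians; θ = 0 (--theta null) gives the zero control.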
Controlled sampling with the bias potential from metadynamics. The control is represented by a feed-forward NN.
sample controlled trajectories (control from the metadynamics algorithm)
python src/sampling/sample_meta_controlled_nn.py \
--d 1 \
--alpha-i 5. \
--beta 1. \
--meta-type cum \
--weights-type const \
--omega-0-meta 1. \
--dt-meta 0.01 \
--delta-meta 1 \
--K-meta 1 \
--dt 0.01 \
--K 1000 \
--d-layers 30 30 \
--activation tanh \
--distributed meta \
--theta meta
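The --d-layers 30 30 and --activation tanh flags describe a small feed-forward network. A NumPy sketch of such a control (initialization scale and layer layout are assumptions, not the repo's code):

```python
import numpy as np

def init_nn(d_layers, d_in=1, d_out=1, seed=1):
    """random coefficients for a feed-forward net with tanh hidden layers,
    e.g. d_layers=[30, 30] as in the --d-layers flag."""
    rng = np.random.default_rng(seed)
    sizes = [d_in] + list(d_layers) + [d_out]
    return [(rng.normal(scale=1.0 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def nn_control(params, x):
    """evaluate u_theta(x): tanh on hidden layers, linear output layer."""
    h = np.atleast_2d(x)
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b
```

"--theta null" and "--theta meta" then correspond to pre-training these weights so the network output matches the zero control or the metadynamics control, respectively.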
train a Gaussian ansatz control with SGD on the gradient-based loss
- with null initial bias potential
python src/soc/sgd_grad_loss_gaussian_ansatz.py \
--d 1 \
--alpha-i 1. \
--beta 0.5 \
--distributed uniform \
--sigma-i 0.5 \
--m-i 50 \
--theta null \
--dt 0.01 \
--K 1000 \
--lr 0.01 \
--n-iterations-lim 10
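The --lr and --n-iterations-lim flags describe a plain SGD loop: at each iteration, K trajectories are sampled to estimate the gradient of the loss with respect to θ, and θ takes a step against it. A sketch of that structure, with a toy noisy quadratic gradient standing in for the sampled SOC gradient (the toy objective is purely illustrative):

```python
import numpy as np

def sgd(grad_estimator, theta0, lr=0.01, n_iterations_lim=10, seed=1):
    """plain SGD: each iteration calls grad_estimator (which in the SOC
    setting would sample K trajectories) and steps against the estimate."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for _ in range(n_iterations_lim):
        theta -= lr * grad_estimator(theta, rng)
    return theta

# toy stand-in: noisy gradient of f(theta) = ||theta||^2 / 2
toy_grad = lambda theta, rng: theta + 0.01 * rng.normal(size=theta.shape)
```

Swapping toy_grad for a Monte Carlo estimator over sampled trajectories recovers the training loop the flags above configure.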
- with initial bias potential fitted from the metadynamics algorithm
python src/soc/sgd_grad_loss_gaussian_ansatz.py \
--d 1 \
--alpha-i 1. \
--beta 0.5 \
--distributed uniform \
--sigma-i 0.5 \
--m-i 50 \
--theta meta \
--meta-type cum \
--weights-type const \
--omega-0-meta 1. \
--dt-meta 0.01 \
--sigma-meta 1. \
--K-meta 1 \
--dt 0.01 \
--K 1000 \
--lr 0.01 \
--n-iterations-lim 10
train the control with SGD on the effective loss
- NN initialized with random coefficients
python src/soc/sgb_eff_loss_gaussian_ansatz.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta random \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1
- NN initialized with coefficients trained so that the control is zero on the whole domain
python src/soc/sgb_eff_loss_gaussian_ansatz.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta null \
--d-layers 30 30 \
--activation tanh \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1
- NN initialized with coefficients trained so that the control matches the metadynamics control
python src/soc/sgb_eff_loss_gaussian_ansatz.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta meta \
--d-layers 30 30 \
--activation tanh \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1
train a feed-forward NN control with SGD on the effective loss
- NN initialized with random coefficients
python src/soc/sgb_eff_loss_feed_forward_nn.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta random \
--d-layers 30 30 \
--activation tanh \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1
- NN initialized with coefficients trained so that the control is zero on the whole domain
python src/soc/sgb_eff_loss_feed_forward_nn.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta null \
--d-layers 30 30 \
--activation tanh \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1
- NN initialized with coefficients trained so that the control matches the metadynamics control
python src/soc/sgb_eff_loss_feed_forward_nn.py \
--d 1 \
--alpha-i 1. \
--beta 1. \
--theta meta \
--d-layers 30 30 \
--activation tanh \
--dt 0.01 \
--K 1000 \
--optimizer sgd \
--lr 0.01 \
--n-iterations-lim 10 \
--seed 1