Welcome to the repository of a reproducibility study of the paper *Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing*.

We implemented some modifications and added a few new pipelines, but most of the code comes from the repository of the studied paper. If you like our work, make sure to also check out theirs.
Recent advances in diffusion-based image editing have enabled highly realistic and accessible manipulation of facial images, raising serious concerns about biometric privacy and malicious misuse. FaceLock, introduced in *Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing*, proposes an optimization-based defense that embeds subtle perturbations into images at publication time to induce identity distortion in downstream generative edits. The method claims prompt-agnostic effectiveness and strong performance across multiple editing scenarios, supported by open-source code. In this paper, we present a systematic reproducibility study of FaceLock that evaluates its technical, quantitative, and qualitative reproducibility. We assess whether the reported results can be obtained using the released codebase, analyze the correspondence between the paper's algorithmic description and its implementation, and document ambiguities that impact reproducibility. We further examine quantitative reproducibility by attempting to recover the reported performance trends and relative ranking against baselines. However, we were not able to reproduce the originally reported performance trends, and our outputs were generally worse than those presented in the original paper. Beyond that, we expand the qualitative analysis to a broader set of image–prompt pairs and an additional, harder facial dataset to better test generalization behavior. While we obtained some successful outputs, only a small fraction of our qualitative results matched the consistently high quality reported by the authors.
Finally, we introduce an extension to the FaceLock method that improves robustness, and we critically examine the evaluation criteria used to measure defense effectiveness, highlighting the limitations of prompt fidelity as a primary metric and arguing for a more explicit consideration of the trade-off between identity protection and preservation of the original image.
The authors of the studied paper provide a conda environment file for setup:

```shell
conda env create -f environment.yml
conda activate facelock
```

The setup typically works fine for the image-editing pipeline, but it may run into CUDA kernel issues in other pipelines (such as the defense one). If that happens, update your torch and torchvision packages so they both match the versions pinned in environment.yml as well as your CUDA version (if it differs from 11.8). As an example, the commands below update the packages to match this project under CUDA 12.*. Before blindly doing that, though, you should always check whether a pre-built wheel for your Python + package + CUDA + CPU-architecture combination exists at https://download.pytorch.org/whl/
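The `+cu121` suffix in the install commands below encodes the CUDA toolkit a wheel was built against by concatenating the major and minor version digits. A tiny helper, purely illustrative and not part of the repo, makes the mapping explicit:

```python
def wheel_version(pkg_version: str, cuda_version: str) -> str:
    """Compose the pip version spec for a CUDA-specific wheel.

    The '+cuXYZ' local-version suffix concatenates the CUDA major and minor
    numbers, so CUDA 12.1 maps to '+cu121'. Illustrative helper only.
    """
    major, minor = cuda_version.split(".")[:2]
    return f"{pkg_version}+cu{major}{minor}"

print(wheel_version("2.3.1", "12.1"))  # -> 2.3.1+cu121
```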
```shell
pip install torch==2.3.1+cu121 -f https://download.pytorch.org/whl/torch/
pip install torchvision==0.18.1+cu121 -f https://download.pytorch.org/whl/torchvision/
```

The authors first present code for image editing and defense applied to a single input image.
```shell
python edit.py --input_path=${input image path} --prompt=${the instruction prompt used to edit the image} [--num_inference_steps=50 --image_guidance_scale=1.5 --guidance_scale=7.5 --help]
```

Arguments explanation:
- `input_path`: the path to the image to be edited
- `prompt`: the instruction prompt used to edit the image
- `num_inference_steps`, `image_guidance_scale`, `guidance_scale`: configurations used to guide the image editing process
- `help`: to view other arguments for image editing
```shell
python defend.py --input_path=${input image path} --defend_method=${selected defense method} --fr_folder=${absolute path to download and save face recognition model at} [--attack_budget=0.02 --step_size=0.003 --num_iters=100 --output_path=${path to location+name_of_file to save the defended image} --help]
```

Arguments explanation:
- `input_path`: the path to the image to be protected
- `defend_method`: the selected defense method; the authors of FaceLock provide the options `[encoder/vae/cw/facelock]`. We also introduce two new options, `[simple_facelock/eot_facelock]`, that can be used the same way.
- `fr_folder`: the absolute path to download and save the face recognition model at
- `output_path`: the path (location + file name) to save the defended image
- `attack_budget`, `step_size`, `num_iters`: hyperparameters for the defense process
- `help`: to view other arguments for defending a single image
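The `attack_budget`, `step_size`, and `num_iters` flags parameterize a PGD-style perturbation loop. The sketch below is a rough illustration only, assuming an L∞ budget and a caller-supplied gradient of the defense objective; the actual FaceLock objective and update rule live in `defend.py` and differentiate through the editing model:

```python
import numpy as np

def pgd_defend(image, grad_fn, attack_budget=0.02, step_size=0.003, num_iters=100):
    """Minimal PGD-style loop matching the CLI flags above (illustrative only).

    `grad_fn` stands in for the gradient of the defense objective with respect
    to the perturbed image; the real implementation backpropagates through the
    editing/encoding models instead.
    """
    delta = np.zeros_like(image)
    for _ in range(num_iters):
        g = grad_fn(image + delta)                 # ascend the defense objective
        delta = np.clip(delta + step_size * np.sign(g),
                        -attack_budget, attack_budget)
    return np.clip(image + delta, 0.0, 1.0)

# Toy objective 0.5*||x||^2, whose gradient is x: the perturbation saturates
# at the budget, so every pixel moves from 0.5 to 0.52.
defended = pgd_defend(np.full((2, 2), 0.5), grad_fn=lambda x: x)
print(defended)
```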
For the defense code, they make use of the Hugging Face Hub. We adapted their code to make it more user-friendly: simply create a file named `hf_token` inside the FaceLock folder with a single line containing your Hugging Face token to run the defense pipeline. This file is also needed when scaling up the pipeline.
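The expected format is just the raw token on a single line. A minimal sketch of creating and reading such a file (the token string here is a placeholder, not a real token):

```python
from pathlib import Path

# Placeholder token for illustration; put your real Hugging Face token here.
Path("hf_token").write_text("hf_your_token_here\n")

# The pipeline can then read it back along these lines:
token = Path("hf_token").read_text().strip()
print(token)
```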
Next, they extend this to handle image editing and defense across multiple images.
```shell
python main_edit.py --src_dir=${input image dir} --edit_dir=${output image dir} [--num_inference_steps=50 --image_guidance_scale=1.5 --guidance_scale=7.5 --help]
```

Arguments explanation:
- `src_dir`: the path to the directory of the source images to be edited
- `edit_dir`: the path to the directory containing the generated edited images
- other arguments are similar to the single-image editing version; use `--help` to see more details
```shell
python main_defend.py --image_dir=${input image dir} --output_dir=${output image dir} --defend_method=${selected defense method} --fr_folder=${absolute path to download and save face recognition model at} [--attack_budget=0.02 --step_size=0.003 --num_iters=100 --help]
```

Arguments explanation:
- `image_dir`: the path to the directory of the source images to be protected
- `output_dir`: the path to the directory containing the generated protected images
- other arguments are similar to the single-image defense version; use `--help` to see more details
They provide the evaluation code for computing the PSNR, SSIM, LPIPS, CLIP-S, CLIP-I, and FR metrics mentioned in the paper.
```shell
cd evaluation
# PSNR metric
python eval_psnr.py --clean_edit_dir=${path to the clean edits} --defend_edit_dirs=${sequence of paths to the protected edits} --seed_clean=${the seed used to edit and evaluate on the clean images} --seed_defend=${the seed used to edit and evaluate on the defended images}
# SSIM metric
python eval_ssim.py --clean_edit_dir=${path to the clean edits} --defend_edit_dirs=${sequence of paths to the protected edits} --seed_clean=${the seed used to edit and evaluate on the clean images} --seed_defend=${the seed used to edit and evaluate on the defended images}
# LPIPS metric
python eval_lpips.py --clean_edit_dir=${path to the clean edits} --defend_edit_dirs=${sequence of paths to the protected edits} --seed_clean=${the seed used to edit and evaluate on the clean images} --seed_defend=${the seed used to edit and evaluate on the defended images}
# CLIP-S metric
python eval_clip_s.py --src_dir=${path to the source images} --defend_edit_dirs=${sequence of paths to the protected edits} --seed=${the seed used to edit and evaluate on} [--clean_edit_dir=${path to the clean edits}]
# CLIP-I metric
python eval_clip_i.py --src_dir=${path to the source images} --defend_edit_dirs=${sequence of paths to the protected edits} --seed=${the seed used to edit and evaluate on} [--clean_edit_dir=${path to the clean edits}]
# FR metric
python eval_facial.py --src_dir=${path to the source images} --defend_edit_dirs=${sequence of paths to the protected edits} --seed=${the seed used to edit and evaluate on} --fr_folder=${absolute path to download and save face recognition model at} [--clean_edit_dir=${path to the clean edits}]
```

For PSNR, SSIM, and LPIPS, the computations are performed between the edits of the protected images and the edits of the clean images. Therefore, the inputs `defend_edit_dirs` and `clean_edit_dir` are required.
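As a reference for what `eval_psnr.py` measures, PSNR between two aligned images can be sketched as follows (a standalone illustration, not the repo's implementation):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shaped uint8 images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                       # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean_edit = np.full((4, 4, 3), 100, dtype=np.uint8)
defended_edit = clean_edit.copy()
defended_edit[0, 0, 0] = 110                      # a single perturbed channel value
print(round(psnr(clean_edit, defended_edit), 2))  # -> 44.94
```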
For CLIP-S, the computation involves the source image, the edited image, and the edit prompt. To handle your unique instructions, you can modify `utils.py`. This requires additional input for the source image directory `src_dir`. If `clean_edit_dir` is provided, CLIP-S results will also be calculated for the edits on unprotected images.

For CLIP-I and FR, the computations use the source image and the edited image, sharing the same input settings as CLIP-S.
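CLIP-I and FR typically reduce to a cosine similarity between embedding vectors (CLIP image embeddings and face recognition embeddings, respectively). The core computation can be sketched as follows, with stand-in toy vectors in place of real embeddings:

```python
import numpy as np

def cosine_similarity(u, v) -> float:
    """Cosine similarity between two embedding vectors (e.g. CLIP image
    embeddings for CLIP-I, face recognition embeddings for FR)."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

src_embedding = np.array([0.6, 0.8, 0.0])    # stand-in source-image embedding
edit_embedding = np.array([0.8, 0.6, 0.0])   # stand-in edited-image embedding
print(round(cosine_similarity(src_embedding, edit_embedding), 2))  # -> 0.96
```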
We also provide an additional evaluation pipeline. We kept the original authors' face recognition (image integrity) pipeline, as we understand its importance, but we repurposed the prompt fidelity pipeline to instead assess image similarity between source images and defended images. This follows our proposal of what image defense pipelines should track: the trade-off between how much an image is protected and how much it is distorted at publication time to achieve that protection.
```shell
cd evaluation
# (Alternative) PSNR metric
python eval_psnr_alt.py --src_dir=${path to source images} --defend_dirs=${path to the defended images}
# (Alternative) SSIM metric
python eval_ssim_alt.py --src_dir=${path to source images} --defend_dirs=${path to the defended images}
# (Alternative) LPIPS metric
python eval_lpips_alt.py --src_dir=${path to source images} --defend_dirs=${path to the defended images}
# (Alternative) CLIP-I metric
python eval_clip_i_alt.py --src_dir=${path to source images} --defend_dirs=${path to the defended images}
# FR metric
python eval_facial.py --src_dir=${path to the source images} --defend_edit_dirs=${sequence of paths to the protected edits} --seed=${the seed used to edit and evaluate on} --fr_folder=${absolute path to download and save face recognition model at} [--clean_edit_dir=${path to the clean edits}]
```

If you find this repository helpful for your research, please consider citing our work:
```bibtex
@article{zerkowski2026revisitingfacelock,
  title={Revisiting "Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing"},
  author={Zerkowski, Luis Vitor and Chaudhuri, Soham and Helms, Finley and Sombekke, Jelle and Thakur, Udit},
  journal={Transactions on Machine Learning Research},
  year={2026},
  url={https://openreview.net/forum?id=5Q1gr80AXU}
}
```

We also encourage you to cite the authors of the studied paper, as this reproducibility study could not have been done without their work.
```bibtex
@article{wang2024editawayfacestay,
  title={Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing},
  author={Hanhui Wang and Yihua Zhang and Ruizheng Bai and Yue Zhao and Sijia Liu and Zhengzhong Tu},
  journal={arXiv preprint arXiv:2411.16832},
  year={2024}
}
```