Hi @bhung-bdai 🤗
Niels here from the open-source team at Hugging Face. I discovered your work on arXiv and was wondering whether you would like to submit it to hf.co/papers to improve its discoverability. If you are one of the authors, you can submit it at https://huggingface.co/papers/submit.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or demos, for instance). You can also claim the paper as yours, which will show it on your public HF profile, and add GitHub and project page URLs.
I saw in your paper that you are releasing an open-source benchmark and dataset for loco-manipulation, as well as a pre-trained whole-body control policy. It'd be great to make these checkpoints and the dataset available on the 🤗 Hub to improve their discoverability and visibility within the robotics community. We can add tags so that people can find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models
See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, you can leverage the hf_hub_download one-liner to download a checkpoint from the Hub directly into a research codebase. We encourage researchers to push each model checkpoint to a separate model repository, so that download stats work per checkpoint.
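As a quick sketch (the repo id and filename below are placeholders, not the names of your actual release):

```python
from huggingface_hub import hf_hub_download

# Downloads a single checkpoint file from the Hub into the local cache
# and returns its path; repo_id and filename are placeholders here.
checkpoint_path = hf_hub_download(
    repo_id="your-hf-org-or-username/your-model",
    filename="policy_checkpoint.pt",
)
print(checkpoint_path)
```

The file is cached locally, so repeated calls don't re-download it.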
Uploading dataset
It would be awesome to make the benchmark dataset available on 🤗 as well, so that people can do:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
```
See here for a guide: https://huggingface.co/docs/datasets/loading.
Besides that, there's the dataset viewer which allows people to quickly explore the data (like the HDF5 rollout results) in the browser.
Let me know if you're interested/need any help regarding this!
Cheers,
Niels
ML Engineer @ HF 🤗