A queue-based Telegram migration tool for moving large batches of channel media without running one giant fragile loop. It separates scanning, processing, and verification, stores all jobs in SQLite, uses one global Telegram API limiter, handles FloodWait, supports album jobs, and can upload to your destination through a bot account.
Use it only for content you are allowed to access and migrate.
- SQLite queue in `data/migration.sqlite3`
- resumable `messages` table with `pending`, `downloading`, `uploading`, `copied`, `failed`, and `skipped` states
- separate phases: scan source IDs, process pending jobs, verify destination posts
- one shared rate limiter for Telegram calls
- `FloodWait` sleeps for Telegram's wait plus extra random padding
- batch processing with long pauses between batches
- album/media group detection and one queue job per album
- optional bot upload mode: user session reads, bot session posts
- retry backoff with a maximum attempt count
- controlled download cleanup under `downloads/active`, `downloads/failed`, and `downloads/completed`
- graceful Ctrl+C handling
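The shared limiter and `FloodWait` padding above can be pictured with a minimal sketch. The class and function names here are illustrative, not the tool's actual API, and the padding range is an assumption:

```python
import asyncio
import random
import time

class GlobalLimiter:
    """One limiter shared by every Telegram call (illustrative sketch)."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last_call = 0.0
        self._lock = asyncio.Lock()

    async def wait(self) -> None:
        # Serialize callers so the interval applies globally, not per task.
        async with self._lock:
            elapsed = time.monotonic() - self._last_call
            if elapsed < self.min_interval:
                await asyncio.sleep(self.min_interval - elapsed)
            self._last_call = time.monotonic()

def floodwait_sleep_seconds(server_wait: int) -> float:
    """Telegram's requested wait plus random padding (padding range assumed)."""
    return server_wait + random.uniform(2.0, 10.0)
```

Funneling every call through one async lock is what makes the interval global rather than per-task, which is the property that keeps concurrent workers from collectively exceeding Telegram's limits.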
```
pip install -r requirements.txt
```

Create or edit `config.yaml`. Secrets can be placed directly in YAML, but using `.env` is cleaner:

```
API_ID=123456
API_HASH=your_api_hash
BOT_TOKEN=123456:your_bot_token
```

The app reads `.env`, then expands values like `${API_ID}` in `config.yaml`.
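The `${API_ID}`-style expansion can be sketched with a few lines; the tool's real loader (which also reads `.env` first) may behave differently, and `expand_env` is a hypothetical helper name:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace ${NAME} placeholders with environment values.

    Minimal sketch of the substitution step; unknown names are
    left untouched rather than replaced with an empty string.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        text,
    )
```

For example, with `BOT_TOKEN=123:abc` in the environment, `expand_env('token: "${BOT_TOKEN}"')` yields `token: "123:abc"`.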
Important fields in `config.yaml`:

```yaml
telegram:
  user_session: "tnabil"
  bot:
    enabled: true
    token: "${BOT_TOKEN}"
    use_for_uploads: true
migration:
  sources:
    - chat: "@source_channel_or_-100_id"
      message_range:
        start: 1
        end: 2000
  destinations:
    - chat: "@destination_channel_or_-100_id"
limits:
  read_delay_seconds: 2
  download_delay_seconds: 5
  upload_delay_seconds: 30
batch:
  size: 25
  pause_between_batches_seconds: 1800
```

Set the bot as an admin in your destination channel when `telegram.bot.use_for_uploads` is enabled. The user session still reads the source because bots often cannot access old source history.
Create a user session:

```
python main.py login --session tnabil
```

Scan source message IDs into SQLite:

```
python main.py scan
```

Process pending jobs in configured batches:

```
python main.py process
```

Verify copied destination messages:

```
python main.py verify
```

Run scan and process sequentially:

```
python main.py run
```

Show queue counts:

```
python main.py stats
```

Recover jobs that were left as `downloading` or `uploading` after a crash:

```
python main.py recover
```

`python bot.py ...` still works as a wrapper around `main.py`.
```
app/
  config.py
  db.py
  telegram_client.py
  scanner.py
  queue.py
  worker.py
  upload.py
  errors.py
  logging.py
data/
  migration.sqlite3
downloads/
  active/
  failed/
  completed/
config.yaml
main.py
```
The `messages` table stores:

- `source_chat_id`
- `source_message_id`
- `dest_chat_id`
- `status`
- `attempts`
- `last_error`
- `next_retry_at`
- `file_unique_key`
- `created_at`
- `updated_at`

Extra columns keep album IDs, source message ID lists, destination message IDs, and verification timestamps.
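Based on the columns listed above, the table might be created roughly like this; the column types, defaults, and primary key are assumptions, not the tool's exact schema:

```python
import sqlite3

# In-memory for illustration; the real queue lives in data/migration.sqlite3.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE IF NOT EXISTS messages (
    source_chat_id    INTEGER NOT NULL,
    source_message_id INTEGER NOT NULL,
    dest_chat_id      INTEGER,
    status            TEXT NOT NULL DEFAULT 'pending',
    attempts          INTEGER NOT NULL DEFAULT 0,
    last_error        TEXT,
    next_retry_at     TEXT,
    file_unique_key   TEXT,
    created_at        TEXT NOT NULL,
    updated_at        TEXT NOT NULL,
    PRIMARY KEY (source_chat_id, source_message_id)
)
""")
```

A composite primary key on `(source_chat_id, source_message_id)` is one natural way to make the scan phase idempotent: re-scanning the same range cannot enqueue a message twice.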
Retries are scheduled using `queue.retry_backoff_seconds`; once a job exceeds `queue.max_attempts`, it is marked `failed` permanently. Filtered messages are recorded as `skipped` when `queue.record_skipped` is true.
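One way the retry rule could look in code. The function name, the exponential growth of the delay, and the constant names are illustrative assumptions (the tool may use a fixed delay per attempt):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple

RETRY_BACKOFF_SECONDS = 60   # stand-in for queue.retry_backoff_seconds
MAX_ATTEMPTS = 5             # stand-in for queue.max_attempts

def on_failure(attempts: int) -> Tuple[str, Optional[datetime]]:
    """Return (new_status, next_retry_at) after a failed attempt."""
    attempts += 1
    if attempts >= MAX_ATTEMPTS:
        # Attempt budget exhausted: the job becomes permanently failed.
        return "failed", None
    # Back off before the next try; exponential doubling is assumed here.
    delay = RETRY_BACKOFF_SECONDS * 2 ** (attempts - 1)
    return "pending", datetime.now(timezone.utc) + timedelta(seconds=delay)
```

Returning the job to `pending` with a future `next_retry_at` lets the normal `process` loop pick it up again once the timestamp passes, with no separate retry queue needed.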