Adaptive Semantic Communication with proportional DeepJSCC bandwidth allocation.
This repository contains the final conv-based code, selected checkpoints, experiment summaries, and figures used for the paper-ready release of the project.
```bash
git clone https://github.com/mark000071/ASCoT.git
cd ASCoT
pip install -r requirements.txt
```

Set up the COCO dataset as:
```
dataset/
  coco/
    train2017/
    val2017/
    annotations/
      instances_train2017.json
      instances_val2017.json
```
Then reproduce the final conv experiment matrix with:
```bash
DEVICE=cuda:0 \
SEMANTIC_ARCH=conv \
CHANNEL_TYPE=AWGN \
RUN_STAMP=march28prop \
IMAGE_SIZE=512 \
MAX_TRAIN_SAMPLES=40000 \
MAX_VAL_SAMPLES=2000 \
EPOCHS=20 \
BATCH_SIZE=2 \
TRAIN_SNRS="-10 0 10" \
TEST_SNRS="-10 0 10" \
bash scripts/run_architecture_fixed_snr_matrix.sh
```

This release contains:

- Final single-pass SemCom + DeepJSCC code
- Conv-based semantic encoder/decoder pipeline
- AWGN, Rayleigh, and Nakagami channel support
- Final three-channel experiment results
- Final ablation-study results
- Selected trained checkpoints for reproducibility
- Reproduction scripts and result-summary scripts
- `src/` - core model, data loading, training, evaluation, and adaptive inference code
- `scripts/` - experiment launchers and report-generation scripts
- `weights/` - final conv checkpoints for AWGN, Rayleigh, and Nakagami
- `results/awgn/` - final AWGN summaries and selected result tables
- `results/channels/` - merged AWGN/Rayleigh/Nakagami comparison tables and plots
- `results/ablation/` - final conv-only ablation summaries and plots
- `figures/` - selected figures organized by topic
- `docs/` - experiment notes and paper-writing helper material
The released main model is the single-pass pipeline:
```
x -> semantic encoder -> JSCC encoder -> channel -> JSCC decoder -> semantic decoder -> x_hat
```
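To make the single-pass flow concrete, here is a toy numpy sketch with random linear maps standing in for the conv networks; the dimensions, weights, and the fixed test SNR are hypothetical illustration values, not the released model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: input features, semantic channels (zc),
# and JSCC channel symbols. The real pipeline operates on image tensors.
D_IN, D_SEM, D_SYM = 64, 32, 32

# Random linear maps stand in for the conv encoder/decoder networks.
W_sem_enc = rng.normal(size=(D_SEM, D_IN)) / np.sqrt(D_IN)
W_jscc_enc = rng.normal(size=(D_SYM, D_SEM)) / np.sqrt(D_SEM)
W_jscc_dec = rng.normal(size=(D_SEM, D_SYM)) / np.sqrt(D_SYM)
W_sem_dec = rng.normal(size=(D_IN, D_SEM)) / np.sqrt(D_SEM)

def awgn(s, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB)."""
    noise_power = np.mean(s ** 2) / (10 ** (snr_db / 10))
    return s + rng.normal(scale=np.sqrt(noise_power), size=s.shape)

def forward(x, snr_db=10.0):
    z = W_sem_enc @ x           # semantic encoder
    s = W_jscc_enc @ z          # JSCC encoder -> channel symbols
    s_hat = awgn(s, snr_db)     # channel (AWGN here)
    z_hat = W_jscc_dec @ s_hat  # JSCC decoder
    return W_sem_dec @ z_hat    # semantic decoder -> x_hat

x = rng.normal(size=D_IN)
x_hat = forward(x)
print(x.shape, x_hat.shape)  # reconstruction has the input's shape
```

The point of the sketch is only the data flow: the reconstruction `x_hat` lives in the same space as `x`, with the channel noise injected between the two JSCC stages.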
The final released variant uses the conv-based semantic encoder/decoder and proportional semantic-symbol settings:
- small: (zc=16, jscc=16)
- medium: (zc=32, jscc=32)
- large: (zc=64, jscc=64)
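Read as a configuration table, "proportional" here means the JSCC symbol budget scales 1:1 with the semantic channel count zc across variants; a minimal sketch (the dict layout is illustrative, not the repo's config format):

```python
# Proportional semantic-symbol settings from the three released variants;
# the fixed 1:1 zc-to-jscc ratio is what "proportional" refers to.
VARIANTS = {
    "small":  {"zc": 16, "jscc": 16},
    "medium": {"zc": 32, "jscc": 32},
    "large":  {"zc": 64, "jscc": 64},
}

for name, cfg in VARIANTS.items():
    print(f"{name}: zc={cfg['zc']}, jscc={cfg['jscc']}, "
          f"ratio={cfg['jscc'] / cfg['zc']:.1f}")
```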
The released code supports:
- AWGN
- Rayleigh
- Nakagami
The final experiments were run under fixed train/test SNR settings:
- -10 dB
- 0 dB
- 10 dB
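As a rough illustration of the three channel types at these SNRs (not the repository's actual channel code), here is a real-valued numpy sketch; the Nakagami shape parameter `m=2.0` is an assumed example value, and all fading gains are normalized to unit average power:

```python
import numpy as np

rng = np.random.default_rng(0)

def fade(kind, size, m=2.0):
    """Draw fading amplitudes with unit average power (toy sketch)."""
    if kind == "AWGN":
        return np.ones(size)                    # no fading
    if kind == "Rayleigh":
        return np.sqrt(rng.gamma(1.0, 1.0, size))   # |h|^2 ~ Exp(1)
    if kind == "Nakagami":
        return np.sqrt(rng.gamma(m, 1.0 / m, size)) # |h|^2 ~ Gamma(m, 1/m)
    raise ValueError(kind)

def channel(x, kind, snr_db):
    """Apply fading, then AWGN scaled to the requested SNR in dB."""
    y = fade(kind, x.shape) * x
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return y + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

x = rng.normal(size=10_000)
for kind in ("AWGN", "Rayleigh", "Nakagami"):
    for snr_db in (-10, 0, 10):
        y = channel(x, kind, snr_db)
```

Because both fading models are normalized to unit average power, the SNR setting has the same meaning across channel types, which is what makes the fixed -10/0/10 dB matrix comparable.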
- Training and matrix evaluation: `scripts/run_architecture_fixed_snr_matrix.sh`
- Detection evaluation: `src/evaluate_yolo_detection.py`
- Adaptive inference: `src/adaptive_semantic_inference.py`
- Channel comparison report: `scripts/generate_channel_comparison_report.py`
- Conv ablation suite: `scripts/run_conv_ablation_suite.sh`
- Ablation summary: `scripts/summarize_conv_ablation.py`
Install dependencies:

```bash
pip install -r requirements.txt
```

The original experiments used:
- Python 3.10
- PyTorch
- torchvision
- Ultralytics YOLO
- pycocotools
- lpips
The experiments use COCO 2017 with the following layout:
```
dataset/
  coco/
    train2017/
    val2017/
    annotations/
      instances_train2017.json
      instances_val2017.json
```
```bash
DEVICE=cuda:0 \
SEMANTIC_ARCH=conv \
CHANNEL_TYPE=AWGN \
RUN_STAMP=march28prop \
IMAGE_SIZE=512 \
MAX_TRAIN_SAMPLES=40000 \
MAX_VAL_SAMPLES=2000 \
EPOCHS=20 \
BATCH_SIZE=2 \
TRAIN_SNRS="-10 0 10" \
TEST_SNRS="-10 0 10" \
bash scripts/run_architecture_fixed_snr_matrix.sh
```

For Rayleigh or Nakagami, set `CHANNEL_TYPE=Rayleigh` or `CHANNEL_TYPE=Nakagami`.
```bash
DEVICE=cuda:0 bash scripts/run_conv_ablation_suite.sh
python scripts/summarize_conv_ablation.py
```

- This release intentionally keeps only the final conv-based pipeline and the final selected outputs needed for open-source release and reproduction.
- The repository includes selected final checkpoints, not every intermediate training artifact.
- Large per-image raw logs from every experiment are not duplicated here; the released summaries and figures are the final curated results.
- Source code lives in `src/`, and the provided shell scripts automatically export `PYTHONPATH=src`.
- The final release includes selected checkpoints under `weights/` for the three channels and three train-SNR conditions.
- YOLO inference requires a local Ultralytics checkpoint, passed through `YOLO_WEIGHTS`.
- The release has been checked for command-line entry points and script-path consistency after the repo layout was flattened for open-source release.
If you use this repository, please cite it with the metadata in CITATION.cff.
