# ASCoT

Adaptive Semantic Communication with proportional DeepJSCC bandwidth allocation.

This repository contains the final conv-based code, selected checkpoints, experiment summaries, and figures used for the paper-ready release of the project.

*(Figure: ASCoT overview)*

## Quick start

```bash
git clone https://github.com/mark000071/ASCoT.git
cd ASCoT
pip install -r requirements.txt
```

Arrange the COCO 2017 dataset as follows:

```
dataset/
  coco/
    train2017/
    val2017/
    annotations/
      instances_train2017.json
      instances_val2017.json
```

Then reproduce the final conv experiment matrix with:

```bash
DEVICE=cuda:0 \
SEMANTIC_ARCH=conv \
CHANNEL_TYPE=AWGN \
RUN_STAMP=march28prop \
IMAGE_SIZE=512 \
MAX_TRAIN_SAMPLES=40000 \
MAX_VAL_SAMPLES=2000 \
EPOCHS=20 \
BATCH_SIZE=2 \
TRAIN_SNRS="-10 0 10" \
TEST_SNRS="-10 0 10" \
bash scripts/run_architecture_fixed_snr_matrix.sh
```

## What is included

- Final single-pass SemCom + DeepJSCC code
- Conv-based semantic encoder/decoder pipeline
- AWGN, Rayleigh, and Nakagami channel support
- Final three-channel experiment results
- Final ablation-study results
- Selected trained checkpoints for reproducibility
- Reproduction scripts and result-summary scripts

## Repository layout

- `src/`: core model, data loading, training, evaluation, and adaptive inference code
- `scripts/`: experiment launchers and report-generation scripts
- `weights/`: final conv checkpoints for AWGN, Rayleigh, and Nakagami
- `results/awgn/`: final AWGN summaries and selected result tables
- `results/channels/`: merged AWGN/Rayleigh/Nakagami comparison tables and plots
- `results/ablation/`: final conv-only ablation summaries and plots
- `figures/`: selected figures organized by topic
- `docs/`: experiment notes and paper-writing helper material

## Main model

The released main model is the single-pass pipeline:

```
x -> semantic encoder -> JSCC encoder -> channel -> JSCC decoder -> semantic decoder -> x_hat
```

The final released variant uses the conv-based semantic encoder/decoder with proportional semantic-symbol settings:

- small: `zc=16, jscc=16`
- medium: `zc=32, jscc=32`
- large: `zc=64, jscc=64`
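The stages above can be sketched end-to-end. This is an illustrative NumPy mock, not the repo's PyTorch implementation: the encoder/decoder stand-ins are random linear maps chosen only to show the data flow and shapes, and the helper names (`linear`, `awgn`, `CONFIGS`) are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    """Random per-pixel linear map standing in for a learned conv stage."""
    w = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    return lambda t: t @ w

# Proportional settings from the release: zc and the JSCC symbol count grow together.
CONFIGS = {"small": (16, 16), "medium": (32, 32), "large": (64, 64)}
zc, n_sym = CONFIGS["medium"]

semantic_encode = linear(3, zc)       # x -> semantic features
jscc_encode     = linear(zc, n_sym)   # features -> channel symbols
jscc_decode     = linear(n_sym, zc)   # noisy symbols -> features
semantic_decode = linear(zc, 3)       # features -> x_hat

def awgn(s, snr_db):
    """Add Gaussian noise scaled to the measured signal power and target SNR."""
    noise_p = np.mean(s**2) / 10 ** (snr_db / 10)
    return s + rng.normal(scale=np.sqrt(noise_p), size=s.shape)

x = rng.standard_normal((64, 64, 3))            # toy "image"
s = jscc_encode(semantic_encode(x))             # per-pixel channel symbols
x_hat = semantic_decode(jscc_decode(awgn(s, snr_db=10)))
```

The point of the proportional configs is that the semantic width `zc` and the JSCC bandwidth move together, so the sketch only has to change one tuple to switch between small/medium/large.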

## Channels

The released code supports:

- AWGN
- Rayleigh
- Nakagami

The final experiments were run under fixed train/test SNR settings:

- -10 dB
- 0 dB
- 10 dB
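The three channel models above can be sketched generically. This is a minimal real-valued NumPy illustration of AWGN, Rayleigh, and Nakagami-m fading at a target SNR in dB, not the repo's exact implementation (which lives in `src/`); the function names and the `m`, `omega` defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(x, snr_db):
    """y = x + n, noise power set from the measured signal power and target SNR."""
    noise_p = np.mean(x**2) / 10 ** (snr_db / 10)
    return x + rng.normal(scale=np.sqrt(noise_p), size=x.shape)

def rayleigh(x, snr_db):
    """y = |h| * x + n, with |h| drawn from a unit-power complex Gaussian."""
    h = np.abs(rng.normal(0, np.sqrt(0.5), x.shape)
               + 1j * rng.normal(0, np.sqrt(0.5), x.shape))
    return awgn(h * x, snr_db)

def nakagami(x, snr_db, m=2.0, omega=1.0):
    """y = h * x + n, with |h|^2 ~ Gamma(m, omega/m), i.e. Nakagami-m fading."""
    h = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=x.shape))
    return awgn(h * x, snr_db)

x = rng.standard_normal(200_000)
noise = awgn(x, snr_db=0.0) - x   # at 0 dB, noise power ~= signal power
```

A quick sanity check on the samplers: with `m = 1` the Nakagami model reduces to Rayleigh fading, and all three fading gains above have unit average power, so the SNR definition stays comparable across channels.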

## Key scripts

- Training and matrix evaluation: `scripts/run_architecture_fixed_snr_matrix.sh`
- Detection evaluation: `src/evaluate_yolo_detection.py`
- Adaptive inference: `src/adaptive_semantic_inference.py`
- Channel comparison report: `scripts/generate_channel_comparison_report.py`
- Conv ablation suite: `scripts/run_conv_ablation_suite.sh`
- Ablation summary: `scripts/summarize_conv_ablation.py`

## Environment

Install dependencies:

```bash
pip install -r requirements.txt
```

The original experiments used:

- Python 3.10
- PyTorch
- torchvision
- Ultralytics YOLO
- pycocotools
- lpips

## Dataset

The experiments use COCO 2017 with the following layout:

```
dataset/
  coco/
    train2017/
    val2017/
    annotations/
      instances_train2017.json
      instances_val2017.json
```
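Before launching a long run it can be worth confirming the layout above is in place. The helper below is a hypothetical convenience, not part of the repo; the `EXPECTED` paths come straight from the layout shown.

```python
from pathlib import Path

# Paths required by the layout above, relative to the dataset root.
EXPECTED = [
    "coco/train2017",
    "coco/val2017",
    "coco/annotations/instances_train2017.json",
    "coco/annotations/instances_val2017.json",
]

def check_coco_layout(root="dataset"):
    """Return the list of expected COCO paths missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

missing = check_coco_layout()
if missing:
    print("Missing dataset paths:", *missing, sep="\n  ")
```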

## Reproducing the final conv experiments

```bash
DEVICE=cuda:0 \
SEMANTIC_ARCH=conv \
CHANNEL_TYPE=AWGN \
RUN_STAMP=march28prop \
IMAGE_SIZE=512 \
MAX_TRAIN_SAMPLES=40000 \
MAX_VAL_SAMPLES=2000 \
EPOCHS=20 \
BATCH_SIZE=2 \
TRAIN_SNRS="-10 0 10" \
TEST_SNRS="-10 0 10" \
bash scripts/run_architecture_fixed_snr_matrix.sh
```

For Rayleigh or Nakagami fading, set `CHANNEL_TYPE=Rayleigh` or `CHANNEL_TYPE=Nakagami` instead.

## Reproducing the conv ablation suite

```bash
DEVICE=cuda:0 bash scripts/run_conv_ablation_suite.sh
python scripts/summarize_conv_ablation.py
```

## Notes

- This release intentionally keeps only the final conv-based pipeline and the final selected outputs needed for open-source release and reproduction.
- The repository includes selected final checkpoints, not every intermediate training artifact.
- Large per-image raw logs from every experiment are not duplicated here; the released summaries and figures are the final curated results.

## Reproducibility notes

- Source code lives in `src/`, and the provided shell scripts export `PYTHONPATH=src` automatically.
- Selected checkpoints under `weights/` cover the three channels and three train-SNR conditions.
- YOLO inference requires a local Ultralytics checkpoint, passed via `YOLO_WEIGHTS`.
- Command-line entry points and script paths were re-checked after the repository layout was flattened for this release.

## Citation

If you use this repository, please cite it using the metadata in `CITATION.cff`.
