Merged
2 changes: 1 addition & 1 deletion doctr/models/_utils.py
@@ -184,7 +184,7 @@ def invert_data_structure(
         dictionary of list when x is a list of dictionaries or a list of dictionaries when x is dictionary of lists
     """
     if isinstance(x, dict):
-        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionnary should have the same length."
+        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionary should have the same length."
         return [dict(zip(x, t)) for t in zip(*x.values())]
     elif isinstance(x, list):
         return {k: [dic[k] for dic in x] for k in x[0]}
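For context, the lines touched above implement a batch/sample transposition: a dict of equal-length lists becomes a list of dicts, and vice versa. A standalone sketch of the same logic (reproduced here for illustration, not a substitute for the doctr source):

```python
def invert_data_structure(x):
    """Convert a dict of equal-length lists into a list of dicts, and vice versa."""
    if isinstance(x, dict):
        # Every list must have the same length for the row-wise zip to be lossless
        assert len({len(v) for v in x.values()}) == 1, "All the lists in the dictionary should have the same length."
        return [dict(zip(x, t)) for t in zip(*x.values())]
    elif isinstance(x, list):
        return {k: [dic[k] for dic in x] for k in x[0]}

batched = {"words": ["hello", "world"], "scores": [0.9, 0.8]}
per_item = invert_data_structure(batched)
# per_item == [{"words": "hello", "scores": 0.9}, {"words": "world", "scores": 0.8}]
```

Applying the function twice round-trips back to the original structure, which is why the equal-length assertion matters: ragged lists would silently drop entries in the `zip`.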
2 changes: 1 addition & 1 deletion doctr/models/recognition/crnn/pytorch.py
@@ -82,7 +82,7 @@ def ctc_best_path(

     def __call__(self, logits: torch.Tensor) -> list[tuple[str, float]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape (N, C + 1, seq_len)
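The decoding these docstrings refer to is CTC best-path (greedy) decoding: take the argmax class at each timestep, collapse consecutive repeats, then drop the blank token. A minimal plain-Python sketch of the idea, not doctr's implementation; the vocabulary and blank index below are illustrative assumptions:

```python
def ctc_best_path_decode(logit_rows, vocab, blank_idx):
    """Greedy CTC decode: argmax per timestep, collapse repeats, strip blanks.

    logit_rows: per-timestep score lists, shape (seq_len, num_classes).
    vocab: string mapping class index -> character (illustrative).
    """
    # 1. Best class at every timestep
    best = [max(range(len(row)), key=row.__getitem__) for row in logit_rows]
    # 2. Collapse consecutive duplicates and 3. remove the blank symbol
    decoded = []
    prev = None
    for idx in best:
        if idx != prev and idx != blank_idx:
            decoded.append(vocab[idx])
        prev = idx
    return "".join(decoded)

# Toy example: vocab of 2 characters plus a blank at index 2
scores = [
    [0.9, 0.0, 0.1],  # 'a'
    [0.8, 0.1, 0.1],  # 'a' again -> collapsed with the previous step
    [0.1, 0.1, 0.8],  # blank, acts as a separator
    [0.7, 0.2, 0.1],  # 'a' kept, because a blank intervened
    [0.1, 0.9, 0.0],  # 'b'
]
print(ctc_best_path_decode(scores, vocab="ab", blank_idx=2))  # -> "aab"
```

The blank-as-separator behavior is what lets CTC emit genuine doubled characters ("ll", "oo") while still merging repeated frames of the same character.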
2 changes: 1 addition & 1 deletion doctr/models/recognition/crnn/tensorflow.py
@@ -59,7 +59,7 @@ def __call__(
         top_paths: int = 1,
     ) -> list[tuple[str, float]] | list[tuple[list[str] | list[float]]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape BATCH_SIZE X SEQ_LEN X NUM_CLASSES + 1
2 changes: 1 addition & 1 deletion doctr/models/recognition/master/pytorch.py
@@ -176,7 +176,7 @@ def forward(
             return_preds: if True, decode logits
 
         Returns:
-            A dictionnary containing eventually loss, logits and predictions.
+            A dictionary containing eventually loss, logits and predictions.
         """
         # Encode
         features = self.feat_extractor(x)["features"]
2 changes: 1 addition & 1 deletion doctr/models/recognition/master/tensorflow.py
@@ -165,7 +165,7 @@ def call(
             **kwargs: keyword arguments passed to the decoder
 
         Returns:
-            A dictionnary containing eventually loss, logits and predictions.
+            A dictionary containing eventually loss, logits and predictions.
         """
         # Encode
         feature = self.feat_extractor(x, **kwargs)
2 changes: 1 addition & 1 deletion doctr/models/recognition/viptr/pytorch.py
@@ -70,7 +70,7 @@ def ctc_best_path(

     def __call__(self, logits: torch.Tensor) -> list[tuple[str, float]]:
         """Performs decoding of raw output with CTC and decoding of CTC predictions
-        with label_to_idx mapping dictionnary
+        with label_to_idx mapping dictionary
 
         Args:
             logits: raw output of the model, shape (N, C + 1, seq_len)
2 changes: 1 addition & 1 deletion references/detection/README.md
@@ -29,7 +29,7 @@ python references/detection/train_pytorch.py db_resnet50 --train_path path/to/yo
 
 We now use the built-in [`torchrun`](https://pytorch.org/docs/stable/elastic/run.html) launcher to spawn your DDP workers. `torchrun` will set all the necessary environment variables (`LOCAL_RANK`, `RANK`, etc.) for you. Arguments are the same than the ones from single GPU, except:
 
-- `--backend`: you can specify another `backend` for `DistribuedDataParallel` if the default one is not available on
+- `--backend`: you can specify another `backend` for `DistributedDataParallel` if the default one is not available on
 your operating system. Fastest one is `nccl` according to [PyTorch Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
 
 #### Key `torchrun` parameters:
2 changes: 1 addition & 1 deletion references/recognition/README.md
@@ -29,7 +29,7 @@ python references/recognition/train_pytorch.py crnn_vgg16_bn --train_path path/t
 
 We now use the built-in [`torchrun`](https://pytorch.org/docs/stable/elastic/run.html) launcher to spawn your DDP workers. `torchrun` will set all the necessary environment variables (`LOCAL_RANK`, `RANK`, etc.) for you. Arguments are the same than the ones from single GPU, except:
 
-- `--backend`: you can specify another `backend` for `DistribuedDataParallel` if the default one is not available on
+- `--backend`: you can specify another `backend` for `DistributedDataParallel` if the default one is not available on
 your operating system. Fastest one is `nccl` according to [PyTorch Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
 
 #### Key `torchrun` parameters: