
Inference API v2 - design docs kick-off#2277

Draft
PawelPeczek-Roboflow wants to merge 1 commit into main from design/first-scratch-of-new-api-v2

Conversation

@PawelPeczek-Roboflow
Collaborator

What does this PR do?

Related Issue(s):

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Refactoring (no functional changes)
  • Other:

Testing

  • I have tested this change locally
  • I have added/updated tests for this change

Test details:

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code where necessary, particularly in hard-to-understand areas
  • My changes generate no new warnings or errors
  • I have updated the documentation accordingly (if applicable)

Additional Context

Contributor

@dkosowski87 left a comment


1/3 reviewed

* `GET /v2/models` - discover loaded models
* `DELETE /v2/models` - unload all models
* `POST /v2/models/load` - load given model
* `POST /v2/models/unload` - unload given model

Suggested change
* `POST /v2/models/unload` - unload given model
* `DELETE /v2/models/unload` - unload given model


* `POST /v2/models/infer` - predict from model
* `GET /v2/models/interface` - discover model interface
* `GET /v2/models/compatibility` - discover models compatible with current server configuration

If `GET /v2/models` means "discover loaded models", then `GET /v2/models/compatibility` seems confusing, as it doesn't operate on the loaded models but, as I understand, returns a broader list. Maybe:

  • `GET /v2/models?state=loaded`
  • `GET /v2/models?state=compatible`
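The `state` filter proposed above could be served from the model registry with a simple lookup. A minimal sketch (the registry contents, model names, and state flags are all hypothetical):

```python
from typing import Optional

# Hypothetical stand-in for the server's model registry; the states
# "loaded" and "compatible" follow the query-param naming proposed above.
REGISTRY = {
    "yolov8n-640": {"loaded": True, "compatible": True},
    "clip-vit-b32": {"loaded": False, "compatible": True},
    "legacy-model": {"loaded": False, "compatible": False},
}

def list_models(state: Optional[str] = None) -> list:
    """Return model ids, optionally filtered by the `state` query param."""
    if state is None:
        return sorted(REGISTRY)
    if state not in ("loaded", "compatible"):
        raise ValueError(f"unknown state filter: {state}")
    return sorted(m for m, flags in REGISTRY.items() if flags[state])

print(list_models(state="loaded"))      # → ['yolov8n-640']
print(list_models(state="compatible"))  # → ['clip-vit-b32', 'yolov8n-640']
```

An unknown `state` value maps naturally to a 400 response in the actual endpoint.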


## Models endpoints

* `POST /v2/models/infer` - predict from model

One broader design question: don't we see any value in separating model management endpoints from prediction endpoints? Similar to TorchServe, where we have different ports for both. This would probably only make sense in a self-hosted environment, where a company admin manages model loading and unloading and we have a flag like SMART_MODEL_MANAGEMENT_ON_PREDICT=false so that the model manager doesn't decide on loading/unloading models on predict requests.
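The gating that flag implies could be sketched like this (the flag name comes from the comment above; the resolver function and its behavior are hypothetical):

```python
import os

def resolve_model(model_id: str, loaded: dict):
    """Return a loaded model, auto-loading only when the flag allows it.

    When SMART_MODEL_MANAGEMENT_ON_PREDICT is "false", predict requests
    must not trigger loading: the request fails instead, and the
    admin-managed load endpoint is the only way to load models.
    """
    smart = os.getenv("SMART_MODEL_MANAGEMENT_ON_PREDICT", "true").lower() == "true"
    if model_id in loaded:
        return loaded[model_id]
    if not smart:
        raise RuntimeError(
            f"model '{model_id}' is not loaded and auto-loading on predict is disabled"
        )
    loaded[model_id] = f"<model {model_id}>"  # stand-in for real model loading
    return loaded[model_id]
```

In the real server the RuntimeError would map to a 4xx response telling the client to ask an admin to load the model.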

curl -X POST https://serverless.roboflow.com/v2/models/infer \
--data-urlencode 'model_id=whatever/model-id/we?can?figure-out' \
-F "image=@photo.jpg;type=image/jpeg" \
-F 'inputs={"confidence": 0.5};type=application/json'

This would probably also map nicely to params that are strictly assigned to elements of the batch, assuming order needs to be kept:

-F 'inputs=[{"confidence":0.5}, {"confidence":0.4}];type=application/json' \
-F "image=@photo-1.jpg" \
-F "image=@photo-2.jpg"

Mixing scalar and batch:

-F 'inputs=[{"confidence":0.5, "fuse_nms": true}, {"confidence":0.4, "fuse_nms" : true}];type=application/json' \
-F "image=@photo-1.jpg" \
-F "image=@photo-2.jpg"

Alternatively:

-F 'inputs=[{"confidence":0.5}, {"confidence":0.4}];type=application/json' \
-F 'defaults={"fuse_nms": true};type=application/json' \
-F "image=@photo-1.jpg" \
-F "image=@photo-2.jpg"

So we don't duplicate, but also separate batch inputs from scalars.
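Server-side, the `defaults` + per-element `inputs` split above can be resolved with a plain dict merge, per-element values winning over defaults. A sketch (field names follow the multipart example above):

```python
# Merge shared `defaults` into per-image `inputs`; keys present in an
# individual element override the defaults for that element.
def merge_inputs(inputs: list, defaults: dict) -> list:
    return [{**defaults, **item} for item in inputs]

merged = merge_inputs(
    inputs=[{"confidence": 0.5}, {"confidence": 0.4}],
    defaults={"fuse_nms": True},
)
# → [{'fuse_nms': True, 'confidence': 0.5},
#    {'fuse_nms': True, 'confidence': 0.4}]
```

Element i of the merged list would then be paired with the i-th `image` part, which is why order needs to be kept.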

```

> [!IMPORTANT]
> Since we have `inference-models` and one model may have multiple model-packages, `model_package_id` is a natural candidate for a structured query param - letting clients specify which exact model package they want, altering the default auto-loading choice. We can also decide that certain auto-loading parameters should be passable (although we need to decide on that relatively fast due to engineering work in progress and impact on the model manager).

Probably all relevant parameters. Not sure if choosing only certain parameters makes sense; if the client is advanced enough to decide on those, they probably want the option of full control.

```

> [!IMPORTANT]
> There is a security issue **embedded in accepting URLs as inputs - especially on the platform.** We have accepted the risk of being a middle-man in a DDoS attack so far, and likely that will remain the case in the future (for user convenience), but it would be good for all parties involved in the discussion to recognize and acknowledge this risk - to avoid surprises in the future.

I see we have the ALLOW_URL_INPUT_WITHOUT_FQDN and ALLOW_NON_HTTPS_URL_INPUT vars. This, plus timeouts and image size checks, is imo ok.
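The pre-flight checks those vars imply could look roughly like this (the var names come from the comment above; the helper itself and its rules are hypothetical):

```python
from urllib.parse import urlparse

def validate_input_url(url: str, allow_without_fqdn: bool = False,
                       allow_non_https: bool = False) -> None:
    """Reject URL inputs that the server configuration disallows.

    Mirrors the ALLOW_URL_INPUT_WITHOUT_FQDN / ALLOW_NON_HTTPS_URL_INPUT
    settings mentioned above. Timeouts and image-size limits would be
    enforced on the actual download, not here.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https" and not allow_non_https:
        raise ValueError("non-HTTPS URL inputs are disabled")
    if "." not in parsed.netloc and not allow_without_fqdn:
        raise ValueError("URL inputs without a FQDN are disabled")

validate_input_url("https://example.com/cat.jpg")  # passes
```

Note this does not address SSRF toward internal addresses; a production check would also resolve the host and reject private IP ranges.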

Contributor

@dkosowski87 left a comment


2/3

{
"type": "roboflow-classification-compact-v1",
"class_names": ["cat", "dog"],
"confidences": [0.6, 0.4],

Imo top_class is a little bit misleading: one could assume that the top class is the highest-confidence class, irrespective of the confidence threshold. I would prefer to have the prediction part split from the decision more clearly:

{
    "type": "roboflow-classification-compact-v1",
    "class_names": ["cat", "dog", "mouse"],
    "confidences": [0.3, 0.4, 0.3],
    "top_class_id": 1,

    "confidence_threshold": 0.5,
    "predicted_class_ids": []
}

Returning the confidence_threshold is imo needed, given that we use default values.

We can omit the top_class_* fields, assuming that if the user is advanced enough to use a value below the specified threshold, they are also savvy enough to take the argmax or do an ordering ;)
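The prediction/decision split proposed above can be sketched as a pure function over the raw scores (field names follow the JSON above; the function itself is illustrative):

```python
def classification_response(class_names, confidences, confidence_threshold):
    """Build the proposed compact response: raw prediction plus decision.

    top_class_id is a pure argmax over confidences, irrespective of the
    threshold; predicted_class_ids is the decision after thresholding.
    """
    top = max(range(len(confidences)), key=confidences.__getitem__)
    return {
        "type": "roboflow-classification-compact-v1",
        "class_names": class_names,
        "confidences": confidences,
        "top_class_id": top,
        "confidence_threshold": confidence_threshold,
        "predicted_class_ids": [
            i for i, c in enumerate(confidences) if c >= confidence_threshold
        ],
    }

resp = classification_response(["cat", "dog", "mouse"], [0.3, 0.4, 0.3], 0.5)
# top_class_id is 1 (the argmax), predicted_class_ids is [] (nothing passes 0.5)
```

This makes the "nothing passed the threshold" case explicit instead of forcing a misleading top_class.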

"top": [{ "class_name": "cat", "class_id": 1, "confidence": 0.92 }]
}
```
_Bonus question - how should filtering work here - just discard **top**, or alter **candidates**?_

Filter the decision, leave the prediction. candidates should always be present, as this is what the model returned.

"type": "roboflow-classification-compact-v1",
"class_names": ["cat", "dog"],
"confidences": [0.6, 0.4],
"detected_classes_ids": [0]

detected sounds like this was object detection. Imo predicted, plus confidence_threshold, and we are aligned with the single-(multi-)class case.

Contributor

@dkosowski87 left a comment


3/3 without workflows, need to separately think about them

}
```

Proposed **_rich_** representation:

What should be the role of the rich representation in your opinion?

[0, 1, 2, 3],
[0, 1, 2, 3]
],
"class_id": [0, 1],

class_ids, confidences: let's keep these plural, as in the classification responses.

"class_id": [0, 1],
"confidence": [0.33, 0.64],
"tracker_id": [0, 1],
}

This is a slippery slope, but since I recommended including the confidence_threshold in the classification response, in order to be consistent we would probably need to provide here:

    "confidence_threshold": 0.3,
    "iou_threshold": 0.7,
    "max_detections": 100

Now I'm thinking that this could be included in the rich representation. So I get only the minimum info in the compact one, but the rich representation includes information about:

  • what directly changed the output prediction
  • useful for debugging and interpretation
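Concretely, the rich variant could be the compact payload plus a post-processing section carrying the parameters that directly shaped the output. A sketch (the `post_processing` grouping is a suggestion, not part of the proposal; values are illustrative):

```python
# Compact response stays minimal; the rich variant adds the
# post-processing parameters for debugging and interpretation.
compact = {
    "class_ids": [0, 1],
    "confidences": [0.33, 0.64],
}

rich = {
    **compact,
    "post_processing": {
        "confidence_threshold": 0.3,
        "iou_threshold": 0.7,
        "max_detections": 100,
    },
}
```

Clients that only care about the detections keep parsing the same fields in both representations.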

],
"class_id": [0, 1],
"confidence": [0.33, 0.64],
"tracker_id": [0, 1],

Would the tracker_id be optional?

```json
{
"type": "roboflow-semantic-segmentation-compact-v1",
"pixels_scores": [[0.3, 0.4, ...]], # maybe due to the size, available on demand only?

CxHxW - this might be huge, C times larger than responses from depth estimation. I see a use case where someone is doing active learning and needs these pixel scores to identify uncertain areas, but in that case using a dedicated workflow would be more appropriate.
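For a sense of scale, a back-of-the-envelope calculation (the class count and resolution are illustrative, not from the proposal):

```python
# Raw volume of per-pixel scores for semantic segmentation: C x H x W
# float32 values, before any JSON text inflation (which makes it far
# larger than the binary size computed here).
num_classes, height, width = 3, 640, 640
raw_bytes = num_classes * height * width * 4  # 4 bytes per float32

print(raw_bytes)                  # 4915200 bytes
print(raw_bytes / 1024 / 1024)    # 4.6875 MiB
```

Even a modest 3-class model at 640x640 is nearly 5 MiB of raw scores per image, which supports making `pixels_scores` available on demand only.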
