
[BERT/PyT] specify GPU for triton (#666)

Sharath T S, 5 years ago
parent
commit 8588e9834c
1 changed file with 1 addition and 1 deletion:

+ 1 − 1   PyTorch/LanguageModeling/BERT/triton/README.md

@@ -102,7 +102,7 @@ To make the machine wait until the server is initialized, and the model is ready
 
 ## Performance
 
-The numbers below are averages, measured on Triton, with [static batching](https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/model_configuration.html#scheduling-and-batching). 
+The numbers below are averages, measured on Triton on V100 32G GPU, with [static batching](https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/model_configuration.html#scheduling-and-batching). 
 
 | Format | GPUs | Batch size | Sequence length | Throughput - FP32(sequences/sec) | Throughput - mixed precision(sequences/sec) | Throughput speedup (mixed precision/FP32)  |
 |--------|------|------------|-----------------|----------------------------------|---------------------------------------------|--------------------------------------------|
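The changed README line references Triton's [static batching](https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/model_configuration.html#scheduling-and-batching). In model-configuration terms this simply means a `config.pbtxt` that sets `max_batch_size` and includes no `dynamic_batching` block, since static batching is Triton's default scheduler. A minimal sketch follows; the model name, tensor names, and shapes here are hypothetical and not taken from the repository:

```protobuf
# config.pbtxt — sketch of a statically batched Triton model.
# Static batching is the default: without a dynamic_batching block,
# the server executes requests at whatever batch size the client sends,
# up to max_batch_size.
name: "bert"                      # hypothetical model name
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "input_ids"             # hypothetical tensor name
    data_type: TYPE_INT64
    dims: [ 384 ]                 # sequence length (assumed)
  }
]
output [
  {
    name: "logits"                # hypothetical tensor name
    data_type: TYPE_FP32
    dims: [ 2 ]
  }
]
```

With this configuration, the throughput numbers in the table above would correspond to fixed client-side batch sizes, one row per batch size, rather than server-side request coalescing.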