Dheeraj Peri 5 years ago
Parent
Commit
57d9cc0444

+ 1 - 1
TensorFlow/Classification/ConvNets/resnet50v1.5/README.md

@@ -383,7 +383,7 @@ a <a href="https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/contrib/
 for training QAT networks is to train a model until convergence and then finetune with the quantization layers. It is recommended that QAT is performed on a single GPU.
 
 * For 1 GPU
-    * Command: `sh resnet50v1.5/training/QAT/GPU1_RN50_QAT.sh <path to pre-trained ckpt dir> <path to dataset directory> <result_directory>`
+    * Command: `sh resnet50v1.5/training/GPU1_RN50_QAT.sh <path to pre-trained ckpt dir> <path to dataset directory> <result_directory>`
         
 It is recommended to finetune a model with quantization nodes rather than train a QAT model from scratch. The latter can also be done by setting the `quant_delay` parameter.
 `quant_delay` is the number of steps after which quantization nodes are added during QAT. When fine-tuning, `quant_delay` is set to 0.
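For training from scratch, the command would look similar to the one in the deleted script but with a non-zero quantization delay. A hypothetical sketch, assuming `main.py` exposes a `--quant_delay` flag (the flag name and the step count are assumptions, not confirmed by this commit):

```shell
# Hypothetical: QAT from scratch, inserting quantization nodes after
# 5000 warm-up steps instead of finetuning from a checkpoint.
# (--quant_delay is assumed; other flags mirror GPU1_RN50_QAT.sh.)
python main.py --mode=train_and_evaluate --batch_size=32 \
    --quantize --symmetric --use_qdq --quant_delay=5000 \
    --data_dir=<path to dataset directory> --results_dir=<result_directory>
```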

+ 0 - 4
TensorFlow/Classification/ConvNets/resnet50v1.5/training/QAT/GPU1_RN50_QAT.sh

@@ -1,4 +0,0 @@
-# This script does Quantization aware training of Resnet-50 by finetuning on the pre-trained model using 1 GPU and a batch size of 32.
-# Usage ./GPU1_RN50_QAT.sh <path to the pre-trained model> <path to dataset> <path to results directory>
-
-python main.py --mode=train_and_evaluate --batch_size=32 --lr_warmup_epochs=1 --quantize --symmetric --use_qdq --label_smoothing 0.1 --lr_init=0.00005 --momentum=0.875 --weight_decay=3.0517578125e-05 --finetune_checkpoint=$1 --data_dir=$2 --results_dir=$3 --num_iter 10 --data_format NHWC