diff --git a/docs/build_docs/build.sh b/docs/build_docs/build.sh
index d63390cb0..2753a77c8 100755
--- a/docs/build_docs/build.sh
+++ b/docs/build_docs/build.sh
@@ -91,6 +91,7 @@
 cp -f "../CODE_OF_CONDUCT.md" "./source/"
 sed -i 's/.md/.html/g' ./source/get_started.md
 sed -i 's/.md/.html/g' ./source/docs/install/install_for_cpp.md
+sed -i 's/.md/.html/g' ./source/examples/README.md
 sed -i 's/pluggable-device-for-tensorflow.html/pluggable-device-for-tensorflow.md/g' ./source/get_started.md
 sed -i 's/third-party-programs\/THIRD-PARTY-PROGRAMS/https:\/\/github.com\/intel\/intel-extension-for-tensorflow\/blob\/main\/third-party-programs\/THIRD-PARTY-PROGRAMS/g' ./source/get_started.md
diff --git a/docs/build_docs/source/index.rst b/docs/build_docs/source/index.rst
index ff3a3d8d7..3c31f0daa 100644
--- a/docs/build_docs/source/index.rst
+++ b/docs/build_docs/source/index.rst
@@ -9,7 +9,7 @@ Welcome to Intel ® Extension for TensorFlow* documentation!
    docs/guide/infrastructure.md
    docs/guide/features.rst
    docs/install/installation_guide.rst
-   examples/examples.md
+   examples/README.md
    docs/guide/practice_guide.md
    docs/guide/FAQ.md
    docs/community/releases.md
diff --git a/examples/README.md b/examples/README.md
index 0c4074dea..4ee5eb283 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -1,19 +1,26 @@
 # Examples
 
-A wide variety of examples are provided to demonstrate the usage of Intel® Extension for TensorFlow*.
+## Prepare for Running
+
+Before running training or inference code based on Intel® Extension for TensorFlow*, several preparation steps need to be executed. Please refer to the [Common Guide for Running](./common_guide_running.md).
+
+## Examples
+
+A wide variety of examples are provided to demonstrate the usage of Intel® Extension for TensorFlow*.
 
 |Name|Description|Hardware|
 |-|-|-|
 |[Quick Example](quick_example.md)|Quick example to verify Intel® Extension for TensorFlow* and running environment.|CPU & GPU|
-|[ResNet50 Inference](./infer_resnet50)|ResNet50 inference on Intel CPU or GPU without code changes.|CPU & GPU|
-|[BERT Training for Classifying Text](./train_bert)|BERT training with Intel® Extension for TensorFlow* on Intel CPU or GPU.
Use the TensorFlow official example without code change.|CPU & GPU|
-|[Speed up Inference of Inception v4 by Advanced Automatic Mixed Precision via Docker Container or Bare Metal](./infer_inception_v4_amp)|Test and compare the performance of inference with FP32 and Advanced Automatic Mixed Precision (AMP) (mix BF16/FP16 and FP32).
Shows the acceleration of inference by Advanced AMP on Intel CPU and GPU via Docker Container or Bare Metal.|CPU & GPU|
-|[Accelerate AlexNet by Quantization with Intel® Extension for TensorFlow*](./accelerate_alexnet_by_quantization)| An end-to-end example to show a pipeline to build up a CNN model to
recognize handwriting number and speed up AI model with quantization
by Intel® Neural Compressor and Intel® Extension for TensorFlow* on Intel GPU.|GPU|
-|[Accelerate Deep Learning Training and Inference for Model Zoo Workloads on Intel GPU](./model_zoo_example)|Examples on running Model Zoo workloads on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
-|[Quantize Inception V3 by Intel® Extension for TensorFlow* on Intel® Xeon®](./quantize_inception_v3)|An end-to-end example to show how Intel® Extension for TensorFlow* provides quantization feature by cooperating with Intel® Neural Compressor and oneDNN Graph. It will provide better quantization: better performance and accuracy loss is in controlled.|CPU|
-|[ResNet50 and Mnist training with Horovod](./train_horovod)|ResNet50 and Mnist distributed training examples on Intel GPU.|GPU|
-|[Stable Diffusion Inference for Text2Image on Intel GPU](./stable_diffussion_inference)|Example for running Stable Diffusion Text2Image inference on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
-|[Accelerate ResNet50 Training by XPUAutoShard on Intel GPU](./train_resnet50_with_autoshard)|Example on running ResNet50 training on Intel GPU with the XPUAutoShard feature.|GPU|
-|[Accelerate BERT-Large Pretraining on Intel GPU](./pretrain_bert)|Example on running BERT-Large pretraining on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
-|[Accelerate Mask R-CNN Training w/o horovod on Intel GPU](./train_maskrcnn)|Example on running Mask R-CNN training on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
-|[Accelerate 3D-UNet Training w/o horovod for medical image segmentation on Intel GPU](./train_3d_unet)|Example on running 3D-UNet training for medical image segmentation on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
+|[ResNet50 Inference](./infer_resnet50/README.md)|ResNet50 inference on Intel CPU or GPU without code changes.|CPU & GPU|
+|[BERT Training for Classifying Text](./train_bert/README.md)|BERT training with Intel® Extension for TensorFlow* on Intel CPU or GPU.
Uses the TensorFlow official example without code change.|CPU & GPU|
+|[Speed up Inference of Inception v4 by Advanced Automatic Mixed Precision via Docker Container or Bare Metal](./infer_inception_v4_amp/README.md)|Test and compare the performance of inference with FP32 and Advanced Automatic Mixed Precision (AMP) (mixed BF16/FP16 and FP32).
Shows the acceleration of inference by Advanced AMP on Intel CPU and GPU via Docker Container or Bare Metal.|CPU & GPU|
+|[Accelerate AlexNet by Quantization with Intel® Extension for TensorFlow*](./accelerate_alexnet_by_quantization/README.md)|An end-to-end example showing a pipeline to build up a CNN model to
recognize handwritten digits and speed up the AI model with quantization
by Intel® Neural Compressor and Intel® Extension for TensorFlow* on Intel GPU.|GPU|
+|[Accelerate Deep Learning Training and Inference for Model Zoo Workloads on Intel GPU](./model_zoo_example/README.md)|Examples on running Model Zoo workloads on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
+|[Quantize Inception V3 by Intel® Extension for TensorFlow* on Intel® Xeon®](./quantize_inception_v3/README.md)|An end-to-end example showing how Intel® Extension for TensorFlow* provides the quantization feature by cooperating with Intel® Neural Compressor and oneDNN Graph. It provides better quantization: better performance with controlled accuracy loss.|CPU|
+|[Mnist training with Intel® Optimization for Horovod*](./train_horovod/mnist/README.md)|Mnist distributed training example on Intel GPU.|GPU|
+|[ResNet50 training with Intel® Optimization for Horovod*](./train_horovod/resnet50/README.md)|ResNet50 distributed training example on Intel GPU.|GPU|
+|[Stable Diffusion Inference for Text2Image on Intel GPU](./stable_diffussion_inference/README.md)|Example for running Stable Diffusion Text2Image inference on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
+|[Accelerate ResNet50 Training by XPUAutoShard on Intel GPU](./train_resnet50_with_autoshard/README.md)|Example on running ResNet50 training on Intel GPU with the XPUAutoShard feature.|GPU|
+|[Accelerate BERT-Large Pretraining on Intel GPU](./pretrain_bert/README.md)|Example on running BERT-Large pretraining on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
+|[Accelerate Mask R-CNN Training w/o horovod on Intel GPU](./train_maskrcnn/README.md)|Example on running Mask R-CNN training on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
+|[Accelerate 3D-UNet Training w/o horovod for medical image segmentation on Intel GPU](./train_3d_unet/README.md)|Example on running 3D-UNet training for medical image segmentation on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
diff --git a/examples/examples.md b/examples/examples.md
deleted file mode 100644
index d92e572b2..000000000
--- a/examples/examples.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Examples
-
-A wide variety of examples are provided to demonstrate the usage of Intel® Extension for TensorFlow*.
-
-|Name|Description|Hardware|
-|-|-|-|
-|[Quick Example](quick_example.html)|Quick example to verify Intel® Extension for TensorFlow* and running environment.|CPU & GPU|
-|[ResNet50 Inference](./infer_resnet50/README.html)|ResNet50 inference on Intel CPU or GPU without code changes.|CPU & GPU|
-|[BERT Training for Classifying Text](./train_bert/README.html)|BERT training with Intel® Extension for TensorFlow* on Intel CPU or GPU.
Use the TensorFlow official example without code change.|CPU & GPU|
-|[Speed up Inference of Inception v4 by Advanced Automatic Mixed Precision via Docker Container or Bare Metal](./infer_inception_v4_amp/README.html)|Test and compare the performance of inference with FP32 and Advanced Automatic Mixed Precision (AMP) (mix BF16/FP16 and FP32).
Shows the acceleration of inference by Advanced AMP on Intel® CPU and GPU via Docker Container or Bare Metal.|CPU & GPU|
-|[Accelerate AlexNet by Quantization with Intel® Extension for TensorFlow*](./accelerate_alexnet_by_quantization/README.html)| An end-to-end example to show a pipeline to build up a CNN model to
recognize handwriting number and speed up AI model with quantization
by Intel® Neural Compressor and Intel® Extension for TensorFlow* on Intel GPU.|GPU|
-|[Accelerate Deep Learning Training and Inference for Model Zoo Workloads on Intel GPU](./model_zoo_example)|Examples on running Model Zoo workloads on Intel GPU with the optimizations from Intel® Extension for TensorFlow*.|GPU|
-|[Quantize Inception V3 by Intel® Extension for TensorFlow* on Intel® Xeon®](./quantize_inception_v3/README.html)|An end-to-end example to show how Intel® Extension for TensorFlow* provides quantization feature by cooperating with Intel® Neural Compressor and oneDNN Graph. It will provide better quantization: better performance and accuracy loss is in controlled.|CPU|
-|[Mnist training with Intel® Optimization for Horovod*](./train_horovod/mnist/README.html)|Mnist distributed training example on Intel GPU. |GPU|
-|[ResNet50 training with Intel® Optimization for Horovod*](./train_horovod/resnet50/README.html)|ResNet50 distributed training example on Intel GPU. |GPU|
-|[Stable Diffusion Inference for Text2Image on Intel GPU](./stable_diffussion_inference/README.html)|Example for running Stable Diffusion Text2Image inference on Intel GPU with the optimizations from Intel® Extension for TensorFlow*. |GPU|
-|[Accelerate ResNet50 Training by XPUAutoShard on Intel GPU](./train_resnet50_with_autoshard/README.html)|Example on running ResNet50 training on Intel GPU with the XPUAutoShard feature. |GPU|
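
A note on the `sed -i 's/.md/.html/g'` rewrite that build.sh applies to the copied Markdown sources (including the `./source/examples/README.md` line this patch adds): the leading `.` is an unescaped regex metacharacter, so the pattern matches *any* character followed by `md`, not only a literal `.md` suffix. The sketch below uses purely illustrative file names to show the behavior; escaping the dot (`s/\.md/.html/g`) would be the stricter form:

```shell
# Link targets such as README.md are rewritten to README.html as intended.
echo '[ResNet50 Inference](./infer_resnet50/README.md)' | sed 's/.md/.html/g'
# -> [ResNet50 Inference](./infer_resnet50/README.html)

# But the unescaped dot matches any character, so a hypothetical name that
# contains "md" elsewhere is mangled ("amd" also matches the pattern ".md"):
echo 'amd_gpu.md' | sed 's/.md/.html/g'
# -> .html_gpu.html

# Escaping the dot limits the rewrite to a literal ".md":
echo 'amd_gpu.md' | sed 's/\.md/.html/g'
# -> amd_gpu.html
```

This is harmless for the current link names in these docs, which all end in a real `.md`, but worth keeping in mind if files whose names contain `md` elsewhere are ever added.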