(support-matrix)=

# Support Matrix

TensorRT-LLM optimizes the performance of a range of well-known models on NVIDIA GPUs. The following sections provide a list of supported GPU architectures as well as important features implemented in TensorRT-LLM.

## Models (PyTorch Backend)

| Architecture | Model | HuggingFace Example | Modality |
| :--- | :--- | :--- | :--- |
| `BertForSequenceClassification` | BERT-based | `textattack/bert-base-uncased-yelp-polarity` | L |
| `DeciLMForCausalLM` | Nemotron | `nvidia/Llama-3_1-Nemotron-51B-Instruct` | L |
| `DeepseekV3ForCausalLM` | DeepSeek-V3 | `deepseek-ai/DeepSeek-V3` | L |
| `LlavaLlamaModel` | VILA | `Efficient-Large-Model/NVILA-8B` | L + V |
| `LlavaNextForConditionalGeneration` | LLaVA-NeXT | `llava-hf/llava-v1.6-mistral-7b-hf` | L + V |
| `LlamaForCausalLM` | Llama 3.1, Llama 3, Llama 2, LLaMA | `meta-llama/Meta-Llama-3.1-70B` | L |
| `Llama4ForConditionalGeneration` | Llama 4 | `meta-llama/Llama-4-Scout-17B-16E-Instruct` | L |
| `MistralForCausalLM` | Mistral | `mistralai/Mistral-7B-v0.1` | L |
| `MixtralForCausalLM` | Mixtral | `mistralai/Mixtral-8x7B-v0.1` | L |
| `MllamaForConditionalGeneration` | Llama 3.2 | `meta-llama/Llama-3.2-11B-Vision` | L |
| `NemotronForCausalLM` | Nemotron-3, Nemotron-4, Minitron | `nvidia/Minitron-8B-Base` | L |
| `NemotronNASForCausalLM` | NemotronNAS | `nvidia/Llama-3_3-Nemotron-Super-49B-v1` | L |
| `Qwen2ForCausalLM` | QwQ, Qwen2 | `Qwen/Qwen2-7B-Instruct` | L |
| `Qwen2ForProcessRewardModel` | Qwen2-based | `Qwen/Qwen2.5-Math-PRM-7B` | L |
| `Qwen2ForRewardModel` | Qwen2-based | `Qwen/Qwen2.5-Math-RM-72B` | L |
| `Qwen2VLForConditionalGeneration` | Qwen2-VL | `Qwen/Qwen2-VL-7B-Instruct` | L + V |
| `Qwen2_5_VLForConditionalGeneration` | Qwen2.5-VL | `Qwen/Qwen2.5-VL-7B-Instruct` | L + V |

Note:

- L: language only.
- L + V: language and vision (multimodal) support.
- Llama 3.2 accepts vision input, but our support is currently limited to text only.
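Any model in the table can be loaded directly from its Hugging Face ID through the `LLM` API. A minimal sketch (the model choice and sampling settings here are illustrative, not requirements):

```python
from tensorrt_llm import LLM, SamplingParams

# Any Hugging Face ID from the table above can be passed here;
# Qwen2-7B-Instruct is only an illustrative choice.
llm = LLM(model="Qwen/Qwen2-7B-Instruct")

sampling_params = SamplingParams(max_tokens=64, temperature=0.8)
for output in llm.generate(["What is the capital of France?"], sampling_params):
    print(output.outputs[0].text)
```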

## Models (TensorRT Backend)

### LLM Models

### Multi-Modal Models[^3]

(support-matrix-hardware)=

## Hardware

The following table shows the supported hardware for TensorRT-LLM.

If a GPU architecture is not listed, the TensorRT-LLM team does not develop or test the software on that architecture, and support is limited to community support. In addition, older architectures can have limitations for newer software releases.

```{list-table}
:header-rows: 1
:widths: 20 80

* -
  - Hardware Compatibility
* - Operating System
  - TensorRT-LLM requires Linux x86_64 or Linux aarch64.
* - GPU Model Architectures
  -
    - [NVIDIA Blackwell Architecture](https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/)
    - [NVIDIA Grace Hopper Superchip](https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/)
    - [NVIDIA Hopper Architecture](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/)
    - [NVIDIA Ada Lovelace Architecture](https://www.nvidia.com/en-us/technologies/ada-architecture/)
    - [NVIDIA Ampere Architecture](https://www.nvidia.com/en-us/data-center/ampere-architecture/)
```
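To check whether a local GPU falls under one of these architectures, you can query its CUDA compute capability. A minimal sketch, assuming `torch` is installed; the SM-to-architecture mapping is a simplified summary of the list above:

```python
import torch

# Simplified mapping from compute capability (SM) to the architectures
# listed above; data-center Blackwell parts report SM 10.x.
SUPPORTED_ARCHS = {
    (8, 0): "Ampere",
    (8, 6): "Ampere",
    (8, 9): "Ada Lovelace",
    (9, 0): "Hopper",
    (10, 0): "Blackwell",
}

major, minor = torch.cuda.get_device_capability(0)
arch = SUPPORTED_ARCHS.get((major, minor))
if arch:
    print(f"SM{major}{minor} ({arch}): listed in the support matrix.")
else:
    print(f"SM{major}{minor}: not listed; expect community support only.")
```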

(support-matrix-software)=

## Software

The following table shows the supported software for TensorRT-LLM.

```{list-table}
:header-rows: 1
:widths: 20 80

* -
  - Software Compatibility
* - Container
  - [25.04](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
* - TensorRT
  - [10.10](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html)
* - Precision
  -
    - Hopper (SM90) - FP32, FP16, BF16, FP8, INT8, INT4
    - Ada Lovelace (SM89) - FP32, FP16, BF16, FP8, INT8, INT4
    - Ampere (SM80, SM86) - FP32, FP16, BF16, INT8, INT4[^smgte89]
```
Support for FP8 and quantized data types (INT8 or INT4) is not implemented for all models. Refer to {ref}`precision` and the [examples](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples) folder for additional information.
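Since FP8 is only listed for SM89 (Ada Lovelace) and newer, code targeting several GPU generations typically gates the choice of data type on compute capability. A minimal sketch of that pattern, assuming `torch` is available (the returned strings are illustrative labels, not TensorRT-LLM option values):

```python
import torch

def widest_low_precision() -> str:
    """Return the lowest-precision format the local GPU supports,
    per the precision rows above (FP8 needs SM >= 89)."""
    cc = torch.cuda.get_device_capability(0)
    if cc >= (8, 9):   # Ada Lovelace, Hopper, Blackwell
        return "fp8"
    if cc >= (8, 0):   # Ampere: BF16 but no FP8
        return "bf16"
    return "fp16"      # unlisted older architectures

print(widest_low_precision())
```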

[^1]: Encoder-Decoder provides general encoder-decoder functionality that supports many encoder-decoder models, such as the T5, BART, Whisper, and NMT families.

[^2]: Replit Code is not supported with transformers 4.45+.

[^3]: Multi-modal provides general multi-modal functionality that supports many multi-modal architectures, such as the BLIP-2 and LLaVA families.

[^4]: Only supports bfloat16 precision.