TensorRT-LLM 0.10.0 Release #1735
kaiyux announced in Announcements
Hi,
We are very pleased to announce the 0.10.0 version of TensorRT-LLM. It has been an intense effort, and we hope that it will enable you to easily deploy GPU-based inference for state-of-the-art LLMs. We want TensorRT-LLM to help you run those LLMs very fast.
This update includes:
Key Features and Enhancements
- Enabled the usage of the `executor` API in the Python high-level API.
- Added a weight-stripping feature with a new `trtllm-refit` command. For more information, refer to `examples/sample_weight_stripping/README.md`.
- Added a weight-streaming feature. For more information, refer to `docs/source/advanced/weight-streaming.md`.
- Enhanced the multiple-profiles feature; the `--multiple_profiles` argument of the `trtllm-build` command builds more optimization profiles now for better performance.
- Optimized the `applyBiasRopeUpdateKVCache` kernel by avoiding re-computation.
- Reduced overheads between `enqueue` calls of TensorRT engines.
- Added debug options (`--visualize_network` and `--dry_run`) to the `trtllm-build` command to visualize the TensorRT network before engine build.
- Added support to `ModelRunnerCpp` so that it runs with the `executor` API for IFB-compatible models (see the sketch after this list).
- Enhanced the custom `AllReduce` by adding a heuristic; fall back to the native NCCL kernel when hardware requirements are not satisfied to get the best performance.
- [BREAKING CHANGE] Moved the request-rate generation arguments and logic from the prepare-dataset script to `gptManagerBenchmark`.
- Enabled streaming and added Time To the First Token (TTFT) latency and Inter-Token Latency (ITL) metrics for `gptManagerBenchmark`.
- Added the `--max_attention_window` option to `gptManagerBenchmark`.
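For readers who want to try the `ModelRunnerCpp` path mentioned above, here is a minimal sketch of generating from a built engine through it. This is an illustration rather than the authoritative 0.10.0 API surface: the engine and tokenizer paths are placeholders, and keyword arguments may differ slightly between versions, so consult `examples/run.py` in the repository for canonical usage.

```python
# Minimal sketch (not the authoritative API): generate from an
# in-flight-batching-compatible engine via ModelRunnerCpp, which now
# runs on top of the executor API. Paths are placeholders.
import torch
from transformers import AutoTokenizer
from tensorrt_llm.runtime import ModelRunnerCpp

tokenizer = AutoTokenizer.from_pretrained("/path/to/tokenizer")
runner = ModelRunnerCpp.from_dir(engine_dir="/path/to/engine")

# ModelRunnerCpp expects a list of per-request input-id tensors.
batch_input_ids = [
    torch.tensor(tokenizer.encode("Hello, my name is"), dtype=torch.int32)
]
output_ids = runner.generate(
    batch_input_ids,
    max_new_tokens=32,
    end_id=tokenizer.eos_token_id,
    pad_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
)
# Output shape is (batch, beams, tokens); decode the first beam of the
# first request. Depending on configuration it may include the prompt.
print(tokenizer.decode(output_ids[0][0].tolist()))
```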
API Changes

- [BREAKING CHANGE] Set the default `tokens_per_block` argument of the `trtllm-build` command to 64 for better performance.
- [BREAKING CHANGE] Renamed `GptModelConfig` to `ModelConfig`.
- [BREAKING CHANGE] Unified the `SchedulerPolicy` of the same name in `batch_scheduler` and `executor`, and renamed it to `CapacitySchedulerPolicy`.
- [BREAKING CHANGE] Expanded the existing scheduling configuration from `SchedulerPolicy` to `SchedulerConfig` to enhance extensibility. The latter also introduces a chunk-based configuration called `ContextChunkingPolicy`.
- [BREAKING CHANGE] Removed the input prompt from the generation output of the `generate()` and `generate_async()` APIs. For example, given the prompt `A B`, the generation result used to be `<s>A B C D E`, where only `C D E` is the actual output; the result is now just `C D E` (see the shim after this list).
- [BREAKING CHANGE] Switched the default `add_special_token` in the TensorRT-LLM backend to `True`.
- Deprecated `GptSession` and `TrtGptModelV1`.
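To make the `generate()` change concrete, here is a small, self-contained migration shim. It is a sketch under the assumption that outputs are plain token-id lists; the function name and the toy token IDs are invented for illustration, not part of the library.

```python
from typing import List

def strip_prompt_if_echoed(prompt_ids: List[int],
                           output_ids: List[int],
                           bos_id: int = 1) -> List[int]:
    """Return only the generated tokens, whether or not the runtime
    echoed the prompt. Pre-0.10, generate()/generate_async() returned
    e.g. '<s> A B C D E' for prompt 'A B'; from 0.10 they return 'C D E'.
    This hypothetical shim accepts both behaviors during a migration."""
    if output_ids and output_ids[0] == bos_id:
        output_ids = output_ids[1:]  # drop a leading '<s>' if present
    if output_ids[: len(prompt_ids)] == prompt_ids:
        return output_ids[len(prompt_ids):]  # old behavior: prompt echoed
    return output_ids  # new behavior: already just the generated tokens

# Toy token IDs: prompt "A B" -> [11, 12], continuation "C D E" -> [13, 14, 15].
prompt = [11, 12]
pre_0_10_output = [1, 11, 12, 13, 14, 15]   # '<s> A B C D E'
post_0_10_output = [13, 14, 15]             # 'C D E'
assert strip_prompt_if_echoed(prompt, pre_0_10_output) == [13, 14, 15]
assert strip_prompt_if_echoed(prompt, post_0_10_output) == [13, 14, 15]
```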
Model Updates

Fixed Issues
- Fixed a segmentation fault with pipeline parallelism and `gather_all_token_logits`. (#1284)
- Fixed a hang bug when using the `gpt_attention_plugin` for enc-dec models ("Flan t5 xxl result large difference"). (#1343)
Infrastructure Changes

- The base Docker image for TensorRT-LLM is updated to `nvcr.io/nvidia/pytorch:24.03-py3`.
- The base Docker image for the TensorRT-LLM backend is updated to `nvcr.io/nvidia/tritonserver:24.03-py3`.

Currently, there are two key branches in the project:

- The `rel` branch is the stable branch for releases of TensorRT-LLM; it has been QA-ed and carefully tested.
- The `main` branch is the dev branch; it is more experimental.
We are updating the `main` branch regularly with new features, bug fixes, and performance optimizations. The `rel` branch will be updated less frequently, and the exact frequencies depend on your feedback.

Thanks,
The TensorRT-LLM Engineering Team
This discussion was created from the release TensorRT-LLM 0.10.0 Release.