How to Debug and Locate Llama Model Code in SGLang for Profiling Analysis #4910
Asked by ziyuhuang123 in Q&A · Unanswered · 0 replies
I am using the following code:

I want to locate the source file that implements the Llama model in SGLang. How can I pinpoint where the model's code is actually executed? Should I step through it in a debugger? Also, since the client and server run in two separate terminal windows, is there a simple way to debug across them?
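One quick way to find where a model implementation lives, without stepping through the whole server, is to ask Python itself for the module's file path via `importlib`. The sketch below assumes the Llama implementation lives at `sglang.srt.models.llama` (the usual location in the SGLang source tree, but verify against your installed version); the helper itself works for any importable module:

```python
import importlib.util


def locate_module(name: str):
    """Return the source file path of an importable module, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None


# Assumption: SGLang's Llama model is implemented in this module;
# uncomment once sglang is installed in your environment:
# print(locate_module("sglang.srt.models.llama"))

# Demonstration with a stdlib module so the snippet runs anywhere:
print(locate_module("json"))
```

Once you have the file path, you can insert `breakpoint()` (or `import pdb; pdb.set_trace()`) directly in the model's `forward` method and run the server in the foreground of its own terminal, so the debugger prompt appears in the server window rather than the client window.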