I'm having trouble reproducing the Llama-2-7B-chat responses for the given prompts at the given temperatures. The responses differ not just stylistically, as expected from sampling randomness, but in informativeness and generation length. Could you please share the generation parameters and random seeds used to produce the responses for this and the other models?