Can anyone help me use this in Colab along with local LLMs? #89
-
If I am using the previous method, what should I send as arguments here instead of the code below?

```python
response = oai.ChatCompletion.create(
    context=messages[-1].pop("context", None),
    messages=self._oai_system_message + messages,
    **llm_config,
)
return True, oai.ChatCompletion.extract_text_or_function_call(response)[0]
```
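A minimal sketch, assuming the setup from the Local-LLMs blog post linked in the reply below: the call itself can stay the same, and only `llm_config` changes so that it points at a locally served OpenAI-compatible endpoint. The model name, base URL, and `"NULL"` api_key below are placeholders, not values from this thread.

```python
# Hypothetical sketch: point the existing create() call at a local
# OpenAI-compatible server instead of api.openai.com.
llm_config = {
    "config_list": [
        {
            "model": "chatglm2-6b",                  # whatever model the local server hosts
            "api_base": "http://localhost:8000/v1",  # local OpenAI-compatible endpoint
            "api_type": "open_ai",
            "api_key": "NULL",                       # placeholder; not checked by the local server
        }
    ],
}

response = oai.ChatCompletion.create(
    context=messages[-1].pop("context", None),
    messages=self._oai_system_message + messages,
    **llm_config,
)
```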
-
Hi @YaswanthDasamandam, could you please check this out? https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs This method requires Colab Pro, as running commands in a terminal is a Colab Pro feature.
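The blog post starts the FastChat servers in a terminal; on free Colab, one possible workaround (my own assumption, not something the post describes) is to launch them as background subprocesses from a notebook cell. The model path, port, and wait time are placeholders:

```python
# Sketch: start the FastChat processes from the notebook itself so no
# separate terminal is needed. Assumes fastchat is installed.
import subprocess
import time

procs = [
    subprocess.Popen(["python", "-m", "fastchat.serve.controller"]),
    subprocess.Popen(["python", "-m", "fastchat.serve.model_worker",
                      "--model-path", "lmsys/vicuna-7b-v1.5"]),
    subprocess.Popen(["python", "-m", "fastchat.serve.openai_api_server",
                      "--host", "localhost", "--port", "8000"]),
]

time.sleep(120)  # rough wait for the model to download and load

# The cell returns while the servers keep running in the background,
# so AutoGen can be used from the same runtime against http://localhost:8000/v1.
```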
-
How can we use Hugging Face models with a Hugging Face token?
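This question isn't answered in the thread; as a general (not AutoGen-specific) sketch, one common pattern is to authenticate with the Hugging Face Hub before the serving process downloads a gated or private model. The `"hf_..."` string is a placeholder for your own access token:

```python
# Hypothetical sketch: log in to the Hugging Face Hub so that gated/private
# model weights can be downloaded by whatever server is launched afterwards.
import os
from huggingface_hub import login

login(token="hf_...")                            # authenticates this runtime
os.environ["HUGGING_FACE_HUB_TOKEN"] = "hf_..."  # also picked up by transformers

# After this, a FastChat model worker (or any transformers-based server)
# started from the same runtime should be able to pull the model from the Hub.
```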
-
Hi,
I normally use Colab for running smaller local models, but here I am not able to run the LiteLLM server and AutoGen at the same time, because the server command runs inline and blocks the cell. Can anyone help?
I read Issue #46 here; it was a discussion about the LiteLLM server.
But can anyone create a Colab notebook for working with local LLMs? That would make the project useful and very helpful for everyone.
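There is no complete notebook in this thread, but as a rough sketch (assuming a local OpenAI-compatible server is already running at http://localhost:8000/v1, e.g. started in the background as in the reply above), the agent setup could look like this; the model name and endpoint are placeholders:

```python
# Rough sketch of using AutoGen agents against a local OpenAI-compatible
# server from the same Colab runtime.
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "chatglm2-6b",                  # the model served locally
        "api_base": "http://localhost:8000/v1",  # local endpoint, not api.openai.com
        "api_type": "open_ai",
        "api_key": "NULL",                       # placeholder
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER",
                            code_execution_config=False)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python one-liner that prints today's date.",
)
```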