
We have the user start Ollama themselves on a localhost server, and then they can just add

```
models=Models(
    default=Ollama(model="llama2")
)
```

to the Continue config file. We'll then connect to the Ollama server, so it doesn't have to be embedded in the VS Code extension.
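To illustrate what "connecting to the Ollama server" looks like, here is a minimal sketch of a client talking to Ollama's default localhost HTTP API. The `build_request` helper is hypothetical (not Continue's actual client code); the endpoint and payload shape follow Ollama's `/api/generate` API, which listens on port 11434 by default:

```python
import json
import urllib.request

# Ollama serves an HTTP API on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama2") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama2") -> str:
    """Send a completion request to a locally running Ollama server."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running with the llama2 model pulled.
    print(generate("Why is the sky blue?"))
```

Keeping the model behind a local HTTP server like this means the extension only needs a thin client, and users can swap models without touching the extension itself.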

(Edit: I see you found it! Leaving this here still)


