Queries and chats can also include uploaded images via the images
argument.
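For example, an image query might look something like this (a minimal sketch, assuming a multimodal model such as gemma3:4b has been pulled and that plot.png exists in your working directory):
library(rollama)
# ask a multimodal model about a local image file
query("Describe this chart.", model = "gemma3:4b", images = "plot.png")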
ollamar
The ollamar package starts up similarly, with a test_connection()
function to check that R can connect to a running Ollama server, and pull("the_model_name")
to download a model, such as pull("gemma3:4b") or pull("gemma3:12b")
.
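For example, initial setup might look like this:
library(ollamar)
test_connection()   # check that R can reach the running Ollama server
pull("gemma3:4b")   # download the model if it isn't already available locally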
The generate()
function generates one completion from an LLM and returns an httr2_response
object, which can then be processed by the resp_process()
function.
library(ollamar)
resp <- generate("gemma2", "What's ggplot2?")
resp_text <- resp_process(resp)
Or, you can request a text response directly with syntax such as resp <- generate("gemma2", "What's ggplot2?", output = "text"
). There's an option to stream the text with stream = TRUE
:
resp <- generate("gemma2", "Inform me concerning the information.desk R package deal", output = "textual content", stream = TRUE)
ollamar has other functionality, including generating text embeddings, defining and calling tools, and requesting formatted JSON output. See details on GitHub.
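For instance, generating embeddings might look like this (a minimal sketch, assuming an embedding model such as nomic-embed-text has been pulled locally):
# generate text embeddings for a character vector of inputs
embeddings <- embed("nomic-embed-text", c("What is ggplot2?", "What is data.table?"))
dim(embeddings)   # one column of numbers per input text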
rollama was created by Johannes B. Gruber; ollamar by Hause Lin.
Roll your own
If all you want is a basic chatbot interface for Ollama, one easy option is to combine ellmer, shiny, and the shinychat package into a simple Shiny app. Once those are installed, and assuming you also have Ollama installed and running, you can run a basic script like this one:
library(shiny)
library(shinychat)

ui <- bslib::page_fluid(
  chat_ui("chat")
)

server <- function(input, output, session) {
  chat <- ellmer::chat_ollama(system_prompt = "You are a helpful assistant", model = "phi4")
  observeEvent(input$chat_user_input, {
    stream <- chat$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

shinyApp(ui, server)
That should open an extremely basic chat interface with a hardcoded model. If you don't pick a model, the app won't run; you'll get an error message instructing you to specify a model, along with the names of those you've already installed locally.
I've built a slightly more robust version of this, including dropdown model selection and a button to download the chat. You can see that code here.
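As a rough illustration only (not the author's code), a model dropdown could be wired up along these lines, assuming ollamar's list_models() to look up locally installed models:
library(shiny)
library(shinychat)

# models already pulled locally, via ollamar (assumes Ollama is running)
local_models <- ollamar::list_models()$name

ui <- bslib::page_fluid(
  selectInput("model", "Model:", choices = local_models),
  chat_ui("chat")
)

server <- function(input, output, session) {
  # re-create the chat object whenever a new model is selected
  chat <- reactive(
    ellmer::chat_ollama(system_prompt = "You are a helpful assistant",
                        model = input$model)
  )
  observeEvent(input$chat_user_input, {
    stream <- chat()$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

shinyApp(ui, server)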
Conclusion
There are a growing number of options for using large language models with R, whether you want to add functionality to your scripts and apps, get help with your code, or run LLMs locally with Ollama. It's worth trying a couple of options for your use case to find the one that best fits both your needs and preferences.