Using gptel and LM Studio in Emacs

Large language models are increasingly useful for developers and writers who spend much of their time inside Emacs. Instead of switching to a browser or an external app, you can bring AI assistance directly into your workflow. One of the most elegant tools for this is gptel, an Emacs package that provides a simple interface to local or remote LLMs.

In this guide, we’ll walk through how to connect gptel to a locally running model inside LM Studio, using Qwen as the model backend.


What You Need

  1. Emacs (v27 or later recommended).

  2. gptel installed from MELPA.

  3. LM Studio installed on your machine (Windows, macOS, or Linux).

  4. A Qwen model downloaded inside LM Studio (e.g., Qwen2.5 7B or 14B).

  5. A running local API server in LM Studio.


Step 1: Install GPTel

In Emacs, you can install gptel via MELPA:

 
M-x package-refresh-contents RET
M-x package-install RET gptel RET

Then load it in your config:

 
(use-package gptel
  :ensure t)

Step 2: Run LM Studio with an API Server

LM Studio provides an OpenAI-compatible local server, which gptel can talk to.

  1. Open LM Studio.

  2. Load your desired Qwen model.

  3. Start the local server (usually from the sidebar or settings).

By default, LM Studio runs the API server on:

 
http://localhost:1234/v1
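
Before wiring up gptel, you can sanity-check the server from Emacs itself. Here is a quick sketch using the built-in url library; it assumes the server is already running on the default port and queries the OpenAI-style /v1/models endpoint:

```elisp
;; Fetch the list of models the local LM Studio server exposes
;; and print the response body in the echo area.
(require 'url)
(require 'url-http)

(with-current-buffer
    (url-retrieve-synchronously "http://localhost:1234/v1/models")
  ;; `url-http-end-of-headers' marks where the HTTP headers stop
  ;; and the JSON body begins.
  (goto-char url-http-end-of-headers)
  (message "%s" (string-trim (buffer-substring (point) (point-max)))))
```

If the server is up, you should see a JSON listing that includes your loaded Qwen model's identifier.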

Step 3: Configure GPTel to Use LM Studio

Now, tell gptel where to send requests. In your Emacs config:

 
(setq gptel-backend
      (gptel-make-openai "LM Studio"
        :host "localhost:1234"
        :protocol "http"
        :endpoint "/v1/chat/completions"
        :key "not-needed"))

Notes:

  • LM Studio does not require an API key by default, but gptel expects one, so any placeholder string such as "not-needed" or "dummy" works.

  • The endpoint matches LM Studio’s OpenAI-style API.
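
Putting the pieces together, a fuller configuration might look like the sketch below. It also registers the model and enables streaming; the name qwen2.5-7b-instruct is an assumption for illustration, so use whatever identifier LM Studio reports for your loaded model:

```elisp
;; Define the LM Studio backend and make it the default.
;; The model symbol must match the identifier LM Studio reports
;; (check its server page or /v1/models); `qwen2.5-7b-instruct'
;; here is only an example.
(setq gptel-model 'qwen2.5-7b-instruct
      gptel-backend (gptel-make-openai "LM Studio"
                      :host "localhost:1234"
                      :protocol "http"
                      :stream t
                      :key "not-needed"
                      :models '(qwen2.5-7b-instruct)))
```

With :stream t, responses appear token by token instead of arriving all at once.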


Step 4: Start Chatting with Qwen in Emacs

You can now start a conversation with Qwen directly from Emacs:

  • M-x gptel → opens a buffer connected to the local backend.

  • Type your prompt, then press C-c RET (the default gptel-send binding) to send it.

  • Qwen’s response will appear inline.

You can also send a selected region as context with:

  • M-x gptel-send
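
For programmatic use, gptel also exposes gptel-request, which sends a prompt without opening a chat buffer. A minimal sketch, assuming the backend from Step 3 is active:

```elisp
;; Ask the local model a one-off question and show the reply
;; in the echo area.  The callback receives (RESPONSE INFO);
;; RESPONSE is a string on success.
(gptel-request
    "Explain what `mapconcat' does in one sentence."
  :callback (lambda (response info)
              (if (stringp response)
                  (message "Qwen says: %s" response)
                (message "Request failed: %s" (plist-get info :status)))))
```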


Step 5: Tips for a Smooth Workflow

  • Multiple backends: You can define more than one backend (e.g., LM Studio locally and OpenAI remotely). Switch between them from gptel's transient menu (M-x gptel-menu) or by setting gptel-backend.

  • Qwen parameters: LM Studio lets you tweak temperature, top_p, max_tokens, and so on in its own UI. On the Emacs side, you can enable streaming with :stream t in the backend definition and adjust generation settings through variables such as gptel-temperature and gptel-max-tokens.

  • Org mode integration: gptel works nicely inside Org buffers. You can use it for notes, code explanations, or drafting text without leaving your Emacs session.
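
System prompts can also be customized through the gptel-directives alist; the entry name and wording below are just illustrations:

```elisp
;; Add a custom system prompt for Elisp review sessions.
;; `elisp-review' is an arbitrary name chosen for this example;
;; pick it from gptel's menu when starting a chat.
(add-to-list 'gptel-directives
             '(elisp-review . "You are an Emacs Lisp expert. Review the \
following code for bugs and style issues, and suggest fixes."))
```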


Example Workflow

Imagine you’re writing some Elisp and want Qwen to help:

  1. Select your Elisp function.

  2. Run M-x gptel-send.

  3. Qwen will suggest improvements directly inside Emacs.

Or, while writing documentation in Org mode, you can query Qwen for summaries and have them inserted inline.
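
The region workflow above can also be wrapped in a small command of your own. This is a sketch built on gptel-request; my-qwen-improve is a hypothetical helper, not part of gptel:

```elisp
;; Send the active region to the configured backend with a fixed
;; instruction and show the suggestion in a side buffer.
(defun my-qwen-improve (beg end)
  "Ask the configured gptel backend to improve the region BEG..END."
  (interactive "r")
  (gptel-request
      (concat "Improve this Emacs Lisp code:\n\n"
              (buffer-substring-no-properties beg end))
    :callback (lambda (response _info)
                (when (stringp response)
                  (with-current-buffer
                      (get-buffer-create "*qwen-suggestion*")
                    (erase-buffer)
                    (insert response)
                    (display-buffer (current-buffer)))))))
```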


Conclusion

With gptel, LM Studio, and Qwen, you can bring modern LLM capabilities straight into Emacs—without relying on external cloud APIs. It’s fast, private, and deeply integrated into your workflow.

If you already live inside Emacs, this setup makes AI a natural extension of your editing environment.
