LM Studio is a free desktop application that lets you download, run, and experiment with large language models (LLMs) entirely offline on your Linux machine. Unlike cloud-based AI services, LM Studio gives you complete control over your data: nothing leaves your computer. It supports hundreds of models from Hugging Face and other repositories, including Llama, Mistral, Phi, and Gemma variants, with full GPU acceleration for faster inference.
Let me show you how to install LM Studio on Linux step-by-step.
What Makes Installing LM Studio on Linux Challenging?
While LM Studio provides an AppImage for easy deployment, several factors complicate the installation process. First, models can be extremely large—anywhere from 2GB to 40GB+—requiring significant disk space and bandwidth. Second, GPU acceleration depends on NVIDIA CUDA or AMD ROCm drivers being properly installed, which varies significantly across distributions.
Third, the AppImage format doesn’t integrate automatically with your system’s application menu, requiring manual desktop file creation. Finally, the local API server needs proper port configuration to avoid conflicts with other services running on your machine.
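Most of these hurdles can be checked up front. The sketch below is a diagnostic only: it reports free disk space, whether FUSE is available (the AppImage runtime uses it to mount itself; the `--appimage-extract-and-run` fallback flag belongs to the AppImage runtime, not LM Studio), and which GPUs are visible.

```shell
# Preflight check: free space, FUSE availability, and visible GPUs.
# Diagnostic sketch only; adapt paths to your setup.
df -h "$HOME" | tail -1
if command -v fusermount >/dev/null 2>&1; then
    echo "FUSE: available"
else
    echo "FUSE: missing (run the AppImage with --appimage-extract-and-run)"
fi
lspci 2>/dev/null | grep -iE 'vga|3d' || echo "No GPU info (lspci unavailable?)"
```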
How to Install LM Studio on Ubuntu and Debian-Based Systems?
You can install LM Studio on Ubuntu and Debian-based systems in several ways. Here are the two methods I like most.
Method 1: Using AppImage (Recommended)
The AppImage is the official distribution method and works on most Linux systems without installation:
wget https://lmstudio.ai/releases/latest/linux/x86_64/LMStudio.AppImage
chmod +x LMStudio.AppImage
./LMStudio.AppImage
To integrate LM Studio with your application menu, create a desktop entry:
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/lmstudio.desktop << EOF
[Desktop Entry]
Name=LM Studio
Exec=/path/to/LMStudio.AppImage
Icon=lmstudio
Type=Application
Categories=Development;
EOF
Replace /path/to/LMStudio.AppImage in the Exec line with the actual absolute path to your AppImage.
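If you prefer to script this, the sketch below moves an already-downloaded AppImage into ~/.local/bin and writes the desktop entry with the resolved absolute path, so there is no placeholder to edit. The install location is my assumption, not an LM Studio requirement.

```shell
#!/usr/bin/env sh
# Sketch: install an already-downloaded AppImage and generate a matching
# desktop entry. The ~/.local/bin location is an assumption, not required.
set -e
SRC="${1:-./LMStudio.AppImage}"
DEST="$HOME/.local/bin/LMStudio.AppImage"
ENTRY="$HOME/.local/share/applications/lmstudio.desktop"
mkdir -p "$HOME/.local/bin" "$HOME/.local/share/applications"
if [ -f "$SRC" ]; then
    mv "$SRC" "$DEST"
    chmod +x "$DEST"
fi
cat > "$ENTRY" << EOF
[Desktop Entry]
Name=LM Studio
Exec=$DEST
Icon=lmstudio
Type=Application
Categories=Development;
EOF
echo "Wrote $ENTRY"
```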
Method 2: Using Flatpak
Some community-maintained Flatpak builds exist, though they may lag behind official releases:
sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub ai.lmstudio.LMStudio
Note: Always verify Flatpak availability on Flathub before using this method.
How to Install LM Studio on Fedora and RHEL-Based Systems?
Fedora users can use the AppImage method above or leverage Flatpak, which comes pre-configured:
flatpak install flathub ai.lmstudio.LMStudio
For GPU support on Fedora, ensure NVIDIA drivers are installed:
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda
sudo reboot
AMD GPU users should install ROCm:
sudo dnf install rocm-hip rocm-opencl
How to Install LM Studio on Arch Linux?
Arch users can install LM Studio from the AUR:
yay -S lmstudio-bin
Or using paru:
paru -S lmstudio-bin
The AUR package handles desktop integration automatically and keeps the application updated through your AUR helper.
How to Configure LM Studio After Installation?
Downloading Your First Model
Launch LM Studio and click the download icon. Popular starter models include:
- Llama-3.2-3B – Fast, good for coding (3GB)
- Mistral-7B – Balanced performance (4GB)
- Phi-3-mini – Efficient for low-end hardware (2GB)
Filter by model size, quantization level (Q4_K_M offers good quality/speed balance), and use case.
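To sanity-check whether a model will fit before downloading, a rough rule of thumb is file size in GB ≈ parameters in billions × bits per weight ÷ 8; Q4_K_M averages roughly 4.5 bits per weight. A sketch under those assumptions (the function name is mine, and the estimate ignores small format overhead):

```shell
# Rough size of a quantized model file, in GB:
# GB ≈ parameters_in_billions * bits_per_weight / 8 (ignores overhead).
estimate_gb() {
    awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f GB\n", p * b / 8 }'
}
estimate_gb 7 4.5    # Mistral-7B at ~4.5 bits/weight (Q4_K_M) -> ~3.9 GB
```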
Setting Up the Local API Server
LM Studio includes an OpenAI-compatible API server. To enable it:
- Click the server icon in the left sidebar
- Select your downloaded model
- Click “Start Server” (default port: 1234)
Test the API with curl:
curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "local-model",
"messages": [{"role": "user", "content": "Hello"}]
}'
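The JSON response nests the reply under choices[0].message.content. With jq installed you can pull it out directly; the filter is demonstrated here on a canned response so it works without a running server:

```shell
# Extract the assistant's reply from a chat-completions response with jq.
# Demonstrated on a canned response; with the server running, pipe the
# curl command above (with -s) into the same jq filter.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Hi there!"}}]}'
echo "$RESPONSE" | jq -r '.choices[0].message.content'
```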
How to Troubleshoot Common LM Studio Installation Issues?
CUDA Not Detected
If LM Studio shows “CPU only” despite having an NVIDIA GPU:
nvidia-smi
nvcc --version
If these commands fail, reinstall NVIDIA drivers and CUDA toolkit. On Ubuntu:
sudo apt install nvidia-driver-535 nvidia-cuda-toolkit
Models Won’t Download
Check disk space and network connectivity. Models download to ~/.cache/lmstudio. Ensure this location has sufficient space:
df -h ~/.cache/lmstudio
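If space is tight, it helps to see which downloads are eating it. A small sketch (the helper name is mine; the cache path matches the one above):

```shell
# List the five largest files under a directory, largest last.
# Handy for deciding which downloaded models to prune.
largest() { du -ah "$1" 2>/dev/null | sort -h | tail -n 5; }
largest "$HOME/.cache/lmstudio"
```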
Port 1234 Already in Use
Change the server port in LM Studio’s settings, or identify the conflicting service:
sudo lsof -i :1234
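If you would rather pick a new port automatically, this sketch probes upward from 1234 until it finds one with no listener. It assumes ss from iproute2 is available:

```shell
# Find the first TCP port at or above 1234 with no active listener.
port=1234
while ss -ltn 2>/dev/null | grep -q ":$port "; do
    port=$((port + 1))
done
echo "Free port: $port"
```

Set the LM Studio server to whatever port this prints.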
What’s Next After Installing LM Studio?
Once you have LM Studio running with your chosen models, the next step is integrating these local LLMs into your development workflow. You can connect LM Studio’s API to code editors like VS Code using Continue or Cody extensions, build custom applications that leverage local AI, or create automation scripts that maintain complete privacy. Our next guide will walk you through connecting LM Studio to popular code editors and building your first local AI-powered application.