LM Studio's New Remote Access Feature Lets You Run Powerful AI Models From Anywhere
LM Studio has released LM Link, a new feature that lets you access powerful AI models running on your home workstation from anywhere in the world while keeping all your data private and under your control. The feature, built in collaboration with Tailscale, uses end-to-end encryption to create a secure tunnel between your devices, meaning neither LM Studio nor Tailscale can see your conversations, model configurations, or any other data passing through the connection.
What Problem Does LM Link Actually Solve?
Imagine you have a powerful workstation sitting on your desk at home, equipped with an NVIDIA GB10 Grace Blackwell superchip capable of running massive open-source AI models with 120 billion parameters. But you're traveling with a thin laptop that simply doesn't have the computing power to run those same models locally. Before LM Link, you'd either have to leave that expensive hardware idle or compromise on privacy by using cloud-based AI services. LM Link closes that gap.
"Let's say you have a GB10 sitting at home on your home office desk, but you'd like to travel with your laptop. You can keep your GB10 busy while on the go and maintain full privacy and control as if you were sitting in front of your desk," explained Kayden Fu, product manager at LM Studio.
The feature launched in preview mode and immediately overwhelmed the company's servers: sign-up volume on launch morning was high enough to crash LM Studio's infrastructure, a sign of just how much demand exists for this capability.
How to Set Up Remote Access to Your Local AI Models
- Install the Headless Daemon: Download and install llmster, LM Studio's terminal-based daemon that runs without a graphical interface, on your workstation or home office computer.
- Run Two Terminal Commands: Execute just two commands in your terminal to establish the connection between your workstation and LM Studio's network.
- Access From Any Device: Once configured, your workstation automatically appears in LM Studio on every other device tied to your account, allowing you to connect from laptops, tablets, or other computers anywhere.
The setup process is intentionally simple: LM Studio's engineering team designed it so that users don't need deep technical knowledge to get remote access working. The feature builds on the foundation of LM Studio's 0.4 release, which introduced llmster as a way to run AI models entirely from the command line without a graphical interface.
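Once the workstation is linked, a client device talks to it much like a local LM Studio server. As a rough sketch (not official LM Link documentation): LM Studio serves an OpenAI-compatible API, by default on port 1234, so a laptop could address the remote machine by its private-network hostname. The hostname `gb10-workstation`, the model identifier, and the `build_chat_request` helper below are all illustrative assumptions, not names from LM Studio's docs.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for an LM Studio server.

    Illustrative helper, not part of any LM Studio SDK.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical hostname: with LM Link, the workstation joins your private
# network, so it would be reachable by name rather than a public IP.
req = build_chat_request(
    "http://gb10-workstation:1234",   # assumed hostname; 1234 is LM Studio's default port
    "openai/gpt-oss-120b",            # example 120B-class open model identifier
    "Hello from the road",
)
# resp = urllib.request.urlopen(req)  # uncomment on a device connected to the tailnet
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The point of the sketch: the laptop only builds an ordinary HTTP request; the encrypted tunnel underneath is transparent to application code.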
How Does LM Link Keep Your Data Private?
Privacy was the central concern when LM Studio's team designed LM Link. The company partnered with Tailscale, a mesh VPN platform built on the WireGuard protocol, to create encrypted connections between devices without exposing your hardware to the public internet. Here's how the security works:
- End-to-End Encryption: Once a connection is established between your devices, all communication is encrypted, meaning only your devices can read the data passing through.
- Zero Server Access: LM Studio's servers and Tailscale's servers cannot access your model lists, chat histories, or model load configurations, even during the connection process.
- Encrypted Tunnel: All traffic flows directly between your own devices through an encrypted tunnel, so the infrastructure in the middle only ever forwards bytes it cannot read.
"LM Studio servers as well as Tailscale servers don't have access to any communication, whether that's your model list, chats, or model load configurations. We don't see any of them. Once that connection is established between your devices, it's end-to-end encrypted. Everything else travels through an encrypted tunnel that neither party can read," stated Fu.
This approach means you're not trusting a third party with your AI conversations or data. Your workstation and your laptop communicate directly with each other through an encrypted channel, and the companies providing the infrastructure simply cannot intercept or view what's being transmitted.
Why This Matters for Remote Work and Privacy-Conscious Users
The timing of LM Link's release proved its value almost immediately. When a blizzard shut down Brooklyn just 24 hours after the feature launched, LM Studio's engineering team was able to continue working without missing a beat, accessing the Dell Pro Max workstations with GB10 superchips sitting in their office across the borough directly from their homes. The real-world stress test validated that the feature works reliably when you actually need it.
For organizations and individuals concerned about data privacy, this represents a significant shift. You can now maintain the computational power of a high-end workstation while working remotely, without ever uploading your data to cloud servers. Your AI models, your conversations, and your configurations stay on your hardware, under your control, accessible from anywhere.
LM Link is currently in preview status, and the LM Studio team is actively developing additional features. Benchmarks for multi-user concurrent workflows on GB10 workstations are in development, suggesting the company is working toward supporting teams of users accessing the same hardware simultaneously. Early user response has been overwhelmingly positive, with users connecting multiple devices within minutes of gaining access to the preview.