LM Studio has been my go-to app for running local LLMs since I discovered it. It's easy to use, the UI is well-polished, and you can have an AI model up and running in a few clicks — even if you've never touched a terminal in your life. It's one of the best tools if you want to enjoy the benefits of a local LLM.

But while LM Studio is free to use, it isn't open-source. Some of its components, especially the command-line tooling, carry open licenses, but the app itself is proprietary software. For a tool that can easily sit at the core of your local AI workflows, a license that could change whenever the parent company wants was a risk I wasn't comfortable taking. That's when I found Jan.

I thought I’d miss LM Studio. I didn’t

Why Jan feels like a real replacement for LM Studio

The alternative that eventually replaced LM Studio for me is Jan. It's a desktop application that lets you run LLMs fully offline — much like LM Studio, but not only is it completely free, it's also open-source, with all of its source code available on GitHub. There are no licensing surprises, no proprietary lock-ins, and no fine print to worry about.

The interface is a lot like ChatGPT's, which may be a good or bad thing depending on your taste. If you're new to the concept of local LLMs, though, it's familiar, clean, and welcoming. Jan was built to make local AI accessible to everyone, so the ChatGPT-like UI design isn't an accident. You get a chat window, a model hub, and a settings page that doesn't require developer-level expertise to understand.

If you're switching from LM Studio, the model library will feel familiar too. You can browse and download popular open-source models like Llama, Gemma, Mistral, Qwen, DeepSeek, and more directly from within the app. Models are also tagged to tell you whether they'll run well on your hardware or if they'll be a bit too much to handle.

Jan website open on Windows 11.
Yadullah Abidi / MakeUseOf

Jan also comes with an OpenAI-compatible API server. This means you can point other tools, such as Cursor, Open WebUI, custom scripts, and whatever else you might be building, at Jan's local server and have them work just as they would against OpenAI's API. This is great for testing code and prototypes before you start interfacing with OpenAI's actual (and paid) API. The server also supports CORS (Cross-Origin Resource Sharing) by default, which makes it easy to hook up to web projects.
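To give a sense of what "OpenAI-compatible" means in practice, here's a minimal sketch that builds a chat-completion request aimed at Jan's local server using only the Python standard library. The port and model name are assumptions for illustration; check the Local API Server settings in the app for the actual values on your machine.

```python
import json
from urllib import request

# Assumed local endpoint; Jan's server address and port are configurable,
# so verify them in the app's Local API Server settings.
BASE_URL = "http://localhost:1337/v1"

def build_chat_request(prompt: str, model: str = "qwen2.5-7b-instruct"):
    """Build an OpenAI-style chat completion request for Jan's local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize CORS in one sentence.")
# request.urlopen(req) would return an OpenAI-shaped JSON response
# once Jan's local server is running.
```

Because the request shape matches OpenAI's Chat Completions API, swapping between Jan and the real API is usually just a matter of changing the base URL and API key.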

Performance, although dictated by the model you're using, is quite similar to LM Studio's. Token generation speed isn't significantly different either, and the underlying inference engine is essentially the same. You also get built-in extensions in case you want to extend the app's functionality.

Jan Logo
OS
Windows, macOS, Linux
Developer
Jan
Price model
Free, Open-source

A free, open-source AI chat assistant that runs local large language models on your PC, no cloud required.

The privacy benefits are actually meaningful

Local models, no cloud, no surprises

Jan and LM Studio open on a laptop.
Yadullah Abidi / MakeUseOf

Running AI locally is already the privacy-conscious choice, and using Jan lets you go a step further. Everything stays on your machine: your preferences, chat history, model parameters, and the models themselves. There's no account to create, no telemetry data to worry about, and no cloud dependency required. You can have a fully air-gapped AI experience if you want it.

This approach has two drawbacks: first, you're limited to the AI models your hardware can run, and second, the open-source LLMs you can download aren't always as capable as their cloud counterparts. There are tasks a local LLM is great for, but it can't do everything.

Jan remedies this by supporting remote APIs from providers like OpenAI and Anthropic if you want cloud access to their respective models. This approach doesn't force you to choose between the two. It's local (and private) by default, but you always have the option of a cloud-based model when the task at hand requires one.
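That local-first, cloud-optional setup boils down to a simple routing decision. The sketch below is purely illustrative, not Jan's actual configuration format; the endpoint URLs and model names are assumptions.

```python
# Local-first routing sketch: default to a local OpenAI-compatible server,
# and only reach for a cloud provider when a task demands a larger model.
# URLs and model names are illustrative assumptions, not Jan's real config.
LOCAL_BACKEND = {"base_url": "http://localhost:1337/v1", "model": "qwen2.5-7b-instruct"}
CLOUD_BACKEND = {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"}

def pick_backend(needs_cloud: bool) -> dict:
    """Stay local (and private) unless the task needs a frontier model."""
    return CLOUD_BACKEND if needs_cloud else LOCAL_BACKEND
```

The important part is the default: privacy-sensitive work never leaves the machine unless you explicitly opt into a cloud provider.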

It’s not flawless

The rough edges you’ll notice quickly

Jan GPU instruction settings.
Yadullah Abidi / MakeUseOf

If you're switching over from LM Studio, you're going to miss some of its detail, especially GPU layer configuration, which is far more visual and accessible in LM Studio. Features like document analysis are also more mature there. Jan has been catching up, but if you want fine-grained control over every inference parameter, LM Studio is the better choice.

If you prefer working in the terminal, Ollama beats both LM Studio and Jan. Startup times for the two GUI apps are comparable, but slower than Ollama's. Jan sits somewhere in the middle of the competition: more transparent than LM Studio and significantly more user-friendly than Ollama.

It quietly became my default

Why I stopped opening LM Studio altogether

There's not much stopping you from shifting to Jan from LM Studio. The learning curve is almost flat, and most of your models and data will carry over cleanly. The difference is in ownership of your local AI stack. With Jan, you're in full control, and no unexpected pivots or licensing changes will catch you off guard.

In practical, daily use, Jan does everything LM Studio does, with the added peace of mind that comes from software you can actually audit. You still get all the control and advantages you need, just without any potential licensing hassle.