Developers Bypass JetBrains AI Assistant Limitations with Custom LLM Proxy Tool

BigGo Editorial Team

In the rapidly evolving landscape of AI-assisted development, programmers are finding creative ways to overcome the limitations of built-in AI tools. A new open-source project called ProxyAsLocalModel has emerged as a solution for developers who want to use their preferred large language models (LLMs) with JetBrains' AI Assistant, bypassing the platform's restrictive free-tier quotas.

Extending JetBrains AI Assistant Beyond Default Options

ProxyAsLocalModel serves as a bridge between third-party LLM APIs and JetBrains IDEs, proxying those services as local models the AI Assistant already knows how to talk to. The tool addresses a common frustration: the AI Assistant ships with limited free quotas that deplete quickly, yet its only supported local-model integrations are LM Studio and Ollama. By mimicking those supported local endpoints, the proxy lets developers use LLM services they have already paid for, such as OpenAI, Claude, Gemini, Qwen, Deepseek, Mistral, and SiliconFlow.
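
To make the mechanism concrete, here is a minimal sketch of the idea in Kotlin with Ktor, the same stack the project uses. This is not ProxyAsLocalModel's actual code: it stands up a local server that impersonates the OpenAI-compatible surface LM Studio exposes (a /v1/models listing plus /v1/chat/completions) and forwards chat requests to a real provider. The upstream URL, key, and model id are placeholders.

    // Minimal sketch of the proxy idea, assuming Ktor 2.x; not ProxyAsLocalModel's
    // actual code. It impersonates the OpenAI-compatible surface that LM Studio
    // serves locally and forwards chat requests to a real provider.
    import io.ktor.client.*
    import io.ktor.client.engine.cio.*
    import io.ktor.client.request.*
    import io.ktor.client.statement.*
    import io.ktor.http.*
    import io.ktor.server.application.*
    import io.ktor.server.engine.*
    import io.ktor.server.netty.*
    import io.ktor.server.request.*
    import io.ktor.server.response.*
    import io.ktor.server.routing.*

    const val UPSTREAM_URL = "https://api.openai.com/v1/chat/completions" // placeholder: any OpenAI-style endpoint
    const val UPSTREAM_KEY = "sk-your-key-here" // placeholder

    fun main() {
        val client = HttpClient(CIO)
        embeddedServer(Netty, port = 1234) { // LM Studio's default local port
            routing {
                // The IDE probes this route to discover "local" models.
                get("/v1/models") {
                    call.respondText(
                        """{"object":"list","data":[{"id":"proxied-model","object":"model"}]}""",
                        ContentType.Application.Json
                    )
                }
                // Forward the chat request to the real provider and relay the reply.
                // (Buffered for brevity; a streaming variant is sketched further below.)
                post("/v1/chat/completions") {
                    val body = call.receiveText()
                    val upstream = client.post(UPSTREAM_URL) {
                        header(HttpHeaders.Authorization, "Bearer $UPSTREAM_KEY")
                        contentType(ContentType.Application.Json)
                        setBody(body)
                    }
                    call.respondText(upstream.bodyAsText(), ContentType.Application.Json)
                }
            }
        }.start(wait = true)
    }

Pointing the AI Assistant's LM Studio setting at localhost:1234 would then route its requests through whichever provider the proxy targets.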

The project is also notable for its implementation in Kotlin with Ktor and kotlinx.serialization, which allows it to be compiled to GraalVM native images for cross-platform distribution. Avoiding the reflection-heavy official SDKs, which complicate native-image compilation, yields faster startup times and lower memory usage.
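
In practice that means request and response types are serialized by code generated at compile time, which fits GraalVM's closed-world model: no reflection metadata has to be registered for the native image. A small illustration (the ChatMessage type here is invented for the example):

    // kotlinx.serialization generates this class's serializer at compile time,
    // so a GraalVM native image needs no reflection configuration for it.
    import kotlinx.serialization.*
    import kotlinx.serialization.json.*

    @Serializable
    data class ChatMessage(val role: String, val content: String)

    fun main() {
        val msg = Json.decodeFromString<ChatMessage>("""{"role":"user","content":"hello"}""")
        println(msg.content) // prints "hello"
    }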

Supported LLM Providers in ProxyAsLocalModel

  • Proxy from: OpenAI, Claude, DashScope (Alibaba Qwen), Gemini, Deepseek, Mistral, SiliconFlow
  • Proxy as: LM Studio, Ollama
  • API Support: Streaming chat completion API only (see the streaming sketch below)
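
Since only the streaming chat completion API is proxied, a faithful sketch relays server-sent-event chunks as they arrive instead of buffering the whole reply. Assuming the same placeholders as the earlier sketch, the buffered handler could be swapped for something like this:

    // Streaming pass-through: copy the upstream's SSE chunks to the IDE as they
    // arrive. Drop-in replacement for the buffered handler in the earlier sketch;
    // additionally requires: import io.ktor.utils.io.*
    post("/v1/chat/completions") {
        val body = call.receiveText()
        client.preparePost(UPSTREAM_URL) {
            header(HttpHeaders.Authorization, "Bearer $UPSTREAM_KEY")
            contentType(ContentType.Application.Json)
            setBody(body)
        }.execute { upstream ->
            call.respondBytesWriter(contentType = ContentType.Text.EventStream) {
                upstream.bodyAsChannel().copyTo(this) // relay bytes without buffering
            }
        }
    }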

JetBrains AI Assistant Improvements (2025 Release)

  • Project-level enable/disable options
  • Preference settings for local vs. online models
  • Support for major LLM providers (OpenAI, Claude, Gemini)
  • Better overall IDE integration

JetBrains AI Assistant Evolution and User Experience

Community discussions reveal a mixed but improving reception of JetBrains' AI Assistant. Early versions were criticized for their limitations and a tendency to rewrite entire files rather than focusing on specific functions or code blocks. As one user recalled:

"I tried using the AI assistant when it came out but seemingly was too stupid to figure out how to use it correctly. I tried to get it to write single functions or short blocks of code for me, but it would always start rewriting the whole file from scratch which was way too slow."

However, recent updates have significantly improved the tool. The 2025 release adds project-level controls, support for major LLM providers such as OpenAI, Claude, and Gemini, and tighter integration with the IDE workflow. Users report success with code reviews, generating REST endpoints, writing tests, and exploring unfamiliar libraries. Junie, JetBrains' newer coding agent, has also drawn positive feedback for solving complex problems that other LLMs struggled with.

Alternative Solutions and Legal Considerations

While ProxyAsLocalModel offers one approach to expanding the AI Assistant's model options, community members suggested alternatives such as OpenRouter, which exposes hundreds of models through a single endpoint at no cost beyond the providers' public prices. Similar projects mentioned in the discussion include enchanted-ollama-openrouter-proxy and LiteLLM Gateway.
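
Because OpenRouter speaks the same OpenAI-style protocol, pointing a proxy like the one sketched earlier at it is largely a matter of swapping the upstream URL. A standalone request, with a placeholder key and an illustrative model id, looks roughly like this:

    // Calling OpenRouter's OpenAI-compatible endpoint directly; the model id
    // shown is illustrative, and <OPENROUTER_KEY> is a placeholder.
    import io.ktor.client.*
    import io.ktor.client.engine.cio.*
    import io.ktor.client.request.*
    import io.ktor.client.statement.*
    import io.ktor.http.*
    import kotlinx.coroutines.runBlocking

    fun main() = runBlocking {
        val client = HttpClient(CIO)
        val response = client.post("https://openrouter.ai/api/v1/chat/completions") {
            header(HttpHeaders.Authorization, "Bearer <OPENROUTER_KEY>")
            contentType(ContentType.Application.Json)
            setBody("""{"model":"openai/gpt-4o-mini","messages":[{"role":"user","content":"hi"}]}""")
        }
        println(response.bodyAsText())
    }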

One important point raised in the discussion concerns the potential legal implications of using commercial AI services for development. Some users noted that many AI service providers include non-compete clauses in their terms of service, potentially exposing businesses to legal risk if they use these services to develop competing products. This raises questions about the appropriate use cases for these AI tools in professional environments.

As AI-assisted development continues to mature, tools like ProxyAsLocalModel represent the community's drive to customize and optimize their workflows, even as the underlying platforms evolve. For developers seeking to maximize their productivity with JetBrains IDEs, these proxy solutions offer a way to leverage preferred LLM services while navigating the constraints of platform-specific implementations.

Reference: ProxyAsLocalModel