Feature Request: Self-Hosted AI Code Generation (Reflex.DIY) - Similar to Bolt.DIY #5929
Closed
joganubaid
started this conversation in
General
Replies: 2 comments
-
Why do you come here just to dump a bunch of LLM-generated text like I have time to read that? LLMs think everything is a good idea; you can't trust them.
No
We accept PRs that improve the product or fix bugs
This would be competing with our commercial offering
No
0 replies
-
Amazing reply
0 replies
-
🎯 The Problem/Need
The Reflex framework is amazing for building Python web apps, and Reflex Build's AI code generation is incredibly powerful. However, many developers in the community need a self-hosted solution for several reasons:
API Key Control: Developers want to use their own LLM API keys (OpenAI, Anthropic, Google) or local models (Ollama, LM Studio) to avoid vendor lock-in and control costs.
Privacy & Compliance: Organizations with strict data policies need to keep code generation entirely within their infrastructure.
Local Development: Many developers prefer running everything locally during development without requiring internet connectivity or cloud subscriptions.
Proven Demand: Projects like Bolt.DIY have demonstrated massive community interest in self-hosted AI builders. The success of Bolt.DIY (which allows self-hosting with custom LLM APIs) shows this is a critical need.
Currently, Reflex Build requires a cloud subscription and connects to Reflex's proprietary backend, which limits flexibility for these use cases.
💡 The Proposal
Create a "Reflex.DIY" companion project (or integrate it into core) that enables self-hosted AI code generation.
Core Features:
1. LLM Provider Flexibility
Support for multiple providers via API keys: OpenAI, Anthropic (Claude), Google (Gemini)
Local LLM support via Ollama, LM Studio, or any OpenAI-compatible endpoint
Configurable through environment variables or a config file (see the configuration sketch after this list)
2. Open-Source Prompt System
Open-source the prompt engineering logic for generating Reflex code
Allow community contributions to improve prompts
Support for custom prompt templates
3. Self-Hosted Architecture
Docker containerization for easy deployment
Can run on any VPS, local machine, or cloud (AWS, GCP, Azure, Railway, Render)
Web UI similar to Reflex Build but running locally
CLI-based generation as an alternative
4. Clear Separation
Keep Reflex Build as the premium hosted option with enhanced features (database integration, GitHub sync, advanced debugging)
Position Reflex.DIY as the community/developer edition
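To make the provider-flexibility point concrete, here is a minimal sketch of what provider-agnostic configuration could look like. It is not an existing Reflex or Reflex Build API: the REFLEX_DIY_* environment variable names and the generate_reflex_code helper are hypothetical, and the only assumption is an OpenAI-compatible chat endpoint, which hosted providers as well as Ollama and LM Studio expose.

```python
import os

from openai import OpenAI  # any OpenAI-compatible server works (OpenAI, Ollama, LM Studio, ...)

# Hypothetical configuration -- illustrative names, not an existing Reflex setting:
#   REFLEX_DIY_BASE_URL  e.g. "https://api.openai.com/v1" or "http://localhost:11434/v1" (Ollama)
#   REFLEX_DIY_API_KEY   a real key for hosted providers; local servers accept any placeholder
#   REFLEX_DIY_MODEL     e.g. "gpt-4o-mini" or "llama3.1"
client = OpenAI(
    base_url=os.environ.get("REFLEX_DIY_BASE_URL", "http://localhost:11434/v1"),
    api_key=os.environ.get("REFLEX_DIY_API_KEY", "not-needed-for-local"),
)


def generate_reflex_code(request: str) -> str:
    """Ask the configured model for Reflex component code (the prompt is a stand-in, not Reflex Build's)."""
    response = client.chat.completions.create(
        model=os.environ.get("REFLEX_DIY_MODEL", "llama3.1"),
        messages=[
            {"role": "system", "content": "You generate Python components for the Reflex framework (reflex.dev)."},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Doubles as the CLI-style usage: point the env vars at any provider and run the script.
    print(generate_reflex_code("A counter page with + and - buttons."))
```

Swapping providers then only requires changing the base URL, key, and model name, which is exactly the flexibility Bolt.DIY offers today.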
🚀 Benefits for Reflex
1. Massive Community Growth
Attracts developers who prefer self-hosting (proven to be a large segment based on Bolt.DIY's success)
Removes barrier to entry for privacy-conscious organizations
Increases overall Reflex framework adoption
2. Community Contributions
Open-source prompts = community improvements
Faster iteration on AI generation quality
More diverse use cases discovered
3. Competitive Advantage
Reflex would be the only Python framework with both hosted AND self-hosted AI generation
Differentiates from competitors like Streamlit, Gradio, Dash
4. Proven Business Model
Follows successful open-core strategy of GitLab, Supabase, Sentry
Free tier drives paid conversions (many Bolt.DIY users eventually use Bolt.new)
Enterprise customers still prefer managed solutions
📚 Examples & Precedents
1. Bolt.DIY Success
GitHub: stackblitz-labs/bolt.diy
Fully self-hostable with custom LLM APIs
Active community, multiple deployment options
Proves developers want this capability
2. Reflex Already Supports Self-Hosting
Excellent documentation for self-hosting Reflex apps
Docker deployment guides exist
This would extend self-hosting to the AI generation layer
3. LLM Integration Examples
The reflex-llm-examples repo shows Reflex already has patterns for LLM integration
These patterns could be extended to code generation (see the UI sketch after this list)
4. Market Validation
1M+ apps built with Reflex
27k+ GitHub stars showing strong community
30% of Fortune 500 using Reflex - many have self-hosting requirements
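As a rough illustration of how those chat-style patterns could be repurposed, here is a minimal sketch of a locally running generation UI. Everything in it is hypothetical rather than an existing Reflex.DIY design: GeneratorState, the page layout, and the stub generate_reflex_code (which stands in for the provider-agnostic helper sketched earlier) are illustrative only; the component and state APIs used are plain Reflex.

```python
import reflex as rx


def generate_reflex_code(request: str) -> str:
    """Stub standing in for the provider-agnostic helper sketched earlier."""
    return f"# generated Reflex code for: {request}"


class GeneratorState(rx.State):
    """Holds the user's request and the model's output."""

    prompt: str = ""
    generated_code: str = ""

    def update_prompt(self, value: str):
        self.prompt = value

    def generate(self):
        # In a real build this would call the configured (possibly local) model.
        self.generated_code = generate_reflex_code(self.prompt)


def index() -> rx.Component:
    return rx.vstack(
        rx.heading("Reflex.DIY (sketch)"),
        rx.input(placeholder="Describe the app you want...", on_change=GeneratorState.update_prompt),
        rx.button("Generate", on_click=GeneratorState.generate),
        rx.text(GeneratorState.generated_code, white_space="pre-wrap"),
    )


app = rx.App()
app.add_page(index)
```

Run with reflex run, this gives the same prompt-to-code loop as a hosted builder, but against whichever endpoint the environment points at.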
🤝 How I (We) Can Help
I'm very interested in contributing to this effort:
Research & Design: Happy to help design the architecture and API structure
Prompt Engineering: Can contribute to building effective prompts for Reflex code generation
Testing: Will test extensively and provide feedback
Documentation: Can help document setup and usage
I believe this could be built as:
A separate repo (reflex-dev/reflex-diy) that depends on core Reflex
Or as an optional module in the main Reflex package
Or as a community-maintained project with official blessing
❓ Questions for the Team
Is this something the Reflex team would officially support?
Would you accept PRs for this functionality?
Are there technical/business constraints we should be aware of?
Could parts of Reflex Build's prompt system be open-sourced?
🎬 Conclusion
This feature would democratize AI-powered Python web development while maintaining Reflex's commercial viability. It aligns with Reflex's open-source philosophy and could significantly accelerate adoption.
I'd love to hear the team's thoughts and the community's interest in this direction!
Note: I'm not asking to replicate all of Reflex Build's features (database auto-detection, GitHub integration, advanced debugging). Those remain excellent premium differentiators. This is specifically about making the core AI code generation capability available for self-hosting.