FAQs
Is NELA fully local?
Core workflows are local-first: inference, indexing, and retrieval are designed to run on-device. An internet connection is needed mainly for model downloads and optional remote-browsing flows.
Do I need every model category installed?
No. Start with one LLM for chat. Add VLM for image tasks, TTS/ASR for audio workflows, and Embedding/Classifier/Grader for stronger RAG quality.
What is the difference between Chat, Vision, Audio, Podcast, and Mindmap modes?
Each mode maps to a different task path in the backend router: Chat for general text/RAG, Vision for image-grounded prompts, Audio for speech workflows, Podcast for scripted two-speaker generation, and Mindmap for concept-tree outputs.
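The mode-to-task mapping can be pictured as a simple dispatch table. This is only an illustrative sketch; the handler names and router shape below are assumptions, not NELA's actual internal API.

```python
from typing import Callable, Dict

# Hypothetical handlers standing in for each task path in the backend router.
# These names are illustrative assumptions, not NELA's real functions.
def handle_chat(prompt: str) -> str:    return f"chat:{prompt}"
def handle_vision(prompt: str) -> str:  return f"vision:{prompt}"
def handle_audio(prompt: str) -> str:   return f"audio:{prompt}"
def handle_podcast(prompt: str) -> str: return f"podcast:{prompt}"
def handle_mindmap(prompt: str) -> str: return f"mindmap:{prompt}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "chat": handle_chat,        # general text / RAG
    "vision": handle_vision,    # image-grounded prompts
    "audio": handle_audio,      # speech workflows
    "podcast": handle_podcast,  # scripted two-speaker generation
    "mindmap": handle_mindmap,  # concept-tree outputs
}

def route(mode: str, prompt: str) -> str:
    """Dispatch a prompt to the handler for the selected mode."""
    try:
        return ROUTES[mode](prompt)
    except KeyError:
        raise ValueError(f"unknown mode: {mode}") from None
```

The point of the sketch is that each mode is a distinct entry point, which is why installing only a chat LLM leaves the other modes unavailable.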
Why are my RAG answers weak even though files are uploaded?
Check that an embedding model is installed and that ingestion completed successfully. For better retrieval quality, install the classifier and grader models as well. Rebuild the index if you change the embedding model family, since vectors produced by different families are not comparable.
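To see why the grader model helps, here is a minimal retrieve-then-grade sketch: embedding similarity pulls candidate chunks, and a grader filters out weak matches before they reach the prompt. This is a generic illustration of the pattern, not NELA's actual pipeline; the function names and the 0.5 threshold are assumptions.

```python
import math
from typing import Callable, Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: List[float], index: List[Dict], top_k: int = 3) -> List[Dict]:
    """Return the top_k chunks whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:top_k]

def grade(chunks: List[Dict], grader: Callable[[str], float],
          threshold: float = 0.5) -> List[Dict]:
    """Keep only chunks the grader scores as relevant (score in 0..1)."""
    return [c for c in chunks if grader(c["text"]) >= threshold]
```

If answers are weak, the failure is usually at the first stage: no embeddings means `retrieve` has nothing to rank, and a missing grader means low-relevance chunks pass through unfiltered.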
Can I move a project between machines?
Yes. Export your workspace as a .nela project file and import it on another machine. Keep in mind that model files may still need to be installed on the destination machine.
How do I choose a model size for my hardware?
Use the compatibility hints in Settings. Smaller quantized models are faster and lighter; larger models can improve quality but need more RAM, CPU, and disk space.
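As a rough rule of thumb, a model's memory footprint is its parameter count times the bits per weight, plus runtime overhead. The formula and the 20% overhead factor below are a back-of-envelope assumption, not NELA's sizing logic:

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate in GB: weights (params * bits / 8) plus ~20%
    overhead for KV cache and runtime buffers (assumed factor)."""
    weights_gb = params_billions * (bits_per_weight / 8)
    return weights_gb * overhead

# A 7B model at 4-bit quantization: 7 * 0.5 * 1.2 = 4.2 GB
# The same model at 16-bit: 7 * 2 * 1.2 = 16.8 GB
```

This is why a quantized 7B model fits comfortably on a typical laptop while the full-precision version may not; treat the numbers as a sanity check alongside the in-app compatibility hints.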