native-llm - v0.2.0

    Variable RECOMMENDED_MODELS (Const)

    RECOMMENDED_MODELS: {
        fast: "gemma-3n-e2b";
        balanced: "gemma-3n-e4b";
        quality: "gemma-3-27b";
        edge: "gemma-3n-e2b";
        multilingual: "qwen3-8b";
        reasoning: "deepseek-r1-14b";
        code: "qwen-2.5-coder-7b";
        longContext: "gemma-3-27b";
    } = ...

    Model recommendations by use case

    Type Declaration

    • Readonly fast: "gemma-3n-e2b"

      Fast responses, simple tasks (~2GB RAM)

    • Readonly balanced: "gemma-3n-e4b"

      Best quality/speed balance (~3GB RAM)

    • Readonly quality: "gemma-3-27b"

      Maximum quality (~18GB RAM)

    • Readonly edge: "gemma-3n-e2b"

      Best for edge/mobile (minimal RAM)

    • Readonly multilingual: "qwen3-8b"

      Best multilingual support

    • Readonly reasoning: "deepseek-r1-14b"

      Complex reasoning (chain-of-thought)

    • Readonly code: "qwen-2.5-coder-7b"

      Code generation

    • Readonly longContext: "gemma-3-27b"

      Long documents (128K context)
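
    A minimal usage sketch in TypeScript, assuming RECOMMENDED_MODELS is imported from the package root; the loader call at the end is hypothetical and only illustrates where the selected model id would be used:

    import { RECOMMENDED_MODELS } from "native-llm";

    // Keys of the recommendation table: "fast" | "balanced" | "quality" | ...
    type UseCase = keyof typeof RECOMMENDED_MODELS;

    // Look up the recommended model id for a given use case.
    function recommendedModelFor(useCase: UseCase): string {
        return RECOMMENDED_MODELS[useCase];
    }

    const modelId = recommendedModelFor("balanced"); // "gemma-3n-e4b"

    // Hypothetical loader call, for illustration only; substitute the
    // package's actual model-loading API.
    // const model = await loadModel(modelId);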