services: ollama: remove some LLM models which I've found not to be useful

2024-11-03 12:16:27 +00:00
parent 7b04d24886
commit 3aadc12f04


@@ -24,7 +24,7 @@ let
   codestral-22b
   # deepseek-coder-7b # subpar to deepseek-coder-v2 in nearly every way
   deepseek-coder-v2-16b # GREAT balance between speed and code quality. code is superior to qwen2_5 in some ways, and inferior in others
-  deepseek-coder-v2-16b-lite-instruct-q5_1 # higher-res version of default 16b (so, better answers?)
+  # deepseek-coder-v2-16b-lite-instruct-q5_1 # higher-res version of default 16b (but in practice, is more rambly and less correct)
   # falcon2-11b # code examples are lacking
   # gemma2-9b # fast, but not great for code
   # glm4-9b # it generates invalid code
@@ -37,17 +37,17 @@ let
   # mistral-nemo-12b # it generates invalid code
   mistral-small-22b # quality comparable to qwen2_5
   # mistral-large-123b # times out launch on desko
-  mixtral-8x7b # generates valid, if sparse, code
+  # mixtral-8x7b # generates valid, if sparse, code; only for the most popular languages
   # phi3_5-3b # generates invalid code
   # qwen2_5-7b # notably less quality than 32b (i.e. generates invalid code)
   qwen2_5-14b # *almost* same quality to 32b variant, but faster
-  qwen2_5-32b-instruct-q2_K # lower-res version of default 32b (so, faster?)
+  # qwen2_5-32b-instruct-q2_K # lower-res version of default 32b (so, slightly faster, but generates invalid code where the full res generates valid code)
   qwen2_5-32b # generates 3~5 words/sec, but notably more accurate than coder-7b
   # qwen2_5-coder-7b # fast, and concise, but generates invalid code
   # solar-pro-22b # generates invalid code
   # starcoder2-15b-instruct # it gets stuck
   # wizardlm2-7b # generates invalid code
-  yi-coder-9b # subpar to qwen2-14b, but it's still useful
+  # yi-coder-9b # subpar to qwen2-14b, but it's still useful
 ];
 };
 models = "${modelSources}/share/ollama/models";
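For context, the commented list above sits inside a `let` binding that collects the enabled model derivations into `modelSources` and exposes their store path as `models`. A minimal sketch of that shape, assuming a `pkgs.symlinkJoin` wrapper (only `modelSources` and `models` appear in the diff; everything else here is illustrative, not the repository's actual code):

```nix
let
  # Assumed wrapper: merge the enabled model packages into one tree,
  # so they all land under a single share/ollama/models directory.
  modelSources = pkgs.symlinkJoin {
    name = "ollama-models";
    paths = [
      codestral-22b
      deepseek-coder-v2-16b
      mistral-small-22b
      qwen2_5-14b
      qwen2_5-32b
    ];
  };
  models = "${modelSources}/share/ollama/models";
in
  # illustrative only: point the ollama daemon at the merged model store
  # via the OLLAMA_MODELS environment variable it reads at startup
  { environment.OLLAMA_MODELS = models; }
```

The actual service wiring is outside this hunk; the diff only shows the model list and the `models` path computed from it.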