Outscale
Introduction
Mistral AI models are available on the Outscale platform as managed deployments. Through the Outscale marketplace, you can subscribe to a Mistral service that will, on your behalf, provision virtual machines and GPUs and then deploy the models on them.
The following models are currently available:
- Mistral Small (24.09)
- Codestral (24.05)
- Ministral 8B (24.10)
For more details, visit the models page.
Getting started
The following sections outline the steps to query a Mistral model on the Outscale platform.
Deploying the model
Deploy the service with the model of your choice by following the steps described in the Outscale documentation.
Querying the model (chat completion)
Deployed models expose a REST API that you can query using Mistral's SDKs or plain HTTP calls. To run the following examples, you will need to set these environment variables:
- OUTSCALE_SERVER_URL: the URL of the virtual machine hosting your Mistral model
- OUTSCALE_MODEL_NAME: the name of the model to query (e.g. small-2409, codestral-2405)
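As a quick sanity check before calling the API, you can build the chat-completions endpoint from the base URL yourself. A minimal sketch (the placeholder URL is an assumption; your OUTSCALE_SERVER_URL will differ):

```python
def chat_endpoint(server_url: str) -> str:
    """Build the chat-completions endpoint from the VM base URL,
    tolerating a trailing slash."""
    return server_url.rstrip("/") + "/v1/chat/completions"

# Placeholder URL for illustration only:
print(chat_endpoint("https://my-vm.example.com/"))
# → https://my-vm.example.com/v1/chat/completions
```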
- cURL
- Python
- TypeScript
echo $OUTSCALE_SERVER_URL/v1/chat/completions
echo $OUTSCALE_MODEL_NAME
curl --location $OUTSCALE_SERVER_URL/v1/chat/completions \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{
"model": "'"$OUTSCALE_MODEL_NAME"'",
"temperature": 0,
"messages": [
{"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}
],
"stream": false
}'
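The cURL call above returns a JSON body whose shape matches what the SDK examples below access as `resp.choices[0].message.content`. A sketch of extracting the answer from a trimmed response (the JSON values here are illustrative, not real model output):

```python
import json

# Trimmed, illustrative example of the /v1/chat/completions response shape:
raw = '''{
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Claude Monet."}}
  ]
}'''

resp = json.loads(raw)
print(resp["choices"][0]["message"]["content"])
# → Claude Monet.
```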
import os
from mistralai import Mistral
client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])
resp = client.chat.complete(
model=os.environ["OUTSCALE_MODEL_NAME"],
messages=[
{
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
}
],
temperature=0
)
print(resp.choices[0].message.content)
import { Mistral } from "@mistralai/mistralai";
const client = new Mistral({
serverURL: process.env.OUTSCALE_SERVER_URL || ""
});
const modelName = process.env.OUTSCALE_MODEL_NAME|| "";
async function chatCompletion(user_msg: string) {
const resp = await client.chat.complete({
model: modelName,
messages: [
{
content: user_msg,
role: "user",
},
],
});
if (resp.choices && resp.choices.length > 0) {
console.log(resp.choices[0]);
}
}
chatCompletion("Who is the best French painter? Answer in one short sentence.");
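If you would rather not use an SDK, the same request can be issued with the standard library alone. A minimal sketch that builds (but does not send) the HTTP request, with a placeholder URL and model name as assumptions:

```python
import json
import urllib.request

def build_chat_request(server_url: str, model: str, user_msg: str) -> urllib.request.Request:
    """Construct the POST request for /v1/chat/completions without sending it."""
    body = json.dumps({
        "model": model,
        "temperature": 0,
        "messages": [{"role": "user", "content": user_msg}],
        "stream": False,
    }).encode()
    return urllib.request.Request(
        server_url.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )

# Placeholder values for illustration only:
req = build_chat_request("https://my-vm.example.com", "small-2409",
                         "Who is the best French painter? Answer in one short sentence.")
print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)
```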
Querying the model (FIM completion)
Codestral can also be queried using an additional completion mode called fill-in-the-middle (FIM). For more information, see the code generation section.
- cURL
- Python
- TypeScript
curl --location $OUTSCALE_SERVER_URL/v1/fim/completions \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{
"model": "'"$OUTSCALE_MODEL_NAME"'",
"prompt": "def count_words_in_file(file_path: str) -> int:",
"suffix": "return n_words",
"stream": false
}'
import os
from mistralai import Mistral
client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])
resp = client.fim.complete(
model = os.environ["OUTSCALE_MODEL_NAME"],
prompt="def count_words_in_file(file_path: str) -> int:",
suffix="return n_words"
)
print(resp.choices[0].message.content)
import { Mistral } from "@mistralai/mistralai";
const client = new Mistral({
serverURL: process.env.OUTSCALE_SERVER_URL || ""
});
const modelName = "codestral-2405";
async function fimCompletion(prompt: string, suffix: string) {
const resp = await client.fim.complete({
model: modelName,
prompt: prompt,
suffix: suffix
});
if (resp.choices && resp.choices.length > 0) {
console.log(resp.choices[0]);
}
}
fimCompletion("def count_words_in_file(file_path: str) -> int:",
"return n_words");
Going further
For more information and examples, you can check:
- The Outscale documentation, which explains how to subscribe to and deploy a Mistral service.