David Kimura PRO said 9 months ago on Streaming LLM Responses:
This is considered the template or prompt format for communicating with the particular model that you're using. You can find more information and each model's specific template on https://ollama.com/library/mixtral:latest. Since Mixtral uses the same instruction format as Llama 2, its template wraps the prompt in [INST] ... [/INST] markers. If you were using Gemma instead, the template would look something like <start_of_turn>user {{ .Prompt }}<end_of_turn> followed by <start_of_turn>model {{ .Response }}<end_of_turn>. It's basically formatting that yields better results from the model being used.
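To make this concrete, here's a minimal sketch (my own, not from the episode) of where the template matters when calling Ollama's HTTP API from Ruby. The /api/generate endpoint accepts a raw: true flag that bypasses the model's built-in template, so the prompt has to be wrapped by hand; the [INST] wrapper below assumes a Mistral-family model such as mixtral:latest.

```ruby
require "net/http"
require "json"

uri = URI("http://localhost:11434/api/generate")

payload = {
  model: "mixtral:latest",
  # raw: true skips Ollama's built-in template, so we apply the
  # [INST]...[/INST] instruction format ourselves.
  prompt: "[INST] Why is the sky blue? [/INST]",
  raw: true,
  stream: false # one JSON object back instead of a stream of chunks
}

response = Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
puts JSON.parse(response.body)["response"]
```

Normally you'd leave raw off and let Ollama apply the template from the model's Modelfile for you, which is exactly what placeholders like {{ .Prompt }} and {{ .Response }} are for.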