This new version adds interesting new features and fixes some of the most annoying bugs that have been dragging on since version 0.2.8.
Just yesterday, I was talking about LM Studio, a tool to set up your own AI-based chat environment from your PC. Of course, not just any PC will do for this; you need a powerful machine, but that’s another story.
If you can afford to run it on your machine, the truth is that it’s a very interesting option. So, we’ll try to cover the news and updates it brings, as you know we also like to talk about technology in general.
LM Studio version 0.2.10
The new update raises the application version to 0.2.10 and is divided into two sections: on the one hand the new features, and on the other the fixes, which in this case are minor. Among the highlighted features are compatibility with Microsoft's Phi-2 and the ability to set the number of experts used by mixture-of-experts models.
- Compatibility with Microsoft Research’s Phi-2 model.
- You can now export a chat conversation as JSON, copy the prompt with full formatting, and more.
- If the up arrow is pressed while the chat input is focused, the last chat message will be edited.
- Copy individual messages to the clipboard.
- Min P sampling can now be set from the user interface and the API (parameter `min_p`).
- For mixture-of-experts (MoE) models, you can now set the number of experts to use from the user interface.
- The server can now filter logs in real-time.
- Fixed an application crash when the chosen model directory contained a large number of files.
- Windows and OpenCL: Fixed error loading models when GPU acceleration is enabled.
- Windows: Fixed persistent CPU usage after model download.
- Edited messages now remain in edit mode when the window loses focus (ESC to cancel, ENTER to save).
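As a rough illustration of the new `min_p` parameter, here is a minimal sketch of how a request payload for LM Studio's local server might include it. This assumes an OpenAI-style chat-completion payload and that the server accepts `min_p` as an extra sampling field, per the changelog; the server URL, default port, and exact field names are assumptions, not verified against the official documentation.

```python
import json

# Hypothetical sketch: LM Studio typically exposes an OpenAI-compatible
# local server (often at http://localhost:1234/v1/chat/completions).
# The `min_p` field is the sampling parameter the 0.2.10 changelog mentions.

def build_chat_request(prompt: str, min_p: float = 0.05) -> str:
    """Serialize a chat-completion payload that includes Min P sampling."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        # Min P keeps only tokens whose probability is at least
        # `min_p` times the probability of the most likely token.
        "min_p": min_p,
    }
    return json.dumps(payload)

body = build_chat_request("Explain Min P sampling in one sentence.")
print(body)
```

You would then POST this body to the local server with any HTTP client; the point is simply that `min_p` travels alongside the usual sampling parameters rather than requiring a separate configuration step.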
Note: This content has been translated with an artificial intelligence tool, so the translation may be slightly inaccurate. The original version written by our editor is the Spanish version.