LM Studio is more than simply a way of chatting or exposing an OpenAI API - it also runs the inference engine, which is a lot more complicated than just loading a compiled executable. (And the chat function likely handles context management too, and different context-composition strategies can have a big impact on quality.) You may also well want tool calling - so that is another piece of functionality you need to check.
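To make the tool-calling point concrete: in the OpenAI-style API, tools are advertised to the model via a `tools` array in each chat request, and a tool-capable stack replies with structured `tool_calls` rather than plain text. A minimal sketch of such a request payload, assuming an OpenAI-compatible local endpoint (LM Studio's default server is `http://localhost:1234/v1`); the `get_weather` tool and the model name are illustrative placeholders:

```python
import json

# Advertise one callable tool to the model (hypothetical "get_weather" example).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Standard chat-completions request body with the tools attached.
payload = {
    "model": "local-model",  # placeholder; use whatever model the server has loaded
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
}

# POST json.dumps(payload) to <server>/chat/completions with any HTTP client;
# a tool-capable model and server should answer with a "tool_calls" entry.
print(json.dumps(payload, indent=2))
```

Whether the whole round trip works - the model emitting a well-formed `tool_calls` response and the server passing it through - is exactly the kind of thing that varies between tools and needs checking.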
And then there is inference performance - new optimizations land constantly, so you need to check whether your tool actually includes the most recent ones.