
Hello! During the first week of #SEC04 (I know I'm a bit delayed), I created LLoms, a simple LLM chat CLI tool with MCP capabilities. Think of it as a simpler version of Goose, with less complexity under the hood. LLoms uses Ollama for inference, and I plan to eventually support OpenAI-like APIs, allowing you to use LLoms in conjunction with ppq.ai, for example.
In this demo video, I showcase LLoms using qwen2.5:3b, a small model running locally on my computer that can call tools. An interesting aspect is that the MCP tools I'm using don't run locally; instead, I use dvmcp-discovery to access them, which lets me call tools hosted elsewhere.
If you want to experiment with LLoms, you can find the repository here: https://github.com/gzuuus/lloms. The dvmcp repository is here: https://github.com/gzuuus/dvmcp
To use the same tools you see in the video, add an MCP server entry to the LLoms config file with the command npx and the argument @dvmcp/discovery. I recommend first running npx @dvmcp/discovery in the same directory to generate its configuration file, and then setting the relay to wss://relay.dvmcp.fun.
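For reference, here's roughly what that server entry could look like, assuming LLoms follows the common JSON convention for MCP server configs. This is just a sketch: the actual file name, top-level key, and field names may differ, so check the README in the repo.

```json
{
  "mcpServers": {
    "dvmcp-discovery": {
      "command": "npx",
      "args": ["@dvmcp/discovery"]
    }
  }
}
```

The relay URL (wss://relay.dvmcp.fun) goes in the configuration file that npx @dvmcp/discovery generates, not in the LLoms config itself.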
Enjoy, and please provide feedback if you have any :)