**Disclaimer: This project is in no way intended for production use. It is an experimental project I built for myself to see how far I can push LLM models to manage my tools for me. Much of the code does not follow standard best practices, as my priority was experimentation rather than production readiness. Over time, I plan to gradually refine this project, making it better and easier for others to use and contribute to. For this public release, I've tried to make as many tools as I could configurable, but by the project's nature, not every tool is fully configurable from the `.env` yet.**
- Can understand voice notes (warning: this currently uses the OpenAI-hosted Whisper API, which is unnecessarily expensive, so avoid this feature if more than one person is using it). A rough transcription sketch follows this list.
- Docker Container Shell: Can execute shell commands in an isolated Docker container. This requires `/tmp` to be mounted into the container, since that mount is how the container shares files with the host and, in turn, with the user. A container setup sketch follows this list.
- Local Code Interpreter: Can execute Python code in the same Docker container, which allows the model to install any dependencies it needs.
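
For reference, here is a minimal sketch of how a voice note could be transcribed through the OpenAI-hosted Whisper endpoint. It assumes the official `openai` Python client (v1.x); the function name and example file path are illustrative, not this project's actual code.

```python
# Minimal sketch, assuming the official `openai` v1.x client.
# The function name and example path are hypothetical, not this project's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_voice_note(path: str) -> str:
    """Send an audio file to the hosted Whisper endpoint and return the transcript."""
    with open(path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",  # hosted Whisper, billed per minute of audio
            file=audio_file,
        )
    return result.text


# Example usage (hypothetical file):
# print(transcribe_voice_note("/tmp/voice_note.ogg"))
```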
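
Similarly, a rough sketch of how the shell and code-interpreter tools could be wired up with the `docker` Python SDK, mounting `/tmp` so files are shared with the host. The image name, function names, and mount layout are assumptions for illustration, not this project's exact setup.

```python
# Rough sketch using the `docker` Python SDK (docker-py). Image name, function
# names, and mount layout are assumptions, not this project's exact setup.
import shlex

import docker

client = docker.from_env()


def run_in_sandbox(command: str, image: str = "python:3.12-slim") -> str:
    """Run a shell command in an isolated container with /tmp shared with the host."""
    output = client.containers.run(
        image,
        ["sh", "-c", command],
        volumes={"/tmp": {"bind": "/tmp", "mode": "rw"}},  # shared scratch space
        remove=True,  # clean the container up after it exits
    )
    return output.decode()


def run_python(code: str) -> str:
    """Execute Python code inside the same sandbox; pip is available for dependencies."""
    return run_in_sandbox(f"python -c {shlex.quote(code)}")


# Example: install a dependency on the fly, then use it.
# print(run_in_sandbox("pip install --quiet requests && python -c 'import requests; print(requests.__version__)'"))
```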