Lately I’ve been experimenting with something pretty interesting and thought it was worth sharing with folks here 👇
If you don’t have access to paid coding agents or LLM subscriptions, don’t let that stop you. Try running models through Ollama — fully local ones, or hosted options like `qwen3.5:cloud` (and a few others available there). It’s honestly a great way to get hands-on without spending anything.
How to get started (super basic steps):
1. Install Ollama -> [https://ollama.com](https://ollama.com)
2. Pull and run a model -> `ollama run qwen3.5:cloud` (this downloads the model on first use)
3. Start experimenting:
- Ask coding questions
- Build scripts
- Try creating small agents
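Once a model is running, you can also script against Ollama’s local REST API instead of using the interactive prompt. A minimal sketch (assuming Ollama is listening on its default port, 11434, and using the `/api/generate` endpoint):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # Send the prompt to the local Ollama server and return the reply text.
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
#   print(ask("qwen3.5:cloud", "Write a one-line hello world in Python."))
```

This uses only the standard library, so it’s an easy base for the small agents mentioned above.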
*(Optional but fun)*
**Building a Self-Healing Agent**
Recently, I started working on a small side project — a self-healing agent for containers. The idea is simple (and still a work in progress): the agent monitors containers, and if it detects a memory issue, a service crash, or an unhealthy state, it attempts a basic fix and verifies the service comes back up. It’s not production-ready yet, but it’s been a great learning experience around automation + LLM use cases in DevOps.
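A rough sketch of the core loop, under stated assumptions: the names (`self_heal_loop`, `check`, `heal`) are my own placeholders, not part of the actual project, and a real version would plug in `docker inspect` / `docker restart`-style commands (or LLM-suggested fixes) as the callables:

```python
import time
from typing import Callable

def self_heal_loop(
    check: Callable[[], bool],   # returns True if the service is healthy
    heal: Callable[[], None],    # attempts a basic fix, e.g. a container restart
    retries: int = 3,
    delay: float = 0.0,          # optional pause between heal attempts
) -> bool:
    """Run one healing cycle: if the check fails, heal and re-check up to `retries` times."""
    if check():
        return True
    for _ in range(retries):
        heal()
        if delay:
            time.sleep(delay)
        if check():
            return True
    return False  # still unhealthy -> escalate to a human / alerting

# Simulated unhealthy service that recovers after one "restart":
state = {"healthy": False}
recovered = self_heal_loop(
    check=lambda: state["healthy"],
    heal=lambda: state.update(healthy=True),
)
```

Keeping `check` and `heal` as injected callables makes the loop trivial to test with fakes before pointing it at real containers.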
Honestly, tools like Ollama make it super accessible to experiment with ideas like this without worrying about API costs.
*Have you tried running LLMs locally? Built anything interesting with them? Would love to hear what others are exploring! 🚀*