<p>So, while reluctantly contemplating running AI Hawk in an effort to find a job, I learned that one of its requirements is access to an AI, either via an API or by running one yourself, like <a href="https://corteximplant.com/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a>. And since I'm not about to pay for some corpo AI, I'm contemplating running ollama locally so I can keep everything in-house and free.</p>
<p>I've got an old Intel MacBook Pro (32GB RAM) that mostly just sits there in standby, so I'm thinking maybe I turn that bad boy into a desktop - i.e. always-on - and run ollama on it (maybe inside a Docker container?). That way I could also set up nginx on the MacBook as a reverse proxy for my linux box and bring my website hosting in-house, while still using a VPN on my primary compute box (the linux box). I've got another M1 MBP here as well, so I could keep using that as my video call box (pretty much like I already do).</p>
<p>Does this sound like a good idea or a bad idea? I'd probably also need to set up a firewall on the MacBook, since I won't be using a VPN on it. Should I run ollama directly on the MBP, or inside a Docker container? I've sketched what I'm picturing below.</p>
<p>What other considerations do I need to make?</p>
<p><a href="https://corteximplant.com/tags/goodideabadidea" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>goodideabadidea</span></a> <a href="https://corteximplant.com/tags/AmIDoingThisRight" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AmIDoingThisRight</span></a></p>
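<p>In case it helps frame the question, the Docker route would be roughly this (commands are from the ollama README; note that Docker on macOS has no GPU passthrough, so it'd be CPU-only in the container):</p>
<pre><code># Run the official ollama image, persisting models in a named volume
# and exposing the API on its default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model and talk to it inside the container
docker exec -it ollama ollama run llama3
</code></pre>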
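<p>The nginx side would be something like this - mydomain.example and 192.168.1.50 are placeholders for my real domain and the linux box's LAN IP, and the path assumes Homebrew's nginx on an Intel Mac:</p>
<pre><code># /usr/local/etc/nginx/servers/mydomain.conf
server {
    listen 80;
    server_name mydomain.example;

    location / {
        # Hand everything off to the web server on the linux box
        proxy_pass http://192.168.1.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</code></pre>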
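<p>And for the firewall I'm leaning toward macOS's built-in pf, with something minimal like this (just a sketch, assuming I only expose the web ports plus LAN-only SSH):</p>
<pre><code># Deny inbound by default, allow web traffic, allow SSH from the LAN only
set skip on lo0
block in all
pass out all keep state
pass in proto tcp to any port { 80, 443 }
pass in proto tcp from 192.168.1.0/24 to any port 22

# Load and enable with: sudo pfctl -f /etc/pf.conf -e
</code></pre>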