No, you didn't read that wrong. I embarked on an AI/AGI/LLM learning journey earlier this year and have determined that building a JARVIS of my own makes sense. In the world of Iron Man, JARVIS (Just A Rather Very Intelligent System) is Tony Stark's AI assistant, capable of managing his entire life and tech empire.
While we're not quite at that level yet, recent advancements in AI have made it possible to create an impressive local AI assistant using open-source technologies. In this guide, we'll walk through the process of setting up your own JARVIS-like system using Ollama, LLaMA v3.1, Gemma v2, and other open-source models on Ubuntu 24.04 LTS.
I started this journey by learning Ollama with the LLaMA v3.1 and Gemma v2 models locally. I've now added AnythingLLM from the Mintplex-Labs team. I have already had it review some scripts... really has the gears in my head turning. 🥸
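If you want to follow along, getting the same models running locally is mostly a matter of pulling them through the Ollama CLI. Here's a minimal sketch (not my exact setup) — the model tags `llama3.1` and `gemma2` are Ollama's published names, but check the model library for the exact variant and size you want:

```shell
#!/usr/bin/env bash
# Models to fetch -- adjust to the variants you actually want.
MODELS=("llama3.1" "gemma2")

# Pull each model in turn so it's cached locally.
pull_models() {
  local m
  for m in "$@"; do
    echo "Pulling $m ..."
    ollama pull "$m"
  done
}

# Only attempt the pulls if the ollama CLI is actually installed.
if command -v ollama >/dev/null 2>&1; then
  pull_models "${MODELS[@]}"
  # Then chat interactively with a model:
  #   ollama run llama3.1
fi
```

Once a model is pulled, `ollama run llama3.1` drops you into an interactive prompt — that's all it took to start feeding it my scripts for review.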
I didn't have to upload anything to the cloud... I just keep my local Ollama install and models up-to-date via a custom Bash script on my workstation. My next step is to start writing automated actions, introduce RouteLLM to combine responses from multiple services and models, and develop an action matrix for a general handler. 🤑
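For the curious, the update side can be a very small script. Here's a hedged sketch of the general idea (not my exact script): it parses `ollama list` for installed models and re-pulls each one, which fetches any updated weights. The column layout it parses (model name in the first column, one header row) is an assumption based on current `ollama list` output, so adjust the `awk` if yours differs:

```shell
#!/usr/bin/env bash
# Sketch: refresh every locally installed Ollama model.
set -euo pipefail

# Extract model names (first column, skipping the header row)
# from `ollama list` output piped in on stdin.
installed_models() {
  awk 'NR > 1 { print $1 }'
}

# Re-pull each model name read from stdin; an up-to-date model
# is a no-op, an updated one gets fresh weights.
update_all() {
  local m
  while read -r m; do
    echo "Updating $m ..."
    ollama pull "$m"
  done
}

if command -v ollama >/dev/null 2>&1; then
  ollama list | installed_models | update_all
  # Ollama itself can be refreshed by re-running its official
  # install script: curl -fsSL https://ollama.com/install.sh | sh
fi
```

Dropped into a nightly cron job, something like this keeps everything current without ever touching a cloud account.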
More coming soon, but my focus is on clients & learning everything possible about AI, LLMs and emerging technologies. Until then, follow me on Facebook, Instagram, LinkedIn and/or Twitter (no... I will not call it X!) and don't forget to follow the #BigFoxTravels hashtag.