AlfredAI
As a huge fan of the Marvel movies, I have always wanted some of the technology shown in them. What specifically intrigued me was Jarvis, the artificial intelligence system built by Tony Stark. Personal smart assistants have always fascinated me - the way Jarvis knows Stark's schedule, routine, health information, projects, emails, and much more. It is built into Stark's life and is a fundamental part of it.
Alfred is my version of Jarvis. With a plethora of features that resemble the Jarvis AI, it has become built into my life as well. However, to understand why it is one of my favorite and most ambitious projects to date, we must start from the beginning.
If you are a developer, consider checking out the public GitHub repository I have linked above (this is a separate repository from my original private repository to protect keys).
Alfred 1.0
There are two versions of Alfred - a 2023 version and a new 2024 version, which I will call Alfred 1.0 and Alfred 2.0 respectively for easy comprehension.
The idea for Alfred (1.0) began a little bit after OpenAI released their APIs to the public. Interestingly enough, it is not too hard to connect and create your own assistant in Python using it - merely a function call that returns the bot's output.
For the first time ever, I had my own assistant that did not have preprogrammed outputs. My previous 'smart' helpers would only give the 'correct' response if you entered the correct input. Without entering hundreds or even thousands of 'correct inputs' into the code, it is basically impossible to have a chatbot that is fun to use.
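To give a sense of how little code that takes, here is a minimal sketch using the current openai Python client. The model name, prompt wording, and function name are illustrative rather than the exact ones in Alfred, and the 2023-era library used a slightly different interface.

```python
# Minimal sketch of the kind of call Alfred 1.0 was built around.
# Requires the `openai` package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_alfred(user_input: str) -> str:
    """Send the user's text to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are Alfred, a helpful assistant."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

print(ask_alfred("Good morning, Alfred."))
```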

V2 of the physical body made for Alfred 1.0 (2023) that I completely designed
Incredibly excited, I added my own instructions ('what to do') on top of the OpenAI API. I learned a whole new system of architecture in the process of making this first version, because I decided to add a custom HTML application (with the eel module in Python) which would show my input and Alfred's output. I also wanted to make a physical body for the system (there were actually 2 versions of that - you can see the first version on my Youtube here and the second version in the picture above), so after buying 12V motors, I used an old Raspberry Pi 4 that I had found and an L298N motor driver to begin working on a robotic prototype. I completely designed the model for the physical version of Alfred 1.0 from start to finish - including the wheels.
The system had one major flaw though - which was the inevitable downfall of the whole 1.0 version. While you can indeed get a response from the OpenAI API that you like, it didn't offer any method to act on what it said. For example, if you asked Alfred 1.0 to play music, it might respond that it would, but it would do absolutely nothing programmatically to actually play anything.
Unfortunately, this led me to add workarounds into the code which basically made it the same as what I had been doing before - preprogrammed 'correct inputs' that made the code itself perform an action alongside the AI response. I was not satisfied with the way it worked and how 'calculated' it appeared, and I began to lose interest at the time.
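For context, this is roughly how an eel-based frontend wires Python to an HTML page. The folder, file, and function names here are placeholders rather than the actual Alfred 1.0 code.

```python
# Rough sketch of an eel frontend: Python exposes a function that the
# HTML/JavaScript page can call, and the page is served locally.
import eel

eel.init("web")  # placeholder folder containing index.html, CSS, JS

@eel.expose  # makes ask_alfred() callable from the page's JavaScript
def ask_alfred(user_input: str) -> str:
    reply = get_model_reply(user_input)  # hypothetical wrapper around the OpenAI call
    return reply

eel.start("index.html", size=(800, 600))  # opens the chat window
```

On the JavaScript side, the page can then call `eel.ask_alfred(text)` and render the returned reply.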
Alfred 2.0
However, during the summer of 2024, I wanted to retry creating my own Jarvis. After researching the OpenAI API documentation, I found a new feature they had added - functions. This enables the model to decide whether or not it should take action based on the user's query. Because of how long and complex the Alfred 1.0 script was, I decided to create a whole new system and repository on GitHub for this. I planned on building it from the ground up and organizing the system much more neatly than before.
The 'brain' script has the main class in it, AlfredBot. This holds all of Alfred's abilities, such as sending emails, checking reminders, adding tasks, and even checking health data. This script (which I call the 'brain' of Alfred) is not meant to be run itself, but to be called on by other sources (what I call calling scripts). These calling scripts could come in many shapes and sizes; however, I narrowed them down to two for my specific use case: one hosts an HTTPS server which allows for input and output, and the other handles microphone and speaker communication (these devices are connected by port to the Pi itself).
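As a rough illustration of the functions (now 'tools') feature: the model is told which actions exist and decides on its own whether to call one. The tool name and schema below are placeholders, not Alfred's actual definitions.

```python
# Sketch of OpenAI function/tool calling: the model either answers in
# plain text or asks for a function to be run with specific arguments.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "play_music",  # illustrative tool
        "description": "Play a song or playlist for the user.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "What to play"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Alfred, play some jazz."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model decided an action is needed
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # e.g. play_music {'query': 'jazz'}
else:
    print(message.content)  # the model answered in plain text instead
```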
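The split looks roughly like this. AlfredBot is the real class name; the method names and module name below are illustrative placeholders, not the actual repository layout.

```python
# Skeleton of the 'brain' / calling-script split.
class AlfredBot:
    """Holds Alfred's abilities; imported by calling scripts, never run directly."""

    def handle(self, user_input: str) -> str:
        """Send the input to the model and run any tool it chooses to call."""
        ...

    def send_email(self, to: str, subject: str, body: str) -> None: ...
    def check_reminders(self) -> list[str]: ...
    def add_task(self, task: str) -> None: ...
    def get_health_data(self) -> dict: ...


# A calling script then only needs something like:
# from alfred_brain import AlfredBot   # hypothetical module name
# bot = AlfredBot()
# print(bot.handle("What's on my calendar today?"))
```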

The most important of the calling scripts is the HTTPS server. I figured out how to host a tunnel from my Pi server for free (minus the domain purchase, of course), so the input/output server is available from anywhere in the world with an internet connection for personal use. Because virtually any programming language can post HTTPS requests and read the response, I built a sleek Swift app for my phone, drawing on previous knowledge, that sends and receives data. On top of that, there is also a modern frontend HTTPS website in case you need to access the system from the web.
Example workflow of Alfred 2.0 architecture
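To show the shape of that calling script, here is a minimal sketch using Flask as an illustration; the route, field names, and module name are placeholders, and TLS plus the public tunnel are handled outside the script itself.

```python
# Sketch of the HTTPS-server calling script (Flask used for illustration).
from flask import Flask, request, jsonify
from alfred_brain import AlfredBot  # hypothetical module name

app = Flask(__name__)
bot = AlfredBot()

@app.route("/ask", methods=["POST"])
def ask():
    user_input = request.get_json()["message"]  # placeholder field name
    return jsonify({"reply": bot.handle(user_input)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The Swift app and the web frontend then only need to POST a small JSON payload to that endpoint and display the reply.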
(2.0) Features
Alfred is also equipped with a database that intelligently builds upon itself. I gave the system the ability to learn - and ever since then it has been learning from me and what I like or don't like. It saves known people's emails, so if you give it a prompt such as "send an email to Taylor", the chatbot will pull up the email address associated with Taylor.
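A sketch of that contact lookup, assuming a simple JSON file as the store (the real database format may differ, and the file name is a placeholder):

```python
# Self-building contact memory: remember an address once, resolve it later.
import json
import os

CONTACTS_FILE = "contacts.json"  # placeholder path

def load_contacts() -> dict:
    if os.path.exists(CONTACTS_FILE):
        with open(CONTACTS_FILE) as f:
            return json.load(f)
    return {}

def remember_contact(name: str, email: str) -> None:
    contacts = load_contacts()
    contacts[name.lower()] = email
    with open(CONTACTS_FILE, "w") as f:
        json.dump(contacts, f, indent=2)

def lookup_contact(name: str) -> str | None:
    return load_contacts().get(name.lower())

# "send an email to Taylor" -> the email tool first resolves the address:
remember_contact("Taylor", "taylor@example.com")
print(lookup_contact("taylor"))  # taylor@example.com
```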
In addition, while meticulously studying the ins and outs of the Jarvis AI, I noticed that it had access to Stark's health data. If I wanted to incorporate this feature, I needed a health tracker that I always wore (I don't wear my smartwatch to bed due to its short battery life) and that tracked events passively in the background. After lots of research, I finally landed on the Oura Ring. This came down to two key points: first, it has a free API - you provide your account key and can then access your data from Python. Second, it is a 'passive' tracker and one of the highest-reviewed smart rings, with a big following behind it.
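Pulling that data looks roughly like the sketch below, which follows the Oura v2 REST API as I understand it - check the official documentation before relying on the exact endpoint and fields. The dates and token are placeholders.

```python
# Sketch of fetching daily sleep data from the Oura API with a personal
# access token stored in the OURA_TOKEN environment variable.
import os
import requests

token = os.environ["OURA_TOKEN"]

resp = requests.get(
    "https://api.ouraring.com/v2/usercollection/daily_sleep",
    headers={"Authorization": f"Bearer {token}"},
    params={"start_date": "2024-07-01", "end_date": "2024-07-02"},
    timeout=10,
)
resp.raise_for_status()

for day in resp.json()["data"]:
    print(day["day"], day["score"])  # e.g. the sleep score for each day
```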
Every day, a 'morning briefing' is given to Alfred. This gives the system the chance to see my calendar schedule, unread emails, health data from the day prior, and unchecked reminders on the to-do list. Alfred then decides when to notify me about this and, when it sees fit, will remind me out loud. The system works around my schedule to avoid talking out loud while I am sleeping or in a meeting on my computer.
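In pseudocode-like form, the briefing flow is roughly: gather the day's context, have the model summarize it, and only speak when the schedule allows. All of the helper names below are hypothetical placeholders for Alfred's real methods.

```python
# Sketch of the morning-briefing flow.
from datetime import datetime

def morning_briefing(bot) -> None:
    context = {
        "calendar": bot.get_todays_events(),      # hypothetical
        "unread_emails": bot.check_new_emails(),  # hypothetical
        "health": bot.get_health_data(),          # yesterday's Oura data
        "reminders": bot.check_reminders(),
    }
    summary = bot.handle(f"Give me a short morning briefing based on: {context}")

    now = datetime.now()
    busy = bot.in_meeting() or now.hour < 8  # don't speak while asleep or in a meeting
    if busy:
        bot.queue_notification(summary)  # hypothetical: deliver later
    else:
        bot.speak(summary)
```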
Alfred also has the ability to do basic tasks, as a normal virtual assistant would. It can search the web, send emails, check for new emails, add and remove items from files, manage and water my plant (using the Auto Plant System I built), and much more.
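Each of those abilities ties back to the tool-calling loop: once the model picks a function, the result gets routed to the matching AlfredBot method. The mapping and method names below are placeholders for illustration.

```python
# Sketch of routing a model-chosen tool call to the matching ability.
import json

def dispatch(bot, tool_call) -> str:
    handlers = {
        "send_email": bot.send_email,
        "search_web": bot.search_web,    # hypothetical
        "water_plant": bot.water_plant,  # hypothetical (Auto Plant System)
        "add_task": bot.add_task,
    }
    args = json.loads(tool_call.function.arguments)
    return handlers[tool_call.function.name](**args)
```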
To communicate with the bot from afar, I built an app in Swift that connects to the server over HTTPS, as well as a web portal. Both give the user a sleek interface that is usable from anywhere with an internet connection.