# HumanPet Experiment
## Goal
The goal of the HumanPet series is an LLM that talks in a more human-like way and can express itself through visual emotions. The name HumanPet reflects this goal (Human = it talks more like a human, Pet = it can display emotions like a virtual pet).
We also want to give HumanPet a distinct personality, which can be described as happy and energetic.
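The exact format of the visual emotions isn't specified here. As a purely hypothetical sketch, assume the model prefixes each reply with a bracketed emotion tag that a front end can map to a sprite or emoji (the tag format and the emotion vocabulary are assumptions for illustration, not the actual HumanPet output format):

```python
import re

# Hypothetical emotion vocabulary; a real front end might map these to sprites.
EMOTION_SPRITES = {
    "happy": "😄",
    "excited": "🤩",
    "sad": "😢",
}

TAG_RE = re.compile(r"^\[(\w+)\]\s*(.*)$", re.DOTALL)

def split_emotion(reply: str) -> tuple[str, str]:
    """Split a model reply into (emotion, text); default to 'happy',
    matching the happy-and-energetic default personality."""
    m = TAG_RE.match(reply)
    if m and m.group(1) in EMOTION_SPRITES:
        return m.group(1), m.group(2)
    return "happy", reply

emotion, text = split_emotion("[excited] A new dataset just finished!!")
print(EMOTION_SPRITES[emotion], text)
```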
## Stages
We'll release multiple HumanPet models:
| Stage | Name | Purpose | Approach | Hallucinations | Note | Time |
|---|---|---|---|---|---|---|
| 1 | HumanPet X1 1.7B | Less robotic, classify emotion and intent | Existing dataset | Apparent | No effort. | Now |
| 2 | HumanPet X2 1.7B | Less robotic, limited visual emotion, start of a personality | Stage 1 + NLP | Apparent | Close to no effort, just configuring. | March 2026 |
| 3 | HumanPet Translator Semi 1.7B | Rewrite boring text to silly text with some visual emotion | Stage 2 + generated professional text for translation input | Reduced | A little effort. We use a professional LLM to rewrite silly sentences back into professional texts, which ensures the model learns to rewrite whole sentences rather than just swap words (as the NLP approach does now). It is uncertain whether this stage will be made; we might use another method. | March-April 2026 |
| 4 | HumanPet X3 1.7B | Less robotic, full visual emotion, full personality | Stage 2 + auto-translated datasets (stage 3) / manual rewriting | Reduced | Auto-translated datasets: a little effort, using stage 3 to translate an existing dataset into simple silly texts. Manual rewriting: big effort, will take a long time. We remove some of the hallucination by omitting information that isn't in the context. | 2026 |
| 5 | HumanPet Translator 1.7B | Rewrite boring text to accurate silly text with full visual emotion | Stage 4 | Reduced | No effort; we just reuse the dataset from stage 4. See below for an explanation of the translator. It is uncertain whether this stage will be made; we might use another method. | 2026 - early 2027 |
| 6 | HumanPet Instruct 1.7B | Less robotic, full visual emotion, full personality | Stage 4 + manual/generative instruction writing | Minimal | Medium effort. The final model, which hopefully fulfills our goal. Instructions (via system prompt) will be written by us, by the translator, or both. It is uncertain whether this stage will be made; we might use another method. It all depends on the results of stage 4. | 2027 |
| 7 | HumanPet Tool 1.7B | Less robotic, full visual emotion, full personality, tools | Stage 6 + translated tool datasets (stage 5) | Minimal | Low effort. Use the translator (stage 5) to create a silly tool dataset. It is uncertain whether this stage will be made; it all depends on the results of stages 5 and 6. | 2027 |
| 8 | HumanPet2 Instruct ??? B | Less robotic, full visual emotion, full personality | Stage 6 + other datasets translated (stage 5) | Close to none | Low effort. The final model (stage 6) reinforced with datasets that contain less hallucination, translated with the translator (stage 5). It is uncertain whether this stage will be made; we might use another method. It all depends on the results of stages 5 and 6. | 2027 - early 2028 |
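The back-translation idea behind stage 3 (rewrite silly sentences into professional text with an existing LLM, then train the translator on the reversed pairs) can be sketched as follows. The `professionalize` stub stands in for the professional-LLM call and is an assumption, not the actual pipeline:

```python
def professionalize(silly: str) -> str:
    """Stub for the professional LLM call; a real pipeline would query
    an API or a local model here. The mapping below is hypothetical,
    just to make the sketch runnable."""
    return {
        "Woohoo, trainin's done!!": "Training has completed.",
    }.get(silly, silly)

def build_translation_pairs(silly_sentences):
    """Back-translation: rewrite each silly sentence professionally, then
    emit (professional -> silly) training pairs so the translator learns
    to rewrite whole sentences, not just swap words."""
    return [
        {"input": professionalize(s), "output": s}
        for s in silly_sentences
    ]

pairs = build_translation_pairs(["Woohoo, trainin's done!!"])
print(pairs[0])
# {'input': 'Training has completed.', 'output': "Woohoo, trainin's done!!"}
```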
We're currently at stage 2. Note that all stages are experimental: stage 1 is done and stage 2 is nearly so, but the remaining stages may never complete. Each stage depends on the previous ones, so if one stage turns out very disappointing (quite likely, given how often LLMs underdeliver), the later stages might never happen, though we'll try to prevent that. The "X1"/"X2"/etc. stands for "experimental model stage 1", "experimental model stage 2", and so on.
We might not release a model from a stage right away. This is mostly because we want to do our own research on the model before releasing it to the public, so that we can document any difficulties, quirks, or findings before the release, and fix them if need be.
The reason we include a translator at stages 3 and 5 is mainly to be able to turn other datasets into silly ones. This means we can use tool, chain-of-thought, or other instruct datasets while keeping the personality intact. This stage is uncertain, though, and may not work that well; if it doesn't, stages 6 and 7 will likely be dropped too.
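Translating an existing dataset amounts to mapping a style rewriter over the text fields of each record while leaving the structural fields (roles, tool calls) untouched, so the dataset stays valid. A minimal sketch, with `sillify` as a hypothetical stand-in for the translator model:

```python
def sillify(text: str) -> str:
    """Stub for the HumanPet translator model (hypothetical); a real
    pipeline would run the translator on the text instead."""
    return text.replace("Hello", "Heyaa") + " :D"

def translate_record(record: dict) -> dict:
    """Rewrite only the assistant text; keep roles and any tool-call
    structure intact so instruct/tool datasets remain usable."""
    out = dict(record)
    if out.get("role") == "assistant" and "content" in out:
        out["content"] = sillify(out["content"])
    return out

msgs = [
    {"role": "user", "content": "Hello, what's 2+2?"},
    {"role": "assistant", "content": "Hello! The answer is 4."},
]
print([translate_record(m) for m in msgs])
```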
The current roadmap (which might change depending on model results) is:
| Stage | Name | Time |
|---|---|---|
| 1 | HumanPet X1 1.7B | Now |
| 2 | HumanPet X2 1.7B (current stage) | March 2026 |
| 3 | HumanPet Translator Semi 1.7B | March-April 2026 |
| 4 | HumanPet X3 1.7B | 2026 |
| 6 | HumanPet Instruct 1.7B | 2027 |
## Progress
Progress per asset:
| Type | Asset | Status |
|---|---|---|
| Dataset | HumanPet X1 4.3k | 🟢 Done |
| Model | HumanPet X1 1.7B | 🟢 Done |
| Dataset | HumanPet X2 4.3k | 🟢 Done |
| Model | HumanPet X2 1.7B | 🔴 Done, mistakes found |
| Dataset | Re-train: HumanPet X2.1 4.3k | 🟢 Done |
| Model | Re-train: HumanPet X2.1 1.7B | 🔴 Done, mistakes found |
| Dataset | Re-train: HumanPet X2.2 4.3k | 🟢 Done |
| Model | Re-train: HumanPet X2.2 1.7B | 🟠 Work started |
| Dataset | HumanPet Translator Semi 4.3k | 🟢 Done |
| Model | HumanPet Translator Semi 1.7B | 🟠 Work started |
| Dataset | HumanPet X3 ??? k | 🟠 Work started |
| Model | HumanPet X3 1.7B | ⚪ Planned |
| Dataset | HumanPet Translator ??? k | ⚫ Unknown |
| Model | HumanPet Translator 1.7B | ⚫ Unknown |
| Dataset | HumanPet Instruct ??? k | ⚫ Unknown |
| Model | HumanPet Instruct 1.7B | ⚫ Unknown |
| Dataset | HumanPet Tool ??? k | ⚫ Unknown |
| Model | HumanPet Tool 1.7B | ⚫ Unknown |
| Dataset | HumanPet2 Instruct ??? k | ⚫ Unknown |
| Model | HumanPet2 Instruct ??? B | ⚫ Unknown |
(25% done.)