Designing and Training a Personalized Small Language Model Capable of Human Behavioral Replication
Independent-study SLM work spanning baseline transformers, from-scratch modeling, optimizer and tokenizer experiments, and a local personalized agent track.

Problem
The project set out to explore how far a small language model could be pushed toward personalization and behavioral replication without treating the model as a black box.
Contribution
Structured the work as a staged research and implementation program: a PyTorch baseline, from-scratch transformer components, an optimized model track, tokenizer experiments, training-pipeline work, evaluation scaffolding, and a local personalized agent runtime.
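To give a concrete sense of the from-scratch transformer track, here is a minimal sketch of scaled dot-product attention in plain Python. The function names and shapes are illustrative only, not the project's actual code; a real implementation would be vectorized in PyTorch and handle batches and multiple heads.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q, k, v):
    """Scaled dot-product attention for a single query vector.

    q: query vector of length d.
    k, v: lists of key/value vectors, one per sequence position.
    Illustrative sketch; not the project's actual API.
    """
    d = len(q)
    # Similarity of the query to each key, scaled by sqrt(d).
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(d) for key in k]
    weights = softmax(scores)
    # Attention output: weighted sum of the value vectors.
    return [sum(w * vec[i] for w, vec in zip(weights, v)) for i in range(len(v[0]))]
```

The single-query form makes the mechanism easy to verify by hand before generalizing it to full sequences and multi-head layouts.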
Why it matters
It shows research, systems implementation, and product thinking converging in a single body of work.
Contribution context
Personal research and implementation project.
Roles
- Research direction
- Model implementation
- Training pipeline design
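
The training-pipeline role above centers on the step/loop structure every such pipeline is built around. A hedged sketch in plain Python, using gradient descent on a toy one-parameter model (the names and loss are illustrative, not the project's actual pipeline):

```python
def train_step(w, batch, lr=0.1):
    """One gradient-descent step for a toy model y = w * x with squared-error loss.

    Illustrative only: the real pipeline updates transformer parameters,
    but the update rule has the same shape (parameter -= lr * gradient).
    """
    # Mean gradient of (w*x - y)^2 with respect to w over the batch.
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

def train(batches, w=0.0, lr=0.1):
    # The outer loop a pipeline wraps with logging, checkpointing, and eval hooks.
    for batch in batches:
        w = train_step(w, batch, lr)
    return w
```

In the full pipeline, this skeleton is where optimizer experiments (swapping the update rule) and evaluation scaffolding (hooks between steps) attach.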
Stack
- PyTorch
Domains
Artifacts
Evidence note
Grounded in the public `KREEL_PAI` repository plus the local SLM project materials inspected during the audit.