Education

Degree anticipated August 2025

Relevant Coursework (Grades out of 20):

  • Artificial Intelligence: 17.0
  • Machine Learning: 19.8
  • Data Science: 17.2
  • Deep Learning: 17.0
  • Generative Models: 15.2
Research Experience

    Research on Abductive Reasoning in LLMs

    RIML and INL Lab, Dr. Rohban and Dr. Jafari, Sharif University of Technology

    05/2025 – present

    Focused on the underexplored area of abductive reasoning in LLMs. The work involves:
  • Conducting comprehensive literature and dataset reviews to identify current gaps and limitations.
  • Investigating the correlation between abductive reasoning and other forms of reasoning (deductive, inductive) as well as task domains, to establish the motivation for enhancing abductive capabilities.
  • Designing approaches to improve abductive reasoning in LLMs while maintaining general reasoning abilities and avoiding over-specialization.

    This research aims to advance a deeper understanding of reasoning mechanisms in LLMs and contribute methodologies that foster more robust and generalizable models.

    Hybrid Transformer-RNN Model for Sequential Prediction

    Hamidreza Hosseinkhani, Aren Golazizian, Amirreza Mehrzadian, Aida Khaleghi

    07/2024 – 09/2024
  • Co-designed a hybrid RNN-Transformer architecture to model short- and long-term dependencies in sequential data.
  • Participated in the integration of attention mechanisms and setup of evaluation protocols.
  • Contributed to early-stage experimentation and architectural planning in a multi-member team.
Technical Experience

    LLM & RAG-Based Interactive Narrator

    Final project completed during the LLM Hacks Bootcamp (organized by Zharfa Tech & Fanafza, 2025)

    An interactive, RAG-enhanced storytelling application that generates personalized Alice in Wonderland adventures.

    Conditional and Standard Generative Models: Diffusion vs. GAN

    Implemented and compared DDPMs and GANs on FashionMNIST for class-conditional generation; DDPMs outperformed GANs in training stability and sample fidelity.

    Diffusion-Based Sprite Generation: DDPM vs. DDIM

    Compared DDPM and DDIM sampling methods for sprite generation; evaluated trade-offs in quality vs. speed for conditional image synthesis.

    PixelCNN: Autoregressive Image Generation

    Built PixelCNN from scratch with masked convolutions; trained to generate MNIST digits pixel-by-pixel.

    Pix2Pix for Cityscapes: Segmented to Real-World Translation

    Used Pix2Pix (cGAN) for translating segmentation maps to photo-realistic city images; trained on Cityscapes dataset.

    Transformer from Scratch: Implementing "Attention Is All You Need"

    Implemented complete Transformer architecture (multi-head attention, positional encoding, encoder-decoder) in PyTorch for EN-DE translation.

    Medical Image Segmentation with U-Net

    Trained a U-Net model on ultrasound data for binary segmentation using BCE loss; achieved consistent loss reduction across 10 epochs.

    Named Entity Recognition with DistilBERT

    Fine-tuned DistilBERT on CoNLL-2003 for token classification using HuggingFace and PyTorch; achieved 94.9% validation accuracy.

    Deep Q-Network (DQN) on CartPole-v1

    Implemented DQN with target/policy networks and experience replay to solve CartPole-v1 in OpenAI Gym using PyTorch.

    More of my projects are available on my GitHub.

Technical Strengths

    Programming: Python, R, C++, MATLAB, SQL, Neo4j, Linux, Git

    LLMs: Hugging Face Transformers, LangChain, vLLM, RAG pipelines, Prompt Engineering, RLHF/GRPO training, PEFT (LoRA, QLoRA), Evaluation frameworks

    Machine Learning: Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow, Matplotlib, Plotly

    Languages: Armenian (native), Persian (native), English (TOEFL 100/120)

    Additional Skills: LaTeX, Microsoft Office (Word, Excel, PowerPoint)

Certificates

    Armenia LLM Summer School 2025

    An intensive week-long program for students and researchers exploring the latest advancements in Large Language Models, including introductions to LLMs, multi-modal models, post-training, test-time compute, AI agents, and AI safety.

    LLM Hacks: Fundamentals of Developing with LLMs

    Certificate ID: LLM117860

    Completed this intensive mini-bootcamp, gaining hands-on experience with Large Language Models, including LangChain, Retrieval-Augmented Generation (RAG), and prompt engineering strategies. Delivered and defended a final project evaluated by the instructors.

Teaching Assistantships

    Teaching Assistant, Deep Learning (Dr. Fatemeh Seyyedsalehi)

    Sharif University of Technology, Fall 2025

    Teaching Assistant, Machine Learning (Dr. Ali Sharifi-Zarchi)

    Sharif University of Technology, Spring 2024

    Teaching Assistant, Machine Learning Theory (Dr. Amir Najafi)

    Sharif University of Technology, Fall 2024

    Teaching Assistant, Stochastic Processes (Dr. Hossein Peyvandi)

    Sharif University of Technology, Spring 2023