Python backend engineer with experience building APIs, automation tools, and modular services. Strong focus on clean architecture, testing, and efficient data handling. Comfortable with GitHub workflows, CI/CD, Docker, and asynchronous collaboration.
Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset.
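A minimal PyFlink Table API sketch of the hot path, assuming the Kafka and Iceberg connector jars are on the Flink classpath and MinIO is reachable via s3a; the topic, catalog, and table names are hypothetical. Trino and Superset then query the same Iceberg tables.

```python
# Minimal PyFlink sketch: read JSON events from Kafka and sink them to an
# Iceberg table on MinIO (S3-compatible). Topic and catalog values are
# placeholders; connector jars and s3a credentials must be configured.
from pyflink.table import EnvironmentSettings, TableEnvironment

env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Kafka source: one row per JSON event on the hypothetical "trades" topic.
env.execute_sql("""
    CREATE TABLE trades_src (
        symbol STRING,
        price  DOUBLE,
        ts     TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'trades',
        'properties.bootstrap.servers' = 'kafka:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Iceberg catalog backed by MinIO; Trino and Superset read the same tables.
env.execute_sql("""
    CREATE CATALOG lake WITH (
        'type' = 'iceberg',
        'catalog-type' = 'hadoop',
        'warehouse' = 's3a://lake/warehouse'
    )
""")
env.execute_sql("""
    CREATE TABLE IF NOT EXISTS lake.db.trades (
        symbol STRING, price DOUBLE, ts TIMESTAMP(3)
    )
""")

# Continuous insert: the streaming job that keeps the Iceberg table fresh.
env.execute_sql("INSERT INTO lake.db.trades SELECT symbol, price, ts FROM trades_src")
```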
Medallion batch data pipeline built on Airflow, DuckDB, Delta Lake, Trino, MinIO, and Metabase, with full observability and data quality checks.
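A hypothetical Airflow DAG sketching the bronze → silver → gold flow: DuckDB does the transforms over MinIO storage and the `deltalake` package writes each layer as a Delta table for Trino and Metabase to query. Bucket paths and column names are placeholders, and S3 credential setup is omitted.

```python
# Medallion sketch: bronze (raw) -> silver (clean) -> gold (aggregated),
# with DuckDB as the transform engine and Delta Lake as the table format.
from datetime import datetime

import duckdb
from airflow.decorators import dag, task
from deltalake import write_deltalake


def connect() -> duckdb.DuckDBPyConnection:
    con = duckdb.connect()
    # httpfs for s3:// paths, delta for delta_scan(); credentials omitted.
    con.sql("INSTALL httpfs; LOAD httpfs; INSTALL delta; LOAD delta;")
    return con


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def medallion_pipeline():
    @task
    def bronze_to_silver():
        con = connect()
        # Deduplicate and filter raw bronze files into the silver layer.
        df = con.sql(
            "SELECT DISTINCT * FROM read_parquet('s3://lake/bronze/*.parquet') "
            "WHERE amount IS NOT NULL"
        ).df()
        write_deltalake("s3://lake/silver/orders", df, mode="overwrite")

    @task
    def silver_to_gold():
        con = connect()
        # Aggregate silver into an analytics-ready gold table.
        df = con.sql(
            "SELECT customer_id, SUM(amount) AS total_amount "
            "FROM delta_scan('s3://lake/silver/orders') GROUP BY customer_id"
        ).df()
        write_deltalake("s3://lake/gold/customer_totals", df, mode="overwrite")

    bronze_to_silver() >> silver_to_gold()


medallion_pipeline()
```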
Async Python microservices with CDC, real-time WebSocket, and gRPC for high-performance document management.
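A sketch of one slice of this design: fanning change-data-capture events from Kafka out to WebSocket clients. FastAPI, aiokafka, and the `documents.changes` topic are assumptions rather than the project's confirmed stack, and the gRPC services are omitted.

```python
# CDC fan-out sketch: consume Debezium-style change events from Kafka and
# broadcast each one to every connected WebSocket client.
import asyncio
import json

from aiokafka import AIOKafkaConsumer
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
clients: set[WebSocket] = set()


async def consume_cdc() -> None:
    # A single consumer pushes each change event to all connected clients.
    consumer = AIOKafkaConsumer("documents.changes", bootstrap_servers="kafka:9092")
    await consumer.start()
    try:
        async for msg in consumer:
            event = json.loads(msg.value)
            for ws in list(clients):
                await ws.send_json(event)
    finally:
        await consumer.stop()


@app.on_event("startup")
async def start_consumer() -> None:
    asyncio.create_task(consume_cdc())


@app.websocket("/ws/documents")
async def document_updates(ws: WebSocket) -> None:
    await ws.accept()
    clients.add(ws)
    try:
        while True:  # clients only receive; reads keep the socket alive
            await ws.receive_text()
    except WebSocketDisconnect:
        clients.discard(ws)
```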
Dockerized, configurable Airflow data pipeline for collecting and storing stock and cryptocurrency market data.
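An illustrative sketch of the "configurable" part: the symbol list comes from an Airflow Variable, so the same Dockerized DAG can collect different assets per deployment. The quote endpoint, `fetch_and_store` task, and output path are hypothetical placeholders.

```python
# Config-driven collector sketch: symbols are read from an Airflow Variable
# at runtime instead of being hard-coded into the DAG.
import json
from datetime import datetime

import requests
from airflow.decorators import dag, task
from airflow.models import Variable


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def market_data_collector():
    @task
    def fetch_and_store():
        symbols = Variable.get("symbols", default_var="BTC-USD,AAPL").split(",")
        for symbol in symbols:
            # Hypothetical provider URL; swap in the real market-data API.
            resp = requests.get(f"https://api.example.com/quote/{symbol}", timeout=10)
            resp.raise_for_status()
            # Placeholder path standing in for the mounted data volume.
            with open(f"/tmp/{symbol}.json", "w") as f:
                json.dump(resp.json(), f)

    fetch_and_store()


market_data_collector()
```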
ETL pipeline using Pulumi for infrastructure as code, integrating AWS services and Snowflake for automated data flow.
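A minimal Pulumi sketch of the IaC layer, assuming the pulumi-aws and pulumi-snowflake providers are installed and configured: an S3 landing bucket on the AWS side and a Snowflake database as the load target. Resource names are placeholders.

```python
# IaC sketch: provision the storage and warehouse ends of the ETL flow.
import pulumi
import pulumi_aws as aws
import pulumi_snowflake as snowflake

# Landing zone where raw extracts arrive before loading into Snowflake.
raw_bucket = aws.s3.Bucket("raw-data")

# Target database the automated ETL jobs load into.
analytics_db = snowflake.Database("analytics", name="ANALYTICS")

pulumi.export("raw_bucket", raw_bucket.id)
pulumi.export("database", analytics_db.name)
```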
Python web application built with Streamlit for analyzing historical stock data.
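A compact sketch of the app's core loop, using yfinance as an assumed data source (the project's actual provider may differ): pick a ticker and date range, download the history, and chart it.

```python
# Streamlit sketch: interactive ticker/date inputs driving price charts.
import datetime as dt

import streamlit as st
import yfinance as yf

st.title("Historical Stock Explorer")

ticker = st.text_input("Ticker", "AAPL")
start = st.date_input("Start date", dt.date(2020, 1, 1))

if ticker:
    df = yf.download(ticker, start=start)
    st.line_chart(df["Close"])    # price history
    st.bar_chart(df["Volume"])    # trading volume
    st.dataframe(df.describe())   # summary statistics
```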
Improved the library’s data manipulation and reporting functionality, making it easier to build efficient, scalable data pipelines for analytics and modeling.