Ph.D. in Computer Science, Harvard University | Systems Scientist | AI, Robotics & Quant | Contact: felix@felixgonda.com
As a Systems Scientist specializing in the perception-to-action pipeline, I bridge the gap between high-level AI and physical control logic in complex, resource-constrained environments. My work pairs a Harvard Ph.D. foundation in 3D neural reconstruction with field-ready engineering, ranging from orchestrating autonomous vehicle trajectories at Disney Research and bio-mimetic underwater swarms to deploying clinical-grade Edge AI for the UN WFP. Whether architecting predictive intelligence for capital markets or quantifying human trust in autonomous systems via immersive VR, I am dedicated to engineering resilient, intelligent systems that translate high-dimensional data into precise, real-world outcomes.
(Ph.D. Minor, Harvard University)
Designed and implemented an integrated perception-to-action module for miniature, bio-mimetic underwater robots. To overcome the lack of real-world datasets, I utilized Generative AI to produce high-fidelity synthetic underwater environments for model training. The resulting TensorFlow Lite computer vision model was heavily optimized for the Google Coral Edge TPU, enabling real-time classification of obstacles, peers, and adversarial targets. This perception layer directly informs the robot’s trajectory planning, facilitating autonomous enemy evasion and decentralized collision avoidance in high-noise fluid environments.
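The perception-to-action handoff described above can be sketched in miniature. The snippet below is an illustrative stand-in, not the deployed system: the class labels, confidence threshold, and maneuver names are hypothetical, and the logits would come from the on-device TensorFlow Lite model rather than being passed in directly.

```python
import numpy as np

# Hypothetical classes the onboard vision model distinguishes
LABELS = ["obstacle", "peer", "adversary"]

def softmax(logits):
    """Convert raw model logits to class probabilities."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def choose_action(logits, threshold=0.6):
    """Map a classification to a high-level maneuver for the planner.

    Below the confidence threshold the robot holds course rather than
    reacting to a noisy detection -- important in high-noise fluid
    environments where spurious detections are common.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    if probs.max() < threshold:
        return "hold_course"
    label = LABELS[int(np.argmax(probs))]
    return {"obstacle": "avoid", "peer": "keep_spacing", "adversary": "evade"}[label]
```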
(Internship, Disney Research)
Orchestrated the physical realization of collision-free autonomous driving trajectories for miniature vehicular platforms. I engineered a robust control layer to translate high-level global path planning into precise steering and acceleration inputs. By bridging the gap between trajectory generation and low-level hardware actuation, I ensured the vehicles could navigate complex environments and execute real-time reactive maneuvers to maintain safe distance and avoid obstacles with high fidelity.
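A standard way to translate a planned path into steering inputs is pure-pursuit tracking, which steers toward a lookahead point on the trajectory. The sketch below shows the core geometry under stated assumptions: a kinematic bicycle model and an illustrative wheelbase, not the actual platform's geometry or control stack.

```python
import math

def pure_pursuit_steering(x, y, heading, target, wheelbase=0.1):
    """Steering angle toward a lookahead point (pure-pursuit style).

    (x, y, heading): current pose in the world frame (radians).
    target: lookahead point (x, y) on the planned trajectory.
    wheelbase: axle separation of the miniature vehicle (illustrative).
    """
    # Transform the lookahead point into the vehicle frame
    dx, dy = target[0] - x, target[1] - y
    local_x = math.cos(-heading) * dx - math.sin(-heading) * dy
    local_y = math.sin(-heading) * dx + math.cos(-heading) * dy
    ld = math.hypot(local_x, local_y)  # lookahead distance
    if ld < 1e-9:
        return 0.0
    # Curvature of the arc through the current pose and the target
    curvature = 2.0 * local_y / (ld * ld)
    return math.atan(wheelbase * curvature)
```

A target dead ahead yields zero steering; targets to the left or right of the heading produce positive or negative angles respectively, which the low-level actuation layer then tracks.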
(Lead Engineer, UN WFP)
Architected and deployed a mobile Computer Vision (CV) system for real-time childhood malnutrition screening in off-grid, low-resource environments. To enable clinical-grade inference on mobile devices and Google Coral Edge TPUs, I implemented a rigorous optimization pipeline including Full Integer (INT8) Quantization and Structured Pruning. This reduced model latency and memory footprint by over 10x without sacrificing diagnostic accuracy, successfully bridging the gap between high-compute deep learning and field-ready hardware.
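The arithmetic behind full-integer quantization can be shown directly. This is a minimal sketch of the scale/zero-point affine mapping that INT8 conversion applies to a weight tensor; the real TensorFlow Lite converter additionally calibrates activations and fuses operations, which is not modeled here.

```python
import numpy as np

def int8_quantize(weights):
    """Affine (scale/zero-point) INT8 quantization of a float tensor.

    Maps the observed float range [min, max] onto [-128, 127], the
    representation full-integer quantization uses for weights.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def int8_dequantize(q, scale, zero_point):
    """Recover approximate float values from the INT8 representation."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round trip loses at most one quantization step per weight, while storage drops from four bytes per weight to one; the 10x figure above combines this with structured pruning.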
(Ph.D. Major, Harvard University)
Developed a high-performance software ecosystem for the automated reconstruction of neural circuits from massive 3D Electron Microscopy datasets. My research pioneered the optimization of 3D Convolutional Neural Networks (CNNs) for volumetric processing and the design of Recurrent Neural Networks (RNNs) to maintain spatial consistency across biological structures. Beyond segmentation, I engineered a graph-based analytics framework that automatically detects topological errors and facilitates rapid proofreading of neural sub-graphs, drastically reducing the manual labor required for large-scale connectomics.
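One concrete form of graph-based error detection is flagging split errors: if the supervoxels assigned to a single neuron do not form one connected component of the region-adjacency graph, the segmentation needs proofreading. The version below is deliberately simplified for illustration; production connectomics graphs are orders of magnitude larger and the checks richer.

```python
from collections import deque

def connected_components(edges, nodes):
    """Group nodes into connected components via breadth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, queue = set(), deque([n])
        while queue:
            u = queue.popleft()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            queue.extend(adj[u] - seen)
        comps.append(comp)
    return comps

def flag_split_errors(neuron_to_supervoxels, adjacency_edges):
    """Flag neuron IDs whose labeled supervoxels are not mutually connected.

    A neuron split across multiple components is a candidate for the
    proofreading queue.
    """
    flagged = []
    for neuron, svs in neuron_to_supervoxels.items():
        internal = [(a, b) for a, b in adjacency_edges if a in svs and b in svs]
        if len(connected_components(internal, sorted(svs))) > 1:
            flagged.append(neuron)
    return flagged
```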
(AI Consultant, CapConnect+)
Architected a high-performance data engine for the commercial paper market to analyze and forecast investment patterns across a decade of financial data. I integrated Vector Search via Google Vertex AI to enable semantic retrieval, allowing users to identify complex historical risk profiles and market correlations that keyword-based systems miss. By combining Time-Series Forecasting with a modern data stack, I transformed raw transactional records into a strategic analytics platform capable of discerning market shifts and enterprise-level investment trends.
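At its core, the semantic retrieval step ranks document embeddings by cosine similarity to a query embedding. Vertex AI Vector Search runs this over approximate-nearest-neighbor indexes at scale; the primitive itself can be sketched directly, with toy vectors and IDs standing in for real embeddings of transaction records.

```python
import numpy as np

def semantic_search(query_vec, doc_vecs, doc_ids, k=3):
    """Return the top-k documents by cosine similarity to the query.

    query_vec: embedding of the analyst's query.
    doc_vecs:  matrix of document embeddings, one row per document.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each document to the query
    top = np.argsort(-scores)[:k]
    return [(doc_ids[i], float(scores[i])) for i in top]
```

Because ranking happens in embedding space rather than over keywords, documents phrased differently from the query can still surface, which is what lets the platform find historical risk profiles that keyword systems miss.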
(Research Fellow, Harvard Business School)
Designed and engineered a high-fidelity VR simulation platform for Harvard Business School to quantify human trust during autonomous vehicle (AV) transitions. Using Oculus VR (Meta Quest), I developed complex self-driving scenarios where users interacted with various levels of vehicle autonomy. The system was designed to measure "over-trust" and "under-trust" behaviors in real-time, providing empirical data on how users react to automated edge cases, system failures, and collision-avoidance maneuvers in a fully immersive, zero-risk environment.
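The over-trust/under-trust distinction can be made operational per scenario. The rule below is a hypothetical scoring sketch, not the study's actual instrument: trust is miscalibrated when the user's intervention decision contradicts the automation's true state in that trial.

```python
def classify_trust_event(automation_failed, user_took_over):
    """Label one simulated AV scenario (illustrative scoring rule).

    over-trust:  the automation failed but the user never intervened.
    under-trust: the automation was performing correctly but the user
                 seized control anyway.
    """
    if automation_failed and not user_took_over:
        return "over-trust"
    if not automation_failed and user_took_over:
        return "under-trust"
    return "calibrated"
```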
As the lead instructor for this specialized curriculum, I bridge the gap between high-level machine learning and low-power embedded systems. The course focuses on the end-to-end pipeline of TinyML, from training lightweight models in PyTorch and TensorFlow to the rigorous optimization required for deployment on ARM Cortex-M microcontrollers. I guide students through the complexities of hardware-aware constraints, including INT8 quantization, memory-efficient inference, and real-time sensor fusion. By synthesizing theory with hands-on labs using the Arduino Nano 33 BLE and Edge Impulse, I empower the next generation of engineers to deploy "intelligence everywhere"—from industrial IoT sensors to autonomous bio-mimetic robots.
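A first exercise in hardware-aware thinking is the back-of-envelope arithmetic: will the model's weights even fit in the target's flash? The layer sizes and flash budget below are illustrative, not tied to any specific lab model.

```python
def weight_footprint_bytes(params_per_layer, bits_per_weight=8):
    """Flash needed just for weights: parameter count x weight width.

    INT8 quantization stores one byte per weight; float32 stores four.
    """
    return sum(params_per_layer) * bits_per_weight // 8

def fits_on_target(params_per_layer, flash_budget_bytes, bits_per_weight=8):
    """First-pass feasibility check before targeting a given MCU."""
    return weight_footprint_bytes(params_per_layer, bits_per_weight) <= flash_budget_bytes
```

A model that overflows a 100 KB weight budget at float32 can fit comfortably after quantization to INT8, which is why the course treats quantization as a prerequisite rather than an optimization.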
In this intensive, project-driven course, I lead students through the evolution of connectionism—from the mathematical foundations of backpropagation and Artificial Neural Networks (ANNs) to the cutting edge of Generative AI. The curriculum is engineered for high-level practical mastery, moving beyond theory to implement state-of-the-art Convolutional Neural Networks (CNNs) for computer vision and Recurrent Neural Networks (RNNs) for sequential data. We culminate in the "Modern Stack," where students architect and fine-tune Transformers and Diffusion Models, gaining hands-on experience with the same architectures currently transforming industry-scale AI. By focusing on deployment-ready code and rigorous model evaluation, I ensure students can bridge the gap between academic research and production-grade intelligence.
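The backpropagation foundations the course starts from fit in a page: a hand-written two-layer network learning XOR, the classic problem no single linear layer can solve. Hyperparameters here (seed, width, learning rate, iteration count) are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 -> 8 (tanh) -> 1 (sigmoid)
W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for _ in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule with a cross-entropy loss, whose
    # gradient at the output logits is simply (out - y)
    d_out = out - y
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```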
In this course, I mentor students in the art and science of teaching machines to perceive the world. The curriculum is built around the modern evolution of visual intelligence, transitioning from traditional Convolutional Neural Networks (CNNs) for feature extraction to the revolutionary scale of Vision Transformers (ViTs). We also explore the frontier of Generative AI, specifically focusing on the architecture and mathematical sampling processes of Diffusion Models. Beyond model training, the course emphasizes "Vision for the Real World"—challenging students to engineer robust systems that can discern complex patterns, segment volumetric data, and solve high-stakes problems across medical imaging, autonomous navigation, and digital forensics.
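The feature-extraction primitive underneath CNNs (and the patch embeddings of ViTs) is the 2D convolution. A bare-bones version, written with explicit loops for clarity rather than the vectorized kernels a framework would use:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over a single-channel image.

    Slides the kernel across every position where it fully overlaps
    the image and records the elementwise product-sum.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A [-1, 1] kernel acts as a vertical-edge detector: its response is zero over flat regions and peaks exactly where intensity jumps, which is the hand-designed ancestor of the filters a CNN learns.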