Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Explainable Multimodal Models for Critical Infrastructure

1 minute read

Published:

Critical infrastructure operators increasingly rely on multimodal perception systems that fuse imagery, acoustic signatures, and telemetry feeds. Unfortunately, explainability research has lagged behind the architectural complexity of these systems. I propose a governance framework that blends modality-specific rationales with a global semantic narrative aligned to operator workflows. The pipeline begins with disentangled encoders whose latent spaces are regularised to preserve modality provenance. During inference, each encoder emits a sparse explanation graph that ties salient observations back to physical phenomena, such as corrosion cues in thermal imagery or harmonic anomalies in vibration spectra.
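As a rough sketch of what a per-modality sparse explanation graph could look like, the snippet below assumes hypothetical saliency scores from an encoder and a hand-written mapping from latent dimensions to physical cues; the names, thresholds, and values are illustrative, not the deployed pipeline.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each modality encoder yields saliency scores over latent
# dimensions, and we keep only the strongest links to known physical phenomena.

@dataclass
class ExplanationEdge:
    latent_dim: int
    phenomenon: str   # e.g. "corrosion hot-spot", "harmonic anomaly"
    weight: float

@dataclass
class ExplanationGraph:
    modality: str
    edges: list = field(default_factory=list)

def sparse_explanation(modality, saliencies, phenomenon_map, top_k=3):
    """Keep the top-k salient latent dimensions that map to a physical cue."""
    ranked = sorted(saliencies.items(), key=lambda kv: abs(kv[1]), reverse=True)
    graph = ExplanationGraph(modality=modality)
    for dim, weight in ranked[:top_k]:
        if dim in phenomenon_map:  # only dimensions with a known physical meaning
            graph.edges.append(ExplanationEdge(dim, phenomenon_map[dim], weight))
    return graph

# Illustrative usage with made-up saliency scores from a thermal-imagery encoder.
thermal_saliencies = {0: 0.91, 3: 0.42, 7: 0.05}
thermal_map = {0: "corrosion hot-spot", 3: "insulation degradation"}
print(sparse_explanation("thermal", thermal_saliencies, thermal_map))
```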

Sustainable AI Pipelines Through Carbon-Aware MLOps

1 minute read

Published:

Sustainable artificial intelligence cannot be reduced to marginal improvements in data centre efficiency. The discipline demands lifecycle accountability across model design, training, deployment, and retirement. In our lab we instrumented a carbon-aware orchestration layer that tags every pipeline component with energy provenance metadata sourced from regional grid emission factors. This instrumentation revealed that model retraining schedules, rather than inference, dominated our carbon budget. Armed with granular telemetry, we shifted heavy retraining batches to windows with high renewable penetration and replaced dense hyperparameter sweeps with Bayesian optimisation constrained by energy quotas.
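A minimal sketch of the scheduling idea, assuming candidate retraining windows annotated with grid carbon intensity; in a real deployment these figures would come from a regional emissions feed rather than the hard-coded values below.

```python
from datetime import datetime

# Hypothetical sketch: rank candidate retraining slots by estimated emissions
# (job energy in kWh times grid carbon intensity in gCO2/kWh) and pick the lowest.

def pick_greenest_window(candidate_windows, estimated_kwh):
    """Return the start time and estimated grams of CO2e for the greenest slot."""
    scored = [
        (w["start"], estimated_kwh * w["carbon_intensity_g_per_kwh"])
        for w in candidate_windows
    ]
    return min(scored, key=lambda item: item[1])

# Illustrative candidate windows; intensities are made-up placeholders.
windows = [
    {"start": datetime(2024, 5, 1, 2),  "carbon_intensity_g_per_kwh": 410.0},
    {"start": datetime(2024, 5, 1, 13), "carbon_intensity_g_per_kwh": 120.0},  # high solar
    {"start": datetime(2024, 5, 1, 21), "carbon_intensity_g_per_kwh": 300.0},
]
start, grams = pick_greenest_window(windows, estimated_kwh=35.0)
print(f"Schedule retraining at {start}: ~{grams / 1000:.1f} kg CO2e")
```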

Operationalizing LLM Governance with Enterprise Knowledge Graphs

1 minute read

Published:

Large language models amplify institutional knowledge, yet they also magnify the risk of hallucinated citations and policy drift. My current research integrates enterprise knowledge graphs as both a grounding substrate and a verifiable audit trail. Retrieval-augmented generation pipelines typically treat knowledge stores as passive context providers. I invert this relationship by requiring the model to declare explicit graph traversals before composing a response. Each traversal is validated against schema rules and access control policies so that the model cannot fabricate entities or reference embargoed data.
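The following is an illustrative sketch rather than the production validator: it assumes declared traversals arrive as typed triples and checks them against a toy schema and a hypothetical clearance level before any response is composed.

```python
# Hypothetical sketch: each declared graph traversal (subject type, relation,
# object type) must match an allowed schema triple and respect access policy.
# The schema, embargo list, and clearance scale below are illustrative.

ALLOWED_SCHEMA = {
    ("Supplier", "provides", "Component"),
    ("Component", "used_in", "Product"),
}
EMBARGOED_TYPES = {"PendingLitigation"}

def validate_traversals(traversals, user_clearance):
    """Reject any traversal outside the schema or touching embargoed entity types."""
    for t in traversals:
        triple = (t["subject_type"], t["relation"], t["object_type"])
        if triple not in ALLOWED_SCHEMA:
            return False, f"schema violation: {triple}"
        if t["object_type"] in EMBARGOED_TYPES and user_clearance < 3:
            return False, f"access denied: {t['object_type']}"
    return True, "ok"

# Illustrative usage: a model-declared traversal checked before generation.
declared = [
    {"subject_type": "Supplier", "relation": "provides", "object_type": "Component"},
]
print(validate_traversals(declared, user_clearance=1))
```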

Interpretable Foundations for Trustworthy Agentic AI

1 minute read

Published:

Agentic artificial intelligence systems promise autonomous adaptation and self-directed problem solving, yet their adoption in regulated domains hinges on verifiable transparency. In recent deployments I have observed that practitioners still rely on coarse attribution estimates derived from gradient saliency maps, even though these signals often collapse under distributional drift. I argue that an interpretable agentic stack must start with causal specification of decision objectives. Structural causal models provide a formal scaffold that distinguishes policy intent from the mutable patterns surfaced by data-driven planners. By encoding policy constraints as counterfactual queries, it becomes possible to debug agent trajectories with surgical precision.
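A toy illustration of phrasing a policy constraint as a counterfactual query, using a deliberately simplified structural causal model; the variables, mechanisms, and decision rule are invented for the example.

```python
import random

# Hypothetical toy SCM for one agent decision. The policy question is posed as a
# counterfactual: "would the agent still escalate if the alarm had not fired?"

def scm_sample(do_alarm=None, seed=0):
    """Sample the model; do_alarm overrides the alarm mechanism (an intervention)."""
    rng = random.Random(seed)          # fixed seed = shared exogenous noise
    noise = rng.random()
    alarm = do_alarm if do_alarm is not None else (noise > 0.7)
    sensor_drift = noise > 0.5
    escalate = alarm or sensor_drift   # simplified planner decision rule
    return {"alarm": alarm, "drift": sensor_drift, "escalate": escalate}

def counterfactual_escalation(seed):
    """Compare the factual trajectory against the do(alarm=False) counterfactual."""
    factual = scm_sample(seed=seed)
    counterfactual = scm_sample(do_alarm=False, seed=seed)  # same exogenous noise
    return factual["escalate"], counterfactual["escalate"]

# If escalation persists under do(alarm=False), the decision was driven by drift,
# not the alarm: a distinction a gradient saliency map cannot certify.
print(counterfactual_escalation(seed=42))
```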

Field Experiments in Human-in-the-Loop Machine Learning

1 minute read

Published:

Laboratory benchmarks rarely capture the socio-technical friction encountered when machine learning systems operate alongside frontline practitioners. To investigate this gap, I designed a series of field experiments across public health clinics that employ human-in-the-loop triage models. The studies revealed that data scientists often underestimate the latency introduced by manual override pathways. Clinicians needed interpretable uncertainty cues, not binary predictions, in order to calibrate their trust. We therefore redesigned the interface to surface calibrated risk intervals and provenance notes summarising the data regimes most responsible for each recommendation.
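As one illustration of what a calibrated risk interval could look like in code, the sketch below uses a 95% Wilson score interval over hypothetical historical counts; the numbers and the provenance note are made up for the example, not drawn from the clinic studies.

```python
import math

# Hypothetical sketch: instead of a binary flag, show clinicians a calibrated risk
# interval (a 95% Wilson score interval over similar past cases) plus a provenance note.

def wilson_interval(events, n, z=1.96):
    """Approximate 95% confidence interval for an event rate events/n."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

# Illustrative counts and provenance text.
lo, hi = wilson_interval(events=18, n=120)
provenance = "trained mainly on 2021-2023 urban clinic intake records"
print(f"Estimated deterioration risk: {lo:.0%}-{hi:.0%} (model {provenance})")
```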

portfolio

publications

talks

teaching

MM Foundation Village Outreach Program

Community Education, MM Foundation Village Outreach, 2020

From Jun 2018 to Oct 2020 I served as a student volunteer leader with the MM Foundation Village Outreach Program, where we tackled the digital divide in rural Punjab through sustained computer literacy initiatives. I coordinated a cohort of volunteers to deliver over fifty workshops that combined introductory computing, internet safety, and applied problem-solving tailored to local agricultural and small business needs. Our sessions equipped more than five hundred participants with practical digital skills, including spreadsheet-based budgeting and smartphone-enabled market research.

AirClass Coding Tutor

Remote Instruction, AirClass, Remote (Singapore), 2025

During my tenure as a coding tutor with AirClass (Jan 2020 – Jul 2025) I delivered personalised, project-based instruction in Python, data science, and applied AI to more than one hundred learners across Asia-Pacific and the Middle East. I designed modular curricula that paired foundational programming concepts with capstone applications, from computer vision mini-projects to end-to-end analytics dashboards, ensuring that each student built a tangible portfolio demonstrating both algorithmic understanding and practical problem-solving.