Nimmo
Research Scientist at Axiarete AI · PhD in CS, UC Riverside
I build agentic AI systems and publish the research behind them — cutting network fault diagnosis from minutes to seconds at Cisco, making JavaScript analysis 2× faster at UCR, and now building code-driven resilience at Axiarete AI.
My work lives at the intersection of classical program analysis and modern AI — building systems that can reason about code at scale, and making AI systems more reliable and interpretable.
I completed my PhD in Computer Science at the University of California, Riverside in 2025, advised by Prof. Manu Sridharan. My dissertation tackled the hard problems of making JavaScript static analysis more accurate, faster, and practically useful.
Today I’m a Research Scientist at Axiarete AI, building code-driven resilience and governance frameworks. Before that, I built agentic LLM systems at Cisco ThousandEyes that cut network fault diagnosis from 10+ minutes to seconds, worked on formal specifications with LLMs at Lawrence Livermore National Lab, and researched AI-generated code vulnerabilities at Microsoft Research.
Outside research, I’m a Toastmasters Division-level Public Speaking Champion, enthusiastic home cook, amateur photographer, and occasional wanderer. I also have an incurable habit of deep-diving into conversations with strangers on Reddit.
Leading development of a code-driven resilience and governance analysis framework that assesses disaster recovery readiness, observability integrity, and software composition risk directly from source code, delivering automated, evidence-backed operational readiness assessments at enterprise scale.
Built agentic LLM reasoning modules for a network observability platform, analyzing real-time telemetry across network, application, and BGP layers to deliver plain-language fault explanations. Reduced mean time to identify (MTTI) from 10+ minutes to seconds. Designed large-scale evaluation workflows for continuous agent reliability improvement.
Developed novel techniques for JavaScript static call graph analysis. An indirection-bounded analysis achieved up to a 2× speed-up on large Node.js programs with minimal precision loss. Also built automated root-cause quantification for call graph unsoundness and data-driven capture of dynamic behavior.
Developed LLM-powered formal specification capabilities in the ROSE compiler to automatically infer pre/post-conditions for C++ and Ada code. Built a novel C++ formal specification dataset via prompt engineering to bridge raw code and structured semantics.
Leveraged CodeBERT and static analysis to study and detect source-sink vulnerabilities in code snippets generated by AI assistants such as Copilot. Built a neural framework enabling automated detection of unsafe data handling across a diverse set of CWEs.