Madhurima Chakraborty
Open to research collaborations

Research Scientist at Axiarete AI · PhD in CS, UC Riverside

I build agentic AI systems and publish the research behind them — cutting network fault diagnosis from minutes to seconds at Cisco, making JavaScript analysis 2× faster at UCR, and now building code-driven resilience at Axiarete AI.

Resume Email GitHub LinkedIn Scholar
// about

Who I Am

My work lives at the intersection of classical program analysis and modern AI — building systems that can reason about code at scale, and making AI systems more reliable and interpretable.

I completed my PhD in Computer Science from the University of California, Riverside in 2025, advised by Prof. Manu Sridharan. My dissertation tackled the hard problems of making JavaScript static analysis more accurate, faster, and practically useful.

Today I’m a Research Scientist at Axiarete AI, building code-driven resilience and governance frameworks. Before that, I built agentic LLM systems at Cisco ThousandEyes that cut network fault diagnosis from 10+ minutes to seconds, worked on formal specifications with LLMs at Lawrence Livermore National Lab, and researched AI-generated code vulnerabilities at Microsoft Research.

Outside research, I’m a Toastmasters Division-level Public Speaking Champion, enthusiastic home cook, amateur photographer, and occasional wanderer. I also have an incurable habit of deep-diving into conversations with strangers on Reddit.

🏫
Education
Ph.D. Computer Science
UC Riverside · 2025
Advisor: Manu Sridharan
🔬
Research Interests
Agentic LLMs · Program Analysis
Static Analysis · Code Intelligence
AI Reliability · AI Safety
🌍
Location
San Francisco Bay Area
California, USA
🆕
Beyond the Lab
Public Speaking Champion
Cooking · Photography · Travel
Reddit rabbit holes
// recognition

Awards & Honors

🏆
2025
Most Innovative Project Award
Cohere Expedition Aya — Global Multilingual LLM Competition
🥉
2022
ACM SRC Grand Finals — 3rd Place
Graduate Category, ACM Student Research Competition
🥇
2021
SPLASH SRC — Winner
Graduate Category, ACM SIGPLAN SPLASH
🏫
2019
Dean’s Distinguished Fellowship
University of California, Riverside
🌐
2018
Google Nanodegree Scholarship
Front End Web Developer — Google India & Udacity
🎤
2017
Division-Level Public Speaking Champion
Toastmasters International · Triple Crown Award
// experience

Where I’ve Worked

Current
Research Scientist
Jan 2026 — Present
Axiarete AI · Newark, CA

Leading development of a code-driven resilience and governance analysis framework that assesses disaster recovery readiness, observability integrity, and software composition risk directly from source code, delivering automated, evidence-backed operational readiness assessments at enterprise scale.

Industry
Machine Learning Researcher
Jul 2025 — Jan 2026
Cisco ThousandEyes · San Francisco, CA

Built agentic LLM reasoning modules for a network observability platform, analyzing real-time telemetry across network, application, and BGP layers to deliver plain-language fault explanations. Reduced mean time to identify (MTTI) from 10+ minutes to seconds. Designed large-scale evaluation workflows for continuous agent reliability improvement.

Research
Graduate Researcher & PhD Candidate
Sep 2019 — Jun 2025
University of California, Riverside · Riverside, CA

Developed novel techniques for JavaScript static call graph construction: an indirection-bounded analysis that achieved up to a 2× speed-up on large Node.js programs with minimal precision loss, automated root-cause quantification for missing call graph edges (unsoundness), and data-driven capture of dynamic program behavior.
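To give a flavor of the problem, here is a minimal sketch (in TypeScript, with hypothetical names not drawn from the dissertation) of the kind of indirection that makes JavaScript call graphs hard to build statically: the callee at a call site flows through an object property and a runtime string key, so a sound analysis must track that flow to add the edge.

```typescript
// A callee reached only through property indirection: a static call graph
// builder must trace `greet`'s flow into `handlers` and resolve the dynamic
// property access `handlers[key]` to add the edge dispatch -> greet.
const handlers: Record<string, (name: string) => string> = {
  greet: (name) => `hello, ${name}`,
};

function dispatch(key: string, arg: string): string {
  // Indirect call site: which function runs depends on a runtime string.
  return handlers[key](arg);
}

console.log(dispatch("greet", "world"));
```

Analyses that bound how many such indirection steps they follow trade a little precision (edges reached only through deep indirection may be dropped) for substantial speed-ups on large codebases.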

National Lab
Computing Scholar
Jun 2024 — Sep 2024
Lawrence Livermore National Laboratory · Livermore, CA

Developed LLM-powered formal specification capabilities in the ROSE compiler to automatically infer pre/post-conditions for C++ and Ada code. Built a novel C++ formal specification dataset via prompt engineering to bridge raw code and structured semantics.

Industry
Research Intern
Jun 2022 — Sep 2022
Microsoft Research · Seattle, WA

Leveraged CodeBERT and static analysis to study and detect source-sink vulnerabilities in code snippets generated by AI assistants like Copilot. Built a neural framework enabling automated detection of unsafe data handling across diverse CWEs.

// research

Publications

5
CodeClarity: A Framework and Benchmark for Evaluating Multilingual Code Summarization
M. Chakraborty, D. Sharma, E. Nissar, M. Sikander
4
FormalSpecCpp: A Dataset of C++ Formal Specifications Created Using LLMs
M. Chakraborty, P. Pirkelbauer, Q. Yi
3
Indirection-Bounded Call Graph Analysis
M. Chakraborty, A. Gnanakumar, M. Sridharan, A. Møller
2
Automatic Root Cause Quantification for Missing Edges in JavaScript Call Graphs
M. Chakraborty, R. Olivares, M. Sridharan, B. Hassanshahi
1
A Study of Call Graph Effectiveness for Framework-Based Web Applications
M. Chakraborty
// toolkit

Skills

Languages
Python C/C++ JavaScript TypeScript Bash
AI & ML
PyTorch OpenAI APIs Prompt Engineering Semantic Kernel
Agentic LLMs
Tool Use Function Calling Reasoning Agents Opik LangSmith
Cloud & DevOps
AWS Docker Kubernetes GitHub Actions Teleport
Program Analysis
Static Analysis Call Graphs Vulnerability Detection CodeBERT
Interfaces
REST gRPC Jupyter VS Code Git
// community

Academic Service

Program Committee

  • SPLASH’24 — Student Volunteer Co-Chair
  • PLDI’24 — Artifact Evaluation
  • SAS’22 — Artifact Evaluation

Reviewing

  • NeurIPS’25, ICSE’25
  • MSR’25, TechDebt’26
  • TOSEM, TNNLS, TACO

Mentoring & Volunteering

  • Panelist, PLMW @ SPLASH’25
  • Mentor, Open Source Day 2021
  • Volunteer: PLDI’20, SPLASH’20, FSE’23