
# Ram Prakash Dhulipudi (dragon)

I build production AI systems and enterprise frontend platforms. I care about reliability, observability, and shipping useful software fast. Most of my production delivery work is private by design, and the public repositories here are representative demos, reference implementations, and exploratory builds.

For the latest execution updates, see the Profile Review Follow-Up (May 2026).

## Quick Proof (May 2026)

| Signal | Current evidence |
| --- | --- |
| GitHub contribution consistency | 1,645 contributions in the last year |
| Public repository base | 44 public repositories |
| External OSS visibility | 1 external PR opened + 1 meaningful issue comment |
| Measured outcomes documentation | 6 of 6 flagship repositories |
| Delivery focus | Trustworthy AI, reliability engineering, and enterprise frontend platforms |

## Core Focus

- Trustworthy RAG: evaluation beyond LLM-as-judge loops
- ML observability: drift, latency, and reliability in production
- Enterprise frontend: micro-frontends with governance and contracts
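The sentence-level RAG faithfulness idea can be sketched with a toy scorer: embed each answer sentence, compare it against every retrieved context chunk, and flag sentences with no sufficiently similar support. The bag-of-words "embedding" below is a stand-in for a real sentence-embedding model, and all names are illustrative rather than the actual hallucination-lens API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pipeline would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def faithfulness_report(answer_sentences, context_chunks, threshold=0.5):
    """Score each answer sentence by its best match against the
    retrieved context; sentences below the threshold are flagged
    as potentially unfaithful."""
    ctx_vectors = [embed(chunk) for chunk in context_chunks]
    report = []
    for sentence in answer_sentences:
        score = max((cosine(embed(sentence), c) for c in ctx_vectors),
                    default=0.0)
        report.append((sentence, round(score, 2), score >= threshold))
    return report
```

A sentence fully supported by the context scores near 1.0, while an unsupported one scores near 0.0 and is flagged; the threshold is the knob a benchmark would tune.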

## Selected Repositories

| Repository | Why it exists | Current direction |
| --- | --- | --- |
| hallucination-lens | Measure RAG faithfulness at sentence level | Add 3 benchmark datasets and ship v1 CLI JSON output |
| context-watchdog | Guardrails for long-running LLM and agent workflows | Publish pip package and policy benchmark report with 3 recipe presets |
| agentic-research-assistant | Multi-agent research with traceable orchestration | Add deterministic eval suite with citation-faithfulness and cost-tier metrics |
| mlops-sentinel | Monitor model behavior in production | Add alert triage dashboard, SLO thresholds, and monthly trend snapshots |
| partner-portal-microfrontends | Enterprise portal using federated React apps | Publish live mock-auth demo with badge and accessibility evidence |
| dragon-portfolio | Public case-study surface for shipped systems | Add quantified impact cards and monthly release radar updates |
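As one concrete example of the drift-monitoring loop mlops-sentinel targets, a minimal sketch (assuming a single numeric feature and a fixed reference window; the function names are hypothetical, not the project's API) can compare reference and production distributions with a two-sample Kolmogorov–Smirnov statistic:

```python
import bisect

def ks_statistic(reference, production):
    """Two-sample KS statistic: the largest vertical gap between
    the empirical CDFs of the two samples."""
    ref, prod = sorted(reference), sorted(production)
    gap = 0.0
    for value in sorted(set(ref) | set(prod)):
        cdf_ref = bisect.bisect_right(ref, value) / len(ref)
        cdf_prod = bisect.bisect_right(prod, value) / len(prod)
        gap = max(gap, abs(cdf_ref - cdf_prod))
    return gap

def drift_alert(reference, production, threshold=0.2):
    # Fire when the distributions diverge past the threshold; a real
    # monitor would add a significance test, windowing, and alert routing.
    return ks_statistic(reference, production) > threshold
```

Identical samples yield a statistic of 0.0 and disjoint samples yield 1.0, so a threshold in between trades off alert sensitivity against noise.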

## Flagship 6 (Start Here)

| Repository | What reviewers should look for first |
| --- | --- |
| hallucination-lens | Faithfulness scoring approach + benchmark framing |
| context-watchdog | Policy guardrail patterns for long-running agent workflows |
| agentic-research-assistant | Multi-agent orchestration and traceability |
| mlops-sentinel | Drift and reliability monitoring loop |
| partner-portal-microfrontends | Enterprise frontend governance and contracts |
| dragon-portfolio | Public case-study index and collaboration entry point |
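To make the context-watchdog row concrete: a common guardrail pattern for long-running agent loops enforces wall-clock and token budgets between steps. The sketch below illustrates that pattern only; the class and method names are hypothetical, not the package's actual API:

```python
import time

class BudgetGuard:
    """Abort signal for an agent loop that exceeds a wall-clock or
    token budget (illustrative pattern, not context-watchdog's API)."""

    def __init__(self, max_seconds: float = 60.0, max_tokens: int = 10_000):
        self.max_seconds = max_seconds
        self.max_tokens = max_tokens
        self.started = time.monotonic()
        self.tokens_used = 0

    def record(self, tokens: int) -> None:
        # Call after every model step with the tokens it consumed.
        self.tokens_used += tokens

    def violation(self):
        # Returns a reason string when a budget is exceeded, else None.
        if time.monotonic() - self.started > self.max_seconds:
            return "timeout"
        if self.tokens_used > self.max_tokens:
            return "token_budget_exceeded"
        return None
```

The agent loop checks `violation()` between steps and stops, or falls back to a cheaper policy, as soon as it returns a reason.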

## What I Optimize For

- Clear architecture and reproducible setup in every repository
- Automated checks (lint, test, build) on every meaningful change
- Evidence of outcomes: latency, cost, and reliability improvements
- Small, frequent releases instead of large infrequent drops

## New Pin Candidate (3-Week Build)

I am building AI Reliability Chaos Lab as the next flagship project to pin.

- Plan document: AI Reliability Chaos Lab - 3-week build plan
- Target outcome: a production-grade reliability chaos platform for LLM and agent systems with reproducible failure scenarios, mitigation policies, and release-grade evidence.
- Pin-ready target: semantic release v1.0.0 after week 3 validation and documentation hardening.

## Current Plan

  1. External contribution proof: increase to at least two merged external PRs and five meaningful issue comments, then publish direct evidence links using tracker and evidence template.
  2. Replace remaining placeholder metrics with measured numbers in all flagship README results blocks.
  3. Keep release cadence consistent with one semantic tag per flagship repository each month.
  4. Add one concrete screenshot or short demo GIF to each flagship repository README.
  5. Reduce container CVE warnings in flagship repos and document residual risk decisions in security notes.

No deletion or archiving is planned for non-fork repositories. Fork-only cleanup is in scope.

## Current Status

| Area | Signal |
| --- | --- |
| Measured outcomes docs | 6 of 6 selected repos |
| Best checklist coverage | 12/12 |
| Lowest checklist coverage | 12/12 (dragon-portfolio uplifted) |
| External contribution evidence | 1 external PR opened and 1 meaningful issue comment posted (see tracker) |
| Partner-portal baseline closure | CODEOWNERS added; CLAUDE.md present |
| GitHub contributions (last year) | 1,645 |
| Fork cleanup | 4 forked repositories archived |

## Q3 2026 Roadmap

  1. Maintain the dragon-portfolio production checklist score at 12/12 (uplifted from its 4/12 baseline) and deepen its architecture, security, and deployment docs.
  2. Maintain partner-portal baseline (CODEOWNERS and CLAUDE.md now present) and extend release governance evidence.
  3. Add reproducible benchmark artifacts and trend snapshots for all six pinned repositories.
  4. Increase release cadence consistency to one semantic tag per flagship repository each month.
  5. Expand external contribution evidence with merged PR links and concise impact notes every sprint.

## Recent Program Notes

  1. External contribution tracker
  2. External contribution evidence template
  3. External PR #1 opened: langchain-ai/langchain#37071
  4. External issue comment #1 posted: langchain-ai/langchain#31802 (comment)
  5. External PR #1 issue comment draft
  6. External PR #1 description draft
  7. Week 11 measured outcomes
  8. Week 12 quarterly audit
  9. Week 12 recap
  10. 2-phase profile and README plan
  11. Profile review follow-up (May 2026)

## Tech Stack

Python, FastAPI, LangGraph, TypeScript, React, Nx, Docker, PostgreSQL, GCP

## GitHub Activity

GitHub streak chart for Ramdragneel01

## Collaboration

I am open to collaboration on trustworthy AI tooling, evaluation systems, and production ML platform work.

Preferred collaboration topics:

  1. RAG faithfulness evaluation and benchmark design.
  2. MLOps reliability and observability loops.
  3. Enterprise frontend platform hardening and release governance.

Response expectation: best-effort reply within 2 to 5 business days.
