Requirement Analysis
Requirements core v1.0.0
Overview
Requirement analysis is the systematic process of understanding, documenting, and validating what a system must do (functional requirements) and how well it must do it (non-functional requirements). In the context of the full-lifecycle pipeline, requirement analysis transforms raw project/feature documents into a structured Requirement Manifest that drives all downstream phases — architecture, API design, implementation, testing, and deployment.
Key Concepts
Scope Classification
Every requirement document describes one of three scopes:
| Scope | Definition | Pipeline Impact |
|---|---|---|
| NEW_APP | Building a system from scratch | All 13 phases activated |
| NEW_FEATURE | Adding capability to an existing system | Selective phases, incremental artifacts |
| ENHANCEMENT | Improving/optimizing existing functionality | Targeted phases, minimal scaffolding |
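The scope decision can be encoded as a lookup that downstream tooling queries before activating phases. A minimal Python sketch; the phase names below are an illustrative subset (the overview names architecture, API design, implementation, testing, and deployment), not the pipeline's actual 13-phase list:

```python
from enum import Enum

class Scope(Enum):
    NEW_APP = "NEW_APP"
    NEW_FEATURE = "NEW_FEATURE"
    ENHANCEMENT = "ENHANCEMENT"

# Illustrative subset of pipeline phases (the real pipeline has 13).
ALL_PHASES = ["requirements", "architecture", "api_design",
              "implementation", "testing", "deployment"]

def phases_for(scope: Scope) -> list[str]:
    """Return the pipeline phases activated for a given scope."""
    if scope is Scope.NEW_APP:
        return ALL_PHASES                       # everything from scratch
    if scope is Scope.NEW_FEATURE:
        return ["requirements", "api_design",   # selective, incremental
                "implementation", "testing", "deployment"]
    # ENHANCEMENT: targeted phases, minimal scaffolding
    return ["requirements", "implementation", "testing"]
```

Classifying scope first keeps later phases from scaffolding artifacts an ENHANCEMENT never needs.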
Requirement Categories
┌─────────────────────────────────────────────────────────────┐
│ Requirement Taxonomy │
├─────────────────────────────────────────────────────────────┤
│ │
│ FUNCTIONAL │
│ ├── User-facing features (what users can do) │
│ ├── System behaviors (what the system does automatically) │
│ ├── Integration points (external system interactions) │
│ └── Data management (CRUD operations, transformations) │
│ │
│ NON-FUNCTIONAL │
│ ├── Performance (latency, throughput, response time) │
│ ├── Scalability (users, data volume, growth rate) │
│ ├── Availability (uptime, RTO, RPO) │
│ ├── Security (AuthN, AuthZ, encryption, compliance) │
│ ├── Observability (logging, monitoring, tracing) │
│ └── Maintainability (tech debt, extensibility) │
│ │
│ CONSTRAINTS │
│ ├── Technical (language, framework, cloud provider) │
│ ├── Business (budget, timeline, team size) │
│ ├── Regulatory (GDPR, HIPAA, PCI-DSS, SOC2) │
│ └── Operational (deployment frequency, support model) │
│ │
└─────────────────────────────────────────────────────────────┘
AI/ML Need Detection
Scan requirements for signal keywords that indicate AI/ML capabilities are needed:
| Signal Category | Keywords | Detected Capability |
|---|---|---|
| Search/Retrieval | “search”, “find”, “knowledge base”, “FAQ”, “semantic” | RAG Pipeline |
| Generation | “generate”, “write”, “summarize”, “chatbot”, “assistant” | LLM Integration |
| Classification | “classify”, “categorize”, “detect”, “predict”, “score” | ML Model |
| Recommendation | “recommend”, “suggest”, “personalize”, “similar” | Recommendation Engine |
| Automation | “automate”, “agent”, “autonomous”, “workflow AI” | Agentic Architecture |
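The table above maps directly onto a keyword scan. A minimal sketch, assuming whole-word matching is sufficient; the keyword lists mirror the table and would need tuning in practice:

```python
import re

# Signal keywords per capability, mirroring the table above.
SIGNALS = {
    "RAG Pipeline": ["search", "find", "knowledge base", "faq", "semantic"],
    "LLM Integration": ["generate", "write", "summarize", "chatbot", "assistant"],
    "ML Model": ["classify", "categorize", "detect", "predict", "score"],
    "Recommendation Engine": ["recommend", "suggest", "personalize", "similar"],
    "Agentic Architecture": ["automate", "agent", "autonomous", "workflow ai"],
}

def detect_ai_capabilities(requirements_text: str) -> list[str]:
    """Return capabilities whose keywords appear as whole words in the text."""
    text = requirements_text.lower()
    hits = set()
    for capability, keywords in SIGNALS.items():
        for kw in keywords:
            if re.search(r"\b" + re.escape(kw) + r"\b", text):
                hits.add(capability)
                break
    return sorted(hits)
```

Keyword hits are candidates for human review, not automatic requirements — see the “AI Hallucination” anti-pattern below.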
Best Practices
- Quantify NFRs — Replace vague terms (“fast”, “scalable”) with measurable targets (“P99 < 200ms”, “10K concurrent users”)
- Use INVEST criteria for user stories — Independent, Negotiable, Valuable, Estimable, Small, Testable
- Define acceptance criteria for every story in Given/When/Then format
- Separate concerns — Don’t mix functional requirements with implementation details
- Prioritize with MoSCoW — Must have, Should have, Could have, Won’t have (this time)
- Identify dependencies early — Map which stories depend on others for sequencing
- Flag ambiguities explicitly — Never assume; document with proposed resolution
- Detect over-engineering — Flag requirements that suggest unnecessary complexity
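The “Quantify NFRs” practice can be partially automated by flagging vague adjectives and recognizing measurable targets. A minimal sketch; the vague-term list and unit pattern are illustrative starting points, not an exhaustive rule set:

```python
import re

# Illustrative vague terms; extend for your domain.
VAGUE_TERMS = {"fast", "scalable", "robust", "performant", "responsive", "quick"}

# A quantified NFR contains a number followed by a unit or count.
QUANTIFIED = re.compile(r"\d+(\.\d+)?\s*(ms|s|rps|qps|%|users?|GB|TB)\b",
                        re.IGNORECASE)

def check_nfr(nfr: str) -> str:
    """Classify an NFR statement as QUANTIFIED, VAGUE, or UNCLASSIFIED."""
    if QUANTIFIED.search(nfr):
        return "QUANTIFIED"
    words = set(re.findall(r"[a-z]+", nfr.lower()))
    if words & VAGUE_TERMS:
        return "VAGUE"
    return "UNCLASSIFIED"
```

A VAGUE result is a prompt to ask the stakeholder for a number, not a rejection of the requirement.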
Code Examples
✅ Good: Well-Structured Requirement Manifest
scope:
  type: NEW_APP
  projectName: "Customer Support Portal"
  description: "Self-service support with AI-powered search and chat"
epics:
  - id: E-001
    title: "Ticket Management"
    priority: MUST
    stories:
      - id: US-001
        title: "Submit Support Ticket"
        persona: "Customer"
        narrative: "As a customer, I want to submit a support ticket, so that I can get help with my issue"
        acceptanceCriteria:
          - "Given I am logged in, When I fill the ticket form and submit, Then a ticket is created with status OPEN"
          - "Given I submit a ticket, When the submission succeeds, Then I receive a confirmation email with ticket ID"
        priority: MUST
        estimatedComplexity: MEDIUM
        tags: [backend, frontend, database]
nfrs:
  performance:
    latencyP99: "200ms" # Quantified, not "fast"
    throughput: "500 rps"
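A manifest like the one above can be gated with a completeness check before the pipeline proceeds. A minimal sketch operating on a parsed story (field names follow the example manifest; the rules shown are an illustrative minimum):

```python
def validate_story(story: dict) -> list[str]:
    """Return a list of problems; an empty list means the story passes."""
    problems = []
    for field in ("id", "title", "persona", "narrative", "priority"):
        if not story.get(field):
            problems.append(f"missing {field}")
    acs = story.get("acceptanceCriteria", [])
    if not acs:
        problems.append("no acceptance criteria")
    for ac in acs:
        # Every AC must follow the Given/When/Then format.
        if not ("Given" in ac and "When" in ac and "Then" in ac):
            problems.append(f"AC not in Given/When/Then form: {ac!r}")
    return problems
```

Running this over every story in the manifest catches the “Missing Personas” and “Untestable Criteria” anti-patterns before downstream phases consume the manifest.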
❌ Bad: Vague Requirements
# No scope classification
# requirements: "Build a support system that is fast and scalable"
# No user stories, no acceptance criteria
# NFRs: "must be fast" — not measurable
# No AI detection performed
# No priority assignment
Anti-Patterns
- Gold Plating — Adding capabilities not in requirements (“they might need this later”)
- Vague NFRs — “The system should be fast” without quantification
- Missing Personas — User stories without identified actors
- Untestable Criteria — Acceptance criteria that can’t be verified programmatically
- Scope Ambiguity — Not classifying as NEW_APP/NEW_FEATURE/ENHANCEMENT upfront
- Ignoring Constraints — Not cataloging budget, timeline, team, and regulatory constraints
- AI Hallucination — Detecting AI needs where none exist (e.g., simple CRUD labeled as “needs ML”)
Testing Strategies
- Requirement Review — Peer review of Requirement Manifest for completeness and consistency
- Traceability Check — Every requirement maps to at least one test case
- Acceptance Criteria Validation — Each AC can be converted to an automated test
- Ambiguity Score — The count of unresolved ambiguities must be zero before the pipeline proceeds
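The traceability check and ambiguity gate above can be combined into a single go/no-go decision. A sketch under the assumption that requirements and test cases are tracked by ID (the data shapes here are hypothetical):

```python
def ready_to_proceed(requirement_ids: list[str],
                     test_map: dict[str, list[str]],
                     open_ambiguities: list[str]) -> tuple[bool, list[str]]:
    """Gate the pipeline: every requirement maps to at least one test case,
    and the count of unresolved ambiguities is zero."""
    issues = [f"{rid}: no test case" for rid in requirement_ids
              if not test_map.get(rid)]
    issues += [f"unresolved ambiguity: {a}" for a in open_ambiguities]
    return (not issues, issues)
```

Returning the issue list alongside the boolean lets a review step report exactly which requirements block progress.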
References
- Writing Effective User Stories
- INVEST Criteria
- MoSCoW Prioritization
- IEEE 830 Software Requirements Specification