BDD Collaboration

v1.0.0

Overview

BDD collaboration transforms software development from a handoff-based process into a conversation-driven partnership. Through structured workshops like Three Amigos sessions, Example Mapping, and specification refinement, teams build shared understanding before writing code. This collaborative approach surfaces ambiguities early, reduces rework, and ensures that everyone—business analysts, developers, and testers—aligns on what “done” means. Living documentation emerges as a byproduct, keeping requirements and tests synchronized throughout the software lifecycle.

Successful BDD collaboration requires cultural change beyond adopting tools. Teams must embrace early communication, welcome questions, and view scenarios as collaborative specifications rather than developer-owned tests. When done well, BDD collaboration eliminates the “us vs. them” mentality between business and technical teams.


Key Concepts

The Three Amigos

┌─────────────────────────────────────────────────────────┐
│              The Three Amigos Session                   │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  👔 Business Analyst    🧪 Tester    💻 Developer       │
│                                                         │
│  Brings:                Brings:      Brings:            │
│  • Business rules       • Test ideas • Technical         │
│  • User needs           • Edge cases   constraints      │
│  • Success criteria     • Risk areas • Feasibility      │
│                                                         │
│  Collaborate on:                                        │
│  • Feature scope                                        │
│  • Concrete examples                                    │
│  • Acceptance criteria                                  │
│  • Gherkin scenarios                                    │
│                                                         │
│  Output:                                                │
│  • Shared understanding                                 │
│  • Refined user story                                   │
│  • Executable specifications                            │
│                                                         │
└─────────────────────────────────────────────────────────┘

Example Mapping Process

┌─────────────────────────────────────────────────────────┐
│              Example Mapping Session                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  🟦 USER STORY (Blue Card)                              │
│  ├─ 🟨 RULE 1 (Yellow Card)                            │
│  │  ├─ 🟩 Example 1.1 (Green Card)                     │
│  │  ├─ 🟩 Example 1.2 (Green Card)                     │
│  │  └─ 🟥 Question 1 (Red Card)                        │
│  │                                                      │
│  ├─ 🟨 RULE 2 (Yellow Card)                            │
│  │  ├─ 🟩 Example 2.1 (Green Card)                     │
│  │  └─ 🟩 Example 2.2 (Green Card)                     │
│  │                                                      │
│  └─ 🟥 Question 2 (Red Card)                           │
│                                                         │
│  Timebox: 25 minutes                                    │
│  Outcome: Story is ready OR needs more discovery        │
│                                                         │
└─────────────────────────────────────────────────────────┘

Signal Lights:
🟢 GREEN  - Story is clear, ready for development
🟡 YELLOW - Some questions, can proceed with caution
🔴 RED    - Too many unknowns, need more discovery
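The card structure and signal lights above can be modeled in code. The following is a minimal sketch, assuming a simple heuristic for the signal (the thresholds below, and the rule that an example-less rule forces RED, are illustrative assumptions, not a standardized part of Example Mapping):

```java
import java.util.List;

// Minimal model of an Example Mapping session: a story card, rule cards
// with their example cards, and open question cards.
public class ExampleMap {
    public record Rule(String text, List<String> examples) {}

    private final String story;
    private final List<Rule> rules;
    private final List<String> questions;

    public ExampleMap(String story, List<Rule> rules, List<String> questions) {
        this.story = story;
        this.rules = rules;
        this.questions = questions;
    }

    // GREEN: no open questions and every rule has at least one example.
    // YELLOW: a few questions remain; proceed with caution.
    // RED: too many unknowns (or a rule with no examples); needs discovery.
    public String signal() {
        boolean allRulesCovered = rules.stream().noneMatch(r -> r.examples().isEmpty());
        if (questions.isEmpty() && allRulesCovered) return "GREEN";
        if (questions.size() <= 3 && allRulesCovered) return "YELLOW";
        return "RED";
    }
}
```

A session facilitator could use this at the timebox's end: count the red cards, check that each yellow card has green cards under it, and read off the signal.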

Collaboration Workflow

1. Sprint Planning (2 weeks ahead)
2. Discovery Workshop (Example Mapping)
3. Three Amigos Session (refine scenarios)
4. Write Gherkin Scenarios
5. Review Scenarios with Stakeholders
6. Development (TDD guided by scenarios)
7. Automated Scenario Execution
8. Living Documentation Generated

Living Documentation Pyramid


               /\
              /  \
             /Exec\        ← Executive Summary
            /______\
           /        \
          / Feature  \     ← Feature Overview
         /  Catalog   \
        /______________\
       /                \
      /    Scenarios     \ ← Detailed Scenarios
     /     (Gherkin)      \
    /______________________\
   /                        \
  /   Step Definitions       \ ← Technical Implementation
 /      & Test Code           \
/______________________________\

Bottom-Up Flow: Code → Scenarios → Features → Dashboard
Auto-generated, always current

Collaboration Anti-Patterns

| Anti-Pattern   | Problem                      | Solution                    |
|----------------|------------------------------|-----------------------------|
| Waterfall BDD  | Write all scenarios upfront  | Collaborate just-in-time    |
| Developer-Only | Devs write scenarios alone   | Include BA and tester       |
| After-the-Fact | Write scenarios after coding | Scenarios drive development |
| Too Abstract   | Vague, generic scenarios     | Use concrete examples       |
| Tool-Focused   | "BDD = Cucumber"             | BDD is collaboration first  |

Best Practices

1. Schedule Regular Three Amigos Sessions

Make collaboration routine, not exceptional. Schedule sessions 1-2 sprints ahead of development.

2. Use Example Mapping for Discovery

25-minute timeboxed sessions using colored index cards to identify rules, examples, and questions.

3. Keep Scenarios Business-Readable

Non-technical stakeholders should understand scenarios. Avoid technical jargon.

4. Treat Scenarios as Living Documentation

Update scenarios when behavior changes. They’re not frozen artifacts.

5. Share Scenarios Widely

Make Gherkin scenarios accessible to entire organization via wiki, Confluence, or generated reports.


Code Examples

Example 1: Three Amigos Session Flow

# Before Three Amigos Session:
User Story (Draft):
As a customer, I want to apply promo codes so I can save money.

# During Three Amigos Session:

👔 BA: "We have Black Friday promos coming up. Codes give 10-30% off."

🧪 Tester: "What if code is expired? What if they try multiple codes?"

💻 Developer: "How do we validate codes? Database lookup? API call?"

👔 BA: "Expired codes should show friendly error. Only one code per order."

🧪 Tester: "What about edge cases? Code for $10 off a $5 order?"

💻 Developer: "Good point. Discount can't exceed order total."

👔 BA: "Right. And codes are case-insensitive for user convenience."

🧪 Tester: "Should we test with special characters in codes?"

💻 Developer: "Let's restrict codes to alphanumeric for now."

# After Three Amigos Session:

Feature: Apply Promotional Codes
  As a customer
  I want to apply promo codes
  So that I can save money on my purchase

  Business Rules:
  - Codes give 10-30% discount
  - Only one code per order
  - Codes are case-insensitive
  - Expired codes show friendly error
  - Discount cannot exceed order total
  - Codes are alphanumeric only

  Scenario: Apply valid percentage discount
    Given I have an order totaling $100.00
    When I apply promo code "SAVE20"
    Then my total should be $80.00
    And the discount should show as "-$20.00"

  Scenario: Reject expired promo code
    Given I have an order totaling $100.00
    And promo code "EXPIRED" expired yesterday
    When I apply promo code "EXPIRED"
    Then I should see error "This promo code has expired"
    And my total should remain $100.00

  Scenario: Only one promo code per order
    Given I have applied promo code "SAVE10"
    And my total is now $90.00
    When I attempt to apply promo code "SAVE20"
    Then I should see error "Only one promo code allowed per order"
    And my total should remain $90.00

  Scenario: Promo codes are case-insensitive
    Given I have an order totaling $100.00
    When I apply promo code "save20"
    Then my total should be $80.00

  Scenario: Discount cannot exceed order total
    Given I have an order totaling $10.00
    And promo code "FLAT20" gives $20 off
    When I apply promo code "FLAT20"
    Then my total should be $0.00
    But my total should not be negative

  Questions to Research:
  - Do codes expire at midnight or specific time?
  - Who manages code creation (admin panel)?
  - Usage limits per customer?
  - Tracking for marketing analytics?

# ✅ Outcome: Shared understanding, clear scenarios, identified gaps
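The rules the three amigos agreed on translate directly into domain logic. Below is a sketch of that logic in plain Java, using integer cents to avoid floating-point rounding; `PromoCart` and the in-memory code table are hypothetical illustrations, not a real API:

```java
import java.util.Locale;
import java.util.Map;

// Encodes the session's agreed rules: one code per order, case-insensitive,
// alphanumeric only, unknown/expired codes rejected with a friendly error,
// and discount capped at the order total.
public class PromoCart {
    // code -> percent off (agreed range: 10-30%); a stand-in for a DB lookup
    private static final Map<String, Integer> PERCENT_CODES =
            Map.of("SAVE10", 10, "SAVE20", 20, "SAVE30", 30);

    private final long subtotalCents;
    private String appliedCode;

    public PromoCart(long subtotalCents) { this.subtotalCents = subtotalCents; }

    public String apply(String code) {
        if (appliedCode != null) return "Only one promo code allowed per order";
        String normalized = code.toUpperCase(Locale.ROOT);      // case-insensitive
        if (!normalized.matches("[A-Z0-9]+")) return "Invalid promo code"; // alphanumeric only
        Integer percent = PERCENT_CODES.get(normalized);
        if (percent == null) return "This promo code has expired";
        appliedCode = normalized;
        return "OK";
    }

    public long totalCents() {
        if (appliedCode == null) return subtotalCents;
        long discount = subtotalCents * PERCENT_CODES.get(appliedCode) / 100;
        // Discount cannot exceed order total, so total never goes negative.
        return Math.max(0, subtotalCents - discount);
    }
}
```

The step definitions behind the Gherkin scenarios above would drive exactly this kind of object: `apply promo code "save20"` calls `apply("save20")`, and the `Then` steps assert on `totalCents()` and the returned message.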

Example 2: Example Mapping Session

# User Story:
As a user, I want to reset my password
So that I can regain access if I forget it

# Example Mapping Session (25 minutes):

🟦 STORY CARD
   "Password Reset"

   🟨 RULE 1: "User must receive reset email"
      🟩 Example 1.1: Valid email → receive reset link
      🟩 Example 1.2: Unknown email → generic message (security)
      🟥 Question 1: How long is reset link valid?

   🟨 RULE 2: "Reset link must be single-use"
      🟩 Example 2.1: Use link once → works
      🟩 Example 2.2: Use link twice → error
      🟩 Example 2.3: Request new link → old link invalid

   🟨 RULE 3: "New password must meet requirements"
      🟩 Example 3.1: Strong password → accepted
      🟩 Example 3.2: Weak password → rejected with reason
      🟥 Question 2: Can new password be same as old?

   🟨 RULE 4: "User is logged in after reset"
      🟩 Example 4.1: Successful reset → auto login

   🟥 Question 3: Rate limiting on reset requests?
   🟥 Question 4: Email delivery failures - retry logic?

# Signal: 🟡 YELLOW
# - Most rules clear with examples
# - 4 questions need answers
# - Can proceed with known scenarios
# - Research questions in parallel

# Convert to Gherkin:

Feature: Password Reset

  Rule: User receives reset email for valid requests
    
    Scenario: Request reset for registered email
      Given a user account exists for "john@example.com"
      When I request password reset for "john@example.com"
      Then I should receive reset email at "john@example.com"
      And the email should contain a reset link

    Scenario: Request reset for unknown email
      Given no account exists for "unknown@example.com"
      When I request password reset for "unknown@example.com"
      Then I should see message "If the email exists, you will receive reset instructions"
      And no reset email should be sent

  Rule: Reset link is single-use
    
    Scenario: Use reset link once
      Given I have a valid reset link
      When I use the reset link to set new password
      Then my password should be updated
      And I should be logged in

    Scenario: Attempt to reuse reset link
      Given I have used a reset link
      When I attempt to use the same link again
      Then I should see error "This reset link has already been used"

  Rule: New password must meet security requirements
    
    Scenario Outline: Validate password strength
      Given I have a valid reset link
      When I set new password to "<password>"
      Then the result should be "<result>"

      Examples:
        | password      | result                                    |
        | SecurePass1   | success                                   |
        | weak          | error: minimum 8 characters required      |
        | NoNumbers     | error: must contain at least 1 digit      |
        | no-uppercase  | error: must contain uppercase letter      |

Example 3: Specification Workshop Agenda

# Specification Workshop Template

## Pre-Workshop (1 day before)
- [ ] Circulate user story to participants
- [ ] Review story briefly (BA)
- [ ] Identify obvious questions
- [ ] Book room with whiteboard/cards

## Workshop Agenda (90 minutes)

### 1. Story Review (10 minutes)
👔 BA presents user story and business context
- What are we building?
- Why does it matter?
- Who benefits?

### 2. Business Rules Brainstorm (15 minutes)
🟨 Team identifies rules (yellow cards)
- What constraints exist?
- What are the boundaries?
- What validations apply?

Example:
- "Discount codes expire after 30 days"
- "Only one discount per order"
- "Minimum order value may be required"

### 3. Examples for Each Rule (30 minutes)
🟩 Team creates concrete examples (green cards)
- Happy path example
- Edge cases
- Error conditions

Example for "Only one discount per order":
✓ Apply first code → works
✓ Apply second code → error
✓ Remove first code, apply different → works

### 4. Questions and Unknowns (15 minutes)
🟥 Capture questions that block implementation (red cards)
- "Who creates/manages discount codes?"
- "What happens if discount exceeds total?"
- "Are codes case-sensitive?"

### 5. Convert to Gherkin (15 minutes)
Transform examples into Given-When-Then scenarios

Example:
Green Card: "Apply 20% discount on $100 order = $80 total"

Scenario: Apply percentage discount
  Given order total is $100
  When I apply code "SAVE20" (20% off)
  Then my total should be $80

### 6. Review and Acceptance (5 minutes)
👔 BA confirms scenarios match intent
🧪 Tester confirms coverage is adequate
💻 Developer confirms clarity for implementation

## Post-Workshop
- [ ] Document scenarios in feature files
- [ ] Assign question research to owners
- [ ] Schedule follow-up if needed (too many red cards)
- [ ] Add story to sprint backlog

## Workshop Success Criteria
✓ All participants understand the feature
✓ Edge cases identified
✓ 3-7 scenarios defined (not too few, not too many)
✓ Questions tracked with owners
✓ Ready for development OR need more discovery

Example 4: Living Documentation Setup

// 1. Configure Cucumber to generate reports

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    glue = "com.example.steps",
    plugin = {
        "pretty",
        "html:target/cucumber-reports/cucumber.html",
        "json:target/cucumber-reports/cucumber.json",
        "junit:target/cucumber-reports/cucumber.xml"
    }
)
public class RunCucumberTests {
}

// 2. Generate enhanced HTML reports

// pom.xml
<plugin>
    <groupId>net.masterthought</groupId>
    <artifactId>maven-cucumber-reporting</artifactId>
    <version>5.7.5</version>
    <executions>
        <execution>
            <phase>verify</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <projectName>E-Commerce Platform</projectName>
                <outputDirectory>target/living-docs</outputDirectory>
                <inputDirectory>target/cucumber-reports</inputDirectory>
                <jsonFiles>
                    <param>**/*.json</param>
                </jsonFiles>
                <buildNumber>${build.number}</buildNumber>
                <checkBuildResult>true</checkBuildResult>
            </configuration>
        </execution>
    </executions>
</plugin>

// 3. Publish to Confluence (optional)

public class LivingDocPublisher {

    private final String confluenceUrl;
    private final String apiToken;
    private final String pageId;

    public LivingDocPublisher(String confluenceUrl, String apiToken, String pageId) {
        this.confluenceUrl = confluenceUrl;
        this.apiToken = apiToken;
        this.pageId = pageId;
    }

    public void publishToConfluence() {
        // Read Cucumber JSON report
        CucumberReport report = parseReport("target/cucumber-reports/cucumber.json");
        
        // Generate Confluence markup
        String confluenceMarkup = convertToConfluence(report);
        
        // Publish via Confluence API
        ConfluenceClient client = new ConfluenceClient(confluenceUrl, apiToken);
        client.updatePage(pageId, confluenceMarkup);
    }
    
    private String convertToConfluence(CucumberReport report) {
        StringBuilder markup = new StringBuilder();
        
        // Feature overview
        markup.append("h1. Features\n\n");
        for (Feature feature : report.getFeatures()) {
            markup.append("h2. ").append(feature.getName()).append("\n");
            markup.append(feature.getDescription()).append("\n\n");
            
            // Scenarios
            for (Scenario scenario : feature.getScenarios()) {
                markup.append("h3. ").append(scenario.getName()).append("\n");
                
                // Steps
                markup.append("{code:gherkin}\n");
                for (Step step : scenario.getSteps()) {
                    markup.append(step.getKeyword())
                          .append(" ")
                          .append(step.getName())
                          .append("\n");
                }
                markup.append("{code}\n\n");
                
                // Status indicator
                String status = scenario.isPassed() ? "(/) PASSING" : "(x) FAILING";
                markup.append("{info}").append(status).append("{info}\n\n");
            }
        }
        
        // Statistics
        markup.append("h2. Test Statistics\n");
        markup.append("||Total||Passed||Failed||Skipped||\n");
        markup.append(String.format("|%d|%d|%d|%d|\n",
            report.getTotalScenarios(),
            report.getPassedScenarios(),
            report.getFailedScenarios(),
            report.getSkippedScenarios()));
        
        return markup.toString();
    }
}

// 4. CI/CD Integration

// Jenkins/GitHub Actions: Publish as artifact
// .github/workflows/test.yml
name: Tests and Living Docs

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '17'
      - name: Run tests
        run: mvn clean verify
      
      - name: Publish Living Documentation
        uses: actions/upload-artifact@v3
        with:
          name: living-documentation
          path: target/living-docs/
      
      - name: Deploy to GitHub Pages
        if: github.ref == 'refs/heads/main'
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./target/living-docs

// 5. Result: Auto-generated, always current documentation
// - Feature catalog with descriptions
// - All scenarios with pass/fail status
// - Execution trends over time
// - Accessible to entire organization

Example 5: Collaborative Scenario Review

# Scenario Review Checklist
# Review scenarios with entire Three Amigos before implementation

Feature: Order Discount Calculation

  # ✅ GOOD - Clear business language
  Scenario: Apply volume discount for bulk orders
    Given I have an order with 15 items at $10 each
    When I proceed to checkout
    Then my subtotal should be $150.00
    And volume discount of $22.50 should be applied
    And my total should be $127.50

  # 👔 BA Review:
  # ✓ Business rule correct (15% off for 10+ items)
  # ✓ Language matches business terminology
  # ? Question: What about mixed quantities (e.g., 8 of product A, 7 of product B)?
  
  # 🧪 Tester Review:
  # ✓ Happy path covered
  # ? Missing: What if order has exactly 10 items?
  # ? Missing: What if items have different prices?
  # ? Missing: Does discount stack with promo codes?
  
  # 💻 Developer Review:
  # ✓ Clear enough to implement
  # ? Need: Currency handling (rounding rules)
  # ? Need: Performance requirement (calculation time)

  # OUTCOME: Add scenarios for edge cases identified by tester

  Scenario: Volume discount at threshold boundary
    Given I have an order with exactly 10 items at $10 each
    When I proceed to checkout
    Then volume discount of $15.00 should be applied
    And my total should be $85.00

  Scenario: Volume discount with mixed prices
    Given I have an order with:
      | quantity | price |
      | 5        | 10.00 |
      | 5        | 20.00 |
      | 5        | 15.00 |
    When I proceed to checkout
    Then my subtotal should be $225.00
    And volume discount of $33.75 should be applied
    And my total should be $191.25

  Scenario: Volume discount with promo code
    Given I have an order with 15 items at $10 each
    And I have applied promo code "SAVE10" (10% off)
    When I proceed to checkout
    Then my subtotal should be $150.00
    And volume discount of $22.50 should be applied
    And promo code discount of $12.75 should be applied
    And my total should be $114.75

  # RESEARCH NEEDED (from Developer):
  # - Rounding rules: Round to nearest cent? Always round down?
  # - Performance: Max acceptable calculation time?
  # - Currency: Support multiple currencies?

  # Document in story:
  @requires-clarification
  Scenario: TODO - Multi-currency handling
    # Pending decision on currency conversion rules

# ✅ Collaborative Review Benefits:
# - BA ensures business accuracy
# - Tester identifies missing coverage
# - Developer surfaces technical needs
# - Team builds shared understanding
# - Questions tracked for resolution
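The arithmetic the scenarios above pin down (15% volume discount at 10+ items, promo percentage applied to the already-discounted amount) can be sketched as follows. Amounts are integer cents; since rounding rules are still an open research question in the story, exact division is assumed here and the class name is an illustration:

```java
// Discount arithmetic implied by the reviewed scenarios:
// - 15% volume discount for orders with 10 or more items
// - a promo-code percentage applied after the volume discount
public class DiscountCalculator {
    static final int VOLUME_THRESHOLD = 10;
    static final int VOLUME_PERCENT = 15;

    public static long volumeDiscountCents(int itemCount, long subtotalCents) {
        return itemCount >= VOLUME_THRESHOLD
                ? subtotalCents * VOLUME_PERCENT / 100
                : 0;
    }

    public static long totalCents(int itemCount, long subtotalCents, int promoPercent) {
        long afterVolume = subtotalCents - volumeDiscountCents(itemCount, subtotalCents);
        long promoDiscount = afterVolume * promoPercent / 100;   // promo stacks on the discounted amount
        return afterVolume - promoDiscount;
    }
}
```

Running the scenarios' numbers through this: 15 items at $10 gives $150.00 subtotal, a $22.50 volume discount, and with "SAVE10" a further $12.75 off for a $114.75 total, matching the reviewed scenario.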

Anti-Patterns

❌ Skipping Collaboration

// WRONG - Developer writes scenarios alone
Developer: "I'll write the Gherkin scenarios based on the story."
Result:
- Technical language in scenarios
- Missed business edge cases
- No shared understanding
- Rework when BA reviews later

// ✅ CORRECT - Three Amigos collaboration
BA + Tester + Developer together:
- Discuss examples
- Surface questions early
- Write scenarios collaboratively
- Everyone understands feature

❌ Waterfall BDD

// WRONG - Big upfront design
Week 1: Write all scenarios for entire epic
Week 2-4: Develop features
Week 5: Test against scenarios
Result:
- Requirements drift
- Wasted effort on changed scenarios
- No feedback loop

// ✅ CORRECT - Just-in-time collaboration
Sprint N-1: Three Amigos for stories in Sprint N
Sprint N: Develop with fresh understanding
Continuous: Update scenarios as needed
Result:
- Current information
- Minimal waste
- Fast feedback

❌ Tool-Centric BDD

// WRONG - "BDD is using Cucumber"
Team: "We installed Cucumber, so we're doing BDD!"
Reality:
- Developers write scenarios alone
- No business involvement
- Technical tests in Gherkin format
- No collaboration benefit

// ✅ CORRECT - Collaboration-centric BDD
Team: "BDD is how we collaborate, Cucumber is just the tool"
Practice:
- Regular Three Amigos sessions
- Example Mapping workshops
- Business-readable scenarios
- Shared understanding as primary goal

Testing Strategies

Collaboration Cadence

# Sprint Rhythm for BDD Collaboration

## Week 1 (Sprint N)
**Monday**
- Planning: Pull stories into Sprint N
- Stories for Sprint N+1 already have scenarios (from previous sprint)

**Wednesday**
- Three Amigos: Review stories for Sprint N+2
- Example Mapping: 25-min sessions for each story

**Friday**
- Scenario Review: Finalize Gherkin for Sprint N+2
- Publish scenarios to wiki/Confluence

## Week 2 (Sprint N)
**Monday**
- Development continues (Sprint N stories)
- Ad-hoc Three Amigos as needed for clarifications

**Thursday**
- Demo prep: Living documentation generated from passing tests

**Friday**
- Sprint Demo: Show features + living documentation
- Retrospective: Discuss collaboration effectiveness

## Result: Always 1-2 sprints ahead on scenarios

Distributed Team Collaboration

# Remote Three Amigos Best Practices

## Tools
- Video: Zoom/Teams (cameras on)
- Whiteboard: Miro/Mural for Example Mapping
- Documentation: Confluence/Wiki for scenarios
- Chat: Slack for async questions

## Session Structure (remote-optimized)

**Pre-work (15 minutes before)**
- Review story individually
- Add questions to shared doc
- Review existing scenarios

**Session (45 minutes)**
1. Introductions (if needed) - 2 min
2. Story review (BA screen share) - 5 min
3. Example Mapping (Miro board) - 25 min
   - Use digital cards (blue/yellow/green/red)
   - Everyone can add cards simultaneously
   - Use "thumbs up" reactions for agreement
4. Scenario drafting - 10 min
5. Next steps and owners - 3 min

**Post-session (async)**
- BA writes scenarios in Gherkin
- Developer reviews for technical clarity
- Tester reviews for coverage
- Post to Slack for final approval

## Tips
✓ Record sessions for team members in other timezones
✓ Use countdown timer (visible to all)
✓ Designate facilitator (rotates)
✓ Share screen with scenarios during discussion
✓ Use parking lot for out-of-scope questions

Measuring Collaboration Effectiveness

// Track metrics to improve collaboration

public class BDDCollaborationMetrics {
    
    // Metric 1: Scenario stability
    // How often do scenarios change after implementation starts?
    // Target: < 10% changes after Three Amigos sign-off
    public double calculateScenarioStability() {
        int scenariosWritten = countScenarios();
        int scenariosChanged = countChangedScenarios();
        return (1 - (double) scenariosChanged / scenariosWritten) * 100;
    }
    
    // Metric 2: Three Amigos attendance
    // Are all three perspectives represented?
    // Target: > 90% sessions have BA + Dev + Tester
    public double calculateAttendanceRate() {
        int totalSessions = countThreeAmigosSessions();
        int fullAttendance = countSessionsWithAllThree();
        return ((double) fullAttendance / totalSessions) * 100;
    }
    
    // Metric 3: Question resolution time
    // How quickly are red cards (questions) resolved?
    // Target: < 2 days average
    public double calculateAvgQuestionResolutionDays() {
        List<Question> questions = getQuestions();
        return questions.stream()
            .mapToLong(q -> q.getResolutionTime())
            .average()
            .orElse(0) / (24 * 60 * 60 * 1000); // ms to days
    }
    
    // Metric 4: Defect rate
    // Bugs found in production vs caught by scenarios
    // Target: > 80% caught by scenarios
    public double calculateDefectCatchRate() {
        int totalDefects = countDefects();
        int caughtByScenarios = countDefectsCaughtByBDD();
        return ((double) caughtByScenarios / totalDefects) * 100;
    }
    
    // Metric 5: Rework rate
    // How often do we rebuild due to misunderstood requirements?
    // Target: < 5% of stories need significant rework
    public double calculateReworkRate() {
        int totalStories = countCompletedStories();
        int storiesReworked = countStoriesRequiringRework();
        return ((double) storiesReworked / totalStories) * 100;
    }
}

// Example dashboard output:
/*
BDD Collaboration Health Dashboard

Scenario Stability:      92% ✓ (target: >90%)
Three Amigos Attendance: 88% ~ (target: >90%)
Question Resolution:     1.5 days ✓ (target: <2 days)
Defect Catch Rate:       85% ✓ (target: >80%)
Rework Rate:             3% ✓ (target: <5%)

Overall Health: HEALTHY 🟢

Recommendations:
- Improve Three Amigos attendance (currently 88%, target 90%)
- Consider async participation options for distributed team
*/
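A small grading helper could turn the raw metrics into the statuses shown on the dashboard. This is an illustrative sketch; the names and the "within 5% of target counts as watch" threshold are assumptions, not part of an existing API:

```java
// Grades one metric against its target. Some targets are floors
// (attendance, catch rate), others are ceilings (resolution days,
// rework rate), controlled by higherIsBetter.
public class CollaborationHealth {
    public static String grade(double value, double target, boolean higherIsBetter) {
        boolean met = higherIsBetter ? value >= target : value <= target;
        if (met) return "OK";
        // Near-miss within 5% of target: flag for monitoring, not alarm.
        double slack = Math.abs(value - target) / target;
        return slack <= 0.05 ? "WATCH" : "AT RISK";
    }
}
```

With the dashboard values above, scenario stability (92 vs >90) grades OK, while Three Amigos attendance (88 vs >90) lands in WATCH, matching the `~` marker shown.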

Retrospective Topics

# BDD Collaboration Retrospective Questions

## Discovery & Planning
- Are Three Amigos sessions happening regularly?
- Do we start collaboration early enough?
- Are the right people in the room?
- Do we finish sessions with clear outcomes?

## Scenarios
- Are scenarios readable by business stakeholders?
- Do scenarios cover edge cases adequately?
- Are we updating scenarios when behavior changes?
- Is living documentation being used?

## Team Dynamics
- Does everyone feel heard in sessions?
- Are we balancing business and technical perspectives?
- Are testers contributing meaningfully?
- Do we resolve questions quickly?

## Process
- Is Example Mapping working for us?
- Should we adjust session duration?
- Are we capturing insights effectively?
- Is the feedback loop fast enough?

## Outcomes
- Are we catching issues earlier?
- Has rework decreased?
- Do features meet expectations first time?
- Is team confidence improving?

## Actions
Based on discussion, identify:
- ✓ What's working (keep doing)
- △ What's okay (monitor)
- ✗ What's not working (change)
- → Action items with owners
