Discovery and requirements
Feature Request
A user or stakeholder's expressed desire for something new or different in the software. The raw material of the delivery cycle. Feature requests arrive in many forms: support tickets, sales calls, executive mandates, user interviews, and competitor comparisons. Most organisations receive far more feature requests than they can build. The question of which ones to pursue, and in what order, is the central problem of product management. A feature request is not a commitment. It is a signal worth investigating.
Business Analyst (BA)
The person responsible for understanding what a business needs, translating it into structured requirements that a development team can act on, and bridging the gap between stakeholders and engineers. A good BA asks the questions that surface what users actually need rather than what they say they want. A bad BA writes down what they were told and hands it over. The difference between the two, compounded over many requirements, largely explains whether delivered software solves the right problems.
Requirements
A documented statement of what a system must do or be. Functional requirements describe behaviour: what the system should do when a user does X. Non-functional requirements describe qualities: how fast, how secure, how scalable. Requirements are the contract between the people who want something and the people who build it. Vague requirements produce systems that technically pass but functionally disappoint. The gap between "requirements as written" and "what the user actually meant" is where most project failures live.
Functional Specification (Spec)
A document describing in detail how a system or feature should behave: what inputs it accepts, what it does with them, and what outputs or changes it produces. Specs exist to create shared understanding before building starts. They are useful in proportion to how accurately they reflect what users need and how well engineers read them. In practice, specs are often incomplete, out of date by the time development starts, or interpreted differently by different people. The spec that was written, the spec that was read, and the feature that was built are rarely the same thing.
User Story
A short description of a feature from the perspective of the person who will use it, in the format: "As a [type of user], I want [an action], so that [a benefit]." User stories replaced lengthy specifications in agile processes because they keep the focus on the user's goal rather than implementation detail. "As a customer, I want to filter search results by price, so that I can find products within my budget." The format forces the question of why, which is often more valuable than specifying the how.
Acceptance Criteria
The specific conditions a feature must meet to be considered done and accepted by the requesting party. Written alongside user stories, they make "done" concrete and testable. "The filter should update results within 500ms. Selecting multiple price ranges should be possible. The selected filters should persist if the user navigates back." Acceptance criteria are the shared definition of success before work starts. Features built without them are accepted or rejected based on gut feel, which usually produces an argument.
Stakeholder
Anyone with an interest in the outcome of a project. End users, customers, the sales team, the CEO, the legal department, and the operations team are all stakeholders. They have different and sometimes conflicting interests. Managing stakeholders means understanding what each group needs, communicating appropriately with each, and navigating the conflicts between them. "Stakeholder alignment" is the process of getting them to agree. It takes longer than expected and happens less completely than reported.
The roadmap and prioritisation
Product Manager (PM)
The person responsible for defining what gets built and why, prioritising the backlog, and aligning the engineering team with business goals and user needs. The PM owns the roadmap without owning the engineering team, which makes the role fundamentally about influence rather than authority. Good PMs talk to users constantly, make hard prioritisation decisions with incomplete information, and protect engineering teams from changing direction every two weeks. Product management is sometimes described as being the CEO of the product, which understates the negotiating and overstates the deciding.
Roadmap
A plan that communicates what the product team intends to build over a given time horizon, and roughly when. Roadmaps communicate priorities and direction to engineering teams, executives, sales, and sometimes customers. They are not contracts. The gap between roadmap and reality grows as the time horizon extends: a roadmap for next month is roughly right; a roadmap for next year is a set of intentions. The pressure to commit roadmaps to customers before they are reliable is one of the more persistent dysfunctions of product management.
Backlog
The ordered list of all work the team has identified but not yet completed: features, bug fixes, technical improvements, research tasks. The backlog is never empty, and in healthy organisations it is far larger than the team will ever complete. The PM owns the backlog and is responsible for keeping it prioritised, groomed (items at the top are detailed and ready to work on; items at the bottom may be rough ideas), and honest (items that will never realistically be built should be removed, not left to give the illusion of future progress).
Prioritisation
The process of deciding what to work on next. In theory, driven by frameworks that weigh impact against effort: RICE (Reach, Impact, Confidence, Effort), MoSCoW (Must have, Should have, Could have, Won't have), or value vs complexity matrices. In practice, also driven by who shouts loudest, which customer is threatening to churn, what the CEO mentioned in a meeting, and what the sales team promised. The gap between stated prioritisation principles and actual decisions is wide in most organisations and rarely discussed honestly.
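The arithmetic behind RICE is simple enough to sketch. The following is an illustrative example only; the candidate features, numbers, and the `Candidate` class are invented, and real scoring spreadsheets vary by team:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    reach: float       # users affected per quarter
    impact: float      # 0.25 (minimal) to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

backlog = [
    Candidate("price filter", reach=4000, impact=1.0, confidence=0.8, effort=2),
    Candidate("multi-currency", reach=900, impact=2.0, confidence=0.5, effort=6),
]
for c in sorted(backlog, key=lambda c: c.rice, reverse=True):
    print(f"{c.name}: {c.rice:.0f}")
```

The value of the exercise is less the final number than the argument it forces: every input must be stated and can be challenged, which is exactly what HiPPO-driven prioritisation avoids.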
HiPPO
Highest Paid Person's Opinion. The organisational tendency for decisions to reflect the preferences of the most senior person in the room rather than the evidence. When the CEO says "I think we should add X" in a product review meeting, X gets added to the roadmap. The damage HiPPO causes is not that senior people have bad ideas: sometimes they have good ones. The damage is the displacement of systematic evidence gathering and user research by individual preference, and the chilling effect on everyone else's contributions when they know the outcome depends on one person's mood.
Epic
A large body of work that is too big to fit in a single sprint and must be broken down into smaller user stories or tasks. Epics represent significant features or capabilities: "redesign the checkout flow," "add multi-currency support," "build the notifications system." They provide a way to track related work across multiple sprints without losing sight of the larger goal. The relationships between epics, stories, and tasks represent different levels of granularity in planning, from the strategic to the specific.
Agile and sprints
Agile
A philosophy of software development that prioritises iterative delivery, customer collaboration, and responding to change over following a fixed plan. Defined by the Agile Manifesto (2001), which contrasted it with waterfall: deliver working software frequently rather than all at once at the end. Agile is a philosophy, not a process: specific processes like Scrum and Kanban implement it differently. "Doing agile" and "being agile" are considered distinct: following the ceremonies without the underlying principles produces process overhead without the benefits.
Sprint
A fixed time period (typically two weeks) during which a development team works on a defined set of tasks from the backlog. At the start, sprint planning selects what will be done. At the end, a sprint review shows what was delivered, and a retrospective examines how the team worked. Sprints create a rhythm of delivery and reflection. The theory is that regular delivery forces decisions about what matters and surfaces problems early. The practice sometimes produces sprints that exist as accountability theatre without any of the underlying discipline.
Sprint Planning
The meeting at the start of each sprint where the team selects items from the backlog to commit to delivering. The PM presents prioritised backlog items; the engineering team estimates the effort required and confirms what they can realistically complete. Commitment matters: the point of sprint planning is that the team is genuinely committing to a scope, not listing aspirations. Sprint planning regularly takes longer than it should because backlog items were not prepared ("groomed") in advance and are defined and estimated for the first time in the meeting.
Story Points
A relative unit used to estimate the effort or complexity of a user story, not the time it will take. A story worth 3 points is roughly three times harder than a 1-point story. The abstraction from time is intentional: individuals work at different speeds, and forcing estimates into hours creates false precision. Many teams estimate on a Fibonacci-like scale (1, 2, 3, 5, 8, 13, 21), because the widening gaps reflect the fact that larger items are inherently harder to estimate precisely. Story points are useful within a team; they have no meaning across teams.
Velocity
The average number of story points a team completes per sprint. Used to forecast when future work will be delivered. If a team has a velocity of 30 points per sprint and there are 120 points of work remaining, the rough estimate is four sprints. Velocity is a planning tool, not a performance metric: comparing velocities between teams, or pressuring teams to increase velocity, produces point inflation (stories are estimated higher to hit targets) rather than more output. Velocity goes down when it should, during incidents, holidays, and new team members onboarding.
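The forecast in the text is just division rounded up. A minimal sketch, with the function name and the sample velocities invented for illustration:

```python
import math
import statistics

def forecast_sprints(remaining_points: int, recent_velocities: list[int]) -> int:
    """Rough forecast: remaining work divided by average recent velocity,
    rounded up. A planning heuristic, not a commitment."""
    velocity = statistics.mean(recent_velocities)
    return math.ceil(remaining_points / velocity)

# The example from the text: 120 points remaining at roughly 30 points/sprint.
print(forecast_sprints(120, [28, 31, 30, 31]))  # → 4
```

Rounding up rather than to the nearest sprint is deliberate: a forecast that quietly truncates half a sprint of work is how "four sprints" becomes five in practice.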
Daily Standup
A short daily meeting (the name refers to standing up to keep it brief) at which each team member reports: what they did yesterday, what they will do today, and whether anything is blocking them. Meant to take fifteen minutes. Often takes thirty. At its best, a standup surfaces blockers early and keeps the team coordinated. At its worst, it is a status update delivered to a disengaged audience. The failure mode is people reporting to the scrum master rather than to each other, turning a coordination meeting into a reporting exercise.
Retrospective
A meeting at the end of each sprint for the team to reflect on how they worked, not what they built. Three questions: what went well, what could be improved, what will we change next sprint. Retrospectives are where process problems surface and are supposed to be addressed. In healthy teams, retrospectives produce real changes. In dysfunctional ones, the same issues appear every sprint because nothing changes between them. A retrospective where people are afraid to say what is actually wrong is worse than no retrospective.
Definition of Done
A shared agreement within the team about what "done" means for any piece of work. Code written? Code reviewed? Tests passing? QA signed off? Documentation updated? Deployed to staging? Deployed to production? Teams without a clear definition of done argue about whether things are done, accept work that is not ready, and produce inconsistent quality. A definition of done makes "done" unambiguous. It typically becomes stricter as teams mature and notice what they were not checking before.
Development
Repository (Repo)
The central store for a project's code, including its full history of changes. Most repositories are managed with Git. GitHub, GitLab, and Bitbucket are platforms that host repositories. The repository contains not just the current state of the code but every change ever made, who made it, when, and with what message. This history is invaluable for debugging, auditing, and understanding why a decision was made. "What does the repo say?" is a question that usually has a definitive answer.
Branch
An independent line of development within a repository. Branching allows multiple developers to work on different features simultaneously without interfering with each other or with the main codebase. Changes made on a branch stay on that branch until it is merged. The main branch (historically called master, now usually main) represents the current state of the production code. Branches are cheap to create and should be used freely. A long-running branch that diverges significantly from main becomes a merge conflict problem.
Feature Branch
A branch created specifically for developing a new feature, isolated from the main codebase until the feature is complete. The developer creates the branch, works on it until the feature is ready, then opens a pull request to merge it back. Feature branches keep incomplete work out of the main codebase, allow development to proceed in parallel, and make it easy to abandon a feature without affecting anything else. The naming convention varies by team:
feature/add-price-filter or JIRA-1234-price-filter are common formats.
Commit
A snapshot of changes saved to the repository, with a message describing what changed and why. Commits are the unit of history in Git. A good commit message describes the change in terms of intent rather than mechanics: "Fix currency rounding error on checkout" rather than "Changed line 47 of payment.js." Small, frequent commits with clear messages make code review easier, debugging faster, and reversals cleaner. A single commit containing three days of work with the message "updates" is a known failure mode.
Pull Request (PR)
A formal request to merge a branch's changes into another branch (usually main), with the changes presented for review before the merge happens. Also called a merge request (MR) in GitLab. The PR shows exactly what changed, line by line. Reviewers read the changes, leave comments, request amendments, and eventually approve. The PR is the gate between individual work and shared code. It surfaces bugs, enforces style standards, and spreads knowledge about how the codebase is changing. The quality of code review in pull requests is a reasonable proxy for the quality of the engineering culture.
Code Review
The process of one or more engineers reading another's code changes before they are merged, checking for bugs, clarity, maintainability, and adherence to team standards. Code review catches problems before they reach users. It also spreads knowledge (reviewers learn about changes, not just the author), enforces consistency, and creates shared ownership of the codebase. Effective code review requires time, attention, and psychological safety to give honest feedback. Rubber-stamp code review (approving without reading) is worse than no code review because it creates the illusion of a check that did not happen.
Merge Conflict
A situation where two branches have made different changes to the same part of a file, and Git cannot automatically determine which version is correct. The developer must resolve the conflict manually: inspecting both sets of changes and deciding what the final version should be. Merge conflicts are a normal part of collaborative development. Large, long-running branches produce more conflicts because they drift further from main. Frequent small merges reduce conflicts. The worst merge conflicts are the ones discovered after a developer has been working on a branch for three weeks without checking whether main has moved on.
Technical Debt
The accumulated cost of shortcuts, compromises, and deferred improvements in a codebase. Code written quickly to meet a deadline that will need to be rewritten properly later. A design that worked at small scale but does not at large scale. A dependency that is outdated and insecure. Technical debt is a useful metaphor: like financial debt, it accrues interest (the longer it goes unaddressed, the harder the eventual fix) and can become crippling if ignored. Also like financial debt, some is reasonable and intentional: taking a shortcut to ship quickly is sometimes the right call, as long as you plan to pay it back.
Testing and QA
QA (Quality Assurance)
The discipline of verifying that software meets its requirements and does not have defects before it reaches users. QA encompasses writing test cases, executing them, finding bugs, and verifying that bugs are fixed. In traditional delivery, QA is a stage: code goes from development to QA before release. In modern delivery, quality is built in throughout: automated tests run on every commit, and QA engineers work alongside developers rather than at the end. Treating QA as a final gate rather than a continuous activity is slower and catches problems later, when they are more expensive to fix.
Test Case
A specific scenario to be tested, with defined inputs, steps, expected outputs, and pass/fail criteria. "Given a logged-in user, when they apply a price filter of £0–£50, then only products within that range should be displayed." Test cases make testing systematic rather than exploratory. Without them, testers rely on intuition about what to check, which produces inconsistent coverage. Test cases are also the basis for automated testing: a well-written test case can usually be converted into an automated test with reasonable effort.
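The given/when/then test case above translates into an automated test fairly directly. This is a sketch under assumptions: the `Product` class, the `apply_price_filter` function, and the catalogue data are all hypothetical stand-ins for whatever the real system provides:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float

def apply_price_filter(products: list[Product], low: float, high: float) -> list[Product]:
    """Hypothetical system under test: keep products priced within [low, high]."""
    return [p for p in products if low <= p.price <= high]

def test_price_filter_returns_only_products_in_range():
    # Given: a catalogue with products inside and outside the range
    catalogue = [
        Product("mug", 12.50),
        Product("lamp", 49.99),
        Product("desk", 180.00),
    ]
    # When: the user applies a 0-50 price filter
    visible = apply_price_filter(catalogue, 0, 50)
    # Then: only products within that range are displayed
    assert [p.name for p in visible] == ["mug", "lamp"]
```

Note how the given/when/then structure survives as comments: the test case and the automated test are the same artefact at different levels of formality.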
Regression Testing
Testing existing functionality to confirm that new changes have not broken it. Every code change risks introducing regressions: things that worked before and no longer do. Regression testing is the defence against this. Manual regression testing of a large codebase is expensive and slow. Automated regression tests run quickly and can be triggered on every commit. The investment in automated regression testing is high; the alternative of shipping regressions to users is usually higher.
UAT (User Acceptance Testing)
Testing conducted by actual users or their representatives to confirm that the software meets their requirements before it is released. The final validation before delivery. UAT is where the gap between what was specified and what was understood is most clearly revealed. A feature that passed all technical tests can still fail UAT because the specification did not accurately reflect what the user needed. Discovering this in UAT rather than in production is the best-case scenario for a specification failure. Discovering it after release is more expensive.
Bug (Defect)
A flaw in software that causes it to behave incorrectly or unexpectedly. Bugs are found at every stage of the cycle: in requirements, in code, in testing, and in production. Finding bugs earlier is cheaper: a requirement bug caught in review costs a conversation; the same bug caught in production costs a fix, a deployment, a customer support interaction, and possibly data correction. Bug severity and priority are separate: a severe bug in a rarely-used feature may be less urgent than a minor bug in the checkout flow.
Staging Environment
A server environment that mirrors production as closely as possible, used to test software before it is released to real users. Code is deployed to staging after QA and before production. Staging catches environment-specific issues: problems that only appear with production-like data, infrastructure, or configuration. The gap between staging and production is the gap in which final surprises hide. A staging environment that does not closely resemble production is less useful than it appears.
Deployment and release
Production
The live environment where real users interact with the software. Deploying to production is the moment the work reaches the people it was built for. Everything before it is preparation. Production is where bugs have consequences: data loss, incorrect charges, failed transactions, user frustration. The respect for production environments (careful deployments, monitoring, rollback capability) is a mark of engineering maturity. "It worked in staging" is a phrase that precedes a very particular kind of conversation.
Deployment
The process of releasing new or updated code to an environment. Deploying to staging pushes code to the test environment. Deploying to production releases it to users. Deployment processes range from an engineer manually uploading files (risky, slow, error-prone) to fully automated pipelines triggered by a code merge (fast, repeatable, auditable). The frequency of deployments reflects a team's confidence in their release process: teams that deploy once a quarter are usually afraid to deploy; teams that deploy dozens of times a day have solved the problem of making deployment safe.
CI/CD (Continuous Integration / Continuous Deployment)
A set of practices and tools that automate the building, testing, and deployment of software. Continuous integration: every code change triggers an automated build and test run, catching integration problems immediately. Continuous deployment: code that passes automated tests is automatically deployed to production without manual intervention. CI/CD pipelines (GitHub Actions, CircleCI, Jenkins) run the same steps reliably on every change. The goal is to make deployment so routine and automated that it is never a special event requiring a dedicated person and a nervous afternoon.
Feature Flag (Feature Toggle)
A configuration switch that enables or disables a feature without deploying new code. A feature can be built, merged into production, and kept hidden behind a flag until the business is ready to release it. Flags allow incremental rollouts (turn the feature on for 10% of users first), instant rollbacks (turn the flag off if something goes wrong), and A/B testing (show different versions to different users). Feature flags decouple deployment (when code goes to production) from release (when users can see it), which is one of the more powerful tools for reducing deployment risk.
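An incremental rollout of the kind described above can be sketched in a few lines. This is illustrative only: the flag store, flag names, and function are invented, and production systems use a dedicated flag service rather than an in-process dictionary. Hashing the user id gives each user a stable bucket, so the same user sees the same variant on every request:

```python
import hashlib

# Hypothetical in-process flag store; real systems fetch this from a service.
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic bucket 0..99 per (flag, user): stable across requests,
    # independent across flags, with no per-user state to store.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]
```

Raising `rollout_percent` widens the audience without a deploy; setting `enabled` to False is the instant rollback the entry describes.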
Hotfix
An urgent fix deployed outside the normal release cycle to address a critical bug or outage in production. Hotfixes bypass the standard process: instead of going through sprint planning, full QA, and staging, they go directly from development to production as quickly as possible. The speed introduces risk: a hotfix that introduces a second bug is a known failure mode. Most teams have a documented hotfix process that abbreviates rather than eliminates quality checks. The best hotfixes are deployed before most users notice the problem; the worst are deployed after an engineer has been awake for twenty hours.
Rollback
Reverting a deployment to a previous version when the new release causes problems. If a deployment breaks something in production, the fastest remedy is often to undo it: restore the previous version while the problem is investigated. Rollback capability requires that previous versions are retained and can be redeployed quickly. Teams with good deployment infrastructure can roll back in minutes; teams with manual deployment processes may take hours. The decision to roll back rather than fix forward is a judgement call: sometimes a quick fix is faster than a revert.
When it goes wrong
Scope Creep
The gradual expansion of a project's requirements beyond what was originally agreed, usually without corresponding adjustment to the timeline or budget. "While we're at it, could we also..." is where scope creep begins. Individual additions are often reasonable. The cumulative effect is a project that grows until it collapses under its own weight or arrives months late. Scope creep is not always the user's fault: inadequate upfront definition creates ambiguity that gets filled in later, often with more than was expected.
Requirements Drift
The gradual change of requirements over time as stakeholders' understanding, needs, or preferences evolve during development. Different from scope creep in that it is not always additive: requirements drift when what the user wants in month three is different from what they said they wanted in month one. Drift is often legitimate: users learn what they want by seeing what they asked for. A process that cannot accommodate some drift produces software that was correct at the start and wrong at delivery. A process with no stability cannot deliver anything.
That's Not What I Asked For
The moment of delivery when the user sees the thing that was built and recognises that it is not the thing they needed. The most expensive sentence in software development. It represents the failure of the entire chain from feature request to delivery to close the gap between what was asked, what was specified, what was built, and what was needed. Each translation introduces loss: the user's request into the spec, the spec into the story, the story into the code. The distance between the original idea and the delivered feature is where value is destroyed. Most methodologies exist to reduce this distance. None eliminates it.
Post-mortem (Incident Review)
A structured analysis of what went wrong after a significant failure: an outage, a bad release, a missed deadline, or a delivery that did not meet expectations. A blameless post-mortem focuses on system failures rather than individual mistakes: the goal is to understand how the system allowed the failure to occur and how to prevent it recurring. The alternative to blameless post-mortems is cultures where people hide problems and cover failures rather than learning from them. Post-mortems that result in no action items are post-mortems that did not finish.