Ask engineers what frustrates them most about their job, and documentation usually makes the top three. Not the concept of documentation itself; most people understand why it matters. The problem is how it’s managed. Specifications that exist in five different versions across three file servers. Requirements documents that haven’t been updated since the design changed two months ago. Interface definitions that contradict each other depending on which team’s folder you’re looking in.
This isn’t a new problem, but it’s gotten substantially worse as systems have become more complex. A project from 20 years ago might have involved dozens of subsystems and a few hundred requirements. Today’s projects routinely involve thousands of requirements, hundreds of interfaces, and integration across mechanical, electrical, and software domains that all need to stay synchronized. The old methods don’t scale.
When Documents Multiply Faster Than Anyone Can Track
The typical engineering project generates documents at an alarming rate. There’s the system requirements specification, the interface control documents, the design descriptions for each subsystem, the test plans, the verification matrices, and the analysis reports. Each of these might be hundreds of pages long. Each goes through multiple revisions as the design matures and requirements change.
Now multiply that across different engineering disciplines working somewhat independently. The mechanical team has their documents, the electrical team has theirs, software has a completely different set. Everyone’s working in their own tools, with their own templates, saving files in their own organizational structure. Some teams use document management systems, others just use network drives with elaborate folder naming schemes.
The problem compounds when you need to understand how everything connects. Say a software engineer needs to verify that their code meets a specific system requirement. First they need to find the current version of the requirements document. Then they need to trace that requirement down through the subsystem allocations, which might be in a different document or a spreadsheet. Then they need to check if any change requests have modified that requirement, which means searching through email and meeting notes because change tracking is scattered across multiple sources.
This takes hours or days when it should take minutes. And that’s assuming everything is findable and internally consistent, which it often isn’t.
The Version Control Disaster
Version control for code is a solved problem. Systems such as Git record every change, who made it, and why. But engineering documents don’t have an equivalent standard. Some organizations use document management systems with check-in/check-out workflows. Others rely on filename conventions with dates or version numbers embedded. Many places use a mix of both, which is somehow worse than either approach alone.
What happens in practice is that people work on local copies, forget to check things back in, or save new versions without updating the central repository. Someone sends a PDF via email for review, and now that version starts circulating independently. Changes get made in one person’s copy that never make it back to the official version. Or worse, conflicting changes happen in parallel and nobody notices until much later.
The technical term for this is “configuration management,” but the everyday reality is just chaos. Nobody’s entirely sure which document version is the source of truth. Design reviews happen with people literally looking at different versions of the same specification. Decisions get made based on outdated information because someone didn’t realize a newer version existed.
When Requirements and Design Drift Apart
Here’s a pattern that plays out on almost every document-heavy project: requirements get written early, sometimes before detailed design even starts. Design work proceeds and the team discovers that some requirements are impractical, some are missing, and others need refinement. Changes get made to the design, but updating all the requirements documentation is a huge effort that keeps getting postponed because there’s always more urgent work.
Six months into the project, the requirements documents and the actual design have diverged substantially. The requirements say one thing, the design does something different, and nobody has a clear picture of which is correct. When verification time comes, testing against the documented requirements produces failures because the system was built to meet the real (but undocumented) requirements that emerged during design.
Updating everything retroactively is painful. It requires going through every document, identifying what’s changed, updating all the cross-references, and making sure nothing gets missed. This is exactly the kind of work that gets squeezed when schedules are tight, which means it often doesn’t happen properly. The documentation becomes less a record of what the system is and more a historical artifact of what people thought it would be.
The Traceability Fiction
Traceability is supposed to be one of the cornerstones of systems engineering. Every design element should trace back to requirements. Every requirement should trace forward to implementation and verification. This sounds great in theory and looks impressive in process documents.
In practice, with document-based approaches, traceability is usually maintained in spreadsheets if it’s maintained at all. Someone creates a matrix that links requirement IDs to design elements to test cases. This works fine initially, but keeping it updated as things change is a nightmare. Every time a requirement gets modified, added, or deleted, the traceability matrix needs updates. Every time the design changes, more updates. Every test plan revision, same story.
The matrix quickly becomes outdated because maintaining it manually is tedious work that produces no immediate value. By midway through the project, most traceability matrices are so out of sync with reality that they’re useless for anything except checking boxes for audits or reviews. People stop trusting them, stop updating them, and start maintaining informal knowledge in their heads or local notes.
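To make the maintenance burden concrete, here is a minimal sketch of the kind of consistency check teams end up scripting by hand: comparing the current requirement set against a manually maintained trace matrix to find stale and missing links. The requirement and test-case IDs here are invented for illustration; real projects would export these from their own tools.

```python
# Hypothetical data: current requirement IDs, as exported from the
# requirements document, and the manually maintained trace matrix.
requirements = {"SYS-001", "SYS-002", "SYS-003"}

# requirement ID -> test case IDs, as recorded in the spreadsheet
trace_matrix = {
    "SYS-001": ["TC-101"],
    "SYS-004": ["TC-104"],  # SYS-004 was deleted, but the matrix wasn't updated
}

# Rows pointing at requirements that no longer exist
stale = set(trace_matrix) - requirements

# Requirements with no verification link at all
untraced = requirements - set(trace_matrix)

print("stale links:", sorted(stale))
print("untraced requirements:", sorted(untraced))
```

Every requirement change, addition, or deletion silently invalidates some rows, and nothing forces anyone to rerun a check like this, which is exactly why the matrix drifts.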
What’s Actually Different Now
The shift in thinking around MBSE versus traditional systems engineering centers on treating information as structured data rather than unstructured documents. Instead of writing prose in Word documents or PDFs that humans have to read and interpret, information gets captured in models where relationships and dependencies are explicit and machine-readable.
This doesn’t mean documents disappear entirely. Reports and specifications still get generated. But they’re produced from the model rather than being the primary information store. When something changes in the model, documents that reference it can be regenerated automatically. Traceability isn’t maintained in a separate spreadsheet; it’s built into the model structure itself.
The version control problem gets simpler too. Instead of tracking hundreds of separate document files, there’s a central model repository. Changes are visible to everyone working with the model. Conflicts get caught when they happen rather than discovered months later. Different views of the same information stay consistent because they’re all drawing from the same source.
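A toy sketch of what “relationships as data” means in practice. The element names are invented, and real MBSE tools (SysML-based and otherwise) have far richer metamodels, but the principle is the same: the traceability view is derived from the model on demand, so it cannot drift away from it.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    id: str
    kind: str          # e.g. "requirement", "block", "test"
    text: str = ""

@dataclass
class Model:
    elements: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (source, type, target)

    def add(self, el):
        self.elements[el.id] = el

    def relate(self, src, rel, dst):
        self.relations.append((src, rel, dst))

    def trace_report(self):
        """Derive the traceability view from the stored relationships.
        A deleted requirement or link disappears from the report
        automatically -- there is no separate matrix to update."""
        report = {}
        for el in self.elements.values():
            if el.kind == "requirement":
                report[el.id] = {
                    "satisfied_by": [s for s, r, t in self.relations
                                     if r == "satisfies" and t == el.id],
                    "verified_by": [s for s, r, t in self.relations
                                    if r == "verifies" and t == el.id],
                }
        return report

m = Model()
m.add(Element("SYS-001", "requirement", "The system shall log all faults."))
m.add(Element("BLK-FaultLogger", "block"))
m.add(Element("TC-101", "test"))
m.relate("BLK-FaultLogger", "satisfies", "SYS-001")
m.relate("TC-101", "verifies", "SYS-001")

print(m.trace_report())
```

The generated documents mentioned above are just another derived view over the same store, which is why they stay consistent with each other.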
The Transition Challenge
Moving from document-based to model-based approaches isn’t trivial. There’s a learning curve for the tools and methods. Existing projects can’t just flip a switch and migrate everything overnight. Organizations have established processes, templates, and contractual requirements built around documents.
Some companies try hybrid approaches, maintaining both documents and models during a transition period. This can work but it’s risky because now there are two sources of truth that can drift apart, which is the same problem they were trying to solve. The hybrid state needs to be temporary and carefully managed, otherwise it creates more confusion than it resolves.
The other challenge is cultural. Engineers who have spent their careers writing and reviewing documents need to think differently about how they capture and communicate information. Not everyone adapts easily. Some people genuinely prefer working with documents because that’s what they know. Forcing a methodology change without adequate training and support usually fails.
What Actually Gets Better
For organizations that successfully make the shift, certain chronic problems do improve. Finding current information becomes easier because there’s one authoritative source. Traceability stops being a manual burden and becomes something the tools maintain automatically. Impact analysis for changes gets more reliable because relationships between elements are explicit.
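Impact analysis falls out of the same structure. A sketch with invented IDs: once dependencies are explicit edges, finding everything downstream of a changed requirement is an ordinary graph traversal rather than a search through documents and email threads.

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: (X, Y) means "X depends on Y",
# so a change to Y may impact X.
depends_on = [
    ("BLK-FaultLogger", "SYS-001"),
    ("TC-101", "BLK-FaultLogger"),
    ("ICD-CAN-02", "BLK-FaultLogger"),
]

# Invert the edges into "a change to Y impacts ..." adjacency lists.
impacts = defaultdict(list)
for dependent, dependency in depends_on:
    impacts[dependency].append(dependent)

def impact_set(changed_id):
    """Breadth-first walk collecting every element transitively
    affected by a change to `changed_id`."""
    seen, queue = set(), deque([changed_id])
    while queue:
        node = queue.popleft()
        for nxt in impacts[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Everything that needs review when SYS-001 changes:
print(sorted(impact_set("SYS-001")))  # -> ['BLK-FaultLogger', 'ICD-CAN-02', 'TC-101']
```

With documents, this same question means hunting through cross-references by hand; here the answer is only as good as the model, but at least it is computed rather than remembered.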
The time savings aren’t immediate. Initial modeling takes longer than initial document writing because more structure is required upfront. But over the project lifecycle, especially on long programs with lots of changes, the cumulative time saved on maintaining consistency and tracking impacts becomes substantial.
Perfect consistency and zero documentation problems aren’t realistic expectations. Models can have errors, tools have limitations, and people still make mistakes. But the failure modes are different and generally less catastrophic than the document chaos that plagues traditional approaches. Information is centralized rather than scattered, relationships are explicit rather than implicit, and changes propagate systematically rather than getting lost in forgotten file versions.
For projects dealing with the kind of complexity that modern systems demand, where hundreds of people need coordinated access to thousands of interconnected pieces of information, the document-based approach has basically hit its limits. Something has to change, and most organizations are finding that structured modeling approaches are the most viable path forward, even with all the implementation challenges they bring.