It happens over and over again: “We don’t know”, “It was a long time ago”, “He doesn’t work here anymore”, “The cat ate my homework”…
You’re fixing bugs, changing older code, trying to upgrade an application to a newer version, analysing the impact of a process change, recovering an administrator password, and doing a thousand similar things that arise on every project. But… nobody knows which modifications were made to the application, much less why; whether a similar error was solved before; whether the required functionality was already developed for another customer; and so on and so on.
Why is that? The honest answer would probably be that this state of affairs suits many people (for reasons I don’t know and can’t understand), but I’ll assume that all of us want to create high-quality, maintainable products.
It should be quite obvious that relying on human memory is the wrong approach. Not only do people leave the project or the company, but the idea that somebody can remember the details of their work from several years back is simply naive.
Information must be taken out of people’s heads and saved on a less degradable medium.
OK, so we’ve decided to document our project. We reserve some time for documentation at the end of the project (and perhaps spend half of it on urgent finishing work) and we produce passable documentation. But when a problem occurs, we recall that the relevant modifications were discussed over Skype or the like, and nobody considered them important enough to document. That is because any final documentation is always a summary: it shows only the final result, which automatically omits most of the information about why and how the work was done.
Nevertheless, all of that information existed while the work was being done. It was known why a modification was designed, which code implemented it, how to test it, and so on. Some of it was in e-mails, some in documents that were later deleted, some only in people’s heads, but all of it was available at the time. Many pieces were simply lost afterwards.
Example: you’re looking at some source code; it’s clear what it does technically, but you have no idea why somebody wrote it, and you’ll probably never know. Yet the developer must have known, and if only he had described it in a single sentence…
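A minimal sketch of what that single sentence might look like in practice (the customer, the agreement and the rounding rule below are purely hypothetical, made up for illustration):

    # Why, not just what: VAT is rounded UP per invoice line because the
    # customer's ERP rejects totals even one cent below its own calculation
    # (agreed with their accounting dept.; hypothetical example).
    def line_vat(amount_cents: int, rate_percent: int) -> int:
        # Ceiling division on integers, avoiding floating-point rounding.
        return -(-amount_cents * rate_percent // 100)

Without the comment, the next maintainer sees only an odd rounding trick; with it, the “why” survives the author’s departure.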
Save information while you have it. Reconstructing it later is usually expensive and often unsuccessful. With every piece of information, always keep in mind that it may be needed several years later.
You don’t necessarily need any special software; even a plain file system is a sufficient tool. It has its limitations, but what matters is how you work with information, not which software you use. There is some logic in changing a process together with deploying a tool (it’s easier to convince people to do things differently when the old way is no longer available), but if you’re unable to capture information, no software will save you.
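For a start, even a simple directory convention can capture most of it. A hypothetical sketch (the folder names are mine, not a prescription):

    project/
        decisions/      one short text file per decision: what, why, who, when
        customers/      per-customer requirements and agreed exceptions
        incidents/      what broke, why, and how it was fixed
        mail-archive/   relevant e-mail threads exported as files

Searching and restructuring such a tree is clumsy compared with a dedicated tool, which is exactly where the next point comes in.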
Nevertheless, good software can help. If you can store all information in a single place, easily attach documents, create hierarchical relations, link to existing information and so on, many things become much easier, such as searching for information or restructuring it, and the whole process can be significantly simplified.
You still have to ensure that the system actually gets filled with data. I personally promote the rule that information is not considered to exist until it has been put into the system. Some people find this method too formalistic, but experience with entering data later (or rather, not entering it) is too bad. Putting data into the system must not be much more complicated than writing an e-mail (having to fill in 20 fields discourages even an enthusiastic user); then there is no rational reason to send information through other channels.