*This article was written by Carlos Guerra with the input and help of Mario Felaco, of Con-nexo.*
We run organizational security training sessions, long-term support, and security assessments for NGOs (Non-Governmental Organizations) and independent media outlets at risk in Latin America. We base a lot of our work on the SAFETAG framework, and we like to promote it to new people who want to help organizations improve their security.
As previously said in the SAFETAG Stories: If you fail -- TRI, TRI Again, there isn't one unique approach to security assessments for organizations and independent media outlets; their unique structures and dynamics make it difficult to assume there is a single way to evaluate security, or even a single way they can respond to our recommendations. After a few years doing SAFETAG-based assessments and security interventions, I've come across different ways that I and others end up building assessment reports. They can be widely different, which doesn't mean that one report style is right and another wrong; it is just a matter of knowing when it is more convenient to use one approach or another. Some of the factors that affect what kind of information, and how much of it, the organization can digest are:
- Presence of IT staff who want to understand the vulnerability-gathering process: This is usually so they can reproduce it in the future, confirm your findings, and assess changes after applying the recommendations.
- Technical skill level of the organization and the targets of the report: In some cases you can skip the explanation of some concepts and go straight to the results; in others it can be useful to explain the basic concepts in the report itself. Personally, I don't like the idea of giving recommendations that the organization won't implement simply because they don't understand the acronyms and the 1s and 0s.
- Capacity and time to read/learn about everything: Organizations that just don't have the time to read a report about something not directly related to their main work (human rights, investigative journalism, you name it) are the norm, so being very concise and having them read just what they need to know is a perfectly valid approach.
- Transfer of implementation responsibility to someone other than the target of the report: If the targets of the report aren't the final implementers of the recommendations, things can get complicated. "Encrypt the data on the computers" can suffice for some targets (especially if you've had conversations and what you suggest is implicit), but if an external consultant reads that, it would be completely valid for them to wonder "hmm... with VeraCrypt? With GPG? Inside zips? The whole hard drives?" and perhaps think your report sucks.
- Debrief and follow-up opportunities: A chance to walk through the report with the organization helps lower the report's "learning curve", giving you some leeway to make the report more complex.
On the other hand, some aspects that can affect report building on our side of the fence are:
- Time: We need to know that we can build the report in an appropriate amount of time. If we only have a couple of days, it can be very difficult to write something comprehensive.
- Requirements: Sometimes the people who ask for our help already have an idea of what they want. In that case, the only advice is to adapt to that idea, as long as it is feasible with the resources allocated and the kind of work we can do.
- Our personal taste: Let's face it, some of us aren't the talkative type, while others want to communicate everything. It's OK for that to affect the kind of report you're building, as long as the report is useful and pertinent for the organization.
The most important thing to remember is our final goal: that the organizations actually enhance their security, rather than us just going through a checklist and leaving them on their own. Whichever form the report takes, it should always strive toward that goal.
With that in mind, in the next post we'll talk about some of the types of reports we've done and seen others do.