Incident analysis is complicated. Sometimes you know the answer right away; sometimes it takes hours or days of research. Approach the system like a paranoid Sherlock Holmes: start at the scene of the crime, then dig deep into every aspect. Be wary of your preconceived notions, be aware of red herrings, and always test your hypotheses.
Note
A red herring is something that is misleading or distracting from the task at hand. It is often an attractive answer or problem unrelated to the actual issue.
The scene of the crime for an outage is often the thing you rolled back, whether a bad config or buggy code. Was the outage caused by the change your team was trying to deploy, or by another system interacting with that code? Start by reading through the code that changed, or the code that interacts with what you deployed. If there wasn't a rollback, then you've probably already figured out the issue, as you will have had to figure out the cause in order to mitigate it.