These questions can be tough to answer from a purely theoretical perspective. I propose the following approach.
First, conduct a tabletop exercise where you simulate adversary actions. At each stage of the imagined attack, consider what evidence an intruder would create while acting against your systems. For example, if you are trying to determine how to detect and respond to an attack against a Web server, you will almost certainly need Web server logs. If you don't currently have access to those logs, you've just identified a gap that needs to be addressed. I recommend this sort of tabletop exercise first because it will likely reveal deficiencies at low cost, although addressing them might be expensive.
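To make the Web server example concrete, here is a minimal sketch of the kind of check you might run against access logs once you have them. The log format is the common Apache/nginx combined format; the indicator patterns and sample entries are purely illustrative assumptions, not a real ruleset, and a production deployment would use a curated detection tool instead.

```python
import re

# Parse the combined log format: IP, timestamp, request line, status, size.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

# Crude, illustrative indicators of common web attack probes.
SUSPICIOUS = [
    re.compile(r"\.\./"),                      # directory traversal
    re.compile(r"(?i)union(\s|\+|%20)+select"),  # SQL injection probe
    re.compile(r"(?i)<script"),                # reflected XSS probe
]

def flag_suspicious(lines):
    """Return (ip, path) pairs for requests matching any indicator."""
    hits = []
    for line in lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue  # skip malformed entries
        path = m.group("path")
        if any(p.search(path) for p in SUSPICIOUS):
            hits.append((m.group("ip"), path))
    return hits

# Hypothetical sample log entries for demonstration.
sample = [
    '10.0.0.5 - - [01/Jan/2024:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512',
    '203.0.113.9 - - [01/Jan/2024:10:00:01 +0000] "GET /search?q=1+UNION+SELECT+pw HTTP/1.1" 200 74',
    '198.51.100.2 - - [01/Jan/2024:10:00:02 +0000] "GET /../../etc/passwd HTTP/1.1" 404 0',
]

for ip, path in flag_suspicious(sample):
    print(ip, path)
```

The point of the tabletop stage is that a script like this is impossible to write, let alone run, if the logs themselves are not being collected; discovering that gap on paper is far cheaper than discovering it during an incident.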
Second, conduct a technical exercise where a third party simulates adversary actions. This is not exactly a pen test, but it is the sort of work a red team conducts. Ask the red team to carry out the attacks you previously imagined, to determine whether you can detect and respond to their activity. This should be a controlled action, not an "anything goes" event. You will see whether the evidence and processes you identified in the first step help you detect and respond to the red team's activity. This step is more expensive than the previous one because you are paying for red team attention, and again the fixes could be expensive.
Third, consider re-engaging the red team to carry out a less restrictive, more imaginative adversary simulation. In this exercise the red team isn't bound by the script you devised previously. See whether your improved data and processes are sufficient. If not, work with the red team to devise better detection and response so that you can handle their attacks.
At this point you should have the data and processes to deal with the majority of real-world attacks. Of course some intruders are smart and creative, but you now have a chance against them, given the work you just performed.