How to Speed Up Incident Response in 2019: Faster Artifact Collection

This post will focus on the best strategies to accelerate data collection during incident response (IR).

But, before we lay any strategies on the table, let’s discuss why collection speed matters to incident response teams.

Why Fast Collection Is Important

As we covered in our blog on speeding up the start of an investigation, rapid collection is important because:

  • Incident responders can’t come to any conclusions on an alert, whether that’s dismissal or escalation, unless they have data to review
  • Any delay gives intruders more time to delete key evidence or steal valuable data.

More broadly, delays in any part of the DFIR process leave an enterprise vulnerable to the dangers of slow response, such as intruder entrenchment.

For a more complete look at the risks of slow DFIR, read “Incident Response KPIs: TIME Is Critical. Here Are Five Reasons Why.”

Manual Collection: The Baseline (and Slow) Method

If you’re looking for a no-budget way to retrieve endpoint artifacts, manual collection may be the way to go. In this approach, the incident responder manually runs a dozen (or more) command-line tools, one at a time, to collect data such as open network ports and running processes.
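For example, on a Windows endpoint the responder might type commands like the following into a PowerShell console, one at a time, waiting for each tool to finish and redirecting its output by hand. This is only an illustration; the actual tool list, arguments, and output paths vary by environment.

    # Typed interactively, one command at a time (output paths are illustrative;
    # the C:\IR folder is assumed to already exist)
    netstat -ano                > C:\IR\host01_netports.txt    # open ports and owning PIDs
    tasklist /v                 > C:\IR\host01_processes.txt   # running processes
    ipconfig /all               > C:\IR\host01_ipconfig.txt    # network configuration
    schtasks /query /v /fo LIST > C:\IR\host01_tasks.txt       # scheduled tasks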

This method is used by many organizations because:

  • It doesn’t require the incident responder to have any scripting abilities
  • It’s completely free.

But it has its drawbacks:

  • It’s the most error-prone method because the responder may forget what arguments to supply
  • It’s the slowest of the collection strategies because the responder needs to wait for each command to finish before starting the next one.

And, as we’ve discussed, lagging behind in this phase exposes the entire organization to serious risks.

How To Collect Faster

Now that we’ve covered why fast collection matters and what the baseline, manual method looks like, let’s review some strategies for speeding up collection.

Script the Collection

This approach is an automated version of the manual collection method discussed above.

In this approach, the team writes a Python or PowerShell script that runs each of the collection tools in sequence. The automation allows the collection process to complete much more quickly than it would if it relied on an incident responder’s typing speed.
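As a minimal sketch of what this can look like in PowerShell (the tool list, file names, and output location are illustrative assumptions, not a complete collection script):

    # collect.ps1 -- run each collection tool in sequence and save its output
    $caseDir = "C:\IR\$(hostname)_$(Get-Date -Format yyyyMMdd_HHmmss)"
    New-Item -ItemType Directory -Path $caseDir -Force | Out-Null

    # Each entry pairs an output file with the command that produces it
    $steps = [ordered]@{
        "netports.txt"  = { netstat -ano }
        "processes.txt" = { Get-Process | Format-List Name, Id, Path }
        "services.txt"  = { Get-Service }
        "tasks.txt"     = { schtasks /query /v /fo LIST }
    }

    foreach ($name in $steps.Keys) {
        & $steps[$name] | Out-File -FilePath (Join-Path $caseDir $name)
    }

The responder launches one script instead of a dozen commands, and every run produces the same set of artifacts in the same layout.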

While the speed boost is a big plus, there are a couple of cons:

  • This approach requires that the company maintain the collection script and keep someone fluent in Python or PowerShell on the IR team. Ongoing updates are a continuing time commitment that can strain the team’s resources.
  • While it may work perfectly for collection, this custom script may not integrate well with whatever endpoint forensics tool the team uses in the next phase of the IR process.

Continuous Monitoring

This is the same strategy that we outlined in our post on reducing the time it takes to start an investigation.

In this more infrastructure-intensive approach, every endpoint or server typically needs some form of agent, such as an endpoint detection and response (EDR) agent, to monitor the system. These agents continually collect data from all of the endpoints and send it to a central server.
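To make the model concrete, here is a heavily simplified sketch of the collect-and-ship loop an agent runs. The server URL, payload, and interval are assumptions for illustration only, not how any particular EDR product works.

    # agent-sketch.ps1 -- illustration only: periodically snapshot process data
    # and ship it to a central collection server
    $collector = "https://collector.example.internal/api/events"   # hypothetical endpoint

    while ($true) {
        $snapshot = Get-Process | Select-Object Name, Id | ConvertTo-Json

        Invoke-RestMethod -Uri $collector -Method Post -Body $snapshot -ContentType "application/json"
        Start-Sleep -Seconds 300   # collection interval is arbitrary here
    }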

The benefits of this approach are:

  • Immediate, real-time access to data. Speed isn’t an issue when the data is constantly coming in from endpoints.
  • Access to historical data. The servers can store a long-running record of endpoint activity, so the IR team can review anything it needs for current investigations.

The downsides are:

  • It may not always collect all of the data needed for an investigation
  • It requires an agent on every endpoint
  • It imposes significant storage costs.

Automated Collection

The last approach is the use of a dedicated IR tool. Integrated into a Security Information and Event Management (SIEM) or orchestration platform, this automated collection program runs when it receives an alert and collects all the data the IR team needs.
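Conceptually, the integration is a small hook: the SIEM or orchestration platform passes the alerted host to the collection tool, which starts gathering data immediately. The sketch below shows the shape of such a hook, reusing the hypothetical collect.ps1 from the scripting example; it is not any specific product’s API.

    # respond-to-alert.ps1 -- illustrative orchestration hook, not a vendor API.
    # The platform invokes this with the host named in the alert, so collection
    # starts with no human in the loop.
    param(
        [Parameter(Mandatory = $true)]
        [string]$AlertedHost
    )

    # Run the collection script on the endpoint named in the alert (assumes WinRM is enabled)
    Invoke-Command -ComputerName $AlertedHost -FilePath ".\collect.ps1"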

This approach makes it easy on the IR team because:

  • It doesn’t have the maintenance overhead of scripted collection or the storage overhead of continuous monitoring
  • It works well with analysis tools, so integrating automated collection into the rest of the workflow is low-hassle.

There are a few drawbacks:

  • The tool will only gather incident data present at the moment collection starts, meaning it could miss some useful evidence
  • Not all of these tools allow the IR team to customize what endpoint data the program collects. (You can read more about targeted collection here.)

If you’d like to try this approach, we recommend testing Cyber Triage. It’s DFIR automation software that speeds up the entire IR process, from detection to remediation.

Click here to request a free trial or evaluation.

Conclusion

That was our overview of best-practice approaches for speeding up data collection during incident response. Our next post focuses on improving the speed of the next phase: Analysis.

For more on speeding up incident response, check out the rest of our series.
