Every domain compromise follows a pattern. Not the same one every time — but close enough that you can predict the next move if you understand what came before. This post walks through a sanitized real-world intrusion — credential theft, lateral movement, Kerberoasting, DCSync, the full chain. This is what it actually looks like from the IR side.
Initial Access: The Phishing Email That Worked
The story starts like most do — a well-crafted phishing email. This one targeted the marketing department with a fake invoice attachment. The attachment was an Excel file with embedded macros.
The First Mistake
The user enabled macros. The payload executed as the user's standard account — not admin, just a regular domain user. That was enough.
The initial access doesn't need to be privileged. It just needs a foot in the door and an account that's alive on the network.
What They Got
- Talking Stick (Mimikatz variant) — Dumps credentials from memory
- LM/NTLM hashes — Saved for pass-the-hash attacks or offline cracking
- Kerberos TGTs — If the user had recent admin sessions, those tickets were captured
In this case, the user had logged into a jump box two days prior — as an administrator. Their cached credentials were still in memory on that jump box. Within 30 seconds of execution, the attacker had a privileged hash.
Lateral Movement: Finding the Path
With a hash in hand, the attacker ran BloodHound. They weren't guessing — they were querying.
They found what every attacker finds: a short path. The user's account had local admin rights on IT-01, which had a session for svc_backup — an account with DCSync rights. That's three hops.
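That "three hops" finding is just shortest-path search over an identity graph, which is what BloodHound automates. A minimal sketch of the idea, using the BloodHound-style edge labels (AdminTo, HasSession, DCSync) and hypothetical node names, not the organization's real data:

```python
from collections import deque

# Hypothetical identity graph. Edges are the attack primitives BloodHound
# models: AdminTo (local admin rights), HasSession (credentials cached in
# memory), and DCSync (replication rights on the domain object).
edges = {
    "user.phished": [("AdminTo", "IT-01")],
    "IT-01":        [("HasSession", "svc_backup")],
    "svc_backup":   [("DCSync", "DOMAIN")],
}

def shortest_attack_path(start, target):
    """Breadth-first search: returns the shortest chain of (edge, node) hops."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for edge, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(edge, nxt)]))
    return None

path = shortest_attack_path("user.phished", "DOMAIN")
# Three hops: AdminTo -> HasSession -> DCSync
```

The point of the sketch: once the graph exists, finding the path is trivial, which is why "the attacker won't find it" is never a defense.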
The Missing Control
This is where the defense failed: local admin rights were everywhere. The user who got phished had local admin on IT-01 because someone needed to install software once. Those rights never got revoked.
Kerberoasting: The Shortcut They Didn't Need (But Took Anyway)
With svc_backup access, they didn't technically need to Kerberoast — they already had DCSync. But attackers aren't always optimal. They also Kerberoasted every service account they could find.
The reason this matters for IR: even after they had domain admin, these Kerberoastable accounts were a backup access vector. They could lose DA and regain it through service account password resets.
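Defenders can enumerate those same backup vectors before an attacker does. A rough sketch with a hypothetical account list and an illustrative password-age cutoff; the LDAP filter shown is the standard one for SPN-bearing user accounts:

```python
# Any user object with a servicePrincipalName set is Kerberoastable: a service
# ticket can be requested for it, and that ticket is encrypted with a key
# derived from the account's password. The standard LDAP filter:
KERBEROASTABLE_FILTER = "(&(objectCategory=person)(servicePrincipalName=*))"

# Hypothetical directory data for illustration only.
accounts = [
    {"sam": "svc_backup", "spns": ["MSSQLSvc/db01:1433"], "pwd_age_days": 1400},
    {"sam": "svc_web",    "spns": ["HTTP/intranet"],      "pwd_age_days": 900},
    {"sam": "jdoe",       "spns": [],                     "pwd_age_days": 30},
]

def kerberoastable(accounts, max_pwd_age_days=365):
    """Flag accounts with SPNs and old passwords: prime cracking targets."""
    return [a["sam"] for a in accounts
            if a["spns"] and a["pwd_age_days"] > max_pwd_age_days]

print(kerberoastable(accounts))  # ['svc_backup', 'svc_web']
```

Accounts that show up here need long, random passwords (or gMSA conversion), because the ticket itself is the crackable material.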
DCSync: Getting the Hashes
This is when the real damage started. With svc_backup's credentials, they ran Mimikatz's DCSync module.
What DCSync Gives You
- Every password hash in the domain — Including krbtgt, whose hash lets the attacker forge Kerberos tickets
- Golden ticket capability — With krbtgt hash, you own the Kerberos trust model
- Persistent access — Even if you reset the admin's password, they can re-authenticate with a forged ticket
At this point, the domain was effectively owned. The attacker had:
- krbtgt hash (golden ticket capability)
- DCSync rights (replicate any password)
- Multiple Kerberoastable accounts as backup vectors
What the Logs Showed (And What They Missed)
This is where it gets interesting for defenders. The events were all there — they just didn't get triaged.
Day -2: The Phishing Email
- Email gateway flagged the attachment as malicious (not blocked, just flagged)
- No ticket created — "medium severity"
Day -1: Macro Execution
- EDR logged powershell.exe with encoded command
- Ran for 45 seconds, made outbound HTTPS connection to non-standard port
- Alert: "Suspicious PowerShell encoded command" — severity medium, auto-closed
Day 0: Lateral Movement and Escalation
- 4624: Admin account logon to IT-01 from workstation — flagged as unusual but not investigated
- 4769: Service ticket (TGS) requests consistent with Kerberoasting, including one for svc_backup — 847 requests in 3 minutes, not flagged (volume below threshold)
- 4662: Directory replication access against the domain object (DCSync) — logged but not alerted as critical
None of these events individually triggered a response. In aggregate, they tell a clear story. But by the time someone looked at them holistically, the domain was already compromised.
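That triage failure is a correlation problem: each alert cleared its own bar, but nothing stacked them. A toy sketch of cross-signal correlation, with hypothetical alert records and thresholds:

```python
from collections import defaultdict

# Simplified, hypothetical alert stream. Each alert alone was "medium" and
# auto-closed; scored together per entity, they surface the chain.
alerts = [
    {"day": -2, "entity": "user.phished", "signal": "malicious-attachment", "sev": 2},
    {"day": -1, "entity": "user.phished", "signal": "encoded-powershell",   "sev": 2},
    {"day":  0, "entity": "user.phished", "signal": "unusual-admin-logon",  "sev": 2},
    {"day":  0, "entity": "svc_backup",   "signal": "tgs-request-burst",    "sev": 2},
    {"day":  0, "entity": "svc_backup",   "signal": "replication-access",   "sev": 3},
]

def correlate(alerts, window_days=3, escalate_at=3):
    """Group alerts per entity within a time window; escalate when distinct
    signals stack up, even if each one is individually low-severity."""
    by_entity = defaultdict(list)
    for a in alerts:
        by_entity[a["entity"]].append(a)
    incidents = []
    for entity, items in by_entity.items():
        span = max(i["day"] for i in items) - min(i["day"] for i in items)
        signals = {i["signal"] for i in items}
        if span <= window_days and len(signals) >= escalate_at:
            incidents.append((entity, sorted(signals)))
    return incidents

incidents = correlate(alerts)
# user.phished escalates: three distinct signals inside the window.
```

Real SIEM correlation rules are richer than this, but the principle is the same: severity should be a property of the story, not of the individual event.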
Recovery: What Actually Worked
The organization recovered. Here's what it took:
- Reset every password — Not just privileged accounts. Every account. Service accounts, admin accounts, standard users — all of them.
- Reset krbtgt twice — The standard guidance is two resets, waiting at least the maximum ticket lifetime (10 hours by default) between them. They did it three times to be safe.
- Revoke all Kerberos tickets — Force re-authentication across the entire forest.
- Purge cached credentials — Every workstation, every server. Forced logoff where possible.
- Rebuild compromised systems — IT-01 and the user's workstation were reimaged from known-good media.
What didn't work: changing the admin's password once. The attacker had golden ticket capability — a password change doesn't invalidate a forged TGT.
The Fixes That Came After
- Local admin rights review — Removed local admin from everything that didn't need it. Implemented LAPS.
- Kerberoasting visibility — Tuned alerts for 4768/4769 to flag service accounts specifically.
- DCSync alerting — Tuned 4662 to alert on any replication access from an account without a legitimate replication need (i.e., anything that isn't a domain controller).
- Phishing response — Any macro-enabled attachment from external senders gets blocked outright, not just flagged.
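The DCSync alerting change can be sketched as a simple predicate over 4662 events. The replication extended-rights GUIDs below are the well-known ones; the allow list and event shape are hypothetical:

```python
# Flag any 4662 event whose accessed properties include the directory
# replication extended rights, unless the caller is an allow-listed DC.
REPLICATION_RIGHTS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}
DC_ALLOW_LIST = {"DC01$", "DC02$"}  # hypothetical DC machine accounts

def is_dcsync_alert(event):
    """True for a 4662 event that looks like DCSync from a non-DC account."""
    return bool(event["event_id"] == 4662
                and REPLICATION_RIGHTS & set(event["properties"])
                and event["subject"] not in DC_ALLOW_LIST)

suspicious = {"event_id": 4662, "subject": "svc_backup",
              "properties": ["1131f6ad-9c07-11d1-f79f-00c04fc2dcd2"]}
normal = {"event_id": 4662, "subject": "DC01$",
          "properties": ["1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"]}
# is_dcsync_alert(suspicious) -> True; is_dcsync_alert(normal) -> False
```

Legitimate replication happens constantly between DCs, so the allow list is what keeps this rule from drowning in noise; everything else requesting replication rights is worth a page.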
The domain wasn't compromised because the attacker was sophisticated. It was compromised because the environment had the same gaps every environment has — local admin sprawl, under-monitored service accounts, and log alerts that didn't tell a story until it was too late.
Ed Truderung is a cybersecurity consultant specializing in identity security, Active Directory hardening, and incident response. He helps organizations find the gaps before attackers do.