Low-Skill Attacks or The Importance of Cross-Training

Gr@ve_Rose
Oct 22, 2020

If you work in *Sec, I’m sure you’ve heard the phrase “If you can touch it, you can own it”, and it holds true even today. If you are physically in control of a device, it’s yours for the taking. Some of you reading this will (almost) immediately see the ultimate successful attack vector whereas some of you may not. I’m not saying this to judge anyone (as we’re all always learning) but to point out how important it is to have some experience in other fields, not just the one you’re focusing on.

Although I’ve been a hacker for well over twenty-five years, a lot of my time has been spent in Architecture, Engineering and Deployment. To brag a little bit, I do have a wide breadth of knowledge and would consider myself an expert (though not the best) in a few things, which has come with dedication to the technologies and tasks at hand. You can be an expert as well: just make sure you are able to dedicate the time to fully understanding how and why things work. Ask your peers questions. Don’t be afraid of failure; failure is still a positive outcome so long as you learn from it.

With that being said, I was recently asked by a company to examine their remote working security. They shipped me a device (a Cisco ISR) preconfigured to connect into their environment. These devices, with the same configuration across the board, are sent to their employees. Once an employee plugs one into their network, the ISR connects to the main site; the employee then plugs into the ISR and can work as if they were on-site. I’ll get into the details in just a moment; first, I want to point something out:

The Red Team exists to make the Blue Team stronger. As a PenTester, when you are asked to break into something, your ultimate responsibility is to deliver a thorough, detailed report to the customer. This report should include what you were able to do and the remediation steps the customer can take to fix the security issues you’ve found.

This was a black-box PenTest where I wasn’t given any information — Just the device. Almost as if this were a CTF, the client provided me with an internal IP address and I was tasked with getting the hostname of that IP address. This would act as a flag for a successful hack. Due to the nature of the client, I wasn’t allowed to break into anything apart from the ISR but I was allowed to perform reconnaissance in an effort to get information.

I knew how I was going to win. If you’ve read this far, you may likely be thinking what I thought at this exact moment. But I don’t want to spoil anything in case you aren’t thinking that and I also needed to do due diligence on all vectors.

The first thing was to establish what the unit does on its own. Very likely it would use DHCP to get its WAN address, since there’s a very low likelihood that an administrator would spend time hard-coding IP addresses for every employee’s network.

Keep in mind that the “WAN” in this case would still be an Internal (RFC-1918) address. WAN, in this specific terminology, refers to the interface(s) of the device which (eventually) leads to the Internet and/or holds the Default Route. Even in air-gapped networks, there is likely a WAN interface.

Here’s the basic architecture I configured for this testing environment:

  • VLAN 10 would be used for the WAN connection
  • VLAN 30 would be used for the LAN connection(s)
  • These were trunked up to an ASA which held the Default Gateways for both VLANs
  • The ASA would offer DHCP, though I left the DHCP service disabled at first
  • I had a SPAN port where I could use Wireshark to see what the ISR was trying to do and analyze the payloads

I started by plugging all interfaces on the unit into the switch and watched what happened. As I expected, the WAN was sending out DHCP Discover packets, so I enabled DHCP on VLAN 10. The unit got its IP address and then immediately attempted to send ISAKMP traffic to two specific destinations. This told me that the VPN termination points were not resolved with DNS and the IP addresses were hardcoded into the configuration of the ISR.
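If you want to watch that same behaviour yourself, a capture on the SPAN feed along these lines does the trick. The monitor NIC name (eth1) is an assumption, and I’ve thrown in UDP/4500 to cover NAT-T just in case:

tcpdump -ni eth1 'udp port 67 or udp port 68 or udp port 500 or udp port 4500 or esp'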

The next step was to start giving the ISR access to the Internet little-by-little. I allowed the ISAKMP and IPSec traffic out and looked at my PCaps.

  • In Phase One, one VPN used PKI (Certificate) authentication whereas the second used PSK (Passphrase)
  • Both used Main Mode (6 packets protecting Peer Identities)
  • I was able to get the Encryption, Hashing, Lifetime and Diffie-Hellman Group being negotiated.

I used the wonderful ike-scan tool in an attempt to enumerate the VPN termination points. Unfortunately, neither supported Aggressive Mode, but I was able to identify which vendor each one was. Since I wasn’t allowed to attack the VPN termination points, brute-forcing was out of the question so I moved on.
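For reference, the probing looks something like this (the peer addresses here are hypothetical):

ike-scan -M 203.0.113.10             # Main Mode probe; -M prints one transform attribute per line
ike-scan -A 203.0.113.10             # Aggressive Mode probe (rejected by both peers in this case)
ike-scan --showbackoff 203.0.113.10  # fingerprint the vendor from the retransmission back-off pattern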

I plugged a computer into the first LAN port and, as I always do, ran a PCap to see what would happen. I saw the machine send out DHCP Discover packets but nothing came back. Then… I saw the reason why. An EAP Identity Request packet was sent out by the ISR. For those of you unfamiliar with the workings of EAP, the quick ‘n dirty is that you need to authenticate to the network device before being granted any service access or permissions. Since I didn’t have a certificate or any credentials to authenticate with, I wasn’t going any further on the first LAN port.
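EAPOL frames are easy to spot if you want to confirm this on your own gear; they ride directly on Ethernet with EtherType 0x888e, so a capture filter like this (eth0 assumed) shows the Identity Requests:

tcpdump -eni eth0 'ether proto 0x888e'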

I plugged into the second LAN port and eagerly watched my PCaps… Were all the LAN ports configured with EAP or would I get lucky on the rest? It turns out, Lady Luck was on my side as the computer received an IP address from the second LAN port. I immediately looked at the DHCP Offer details in the PCap and here’s what I discovered:

  • The ISR used a DHCP server on the other end of the VPN which meant that the LAN port wouldn’t give out an address unless the ISR was connected to the VPN
  • I was provided two DNS servers which were located inside the VPN domain
  • I was given the DNS suffix for the search domain
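Those options can be pulled straight out of the capture. A tshark sketch, assuming a Wireshark 3.x build (older releases name these fields bootp.* instead of dhcp.*) and a hypothetical capture file lan2.pcap:

tshark -r lan2.pcap -Y 'dhcp.option.dhcp == 2' \
  -T fields -e dhcp.option.domain_name_server -e dhcp.option.domain_name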

Now, what could I do with this information… The first thing was to check for a PTR record for the IP address I was given as the flag:

dig @10.10.10.10 -x 10.20.30.40
PTR computer.internal.example.net

Lo and behold, I was provided with the system name I needed for my flag. But what else could I do within the realm of the Rules of Engagement (RoE) the client had outlined? We know we can hit a DNS server, so let’s try a Zone Transfer:

dig @10.10.10.10 AXFR example.net
Transfer failed

That was out of the question, but it raised a new question: why was the transfer being blocked? I tried to send a packet to the DNS server (10.10.10.10) on TCP/53, which is commonly used for Zone Transfers:

hping3 10.10.10.10 -S -c 1 -p 53
Administratively Prohibited: 192.168.0.1

This is a very important message to get back. When a TCP SYN packet hits a closed port or is otherwise not allowed to generate a SYN/ACK packet, there are usually only three options you’ll see:

  • Nothing: This is the default “Drop” from most firewalls where nothing is sent back to the client
  • Reset: Some hosts without firewalling will send a TCP RST packet back for a closed port; the connection fails, but it still tells a threat actor that there is a device at that IP address
  • ICMP Administratively Prohibited: Although it likely shows up on other devices, I’ve only ever encountered this from Cisco routing devices with Access Control Lists (ACLs) whose Access Control Entries (ACEs) block the traffic
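On the wire, that third case arrives as ICMP Type 3 (Destination Unreachable) with Code 13 (Communication Administratively Prohibited), so a capture filter along these lines isolates it (eth0 assumed):

tcpdump -ni eth0 'icmp[icmptype] == icmp-unreach and icmp[icmpcode] == 13'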

Due to this message, I knew that the ISR was blocking the traffic. If I could get around that restriction, maybe the DNS server would accept a Zone Transfer request. Alas, I had ACLs in front of me.

I did, however, have DNS (UDP/53) requests available, so what could I do with that? First, let’s find the path to our destination. I ran

traceroute -n -q 1 -m 15 -p 53 10.10.10.10

Which gave me all the IP addresses of the intermediary Layer-3 routing devices. I could now start working on a very (!) basic “map” of the network as seen from behind the ISR. I ran the IP addresses through PTR checks and got some really juicy information:

dig @10.10.10.10 -x 10.200.200.1
gi32-swcore02.internal.example.net

As a network administrator, this is nice to see. Interface names are embedded in the hostnames so ports can be identified and troubleshot quickly from logs. As a Red Team’er, this is even nicer to see. With near-total certainty, we know that we’re routing through the core switch on port 32 of that core. With the addition of “02” in the name, it’s very likely that they’ve got (at least) two core switches and are also likely using VSS to ensure uptime and avoid asymmetric routing, as one would do in this type of architecture.

While this little map is nice and all, it only shows us one path. Can we get any more paths?

I ran another nslookup with the “NS” type set against the domain “internal.example.net” and was presented with forty (40) additional name servers and their IP addresses. I then ran queries against some of those DNS servers directly and got a response each time. This tells me that either:

  • There is firewalling along each path that still allows remote clients to perform DNS lookups against every server, or
  • There is no firewalling between the remote clients and these DNS servers

I had to work with the law of averages. Creating tight firewall rules on non-firewall devices (such as Layer-3 switches with ACL capability) is a real pain in the ass to manage and maintain so I assumed the latter.

I created a script which would run similar traceroutes to all forty of those DNS servers on UDP/53, grab the IP addresses for each hop (awk), run a PTR check against the IP address and extract the hostname. With that, I had switch and switchport information.
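Sketched out, such a script could look like this. The server list comes from the NS lookup above, and 10.10.10.10 stands in for one of the DHCP-provided resolvers:

dig @10.10.10.10 +short NS internal.example.net | while read -r ns; do
  ip=$(dig @10.10.10.10 +short A "$ns" | head -1)    # resolve the NS name via the proper server
  traceroute -n -q 1 -m 15 -p 53 "$ip" \
    | awk '$2 != "*" {print $2}' \
    | while read -r hop; do
        name=$(dig @10.10.10.10 +short -x "$hop")    # PTR check on each hop
        [ -n "$name" ] && printf '%s\t%s\n' "$hop" "$name"
      done
done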

You may ask why I didn’t just omit the “-n” flag in my traceroute… I wanted to have a separate process that queried the proper DNS server. I know the computer would likely already do that since it got its info from DHCP but, when it comes to this, I’d rather spend the extra time making sure it’s querying the proper server.

I tried accessing a few IP addresses on ports other than UDP/53 but was always blocked by the ISR. If only I could get rid of the ACLs in my way…

If you’ve stuck around this far, this is where the simple attack takes place. Strap in. :)

I took my handy Cisco console cable, attached it to the ISR, fired up minicom and rebooted the ISR. When it started to boot, I issued the break sequence, which dropped me at a new prompt waiting for my input.

rommon 1>

If you’re unfamiliar with Cisco networking gear, ROMMON is a special mode which allows you to modify the boot-up sequence of the hardware. It’s like GRUB but for Cisco. One of the things you can tell ROMMON to do is ignore the startup configuration file: setting the configuration register to 0x2142 tells IOS to skip the startup configuration in NVRAM at boot. I set the register and reloaded the machine.

rommon 1> confreg 0x2142
rommon 2> reload

The ISR happily booted up and ignored the client’s configuration file. You may ask what this provided me since I now have a router with a default, factory-reset configuration on it. It wouldn’t connect to the client’s site, DHCP wouldn’t work, so why did I do this?

Cisco CLI has two modes: Regular and Enable. Enable mode allows you to configure the Cisco appliance whereas Regular mode is usually very restricted. Since there was no configuration on the appliance, I could log into Enable mode without a password. Again, what does this provide me? The ability to configure a router with no configuration on it… Exactly. I was in Enable mode and the last step was to load the configuration I had originally ignored…

Router# copy startup-config running-config
Example_ISR#

I was now in full configuration control of the ISR, running the configuration the client had created.

The first thing to do was to establish my foothold so I created a new user and set their Privilege Level to 15 so I could be an administrator. I also changed the enable password so I could escalate to Read-Write mode whenever I wanted. Done.

Next, I ran “sh run” to show the full running configuration on the device. This was a goldmine. Here are some of the juicy things I found:

  • The IP addresses of their RADIUS and TACACS+ servers
  • Their EIGRP configuration
  • The plain-text PSK to one of the VPNs
  • The certificate used for VPN negotiation
  • All their ACEs and ACLs

But most important of all were the Level-7 passwords for:

  • The certificate’s Private Key
  • The RADIUS and TACACS+ shared keys
  • The passphrase for EIGRP negotiation

What are Level-7 passwords, you may ask? They are trivially decryptable passwords found in Cisco configurations. Just go and Google “decrypt cisco level 7 password” and be amazed. :) With this information, I could connect my own device to either VPN termination point (since I had the Pre-Shared Key as well as the ability to use the certificate), modify all the ACLs on the machine, attempt to inject routes into their EIGRP adjacency, disable the EAP check on LAN port 1 and, really, do whatever I wanted.
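To give a sense of just how weak the scheme is, here’s a minimal decoder sketch in BASH. The XOR translation table is the widely published one, and the sample hash is a commonly cited test vector, not anything from the client:

#!/usr/bin/env bash
# Type-7 "encryption" is a Vigenere-style XOR against a fixed, public key.
xlat='dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv9873254k;fg87'
decrypt7() {
  local hash=$1
  local seed=$((10#${hash:0:2}))   # first two digits are an offset into xlat
  local out='' i ch idx key
  for ((i = 2; i < ${#hash}; i += 2)); do
    ch=$((16#${hash:i:2}))                      # next ciphertext byte
    idx=$(( (seed + (i - 2) / 2) % ${#xlat} ))  # position in the key stream
    key=$(printf '%d' "'${xlat:idx:1}")         # ASCII value of the key character
    out+=$(printf '%b' "$(printf '\\%03o' $((ch ^ key)))")
  done
  printf '%s\n' "$out"
}
decrypt7 '02050D480809'   # commonly cited sample; should print "cisco"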

I noticed that the second LAN port I was plugged into was meant to be reserved for Voice, which is why it didn’t have EAP on it: their VoIP phones wouldn’t work with EAP.

I modified the ACL I was dealing with to have “permit ip any any” as the first ACE, reset the configuration register back to loading the configuration file at boot, saved the configuration and rebooted the ISR one last time.

The next thing I did was run a script to perform PTR lookups across their subnets, based on the idea that most people use a /24 for endpoint subnets. To be safe, however, I included .0 and .255: if you have a /23, for example, then .255 (in the lower /24 of the pair) and .0 (in the upper /24) are unicast IP addresses, which means you can put a host there. This gave me a full list of valid PTR records to parse through. I omitted any addresses which didn’t have a PTR entry and used those remaining as targets for my last tests…
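A sketch of that sweep, with hypothetical /24 blocks and the same stand-in DNS server as earlier:

for net in 10.20.30 10.20.31; do            # hypothetical target subnets
  for h in $(seq 0 255); do                 # .0 and .255 included on purpose
    name=$(dig @10.10.10.10 +short -x "$net.$h")
    [ -n "$name" ] && printf '%s.%s\t%s\n' "$net" "$h" "$name"
  done
done > ptr_targets.txt                      # only hosts with PTR records survive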

Now that I wasn’t restricted by the ACL, I ran the Zone Transfer again against all forty-two DNS servers but they all rejected the query. I was, however, able to verify that two of the DNS servers were also Domain Controllers, so that could be an attack vector as well.

Lastly, I used a BASH “while” loop to load up my list of targets and, for each target, send a single TCP SYN packet to that host on ports 22 and 23 with hping3. If I received a SYN/ACK back, then there was something interesting listening for authentication. I got quite a lot back; as per the RoE, however, I didn’t attack anything.
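Something along these lines, reusing the ptr_targets.txt list from the sweep above (hping3 needs root and prints “flags=SA” when a SYN/ACK comes back):

cut -f1 ptr_targets.txt | while read -r ip; do
  for port in 22 23; do
    # -S -c 1: a single SYN probe; a SYN/ACK reply means something is listening
    hping3 -S -c 1 -p "$port" "$ip" 2>/dev/null \
      | grep -q 'flags=SA' && echo "$ip:$port is listening"
  done
done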

But I could have.

If you can touch it, you can own it. I could’ve started and even stopped at the console configuration override and called it a day, but that wouldn’t have exposed the lax network security on the remote side as much. Had I not done any (or as much) Engineering in my career, I may not have known about using ROMMON to reset a lost password. I also wouldn’t have known to tell the customer that password recovery can be disabled, leaving a full factory reset as the only way back into a device with a lost password. It’s this last part that’s important to go in the report: how to help remediate the issues that you, as a Red Team’er, have found.


Gr@ve_Rose

CSO, Security Engineer, RedTeamer, PenTester, Creator of https://tcpdump101.com, Packet Monkey