Building off of the “101” article written here, let’s continue with the roadmap to becoming a security researcher. In the previous article I explained the many interpretations of what this role consists of versus the “white hat hacker” nomenclature. In this blog post and onward I will stick to the “security researcher” title (unless otherwise stated), as it seems more appropriate and covers the other titles given to this role (penetration tester, ethical hacker, etc.).
Further, my intention is to work through the different stages of the methodological approach used when performing a penetration test / vulnerability assessment. Before getting there, however, I want to set a foundation of what you need to do to prepare for those more advanced steps. You will need to learn many different technologies, as well as the diligence to not be easily deterred. Since it would be unfeasible to learn everything overnight, let’s break this process into byte-sized chunks that are easier to digest. Lastly, to be clear, there is no “one size fits all” approach to this – I am simply speaking from my experience and encounters thus far.
Alright, enough with the introductions – let’s get back on track, “Track: 102” that is!
Practice, practice, practice – that makes perfect, right? In a way, yes; in a way, no! Personally, I learn in many different ways – sometimes a theoretical approach helps introduce or expand a concept, such as reading a technical book about a technology or related content, while other times practice and application is where it’s at. To me, a good mix of the two works best. After all, if you only theorize, you never build anything, and if you only do, you lack the theory to guide your development.
Virtualization Technology Talk
When it comes to theorizing, you are always more than welcome to read the content at Secplicity.org and whatever else suits your interests (of course we know you just love our Secplicity posts!). As for practicing, the best way to do so is in virtual land!
There are a few small but important things to keep in mind about virtualizing – “buzz words,” if you will. Surely most of us know what an operating system is, whether it be Windows, Mac, or Linux. Likewise, perhaps some of us are familiar with virtualization software as well. The important thing to bring up is the level at which this virtualization software is deployed. By that I mean there are Type-1 hypervisors and Type-2 hypervisors, each with its place in particular deployments:
- Type-1 hypervisors are stand-alone OSes – software that is installed directly on the hardware of a given computer system. The hypervisor itself assumes the OS software layer and is designed specifically to host multiple virtual machines. Examples include VMware’s ESXi server and Microsoft’s Hyper-V.
- Type-2 hypervisors are software applications installed on standard OSes. That is, they’re installed on top of the aforementioned OS examples but similarly offer virtualization capabilities. Essentially there is an extra software layer in Type-2 deployments. Examples include VMware’s Fusion (Mac) and Workstation / Player (Windows).
The reason I bring it up this way is that some malware samples have virtualization-detection capabilities. If present, this logic can stop the malware from executing at all, inhibiting any monitoring of its behavior. Further, there can be logic built around – buzz-word drumroll please – “zero day” vulnerabilities that actually escape the hypervisor. If you were using a Type-2 hypervisor to analyze some newfound malware, you could potentially infect your host’s OS in this scenario.
Enter Type-1 hypervisors. Granted, there is still a potential for an escape in this scenario as well, but it wouldn’t affect your standard-issue work (or even personal, if you’re doing this on the side) computer or laptop. It is important to know the risks involved with malware analysis and sampling.
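To make the virtualization-detection point concrete, here is a minimal Python sketch of one common heuristic – checking whether the NIC’s MAC address begins with a well-known hypervisor vendor prefix. The helper name and the exact prefix list are illustrative, not taken from any particular malware sample:

```python
# One simple VM-detection heuristic: hypervisors assign guest NICs MAC
# addresses from their own vendor (OUI) ranges, which malware can check.
VM_MAC_PREFIXES = {
    "00:0c:29": "VMware",
    "00:50:56": "VMware",
    "08:00:27": "VirtualBox",
    "00:15:5d": "Hyper-V",
}

def looks_like_vm(mac: str) -> bool:
    """Return True if the MAC's first three octets match a known VM vendor OUI."""
    prefix = mac.lower().replace("-", ":")[:8]
    return prefix in VM_MAC_PREFIXES
```

Real samples usually combine many such checks (registry keys, device names, timing tricks); this is just one of the simplest.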
Regardless of what route you take, the point I am making is to virtualize testbeds – personally I prefer Type-1 hypervisors for the reasons stated above. One main reason to virtualize anything is something known as “snapshots.” Snapshots are ready-to-go images that can be created after getting a VM up and running to your liking. To clarify, when sampling malware, imagine the number of malware samples collected and wanting to examine, say, just 50 of them. Do you create new VMs one-by-one, 50 times? No way! That would be really time consuming, and the same result is easily achieved via snapshots. You invest time in creating a VM once, installing whatever software you need on it – say an office suite, email, sample data including pictures, videos, or business documents – and then have the hypervisor software “snapshot” the machine.
This snapshot is a reusable image of that machine’s state that can be redeployed time and time again, sometimes even automatically! So you run a malware sample and the VM gets infected, yet you don’t need to clean anything up – simply restore the previously made snapshot and get ready for the next malware sample to examine.
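To illustrate how that restore-and-detonate loop can be automated, here is a rough Python sketch driving VirtualBox’s VBoxManage CLI. The VM name, snapshot name, and the detonation step are placeholders, and the same idea applies to other hypervisors’ tooling:

```python
import subprocess

def snapshot_cmd(vm: str, action: str, name: str) -> list[str]:
    """Build a VBoxManage snapshot command; action is 'take' or 'restore'."""
    return ["VBoxManage", "snapshot", vm, action, name]

def detonate_sample(vm: str, sample: str) -> None:
    """Placeholder: copying the sample into the VM and running it is tool-specific."""
    raise NotImplementedError

def analyze_samples(vm: str, baseline: str, samples: list[str]) -> None:
    """Restore the clean baseline snapshot before examining each sample."""
    for sample in samples:
        subprocess.run(snapshot_cmd(vm, "restore", baseline), check=True)
        detonate_sample(vm, sample)
```

You would take the baseline once (`VBoxManage snapshot <vm> take <name>`) after building the VM to your liking, then restore it between samples as shown.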
Beyond malware sampling and reverse engineering, the hypervisor implementation is really just dependent on your needs. Corporations and enterprises may opt for Type-1 deployments for centralized management abilities, whereas individuals or smaller teams may opt for Type-2 deployments – perhaps even a mix.
Another reason for virtualization is practical experience. As discussed above, it is important to learn the technologies commonly used. One way to do this is to run virtualized environments for one of many possible scenarios, sometimes even for very specific set-ups. Let’s say you want to learn how to use Windows Server 2012 and Windows Server 2016, or want to test deployments of different Windows 7 / 8 / 10 clients and how they integrate with each server version. Going through the process of setting up each host and learning how each works is possible by virtualizing an instance and simply tinkering.
What to tinker with? That is a hard question. It really depends on your interests in security, but to keep in line with the methodological approach, we’d want to focus on networking (to gain access and maintain it) as well as how persistence works on a given machine. That is, how does the OS handle network connections, how does it store data, how does it monitor running tasks? As you can imagine, this list can go on and on, and that’s why diligence is important. There are and will be many difficulties faced, but being persistent and progressive will let you see through or past each hang-up. Don’t get caught up on just one subject; work across the different domains of computing technology and, with time, you’ll only continue to grow.
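As one tiny example of that kind of tinkering, this Python sketch peeks at how a Linux guest tracks running tasks by reading the /proc filesystem; Windows and Mac have their own equivalents (Task Manager APIs, `ps`, and so on):

```python
import os

def running_pids() -> list[int]:
    """List PIDs of running processes by reading /proc (Linux-specific)."""
    return sorted(int(d) for d in os.listdir("/proc") if d.isdigit())

def process_name(pid: int) -> str:
    """Read a process's short command name from /proc/<pid>/comm."""
    with open(f"/proc/{pid}/comm") as f:
        return f.read().strip()
```

Poking at these files by hand inside a throwaway VM is exactly the sort of low-stakes exploration snapshots make painless.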
Host Hardware Talk
To help you get started, I recommend assessing where you are at and what is within the realm of possibility – as well as determining what your end goal is. Obviously, another big factor in most anything is money. If you have the money, then invest in a decent server with a good CPU and enough RAM, as well as storage space. On the other side of the spectrum, if you’re tight on money then a Type-2 hypervisor may be more appropriate. In either case, you can always look into free or open source options for hypervisors. Here is a great article covering different options, some of which are free on a timed-trial basis and others that are open source. Follow the documentation for your desired hypervisor, get it installed on whatever host, and see about getting some ISOs for testing.
An ISO is a disk image of an optical disc, which is how data was transported way back when. If you’re unsure what optical discs or CDs are, think of them as pre-USB-flash-drive methods of storing and sharing content. So when you acquire an ISO, you’re procuring a disk image of said software. You can get Linux ISOs from here, or simply Google your preferred Linux flavor and go from there. Microsoft offers evaluations, as detailed here. As for Mac evaluations, I can only refer you to this public forum post covering the topic in more detail. Regardless of how you deploy your scenario, take snapshots after getting things set up but before proceeding with OS-altering actions. This way you can easily revert to the “perfect image” and go about your testing however you see fit.
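Before booting any downloaded ISO, it’s good practice to verify its checksum against the value published on the vendor’s download page. A small Python sketch using only the standard library:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a file's SHA-256 in chunks so large ISOs don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Compare the returned hex string to the published checksum; a mismatch means a corrupted or tampered download.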
To touch on the requirements for a proper virtual host a bit more, you might be wondering what hardware is best. What matters most is to ensure that the CPU even offers virtualization – if the CPU doesn’t support it, nothing else matters. Some hypervisors have a hardware compatibility list, so it’s important to check it: verify the NIC is compatible (I actually ran into this exact issue when setting up my own ESXi host a while ago) along with other components, to ensure the software will be compatible and supported. Next, determine how much RAM you’re going to need, which depends on how many VMs you want to spin up at one time – a key question to answer from the start. Lastly, you’ll also need to figure out how much storage to provide for your VMs.
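As a quick way to answer the “does my CPU even support virtualization?” question on a Linux box, you can look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A small Python sketch – the path parameter is only there so the check can be pointed at a sample file:

```python
def cpu_supports_virtualization(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the Intel VT-x ('vmx') or AMD-V ('svm') flag appears in the CPU flags."""
    with open(cpuinfo_path) as f:
        flags = set(f.read().split())
    return "vmx" in flags or "svm" in flags
```

Note that the flag can be present on the chip yet disabled in the BIOS/UEFI, so check the firmware settings too if this comes back false.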
My only suggestion, and it’s just a suggestion, would be to brainstorm a server build around an appropriate CPU. Websites like newegg.com offer some discounted items; you can start there, or perhaps at a local PC shop. Start by looking at some semi-inexpensive CPUs; verify that they support virtualization, and check the maximum supported RAM size and speeds. Then look into a compatible motherboard that can house the hardware. Some hypervisors can utilize more than a single CPU, so if you intend on using more than one, make sure the motherboard can accept them, along with however many RAM sticks you plan to install.
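For the RAM-sizing question, a back-of-the-envelope Python sketch can help frame the build; the overhead figure below is a placeholder assumption for the host/hypervisor itself, not a vendor number:

```python
def host_ram_needed_gb(vm_count: int, ram_per_vm_gb: int,
                       hypervisor_overhead_gb: int = 4) -> int:
    """Rough RAM budget: concurrent VMs times per-VM allocation, plus host overhead."""
    return vm_count * ram_per_vm_gb + hypervisor_overhead_gb
```

For example, five concurrent 4 GB VMs would suggest shopping for a board and CPU that comfortably support 24 GB or more, leaving headroom to grow.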
Wrapping up this rather lengthy post, the key takeaways are: determine the goal behind why you’re doing what you’re doing, figure out which virtual deployment best fits your use case, and virtualize for practical experience in specific environments. Virtual environments are pretty much the standard nowadays in many aspects of IT, though it is worth reiterating that malware analysis can call for physical test clients to handle the anti-virtualization logic in more advanced samples. Aside from that, though I may have experienced one set of technologies, I encourage you to experience different sets. It’s okay to try one software application over another and then switch back – this only exposes you to that much more experience with varying products.