When a natural disaster occurs, there is often significant damage to vital infrastructure. Repair crews must quickly locate the most heavily damaged structures in need of immediate attention, and they must allocate their resources efficiently, saving time and money, without having to assess each area individually. To streamline this process, drones can be used to photograph the affected areas, and three-dimensional models of each area, including point clouds, panoramas, and other three-dimensional representations, can then be constructed from the photographs. This process is called photogrammetry.

The first step in constructing a three-dimensional model from two-dimensional photographs is to detect key features that match across all of the photos. This is done using David Lowe's Scale-Invariant Feature Transform (SIFT) algorithm. Pairwise matches are then computed with a k-nearest-neighbor search that compares the images one pair at a time and records the pixel coordinates of the matching features. These pixel matches are passed to an algorithm that estimates the relative camera positions in 3D space, and the estimated positions are used to orient the photos so that a 3D model can be generated. Both steps are sketched below.

The purpose of this research is to determine the best method for generating a 3D model of a damaged area with maximum clarity, in a relatively short period of time, and at the lowest possible cost, thereby allowing repair crews to allocate resources more efficiently.
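As a minimal sketch of the feature-detection and matching steps, the following example uses OpenCV, assuming its SIFT implementation is available (OpenCV 4.4 or later); the image paths are placeholders. The k-nearest-neighbor search (k = 2) is followed by Lowe's ratio test to discard ambiguous correspondences.

```python
# Sketch: detect SIFT key features in two photos and compute pairwise
# matches with a k-nearest-neighbor search (k = 2). The image paths
# are placeholders for two overlapping drone photographs.
import cv2

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# For each descriptor in image 1, find its two nearest neighbors in
# image 2, then keep a match only if the best neighbor is clearly
# better than the second best (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Pixel coordinates of the matched features in each image; these are
# the correspondences handed to the pose-estimation step.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```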
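For the pose-estimation step, one common approach (assumed here, since the text does not name a specific algorithm) is to estimate the essential matrix from the matched pixel coordinates and decompose it into a relative rotation and translation. In the sketch below, the intrinsic matrix K and the synthetic matches are assumptions standing in for real calibration data and the SIFT/k-NN output, so the example runs stand-alone.

```python
# Sketch: recover the relative camera pose from matched pixel
# coordinates via the essential matrix. K and the synthetic matches
# are assumed values replacing real calibration and matcher output.
import cv2
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])  # assumed camera intrinsics


def project(points, K):
    """Project 3-D points through a pinhole camera with intrinsics K."""
    p = points @ K.T
    return p[:, :2] / p[:, 2:]


# Synthetic stand-in for the matched features: random 3-D points seen
# from two cameras separated by a known baseline.
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(50, 3))
baseline = np.array([0.5, 0.0, 0.0])
pts1 = np.float32(project(X, K))             # first camera at the origin
pts2 = np.float32(project(X - baseline, K))  # second camera, translated

# Five-point algorithm inside RANSAC to tolerate bad matches.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)

# R and t give the second camera's orientation and unit-scale position
# relative to the first; chaining such pairwise poses orients the photos.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print(np.round(R, 3))          # near the identity for this pure translation
print(np.round(t.ravel(), 3))  # unit vector along the baseline axis
```

Because the essential matrix only determines translation up to scale, pipelines built this way typically fix the overall scale with ground control points or GPS tags before generating the final model.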