3D-2D medical image matching is a crucial task in image-guided surgery, image-guided radiation therapy, and minimally invasive surgery. The task relies on identifying correspondences between a 2D reference image and the 2D projection of a 3D target image. In this thesis, we propose a novel framework for matching 3D CT projections to 2D X-ray images, tailored to vertebra images. The main idea is to train a vertebra detector using a deep neural network; each detected vertebra is represented by a bounding box in the 2D projection of the 3D CT volume. The bounding box annotated by a physician on the X-ray image is then matched to the corresponding box in the CT projection. We evaluate the proposed method on our own 3D-2D registration dataset, and the experimental results show that our framework outperforms state-of-the-art neural-network-based keypoint matching methods.
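To make the box-matching step concrete, the sketch below pairs the physician-annotated X-ray box with the best-overlapping detected box in the CT projection. This is a minimal illustration under stated assumptions, not the thesis's actual method: the abstract does not specify the matching criterion, so intersection-over-union is assumed here, and the function names and coordinates are hypothetical.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_annotated_box(annotated_box, detected_boxes):
    """Return the index of the detected box that best overlaps the
    annotated box, scored by IoU (an assumed criterion)."""
    scores = [iou(annotated_box, box) for box in detected_boxes]
    return int(np.argmax(scores))

# Hypothetical example: three detector outputs in the CT projection and
# one physician-annotated box on the X-ray image.
detected = [(10, 20, 50, 60), (12, 70, 52, 110), (14, 120, 54, 160)]
annotated = (13, 68, 50, 108)
print(match_annotated_box(annotated, detected))  # -> 1
```

In practice the two images would first need to be brought into rough spatial alignment for an overlap-based score to be meaningful; a distance between box centers or a learned similarity would be alternative criteria.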