Curious questioning, or the ability to inquire about the surrounding environment or additional context, is an important step towards building agents that go beyond learning from a static knowledge base. The ability to request feedback is the first step in building intelligent agents that can incorporate this feedback to enhance their learning. Visual questioning tasks help model this human skill of “curiosity.” In this thesis, we focus on two relevant vision-based questioning tasks: visual question generation and visual dialog. We propose novel approaches and evaluation metrics for these tasks. For visual question generation, we combine language models with variational autoencoders to enhance diversity in generated text, and we propose diversity metrics to quantify these improvements. For visual dialog, we introduce a reformulated dataset that enables training questioning agents in a dialog setup, along with simpler and more effective baselines for the task. Our combined results in visual question generation and visual dialog contribute to establishing visual questioning as an important next step for computer vision and, more generally, for artificial intelligence.