Many useful AI tasks, such as machine translation, image captioning, and program synthesis, can be abstracted as structured prediction problems. For these problems, the search space is well defined but extremely large: all English sentences in the case of captioning or translation, and all programs derivable from a context-free grammar in the case of program synthesis. Inferring the correct output (a sentence or a program) given the input (an image or a user-defined specification) is therefore an intractable search problem. To overcome this, heuristics, either hand-designed or learnt from data, are often employed. In my work, I propose modified search procedures that output multiple diverse sequences and, for the task of outputting programs, a novel search procedure that accelerates existing techniques via heuristics learnt by deep networks. Going further, I propose to study the interplay of memory and search, i.e., processing each new query with a memory of previous queries, specifically in the context of solving mathematical problems.

In the context of sequence prediction tasks like image captioning or translation, I introduce Diverse Beam Search (DBS), an approximate inference technique for decoding multiple relevant and diverse outputs. With the objective of producing multiple sentences that differ from one another, DBS modifies the widely used beam search procedure by greedily imposing diversity constraints. In follow-up work, we directly formulate the task of modeling a set of sequences and propose a trainable search procedure dubbed diff-BS. While both algorithms are task-agnostic, image captioning is used as the test-bed to demonstrate their effectiveness. In the context of program synthesis, I propose Neural Guided Deductive Search (NGDS), which accelerates deductive search via learnt heuristics. We find that our approach yields significant speedups without compromising the quality of the solutions found.
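The group-wise diversity idea behind DBS can be illustrated with a minimal sketch. This is not the thesis implementation: the vocabulary, the fixed next-token log-probabilities, and the Hamming-style diversity penalty strength are illustrative stand-ins for a real sequence model and its tuned hyperparameters. Each group runs a small beam search, but tokens already selected by earlier groups at the current time step are penalized, which greedily pushes later groups toward different outputs.

```python
import math

# Toy next-token scorer: log-probabilities over a small vocabulary,
# independent of the prefix (a stand-in for a real sequence model).
VOCAB = ["a", "b", "c", "d"]
LOGPROBS = {"a": math.log(0.4), "b": math.log(0.3),
            "c": math.log(0.2), "d": math.log(0.1)}

def diverse_beam_search(num_groups=2, beams_per_group=1,
                        steps=3, diversity_strength=1.0):
    """Greedy group-wise decoding: each group extends its beams as in
    ordinary beam search, but tokens already chosen by earlier groups
    at the current time step incur a diversity penalty."""
    # groups[g] is a list of (sequence, cumulative score) beams
    groups = [[([], 0.0)] for _ in range(num_groups)]
    for _ in range(steps):
        chosen_this_step = []  # tokens picked by earlier groups
        for g in range(num_groups):
            candidates = []
            for seq, score in groups[g]:
                for tok in VOCAB:
                    # Hamming-style penalty: count of earlier groups
                    # that picked this token at this time step
                    penalty = diversity_strength * chosen_this_step.count(tok)
                    candidates.append((seq + [tok],
                                       score + LOGPROBS[tok] - penalty))
            candidates.sort(key=lambda c: c[1], reverse=True)
            groups[g] = candidates[:beams_per_group]
            for seq, _ in groups[g]:
                chosen_this_step.append(seq[-1])
    return [seq for grp in groups for seq, _ in grp]

outputs = diverse_beam_search()
```

With two groups, the first group decodes greedily ("a" at every step, the highest-probability token), while the penalty steers the second group to the distinct sequence of "b" tokens. Setting `diversity_strength=0` recovers independent, identical beam searches.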
Further, I will discuss the application of this technique to programming by examples and to the synthesis of hard problems for a given solver. Finally, I study the interplay between memory and search, specifically in the context of mathematical problem solving. Analogical reasoning is a strategy commonly adopted by humans when solving problems: new and unseen problems are solved by drawing parallels to previously seen ones. Inspired by this, I propose to learn suitable representations of problems that allow solutions to previously seen problems to be reused as building blocks when constructing a solution for the problem at hand.
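The retrieval step of such an analogical approach can be sketched as nearest-neighbour search over problem representations. This is only an illustration of the idea, not the proposed method: the bag-of-words `embed` function stands in for a learned problem encoder, and the `MEMORY` contents are made-up examples.

```python
import math
from collections import Counter

# Hypothetical memory of solved problems: problem text -> known solution.
MEMORY = {
    "add 2 and 3": "2 + 3 = 5",
    "multiply 4 by 5": "4 * 5 = 20",
}

def embed(text):
    """Bag-of-words count vector (a stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_analogue(query):
    """Return the stored problem most similar to the query; its
    solution can then seed the search for the new problem."""
    q = embed(query)
    return max(MEMORY, key=lambda p: cosine(q, embed(p)))

analogue = retrieve_analogue("add 7 and 9")
```

A query such as "add 7 and 9" retrieves the stored addition problem rather than the multiplication one, and its solution template can then serve as a building block for the new query.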
Improved search techniques for structured prediction