Assigning a semantic representation to text is an important problem that can aid text understanding tasks such as textual entailment. In this thesis, we address the problem of assigning a shallow semantic representation to text. This problem has traditionally been studied in the context of verbs and their nominalizations. We propose to extend the task beyond verbs and nominalizations to other linguistic constructions, such as commas and prepositions.

We develop an ontology of predicate-argument relations that commas and prepositions express in text. Like the verb and nominal semantic role labeling schemes, the relations we propose are domain independent. For these two classes of phenomena, we introduce new corpora in which these relations are annotated.

From a machine learning perspective, learning to predict these relations is a structured learning problem. However, we have only small (for commas) or partially annotated (for prepositions) datasets. To predict the new relations, we show that using linguistic knowledge and information about the output structure can bias learning toward robust models.

Finally, we observe that the relations expressed by the various phenomena interact with each other by constraining each other's output. We show that we can take advantage of these inter-dependencies by enforcing coherence between their predictions. By constraining inference using linguistic knowledge, we can improve relation prediction performance.