Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony, mesh binding, and tensor product binding. Recent theoretical work has suggested that these methods will not scale well: they cannot encode human-sized structured representations without making implausible resource assumptions. Here I present an approach, based on the Semantic Pointer Architecture, that does scale appropriately. Specifically, I construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to encode and decode the main lexical relations in WordNet, a semantic network containing over 117,000 concepts. I demonstrate the capabilities of this model experimentally by measuring its performance on three tasks that test its ability to traverse the WordNet hierarchy accurately, as well as its ability to decode sentences involving WordNet concepts. I argue that these results show this approach to be uniquely well suited to providing a biologically plausible account of the structured representations that underwrite human cognition. I conclude with an investigation of how the connection weights in this spiking neural network can be learned online through biologically plausible learning rules.
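To make the encoding and decoding steps concrete, the binding operation that the Semantic Pointer Architecture inherits from holographic reduced representations is circular convolution, and decoding amounts to convolving with an approximate inverse and cleaning up the result against a known vocabulary. The following NumPy sketch illustrates that operation in the abstract, not the spiking implementation used in the model; the dimensionality and all vocabulary names (IS_A, PART_OF, canine, tail) are invented for the example.

```python
import numpy as np

D = 512  # pointer dimensionality; illustrative only, not the model's value
rng = np.random.default_rng(seed=0)

def make_pointer():
    """Draw a random unit vector to serve as a semantic pointer."""
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: combines two pointers into one of the same dimension."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(c, a):
    """Approximately invert binding by convolving with the involution of `a`."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))  # approximate inverse under convolution
    return bind(c, a_inv)

# Hypothetical relation and concept pointers (names invented for this example).
IS_A, PART_OF = make_pointer(), make_pointer()
canine, tail = make_pointer(), make_pointer()

# Encode a concept as a superposition of bound relation/filler pairs.
dog = bind(IS_A, canine) + bind(PART_OF, tail)

# Decode one relation: unbind, then clean up against the known vocabulary.
noisy = unbind(dog, IS_A)
vocab = {"canine": canine, "tail": tail, "IS_A": IS_A, "PART_OF": PART_OF}
print(max(vocab, key=lambda name: noisy @ vocab[name]))  # expected: canine
```

The unbinding result is only approximately equal to the stored filler, so some form of cleanup is required; here it is an explicit dot-product comparison, whereas a neural model would typically perform this step with an associative memory.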