An important step in acquiring a language is the ability to segment words from speech streams. Typical speech contains many cues to word segmentation, but cues are not always consistent. In studying the efficacy of particular cues, it has been suggested that some non-linguistic information, such as music, may actually help with word segmentation. Although it is traditionally accepted that music and language are treated as separate types of information by the brain, recent evidence suggests that there may be shared structural, though likely not semantic, properties.

The current study was designed to compare the effects of cues to word segmentation on learning rates in order to determine whether tonal information could provide a benefit beyond that provided by regular speech cues. Participants listened to a speech stream of pseudo-randomly repeated nonsense words. Speech streams were of four types: monotone, prosody-enhanced (final vowel lengthened), tonally-enhanced (each syllable "sung" on a particular tone), and tonal-word (every "word" "sung" in the same series of three tones). On a forced-choice test, participants were asked to choose which of a pair of syllable strings most resembled a word from the exposure stream. Learning was measured by the number of correct responses on the forced-choice test.

Results showed a significant facilitatory effect of the prosodic cue (i.e., final vowel lengthening), but no effect of either tonal condition, suggesting a privileged status for language-specific cues to word segmentation. The failure to replicate previous findings of tonal facilitation is discussed in relation to the detrimental effects of two unexpectedly high between-word transitional probabilities, as well as a potential lack of statistical power.