Some NLP applications require splitting a large raw text into sentences to extract more meaningful information. Intuitively, a sentence is an acceptable unit of conversation. For computers, however, sentence splitting is a harder task than it looks. A typical sentence splitter can be something as simple as splitting the string on a full stop (.), or as complex as a predictive classifier that identifies sentence boundaries:
>>> from nltk.tokenize import sent_tokenize
>>> inputstring = ' This is an example sent. The sentence splitter will split on sent markers. Ohh really !!'
>>> all_sent = sent_tokenize(inputstring)
>>> print(all_sent)
['This is an example sent.', 'The sentence splitter will split on sent markers.', 'Ohh really !!']
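To see why splitting purely on periods is not enough, here is a small stdlib-only sketch (the sample text is invented for illustration). Abbreviations and decimal numbers contain periods that are not sentence boundaries, so a naive split produces fragments:

```python
# Naive sentence splitting on "." over-splits on abbreviations
# ("Dr.", "p.m.") and decimal numbers ("$4.50").
text = "Dr. Smith paid $4.50 for apples. He left at 5 p.m. Really!"
naive = [s.strip() for s in text.split(".") if s.strip()]
print(naive)
# The first fragment is just "Dr" -- clearly not a sentence.
```

This is the failure mode that motivates a proper boundary detection algorithm such as the one behind sent_tokenize.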
Here, we split the raw text string into a list of sentences. The preceding function, sent_tokenize, internally uses a sentence boundary detection algorithm that comes pre-built into NLTK. If your application requires a custom...
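If you do need custom behavior without NLTK, a simple rule-based splitter can be written with the standard re module. This is only a minimal sketch of one common heuristic (split on ., !, or ? followed by whitespace and a capital letter); the function name and regex are illustrative, not part of NLTK:

```python
import re

# Heuristic boundary: sentence-ending punctuation, then whitespace,
# then an uppercase letter starting the next sentence.
SENT_BOUNDARY = re.compile(r'(?<=[.!?])\s+(?=[A-Z])')

def simple_sent_split(text):
    # Split on the heuristic boundary and drop empty pieces.
    return [s.strip() for s in SENT_BOUNDARY.split(text.strip()) if s.strip()]

sents = simple_sent_split(
    "This is an example sent. The splitter handles markers. Ohh really !!"
)
print(sents)
```

Such a regex splitter is fast but still fooled by abbreviations like "Dr."; a statistical boundary detector, like the one sent_tokenize uses, handles those cases better.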