Friday, March 15, 2013

Text Processing Tutorial with RapidMiner

     I know that a while back someone requested (on Piazza or in class, I can't remember which) a tutorial on how to process a text document in RapidMiner, and no one ever posted one. In this tutorial, I will try to fulfill that request by showing how to tokenize and filter a document into its individual words and then count the occurrences of each word in a text document (essentially the same assignment as HW 2, plus filtering, but done through RapidMiner rather than AWS).

1) I first downloaded my document (The Entire Works of Mark Twain) from Project Gutenberg's website as a text file. Save the document to a folder on your computer.

2) Open RapidMiner and click "New Process". In the left-hand pane of your screen, there should be a tab that says "Operators" - this is where you can search for and find all of the operators for RapidMiner and its extensions. Searching the Operators tab for "read" should give you output like this (you can double-click the images below to enlarge them):


There are multiple read operators depending on which file type you have, and most of them work the same way. If you scroll down, there is a "Read Documents" operator. Drag this operator into your Main Process window. When you select the Read Documents operator in the Main Process window, you should see a file uploader in the right-hand pane.


 Select the text file you want to use.



 3) After you have chosen your file, make sure that the output port of the Read Documents operator is connected to the "res" node in your Main Process. Click the "Play" button to check that your file has been read correctly. Switch to the Results perspective by clicking the icon that looks like a display chart above the "Process" tab at the top of the Main Process pane. Click the "Document (Read Document)" tab. Your output text should look something like this, depending on the file you chose to process:


4) Now we will move on to processing the document to get a list of its distinct words and each word's count. Search the Operators list for "Process Documents". Drag this operator into the main pane the same way you did the "Read Documents" operator.


Double-click the Process Documents operator to go inside it. This is where we will link operators together to break the entire text document down into its word components. The operators we need can be found in the Operator pane under the Text Processing folder. There you should see several subfolders such as "Tokenization", "Extraction", "Filtering", "Stemming", "Transformation", and "Utility", which describe the kinds of operations you can apply to your document. The first thing you will want to do is tokenize the document. Tokenization creates a "bag of words" from your document, which allows you to do further filtering on it. Search for the "Tokenize" operator and drag it into the "Process Documents" process.
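To make the tokenization step concrete, here is a rough Python sketch of what Tokenize does in its default mode, which splits on non-letter characters (an approximation for illustration, not RapidMiner's exact implementation):

```python
import re

def tokenize(text):
    # Split on runs of non-letter characters, mirroring the Tokenize
    # operator's default "non letters" mode (an assumption about its
    # behavior, not an exact reimplementation).
    return [tok for tok in re.split(r"[^A-Za-z]+", text) if tok]

print(tokenize("The Adventures of Tom Sawyer, by Mark Twain"))
# ['The', 'Adventures', 'of', 'Tom', 'Sawyer', 'by', 'Mark', 'Twain']
```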



Connect the "doc" node of the process to the "doc" input node of the operator if it has not connected automatically. Now we are ready to filter the bag of words. In the "Filtering" folder under the "Text Processing" operator folder, you can see the various filtering methods you can apply to your process. For this example, I want to filter out words that don't carry much meaning on their own (such as a, and, the, as, of, etc.); since my document is in English, I will drag "Filter Stopwords (English)" into my process. I also want to filter out any remaining words that are fewer than three characters long. Select "Filter Tokens by Length" and set your parameters as desired (in this case, I set my minimum number of characters to 3 and my maximum to an arbitrarily large number, since I don't care about an upper bound). Connect the nodes of each subsequent operator as in the picture.
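The two filtering steps above can be sketched in Python as follows; note that the stopword set here is a tiny illustrative sample, not the extension's built-in English list:

```python
# Sketch of the two filtering steps: drop English stopwords, then drop
# tokens outside the length bounds. STOPWORDS is a small sample for
# illustration only.
STOPWORDS = {"a", "an", "and", "the", "as", "of", "in", "to", "is"}

def filter_tokens(tokens, min_len=3, max_len=9999):
    kept = [t for t in tokens if t.lower() not in STOPWORDS]
    return [t for t in kept if min_len <= len(t) <= max_len]

print(filter_tokens(["The", "war", "of", "a", "thousand", "years"]))
# ['war', 'thousand', 'years']
```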


 After filtering the bag of words by stopwords and length, I want to transform all of my words to lowercase, since the same word would otherwise be counted separately in its uppercase and lowercase forms. Select the "Transform Cases" operator and drag it into the process.



 5) Now that my process has all the operators it needs for this example, I check all of my node connections and click the "Play" button to run the process. If all goes well, your output should look like this in the Results view:


 Congrats! You can now see a word list containing all the distinct words in your document, with each word's occurrence count next to it in the "Total Occurrences" column. If you do not get this output, make sure that all of your nodes are connected correctly and to the right type; some errors occur because the output at one node does not match the type expected at the input of the next operator. If you are still having trouble, please comment or check out the Rapid-I support forum.
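For readers who want to see the whole chain at a glance, the pipeline built above (tokenize, filter stopwords, filter by length, lowercase, count) can be sketched end-to-end in Python; again, the stopword set is a small stand-in for RapidMiner's built-in English list:

```python
import re
from collections import Counter

# Illustrative sample, not RapidMiner's full English stopword list.
STOPWORDS = {"a", "an", "and", "the", "as", "of", "in", "to", "it", "was"}

def word_counts(text, min_len=3):
    tokens = re.split(r"[^A-Za-z]+", text)                      # Tokenize
    tokens = [t for t in tokens if t.lower() not in STOPWORDS]  # Filter Stopwords
    tokens = [t for t in tokens if len(t) >= min_len]           # Filter Tokens by Length
    tokens = [t.lower() for t in tokens]                        # Transform Cases
    return Counter(tokens)                                      # total occurrences

counts = word_counts("The river was wide. The River ran fast.")
print(counts.most_common(3))
# [('river', 2), ('wide', 1), ('ran', 1)]
```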


5 comments:

  1. This is a really good tutorial. I was able to walk through the steps very easily. I was wondering why you chose to filter out short tokens; words of fewer than three letters are probably very common and would mostly be eliminated by the Filter Stopwords operator anyway. Maybe as a follow-up you (or I, for that matter) could do another text processing tutorial that goes a little more in depth. I was thinking about taking a look at n-grams. An n-gram is a sequence of n consecutive words: for example, a 2-gram is a pair of two words, while a 3-gram is a string of three words. I believe this would greatly help with understanding the data you are mining. For example, say you are mining movie reviews with the current method; right now you might get words like "story", "action", or "jokes". If you were to generate 2-grams instead, you might get results like "good story", "bad action", or "cheesy jokes", which gives you a lot more insight into the data you are mining. It would also be good if there were a better way to visualize this data, for example by putting the results into some sort of graph or chart to more easily understand what you are looking at.
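A minimal Python sketch of the 2-gram idea suggested in this comment (illustrative only, not a RapidMiner operator; the Text Processing extension also ships an n-gram operator that can be dropped into Process Documents):

```python
def ngrams(tokens, n=2):
    # Slide a window of size n over the token list and join each
    # window into one n-gram string.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(["good", "story", "bad", "action"]))
# ['good story', 'story bad', 'bad action']
```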




  2. Thanks so much!! Your site looks nice. Wikidot sounds perfect!
  3. Hi,
    I went through your steps, and it's good to learn, but I'm getting some noisy data. I need all the words to appear as they do in the document I added to Read Documents, and I want to know how to do this same word count for multiple files in a directory.

    Thanks
    Sridhar

  4. The Read Documents and Process Documents operators are not available in my Operators list. Is there any way to import them or something else? What is the solution?
