November 21, 2017

Topic Modelling on Fin-tech blogs

Topic modelling is a natural language processing and machine learning technique often used in text mining. As the name suggests, topic modelling deals with the discovery and extraction of topics from a collection of documents. These ‘topics’ can then be used to infer the theme of each document and, finally, to cluster documents that share a common theme.

Data

The data comes from the FinTech blogging site letstalkpayments (LTP). Generally speaking, the blogs on this site deal with a company, event, place, or product related to the FinTech industry. We treat this collection of blog posts as a collection of documents.

Why topic modelling

The documents we got from LTP are unlabeled, i.e., they carry no labels or classes that could be used to train a supervised model, so our problem reduces to clustering. There are many clustering techniques, such as k-means, but we do not use k-means because our aim is not only to cluster the documents but also to find the class or topic that makes documents in the same cluster similar.

Topic Modelling

The technique we use for topic modelling is Latent Dirichlet Allocation (LDA). The intuition behind this technique is that words which frequently appear together in similar documents belong under a single topic. For example, if a document is about online payment processing, we might expect words like ‘Google Wallet’, ‘PayPal’, or ‘merchant’, but not words like ‘region’ or ‘insurance’. These frequently co-occurring words can then be grouped under a single topic. Based on the frequency with which these words appear in a document, LDA can assign a relevance score for each topic to that document. So when a document is passed through a trained LDA model, it outputs an LDA vector, which is nothing but a topic distribution over that document.
For example, below is a typical LDA vector returned by a gensim LDA model. The LDA vector is a collection of (topic id, score) tuples:

[(0, 0.036421599137288853), (2, 0.019968505065152336), (8, 0.069562120707093708), (10, 0.19510710012291829), (13, 0.17358284526186246), (14, 0.013598365411651589), (15, 0.32516362281978356), (17, 0.10580934184113006), (18, 0.035494535506624673)]

Below, each topic ID is translated into its topic, shown as its most probable words:

'0.053*"cash"', '0.028*"information"', '0.251*"card"', '0.080*"market"', '0.203*"payment"', '0.105*"company"', '0.030*"industry"', '0.121*"app"', '0.036*"experience"'

Data Preprocessing

Data preprocessing is an important step in getting the data ready for training an LDA model. LDA requires the data to be relatively noise-free, and the topics should not change much as long as a reasonable number of documents belong to each topic. As a preprocessing step, we first remove stopwords. We do this because stopwords occur very frequently in English text, so there is a high chance that most of the topics would otherwise be dominated by them. NLTK provides a stopword list, but it is not comprehensive enough, so we augment it with words from the website Stop Word List 1.
The next step is removing punctuation from the text, since punctuation also has a high chance of being detected as a topic word; a standard punctuation list (for example, Python's string.punctuation) can be used for this. Next, we pass the words through a lemmatizer to get the root form of each word, using the lemmatizer provided by NLTK or Stanford CoreNLP. This step ensures that the same word in different inflected forms is not detected as separate topic words.
Another step, which we added later after going through a few training runs, is keeping only the nouns. This can be done with a POS tagger, which tags each word in the input sentence with its part of speech; we then remove every word that is not a noun.
For example, the sentence “The payments industry in the US has been steadily rising as the hottest and most financially attractive segment within FinTech.” becomes “payments industry US segment FinTech”.
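
A minimal sketch of this preprocessing pipeline using NLTK (the exact output can differ slightly from the example above, depending on the tokenizer and tagger models used):

import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time download of the NLTK resources used below.
for pkg in ["punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"]:
    nltk.download(pkg, quiet=True)

STOPWORDS = set(stopwords.words("english"))   # augment with an external stopword list as described above
PUNCTUATION = set(string.punctuation)
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    """Tokenize, drop stopwords and punctuation, keep only nouns, and lemmatize."""
    tokens = word_tokenize(text)
    tokens = [t for t in tokens if t.lower() not in STOPWORDS and t not in PUNCTUATION]
    tagged = nltk.pos_tag(tokens)                        # (word, part-of-speech) pairs
    nouns = [w for w, tag in tagged if tag.startswith("NN")]
    return [lemmatizer.lemmatize(w) for w in nouns]

print(preprocess("The payments industry in the US has been steadily rising as the "
                 "hottest and most financially attractive segment within FinTech."))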

Training LDA model

Before training, we first need to construct a document-term frequency matrix. To do this, we build a dictionary using Gensim's built-in function and use it to construct a bag-of-words representation, i.e., the words in each text together with their frequencies. We can then build the document-term matrix from these bag-of-words vectors.
We use the gensim library and Gensim's implementation of LDA. The LDA model takes the document-term matrix as input. The other parameters we need to specify are the number of passes (important for the model to converge), the dictionary, the number of topics, update_every (the number of batches after which to update the model), chunksize (the size of the batches into which the documents are divided), and eval_every (the number of batches after which to compute the model's perplexity). For the rest of the parameters, we keep the default values.
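
Putting the pieces together, here is a minimal training sketch with gensim. The documents and parameter values are illustrative placeholders, not the exact settings used for the LTP corpus, and preprocess is the function sketched in the preprocessing section above:

from gensim import corpora, models

# A few raw snippets stand in for the blog corpus (illustrative only).
raw_docs = [
    "Mobile payments and digital wallets are reshaping the cards industry.",
    "The insurance market is adopting apps for claims and customer experience.",
    "Merchants see cash usage decline as payment apps gain market share.",
]
texts = [preprocess(doc) for doc in raw_docs]

dictionary = corpora.Dictionary(texts)                     # word <-> id mapping
doc_term_matrix = [dictionary.doc2bow(t) for t in texts]   # bag-of-words per document

lda_model = models.LdaModel(
    corpus=doc_term_matrix,
    id2word=dictionary,
    num_topics=5,         # number of topics
    passes=50,            # passes over the corpus, important for convergence
    update_every=1,       # update the model after every batch
    chunksize=2000,       # number of documents per batch
    eval_every=10,        # compute perplexity every 10 batches
    alpha="symmetric",    # document-topic prior; "asymmetric" and "auto" were also tried
)

print(lda_model.print_topics(num_topics=5, num_words=5))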

Result

We trained models on the complete set of documents as well as on subsets of it. We varied the number of passes, the number of topics, and the alpha type (asymmetric, auto, and symmetric).
Below are some of the 50 topics the model discovered after running for 50 passes on the complete data:
