I have found words that appear only once but are nevertheless very important, and in the text I am currently analyzing there seem to be quite a lot of them. I would like to be able to extract them as keywords, but I don't know how to handle a case like this.
Personally, I suspect this is not something that can be solved by parameter tuning alone, but if anyone has ideas, could you please share them?
```python
from sklearn.feature_extraction.text import TfidfVectorizer

t_vec = TfidfVectorizer(token_pattern=r"(?u)\b\w\w+\b",
                        max_df=MAX_DF, min_df=MIN_DF, sublinear_tf=SUBLINEAR_TF)
```
Quoted from the TfidfVectorizer documentation below:
input : string {'filename', 'file', 'content'}
If 'filename', the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If 'file', the sequence items must have a 'read' method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of string or bytes items that are analyzed directly.

encoding : string, 'utf-8' by default.
If bytes or files are given to analyze, this encoding is used to decode.

decode_error : {'strict', 'ignore', 'replace'} (default='strict')
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is 'strict', meaning that a UnicodeDecodeError will be raised. Other values are 'ignore' and 'replace'.

strip_accents : {'ascii', 'unicode', None} (default=None)
Remove accents and perform other character normalization during the preprocessing step. 'ascii' is a fast method that only works on characters that have a direct ASCII mapping. 'unicode' is a slightly slower method that works on any characters. None (default) does nothing. Both 'ascii' and 'unicode' use NFKD normalization from unicodedata.normalize.

lowercase : boolean (default=True)
Convert all characters to lowercase before tokenizing.

preprocessor : callable or None (default=None)
Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps.

tokenizer : callable or None (default=None)
Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.

analyzer : string, {'word', 'char', 'char_wb'} or callable
Whether the feature should be made of word or character n-grams. Option 'char_wb' creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input. Changed in version 0.21: since v0.21, if input is filename or file, the data is first read from the file and then passed to the given callable analyzer.

stop_words : string {'english'}, list, or None (default=None)
If a string, it is passed to _check_stop_list and the appropriate stop list is returned. 'english' is currently the only supported string value. There are several known issues with 'english' and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms.

token_pattern : string
Regular expression denoting what constitutes a "token", only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator).

ngram_range : tuple (min_n, max_n) (default=(1, 1))
The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.

max_df : float in range [0.0, 1.0] or int (default=1.0)
When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.

min_df : float in range [0.0, 1.0] or int (default=1)
When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.

max_features : int or None (default=None)
If not None, build a vocabulary that only considers the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.

vocabulary : Mapping or iterable, optional (default=None)
Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents.

binary : boolean (default=False)
If True, all non-zero term counts are set to 1. This does not mean outputs will have only 0/1 values, only that the tf term in tf-idf is binary. (Set idf and normalization to False to get 0/1 outputs.)

dtype : type, optional (default=float64)
Type of the matrix returned by fit_transform() or transform().

norm : 'l1', 'l2' or None, optional (default='l2')
Each output row will have unit norm, either:
* 'l2': Sum of squares of vector elements is 1. The cosine similarity between two vectors is their dot product when l2 norm has been applied.
* 'l1': Sum of absolute values of vector elements is 1. See preprocessing.normalize.

use_idf : boolean (default=True)
Enable inverse-document-frequency reweighting.

smooth_idf : boolean (default=True)
Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions.

sublinear_tf : boolean (default=False)
Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
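For reference, here is a minimal sketch (the toy corpus and the word "rare" are made up for illustration) of what the smooth_idf formula quoted above gives a word with document frequency 1: with smooth_idf=True, idf(t) = ln((1 + n) / (1 + df(t))) + 1, so a word occurring in only one document gets a finite, fairly modest boost.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: "rare" occurs once, in exactly one of the three documents.
docs = [
    "apple banana apple",
    "banana orange",
    "apple rare banana",
]

vec = TfidfVectorizer()          # smooth_idf=True, norm='l2' are the defaults
vec.fit(docs)

n = len(docs)
df_rare = 1                      # document frequency of "rare"
manual_idf = np.log((1 + n) / (1 + df_rare)) + 1   # smooth_idf formula
fitted_idf = vec.idf_[vec.vocabulary_["rare"]]
print(manual_idf, fitted_idf)    # both ~1.6931
```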
Regarding "words that appear only once": what text are these words from, and how is that text related to the "text I am currently analyzing"? Also, by what criterion are you judging a word's importance (how can you decide that a word appearing only once is "very important")? Even the first sentence of your question raises too many open points for me to follow it. Could you explain a bit more clearly?
In the tf-idf calculation, tf is the frequency with which a word appears within a given document; I mean that this frequency is one. I am assuming the case where, despite occurring only once, the word is an important one that characterizes that document. To raise that word's tf-idf value, either the tf value or the idf value has to be adjusted, and my question is how to do that. For example, one could append the same word again to raise its frequency, or square the idf to enlarge the tf-idf value, but then the tf-idf values of other words that had high frequency before the adjustment end up dropping, and it doesn't work well. That is the situation I am stuck in: what should I do? (A rough sketch of what I mean is shown below.)
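To make the "adjust tf or idf" idea concrete, here is a minimal sketch rather than a definitive answer: fit a normal TfidfVectorizer without row normalization, scale the matrix columns for a hand-picked keyword list, and re-normalize afterwards. The corpus, the important_terms list and the boost factor are placeholders made up for illustration; the "square the idf" variant mentioned above would correspond to using weights = vec.idf_ instead of a fixed factor.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

# Placeholder corpus and keyword list (in practice these come from the actual
# text being analyzed and from domain knowledge).
docs = [
    "apple banana apple banana cherry",
    "banana orange banana",
    "apple durian banana",          # "durian" appears only once in the corpus
]
important_terms = ["durian"]

vec = TfidfVectorizer(norm=None)    # postpone row normalization until after boosting
X = vec.fit_transform(docs)         # (n_docs, n_terms), entries are tf * idf

# Column-wise weights: 1.0 everywhere, boosted for the hand-picked terms.
boost = 3.0                         # arbitrary factor to be tuned
weights = np.ones(len(vec.vocabulary_))
for term in important_terms:
    idx = vec.vocabulary_.get(term)
    if idx is not None:
        weights[idx] *= boost

# Scale the chosen columns, then re-apply L2 normalization per document.
# Note: after re-normalization the other terms in the same document shrink
# proportionally, which is exactly the trade-off described in the question.
X_boosted = normalize(X.multiply(weights), norm="l2")
```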