
[NLP in Practice] A Hands-On Guide to fastText Text Classification

  • March 13, 2020
  • Notes

Preface

Today's tutorial is based on FAIR's Bag of Tricks for Efficient Text Classification[1], better known as fastText.

Best of all, the paper comes with the fastText toolkit. The code quality is very high, the paper's results can be reproduced with a single command, and the packaging is now quite professional: there is an official fastText website and a GitHub repository, as well as a Python interface that can be installed directly via pip. A model this accurate and this fast is a genuine workhorse for real projects.
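Before we reimplement anything, here is a tiny sketch of what using the official Python bindings looks like. The file name train.txt and the hyperparameters are placeholders; the expected input format is one example per line, prefixed with __label__:

# pip install fasttext
import fasttext

# train.txt (placeholder path): one labeled example per line, e.g.
#   __label__positive this movie was great
model = fasttext.train_supervised(input='train.txt', lr=1.0, epoch=25, wordNgrams=2)

print(model.predict('this movie was great'))  # -> (labels, probabilities)
model.save_model('fasttext_clf.bin')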

To better understand how fastText works, we will now reimplement it ourselves. Note that our code only implements the simplest version, averaging the word vectors, and does not use bigram vectors, so the classification results of this homemade version will be lower than those of Facebook's open-source library.

Paper Overview

❝We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

The passage quoted above, taken from the paper, shows how the authors themselves assess fastText's performance.

The model in this paper is extremely simple; if you are familiar with word2vec, you will notice that it closely resembles the CBOW architecture.

In this model, the input is a sentence, and the input features x_1 through x_N are its words or n-grams. Each feature corresponds to a vector; averaging these vectors yields the text vector, which is then used to predict the label. When there are only a few classes, a plain softmax is enough; when the number of labels is huge, hierarchical softmax is needed.
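To make this concrete, here is a toy numpy sketch of the forward pass, with random weights and made-up sizes, purely for intuition about the shapes involved:

import numpy as np

vocab_size, embed_dim, num_classes = 10000, 100, 5
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(vocab_size, embed_dim))   # word / n-gram embeddings
W = rng.normal(scale=0.1, size=(embed_dim, num_classes))  # classifier weights
b = np.zeros(num_classes)

def predict(token_ids):
    """Average the token embeddings, then apply a linear layer plus softmax."""
    text_vec = E[token_ids].mean(axis=0)      # the "text vector"
    logits = text_vec @ W + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

print(predict([3, 42, 7, 999]))  # toy example with random weights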

The model really is that simple, so there is not much more to say about it. Two tricks from the paper are worth mentioning:

  • Hierarchical softmax: when the number of classes is large, a Huffman coding tree is built to speed up the softmax layer, the same trick used in word2vec (see the first sketch after this list).
  • N-gram features: using only unigrams discards word-order information, so n-gram features are added to compensate, and hashing is used to keep their storage bounded (see the second sketch after this list).
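To give a feel for the first trick, here is a small, self-contained sketch (not fastText's actual implementation) that builds a Huffman code over made-up label frequencies; frequent labels get short codes, so the expected number of binary decisions per prediction drops from O(K) to roughly O(log K):

import heapq

def huffman_codes(label_freqs):
    """Build a Huffman code for each label from its frequency.
    Frequent labels get shorter codes, i.e. shorter root-to-leaf paths
    in the tree used by hierarchical softmax."""
    heap = [(freq, i, (label,)) for i, (label, freq) in enumerate(label_freqs.items())]
    heapq.heapify(heap)
    codes = {label: '' for label in label_freqs}
    tie_breaker = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        for label in left:           # left subtree gets a leading 0
            codes[label] = '0' + codes[label]
        for label in right:          # right subtree gets a leading 1
            codes[label] = '1' + codes[label]
        heapq.heappush(heap, (f1 + f2, tie_breaker, left + right))
        tie_breaker += 1
    return codes

print(huffman_codes({'sports': 50, 'tech': 30, 'finance': 15, 'travel': 5}))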
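And a sketch of the second trick: word ids come straight from the vocabulary, while each bigram is hashed into a fixed number of extra buckets, so memory stays bounded no matter how many distinct n-grams occur. The bucket size and the crc32 hash below are illustrative stand-ins, not what fastText actually uses:

import zlib

def add_ngram_hashes(tokens, vocab, bucket=2_000_000, n=2):
    """Return word ids plus hashed n-gram ids sharing one embedding table.
    Indices 0..len(vocab)-1 are words; the next `bucket` slots hold hashed n-grams."""
    ids = [vocab[w] for w in tokens if w in vocab]
    for i in range(len(tokens) - n + 1):
        gram = ' '.join(tokens[i:i + n])
        ids.append(len(vocab) + zlib.crc32(gram.encode('utf-8')) % bucket)
    return ids

vocab = {'the': 0, 'movie': 1, 'was': 2, 'great': 3}
print(add_ngram_hashes('the movie was great'.split(), vocab))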

Looking at the experimental section of the paper, it is striking that such a simple model achieves such strong results!

That said, some have pointed out that the datasets chosen in the paper are not very sensitive to word order, so the reported results are less surprising than they first appear.

Code Implementation

After reading this stripped-down version, do go look at the original source code. As in earlier posts in this series, we define a fastTextModel class and then build the network: the input/output placeholders, the loss, the training step, and so on.

import os
import time
import datetime

import numpy as np
import tensorflow as tf
from tensorflow.contrib import learn

# NOTE: data_process (data loading / batching), BaseModel and the FLAGS definitions
# come from the accompanying repository; the import paths below are placeholders.
import data_process
from base_model import BaseModel


class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer

        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name='input_x')
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name='input_y')

        self.global_step = tf.Variable(0, trainable=False, name='global_step')
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()

        self.predictions = tf.argmax(self.logits, axis=1, name='predictions')
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'), name='accuracy')

    def instantiate_weight(self):
        # embedding matrix plus the projection (classifier) layer
        with tf.name_scope('weights'):
            self.Embedding = tf.get_variable('Embedding', shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable('W_projection', shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable('b_projection', shape=[self.num_classes])

    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope('embedding'):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            # the text vector is simply the mean of the word vectors
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)

        logits = tf.matmul(self.average_embedding, self.W_projection) + self.b_projection

        return logits

    def loss(self):
        # cross-entropy loss plus L2 regularization on non-bias variables
        with tf.name_scope('loss'):
            losses = tf.nn.softmax_cross_entropy_with_logits(
                labels=tf.cast(self.input_y, tf.float32),  # cast one-hot labels to float for the op
                logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if 'bias' not in cand_var.name])
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss

    def train(self):
        with tf.name_scope('train'):
            # exponentially decayed learning rate, optimized with Adam
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)

            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                       learning_rate=learning_rate, optimizer='Adam')
        return train_op
def preprocess():
    """
    Load and preprocess the data.
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)

    # build vocabulary
    max_document_length = max(len(x.split(' ')) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))

    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]

    # split train/dev dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled

    print('Vocabulary Size: {:d}'.format(len(vocab_processor.vocabulary_)))
    print('Train/Dev split: {:d}/{:d}'.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev


def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device that has the operation implemented
            allow_soft_placement=FLAGS.allow_soft_placement,
            # allows TensorFlow to log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize the model
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                                     num_classes=y_train.shape[1],
                                     vocab_size=len(vocab_processor.vocabulary_),
                                     embedding_size=FLAGS.embedding_size,
                                     l2_reg_lambda=FLAGS.l2_reg_lambda,
                                     is_training=True,
                                     learning_rate=FLAGS.learning_rate,
                                     decay_steps=FLAGS.decay_steps,
                                     decay_rate=FLAGS.decay_rate)

            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, 'run', timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print('Writing to {}\n'.format(out_dir))

            # checkpoint dir. checkpointing = saving the model parameters to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, 'model')
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)

            # write vocabulary
            vocab_processor.save(os.path.join(out_dir, 'vocab'))

            # initialize all variables
            sess.run(tf.global_variables_initializer())

            def train_step(x_batch, y_batch):
                """
                A single training step.
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))

            def dev_step(x_batch, y_batch):
                """
                Evaluate the model on the dev set.
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results: {}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))

            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print('\nEvaluation:')
                    dev_step(x_dev, y_dev)
                    print('')

            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print('Saved model checkpoint to {}\n'.format(path))


def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = preprocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)


if __name__ == '__main__':
    tf.app.run()
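The script above reads its hyperparameters and paths from a FLAGS object that is not shown in the post. A hypothetical set of definitions with tf.app.flags might look like the following; the flag names are taken from the code, but every default value and path is purely illustrative:

# Hypothetical flag definitions matching the names used above; defaults are illustrative.
flags = tf.app.flags
flags.DEFINE_string('positive_data_file', './data/rt-polarity.pos', 'Path to the positive examples')
flags.DEFINE_string('negative_data_file', './data/rt-polarity.neg', 'Path to the negative examples')
flags.DEFINE_float('dev_sample_percentage', 0.1, 'Fraction of the data used for validation')
flags.DEFINE_integer('embedding_size', 100, 'Dimensionality of the word embeddings')
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate')
flags.DEFINE_integer('decay_steps', 1000, 'Steps between learning-rate decays')
flags.DEFINE_float('decay_rate', 0.9, 'Learning-rate decay factor')
flags.DEFINE_float('l2_reg_lambda', 0.0001, 'L2 regularization strength')
flags.DEFINE_integer('batch_size', 64, 'Batch size')
flags.DEFINE_integer('num_epochs', 10, 'Number of training epochs')
flags.DEFINE_integer('validate_every', 100, 'Evaluate on the dev set every this many steps')
flags.DEFINE_string('ckpt_dir', 'checkpoints', 'Checkpoint directory (relative to out_dir)')
flags.DEFINE_integer('num_checkpoints', 5, 'Number of checkpoints to keep')
flags.DEFINE_boolean('allow_soft_placement', True, 'Allow soft device placement')
flags.DEFINE_boolean('log_device_placement', False, 'Log device placement')
FLAGS = flags.FLAGS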

References

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End