
text-antispam's Introduction

A Production-Grade Spam Text Classifier

Notes:

The spam text classifier uses TensorFlow version 2.2.0.

TensorLayer 2.0+ is required; installing from the GitHub source is recommended.


Task Scenario

Text anti-spam is a very common task in online communities. Driven by various interests, online communities inevitably attract large volumes of spam such as harassment, pornography, and scams, which disrupt community order and hurt the user experience. These messages are often obscure and constantly changing, and traditional rule-based systems such as regular-expression keyword matching struggle to keep up. In practice, text anti-spam also relies on user behavior analysis; this chapter discusses only the text-content side.

To evade platform detection, spam text often hides keywords using tricks such as "Mars script" (火星文, deliberately substituted look-alike characters). For example:

渴望 兂 极限 激情 恠 燃烧 加 涐 嶶 信 lovexxxx521
亲爱 的 看 頭潒 约
私人 企鹅 ⓧⓧⓧ㊆㊆⑧⑧⑧ 给 你 爽 你 懂 的

Spam text also tends to carry multiple contact methods for diverting users off-platform. Detecting abnormal contact information is an important part of anti-spam work, but traditional detection relies on a large number of hand-crafted rules, creates heavy attack-and-defense pressure, and is easily circumvented. For example:

自啪 试平 n 罗辽 婊研 危性 xxxx447
自啪 试平 n 罗辽 婊研 危性 xxxxx11118
自啪 试平 n 罗辽 婊研 危性 xxxx2323

In this example we will use TensorLayer to train a spam text classifier and show how to serve it with high performance through TensorFlow Serving for production deployment. The classifier solves the problems above: we no longer need to worry about how obscure the spam is, what language it uses, or how many contact methods it carries.

Step 1: train the word vectors. The code is in the word2vec folder; see word2vec/README.md for the execution steps.

Step 2: train the classifier. The code is in the network folder; see network/README.md for the execution steps.

Step 3: interact with TensorFlow Serving. The client code is in the serving folder.

Network Architecture

Text classification must first solve the problem of text representation, which plays an important role in natural language processing. Its goal is to map variable-length text (sentences, paragraphs, documents) to fixed-length vectors, and the quality of these text vectors directly affects downstream model performance. Neural text representation usually takes two steps: first map each word to a word vector, then combine the word vectors. Several models can combine word vectors into a text vector, such as the Neural Bag-of-Words (NBOW) model, the Recurrent Neural Network (RNN), and the Convolutional Neural Network (CNN). These models take a sequence of word vectors as input and encode the semantics of the text into a fixed-length vector. NBOW is simple and fast, and combined with a multilayer fully connected network it can match the classification accuracy of RNNs and CNNs; its drawback is that linearly summing vectors inevitably loses much word-to-word information and cannot express sentence semantics at a finer granularity. CNNs are also widely used in language modeling; there, convolution extracts local semantic combinations from the sentence, and multiple convolution kernels ensure diversity among the extracted combinations. RNNs are commonly used for time-series data; they accept inputs of arbitrary length and are among the most popular architectures in NLP. For short-text classification, their drawback relative to NBOW and CNNs is longer computation time.
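To make the NBOW idea above concrete, here is a minimal NumPy sketch: word vectors are simply averaged into one fixed-length text vector. The vocabulary and vector values here are invented for illustration (the repository itself trains 200-dimensional Word2vec vectors).

```python
import numpy as np

# Hypothetical 3-dimensional word vectors for a tiny vocabulary
# (real models, like the one in this repo, use e.g. 200 dimensions).
embeddings = {
    "加": np.array([0.9, 0.1, 0.0]),
    "微信": np.array([0.8, 0.2, 0.1]),
    "有": np.array([0.1, 0.5, 0.4]),
    "福利": np.array([0.7, 0.3, 0.2]),
}

def nbow(tokens):
    """Neural Bag-of-Words: average the word vectors into one fixed-length text vector."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

text_vector = nbow(["加", "微信", "有", "福利"])
print(text_vector.shape)  # (3,) -- fixed length regardless of input length
```

Note how the output dimensionality depends only on the word-vector size, never on the text length; this is exactly why the linear sum loses word-order information.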

In this example we use an RNN to represent the text: the input token sequence is mapped to a fixed-length vector by an RNN layer, and that text vector is then fed into a Softmax layer for classification. At the end of the chapter we briefly introduce a classifier built from NBOW plus a Multilayer Perceptron (MLP), as well as a CNN classifier. In practice, all three classifiers reach accuracies above 97%. As shown in Figure 1, compared with the roughly 93% accuracy of a previously trained SVM classifier, the neural-network-based spam classifiers perform remarkably well.


Figure 1: Word2vec and Dynamic RNN

Word Vector Representations

The simplest word representation is the one-hot representation: each word is a very long vector whose dimensionality equals the vocabulary size, with a single dimension set to 1 (identifying the word) and all others set to 0. This representation is very simple but suffers from the curse of dimensionality and cannot describe relationships between words. Another approach is the distributed representation, such as Word2vec, which represents each word as a dense, low-dimensional real-valued vector. The vector can be seen as the word's position in an N-dimensional space, where similar words lie close together. Because training uses each word's context, the word vectors produced by Word2vec naturally carry some syntactic and semantic features; each dimension represents a latent feature of the word, and spatial distance describes word-to-word similarity.
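A small NumPy sketch of the contrast described above. All vectors are invented for illustration, not taken from a trained model; the point is only that one-hot vectors are pairwise orthogonal, while dense vectors can place an obfuscated variant such as 危性 near 微信.

```python
import numpy as np

vocab = ["微信", "危性", "电话", "天气"]

def one_hot(word):
    """One-hot: a |V|-dimensional vector with a single 1 marking the word."""
    v = np.zeros(len(vocab))
    v[vocab.index(word)] = 1.0
    return v

# Any two distinct one-hot vectors are orthogonal: they cannot express
# that 危性 is an obfuscated spelling of 微信.
print(one_hot("微信") @ one_hot("危性"))  # 0.0

# Hypothetical dense (distributed) vectors place related words close together.
dense = {
    "微信": np.array([0.81, 0.12, 0.40]),
    "危性": np.array([0.78, 0.15, 0.38]),  # obfuscated variant, nearby
    "天气": np.array([0.05, 0.90, 0.11]),  # unrelated word, far away
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(dense["微信"], dense["危性"]) > cos(dense["微信"], dense["天气"]))  # True
```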

The most representative Word2vec models are the CBOW model and the Skip-Gram model. Figure 2 illustrates the Skip-Gram training process. Suppose the window size is 1: by sliding the window we obtain input-output pairs such as (fox, brown) and (fox, jumps). After enough iterations, when we feed in fox again, the probabilities of jumps and brown will be markedly higher than those of other words. The matrix W1 between the input layer and the hidden layer stores the word vector of every word, so the computation from input layer to hidden layer amounts to looking up a word's vector. Because the training objective drives similar words toward similar contexts, the hidden-layer outputs (i.e., the word vectors) of similar words grow closer and closer during optimization. After training we save W1 (the set of word vectors) for downstream tasks.


Figure 2: Word2vec training process
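The sliding-window pair generation described above can be sketched in a few lines of plain Python (the sentence and the helper name skipgram_pairs are illustrative only; real training would feed these pairs to a Word2vec implementation):

```python
def skipgram_pairs(tokens, window=1):
    """Generate (center, context) training pairs for Skip-Gram."""
    pairs = []
    for i, center in enumerate(tokens):
        # Every token within `window` positions of the center is a context word.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = ["the", "quick", "brown", "fox", "jumps", "over"]
pairs = skipgram_pairs(sentence, window=1)
print(("fox", "brown") in pairs and ("fox", "jumps") in pairs)  # True
```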

Dynamic RNN Classifier

Traditional neural networks such as MLPs are limited to fixed-size inputs and static input-output relationships, which makes modeling dynamic systems difficult. They assume all inputs are independent of one another; in their directed acyclic architecture, neurons across layers do not interact over time, so problems where successive inputs are related are hard to handle. Yet many real-world problems present themselves as dynamic systems: the current state of a thing depends on its previous states. One can split a long time span into equal-length windows and compute over the contents of each window, but the dependencies and variation inside such windows are too numerous, and the window size is hard to choose. A commonly used RNN variant today is the LSTM; it differs from a standard RNN in that its hidden-unit computation is more sophisticated, which gives the RNN a stronger memory.

Another problem arises when training an RNN: variable-length sequences can span a wide range of lengths. A Static RNN builds its Graph only once, so before training all inputs must be padded to keep every batch the same length throughout the iterations; the input length is then dictated by the longest sequence in the training set, and much computation is wasted on the padding. A Dynamic RNN builds its Graph dynamically, so different batches may have different lengths, and computation on padding can be skipped: each batch only needs to be padded to the length of its own longest sequence, and computation stops at each sequence's true length, saving both memory and computation.
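A minimal NumPy sketch of per-batch padding, assuming 200-dimensional word vectors as used in this repository (the helper name pad_batch is ours, not from the repo code): each sequence is padded only to the longest length within its own batch, and the true lengths are kept so the RNN can stop computing early.

```python
import numpy as np

def pad_batch(batch, dim=200):
    """Pad each sequence of word vectors to the longest length in this batch,
    and record the true lengths so the RNN can skip the padded steps."""
    lengths = [len(seq) for seq in batch]
    max_len = max(lengths)
    padded = np.zeros((len(batch), max_len, dim), dtype=np.float32)
    for i, seq in enumerate(batch):
        padded[i, :len(seq)] = seq
    return padded, np.array(lengths)

# Three sequences of lengths 6, 3, 5 with hypothetical 200-d word vectors.
batch = [np.random.rand(n, 200) for n in (6, 3, 5)]
padded, lengths = pad_batch(batch)
print(padded.shape, lengths.tolist())  # (3, 6, 200) [6, 3, 5]
```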

Figure 3 illustrates the training process of the Dynamic RNN classifier. Sequences 1, 2, and 3 are fed into the network as one batch whose longest sequence has length 6, so the RNN Graph on the left, once unrolled, becomes the 6-hidden-layer network shown on the right; each layer's output enters the next layer together with the next word. Sequence 1 has length 6, so we take the 6th output as its embedding and feed it into the Softmax layer for classification. Sequence 2 has length 3, so we stop at the 3rd output and take it as that sequence's embedding for the Softmax layer. Likewise, Sequence 3 takes the 5th output as the Softmax input, completing one forward and backward pass.


Figure 3: Dynamic RNN training process
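The "take the output at each sequence's true length" step in Figure 3 can be sketched with NumPy indexing. The shapes are hypothetical (batch of 3, padded length 6, 64 hidden units, matching the figure and the repo's 64-unit LSTM); a real implementation would do the same selection on the RNN's output tensor.

```python
import numpy as np

def last_relevant(outputs, lengths):
    """For each sequence, select the RNN output at its true last time step,
    ignoring any outputs computed over padding."""
    batch_idx = np.arange(outputs.shape[0])
    return outputs[batch_idx, lengths - 1]

# Hypothetical RNN outputs: batch of 3, padded to 6 steps, 64 hidden units.
outputs = np.random.rand(3, 6, 64)
lengths = np.array([6, 3, 5])

emb = last_relevant(outputs, lengths)
print(emb.shape)  # (3, 64) -- one fixed-length embedding per sequence
```

These three embeddings are what the Softmax layer receives; sequence 2 contributes its 3rd output, exactly as described for Figure 3.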

text-antispam's People

Contributors

hanjr92, markchangliu, pakrchen, tr-buaa, zsdonghao


text-antispam's Issues

500: Internal Server Error

It still doesn't run for me: cnn_classfier and the RNN hit a ValueError and cannot proceed, and although mlp.classfier.py trains normally, after running serving_mlp.py the browser still cannot perform detection.
Requesting http://127.0.0.1/predict?text=加我微信XXXX有福利 still returns an error:
500: Internal Server Error
I'm not sure whether I'm doing something wrong in how I run it. I'd very much like to get detection of obfuscated spam text working; looking forward to a reply from the author!

Deployed via Docker; browser access for verification fails with "Attempting to use uninitialized value output_layer"

Accessing http://127.0.0.1/predict?text=%E5%8A%A0%E6%88%91%E5%BE%AE%E4%BF%A1xxxxx%E6%9C%89%E7%A6%8F%E5%88%A9
Error output:
ERROR:tornado.application:Uncaught exception GET /predict?text=%E5%8A%A0%E6%88%91%E5%BE%AE%E4%BF%A1xxxxx%E6%9C%89%E7%A6%8F%E5%88%A9 (127.0.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1', method='GET', uri='/predict?text=%E5%8A%A0%E6%88%91%E5%BE%AE%E4%BF%A1xxxxx%E6%9C%89%E7%A6%8F%E5%88%A9', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "E:\ProgramFiles\Python35\lib\site-packages\grpc\beta_client_adaptations.py", line 193, in _blocking_unary_unary
credentials=_credentials(protocol_options))
File "E:\ProgramFiles\Python35\lib\site-packages\grpc_channel.py", line 533, in call
return _end_unary_response_blocking(state, call, False, None)
File "E:\ProgramFiles\Python35\lib\site-packages\grpc_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Attempting to use uninitialized value output_layer/b
[[{{node output_layer/b/read}} = IdentityT=DT_FLOAT, _output_shapes=[[2]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]"
debug_error_string = "{"created":"@1541404487.436000000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"Attempting to use uninitialized value output_layer/b\n\t [[{{node output_layer/b/read}} = IdentityT=DT_FLOAT, _output_shapes=[[2]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]","grpc_status":9}"

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\ProgramFiles\Python35\lib\site-packages\tornado\web.py", line 1590, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "E:/docker/text-antispam-master/serving/serving.py", line 21, in get
predict = self.classify(text)
File "E:/docker/text-antispam-master/serving/serving.py", line 38, in classify

### After starting TensorFlow Serving:
docker@tfServing-docker:~$ docker run -it -p 9000:9000 -v /docker:/docker tensorflow/serving:latest-devel
root@0a2946e3e077:/tensorflow-serving# tensorflow_model_server --port=9000 --model_base_path=/docker/model --model_name=saved_model
2018-11-05 03:19:46.464088: I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config: model_name: saved_model model_base_path: /docker/model
2018-11-05 03:19:46.465177: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-11-05 03:19:46.466121: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: saved_model
2018-11-05 03:19:46.575415: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: saved_model version: 1}
2018-11-05 03:19:46.578178: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: saved_model version: 1}
2018-11-05 03:19:46.580695: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: saved_model version: 1}
2018-11-05 03:19:46.583750: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /docker/model/1
2018-11-05 03:19:46.585615: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /docker/model/1
2018-11-05 03:19:46.593028: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2018-11-05 03:19:46.615678: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
2018-11-05 03:19:46.617201: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:172] The specified SavedModel has no variables; no checkpoints were restored. File does not exist: /docker/model/1/variables/variables.index
2018-11-05 03:19:46.618343: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key legacy_init_op on SavedModel bundle.
2018-11-05 03:19:46.619315: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 33692 microseconds.
2018-11-05 03:19:46.620873: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /docker/model/1/assets.extra/tf_serving_warmup_requests
2018-11-05 03:19:46.627259: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: saved_model version: 1}
2018-11-05 03:19:46.630992: I tensorflow_serving/model_servers/server.cc:285] Running gRPC ModelServer at 0.0.0.0:9000 ...

### After the client starts serving.py:
Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32
runfile('E:/docker/text-antispam-master/serving/serving.py', wdir='E:/docker/text-antispam-master/serving')
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\ADMINI1\AppData\Local\Temp\jieba.cache
DEBUG:jieba:Loading model from cache C:\Users\ADMINI
1\AppData\Local\Temp\jieba.cache
Loading model cost 1.188 seconds.
DEBUG:jieba:Loading model cost 1.188 seconds.
Prefix dict has been built succesfully.
DEBUG:jieba:Prefix dict has been built succesfully.
分词 初始化
listen start

Where the code was modified:
host, port = ('192.168.99.100', '9000')
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'saved_model'

How is the sample corpus updated?

Could you explain how your sample corpus is updated? Do you feed sentences from real-world usage back into the corpus according to their final classification results?

No versions of servable default found under base path

When starting TensorFlow Serving I ran tensorflow_model_server --port=9000 --model_base_path=/home/ubuntu/workspace/GTXiao/text-antispam/network/output/rnn_model/1/variables
where model_base_path is the absolute path of the exported model, but I got the error: 2018-12-17 17:42:17.913282: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:268] No versions of servable default found under base path /home/ubuntu/workspace/GTXiao/text-antispam/network/output/rnn_model/1/variables

ValueError

版本1
python3.5.2
tensorflow == 2.2.0
tensorlayer == 2.2.1

版本2
python3.6.12
tensorflow == 2.2.0
tensorlayer == 2.2.3

错误信息
root@948eae80bec6:/opt/text-antispam/network# python3 rnn_classifier.py
2020-12-01 01:34:08.114620: I tensorflow/core/profiler/lib/profiler_session.cc:159] Profiler session started.
2020-12-01 01:34:08.121494: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-12-01 01:34:08,265 INFO Input input_layer: [None, None, 200]
2020-12-01 01:34:08.266184: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-12-01 01:34:08.266226: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (948eae80bec6): /proc/driver/nvidia/version does not exist
2020-12-01 01:34:08.266494: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-12-01 01:34:08.271857: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3192000000 Hz
2020-12-01 01:34:08.272342: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4e4fbf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-01 01:34:08.272372: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-12-01 01:34:08,280 INFO RNN rnn_1: cell: LSTMCell, n_units: 64
2020-12-01 01:34:08,387 INFO Dense dense: 2 softmax_v2
2020-12-01 01:34:08,417 INFO batch_size: 128
2020-12-01 01:34:08,417 INFO Start training the network...
Traceback (most recent call last):
File "rnn_classifier.py", line 445, in
train(model)
File "rnn_classifier.py", line 100, in train
range(max_seq_len - len(d))]
ValueError: operands could not be broadcast together with shapes (1,200) (0,) (1,200)

Recognition problems with short texts

Problem description

The CNN model performs quite well on long texts, but recognition of short texts is much worse. One reason is that short texts carry few features in the first place, but even after adding short texts to the training set, the results are still not particularly good. Is there any other way to handle this?

Actual results

酒店援交:{"data": {"text": "\u9152\u5e97\u63f4\u4ea4", "predict": 0}}

淫乱少妇:{"data": {"text": "\u6deb\u4e71\u5c11\u5987", "predict": 0}}

代f开发f票联系QQ3486693982:{"data": {"text": "\u4ee3f\u5f00\u53d1f\u7968\u8054\u7cfbQQ3486693982", "predict": 1}}

A question about obfuscated text in the training set

The training set contains no obfuscated texts. For instance, the article mentions that "微信" (WeChat) can be disguised as "危性"; how can the test-set accuracy then demonstrate accuracy on obfuscated text?

ValueError: cannot reshape array

Which versions of TensorFlow etc. are used here? I hit this problem during serving and would appreciate some guidance: ValueError: cannot reshape array of size 20 into shape (1,20,200)
