
blog_comments's Introduction

Hi there 👋🤓

  • 🌱 Focus on Natural Language Processing.
  • 🤔 Interested in everything about PL.


⭐️ From cairoHy

blog_comments's People

Watchers


blog_comments's Issues

Bilibili - 《Essence of Linear Algebra》 - Notes | new Cairo()

https://cairohy.github.io/2017/05/01/basic/%E7%BA%BF%E6%80%A7%E4%BB%A3%E6%95%B0%E7%9A%84%E6%9C%AC%E8%B4%A8-%E6%80%BB%E7%BB%93/

With some free time over the holiday I watched the 《Essence of Linear Algebra》 series on Bilibili; these are my notes on the key points. (The code for all the animations in the series is available at https://github.com/3b1b/manim.) The series teaches you to read and understand linear-algebra concepts from the perspective of transformations. 1. What is a vector? A vector is: in physics, a quantity determined by a direction and a length; in computer science, an ordered list of numbers, e.g. [1,2,3]^T; in mathematics, something more abstract that can be added…
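As a quick worked illustration of the "transformation" viewpoint the series emphasizes (my own example, not taken from the post): a matrix is determined by where it sends the basis vectors, and every other vector follows by linearity.

```latex
% Worked example (not from the post): a 2x2 matrix read as a transformation.
% The columns of A are the images of the basis vectors \hat{\imath} and \hat{\jmath}.
\[
A = \begin{pmatrix} 1 & -1 \\ 1 & 2 \end{pmatrix}, \qquad
A\hat{\imath} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
A\hat{\jmath} = \begin{pmatrix} -1 \\ 2 \end{pmatrix}.
\]
% Any other vector is carried along by linearity:
\[
A \begin{pmatrix} 3 \\ 2 \end{pmatrix}
  = 3\,A\hat{\imath} + 2\,A\hat{\jmath}
  = \begin{pmatrix} 3 - 2 \\ 3 + 4 \end{pmatrix}
  = \begin{pmatrix} 1 \\ 7 \end{pmatrix}.
\]
```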

PaperWeekly Knowledge Graph Reading Group - 《Modeling Relational Data with Graph Convolutional Networks》 | new Cairo()

https://cairohy.github.io/2017/04/27/deeplearning/KG-GCN-arXiv2017-%E3%80%8AModeling%20Relational%20Data%20with%20Graph%20Convolutional%20Networks%E3%80%8B/

1. Source: an arXiv paper published in March 2017, and this week's reading for the PaperWeekly knowledge graph reading group. 2. Problem and existing approaches: although knowledge graphs (knowledge bases) are widely used, even the largest ones are incomplete, so downstream applications (QA, IR) that rely on them need statistical relational learning (SRL). The paper argues that a knowledge graph should contain…

[David Silver's Reinforcement Learning Course] - 7: Policy Gradient | new Cairo()

https://cairohy.github.io/2017/09/06/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-7%EF%BC%9APolicy%20Gradient/

PLUG AND PLAY LANGUAGE MODELS: A SIMPLE APPROACH TO CONTROLLED TEXT GENERATION, https://arxiv.org/pdf/1912.02164.pdf, by Uber AI. Large-scale pretrained language models generate good text, but how can they be made to produce text with controllable attributes (a given domain, style, or sentiment)? Fine-tuning is one option; this paper proposes a…

《Zero-Shot Relation Extraction via Reading Comprehension》 | new Cairo()

https://cairohy.github.io/2017/06/28/deeplearning/NLP-RE-%E3%80%8AZero-Shot%20Relation%20Extraction%20via%20Reading%20Comprehension%E3%80%8B/

Two classic models for named entity recognition [repost] | new Cairo()

https://cairohy.github.io/2017/07/02/forward/%E4%B8%A4%E7%AF%87%E5%91%BD%E5%90%8D%E5%AE%9E%E4%BD%93%E8%AF%86%E5%88%AB%E7%9A%84%E6%97%A9%E6%9C%9F%E7%BB%8F%E5%85%B8%E6%A8%A1%E5%9E%8B/

Too hot outside - a skim of 《Recent Trends in Deep Learning Based Natural Language Processing》 | new Cairo()

https://cairohy.github.io/2017/08/17/deeplearning/NLP-%E3%80%8ARecent%20Trends%20in%20Deep%20Learning%20Based%20Natural%20Language%20Processing%E3%80%8B/

Five papers applying Memory Networks to RC and QA | new Cairo()

https://cairohy.github.io/2017/11/21/deeplearning/five%20papers%20about%20memory%20and%20machine%20reading%20comprehension/

[David Silver's Reinforcement Learning Course] - 5: Model-Free Control | new Cairo()

https://cairohy.github.io/2017/09/01/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-5%EF%BC%9AModel-Free%20Control/

[David Silver's Reinforcement Learning Course] - 8: Integrating Learning and Planning | new Cairo()

https://cairohy.github.io/2017/09/11/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-8%EF%BC%9AIntegrating%20Learning%20and%20Planning/

PaperWeekly Knowledge Graph Reading Group - 《Distant Supervision for Relation Extraction with Sentence-level Attention and Entity Descriptions》 | new Cairo()

https://cairohy.github.io/2017/05/10/deeplearning/NLP-RelationExtraction-AAAI2017-%E3%80%8ADistant%20Supervision%20for%20Relation%20Extraction%20with%20Sentence-level%20Attention%20and%20Entity%20Descriptions%E3%80%8B/

ORDERED NEURONS: INTEGRATING TREE STRUCTURES INTO RECURRENT NEURAL NETWORKS | new Cairo()

https://cairohy.github.io/2019/10/07/nlp/ORDERED-NEURONS-INTEGRATING-TREE-STRUCTURES-INTO-RECURRENT-NEURAL-NETWORKS/

[David Silver's Reinforcement Learning Course] - 1: RL-introduction | new Cairo()

https://cairohy.github.io/2017/08/28/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-1%EF%BC%9ARL-introduction/

AoA-Reader Implementation Notes - TensorFlow | new Cairo()

https://cairohy.github.io/2017/05/09/ml-coding-summarize/AoA-Reader%E5%9C%A8tensorflow%E4%B8%8B%E5%AE%9E%E7%8E%B0%E6%80%BB%E7%BB%93/

[David Silver's Reinforcement Learning Course] - 6: Value Function Approximation | new Cairo()

https://cairohy.github.io/2017/09/04/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-6%EF%BC%9AValue%20Function%20Appro/

[David Silver's Reinforcement Learning Course] - 3: DP | new Cairo()

https://cairohy.github.io/2017/08/30/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-3%EF%BC%9ADP/

[David Silver's Reinforcement Learning Course] - 2: MDP | new Cairo()

https://cairohy.github.io/2017/08/29/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-2%EF%BC%9AMDP/

When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions | new Cairo()

https://cairohy.github.io/2020/06/04/nlp/When%20Bert%20Forgets%20How%20To%20POS/

《AAAI2017-Joint Copying and Restricted Generation for Paraphrase》 | new Cairo()

https://cairohy.github.io/2017/03/09/deeplearning/NLP-AAAI2017-%E3%80%8AJoint-Copying-and-Restricted-Generation-for-Paraphrase%E3%80%8B/

1. Source: an AAAI 2017 paper on NLG (Natural Language Generation). 2. Problem and existing approaches: in paraphrase-oriented NLG there are two ways to produce output, Copy and Rewrite, but previous Seq2Seq models used only a single decoder. A Seq2Seq model, i.e. an encoder-decoder, encodes the source text into a context vector and then decodes it to generate…
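For reference, here is a minimal sketch of the plain encoder-decoder baseline the summary describes, written with tf.keras; it is not the paper's joint copying / restricted-generation model, and the vocabulary and layer sizes are illustrative assumptions.

```python
import tensorflow as tf  # sketch of a plain Seq2Seq baseline, not the paper's model

vocab_size, dim = 8000, 256  # illustrative sizes, not from the paper

# Encoder: compress the source sentence into a single context vector.
enc_tokens = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(vocab_size, dim)(enc_tokens)
_, context = tf.keras.layers.GRU(dim, return_state=True)(enc_emb)

# Decoder: generate the paraphrase token by token, conditioned on that context vector.
dec_tokens = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(vocab_size, dim)(dec_tokens)
dec_states = tf.keras.layers.GRU(dim, return_sequences=True)(dec_emb, initial_state=context)
logits = tf.keras.layers.Dense(vocab_size)(dec_states)

model = tf.keras.Model([enc_tokens, dec_tokens], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

As the summary notes, the paper's contribution is to go beyond this single-decoder setup with copying and restricted generation, which the sketch does not include.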

Solving two tasks at once - 《Question Answering and Question Generation as Dual Tasks》 | new Cairo()

https://cairohy.github.io/2017/06/19/deeplearning/NLP-QA&QG_Dual-%E3%80%8AQuestion%20Answering%20and%20Question%20Generation%20as%20Dual%20Tasks%E3%80%8B/

First place on SQuAD reading comprehension - 《R-net Machine Reading Comprehension With self-matching Networks》 | new Cairo()

https://cairohy.github.io/2017/05/18/deeplearning/NLP-RC-ACL2017-%E3%80%8AR-net%20Machine%20Reading%20Comprehension%20With%20self-matching%20Networks%E3%80%8B/

Weight Initialization for GRU and LSTM in TensorFlow | new Cairo()

https://cairohy.github.io/2017/05/05/ml-coding-summarize/Tensorflow%E4%B8%ADGRU%E5%92%8CLSTM%E7%9A%84%E6%9D%83%E9%87%8D%E5%88%9D%E5%A7%8B%E5%8C%96/

GRU and LSTM weight initialization: when writing a model you sometimes want the RNN's weight matrices initialized in a particular way, e.g. Xavier or orthogonal. In that case all you need is: cell = LSTMCell if self.args.use_lstm else GRUCell; with tf.variable_scope(initializer=tf.orthogonal_initiali…
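The preview cuts off mid-line; below is a minimal TensorFlow 1.x sketch of the same idea (the use_lstm flag, scope name, and sizes are stand-ins, not from the post). As I recall the TF 1.x behavior, variables created inside a tf.variable_scope that do not set their own initializer fall back to the scope-level one, so the cell's kernel ends up orthogonally initialized; for Xavier, swap in tf.glorot_uniform_initializer().

```python
import tensorflow as tf  # TensorFlow 1.x API, matching the snippet's style

use_lstm = True  # stand-in for the post's self.args.use_lstm flag
CellClass = tf.nn.rnn_cell.LSTMCell if use_lstm else tf.nn.rnn_cell.GRUCell

# Variables created in this scope without an explicit initializer inherit the
# scope-level orthogonal initializer, including the RNN cell's kernel.
with tf.variable_scope("rnn", initializer=tf.orthogonal_initializer()):
    cell = CellClass(num_units=128)
    inputs = tf.placeholder(tf.float32, [None, 20, 50])  # [batch, time, feature]
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```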

Using mnemonics to aid reading - 《Mnemonic Reader for Machine Comprehension》 | new Cairo()

https://cairohy.github.io/2017/05/23/deeplearning/NLP-RC-ArXiv2017-M-reader-%E3%80%8AMnemonic%20Reader%20for%20Machine%20Comprehension%E3%80%8B/

DRL-Dueling-《Dueling Network Architectures for Deep Reinforcement Learning》 | new Cairo()

https://cairohy.github.io/2017/08/31/deeplearning/DRL-Dueling-%E3%80%8ADueling%20Network%20Architectures%20for%20Deep%20Reinforcement%20Learning%E3%80%8B/

How RNN and Attention Are Implemented in TensorFlow | new Cairo()

https://cairohy.github.io/2017/06/05/ml-coding-summarize/Tensorflow%E7%9A%84RNN%E5%92%8CAttention%E7%9B%B8%E5%85%B3/

Finding answers in Wikipedia - 《Reading Wikipedia to Answer Open-Domain Questions》 | new Cairo()

https://cairohy.github.io/2017/07/09/deeplearning/NLP-QA-wiki-%E3%80%8AReading%20Wikipedia%20to%20Answer%20Open-Domain%20Questions%E3%80%8B/

Learning when to stop reading - 《ReasoNet Learning to Stop Reading in Machine Comprehension》 | new Cairo()

https://cairohy.github.io/2017/05/22/deeplearning/NLP-RC-ReasoNet-NIPS2016-%E3%80%8AReasoNet%20Learning%20to%20Stop%20Reading%20in%20Machine%20Comprehension%E3%80%8B/

How to train seq2seq NMT models - the paper 《Massive Exploration of Neural Machine Translation Architectures》 | new Cairo()

https://cairohy.github.io/2017/04/11/deeplearning/NLP-Hyperparams-train-arXiv2017-%E3%80%8AMassive%20Exploration%20of%20Neural%20Machine%20Translation%20Architectures%E3%80%8B/

1. Source: an arXiv 2017 paper in which Google Brain spent over 250,000 GPU hours studying how recurrent-network NMT models are trained and distilled some empirical guidelines. 2. Problem and existing approaches: NMT (neural machine translation) networks work well but often take days or even weeks to train, and adding hyperparameter search on top makes the training time unbearable. The existing approach is essentially alchemy, relying on the practitioner's tuning experience to keep costs down. Google Brain, using its own GPUs,…

[David Silver's Reinforcement Learning Course] - 4: Model-Free Prediction | new Cairo()

https://cairohy.github.io/2017/08/31/deeplearning/DRL/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%E5%85%AC%E5%BC%80%E8%AF%BE%E3%80%8B-4%EF%BC%9AModel-Free%20Prediction/

BP, RNN, and LSTM - reading notes on 《Supervised Sequence Labelling with Recurrent Neural Networks-2012》 | new Cairo()

https://cairohy.github.io/2016/11/23/machine-learning/BP-RNN-%E5%92%8C-LSTM%E6%9A%A8%E3%80%8ASupervised-Sequence-Labelling-with-Recurrent-Neural-Networks-2012%E3%80%8B%E9%98%85%E8%AF%BB%E7%AC%94%E8%AE%B0/

1. Backpropagation. $w_{jk}^l$: the weight of the connection from the k-th neuron in layer $l-1$ to the j-th neuron in layer $l$; $b_j^l$: the bias of the j-th neuron in layer $l$; $z_j^l$: the weighted input to the j-th neuron in layer $l$; $a_j^l$: the activation of the j-th neuron in layer $l$; $\sigma$: an activation function (sigmoid, relu, tanh); $z_j^l = \sum_k w_{jk}^l a_k^{l-1} + b_j^l$.
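Spelled out from these definitions (the standard backpropagation identities, written out here rather than quoted from the post), the forward pass and the usual error recursions are:

```latex
% Forward pass, from the definitions above.
\[
z_j^l = \sum_k w_{jk}^l\, a_k^{l-1} + b_j^l,
\qquad
a_j^l = \sigma\!\left(z_j^l\right).
\]
% Standard backpropagation recursions for the error \delta_j^l = \partial C / \partial z_j^l.
\[
\delta_j^L = \frac{\partial C}{\partial a_j^L}\,\sigma'\!\left(z_j^L\right),
\qquad
\delta_j^l = \Big( \sum_k w_{kj}^{l+1}\, \delta_k^{l+1} \Big)\, \sigma'\!\left(z_j^l\right),
\]
\[
\frac{\partial C}{\partial b_j^l} = \delta_j^l,
\qquad
\frac{\partial C}{\partial w_{jk}^l} = a_k^{l-1}\, \delta_j^l.
\]
```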

《MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension》 | new Cairo()

https://cairohy.github.io/2017/12/20/deeplearning/%E3%80%8AMeMEN-Multi-layer%20Embedding%20with%20Memory%20Networks%20for%20Machine%20Comprehension%E3%80%8B/
