
Mcformer: An end-to-end model to predict customer purchase intention

Inventors: Jiahui Feng (冯家辉), Qihang Zhao (赵启航), Hefu Liu (刘和福), Wei Chen (陈伟)

  • This algorithm was developed by members of the IOM Lab at the School of Management, USTC.
  • Mcformer is an end-to-end deep learning method that uses customer clickstream data to predict purchase intention.
  • We aim to use customer clickstream data to predict customer purchase intention; the application scenario is illustrated in the figure below.

The framework of Mcformer

Figure 1: Framework of Mcformer.

  • Introduction of Mcformer
  • To deal with multi-dimensional clickstream sequence data, we proposed an end-to-end deep learning model, named Multi-channel for purchase transformer (Mcformer), to predict customers' purchase intention. Figure 1 shows the model architecture of Mcformer. The model is composed of four parts: an embedding layer, a multi-transformer layer, a cross-fusion layer, and an output layer. The embedding layer embeds the sparse one-hot vectors of the behavior data into dense vectors. The multi-channel transformers then capture the intra-sequence information of each sequence. The cross-fusion layer is then applied to capture the inter-sequence information across the different sequences. Finally, Mcformer outputs the result through a multilayer perceptron.
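As a minimal PyTorch sketch of this four-part design (not the repository's implementation; the layer sizes, concatenation-based fusion, and mean pooling are assumptions), the snippet below wires one embedding and one transformer encoder per channel, a cross-fusion encoder over the concatenated channels, and an MLP head:

```python
import torch
import torch.nn as nn


class McformerSketch(nn.Module):
    """Sketch: embedding -> per-channel transformer -> cross fusion -> MLP."""

    def __init__(self, vocab_sizes, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Embedding layer: one table per clickstream channel (e.g. item, category, type).
        self.embeddings = nn.ModuleList(
            nn.Embedding(v, d_model, padding_idx=0) for v in vocab_sizes
        )
        # Multi-transformer layer: one encoder per channel for intra-sequence information.
        self.encoders = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                num_layers=n_layers,
            )
            for _ in vocab_sizes
        )
        # Cross-fusion layer over the concatenated channels for inter-sequence
        # information (concatenation-based fusion is an assumption).
        self.fusion = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Output layer: a small multilayer perceptron.
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_classes)
        )

    def forward(self, channels):
        # channels: list of LongTensors, each of shape (batch, seq_len).
        encoded = [
            enc(emb(x))
            for emb, enc, x in zip(self.embeddings, self.encoders, channels)
        ]
        fused = self.fusion(torch.cat(encoded, dim=1))  # (batch, total_len, d_model)
        return self.mlp(fused.mean(dim=1))              # pool over time, then classify


# Toy usage: three channels (item, category, type) with small vocabularies.
model = McformerSketch(vocab_sizes=[100, 20, 6])
batch = [torch.randint(1, v, (8, 50)) for v in [100, 20, 6]]
print(model(batch).shape)  # torch.Size([8, 2])
```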

Requirements

  • sklearn
  • pandas
  • pytorch
  • cuda

Run the project

```bash
python train.py
```

Data

Our data is real-world data from https://tianchi.aliyun.com/dataset/dataDetail?dataId=649. The dataset needs preprocessing, which takes a long time. If you need the preprocessed data to verify our model, you can contact us at [email protected]

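As a rough illustration of the kind of preprocessing involved (not the repository's actual pipeline; the file name, column names, and output format are assumptions), the sketch below groups a raw behavior log into per-user item/category/type sequences with pandas:

```python
# Rough illustration only: column names and output format are assumptions,
# not the repository's actual preprocessing pipeline.
import pandas as pd

# Hypothetical raw log with one click per row.
log = pd.read_csv(
    "UserBehavior.csv",
    names=["user_id", "item_id", "category_id", "behavior_type", "timestamp"],
)

log = log.sort_values(["user_id", "timestamp"])

# One sequence per user and per channel (item / category / behavior type).
sequences = log.groupby("user_id").agg(
    items=("item_id", list),
    categories=("category_id", list),
    types=("behavior_type", list),
)
print(sequences.head())
```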

Modify files to ensure the code works

If you want to use your own data, you have to provide:

  • the number of items: ni
  • the number of categories: nc
  • the number of types: nt
  • the number of hours/minutes: nh

You need to modify the files as follows (see the sketch after the modelconf.py entries for one way to derive these values):

modelconf.py

  • cat_pad_unk = [[0, nc+1, nc+2], [0, nh+1, nh+2]]
  • item_pad_unk = [[0, ni+1, ni+2], [0, nh+1, nh+2]]
  • type_pad_unk = [[0, nt+1, nt+2], [0, nh+1, nh+2]]
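As a sketch of how the counts ni, nc, nt, nh and the pad/unk lists above could be derived from your own clickstream table (the file name and column names are assumptions, not the repository's preprocessing):

```python
import pandas as pd

df = pd.read_csv("clickstream.csv")  # hypothetical preprocessed file

ni = df["item_id"].nunique()        # number of items
nc = df["category_id"].nunique()    # number of categories
nt = df["behavior_type"].nunique()  # number of behavior types
nh = 24                             # number of hour buckets, if hours are used

# Index 0 is reserved for padding and n+1 / n+2 for the special tokens,
# matching the [[0, n+1, n+2], [0, nh+1, nh+2]] pattern in modelconf.py.
cat_pad_unk = [[0, nc + 1, nc + 2], [0, nh + 1, nh + 2]]
item_pad_unk = [[0, ni + 1, ni + 2], [0, nh + 1, nh + 2]]
type_pad_unk = [[0, nt + 1, nt + 2], [0, nh + 1, nh + 2]]
```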

data.py

Update the file location so that it points to your dataset.

train.py

If you want to visualize the training process, you should change the corresponding parameters.
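One possible way to visualize training (an assumption for illustration, not necessarily what train.py's parameters enable) is to log metrics to TensorBoard:

```python
# Illustration only: the log directory and tag names are assumptions.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/mcformer")  # hypothetical log directory

for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)               # placeholder value
    writer.add_scalar("loss/train", train_loss, epoch)
writer.close()
# Then inspect with: tensorboard --logdir runs
```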

The results

  • The results show that Mcformer achieves strong performance on long-sequence classification tasks.
  • The best parameters are stored in the file named optimizpara, so you can use our best parameters directly.

Parameter
