
Comments (14)

sv09 commented on May 22, 2024

Thanks a lot @pabloduque0, that helps!

pabloduque0 commented on May 22, 2024

Hello @sv09! Great question!

The main difference is that predict assumes the prediction is made for data after the training period, and therefore seeds the lagging with the training data, whereas .trace["mu"] starts with lagging 0.

Essentially, if you want the predictions over the training period, .trace["mu"] is what you are looking for; if you want to run predictions after the training period, predict is there for you.
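For illustration, a rough sketch of the two paths (the fitted model, the scalers and new_media_scaled / new_extra_features_scaled below are placeholders for your own objects, not something specific from your setup):

    # In-sample fit: posterior samples of mu over the training period,
    # shape (n_posterior_samples, n_training_weeks). These live in the
    # scaled space if a target scaler was used during preprocessing.
    in_sample_fit = mmm.trace["mu"]
    in_sample_point_estimate = in_sample_fit.mean(axis=0)

    # Predictions after the training period: the lagging/carryover is seeded
    # with the tail of the training media, so pass media that follows the
    # training window, scaled with the same media scaler used for training.
    out_of_sample = mmm.predict(media=new_media_scaled,
                                extra_features=new_extra_features_scaled,
                                target_scaler=target_scaler)
    out_of_sample_point_estimate = out_of_sample.mean(axis=0)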

Hope that helps!

sv09 commented on May 22, 2024

Hi @pabloduque0, thank you for that clarification!
I'm using the entire dataset for training/prediction and for budget allocation. In the budget allocation function, I think mmm.predict is used, which makes 'kpi_without_optim' not match the actual target value.
So, in this case, instead of setting the 'kpi_without_optim' argument to the value returned by 'optimize_media.find_optimal_budgets', should I set it to the actual target value from my data?

Replace 'kpi_without_optim' in the function below -

    plot.plot_pre_post_budget_allocation_comparison(
        media_mix_model=mmm,
        kpi_with_optim=solution['fun'],
        kpi_without_optim=kpi_without_optim,
        optimal_buget_allocation=optimal_buget_allocation,
        previous_budget_allocation=previous_budget_allocation,
        figure_size=(12, 10),
        channel_names=channel_names)

with -

    plot.plot_pre_post_budget_allocation_comparison(
        media_mix_model=mmm,
        kpi_with_optim=solution['fun'],
        kpi_without_optim=-1 * (total actual target value from the data),
        optimal_buget_allocation=optimal_buget_allocation,
        previous_budget_allocation=previous_budget_allocation,
        figure_size=(12, 10),
        channel_names=channel_names) ?

pabloduque0 commented on May 22, 2024

Yes, I see where you are going with this one.

I can see why you might want kpi_without_optim without the lagging.

However, to compare pre- and post-optimisation results we need a fair comparison: kpi_without_optim should be evaluated under the same conditions as solution['fun'], even if that leaves kpi_without_optim slightly off from the historic data. What we are comparing is what the model outputs for the average historic allocation versus the optimised media allocation.

We have been thinking for some time about making the lagging optional in predict, which would allow running the optimisation with or without it. But it should then be applied to both the optimised and non-optimised predictions, or to neither; removing it only from kpi_without_optim would lead to a biased comparison.
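To illustrate, this is roughly the comparison being made (the unpacking order below follows the library's demo usage; please double-check it against the version you have installed):

    solution, kpi_without_optim, previous_budget_allocation = (
        optimize_media.find_optimal_budgets(
            n_time_periods=n_time_periods,
            media_mix_model=mmm,
            budget=budget,
            prices=prices,
            target_scaler=target_scaler,
            media_scaler=media_scaler))

    # Both figures are model predictions produced under identical conditions
    # (same training-data lagging, same scalers). The optimiser minimises the
    # negative predicted KPI, hence the sign flip when reading them.
    predicted_kpi_pre_optim = -1 * kpi_without_optim
    predicted_kpi_post_optim = -1 * solution['fun']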

WDYT? :)

sv09 commented on May 22, 2024

For my current run of 'optimize_media.find_optimal_budgets' I'm getting kpi_without_optim = -7246.9756, whereas the total actual target value in my data is 43957.92.
That is a huge difference, and I would expect the kpi_without_optim value to be closer to the actual target value. Is there anything I'm missing here?

pabloduque0 commented on May 22, 2024

@sv09 how many n_time_periods are you optimising for, and how many weeks did your training data have?

sv09 commented on May 22, 2024

n_time_periods = 105 (weeks). The training data also has 105 weeks of data.

pabloduque0 commented on May 22, 2024

Then my next guess is the budget. What budget are you passing to the optimisation, and what would the total budget of the training data be (if we made the equivalent calculation)?
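In case it helps, the equivalent figure for the training period would be roughly this (assuming media_data is your unscaled training media, weeks by channels, and prices holds the per-unit cost of each channel):

    import jax.numpy as jnp

    # Per-week, per-channel spend summed over the whole training window.
    training_period_budget = float(jnp.sum(media_data * prices))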

sv09 commented on May 22, 2024

I'm passing budget = 1337057.4 to 'optimize_media.find_optimal_budgets', and the total budget of the training data is 1306110.91.

pabloduque0 commented on May 22, 2024

Thanks for your input, let me investigate this one and get back to you.

pabloduque0 commented on May 22, 2024

@sv09 I'm trying to replicate the behaviour with mock data and I am unable to reproduce the same big discrepancies.

Optimising over the total period of the data gives me a difference of about 1.5% when including the extra features and around 3% when not including them, which we can attribute to the lagging.

The next thing I can think of is that you have highly influential extra features that you might not have included in the optimisation. Could that be the case?

If that is not the case, could you try to reproduce it in a colab with mock data that you can send my way?
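In case it is useful as a starting point, this is roughly the kind of mock-data setup I mean (sizes and sampler settings are illustrative, and argument names may differ slightly between versions):

    import jax.numpy as jnp
    from lightweight_mmm import lightweight_mmm, optimize_media, preprocessing, utils

    # 105 weeks of simulated data, matching the setup discussed above.
    media_data, extra_features, target, costs = utils.simulate_dummy_data(
        data_size=105, n_media_channels=3, n_extra_features=2)

    # Scale everything as in the library's demo.
    media_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
    extra_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
    target_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
    cost_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
    media_scaled = media_scaler.fit_transform(media_data)
    extra_scaled = extra_scaler.fit_transform(extra_features)
    target_scaled = target_scaler.fit_transform(target)
    costs_scaled = cost_scaler.fit_transform(costs)

    mmm = lightweight_mmm.LightweightMMM()
    mmm.fit(media=media_scaled,
            media_prior=costs_scaled,  # called total_costs in older releases
            target=target_scaled,
            extra_features=extra_scaled,
            number_warmup=200, number_samples=200)

    # Optimise over the full 105-week window with a budget close to the
    # historic spend (unit prices for the mock data).
    prices = jnp.ones(media_data.shape[1])
    budget = float(jnp.sum(media_data))
    solution, kpi_without_optim, previous_budget_allocation = (
        optimize_media.find_optimal_budgets(
            n_time_periods=105,
            media_mix_model=mmm,
            budget=budget,
            prices=prices,
            extra_features=extra_scaled,
            target_scaler=target_scaler,
            media_scaler=media_scaler))

    # Pre-optimisation prediction vs the actual historic target: a small
    # relative gap is expected because of the lagging.
    pre_optim_prediction = -1 * kpi_without_optim
    relative_gap = abs(pre_optim_prediction - float(target.sum())) / float(target.sum())
    print(relative_gap)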

sv09 commented on May 22, 2024

Thank you @pabloduque0 for that! Yes, I didn't include extra features in the optimisation. I'll include them, see how the results look, and let you know.

sv09 commented on May 22, 2024

I ran the budget optimization with extra_features, and now the pre-optimization predicted target value is much closer to the actual target value. However, if I use extra features for the optimization, I have to set n_time_periods to the number of weeks in the training data/extra features (since I'm using the whole dataset for training); otherwise I get an error: "TypeError: add got incompatible shapes for broadcasting:". I was wondering if there is any way to include extra features while still being able to use a varying n_time_periods for the budget optimization.

pabloduque0 commented on May 22, 2024

Okay, I think at this point it is more about what you want to optimise and how you want to do it. My suggestion of adding the extra features only comes from the expectation that the pre-optimisation predicted target should match the historic target. Generally you can optimise without extra features; they only affect the absolute value of the predictions but do not alter the media mix (they are purely additive).

In reality, when it comes to optimisation you would either estimate your extra features (if possible) or not include them. After all, what you are looking at is the media optimisation itself; the absolute value predicted by the optimisation is not what we are really focusing on as its output, and if the model has a decent fit those values will be within a reasonable ballpark.

Therefore, when estimating your extra features (again, if possible), you can estimate them for whatever length you want to optimise for.
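A rough sketch of what that could look like, assuming you want to optimise 13 weeks, extra_features_scaled holds the scaled training extra features, and budget_for_13_weeks / prices / the scalers are your own objects (the mean of the most recent weeks is just the simplest placeholder forecast; any forecast of your choosing works):

    import jax.numpy as jnp
    from lightweight_mmm import optimize_media

    n_time_periods = 13

    # Forecast the extra features for the optimisation window, here by
    # repeating the mean of the most recent weeks (same scale as at fit time).
    recent = extra_features_scaled[-n_time_periods:]
    extra_features_forecast = jnp.tile(recent.mean(axis=0), (n_time_periods, 1))

    solution, kpi_without_optim, previous_budget_allocation = (
        optimize_media.find_optimal_budgets(
            n_time_periods=n_time_periods,
            media_mix_model=mmm,
            budget=budget_for_13_weeks,  # planned spend for the 13 weeks
            prices=prices,
            extra_features=extra_features_forecast,  # must have n_time_periods rows
            target_scaler=target_scaler,
            media_scaler=media_scaler))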

Hope it helps.
