chinexp's Issues
State of the art
What do you think is the state of the art?
What should I be looking for? Keywords?
Datasets with low-level metrics as dependent variables
I think this is not possible, since these datasets are time series. We can predict a time series based on configurations.
Comments?
Progressive sampling
- Split dataset into train (40%), buffer (WHAT on test) (X%), test ((60-X)%)
- While budget > 0 and MRE > specified:
-- WHAT on train
-- Test on buffer
-- Remove the data points from train
@chsu6 what do you think?
Compare against a randomly selected 13(X)% of the data. (A rough sketch of this loop is below.)
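A rough sketch of that loop, in Python. Assumptions not settled in this issue: a CART regression tree stands in for the learner (WHAT), MRE is the mean magnitude of relative error, and the batch size per step is arbitrary.

```python
# Sketch only: learner, split sizes, and batch size are placeholders, not decisions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def mre(y_true, y_pred):
    """Mean magnitude of relative error, in percent."""
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def progressive_sample(X, y, budget=20, target_mre=10.0, step=5, seed=0):
    """X, y: numpy arrays of configuration features and measured performance."""
    rng = np.random.RandomState(seed)
    # 40% train pool, then the rest split into buffer and test (X vs 60-X is left open here).
    X_pool, X_rest, y_pool, y_rest = train_test_split(X, y, train_size=0.4, random_state=seed)
    X_buf, X_test, y_buf, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=seed)

    train_idx = list(rng.choice(len(X_pool), size=step, replace=False))
    err = np.inf
    while budget > 0 and err > target_mre:
        model = DecisionTreeRegressor(random_state=seed).fit(X_pool[train_idx], y_pool[train_idx])
        err = mre(y_buf, model.predict(X_buf))                     # test on buffer
        remaining = [i for i in range(len(X_pool)) if i not in train_idx]
        if not remaining:
            break
        # move a new batch out of the train pool (i.e. remove those points from it)
        train_idx += list(rng.choice(remaining, size=min(step, len(remaining)), replace=False))
        budget -= 1                                                # each new batch costs budget
    return model, mre(y_test, model.predict(X_test)), len(train_idx)
```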
Outlier detection technique
filename, median_err, IQR, #points
1_tp_read.csv 13.1855 0.22125 32.0
2_tp_write.csv 5.269 0.194 32.0
3_tp_read.csv 29.3525 0.51125 64.0
4_tp_write.csv 23.113 1.409 64.0
ds101_ops_read.csv 11.5135 0.157 56.0
ds101_ops_write.csv 5.7015 0.54175 56.0
ds101_rt_read.csv 13.945 0.38025 56.0
ds101_rt_write.csv 5.8155 0.09325 56.0
ds101_tp_read.csv 11.822 0.503 56.0
ds101_tp_write.csv 5.963 0.37025 56.0
Research Questions
- Accuracy vs Size
-- Random (strawman)
-- WHAT
-- PCA (maybe)
-- KNN - Variance in prediction
- Granularity vs less Data
- Model vs less Data
How can we build a smarter configuration space?
Should we include the AT&T folks?
Would it give us leverage?
It would definitely help us with the writing... they would be better than both of us combined.
What are the other things we can try?
- granularity
- lower level metrics
What more?
Code peer review
Choose multiple points from each cluster (number of clusters = N**0.5); a rough sketch of the selection follows the table below.
# of points from each cluster, filename, median_err, IQR, total #points
1 1_tp_read.csv 39.071 30.65225 16.0
2 1_tp_read.csv 27.329 7.163 32.0
3 1_tp_read.csv 9.22 1.37175 48.0
4 1_tp_read.csv 7.9295 0.7765 64.0
5 1_tp_read.csv 7.576 0.2955 80.0
6 1_tp_read.csv 7.166 1.12225 96.0
1 2_tp_write.csv 37.025 44.542 16.0
2 2_tp_write.csv 12.2045 7.07525 32.0
3 2_tp_write.csv 2.748 0.2755 48.0
4 2_tp_write.csv 3.211 0.25675 64.0
5 2_tp_write.csv 2.3765 0.46325 80.0
6 2_tp_write.csv 3.7185 0.111 96.0
1 3_tp_read.csv 36.172 0.0655 32.0
2 3_tp_read.csv 28.4085 0.9045 64.0
3 3_tp_read.csv 24.015 0.37275 96.0
4 3_tp_read.csv 23.055 1.2245 128.0
5 3_tp_read.csv 18.33 0.4865 160.0
6 3_tp_read.csv 17.0995 0.95 192.0
1 4_tp_write.csv 24.8375 1.9895 32.0
2 4_tp_write.csv 21.545 2.638 64.0
3 4_tp_write.csv 15.0815 0.6425 96.0
4 4_tp_write.csv 13.013 0.651 128.0
5 4_tp_write.csv 9.504 1.00425 160.0
6 4_tp_write.csv 9.945 0.779 192.0
1 ds101_ops_read.csv 32.885 5.50125 32.0
2 ds101_ops_read.csv 11.785 0.31 64.0
3 ds101_ops_read.csv 10.78 0.3 96.0
4 ds101_ops_read.csv 9.195 1.632 128.0
5 ds101_ops_read.csv 7.916 0.92 160.0
6 ds101_ops_read.csv 7.5665 0.154 192.0
1 ds101_ops_write.csv 13.38 2.393 32.0
2 ds101_ops_write.csv 6.387 0.5035 64.0
3 ds101_ops_write.csv 5.245 0.12825 96.0
4 ds101_ops_write.csv 4.329 0.18475 128.0
5 ds101_ops_write.csv 3.759 0.11925 160.0
6 ds101_ops_write.csv 3.626 0.05525 192.0
1 ds101_rt_read.csv 26.143 2.07975 32.0
2 ds101_rt_read.csv 17.8945 2.57825 64.0
3 ds101_rt_read.csv 20.085 2.705 96.0
4 ds101_rt_read.csv 16.3085 0.522 128.0
5 ds101_rt_read.csv 15.601 0.4065 160.0
6 ds101_rt_read.csv 11.886 0.4085 192.0
1 ds101_rt_write.csv 8.185 0.3815 32.0
2 ds101_rt_write.csv 4.786 0.242 64.0
3 ds101_rt_write.csv 4.653 0.33675 96.0
4 ds101_rt_write.csv 3.748 0.20925 128.0
5 ds101_rt_write.csv 3.5 0.204 160.0
6 ds101_rt_write.csv 3.391 0.21275 192.0
1 ds101_tp_read.csv 31.2015 7.03175 32.0
2 ds101_tp_read.csv 11.18 0.99175 64.0
3 ds101_tp_read.csv 10.686 0.415 96.0
4 ds101_tp_read.csv 8.251 1.731 128.0
5 ds101_tp_read.csv 7.947 0.032 160.0
6 ds101_tp_read.csv 8.078 0.095 192.0
1 ds101_tp_write.csv 13.256 0.0 32.0
2 ds101_tp_write.csv 6.7505 0.64575 64.0
3 ds101_tp_write.csv 5.667 0.093 96.0
4 ds101_tp_write.csv 4.656 0.15125 128.0
5 ds101_tp_write.csv 4.142 0.03775 160.0
6 ds101_tp_write.csv 4.219 0.02075 192.0
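For reference, one way the selection above could look in code. The clusterer (k-means over the configuration features) and the random within-cluster picks are my assumptions; the issue only fixes the cluster count at N**0.5 and the number of points taken per cluster (first column of the table).

```python
# Sketch: cluster into roughly N**0.5 clusters, then benchmark a fixed number
# of points from each cluster (the first column in the table above).
import numpy as np
from sklearn.cluster import KMeans

def sample_per_cluster(X, points_per_cluster=1, seed=0):
    """X: numpy array of configuration features; returns row indices to benchmark."""
    rng = np.random.RandomState(seed)
    n_clusters = max(2, int(round(len(X) ** 0.5)))     # number of clusters = N**0.5
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    picked = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if len(members) == 0:
            continue
        take = min(points_per_cluster, len(members))   # 1..6 points from each cluster
        picked.extend(rng.choice(members, size=take, replace=False))
    return np.array(picked)
```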
Brute force to find the optimal solution
Here are the results of running thousands of iterations with random sampling. In this particular case, I use 60% of the data as the candidate configurations (from which random sampling selects points) and 40% of the data for verification. The preliminary results are attached below. The quick conclusion is that there are chances here: if we add more redundant (or less significant) data points, we will see even more potential to save benchmark costs, as long as we can find the best sampling approach. (A sketch of this sweep follows the results below.)
filename min_MAPE max_MAPE mean_MAPE median_MAPE std_MAPE
1_tp_read.csv 0.1408 6.3953 0.4771 0.4089 0.3855
2_tp_write.csv 0.0565 3.9908 0.1394 0.1298 0.1309
3_tp_read.csv 0.2916 0.7721 0.4176 0.4115 0.0577
4_tp_write.csv 0.1596 0.4444 0.2774 0.2755 0.0495
ds101_ops_read.csv 0.1349 0.6511 0.3125 0.2998 0.0821
ds101_ops_write.csv 0.0598 0.3745 0.1360 0.1285 0.0431
ds101_rt_read.csv 0.1409 0.4408 0.2459 0.2398 0.0488
ds101_rt_write.csv 0.0425 0.3033 0.0889 0.0802 0.0297
ds101_tp_read.csv 0.1356 0.6269 0.2922 0.2771 0.0866
ds101_tp_write.csv 0.0598 0.3782 0.1239 0.1156 0.0398
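A sketch of the sweep described above. The CSV layout (feature columns followed by a performance column), the sample size, and the CART learner are my assumptions, not taken from this thread.

```python
# Sketch: thousands of random samples from a 60% candidate pool, verified on the other 40%.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def mape(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def brute_force(csv_path, n_iterations=1000, sample_size=20, seed=0):
    df = pd.read_csv(csv_path)
    X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values    # assumed column layout
    # 60% candidate pool (where random sampling selects data), 40% for verification.
    X_pool, X_ver, y_pool, y_ver = train_test_split(X, y, train_size=0.6, random_state=seed)
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_iterations):
        idx = rng.choice(len(X_pool), size=min(sample_size, len(X_pool)), replace=False)
        model = DecisionTreeRegressor(random_state=seed).fit(X_pool[idx], y_pool[idx])
        scores.append(mape(y_ver, model.predict(X_ver)))
    s = np.array(scores)
    return {"min": s.min(), "max": s.max(), "mean": s.mean(),
            "median": np.median(s), "std": s.std()}

# e.g. print(brute_force("1_tp_read.csv"))
```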
Check the stability of the results
Ideas?
http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
Look at Figure 1. This is what I was telling you for "Chi"
Use only 50% of the data for WHAT
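One hedged way to read "check the stability" together with "use only 50% of the data": repeat the run on random 50% subsets and report the median and IQR of the error, mirroring the median_err / IQR columns used elsewhere in these notes. The regression-tree learner here is my assumption.

```python
# Sketch: stability of the error over repeated random 50% subsets of the data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def stability(X, y, repeats=20):
    """X, y: numpy arrays; returns (median, IQR) of the relative error over repeats."""
    errs = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5, random_state=seed)
        pred = DecisionTreeRegressor(random_state=seed).fit(X_tr, y_tr).predict(X_te)
        errs.append(100.0 * np.mean(np.abs(y_te - pred) / np.abs(y_te)))  # error in percent
    errs = np.array(errs)
    return np.median(errs), np.percentile(errs, 75) - np.percentile(errs, 25)
```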
Are there any conferences we are trying to target?
- Names and deadlines, please
New Datasets
Outlier detection with number of nodes = N**0.5
filename, median_err, IQR, #points
1_tp_read.csv 15.962 1.41725 16.0
2_tp_write.csv 10.884 0.0 16.0
3_tp_read.csv 37.392 1.59 32.0
4_tp_write.csv 33.58 1.084 32.0
ds101_ops_read.csv 12.585 2.7105 32.0
ds101_ops_write.csv 6.9705 0.703 32.0
ds101_rt_read.csv 16.712 0.2455 32.0
ds101_rt_write.csv 5.3815 0.4845 32.0
ds101_tp_read.csv 13.761 1.72 32.0
ds101_tp_write.csv 6.9095 0.435 32.0
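The issue does not say how the outliers are found; below is a sketch under the assumption that the "nodes" are k-means clusters (k = N**0.5) and that points far from their cluster centroid are flagged.

```python
# Sketch: flag points that sit unusually far from their cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def flag_outliers(X, z_cut=2.0, seed=0):
    """X: numpy array of features; returns a boolean mask of suspected outliers."""
    k = max(2, int(round(len(X) ** 0.5)))               # number of nodes/clusters = N**0.5
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return dist > dist.mean() + z_cut * dist.std()
```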
What are tp, rt, and ops?
Should we try more datasets?
Other benchmarks?
Choose multiple points from clusters ((n/2)**0.5)