jmbejara / comp-econ-sp18
Main Course Repository for Computational Methods in Economics (Econ 21410, Spring 2018)
When I download the zipped file from the link indicated, the file most similar to the one described is 1) a Microsoft CSV, not an xls file, 2) titled Pregnant_Women_Participating, with underscores, and 3) has different data rows and columns. What am I missing?
Do we have to match the ticks of the axes of your graphs?
Question 3 asks us to consider the values x ∈ (0, 150000), but linspace() starts at the initial point given, so if I write linspace(0, 150000, 100) I will have a problem because lognorm isn't defined at x = 0.
What should I choose for my initial point?
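One common workaround is to open the grid just above zero, since the lognormal density is only defined for x > 0. A minimal sketch, with placeholder shape and scale parameters (not necessarily the homework's):

```python
import numpy as np
from scipy.stats import lognorm

# Start slightly above 0 so the density is defined at every grid point.
grid = np.linspace(1e-6, 150_000, 100)
density = lognorm.pdf(grid, s=0.5, scale=np.exp(11))  # placeholder parameters
```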
In class we talked about using the built-in aggregation functions when calling DataFrame.aggregate. For example: df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
as described here: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.aggregate.html
I think this is the list of built-in "string" functions. Note that you can always supply a custom function. If anybody has any additional information on this, feel free to post.
Function | Description
---|---
count | Number of non-NA observations |
sum | Sum of values |
mean | Mean of values |
mad | Mean absolute deviation |
median | Arithmetic median of values |
min | Minimum |
max | Maximum |
mode | Mode |
abs | Absolute Value |
prod | Product of values |
std | Bessel-corrected sample standard deviation |
var | Unbiased variance |
sem | Standard error of the mean |
skew | Sample skewness (3rd moment) |
kurt | Sample kurtosis (4th moment) |
quantile | Sample quantile (value at %) |
cumsum | Cumulative sum |
cumprod | Cumulative product |
cummax | Cumulative maximum |
cummin | Cumulative minimum |
I've taken this list from the Pandas docs here: http://pandas.pydata.org/pandas-docs/stable/basics.html#descriptive-statistics
It seems to be corroborated here: https://www.shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/
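A quick, self-contained illustration of mixing these string names in one agg call (toy data, not a course dataset):

```python
import pandas as pd

# Each column gets its own list of built-in aggregations.
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
out = df.agg({'A': ['sum', 'min'], 'B': ['min', 'max']})
# Rows are the union of requested functions; cells not requested are NaN.
```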
I wrote this code to compress:
def compress(n):
    newU = u[0:768, 0:n]
    newV = v[0:n, 0:1024]
    u_n = newU
    v_n = newV
    s_n = np.eye(n, dtype=float) * np.diag(s[0:n])
Each time I run it, I do it this way:
compress(500)
pre = np.matmul(u_n, s_n)
face_compressed_500= np.matmul(pre, v_n)
plt.imshow(face_compressed_500, cmap=plt.cm.gray)
compress(300)
pre = np.matmul(u_n, s_n)
face_compressed_300= np.matmul(pre, v_n)
plt.imshow(face_compressed_300, cmap=plt.cm.gray)
But I get the exact same picture; furthermore, my storage sizes are only 200 apart.
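For what it's worth, one likely culprit is scope: names assigned inside compress are local, so module-level u_n/s_n/v_n never change between calls. A sketch that returns the pieces instead, using a random stand-in image rather than the course data:

```python
import numpy as np

def compress(u, s, v, n):
    # Keep the top-n singular vectors/values;
    # np.diag(s[:n]) already builds the diagonal matrix, no np.eye needed.
    return u[:, :n], np.diag(s[:n]), v[:n, :]

face = np.random.default_rng(0).normal(size=(768, 1024))  # stand-in image
u, s, v = np.linalg.svd(face, full_matrices=False)
u_n, s_n, v_n = compress(u, s, v, 300)
face_300 = u_n @ s_n @ v_n   # rank-300 approximation
```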
Hi,
What does "Set the index name in the dataset to wines" mean? Should we just add a column name above the index numbers, or should we change the index to the column "title"?
Thanks!
# TODO
import numpy as np
mean_grid = np.linspace(-.01, 2, num=100, endpoint=True)
#print(mean_grid)
mg = mean_grid
var_needed = np.empty(100)
j = 0
for i in mg:
    mup = i
    cons = [lambda f: (f.T @ mu) - mup, lambda f: (f.T @ np.full((5,1),1)) - 1]
    var = scipy.optimize.fmin_slsqp(lambda f: f.T @ sigma @ f, w, eqcons=cons)
    var_needed[j] = np.amin(var)
    j = j + 1
print(var_needed)
plt.plot(var_needed, mean_grid)
plt.xlabel('Portfolio Return Variance')
plt.ylabel('Mean Portfolio Return')
This returns a graph with a straight line...
please help
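Not certain this is the whole issue, but one thing to check: fmin_slsqp returns the optimal weight vector, so np.amin(var) stores the smallest weight, not the portfolio variance. A sketch with made-up mu and Sigma (not the homework's data):

```python
import numpy as np
import scipy.optimize

mu = np.array([0.05, 0.08, 0.12])      # hypothetical mean returns
Sigma = np.diag([0.04, 0.09, 0.16])    # hypothetical covariance matrix
w0 = np.full(3, 1/3)

frontier = []
for mup in np.linspace(0.05, 0.12, 20):
    cons = [lambda w, m=mup: w @ mu - m,  # default arg pins down this mup
            lambda w: w.sum() - 1]
    wstar = scipy.optimize.fmin_slsqp(lambda w: w @ Sigma @ w, w0,
                                      eqcons=cons, iprint=0)
    frontier.append(wstar @ Sigma @ wstar)  # variance AT the optimum
```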
When the question asks us to "describe what is surprising about the outcome of this simulation", should we be giving our personal reactions to the outcome?
I'm getting a 500 : Internal Server Error with the text below when I try to save creating_reading_writing as a pdf. I can still save other documents to pdf. Any idea what might be going wrong?
nbconvert failed: PDF creating failed, captured latex output:
This is XeTeX, Version 3.14159265-2.6-0.99998 (MiKTeX 2.9.6400 64-bit)
entering extended mode
("C:/Users/Jacob Bishop/Documents/Github/comp-econ-sp18/HW/hw-03/notebook.tex"
LaTeX2e <2017-04-15>
Babel <3.12> and hyphenation patterns for 73 language(s) loaded.
("C:\Program Files\MiKTeX 2.9\tex\latex\base\article.cls"
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
("C:\Program Files\MiKTeX 2.9\tex\latex\base\size11.clo"))
("C:\Program Files\MiKTeX 2.9\tex\latex\base\fontenc.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\base\t1enc.def")
("C:\Program Files\MiKTeX 2.9\tex\latex\lm\t1lmr.fd"))
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\mathpazo.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\graphics\graphicx.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\graphics\keyval.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\graphics\graphics.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\graphics\trig.sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\graphics-cfg\graph
ics.cfg") ("C:\Program Files\MiKTeX 2.9\tex\latex\graphics-def\xetex.def")))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\caption\caption.st
y"
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\caption\caption3.s
ty"))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\adjustbox\adjustbo
x.sty" ("C:\Program Files\MiKTeX 2.9\tex\latex\xkeyval\xkeyval.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\xkeyval\xkeyval.tex"
("C:\Program Files\MiKTeX 2.9\tex\generic\xkeyval\xkvutils.tex")))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\adjustbox\adjcalc.
sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\adjustbox\trimclip
.sty"
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\collectbox\collect
box.sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\adjustbox\tc-xetex
.def"))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\ifoddpage\ifoddpag
e.sty") ("C:\Program Files\MiKTeX 2.9\tex\latex\ltxmisc\varwidth.sty"))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\xcolor\xcolor.sty"
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\graphics-cfg\color
.cfg")) ("C:\Program Files\MiKTeX 2.9\tex\latex\tools\enumerate.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\geometry\geometry.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\ifpdf.sty")
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\ifvtex.sty")
("C:\Program Files\MiKTeX 2.9\tex\generic\ifxetex\ifxetex.sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\xelatex\xetexconfig\geom
etry.cfg")) ("C:\Program Files\MiKTeX 2.9\tex\latex\amsmath\amsmath.sty"
For additional information on amsmath, use the `?' option.
("C:\Program Files\MiKTeX 2.9\tex\latex\amsmath\amstext.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\amsmath\amsgen.sty"))
("C:\Program Files\MiKTeX 2.9\tex\latex\amsmath\amsbsy.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\amsmath\amsopn.sty"))
("C:\Program Files\MiKTeX 2.9\tex\latex\amsfonts\amssymb.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\amsfonts\amsfonts.sty"))
("C:\Program Files\MiKTeX 2.9\tex\latex\base\textcomp.sty"
("C:\Program Files\MiKTeX 2.9\tex\latex\base\ts1enc.def"))
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\upquote\upquote.st
y") ("C:\Program Files\MiKTeX 2.9\tex\latex\eurosym\eurosym.sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\ucs\ucs.sty"
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\ucs\uni-global.def
")) ("C:\Program Files\MiKTeX 2.9\tex\latex\base\inputenc.sty"
Package inputenc Warning: inputenc package ignored with utf8 based engines.
)
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\fancyvrb\fancyvrb.
sty"
Style option: `fancyvrb' v2.7a, with DG/SPQR fixes, and firstline=lastline fix
<2008/02/07> (tvz)) ("C:\Program Files\MiKTeX 2.9\tex\latex\oberdiek\grffile.st
y" ("C:\Program Files\MiKTeX 2.9\tex\latex\oberdiek\kvoptions.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\ltxcmds.sty")
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\kvsetkeys.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\infwarerr.sty")
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\etexcmds.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\ifluatex.sty"))))
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\pdftexcmds.sty")
Package grffile Warning: \Gread@QTm of xetex.def not found on input line 259.
Package grffile Warning: Unsupported \Gread@eps not patched on input line 303.
) ("C:\Program Files\MiKTeX 2.9\tex\latex\hyperref\hyperref.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\hobsub-hyperref.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\hobsub-generic.sty"))
("C:\Program Files\MiKTeX 2.9\tex\latex\oberdiek\auxhook.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\hyperref\pd1enc.def")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\00miktex\hyperref.
cfg") ("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\url\url.sty"
))
Package hyperref Message: Driver (autodetected): hxetex.
("C:\Program Files\MiKTeX 2.9\tex\latex\hyperref\hxetex.def"
("C:\Program Files\MiKTeX 2.9\tex\latex\hyperref\puenc.def")
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\stringenc.sty")
("C:\Program Files\MiKTeX 2.9\tex\latex\oberdiek\rerunfilecheck.sty"))
("C:\Program Files\MiKTeX 2.9\tex\latex\tools\longtable.sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\booktabs\booktabs.
sty")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\enumitem\enumitem.
sty") ("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\ulem\ulem.st
y")
No file notebook.aux.
("C:\Program Files\MiKTeX 2.9\tex\latex\base\ts1cmr.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\t1ppl.fd")
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\caption\ltcaption.
sty")
geometry detected driver: pdftex
geometry verbose mode - [ preamble ] result:
("C:\Users\Jacob Bishop\AppData\Roaming\MiKTeX\2.9\tex\latex\ucs\ucsencs.def")
("C:\Program Files\MiKTeX 2.9\tex\latex\hyperref\nameref.sty"
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\gettitlestring.sty"))
Package hyperref Warning: Rerun to get /PageLabels entry.
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\ot1ppl.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\omlzplm.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\omszplm.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\omxzplm.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\psnfss\ot1zplm.fd")
LaTeX Warning: No \author given.
("C:\Program Files\MiKTeX 2.9\tex\generic\oberdiek\se-ascii-print.def")
("C:\Program Files\MiKTeX 2.9\tex\latex\lm\t1lmtt.fd")
("C:\Program Files\MiKTeX 2.9\tex\latex\lm\ts1lmtt.fd")
LaTeX Warning: File `https://i.imgur.com/Ax3pp2A.png' not found on input line 3
12.
! Unable to load picture or PDF file 'https://i.imgur.com/Ax3pp2A.png'.
}
l.312 ...graphics{https://i.imgur.com/Ax3pp2A.png}
?
! Emergency stop.
}
l.312 ...graphics{https://i.imgur.com/Ax3pp2A.png}
No pages of output.
Transcript written on notebook.log.
Hi,
Are we supposed to use .gitignore for all the datasets? Is adding .gitignore to the end of the file name sufficient to ignore the file?
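If it helps, a small sketch (the paths are made up): git ignores files matched by patterns listed inside a file named .gitignore at the repo root; appending .gitignore to a data file's own name does nothing.

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
mkdir -p data && echo "dummy" > data/cps_extract.csv  # stand-in dataset
printf 'data/\n*.csv\n' > .gitignore                  # patterns, one per line
git status --porcelain    # only .gitignore shows up; data/ is ignored
```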
Thanks!
Hi Jeremy,
I can't find variable GQ on the IPUMS CPS. I am searching with the search engine but I keep getting a message saying "Search Terms is too short (minimum is 3 characters)". Then I search for "Group Quarters status" but I still can't find the right variable.
Do you have any guesses on what is wrong?
Thanks,
Bariscan
Can you post the links to the specific Quandl data sets you want us to use? I have found 4 separate GDP files and don't know which one to use.
Can the array for portfolio weight from our minimization have negative values? Does this mean that we short that stock?
I know which variables to select but have forgotten which samples. Right now I have selected only the 18 ASEC samples for the years 2000-2017 (I deselected all the monthly samples). This is about 130 MB in total, which seems wrong, because the sample output of df.info() suggests the correct data should be about 207 MB. Please remind me which samples to select. Thank you so much!
I'm quite confused about how to proceed with Q21 and Q22: what does "You will need to use the survey weights to compute each mean within each cell" mean?
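My reading, sketched with a toy frame where WTSUPP stands in for the ASEC person weight: within each (year, educ) cell, the mean is np.average weighted by the weight column rather than a plain .mean().

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'year':    [2000, 2000, 2001, 2001],
                   'educ':    ['HS', 'HS', 'HS', 'HS'],
                   'incwage': [30000, 50000, 32000, 52000],
                   'WTSUPP':  [1.0, 3.0, 2.0, 2.0]})  # hypothetical weights

# Weighted mean within each cell; heavier observations count for more.
cell_means = df.groupby(['year', 'educ']).apply(
    lambda g: np.average(g['incwage'], weights=g['WTSUPP']))
```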
I'm running into a couple issues while trying to download the data.
Hi Jeremy,
For EDUC = 000, the value is defined as "NIU or no schooling". Should we consider it as NIU?
Thanks,
Bariscan
When I try to produce a pdf directly from the jupyter notebook (file->download as->pdf via latex) I only get the first page (it cuts off right before the fred picture). I don't run into this problem if I compile from the tex document that the notebook produces using a tex editor. Any idea what could cause this?
In Q10, I didn't specify whether to use strict (<) or weak (<=) inequalities. This could be the cause of some of the slight differences in results that some of you are seeing. For what it's worth, I'm using strict (<) inequalities.
This is my code to solve for the optimal portfolio weights, but it gives me an error saying that the tuple con is not callable.
f = lambda w: Sigma_p
con = ({'type' : 'eq',
'fun' : lambda w: (w.T @ mu) - 0.01},
{'type' : 'eq',
'fun' : lambda w: (w.T @ np.full((5,1),1)) - 1})
w0 = np.full((5,1),0.2)
wstar = scipy.optimize.fmin_slsqp(f, w0, f_eqcons=con, acc=1e-09, iprint=2)
wstar
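I believe the tuple-of-dicts constraint format belongs to scipy.optimize.minimize, not fmin_slsqp (whose f_eqcons expects a single function returning an array). A sketch with made-up mu and Sigma:

```python
import numpy as np
import scipy.optimize

mu = np.array([0.05, 0.08, 0.12, 0.07, 0.10])  # hypothetical mean returns
Sigma = np.eye(5) * 0.04                        # hypothetical covariance
con = ({'type': 'eq', 'fun': lambda w: w @ mu - 0.01},
       {'type': 'eq', 'fun': lambda w: w.sum() - 1})
w0 = np.full(5, 0.2)

# minimize() understands the dict format; note the objective must actually
# depend on w (a constant like lambda w: Sigma_p has no minimum to find).
res = scipy.optimize.minimize(lambda w: w @ Sigma @ w, w0,
                              method='SLSQP', constraints=con)
wstar = res.x
```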
For the Euro12 section of HW2, Question 14, do we have to use "isin" in our code, or is the hint optional?
Hi,
Is it possible to use the apply function on only one column? I tried chipo.item_price.apply(), but that only gives me the column and does not change the actual DataFrame.
Thanks!
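I think .apply returns a new Series without touching the frame, so you assign it back. A toy stand-in for chipo:

```python
import pandas as pd

chipo = pd.DataFrame({'item_name':  ['Izze', 'Chips and Salsa'],
                      'item_price': ['$3.39', '$2.39']})  # stand-in data

# apply() on one column returns a new Series; assigning it back
# is what actually changes the DataFrame.
chipo['item_price'] = chipo['item_price'].apply(lambda s: float(s.strip('$')))
```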
What is the x-axis of your histogram in Q4?
Is there a way to convert a date (dtype = object) into dtype datetime64 when the date is given in the format "Jan-04" to mean January 2004?
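If "Jan-04" means January 2004, pd.to_datetime with an explicit format string handles it (the sample values here are hypothetical):

```python
import pandas as pd

s = pd.Series(['Jan-04', 'Feb-04', 'Mar-05'])   # dtype: object
dates = pd.to_datetime(s, format='%b-%y')        # %b = month name, %y = 2-digit year
```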
I do not see anything specific about non-negative weights in our portfolio optimization. Are negative weights allowed?
I keep on getting this message when I try to use the minimize function for question 11. Any idea what might be causing this?
What do you mean by "calculate over the whole dataset"? Do you mean look at the data in euro12, or should we find the data at the link you attached: https://github.com/jokecamp/FootballData/?
Thanks!
I'm having trouble opening the SQL file. When I download from the link, I get a zip file; when I click on the zip file, more zip files open, and when I click on any of those, still more zip files open, etc. Am I supposed to be able to load the data without touching the original zip file?
The description of problem 2 says "find the data from the FRED for Real Gross Domestic Product in the US" instead of "find the data from the FRED for the US unemployment rate time series."
I'm also not sure that the title is enough information to identify which unemployment series you want. Long term? Short term?
I'm stuck on this integration. I think I'm not using scipy correctly. Does anyone have suggestions for a better approach?
`result = scipy.integrate.quad(lambda x: (1.0/(2*3.14154)**1/2)*np.exp(-1*((x-1)**2)/2*4)), -2, 2)
SyntaxError: invalid syntax`
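For comparison, a version that runs, assuming the integrand is meant to be a normal density with mean 1 and variance 4 (my guess from the constants):

```python
import numpy as np
import scipy.integrate

# N(1, 4) density integrated over [-2, 2]; quad returns (value, abs_error).
pdf = lambda x: 1.0 / np.sqrt(2 * np.pi * 4) * np.exp(-((x - 1) ** 2) / (2 * 4))
result, err = scipy.integrate.quad(pdf, -2, 2)
```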
def f():
    pass
print(type(f()))
This prints <class 'NoneType'>
rather than the marked option A.
Isn't this because f contains only a pass statement, which is a null operation? Since the function body does nothing and has no return statement, it returns None by default, so type(f()) is NoneType, an object with no value.
Is there a due date for HW 1? I don't see it. @jmbejara, could you take a look at this?
Code just for fun:
import numpy as np
a = np.array([0,1,2])
In case anybody is having this issue, be sure to download only the ASEC samples. The ASEC supplement is the March CPS. The ASEC includes extra variables that are not included in the basic monthly CPS survey. From Wikipedia, the ASEC is described as follows:
Since 1948, the CPS has included supplemental questions (at first, in April; later, in March) on income received in the previous calendar year, which are used to estimate the data on income and work experience. These data are the source of the annual Census Bureau report on income, poverty, and health insurance coverage...
Downloading the March samples from the "Basic Monthly" samples doesn't give us what we need. See the included screenshot. The arrow points out the fact that these variables are missing in the March basic monthly samples but are available in the ASEC samples.
In part 1 of HW 3 (creating_reading_writing), there's a part where we are supposed to download an xls file from https://www.kaggle.com/jpmiller/publicassistance/downloads/WICAgencies2014ytd.zip/2
However, when I download the zip file and extract it, I get a folder of csv files instead of one xls file.
The question states
- How many times did a Veggie Salad Bowl show up in an order?
Is this asking for the average number of veggie salad bowl requests per order?
For example, suppose I have 4 orders. What is the question asking for?
Hi @jmbejara,
Are the instructions for the "(4) Use numpy.linspace to create a vector of N values equally spaced from -0.01 to 2" question supposed to be -0.01 to 0.02? (Did you mean 2% return rather than 200% return?)
Thanks!
-Isabel
I've been stuck on Q3 for a while. The error I get says that plt.plot can't take a generator.
def pdf(x):
    a = (1/(x*0.5*(2*np.pi)**(1/2))) * math.exp(-1*((math.log10(x) - 11)**2)/(2*(0.5**2)))
    return a
grid = np.linspace(0, 150000, num= 150000)
#fig, ax = plt.subplots(figsize=(10, 6))
#ax.hist(obs, bins=40, normed=True)
new = map(pdf, grid)
plt.plot(grid, new, 'k-', linewidth=2)
plt.show()
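In Python 3, map() returns a lazy iterator, which plt.plot can't consume; either materialize it with list(...) or skip map entirely and vectorize with NumPy, as sketched below. Note the grid must also start above 0 because of log10:

```python
import numpy as np

grid = np.linspace(1, 150_000, num=1000)   # start at 1: log10(0) is undefined

# Vectorized pdf: ufuncs act elementwise, so no map() is needed and the
# result is a plain array that plt.plot(grid, vals) accepts directly.
vals = 1 / (grid * 0.5 * np.sqrt(2 * np.pi)) * \
       np.exp(-((np.log10(grid) - 11) ** 2) / (2 * 0.5 ** 2))

# If you keep map(), materialize it first: plt.plot(grid, list(map(pdf, grid)))
```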
Here is a list of known typos and other improvements to make in HW 4. I haven't corrected these particular ones yet. If anyone notices any other typos, please let me know here.
Intuitively, should the var-covar matrix be 5x5? Does the np.cov() method know to compute the means of each variable when producing the matrix?
The code that I currently have gives me a 500 x 500 matrix.
Sigma = np.cov(prices)
Sigma
EDIT: should I take the transpose of prices to obtain the variances? I did that instead, and now I have a 5x5 matrix. What is the intuition behind doing this?
prices_trans = prices.T
prices_trans
Sigma = np.cov(prices_trans)
Sigma
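For what it's worth, np.cov treats each row as a variable by default, which is why a (500, 5) price array gives a 500x500 matrix; transposing works, and so does rowvar=False. Toy data below stands in for the real prices:

```python
import numpy as np

prices = np.random.default_rng(0).normal(size=(500, 5))  # 500 obs of 5 assets

# rowvar=False tells np.cov that COLUMNS are the variables,
# equivalent to np.cov(prices.T).
Sigma = np.cov(prices, rowvar=False)
```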
NVM, I figured it out. Sorry, I just realized that the ' means transpose.
When we import the dataset, a few separators such as ',' and '\s+' return errors in the code. However, although '|' works, the printed DataFrame is not very clean (see below). How does one determine the correct separator by looking at the raw data?
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv'
chipo = pd.read_csv(url, sep='|')
chipo
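One clue is the file extension: orders.tsv is tab-separated values, so sep='\t' is the separator to use. A small inline sample stands in for the URL here:

```python
import io
import pandas as pd

# Raw .tsv data has literal tab characters between fields.
sample = "order_id\tquantity\titem_name\n1\t1\tChips and Salsa\n1\t1\tIzze\n"
chipo = pd.read_csv(io.StringIO(sample), sep='\t')
```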
Sorry, I should've asked this in class when we were reviewing the exercise together, but for question 1.5, when it asks us to find the number of observations in the dataset, could we use occupations.size? It returns 3772, which is equal to 943*4. I was thinking of "observations" as the number of elements in the dataset. Thank you!
I'm not sure I understand how python deals with scope, but shouldn't compress take both the matrix and a number n as inputs, not just n?
I will be available for office hours on 5/2 by the elevators on the 5th floor:
Morning: 11:30 am - noon
Afternoon: 2:00 - 2:30 pm
The 2017-2018 Computational Social Science Workshop meets each Thursday from 11 a.m. to 12:20 p.m. in Kent 120.
# THE COMPUTATIONAL SOCIAL SCIENCE WORKSHOP PRESENTS
BEN GOLUB
ASSISTANT PROFESSOR, DEPARTMENT OF ECONOMICS
HARVARD UNIVERSITY
The Computational Social Science Workshop at the University of Chicago cordially invites you to attend this week’s talk:
Abstract: How should policymakers disseminate information: by broadcasting it widely (e.g., via mass media), or letting word spread from a small number of initially informed “seed” individuals? While conventional wisdom suggests delivering information more widely is better, we show theoretically and experimentally that this may not hold when people may need to ask questions to fully comprehend the information they were given. In a field experiment during the chaotic 2016 Indian demonetization, we varied how information about demonetization rules was delivered to villages on two dimensions: how many were initially informed (broadcasting versus seeding) and whether the identity of the initially informed was publicly disclosed. The quality of information aggregation is measured in three ways: the volume of conversations about demonetization, level of knowledge about demonetization rules, and choice quality in a strongly incentivized decision dependent on understanding the rules. Our results are consistent with four predictions of a model in which people need others’ help to make the best use of announced information, but worry about signaling inability or unwillingness to correctly process information they have access to. First, if who is informed is not publicized, broadcasting improves all three outcomes relative to seeding. Second, under seeding, publicizing who is informed improves all three outcomes. Third, when broadcasting, publicizing who is informed hurts along all three dimensions. Finally, when who is informed is made public, broadcasting is worse along all three dimensions relative to seeding.
THURSDAY, 4/12/2018
11:00AM-12:20PM
KENT 120
A light lunch will be provided by Cedars Mediterranean Kitchen.
Benjamin Golub’s research in economic theory focuses on social and economic networks. His work has examined:
Learning and gossip: what are the dynamics of information in networks? Who is particularly influential? When do agents learn correctly? When are their beliefs polarized?
Coordination in organizations: how do strategic agents in complex organizations coordinate, and how do the networks that connect them matter?
Financial contagion: which financial networks are particularly sensitive to sudden breakdowns?
Public goods and externalities: in complex favor-trading problems such as pollution reduction, we can use a network perspective to understand how much scope for cooperation there is, who is particularly essential to it, and how to organize negotiations.
Since 2015, he has been an Assistant Professor at the Harvard Department of Economics, and prior to that he spent two years as a Junior Fellow at the Harvard Society of Fellows. He was educated at Stanford (Ph.D. in economics) and Caltech (B.S., mathematics).