alvarobartt / investiny
🤏🏻 `investpy` but made tiny
Home Page: https://alvarobartt.github.io/investiny
License: MIT License
Could you please add support for Python 3.8?
When I give start and end dates more than 5000 days apart, for example:
historical_data(investing_id=13928, from_date="09/01/2001", to_date="10/01/2022")
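Investing.com's history endpoint appears to cap each response at roughly 5000 data points, so one workaround is to split a long range into sub-5000-day windows and request them one by one. A minimal sketch (the 5000-day cap is an assumption based on this issue; the `historical_data` calls are shown commented out so the sketch stays offline):

```python
from datetime import datetime, timedelta

def split_date_range(from_date: str, to_date: str, max_days: int = 5000):
    """Split an m/d/Y date range into chunks spanning at most `max_days` days."""
    fmt = "%m/%d/%Y"
    start = datetime.strptime(from_date, fmt)
    end = datetime.strptime(to_date, fmt)
    chunks = []
    while start < end:
        chunk_end = min(start + timedelta(days=max_days), end)
        chunks.append((start.strftime(fmt), chunk_end.strftime(fmt)))
        start = chunk_end + timedelta(days=1)
    return chunks

chunks = split_date_range("09/01/2001", "10/01/2022")
# Each (from_date, to_date) pair now spans at most 5000 days and could be
# passed to historical_data() separately, e.g.:
# for f, t in chunks:
#     data = historical_data(investing_id=13928, from_date=f, to_date=t)
print(chunks)
```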
While tackling issue #40, I spotted that the volume is being returned as part of historical_data even when every value is n/a (None): the current check only verifies that the key exists, without taking into consideration that the values inside are all None. See the screenshot below:
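A possible fix (a sketch, not investiny's actual implementation) is to drop any series whose values are missing or all None before building the output dict:

```python
def drop_empty_series(data: dict) -> dict:
    """Remove keys (e.g. 'volume') whose values are missing or all None."""
    return {
        key: values
        for key, values in data.items()
        if values is not None and any(v is not None for v in values)
    }

raw = {
    "date": ["10/03/2022", "10/04/2022"],
    "close": [7.06, 7.27],
    "volume": [None, None],  # Investing.com returned n/a for every row
}
print(drop_empty_series(raw))  # 'volume' dropped, the other series survive
```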
import requests
from requests.structures import CaseInsensitiveDict

# `url` is assumed to point at the Investing.com history endpoint (not shown in the original snippet)
headers = CaseInsensitiveDict()
headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0"
headers["Accept"] = "*/*"
headers["Accept-Language"] = "en-US,en;q=0.5"
headers["Referer"] = "https://tvc-invdn-com.investing.com/"
headers["Content-Type"] = "text/plain"
headers["Origin"] = "https://tvc-invdn-com.investing.com"
headers["Connection"] = "keep-alive"
headers["Sec-Fetch-Dest"] = "empty"
headers["Sec-Fetch-Mode"] = "cors"
headers["Sec-Fetch-Site"] = "same-site"
resp = requests.get(url, headers=headers)
print(resp.status_code)
from investiny import historical_data, search_assets
search_results = search_assets(query='U.K. 10Y', limit=1, type="bond")
investing_id = int(search_results[0]["ticker"])
IndexError: list index out of range
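The IndexError suggests search_assets returned an empty list for that query, so indexing `[0]` fails. A defensive sketch (the helper name `extract_investing_id` is hypothetical) that checks before indexing:

```python
from typing import Optional

def extract_investing_id(search_results: list) -> Optional[int]:
    """Return the top result's Investing.com id, or None if nothing matched."""
    if not search_results:
        return None
    return int(search_results[0]["ticker"])

# e.g. with results shaped like those from search_assets(query='U.K. 10Y', ...):
print(extract_investing_id([{"ticker": "23673"}]))  # 23673
print(extract_investing_id([]))                     # None
```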
Hi, I get the error below when using investiny:
Request to Investing.com API failed with error code: 403
Hi there. I use the historical_data function and I want a date column in the output too. What should I do?
Opening new ticket per #31 @alvarobartt
Historical price query for US10Y returning incorrect Date and Price mapping.
Getting weekend date 10/08/22 (Sat) and missing 10/10/22 (Mon), per the following -
Thanks for looking into this.
I seem to be getting an incorrect date-to-price mapping. If you check the images, 10/02/2022 and 10/09/2022 are Sundays. I cross-checked on investing.com and the price output appears to be shifted back by a day.
(I have limited knowledge of running code, so I might be doing something wrong on my end, but it would be great if you could take a look.)
First of all thanks very much for this, Alvaro, your efforts are hugely appreciated!
Therefore I feel a bit bad asking for more, but am wondering whether it is possible to include volume data which was available for at least some of the indices?
Again many thanks!
Hi,
I have just started receiving error 403... (alvarobartt/investpy#613). Can someone confirm whether this is a general issue or just something on my end?
Thank you,
Hello, first of all thank you for the amazing project. When I tried the examples from the documentation I still got a 403 error, even with the new API.
I tried:
and received:
I am using Anaconda at this point but, driven by curiosity, I tried opening the API link in the browser and still got the 403 error; after a refresh, though, all the data loaded correctly.
EDIT: in the browser, tvc4 had the same behavior. I think it is a header or cookie problem, maybe?
Installation on a Windows machine went smoothly, but on Linux it reported the error message below. Anything I can do to fix it?
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement investiny (from versions: none)
ERROR: No matching distribution found for investiny
It seems to work fine using cloudscraper instead of httpx. I just changed and added this:
import cloudscraper

scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'chrome',
        'platform': 'android',
        'desktop': False,
    }
)
r = scraper.get(url, params=params, headers=headers)  # url/params/headers as in the original request
I wanted to use the historical data with the trendet library, as in the code at https://github.com/alvarobartt/trendet
I was getting 'ConnectionError: ERR#0015: error 403, try again later. #600', but with investiny I was finally able to create the DataFrame. However, the column names seem to be different now: the trendet code uses 'Up Trend', 'Down Trend', and 'Date', but from investiny I only saw 'open', 'high', 'low'. Am I using the wrong data? I got the investiny data as follows:
from investiny import historical_data, search_assets
search_results = search_assets(query="AAPL", limit=1, type="Stock", exchange="NASDAQ")
investing_id = int(search_results[0]["ticker"]) # Assuming the first entry is the desired one (top result in Investing.com)
data = historical_data(investing_id=investing_id, from_date="09/01/2022", to_date="10/01/2022")
Any chance to use the yyyy-mm-dd date format instead of m/d/Y?
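Until such an option exists, the m/d/Y strings investiny returns can be converted client-side; a small sketch using only the standard library:

```python
from datetime import datetime

def to_iso(dates, in_fmt="%m/%d/%Y", out_fmt="%Y-%m-%d"):
    """Convert investiny's m/d/Y date strings to ISO yyyy-mm-dd."""
    return [datetime.strptime(d, in_fmt).strftime(out_fmt) for d in dates]

print(to_iso(["09/01/2022", "10/01/2022"]))  # ['2022-09-01', '2022-10-01']
```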
Hello,
as I understand it, the problem has not been solved.
Is there any news?
@alvarobartt I have several tools that successfully scrape the webpage, but the website might change at any time. I was curious whether anyone here knows front-end development well enough that we could create an alternate data source.
DM me for more details.
Hi there! Could you explain how to get the investing_id from a ticker? For example, how do I get the historical data of GOOGL on the NASDAQ exchange from 09/01/2022 to 10/01/2022?
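Based on the search_assets / historical_data pattern used elsewhere in these issues, the id comes from the 'ticker' field of the top search result. A sketch (the network calls are commented out so it can be read offline; the helper name and sample data are hypothetical):

```python
def top_result_id(results):
    """Pick the Investing.com id from the first search result, if any."""
    return int(results[0]["ticker"]) if results else None

# With live calls it would look like:
# from investiny import search_assets, historical_data
# results = search_assets(query="GOOGL", limit=1, type="Stock", exchange="NASDAQ")
# investing_id = top_result_id(results)
# data = historical_data(investing_id=investing_id,
#                        from_date="09/01/2022", to_date="10/01/2022")

# Offline check with a result shaped like the ones shown in these threads:
sample = [{"symbol": "GOOG", "ticker": "6369", "exchange": "NASDAQ"}]
print(top_result_id(sample))  # 6369
```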
Please use the following, which is a working API call:
import time

import requests
from requests.structures import CaseInsensitiveDict

ids = ['1138417', '17955', '101810', '160']
while True:
    for k in ids:
        url = f"https://tvc6.investing.com/9368e857cc51ddcae69108bd8a3b6d49/1664515691/56/56/23/history?symbol={k}&resolution=D&from=1633411692&to=1664515752"
        headers = CaseInsensitiveDict()
        headers["authority"] = "tvc4.investing.com"
        headers["accept"] = "*/*"
        headers["accept-language"] = "en-US,en;q=0.9"
        headers["content-type"] = "text/plain"
        headers["origin"] = "https://tvc-invdn-com.investing.com"
        headers["referer"] = "https://tvc-invdn-com.investing.com/"
        headers["sec-fetch-dest"] = "empty"
        headers["sec-fetch-mode"] = "cors"
        headers["sec-fetch-site"] = "same-site"
        headers["user-agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.102 Safari/537.36"
        resp = requests.get(url, headers=headers)
        print(resp.status_code)
        time.sleep(1)
        if resp.status_code == 200:
            print(resp.json())
When I do "pip install investiny", Colab returns the error in the title. How can I fix it?
Originally posted by longlie1109 October 12, 2022
Is there a way to get the "exchange" in the search_assets function? NASDAQ works fine but other markets don't work for me.
So as the discussion says, it seems that the results provided by search_assets contain a field named exchange which doesn't match the actual filter used for the search. This means that if a result contains e.g. exchange="RANDOM_EXCHANGE" and you pass that value as the exchange param in a subsequent call to search_assets, it won't work: the printed exchange is not the same as the one used by the filter, even though both refer to the same exchange.
Note that something similar happened with the type parameter, which was already solved in 8cf3e0e
historical_data seems to return a dict of lists like this:
{
'date': ['09/01/2022', '09/02/2022', '09/05/2022', '09/06/2022', '09/07/2022', '09/08/2022', '09/09/2022', '09/12/2022', '09/13/2022', '09/14/2022', '09/15/2022', '09/16/2022', '09/19/2022', '09/20/2022', '09/21/2022', '09/22/2022', '09/23/2022', '09/26/2022', '09/27/2022', '09/28/2022', '09/29/2022'],
'open': [147.49000549316, 147.77000427246, 149.07000732422, 147.77000427246, 147.02000427246, 145.33000183105, 143.13999938965, 143.52000427246, 144.16999816895, 143.25999450684, 143.58000183105, 142.97999572754, 143.19000244141, 142.39999389648, 140.86000061035, 141.16000366211, 140.13999938965, 139.19999694824, 137.69999694824, 137, 138.33000183105],
'high': [147.83000183105, 149.10000610352, 149.25, 149.02000427246, 147.88000488281, 145.80999755859, 144.47999572754, 144.55999755859, 144.4700012207, 143.86999511719, 143.63000488281, 143.24000549316, 143.25999450684, 142.58000183105, 141.97999572754, 142.02000427246, 140.99000549316, 139.2200012207, 139.28999328613, 138.74000549316, 138.49000549316],
'low': [146.7799987793, 147.13999938965, 147.36999511719, 146.67999267578, 146.74000549316, 143.05999755859, 142.46000671387, 142.91000366211, 143, 142.64999389648, 142.74000549316, 142.14999389648, 142.17999267578, 140.5299987793, 140.72999572754, 139.89999389648, 138.44999694824, 137.41000366211, 136.61999511719, 135.52000427246, 136.2200012207],
'close': [147.38999938965, 148.33000183105, 147.72999572754, 147.27000427246, 147.5299987793, 143.16000366211, 143.57000732422, 144.5, 143.38999938965, 143.63000488281, 143.2200012207, 142.72999572754, 142.61999511719, 140.69999694824, 141.19999694824, 140.02000427246, 139.47999572754, 138.86999511719, 137.13999938965, 137.91000366211, 137.5]
}
It would be nice to have an option to return a pandas.DataFrame with date as the index, kind of:
import pandas as pd

df = pd.DataFrame.from_dict(data, orient='columns')
df = df.set_index('date')
open high low close
date
09/01/2022 147.490005 147.830002 146.779999 147.389999
09/02/2022 147.770004 149.100006 147.139999 148.330002
09/05/2022 149.070007 149.250000 147.369995 147.729996
09/06/2022 147.770004 149.020004 146.679993 147.270004
09/07/2022 147.020004 147.880005 146.740005 147.529999
09/08/2022 145.330002 145.809998 143.059998 143.160004
09/09/2022 143.139999 144.479996 142.460007 143.570007
09/12/2022 143.520004 144.559998 142.910004 144.500000
09/13/2022 144.169998 144.470001 143.000000 143.389999
09/14/2022 143.259995 143.869995 142.649994 143.630005
09/15/2022 143.580002 143.630005 142.740005 143.220001
09/16/2022 142.979996 143.240005 142.149994 142.729996
09/19/2022 143.190002 143.259995 142.179993 142.619995
09/20/2022 142.399994 142.580002 140.529999 140.699997
09/21/2022 140.860001 141.979996 140.729996 141.199997
09/22/2022 141.160004 142.020004 139.899994 140.020004
09/23/2022 140.139999 140.990005 138.449997 139.479996
09/26/2022 139.199997 139.220001 137.410004 138.869995
09/27/2022 137.699997 139.289993 136.619995 137.139999
09/28/2022 137.000000 138.740005 135.520004 137.910004
09/29/2022 138.330002 138.490005 136.220001 137.500000
This library cannot be installed using pip install investiny; it shows the following error message.
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
ERROR: Could not find a version that satisfies the requirement investiny (from versions: none)
ERROR: No matching distribution found for investiny
In order to improve investiny's performance when the specified interval implies retrieving more than 5000 data points from Investing.com, we'll use httpx with asyncio so that those requests run concurrently rather than sequentially in a for-loop.
Once done, we'll run a simple benchmark comparing both approaches to see which performs better.
More information at https://www.twilio.com/blog/asynchronous-http-requests-in-python-with-httpx-and-asyncio, courtesy of @sagnew
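The concurrent pattern can be sketched with asyncio.gather; here a dummy fetch coroutine stands in for the httpx.AsyncClient.get call so the sketch runs offline (the fetch body and chunk ids are placeholders, not investiny's real code):

```python
import asyncio

async def fetch_chunk(chunk_id: int) -> dict:
    """Stand-in for an httpx.AsyncClient.get() against the history endpoint."""
    await asyncio.sleep(0)  # yield control, as a real network await would
    return {"chunk": chunk_id, "points": []}

async def fetch_all(num_chunks: int) -> list:
    # Instead of a sequential for-loop, launch every chunk request at once;
    # gather preserves the submission order in its result list.
    return await asyncio.gather(*(fetch_chunk(i) for i in range(num_chunks)))

results = asyncio.run(fetch_all(3))
print([r["chunk"] for r in results])  # [0, 1, 2]
```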
Dear alvarobartt,
investing.com will never help you with this open source project, because they hire Cloudflare to ban every non-paying data bot from their server forever. Knowing this, it is up to us to find another solution.
Has anyone thought of working with a free account from investing.com? It is possible to create one or more watchlists with a free account, and with the account you receive a personal cookie. Using this cookie together with curl makes it possible to get my watchlist data without any problem at all, from 09:00 to 22:00 at a five-minute frequency. The free account and personal cookie let you pass the Cloudflare bot checker. I'm really new to Python, but I managed to get this done.
I would like to help with this open source project, but someone first needs to introduce me to the investiny project.
If you support this idea, send a like.
I'd be glad if you could add a method to get the information from the following pages of a stock.
For some ids, historical_data gives this error:
historical_data(investing_id=40654,from_date=start_date_1, to_date=end_date_1)
also
historical_data(investing_id=26490,from_date=start_date_1, to_date=end_date_1)
for dates:
start_date_1 = '04/27/2000'
end_date_1 = '04/26/2012'
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/tmp/ipykernel_16762/3915976534.py in <module>
----> 1 historical_data(investing_id=26490,from_date=start_date_1, to_date=end_date_1)
~/.local/lib/python3.10/site-packages/investiny/historical.py in historical_data(investing_id, from_date, to_date, interval)
46 time_format = "%H:%M %m/%d/%Y" if isinstance(interval, int) else "%m/%d/%Y"
47 output = {
---> 48 "date": [datetime.fromtimestamp(t).strftime(time_format) for t in data["t"]], # type: ignore
49 "open": data["o"], # type: ignore
50 "high": data["h"], # type: ignore
KeyError: 't'
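The KeyError suggests Investing.com answered without a 't' (timestamps) field for that id and date range, likely because no bars exist there. A defensive sketch (hypothetical helper, not investiny's code; the 's' status field follows the TradingView-style payloads these endpoints return, which is an assumption) that validates the raw payload before building the output:

```python
def has_bars(raw: dict) -> bool:
    """True if an Investing.com history payload actually contains bars."""
    return raw.get("s") == "ok" and bool(raw.get("t"))

empty = {"s": "no_data"}  # what the API may return for a bad range
full = {"s": "ok", "t": [1664515691], "o": [7.0], "h": [7.1], "l": [6.9], "c": [7.05]}
print(has_bars(empty), has_bars(full))  # False True
```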
from investiny import search_assets
search_results = search_assets(query="EUR/USD", limit=1, type="FX")
Traceback (most recent call last):
File "", line 1, in
File "/home/pi/.local/lib/python3.11/site-packages/investiny/search.py", line 42, in search_assets
return request_to_investing(endpoint="search", params=params) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/.local/lib/python3.11/site-packages/investiny/utils.py", line 36, in request_to_investing
raise ConnectionError(
ConnectionError: Request to Investing.com API failed with error code: 403.
Is there a way to get the maximum available daily data for an id? Currently there seem to be some limits. For the Japan-Germany 10Y government bond spread (id: 1079966) I get the result below.
Clearly 1301 rows for 22 years of data is a problem; the date jumps from 1998 to 2017.
The start date on investing.com is 01/11/2010. However, when getting historical daily data there seems to be some kind of download limit (I tried multiple tickers in Excel, and the limit appears to be around 5400 points).
Is there a way to get all the daily data that is available?
Searching for a stock on the Indian Bombay Stock Exchange (BSE) like below
type = 'Stock'
exchange='BSE'
query = 'kotak'
search_results = search_assets(query=query, limit=1, type=type, exchange=exchange)
print(search_results)
returns empty list
[]
Searching for a stock on the Indian National Stock Exchange (NSE) like below
type = 'Stock'
exchange='NSE'
query = 'kotak'
search_results = search_assets(query=query, limit=1, type=type, exchange=exchange)
print(search_results)
returns empty list
[]
However, searching without providing any exchange parameter, as follows,
type = 'Stock'
exchange=''
query = 'kotak'
search_results = search_assets(query=query, limit=1, type=type, exchange=exchange)
print(search_results)
returns 2 elements although the limit parameter is set to 1
[{'symbol': 'KTKM', 'full_name': 'NSE:KTKM', 'description': 'Kotak Mahindra Bank Ltd.', 'type': 'Stock', 'ticker': '18260', 'exchange': 'NSE'}, {'symbol': 'KTKM_p', 'full_name': 'BSE:KTKM_p', 'description': 'Kotak Mahindra Bank Ltd Preferred', 'type': 'Stock', 'ticker': '1180379', 'exchange': 'BSE'}]
Even more confusing: when the limit is increased, as in the following,
type = 'Stock'
exchange=''
query = 'kotak'
search_results = search_assets(query=query, limit=4, type=type, exchange=exchange)
print(search_results)
3 results are returned
[{'symbol': 'KTKM', 'full_name': 'NSE:KTKM', 'description': 'Kotak Mahindra Bank Ltd.', 'type': 'Stock', 'ticker': '18260', 'exchange': 'NSE'}, {'symbol': 'KTKM_p', 'full_name': 'BSE:KTKM_p', 'description': 'Kotak Mahindra Bank Ltd Preferred', 'type': 'Stock', 'ticker': '1180379', 'exchange': 'BSE'}, {'symbol': 'KTKM', 'full_name': 'BSE:KTKM', 'description': 'Kotak Mahindra Bank Ltd.', 'type': 'Stock', 'ticker': '39573', 'exchange': 'BSE'}]
Does anyone know the significance of the _p (preferred) suffix? Should I search BSE history with the normal or the preferred listing?
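Until the exchange filter works reliably, one workaround is to search without it and filter the results client-side; a sketch over the result shape shown above:

```python
def filter_by_exchange(results, exchange):
    """Keep only search results whose 'exchange' field matches."""
    return [r for r in results if r.get("exchange") == exchange]

results = [
    {"symbol": "KTKM", "ticker": "18260", "exchange": "NSE"},
    {"symbol": "KTKM_p", "ticker": "1180379", "exchange": "BSE"},
    {"symbol": "KTKM", "ticker": "39573", "exchange": "BSE"},
]
print(filter_by_exchange(results, "BSE"))  # the two BSE listings only
```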
Here is my code:
from investiny import historical_data
data = historical_data(investing_id=19256, from_date="01/01/2019", to_date="12/31/2019")  # adese's id (19256) is found via the data-content-id attribute
Here's the output:
runfile('C:/Users/emrec/Desktop/TUBITAK 22 DOSYASI/stockrates/untitled2.py', wdir='C:/Users/emrec/Desktop/TUBITAK 22 DOSYASI/stockrates')
Traceback (most recent call last):
File "C:\Users\emrec\AppData\Local\Programs\Python\Python39\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
exec(code, globals, locals)
File "c:\users\emrec\desktop\tubitak 22 dosyasi\stockrates\untitled2.py", line 3, in <module>
data = historical_data(investing_id=6408, from_date="01/01/2019", to_date="12/31/2019") # adese's id (19256) is found via the data-content-id attribute
File "C:\Users\emrec\AppData\Local\Programs\Python\Python39\lib\site-packages\investiny\historical.py", line 48, in historical_data
info = investing_info(investing_id=investing_id)
File "C:\Users\emrec\AppData\Local\Programs\Python\Python39\lib\site-packages\investiny\info.py", line 41, in investing_info
return request_to_investing( # type: ignore
File "C:\Users\emrec\AppData\Local\Programs\Python\Python39\lib\site-packages\investiny\utils.py", line 36, in request_to_investing
raise ConnectionError(
ConnectionError: Request to Investing.com API failed with error code: 403.
From #40:
Hi @alvarobartt, I updated the investiny version but ran into some issues:
The output is missing 10/14 (Friday) and displays 10/15, with prices off by a day. This is for DXY.
This code:
from investiny import historical_data
data = historical_data(investing_id=6408, from_date="09/01/2022", to_date="10/01/2022") # Returns AAPL historical data as JSON (without date)
print(data)
fails:
Traceback (most recent call last):
File "/usr/ports/finance/py-investiny/x.py", line 6, in <module>
data = historical_data(investing_id=6408, from_date="09/01/2022", to_date="10/01/2022") # Returns AAPL historical data as JSON (without date)
File "/usr/local/lib/python3.9/site-packages/investiny/historical.py", line 48, in historical_data
info = investing_info(investing_id=investing_id)
File "/usr/local/lib/python3.9/site-packages/investiny/info.py", line 41, in investing_info
return request_to_investing( # type: ignore
File "/usr/local/lib/python3.9/site-packages/investiny/utils.py", line 36, in request_to_investing
raise ConnectionError(
ConnectionError: Request to Investing.com API failed with error code: 403.
Hi, @alvarobartt!
I encountered the 403 error today and found that curl seems to work fine, with no 403.
The only difference I can see is the header ordering: requests shuffles headers, while curl preserves them as provided.
So I tried urllib.request instead, and it worked. I'm using Python 3.10.5.
Maybe this can solve all the 403 errors in the project?
Minimal working example:
import urllib.request

headers = {}  # take these from your browser; no cookies required
req = urllib.request.Request('https://sbcharts.investing.com/events_charts/us/222.json', b"", headers)
with urllib.request.urlopen(req) as response:
    body = response.read().decode()
When you call today (10/10/2022):
data2 = historical_data(investing_id=curr_id,
from_date=st_date,
to_date=end_date)
where:
investing_id=1166004,from_date=10/01/2022,to_date=10/10/2022
you'll get json:
{'date': ['10/03/2022', '10/04/2022', '10/05/2022', '10/06/2022', '10/07/2022'], 'open': [7.0425000190735, 7.1424999237061, 7.2750000953674, 7.352499961853, 7.2849998474121], 'high': [7.0875000953674, 7.272500038147, 7.3274998664856, 7.3825001716614, 7.3550000190735], 'low': [7.0349998474121, 7.1199998855591, 7.2224998474121, 7.3049998283386, 7.2849998474121], 'close': [7.0599999427795, 7.2674999237061, 7.3274998664856, 7.335000038147, 7.3474998474121], 'volume': [1557937, 169883, 401434, 215784, 275430]}
so there is data for the period 10/03/2022 ... 10/07/2022, but no data for 10/10/2022!
When you call it without the to_date parameter,
data = historical_data(investing_id=curr_id,
from_date=st_date
)
, you'll get:
{'date': ['09/12/2022', '09/13/2022', '09/14/2022', '09/15/2022', '09/16/2022', '09/20/2022', '09/21/2022', '09/22/2022', '09/23/2022', '09/26/2022', '09/27/2022', '09/28/2022', '09/29/2022', '09/30/2022', '10/03/2022', '10/04/2022', '10/05/2022', '10/06/2022', '10/07/2022', '10/10/2022'], 'open': [7.4499998092651, 7.6100001335144, 7.5300002098083, 7.5900001525879, 7.3850002288818, 7.3499999046326, 7.3000001907349, 7.3425002098083, 7.2649998664856, 6.9800000190735, 7.0174999237061, 6.897500038147, 7.0799999237061, 7.0900001525879, 7.0425000190735, 7.1424999237061, 7.2750000953674, 7.352499961853, 7.2849998474121, 7.3600001335144], 'high': [7.5725002288818, 7.6599998474121, 7.6275000572205, 7.5999999046326, 7.3949999809265, 7.3899998664856, 7.4800000190735, 7.4099998474121, 7.2649998664856, 7.0549998283386, 7.039999961853, 7.0199999809265, 7.1574997901917, 7.1399998664856, 7.0875000953674, 7.272500038147, 7.3274998664856, 7.3825001716614, 7.3550000190735, 7.3800001144409], 'low': [7.4499998092651, 7.5275001525879, 7.4899997711182, 7.3699998855591, 7.3074998855591, 7.3150000572205, 7.3000001907349, 7.3249998092651, 7.0475001335144, 6.9749999046326, 6.9875001907349, 6.8775000572205, 7.0174999237061, 7.039999961853, 7.0349998474121, 7.1199998855591, 7.2224998474121, 7.3049998283386, 7.2849998474121, 7.3400001525879], 'close': [7.5650000572205, 7.5500001907349, 7.5749998092651, 7.4050002098083, 7.3625001907349, 7.3850002288818, 7.3375000953674, 7.3474998474121, 7.0650000572205, 6.9825000762939, 7.0050001144409, 6.9899997711182, 7.0700001716614, 7.0900001525879, 7.0599999427795, 7.2674999237061, 7.3274998664856, 7.335000038147, 7.3474998474121, 7.3800001144409], 'volume': [127182, 172399, 2245501, 107892, 143964, 265991, 547998, 227319, 201673, 541426, 511374, 251056, 272732, 7209297, 1557937, 169883, 401434, 215784, 275430, 8015]}
so now both parameters are probably ignored, and in that case there is data for the whole month, including today's data.
When I request data up to today, I should get today's data as well. Could you please take a look at this?
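If to_date is being treated as exclusive (an assumption based on the behavior above), a client-side workaround is to bump it by one day before calling historical_data:

```python
from datetime import datetime, timedelta

def inclusive_to_date(to_date: str, fmt: str = "%m/%d/%Y") -> str:
    """Shift to_date one day forward so the last requested day is included."""
    return (datetime.strptime(to_date, fmt) + timedelta(days=1)).strftime(fmt)

print(inclusive_to_date("10/10/2022"))  # 10/11/2022
# data = historical_data(investing_id=1166004,
#                        from_date="10/01/2022",
#                        to_date=inclusive_to_date("10/10/2022"))
```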
Hello @alvarobartt
First of all, thank you for this amazing package; it is saving me a month of work checking the historical valuations of my portfolio.
I just want to report that https://tvc4.investing.com/ went into maintenance yesterday and, since coming back up, I am constantly getting the error `Request to Investing.com API failed with error code: 503.`
Was there any change to the API that you know of? Or some kind of limit implemented?
Thank you once more for your amazing work.
Getting an error when I try to install investiny with pip install investiny: the version isn't compatible, and this looks different from most one-line errors.
Python 3.9 is being used from conda, as you can see below from my PyCharm IDE, and per the Python >= 3.8 requirement from https://alvarobartt.github.io/investiny/requirements/ I should be OK. Is this related to #38? I'm on macOS, and what causes issues on Linux usually translates to me on Unix.
#38
macOS Big Sur
I have downloaded the list of all stocks on my country's exchanges in two separate files (two separate exchanges) from the investing.com screener page by clicking on "download results".
The downloaded CSV contains the name and symbol but not the investing_id required for downloading the historical prices.
My idea is to loop through all the names or symbols, calling search_assets to retrieve the investing_id from the search results, and then store it somewhere safe.
Then I want to loop through the investing_id list to retrieve historical data.
Patience is therefore required.
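That loop can be sketched against any search function with investiny's search_assets signature; here a stub stands in for the live call so the sketch runs offline (the stub's ticker data is made up for illustration):

```python
import time

def build_id_map(symbols, search_fn, pause: float = 1.0):
    """Map each CSV symbol to its Investing.com id via a search function."""
    id_map = {}
    for symbol in symbols:
        results = search_fn(query=symbol, limit=1, type="Stock")
        if results:  # skip symbols that return no match
            id_map[symbol] = int(results[0]["ticker"])
        time.sleep(pause)  # be polite to the API between look-ups
    return id_map

# Offline stub standing in for investiny.search_assets:
def fake_search(query, limit, type):
    table = {"AAPL": [{"ticker": "6408"}], "GOOGL": [{"ticker": "6369"}]}
    return table.get(query, [])

print(build_id_map(["AAPL", "GOOGL", "NOPE"], fake_search, pause=0))
# {'AAPL': 6408, 'GOOGL': 6369}
```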