
stweet's Introduction

stweet

Open Source Love Python package PyPI version MIT Licence

A modern, fast Python library to scrape tweets and users from the unofficial Twitter API.

This tool helps you scrape tweets by search phrase, tweets by IDs, and users by usernames. It uses the unofficial Twitter API – the same API the website uses.

Inspiration for the creation of the library

I used twint to scrape tweets, but it has many errors and does not work properly. The code was not simple to understand. All tasks share one config, so the user has to know the exact parameters. The last important thing is that the API can change – Twitter owns the API, and changes depend on it. It is annoying when something does not work and users must report bugs as issues.

Main advantages of the library

  • Simple code — the code is not only mine; every user can contribute to the library
  • Domain objects and interfaces — the main parts of the functionality can be replaced (e.g. calling web requests); the library ships a basic, simple solution, and if you want to extend it you can do so without any problems or forks
  • 100% coverage with integration tests — these tests can detect API changes; they run every week, and when a task fails we can find the source of the change easily – not only in version 2.0
  • Custom tweets and users output — it is part of the interface; if you want to save tweets and users in a custom format, it takes only a brief moment

Installation

pip install -U stweet
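
A quick way to check the installation is to import the package and list its public attributes (the exact list depends on the installed version, so treat the output as informational only):

import stweet as st

# Print the public API exposed by the installed version of stweet.
print([name for name in dir(st) if not name.startswith('_')])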

Donate

If you want to sponsor me, in thanks for the project, please send me some crypto 😁:

Coin Wallet address
Bitcoin 3EajE9DbLvEmBHLRzjDfG86LyZB4jzsZyg
Ethereum 0xE43d8C2c7a9af286bc2fc0568e2812151AF9b1FD

Basic usage

To make a simple request, a scrape task must be prepared. The task should be processed by a **runner**.

import stweet as st


def try_search():
    search_tweets_task = st.SearchTweetsTask(all_words='#covid19')
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users]).run()


def try_user_scrap():
    user_task = st.GetUsersTask(['iga_swiatek'])
    output_json = st.JsonLineFileRawOutput('output_raw_user.jl')
    output_print = st.PrintRawOutput()
    st.GetUsersRunner(get_user_task=user_task, raw_data_outputs=[output_print, output_json]).run()


def try_tweet_by_id_scrap():
    id_task = st.TweetsByIdTask('1447348840164564994')
    output_json = st.JsonLineFileRawOutput('output_raw_id.jl')
    output_print = st.PrintRawOutput()
    st.TweetsByIdRunner(tweets_by_id_task=id_task,
                        raw_data_outputs=[output_print, output_json]).run()


if __name__ == '__main__':
    try_search()
    try_user_scrap()
    try_tweet_by_id_scrap()

The example above shows that only a few lines of code are required to scrape tweets.
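
If you prefer to keep results in memory instead of writing files, the CollectorRawOutput described below can be used. A minimal sketch, assuming the collector exposes get_raw_list() as used in the issue reports further down (the structure of the raw items is not documented and may differ between versions):

import stweet as st

# Collect raw search results in memory instead of writing them to files.
tweets_collector = st.CollectorRawOutput()
users_collector = st.CollectorRawOutput()

task = st.SearchTweetsTask(all_words='#covid19')
st.TweetSearchRunner(search_tweets_task=task,
                     tweet_raw_data_outputs=[tweets_collector],
                     user_raw_data_outputs=[users_collector]).run()

# get_raw_list() returns the collected raw items.
print(f'Collected {len(tweets_collector.get_raw_list())} raw tweet items')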

Export format

Stweet uses the API from the website, so there is no documentation of the response format. The response is saved raw, so the final user must parse it on their own. A parser may be added in the future.

Scraped data can be exported in different ways by using the RawDataOutput abstract class. A list of these outputs can be passed to every runner – yes, it is possible to export in several different ways at once. A small parsing sketch follows the list below.

Currently, stweet has implemented:

  • CollectorRawOutput – saves data in memory and returns it as a list of objects
  • JsonLineFileRawOutput – exports data as JSON lines
  • PrintEveryNRawOutput – prints every N-th item
  • PrintFirstInBatchRawOutput – prints the first item in each batch
  • PrintRawOutput – prints all items (not recommended for large scraping)
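
Since JsonLineFileRawOutput stores one raw JSON object per line, parsing the export needs only the standard library. A minimal sketch, assuming the file produced by the basic usage example exists; the keys inside each record are not documented, so inspect them yourself:

import json

# Read the raw search output written by JsonLineFileRawOutput.
with open('output_raw_search_tweets.jl', 'r', encoding='utf-8') as file:
    for line in file:
        record = json.loads(line)  # one raw JSON object per line
        print(record.keys())       # inspect the structure before relying on it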

Using tor proxy

The library is integrated with tor-python-easy. It allows using a Tor proxy with an exposed control port – to change the IP when needed.

If you want to use the Tor proxy client, you need to prepare a custom web client and use it in the runner.

You need to run a Tor proxy – you can run it on your local OS, or you can use this docker-compose.

The code snippet below shows how to use the proxy:

import stweet as st

if __name__ == '__main__':
    web_client = st.DefaultTwitterWebClientProvider.get_web_client_preconfigured_for_tor_proxy(
        socks_proxy_url='socks5://localhost:9050',
        control_host='localhost',
        control_port=9051,
        control_password='test1234'
    )

    search_tweets_task = st.SearchTweetsTask(all_words='#covid19')
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users],
                         web_client=web_client).run()

Dividing the scraping period is recommended

Twitter blocks long pagination for the guest client. Sometimes only about 3 pages can be fetched for one query. To avoid this limitation, divide the scraping period into smaller parts.

In 2023 Twitter blocked passing a time range as a timestamp in the API – only the YYYY-MM-DD format is accepted. In Arrow you can therefore only pass dates without hours.
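
A minimal sketch of splitting a longer period into day-sized chunks and running a separate search task for each day (the helper below is illustrative, not part of the library; whole days are used because of the YYYY-MM-DD limitation mentioned above):

import arrow
import stweet as st


def scrape_day(day: arrow.Arrow) -> None:
    # One search task per day keeps every query within the pagination limit.
    task = st.SearchTweetsTask(all_words='#covid19',
                               since=day,
                               until=day.shift(days=1))
    output = st.JsonLineFileRawOutput(f'tweets_{day.format("YYYY-MM-DD")}.jl')
    st.TweetSearchRunner(search_tweets_task=task,
                         tweet_raw_data_outputs=[output],
                         user_raw_data_outputs=[]).run()


for day in arrow.Arrow.range('day', arrow.get('2023-01-01'), arrow.get('2023-01-07')):
    scrape_day(day)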

Twint inspiration

A small part of the library uses code from twint. Twint was also the main inspiration for creating stweet.

stweet's People

Contributors

markowanga


stweet's Issues

class SearchTweetsTask how to crawl more tweets

Code like this:

search_tweets_task = st.SearchTweetsTask(from_username='AircraftSpots', tweets_limit=200)

but the tweets_limit parameter does not seem to work.
In fact, each run of the code gets only 25 tweets. How can I get more tweets?

Documentation in readTheDoc 📜

There is a need to prepare good documentation – better than the README.md.

The documentation needs to contain examples. The best way is live documentation, where the code can be evaluated; this helps find bugs caused by a deprecated API. The best option is to prepare Jupyter Notebooks with the documentation pages, convert them to RST, and then publish the docs on Read the Docs.

Scraper getting 404 Errors

Hi,

I've suddenly started getting the following error while running the scraper:

RequestResponse(status_code=404, text='{"errors":[{"message":"Sorry, that page does not exist","code":34}]}'), retrying in 15 seconds...
I am using stweet version 2.1.0

Here's what my code looks like:

    tweets_collector = st.CollectorRawOutput()
    users_collector = st.CollectorRawOutput()

    search_tweets_task = st.SearchTweetsTask(
        all_words=search_term,
        since=since_date,
        until=until_date
    )
    web_client = st.RequestsWebClient(
        proxy=st.RequestsWebClientProxyConfig(
            http_proxy='socks5://localhost:9050',
            https_proxy='socks5://localhost:9050'
        ),
        interceptors=[TwitterAuthWebClientInterceptor()]
    )
    st.TweetSearchRunner(
        search_tweets_task=search_tweets_task,
        tweet_raw_data_outputs=[tweets_collector],
        user_raw_data_outputs=[users_collector],
        web_client=web_client
    ).run()

I am unable to figure out why my requests are failing. Can you please help? Thanks!

AttributeError: module 'stweet' has no attribute 'JsonLineFileRawOutput'

When I run the example code from this repository I get the following error:
AttributeError: module 'stweet' has no attribute 'JsonLineFileRawOutput'

OS: win11
Python 3.7

import stweet as st
print(dir(st))

['CollectorTweetOutput', 'CollectorUserOutput', 'CsvTweetOutput', 'CsvUserOutput', 'GetUsersResult', 'GetUsersRunner', 'GetUsersTask', 'JsonLineFileTweetOutput', 'JsonLineFileUserOutput', 'Language', 'PrintEveryNTweetOutput', 'PrintEveryNUserOutput', 'PrintFirstInRequestTweetOutput', 'PrintTweetOutput', 'PrintUserOutput', 'RepliesFilter', 'RequestsWebClient', 'RequestsWebClientProxyConfig', 'SearchTweetsResult', 'SearchTweetsTask', 'Tweet', 'TweetOutput', 'TweetSearchRunner', 'TweetsByIdsResult', 'TweetsByIdsRunner', 'TweetsByIdsTask', 'User', 'UserOutput', 'WebClient', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'auth', 'exceptions', 'export_data', 'export_tweets_to_csv', 'export_tweets_to_json_lines', 'export_users_to_csv', 'export_users_to_json_lines', 'get_user_runner', 'http_request', 'import_data', 'mapper', 'model', 'read_tweets_from_csv_file', 'read_tweets_from_json_lines_file', 'read_users_from_csv_file', 'read_users_from_json_lines_file', 'search_runner', 'tweet_output', 'tweets_by_ids_runner', 'user_output']

any solution?

Can't be used with AWS.

Hi @markowanga
How can I use this with AWS?
I want to use this on EC2.
I tried curl and got this error.

"errors":[{"code":88,"message":"Rate limit exceeded."}]

The usage limit has not been reached.

Installing using setup.py on Windows fails with UnicodeDecodeError

I encountered a minor issue after cloning the repo and attempting to install the package through python -m setup.py install.

(venv) C:\Users\Profile\PycharmProjects\stweet>python -m setup.py install
Traceback (most recent call last):
  File "C:\Users\Profile\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\Profile\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\Profile\PycharmProjects\stweet\setup.py", line 4, in <module>
    LONG_DESCRIPTION = fh.read()
  File "C:\Users\Profile\AppData\Local\Programs\Python\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2176: character maps to <undefined>

Will submit a PR shortly linked to this issue # with an update to setup.py so that encoding='utf-8' is explicitly defined in the read blocks.

Edit: grammar*

stop after a while

I want to crawl 2014-2021 tweets with exact_words (only one word), but it stopped and only picked up tweets from 2021-09-13 back to 2021-09-04. Why does this happen?

I can't get the tweet content

I want to get the tweet content of a Twitter user named way42380182. I used the try_search() function, but the result is null. Then I used the try_user_scrap() function, which got the user's personal information but not their tweet contents.
What should I do?
Thanks.

RequestResponse(status_code=404, text='{"errors":[{"message":"Sorry, that page does not exist","code":34}]}')

import stweet as st


def try_search():
    search_tweets_task = st.SearchTweetsTask(all_words='#covid19')
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users]).run()


try_search()

stacktrace

Traceback (most recent call last):
  File "...\python_test_site\main.py", line 14, in <module>
    try_search()
  File "...\python_test_site\main.py", line 11, in try_search
    user_raw_data_outputs=[output_print, output_jl_users]).run()
  File "...\Python\Python310\lib\site-packages\stweet\search_runner\search_runner.py", line 51, in run
    self._execute_next_tweets_request()
  File "...\Python\Python310\lib\site-packages\stweet\search_runner\search_runner.py", line 71, in _execute_next_tweets_request
    raise ScrapBatchBadResponse(response)
stweet.exceptions.scrap_batch_bad_response.ScrapBatchBadResponse: RequestResponse(status_code=404, text='{"errors":[{"message":"Sorry, that page does not exist","code":34}]}')

Adding write API functions?

What would it take to add write API functionality such as tweet, retweet, like, reply, dm, etc?

I'm assuming it could be done by adding oauth2 authentication to your account then using similar hidden endpoints? I'm not familiar enough with the private API to do so myself.

Since / Until not being considered?

Hi there,

I'm trying to get tweets for lets say the entire Jan 2021 and doing as follows:

tz = 'Europe/Berlin'
since=Arrow(year=2021, month=1, day=1, hour=0, tzinfo=tz)
until=Arrow(year=2021, month=1, day=30, hour=23, tzinfo=tz)

search_tweets_task = st.SearchTweetsTask(all_words='#bitcoin',since=since, until=until)
output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets_bitcoin.jl')
output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users_bitcoin.jl')
output_print = st.PrintRawOutput()
st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users]).run()

Unfortunately it always gives me the tweets of the last few days of October only. Am I doing anything wrong or how may I resolve this issue?

Best,
Uwe

Internal Error

File "/Users/cosmos/Documents/GitHub/Haas-ToolBox/env1/lib/python3.8/site-packages/stweet/search_runner/search_runner.py", line 50, in run
self._execute_next_tweets_request()
File "/Users/cosmos/Documents/GitHub/Haas-ToolBox/env1/lib/python3.8/site-packages/stweet/search_runner/search_runner.py", line 70, in _execute_next_tweets_request
raise ScrapBatchBadResponse(response)
stweet.exceptions.scrap_batch_bad_response.ScrapBatchBadResponse: RequestResponse(status_code=500, text='{"errors":[{"message":"Internal error","code":131}]}')

To fix it I simply downgraded to 1.3.1 and replaced the code with the one I wrote before.

Filter Tweets by Source field

Hey! Thank you so much for the Twint alternative :)

I haven't found this info in the repo: is it possible to filter out tweets by source while searching?

Thanks in advance!

Not all tweets can be received

I know this is probably a common problem. Below is a piece of code that has this problem.
import stweet as st
import getproxy
import arrow
from stweet.twitter_api.twitter_auth_web_client_interceptor import TwitterAuthWebClientInterceptor
import copy
import random
from multiprocessing import Pool


def run_batched(request_task):
    search_tweets_task = copy.deepcopy(request_task)
    proxy = getproxy.get_proxy_sorted_by_speed(arrow.now().date().isoformat())[3][0]
    fastest_proxy = st.RequestsWebClientProxyConfig(f'socks5://{proxy}', f'socks5://{proxy}')
    web_client = st.RequestsWebClient(proxy=fastest_proxy, interceptors=[TwitterAuthWebClientInterceptor()])
    output_tweets = st.CollectorRawOutput()
    try:
        st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                             tweet_raw_data_outputs=[output_tweets],
                             user_raw_data_outputs=[],
                             web_client=web_client).run()
    except Exception as e:
        print(e)
    output_tweets = output_tweets.get_raw_list()
    output_tweets = list(map(lambda x: x.to_json(), output_tweets))
    return output_tweets


def remove_duplicates(tweets):
    new_tweets = []
    # use id only
    ids = []
    for tweet in tweets:
        if tweet['id'] not in ids:
            ids.append(tweet['id'])
            new_tweets.append(tweet)
    return new_tweets


def try_search():
    since = arrow.get(2023, 1, 29, 0, 0, 0)
    until = arrow.get(2023, 1, 31, 0, 0, 0)
    request = st.SearchTweetsTask(all_words='kronii', since=since, until=until)

    # run 100 batches then merge
    with Pool(1000) as p:
        results = p.map(run_batched, [request] * 1000)

    # merge results
    merged = []
    for result in results:
        merged.extend(result)

    # remove duplicates
    merged = remove_duplicates(merged)

    # print results
    with open('test.txt', 'w') as f:
        for tweet in merged:
            json = tweet
            f.write(f'[{json["id"]}] {json["full_text"]}\n')

    print(f'Found {len(merged)} tweets')


if __name__ == '__main__':
    try_search()
I chose a period of one day. The number of tweets I receive from each request varies, and I do not quite understand why. So I just made a lot of requests to get all the tweets. Their overall number was greater than in each individual response from the server. In other words, I can't be sure I got them all. Can I somehow influence the request so that I get all the tweets for a given period of time in a reasonable time?

get all tweet

Hi, first of all sorry for my bad English, and thank you for this repository.
I want to get all tweets from a username to make a chart over an annual period. When I tested it on my Twitter account it only got 70 tweets. The expected result should be 6.9k tweets.

Below the code:

search_tweets_task = st.SearchTweetsTask(from_username='stlintangtap')
output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
output_print = st.PrintRawOutput()
st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                     tweet_raw_data_outputs=[output_print, output_jl_tweets],
                     user_raw_data_outputs=[output_print, output_jl_users]).run()

User timeline is different than user tweets

import stweet as st
from arrow import Arrow

search_tweets_task = st.SearchTweetsTask(
from_username = 'VP',
since=Arrow(year=2020, month=6, day=4),
until=Arrow(year=2020, month=11, day=4)
)
tweets_collector = st.CollectorTweetOutput()

st.TweetSearchRunner(
search_tweets_task=search_tweets_task,
tweet_outputs=[tweets_collector, st.CsvTweetOutput('output_file.csv')]
).run()


This code ran for a few other user accounts but not this one. Please help, and thanks for a great tool!

Tor question

In the following example, how does this behave if Twitter denies a request? Will it infinitely change Tor IPs until the request completes? Or would that require additional functionality?

import stweet as st

if __name__ == '__main__':
    web_client = st.DefaultTwitterWebClientProvider.get_web_client_preconfigured_for_tor_proxy(
        socks_proxy_url='socks5://localhost:9050',
        control_host='localhost',
        control_port=9051,
        control_password='test1234'
    )

    search_tweets_task = st.SearchTweetsTask(all_words='#covid19')
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users],
                         web_client=web_client).run()

class SearchTweetsTask: since, until, tweets_limit do not work

import arrow
import stweet as st

since = arrow.get('2022-1-11')
until = arrow.get('2022-12-13')

def try_search():
    search_tweets_task = st.SearchTweetsTask(from_username='@elonmusk',since=since, until=until,tweets_limit=1000)
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets2.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users2.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users]).run()

When I run the code above, I only get 23 pieces of data, dated from 2022-12-02 to 2022-12-12. I tried many times; the result is the same.

ReadTimeoutError

import stweet as st
import arrow
import pandas as pd

usernames = ['ftfinancenews', 'IFN_news', 'YahooFinance', 'BusinessMN', 'PFinanceNews', 'Investingcom', 'SenateFinance',
             'fintechf', 'FinanceAsia', 'GFmag', 'ETFinance', 'SenFina', 'ftmoney', '9Finance', 'Gander_News_c3', 
             'financemagnates', 'public_finance_', 'ELSFinance', 'CleanEnergyFC', 'Definews_Info', 'IMFpubs', 'bworldph',
             'NEWS9TWEETS', 'TDAmeritradePR', 'MoneyTalksNews', 'spectatorindex', 'cnnbrk', 'CNN']

def mine_tweets(username, starting_date, finishing_date, no_days=None):
      if no_days != None and finishing_date!=None:
          finishing_date = str(pd.to_datetime(start_date_str) + pd.Timedelta(f"{no_days} day"))[:10]
      
      task = st.SearchTweetsTask(
          from_username=username, 
          language=st.Language.ENGLISH,
          since=arrow.get(f'{starting_date}T00:00:00.000+01:00'),
          until=arrow.get(f'{finishing_date}T24:00:00.000+01:00'),
          replies_filter=st.RepliesFilter.ONLY_ORIGINAL
      )
  
      tweet_outputs = [
          st.CsvTweetOutput(f'{username}_{starting_date}_to_{finishing_date}.csv')
      ]
  
      run_result = st.TweetSearchRunner(
          search_tweets_task=task, 
          tweet_outputs=tweet_outputs
      ).run()

for username in usernames:
    mine_tweets(username, '2011-01-01', '2021-01-01', no_days=None)

timeout Traceback (most recent call last)
F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
444 # Otherwise it looks like a bug in the code.
--> 445 six.raise_from(e, None)
446 except (SocketTimeout, BaseSSLError, SocketError) as e:

F:\Anaconda\lib\site-packages\urllib3\packages\six.py in raise_from(value, from_value)

F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
439 try:
--> 440 httplib_response = conn.getresponse()
441 except BaseException as e:

F:\Anaconda\lib\http\client.py in getresponse(self)
1368 try:
-> 1369 response.begin()
1370 except ConnectionError:

F:\Anaconda\lib\http\client.py in begin(self)
309 while True:
--> 310 version, status, reason = self._read_status()
311 if status != CONTINUE:

F:\Anaconda\lib\http\client.py in _read_status(self)
270 def _read_status(self):
--> 271 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
272 if len(line) > _MAXLINE:

F:\Anaconda\lib\socket.py in readinto(self, b)
588 try:
--> 589 return self._sock.recv_into(b)
590 except timeout:

F:\Anaconda\lib\ssl.py in recv_into(self, buffer, nbytes, flags)
1070 self.class)
-> 1071 return self.read(nbytes, buffer)
1072 else:

F:\Anaconda\lib\ssl.py in read(self, len, buffer)
928 if buffer is not None:
--> 929 return self._sslobj.read(len, buffer)
930 else:

timeout: The read operation timed out

During handling of the above exception, another exception occurred:

ReadTimeoutError Traceback (most recent call last)
F:\Anaconda\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )

F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
755 retries = retries.increment(
--> 756 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
757 )

F:\Anaconda\lib\site-packages\urllib3\util\retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
531 if read is False or not self._is_method_retryable(method):
--> 532 raise six.reraise(type(error), error, _stacktrace)
533 elif read is not None:

F:\Anaconda\lib\site-packages\urllib3\packages\six.py in reraise(tp, value, tb)
734 raise value.with_traceback(tb)
--> 735 raise value
736 finally:

F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
705 headers=headers,
--> 706 chunked=chunked,
707 )

F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
446 except (SocketTimeout, BaseSSLError, SocketError) as e:
--> 447 self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
448 raise

F:\Anaconda\lib\site-packages\urllib3\connectionpool.py in _raise_timeout(self, err, url, timeout_value)
336 raise ReadTimeoutError(
--> 337 self, url, "Read timed out. (read timeout=%s)" % timeout_value
338 )

ReadTimeoutError: HTTPSConnectionPool(host='api.twitter.com', port=443): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

ReadTimeout Traceback (most recent call last)
in
1 for username2 in usernames2:
----> 2 mine_tweets(username2, '2011-01-01', '2021-01-01', no_days=None)

in mine_tweets(username, starting_date, finishing_date, no_days)
17 run_result = st.TweetSearchRunner(
18 search_tweets_task=task,
---> 19 tweet_outputs=tweet_outputs
20 ).run()

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in run(self)
50 self._prepare_token()
51 while not self._is_end_of_scrapping():
---> 52 self._execute_next_tweets_request()
53 return SearchTweetsResult(self.search_run_context.all_download_tweets_count)
54

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _execute_next_tweets_request(self)
62 def _execute_next_tweets_request(self):
63 request_params = self._get_next_request_details()
---> 64 response = self.web_client.run_request(request_params)
65 if response.is_token_expired():
66 self._refresh_token()

F:\Anaconda\lib\site-packages\stweet\http_request\web_client.py in run_request(self, requests_details)
29 def run_request(self, requests_details: RequestDetails) -> RequestResponse:
30 """Method process the request. Method wrap request with interceptors."""
---> 31 return _run_request_with_interceptors(requests_details, self._interceptors, self)
32
33 @AbstractMethod

F:\Anaconda\lib\site-packages\stweet\http_request\web_client.py in _run_request_with_interceptors(requests_details, next_interceptors, web_client)
15 ) -> RequestResponse:
16 return next_interceptors[0].intercept(requests_details, next_interceptors[1:], web_client) if len(
---> 17 next_interceptors) > 0 else web_client.run_clear_request(requests_details)
18
19

F:\Anaconda\lib\site-packages\stweet\http_request\requests\requests_web_client.py in run_clear_request(self, params)
39 timeout=params.timeout,
40 proxies=self._get_proxy(),
---> 41 verify=self.verify
42 )
43 return RequestResponse(response.status_code, response.text)

F:\Anaconda\lib\site-packages\requests\sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
540 }
541 send_kwargs.update(settings)
--> 542 resp = self.send(prep, **send_kwargs)
543
544 return resp

F:\Anaconda\lib\site-packages\requests\sessions.py in send(self, request, **kwargs)
653
654 # Send the request
--> 655 r = adapter.send(request, **kwargs)
656
657 # Total elapsed time of the request (approximately)

F:\Anaconda\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
527 raise SSLError(e, request=request)
528 elif isinstance(e, ReadTimeoutError):
--> 529 raise ReadTimeout(e, request=request)
530 else:
531 raise

ReadTimeout: HTTPSConnectionPool(host='api.twitter.com', port=443): Read timed out. (read timeout=10)

`created_at` out of time span

When I run the following snippet to get tweets from one day and check the created_at field, a few of them are out of range.

For example, I ran it now, at 15:00 Aug 28 (JST), and found 'Wed Jul 20 00:03:50 +0000 2022' in created_at. Most of them come from Aug 27 and Aug 28.
Why is that?

import stweet as st
import arrow

local_now = arrow.now('Asia/Tokyo')
local_before = local_now.shift(hours=-24)

search_tweets_task = st.SearchTweetsTask(
        from_username = user,
        since=local_before,
        until=local_now,
        language=st.Language.ENGLISH,
        replies_filter=st.RepliesFilter.ONLY_ORIGINAL
    )

Output repeated and deadlock with using the stweet.TweetsByIdRunner

Hi @markowanga, when I use the TweetsByIdRunner function to scrape some tweets, it occasionally runs into a deadlock problem; the reason is that the output is repeated endlessly!
For example, the tweet id "1349479144225173509" will encounter this problem. My code is below:

def try_tweet_by_id_scrap():
    id_task = st.TweetsByIdTask('1349479144225173509')
    filename = 'stweet_outputs/output_raw_id.{}.jl'.format(datetime.now().strftime('%Y-%m-%dT%H.%M.%S'))
    output_json = st.JsonLineFileRawOutput(filename)
    output_print = st.PrintRawOutput()
    st.TweetsByIdRunner(
        tweets_by_id_task=id_task,
        raw_data_outputs=[output_print, output_json],
        web_client=st.RequestsWebClient(proxy=proxy, interceptors=[TwitterAuthWebClientInterceptor()])
    ).run()


if __name__ == '__main__':
    try_tweet_by_id_scrap()

Thanks for help!

PermissionError

for i in range(1, 10):
    mine_tweets('#AMZN #amzn', f'201{i}-01-01', f'201{i}-06-01')
    
    mine_tweets('#AMZN #amzn', f'201{i}-06-01', f'201{i}-12-01')
    
mine_tweets('#AMZN #amzn', '2021-01-01', '2021-06-01')  
mine_tweets('#AMZN #amzn', '2021-06-01', '2021-12-01')


---------------------------------------------------------------------------
PermissionError                           Traceback (most recent call last)
<ipython-input-4-5d228fa6c5ce> in <module>
      1 for i in range(1, 10):
----> 2     mine_tweets('#AMZN #amzn', f'201{i}-01-01', f'201{i}-06-01')
      3 
      4     mine_tweets('#AMZN #amzn', f'201{i}-06-01', f'201{i}-12-01')
      5 

<ipython-input-3-5dc95beed6ea> in mine_tweets(username, starting_date, finishing_date, no_days)
     17     run_result = st.TweetSearchRunner(
     18         search_tweets_task=task,
---> 19         tweet_outputs=tweet_outputs
     20     ).run()

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in run(self)
     50         self._prepare_token()
     51         while not self._is_end_of_scrapping():
---> 52             self._execute_next_tweets_request()
     53         return SearchTweetsResult(self.search_run_context.all_download_tweets_count)
     54 

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _execute_next_tweets_request(self)
     68             parsed_tweets = self.tweet_parser.parse_tweets(response.text)
     69             self.search_run_context.scroll_token = self.tweet_parser.parse_cursor(response.text)
---> 70             self._process_new_scrapped_tweets(parsed_tweets)
     71         else:
     72             raise ScrapBatchBadResponse(response)

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _process_new_scrapped_tweets(self, parsed_tweets)
     74 
     75     def _process_new_scrapped_tweets(self, parsed_tweets: List[Tweet]):
---> 76         self._process_new_tweets_to_output(parsed_tweets)
     77         self.search_run_context.last_tweets_download_count = len(parsed_tweets)
     78         self.search_run_context.add_downloaded_tweets_count(len(parsed_tweets))

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _process_new_tweets_to_output(self, new_tweets)
     93     def _process_new_tweets_to_output(self, new_tweets: List[Tweet]):
     94         for tweet_output in self.tweet_outputs:
---> 95             tweet_output.export_tweets(new_tweets)
     96         return

F:\Anaconda\lib\site-packages\stweet\tweet_output\csv_tweet_output.py in export_tweets(self, tweets)
     36             mode='a',
     37             header=self._header_to_add(),
---> 38             index=False
     39         )
     40         return

~\AppData\Roaming\Python\Python37\site-packages\pandas\core\generic.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, decimal, errors, storage_options)
   3400             doublequote=doublequote,
   3401             escapechar=escapechar,
-> 3402             storage_options=storage_options,
   3403         )
   3404 

~\AppData\Roaming\Python\Python37\site-packages\pandas\io\formats\format.py in to_csv(self, path_or_buf, encoding, sep, columns, index_label, mode, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, errors, storage_options)
   1081             formatter=self.fmt,
   1082         )
-> 1083         csv_formatter.save()
   1084 
   1085         if created_buffer:

~\AppData\Roaming\Python\Python37\site-packages\pandas\io\formats\csvs.py in save(self)
    232             errors=self.errors,
    233             compression=self.compression,
--> 234             storage_options=self.storage_options,
    235         ) as handles:
    236 

~\AppData\Roaming\Python\Python37\site-packages\pandas\io\common.py in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options)
    645                 encoding=ioargs.encoding,
    646                 errors=errors,
--> 647                 newline="",
    648             )
    649         else:

PermissionError: [Errno 13] Permission denied: '#AMZN #amzn_2014-01-01_to_2014-06-01.csv'

Tests not passing in TweetsByIds Task

There is an error when a tweet does not exist.

Traceback (most recent call last):
  File "/Users/marcinwatroba/Desktop/MY_PROJECTS/stweet/tmp/test_run.py", line 24, in <module>
    st.TweetsByIdsRunner(task, []).run()
  File "/Users/marcinwatroba/Desktop/MY_PROJECTS/stweet/stweet/tweets_by_ids_runner/tweets_by_ids_runner.py", line 61, in run
    tweet_base_info = self._get_base_tweet_info(tweet_id_to_scrap)
  File "/Users/marcinwatroba/Desktop/MY_PROJECTS/stweet/stweet/tweets_by_ids_runner/tweets_by_ids_runner.py", line 70, in _get_base_tweet_info
    return self._get_base_tweet_info_from_text_response(tweet_id, request_result.text)
  File "/Users/marcinwatroba/Desktop/MY_PROJECTS/stweet/stweet/tweets_by_ids_runner/tweets_by_ids_runner.py", line 74, in _get_base_tweet_info_from_text_response
    parsed_json = json.loads(response_text)
  File "/Users/marcinwatroba/miniconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/Users/marcinwatroba/miniconda3/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/marcinwatroba/miniconda3/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

About character encoding

Hello.
I have a question about the character encoding of the output.
I ran the published sample code, but the output is in ASCII.

Is it possible to output in Unicode?

How to retrieve only a single Tweet, not subtweets

Running the sample code for getting a tweet with a single id actually scrapes dozens of tweets, taking 30+ seconds, and returns a massive list. Is it possible to just get the first tweet object and then stop running so it can be more time efficient?

I've tried messing around with the context, but I can't seem to wrap my head around how that's actually used in this project. I could perhaps be being dense however, and any clarification would be greatly appreciated.

How to add picture link to parsed tweets?

It seems there is no such property defined in the class description – a picture link.

I used this code:

import stweet as st

search_tweets_task = st.SearchTweetsTask(
    all_words='#covid19',
)
tweets_collector = st.CollectorTweetOutput()

proxies_config = st.RequestsWebClientProxyConfig(
    http_proxy="<Your http proxy URL>",
    https_proxy="<Your https proxy URL>"
)

st.TweetSearchRunner(
    search_tweets_task=search_tweets_task,
    tweet_outputs=[tweets_collector, st.CsvTweetOutput('output_file.csv')],
    web_client=st.RequestsWebClient(proxy=proxies_config, verify=False),
).run()

tweets = tweets_collector.get_scrapped_tweets()

SSLError: None: Max retries exceeded with url

I have a problem like this when search tweets:

SSLError: None: Max retries exceeded with url: /2/search/adaptive.json?include_can_media_tag=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_availability=true&send_error_codes=true&simple_quoted_tweet=true&count=100&cursor=-1&spelling_corrections=1&ext=mediaStats%252ChighlightedLabel&tweet_search_mode=live&f=tweets&q=+%28from%3Adelphinebatho+OR+from%3Adenis_masseglia+OR+from%3Adenissommer+OR+from%3Adbaichere+OR+from%3Adidierlegac%29+since%3A1621036800+until%3A1621555200 (Caused by None)

Any solutions?

RequestResponse(status_code=403, text='{"errors":[{"code":200,"message":"Forbidden."}]}')

import arrow
import stweet as st


def try_search():
    search_tweets_task = st.SearchTweetsTask(all_words='#covid19', since=dt, until=dt2)
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users]).run()


dt = arrow.Arrow(2023, 4, 20, 9, 30, 0)
dt2 = arrow.Arrow(2023, 4, 20, 9, 31, 0)
try_search() 

traceback

Traceback (most recent call last):
  File "/Users/kedao/Desktop/OPAY/ng-twitter-spider/new_twitter_spider.py", line 31, in <module>
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
  File "/Users/kedao/miniforge3/envs/anaconda_base/lib/python3.8/site-packages/stweet/search_runner/search_runner.py", line 51, in run
    self._execute_next_tweets_request()
  File "/Users/kedao/miniforge3/envs/anaconda_base/lib/python3.8/site-packages/stweet/search_runner/search_runner.py", line 71, in _execute_next_tweets_request
    raise ScrapBatchBadResponse(response)
stweet.exceptions.scrap_batch_bad_response.ScrapBatchBadResponse: RequestResponse(status_code=403, text='{"errors":[{"code":200,"message":"Forbidden."}]}')

Only several tweets

I have used your project for several months; it is very useful. But recently I find I can only get a few tweets. Is there some problem? Thanks.

How to obtain HTTP status codes?

Hello Marcin, I hope you are doing well.

As you may know, HTTP status checkers don’t do great on Twitter. Tweets that have been deleted or simply don’t exist can be shown as 200 despite them being 403 or 404.

I’m wondering: is there a way I can feed stweet a text file containing Tweet URLs and have stweet ask Twitter to tell me the status codes for these Tweets?

Thank you!

RefreshTokenException: Error during request for token


RefreshTokenException Traceback (most recent call last)
in
1 for user in usernames:
----> 2 mine_tweets('2016-01-01', 365*5, None, 50000000, user)

in mine_tweets(start_date_str, no_days, keywords, tweets_limit, username)
14 st.TweetSearchRunner(
15 search_tweets_task=search_tweets_task,
---> 16 tweet_outputs=[tweets_collector, st.CsvTweetOutput(f'{username}_{keywords}output{date}.csv')]
17 ).run()
18

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in run(self)
48 def run(self) -> SearchTweetsResult:
49 """Main search_runner method."""
---> 50 self._prepare_token()
51 while not self._is_end_of_scrapping():
52 self._execute_next_tweets_request()

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _prepare_token(self)
88 def _prepare_token(self):
89 if self.search_run_context.guest_auth_token is None:
---> 90 self._refresh_token()
91 return
92

F:\Anaconda\lib\site-packages\stweet\search_runner\search_runner.py in _refresh_token(self)
83 def _refresh_token(self):
84 token_provider = self.auth_token_provider_factory.create(self.web_client)
---> 85 self.search_run_context.guest_auth_token = token_provider.get_new_token()
86 return
87

F:\Anaconda\lib\site-packages\stweet\auth\simple_auth_token_provider.py in get_new_token(self)
38 """Method to get refreshed token. In case of error raise RefreshTokenException."""
39 try:
---> 40 token_html = self._request_for_response_body()
41 return json.loads(token_html)['guest_token']
42 except JSONDecodeError:

F:\Anaconda\lib\site-packages\stweet\auth\simple_auth_token_provider.py in _request_for_response_body(self)
33 return token_response.text
34 else:
---> 35 raise RefreshTokenException('Error during request for token')
36
37 def get_new_token(self) -> str:

RefreshTokenException: Error during request for token

Get 404 error when trying TweetSearchRunner.run() in version 2.1.1

In Python 3.9.6 and stweet 2.1.1.
I just followed the example code and made a few changes.

import stweet as st
from stweet.twitter_api.twitter_auth_web_client_interceptor import TwitterAuthWebClientInterceptor
import arrow
PROXY_URL = 'socks5://localhost:4781'
web_client = st.RequestsWebClient(
        proxy=st.RequestsWebClientProxyConfig(
            http_proxy=PROXY_URL,
            https_proxy=PROXY_URL
        ),
        interceptors=[TwitterAuthWebClientInterceptor()]
    )
def try_search():
    since = '2023-01-01'
    until = '2023-01-02'
    arrow_since = arrow.get(since, "YYYY-MM-DD")
    arrow_until = arrow.get(until, "YYYY-MM-DD")
    search_tweets_task = st.SearchTweetsTask(all_words='covid',since=arrow_since,
        until=arrow_until)
    output_jl_tweets = st.JsonLineFileRawOutput('output_raw_search_tweets.jl')
    output_jl_users = st.JsonLineFileRawOutput('output_raw_search_users.jl')
    output_print = st.PrintRawOutput()
    
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
                         tweet_raw_data_outputs=[output_print, output_jl_tweets],
                         user_raw_data_outputs=[output_print, output_jl_users],web_client=web_client).run()


def try_user_scrap():
    user_task = st.GetUsersTask(['iga_swiatek'])
    output_json = st.JsonLineFileRawOutput('output_raw_user.jl')
    output_print = st.PrintRawOutput()
    st.GetUsersRunner(get_user_task=user_task, raw_data_outputs=[output_print, output_json],web_client=web_client).run()


def try_tweet_by_id_scrap():
    id_task = st.TweetsByIdTask('1447348840164564994')
    output_json = st.JsonLineFileRawOutput('output_raw_id.jl')
    output_print = st.PrintRawOutput()
    st.TweetsByIdRunner(tweets_by_id_task=id_task,
                        raw_data_outputs=[output_print, output_json]).run()


if __name__ == '__main__':
    try_search()
    #try_user_scrap()
    #try_tweet_by_id_scrap()

It works when calling try_user_scrap and try_tweet_by_id_scrap but fails in try_search

Here is traceback:

Traceback (most recent call last):
  File "F:\upshi1ro\jprank\Twittertest\st.py", line 45, in <module>
    try_search()
  File "F:\upshi1ro\jprank\Twittertest\st.py", line 24, in try_search
    st.TweetSearchRunner(search_tweets_task=search_tweets_task,
  File "D:\study\python3\lib\site-packages\stweet\search_runner\search_runner.py", line 51, in run
    self._execute_next_tweets_request()
  File "D:\study\python3\lib\site-packages\stweet\search_runner\search_runner.py", line 72, in _execute_next_tweets_request
    raise ScrapBatchBadResponse(response)
stweet.exceptions.scrap_batch_bad_response.ScrapBatchBadResponse: RequestResponse(status_code=404, text='{"errors":[{"message":"Sorry, that page does not exist","code":34}]}')

I also tried printing request_params in _execute_next_tweets_request:

RequestDetails(http_method=<HttpMethod.GET: 1>, url='https://twitter.com/i/api/2/search/adaptive.json', headers={}, params={'include_profile_interstitial_type': '1', 'include_blocking': '1', 'include_blocked_by': '1', 'include_followed_by': '1', 'include_want_retweets': '1', 'include_mute_edge': '1', 'include_can_dm': '1', 'include_can_media_tag': '1', 'skip_status': '1', 'cards_platform': 'Web-12', 'include_cards': '1', 'include_ext_alt_text': 'true', 'include_quote_count': 'true', 'include_reply_count': '1', 'tweet_mode': 'extended', 'include_entities': 'true', 'include_user_entities': 'true', 'include_ext_media_color': 'true', 'include_ext_media_availability': 'true', 'send_error_codes': 'true', 'simple_quoted_tweet': 'true', 'q': 'covid since:2023-01-01 until:2023-01-02', 'count': 20, 'query_source': 'typed_query', 'pc': '1', 'spelling_corrections': '1', 'ext': 'mediaStats,highlightedLabel,voiceInfo'}, timeout=60)

Encoding Error

The data is saved to CSV and displays correctly in PyCharm or Notepad++, but when I open the CSV file in Excel the content is garbled.

ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

I got this error while executing this code

import stweet as st
import timeit

start = timeit.default_timer()

task = st.SearchTweetsTask(
    language = st.Language.ARABIC,
    tweets_limit=2000000,
    all_words=''
)

tweets_collector = st.CollectorTweetOutput()
result = st.TweetSearchRunner(task, [tweets_collector]).run()
stop = timeit.default_timer()
print('Time: ', stop - start) 

any idea of the solution?

Difference between the number of followers retrieved by stweet and the actual one.

I have noticed some difference in the number of followers provided by stweet compared to the actual ones on Twitter. Indeed, taking elonmusk as an example we should expect 48 million followers. However, we have 8171 with stweet:

import stweet as st

get_users_task = st.GetUsersTask(['elonmusk'])
users_collector = st.CollectorUserOutput()

st.GetUsersRunner(
    get_user_task=get_users_task,
    user_outputs=[users_collector]
).run()

users = users_collector.get_scrapped_users()

for user in users:
  print(user.followers_count)

It gives back 8171

Last, thanks for creating this library! It's really great since twint is facing some issues atm.

Iterator for large datasets

When somebody has large tweet or user datasets, there is no way to process them with the classic methods implemented in the library. The import process loads the data into memory, and when the file is too big it will not be possible to load it.

An iterator should read tweets or users in small parts, as in the sketch below.
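
Until such an iterator exists in the library, a workaround sketch using only the standard library is to read a JSON-lines export lazily with a generator (the file name and any field access are illustrative):

import json
from typing import Iterator


def iter_json_lines(path: str) -> Iterator[dict]:
    # Yield one parsed record at a time instead of loading the whole file into memory.
    with open(path, 'r', encoding='utf-8') as file:
        for line in file:
            if line.strip():
                yield json.loads(line)


for record in iter_json_lines('output_raw_search_tweets.jl'):
    pass  # process each record here without keeping the whole dataset in memory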

media url is empty

import stweet as st
import os

try:
    os.remove('output_file.csv')
except:
    pass
tweets_collector = st.CollectorTweetOutput()
tweets = tweets_collector.get_scrapped_tweets()
tweet_ids = st.TweetsByIdsTask(['1115978039534297088'])
st.TweetsByIdsRunner(
    tweets_by_ids_task=tweet_ids,
    tweet_outputs=[tweets_collector, st.PrintTweetOutput()],
    web_client=st.RequestsWebClient(proxy=proxies_config, verify=False),
).run()
tweets = tweets_collector.get_scrapped_tweets()

When I do this, why is there an image in the original tweet but the media URL in the scraped tweets is empty?
