
tuber's People

Contributors

amrrs, balthasars, gvelasq, jalvarado, jobreu, ktrask, layik, lyons7, mbaquer6, michaelchirico, michaeltoth, mpaulacaldas, mronkko, muschellij2, soodoku, thieled, timbmk, troyhernandez


tuber's Issues

Obtaining video duration

How can I obtain the duration of a video? It does not seem to be in either the stats or the content details, even though, according to the YouTube API, it should be among the parameters returned by get_content_details().
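One hedged pointer: the duration is reported by the contentDetails part of the videos endpoint as an ISO 8601 string such as "PT4M13S". A minimal sketch, assuming get_video_details() accepts a part argument (as in recent tuber versions):

# duration comes back as an ISO 8601 string, e.g. "PT4M13S"
details <- get_video_details(video_id = "yJXTXN4xrI8", part = "contentDetails")
details$items[[1]]$contentDetails$duration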

wrong index in get_playlist_items when > 50 items

Hello,
The sequence should step by 6 elements to pick out each page's video list, and the last page has no nextPageToken, so it contributes one element fewer than the other pages.
Your code:

if (simplify == TRUE) {
    if (length(res) > 5) {
      res <- plyr::ldply(lapply(unlist(res[seq(5, length(res), 5)],
                                       recursive = F),
                                as.data.frame))
    } else {
       res <- plyr::ldply(lapply(unlist(res[5], recursive = F), as.data.frame))
    }
  }

Here is a possible solution:

if (simplify == TRUE) {
    if (length(res) > 5) {
      res <- append(res, list(NA), length(res)-1)
      res <- plyr::ldply(lapply(unlist(res[seq(5, length(res), 6)],
                                       recursive = F),
                                 as.data.frame))
    } else {
      res <- plyr::ldply(lapply(unlist(res[5], recursive = F), as.data.frame))
    }
  }
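A rough alternative that sidesteps the index arithmetic entirely is to page through the playlist explicitly and bind each page's items. This is a hypothetical helper, not the package's implementation; it assumes get_playlist_items() exposes a page_token argument and that simplify = FALSE returns the raw API response with $items and $nextPageToken:

get_playlist_items_all <- function(playlist_id) {
  items <- list()
  token <- NULL
  repeat {
    # fetch one page of up to 50 items
    res <- get_playlist_items(filter = c(playlist_id = playlist_id),
                              page_token = token, simplify = FALSE)
    items <- c(items, res$items)
    token <- res$nextPageToken
    if (is.null(token)) break  # the last page carries no nextPageToken
  }
  plyr::ldply(lapply(items, function(x) as.data.frame(x$contentDetails)))
}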

list_channel_videos() raises both 403 and 404 errors

Hello,

I was trying to get a list of videos in this channel by using

list_channel_videos() and I got the infamous 404 error:

library(devtools)
devtools::install_github("soodoku/tuber", build_vignettes = TRUE)
yt_oauth("1c8faacb16564e178b952ca9fbacc378","t1iczUSCjh5ggVuL5w9YOrXv")
li <- c("UCjCRflXGFvKRJepiBhF6sDw")
list_channel_videos(channel_id=li)

Error: HTTP failure: 404

I found the discussion on this topic at #25 but I am using the development version. Also, other functions from the tuber package work fine.

I thought I should clarify that I am using the Youtube Data API v3.

Now, what is strange is that I can run the list_channel_videos() function and get alternating errors:

list_channel_videos(channel_id="UCjCRflXGFvKRJepiBhF6sDw" , max_results=10)

Error: HTTP failure: 404

list_channel_videos(channel_id="UCjCRflXGFvKRJepiBhF6sDw" , max_results=10)

Error: HTTP failure: 403

Do you have any idea what this might be?

Thanks!!
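A possible workaround sketch while the endpoint misbehaves: every channel has an "uploads" playlist whose ID is the channel ID with the leading "UC" replaced by "UU", and get_playlist_items() can list its videos. A sketch only, not a fix for the underlying error:

channel_id <- "UCjCRflXGFvKRJepiBhF6sDw"
uploads_id <- sub("^UC", "UU", channel_id)  # uploads playlist for the channel
videos <- get_playlist_items(filter = c(playlist_id = uploads_id))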

Search

Is there a problem with the search function? It keeps on running even when I set max_results = 1.
updated: it just took a long time; my bad

get_playlist_items res with less than 50 items

There is a minor error when trying to get the playlist items for a playlist with fewer than 50 items. The error occurs in the else branch of the if (length(res) > 5) condition.

The else branch tries to access an invalid position, res[5], which returns a data frame of 0 obs. of 0 variables.

Suggestion: Feature for finding most popular channels

Hi,

I hope you don't mind me using GitHub issues for feature suggestions; otherwise please let me know and I'll drop you an e-mail :)

It would be awesome if you would consider adding a new feature to tuber that could be used to identify popular channels, e.g. within a country or a specific content category.

I am fairly certain that there are functions in the YouTube API that you could use for this, as other sites offer this functionality as well:

NA

Brain fade.

list_channel_videos max_results doesn't work

Hello,
The max_results argument of list_channel_videos() doesn't work. I ran into this because our channel has more than 50 videos and I need the pageToken to get the remaining results, but it always returns 50 videos no matter what I pass as the limit.
Any help with this issue will be greatly appreciated.
Thank you.

get_comment_threads does not return max results

When I manually set the maximum results to the specified number of comments for a video_id, there are fewer results than specified. For example, I recently tried to fetch 4,362 comments for a video, but the function returned only 1,900.

Error in curl

Hi. I am having the following error when running yt_oauth("", ""):

Waiting for authentication in browser...
Press Esc/Ctrl + C to abort
Authentication complete.
Error in curl::curl_fetch_memory(url, handle = handle) :
server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

I am using Ubuntu 14.04.1. It is obviously a certificate problem, but I can't figure it out. Everything is fine with the ID and the secret, and the browser authentication opens perfectly, showing the application name.

Thanks
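Not a fix for the underlying certificate store, but two things worth trying, sketched under the assumption that the failure comes from libcurl's peer verification: refresh the system CA bundle outside R (sudo update-ca-certificates on Ubuntu), or, as a temporary diagnostic only, relax verification for the session before authenticating.

# diagnostic only: confirms whether certificate verification is the culprit
httr::set_config(httr::config(ssl_verifypeer = 0L))
yt_oauth("APP_ID", "APP_SECRET")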

Why doesn't get_playlists allow me to set multiple parts?

Hey,
I am trying to get playlists from a YouTube channel with both snippet and contentDetails:

playlists <- get_playlists(filter = c(channel_id = "channel_id"), part = "snippet,contentDetails")

or

playlists <- get_playlists(filter = c(channel_id = "channel_id"), part = c("snippet","contentDetails"))

But I see the error:

Error in match.arg(part) : 
  'arg' should be one of “contentDetails”, “id”, “localizations”, “player”, “snippet”, “status”

or

Error in match.arg(part) : 'arg' must be of length 1

In the YouTube API I can do this (from the documentation):

# curl URL:
curl -i -G -d "maxResults=25&channelId=UC_x5XG1OV2P6uZZ5FSM9Ttw&part=snippet%2CcontentDetails&key={YOUR_API_KEY}"
              https://www.googleapis.com/youtube/v3/playlists

# HTTP URL:
GET https://www.googleapis.com/youtube/v3/playlists?maxResults=25
                                                   &channelId=UC_x5XG1OV2P6uZZ5FSM9Ttw
                                                   &part=snippet%2CcontentDetails
                                                   &key={YOUR_API_KEY}
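Until the package accepts more than one part, the endpoint can be queried directly with httr and an API key; YOUR_API_KEY below is a placeholder. A sketch:

library(httr)
res <- GET("https://www.googleapis.com/youtube/v3/playlists",
           query = list(part = "snippet,contentDetails",
                        channelId = "UC_x5XG1OV2P6uZZ5FSM9Ttw",
                        maxResults = 25,
                        key = "YOUR_API_KEY"))
playlists <- content(res)  # parsed JSON as a nested list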

tuber Error: Error in readRDS(token) : error reading from connection

Hi, I'm new to tuber.

I've created an app and I already have its ID and key, but when I try to connect using yt_oauth I get Error in readRDS(token) : error reading from connection.

What am I doing wrong?

By the way, these are my key and ID:

my.youtube <- "487857702734-ch0kfe72akgk5158sojbggidsfsq8nqc.apps.googleusercontent.com"
my.youtubeS <- "b51BRuBGvD6gD6jkuBTxXI7b"
yt_oauth(my.youtube, my.youtubeS)

Thanks in advance.
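A common cause of this error is a stale or truncated .httr-oauth file in the working directory; removing it and re-authenticating is a reasonable first thing to try (a sketch, not a guaranteed fix):

# delete the cached token, then run the OAuth flow again
if (file.exists(".httr-oauth")) file.remove(".httr-oauth")
yt_oauth(my.youtube, my.youtubeS)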

Error with "oath_app"

I'm trying to do the authentication, but the program returns an error saying: Error in oauth_app("google", key = app_id, secret = app_secret) : could not find function "oauth_app".

Any reason why it can't find the "oauth_app" function? I have no idea why I'm getting this error.
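oauth_app() comes from the httr package, which tuber calls internally, so this error usually means httr is missing or broken in the library being used. A sketch of the usual fix:

install.packages("httr")  # reinstall the dependency that provides oauth_app()
library(tuber)
yt_oauth("APP_ID", "APP_SECRET")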

suggestions for get_related_videos

Hi,

In the current GitHub version, the output of get_related_videos() does not contain the IDs of the related videos, only the ID of the input video:

 get_related_videos(video_id = "yJXTXN4xrI8") %>% pull(video_id) %>% 
  table()
Total Results 128 
.
yJXTXN4xrI8 
         50 

It would be very useful if users could have access to both the input and the output IDs.

As another suggestion, the YouTube Search API is also able to return related videos for an input query other than a video ID. Any chance you could look into this? I think this could be very valuable for users, e.g. to study how the video suggestions for a term (e.g. "islam") change over time.

Edit: Sorry, I just realized that my second suggestion might very well be exactly what yt_search() implements. Is this the same endpoint? If so, please ignore what I wrote. =D

get_captions throws error

Hi,

first of all, thank you so much for creating this package. Until now I relied on the excellent YouTube Data Tool, and it's awesome to finally have a proper way of working with YouTube data in R. While your latest change, the possibility to get all comments for a video, works very well for me, I have trouble fetching captions:

captions <- list_caption_tracks(video_id = "C2p42GASnUo",  lang = "en")
captions

   videoId              lastUpdated trackKind language name audioTrackType  isCC isLarge isEasyReader isDraft
1 C2p42GASnUo 2017-05-15T19:57:13.795Z       ASR       es             unknown FALSE   FALSE        FALSE   FALSE
2 C2p42GASnUo 2017-05-10T20:31:26.261Z  standard       en             unknown FALSE   FALSE        FALSE   FALSE
3 C2p42GASnUo 2017-05-10T20:31:57.981Z  standard       es             unknown FALSE   FALSE        FALSE   FALSE
  isAutoSynced  status                                           id
1        FALSE serving gJ7PGz_Tj5zCuwb-GNgrENWFI7er-fGhLfdFQHboNkQ=
2        FALSE serving             rOkYSaba8sA5GlCHIbS8lpwv-XUSpRrX
3        FALSE serving             rOkYSaba8sCImiwNI2S1VG4VDJ2wJOHx

First, the language parameter does not seem to work as intended, as non-English results are returned as well. Second, passing one of the IDs to get_captions() does not work:

 caps <- get_captions(id="rOkYSaba8sA5GlCHIbS8lpwv-XUSpRrX")
Error: HTTP failure: 403
> caps

Do you have any idea what's going on here?

Edit: Is it possible that I need more permissions in addition to all the YouTube scopes and the Freebase API?

Error: HTTP failure: 401

Hi, I keep receiving this error when trying to use any function from the tuber package. It was working fine until about 4 pm this afternoon, when I tried using the list_channel_videos() function; all the other functions have since stopped working too. I have noticed that when restarting R and using yt_oauth(APP_ID, APP_SECRET), I don't get an authorisation message in my browser; R just executes the command. Could this be the issue, i.e. that yt_oauth isn't properly configured?

emoji vignette

The HTML rendered version of the emoji vignette here:

https://htmlpreview.github.io/?https://github.com/soodoku/tuber/blob/master/vignettes/emoji_vignette.html

I think we can improve it a bunch. Do you want to take a crack at it?

Some ideas:

  1. I am pretty sure we don't use all the packages we load up front, so can we trim to what we use?
  2. Plots look pretty bad. Let's fix 'em.
  3. Some fun analysis about +ve/-ve sentiment from emojis?

p.p.s.
Added you as a contributor in the description:
https://github.com/soodoku/tuber/blob/master/DESCRIPTION

Error in list_channel_videos

When I run:
list_channel_videos(channel_name = "latenight")

I get the following error:

Error in tuber_GET("channels", querylist, ...) : Bad Request (HTTP 400).

Error: redirect_uri_mismatch

Hey,
I had a problem with yt_oauth() method.

The only code I wrote is:

library(tuber)

yt_oauth(app_id = "my_app_id", app_secret = "my_secret")

When I try to start the app, my browser opens a new tab with:
Error: redirect_uri_mismatch

In the request details I see: redirect_uri=urn:ietf:wg:oauth:2.0:oob
But in the Google developers console (https://console.developers.google.com/apis/credentials/oauthclient/ ...) I can't add that URI to "Authorised redirect URIs".

What should I do? The tuber documentation doesn't offer a solution for this problem.

Error yt_get_related_videos

I am using tuber to collect data from YouTube. While working through the manual by running all the example code, I ran into an issue with yt_get_related_videos(). I tried the example provided in the manual, but I got the following error message:

yt_get_related_videos(video_id="yJXTXN4xrI8")
Error in strsplit(term, " ") : object 'term' not found

Looking at the structure of the function, I can see where term is being used, but I do not know what value to pass to the function to give term a value.

yt_get_related_videos
function (video_id = NULL, maxResults = 5, safeSearch = "none", ...)
{
  if (is.null(video_id))
    stop("Must specify a video ID")
  if (maxResults < 0 | maxResults > 50)
    stop("maxResults only takes a value between 0 and 50")
  yt_check_token()
  term = paste0(unlist(strsplit(term, " ")), collapse = "%20",
                safeSearch = safeSearch)
  querylist <- list(part = "snippet", relatedToVideoId = video_id,
                    type = "video", maxResults = maxResults)
  res <- tuber_GET("search", querylist, ...)
  resdf <- NA
  if (res$pageInfo$totalResults != 0) {
    simple_res <- lapply(res$items, function(x) x$snippet)
    resdf <- as.data.frame(do.call(rbind, simple_res))
  }
  else {
    resdf <- 0
  }
  cat("Total Results", res$pageInfo$totalResults, "\n")
  return(invisible(resdf))
}
<environment: namespace:tuber>
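The error comes from term being used before it is ever defined; it is also never added to the query, so the line can simply be dropped. A sketch of one plausible fix (not the package's official patch) that removes the stray line and passes safeSearch through the query instead; yt_check_token() and tuber_GET() are internal, so running this outside the package would need tuber::: prefixes:

yt_get_related_videos_fixed <- function(video_id = NULL, maxResults = 5,
                                        safeSearch = "none", ...) {
  if (is.null(video_id)) stop("Must specify a video ID")
  if (maxResults < 0 | maxResults > 50)
    stop("maxResults only takes a value between 0 and 50")
  yt_check_token()
  # no term manipulation: the endpoint is keyed on relatedToVideoId
  querylist <- list(part = "snippet", relatedToVideoId = video_id,
                    type = "video", maxResults = maxResults,
                    safeSearch = safeSearch)
  res <- tuber_GET("search", querylist, ...)
  resdf <- NA
  if (res$pageInfo$totalResults != 0) {
    simple_res <- lapply(res$items, function(x) x$snippet)
    resdf <- as.data.frame(do.call(rbind, simple_res))
  } else {
    resdf <- 0
  }
  cat("Total Results", res$pageInfo$totalResults, "\n")
  invisible(resdf)
}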

Generalize queries

I'd like to be able to get all the videos of a certain channel, and I found this link on the google developers guide: https://developers.google.com/youtube/v3/guides/working_with_channel_ids

I customized some code below, and I received an error:
Error in list_videos("latenight") : could not find function "tuber_GET"

Can you design a way to do custom queries, or a way to access the tuber_GET request function like this?

Here's the code (for example, try entering "latenight" as the display_name):

list_videos <- function (display_name) 
{
     querylist <- list(part = "snippet", q = display_name)
     res <- tuber_GET("channel", querylist)
     resdf <- NA
     if (length(res$items) != 0) {
          simple_res <- lapply(res$items, function(x) c(unlist(x$snippet), 
                                                        etag = x$etag))
          resdf <- as.data.frame(do.call(rbind, simple_res))
     }
     else {
          resdf <- 0
     }
     cat("Total Number of Videos:", length(res$items), "\n")
     return(invisible(resdf))
}
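The immediate error is simply that tuber_GET is not exported; unexported helpers can still be reached with the triple-colon operator. A sketch; note that free-text lookups go through the "search" endpoint (here with type = "channel"), not a "channel" endpoint:

querylist <- list(part = "snippet", q = "latenight", type = "channel")
res <- tuber:::tuber_GET("search", querylist)  # ::: reaches the unexported helper
length(res$items)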

Pagination

I'm sorry if this is not the place. I have done some scraping but I'm not used to talking to APIs. How could I automate pagination with the page_token argument? I mean, what is the page_token argument expecting in order to fetch the next page: the ID of the last video of the previous query? This is not really an issue and it probably concerns the YouTube API rather than this package (awesome, by the way), so I'm sorry for that.

Thanks!
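page_token expects the nextPageToken string returned by the previous response, not a video ID. A sketch, assuming get_comment_threads() exposes page_token and that simplify = FALSE returns the raw API response:

page1 <- get_comment_threads(filter = c(video_id = "yJXTXN4xrI8"),
                             simplify = FALSE)
page2 <- get_comment_threads(filter = c(video_id = "yJXTXN4xrI8"),
                             page_token = page1$nextPageToken,
                             simplify = FALSE)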

Error: HTTP failure: 400 when using yt_search with the location parameter

I'd like to limit the yt_search results, but the location parameter doesn't work.

# 0. Setup -----
library(tidyverse)
library(tuber)      # devtools::install_github("soodoku/tuber", build_vignettes = TRUE)

yt_oauth("xxxxxxxxxxx.apps.googleusercontent.com", "xxxxxxxxxxxx")

# 1. Fetch data -----

kaiser_yt_df <- yt_search(term = "카이저", published_after = "2017-07-01T00:00:00Z", location_radius="10km")
kaiser_yt_df <- yt_search(term = "카이저", published_after = "2017-07-01T00:00:00Z", location="(37.42307,-122.08427)")
# Error: HTTP failure: 400
> sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

Matrix products: default

locale:
[1] LC_COLLATE=Korean_Korea.949  LC_CTYPE=Korean_Korea.949    LC_MONETARY=Korean_Korea.949
[4] LC_NUMERIC=C                 LC_TIME=Korean_Korea.949    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] extrafont_0.17   ggthemes_3.5.0   glue_1.2.0       lubridate_1.7.4  DT_0.4           tuber_0.9.7.9001
 [7] forcats_0.3.0    stringr_1.3.0    dplyr_0.7.4      purrr_0.2.4      readr_1.1.1      tidyr_0.8.0     
[13] tibble_1.4.2     ggplot2_2.2.1    tidyverse_1.2.1 

loaded via a namespace (and not attached):
 [1] reshape2_1.4.3   haven_1.1.1      lattice_0.20-35  colorspace_1.3-2 htmltools_0.3.6  yaml_2.1.19     
 [7] rlang_0.2.0      pillar_1.2.2     foreign_0.8-70   modelr_0.1.1     readxl_1.1.0     bindrcpp_0.2.2  
[13] bindr_0.1.1      plyr_1.8.4       munsell_0.4.3    gtable_0.2.0     cellranger_1.1.0 rvest_0.3.2     
[19] htmlwidgets_1.2  psych_1.8.4      curl_3.2         parallel_3.5.0   Rttf2pt1_1.3.6   broom_0.4.4     
[25] Rcpp_0.12.16     scales_0.5.0     jsonlite_1.5     mnormt_1.5-5     hms_0.4.2        digest_0.6.15   
[31] stringi_1.2.2    grid_3.5.0       cli_1.0.0        tools_3.5.0      magrittr_1.5     lazyeval_0.2.1  
[37] extrafontdb_1.0  crayon_1.3.4     pkgconfig_2.0.1  xml2_1.2.0       assertthat_0.2.0 httr_1.3.1      
[43] rstudioapi_0.7   R6_2.2.2         nlme_3.1-137     compiler_3.5.0  
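A hedged guess at the 400: the Data API requires location and locationRadius to be sent together, with location given as "lat,long" (no parentheses) and a unit on the radius. Something along these lines may work:

kaiser_yt_df <- yt_search(term = "카이저",
                          published_after = "2017-07-01T00:00:00Z",
                          location = "37.42307,-122.08427",
                          location_radius = "10km")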

video id in search + timestamp on stats

Hi, first thanks for the package, really useful.

The documentation says that if we search yt_search(term = "Barack Obama") we get 13 elements, including the video_id, but I only get 7 elements in the response.

In addition, is there a way to gather the stats of a video with a timestamp?

Thank you
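On the timestamp question, the API only reports a video's current totals, so the usual approach is to stamp each retrieval yourself and collect the stats repeatedly over time. A small sketch:

stats <- get_stats(video_id = "yJXTXN4xrI8")
stats$retrieved_at <- Sys.time()  # record when these counts were fetched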

Where should 'app_id' and 'app_password' come from?

Apologies if this is a stupid question, but where are these fields supposed to come from? It is not immediately clear from the package documentation or the Google Developers site. I have tried my project ID, but I don't know what the 'app_password' connected to it would be. I tried using the ID and key from my OAuth 2.0 set-up on the Google Developers website, but I get an "Unable to read token from: .httr-oauth". I looked in my working directory to delete any old .httr-oauth file, but there was none. Sorry to ask, but this package is so new I didn't have much luck googling, etc. Thanks in advance!

Reset to out-of-band authentication instead of .httr-oauth from the httpuv package

I've installed tuber on my Ubuntu RStudio Server. As I'm running R in my browser, there's of course no way I can use the httpuv-generated .httr-oauth file, since my browser isn't able to redirect to my server via localhost:1410.

But the first time I called the yt_oauth function, muscle memory kicked in and I selected option 1 (use the .httr-oauth file) instead of option 2 (out-of-band).

Somehow I'm not able to find a way to enable option 2 after having selected option 1. I tried removing the httpuv package, but tuber depends on it whether I've selected option 1 or 2.

I removed the existing .httr-oauth file manually and with yt_oauth(keys, remove_old_oauth = TRUE), but I can't "reselect" option 2 instead of 1. Am I missing something?
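A sketch of forcing the out-of-band flow again: drop the cached token and set httr's global oob default before re-authenticating. This assumes yt_oauth will then fall back to the copy/paste code flow rather than localhost:1410:

file.remove(".httr-oauth")         # forget the earlier choice
options(httr_oob_default = TRUE)   # ask httr to use out-of-band auth
yt_oauth("APP_ID", "APP_SECRET")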

Issue identifying emojis in comments

Hello! I wanted to ask what the default encoding is in the data frame you get from get_comment_threads. I'd like to identify the emojis in my YouTube comments, but I can't figure out how to convert them to the same encoding as my dictionary. When I turn textOriginal into a character vector and then convert it to ASCII, the result doesn't match how I've done this in the past with Twitter data containing emojis. Is that something to do with YouTube, tuber, or how I'm converting it?
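The comment text comes back UTF-8 encoded. A sketch of one way to normalise it into the byte-escaped ASCII form that many Twitter emoji dictionaries expect; comments here is assumed to be the data frame returned by get_comment_threads():

txt <- as.character(comments$textOriginal)
# emoji become escapes like <f0><9f><98><82>, matchable against a byte-coded dictionary
txt_ascii <- iconv(txt, from = "UTF-8", to = "ASCII", sub = "byte")
head(txt_ascii)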

error

Hi

When I run
get_stats(video_id= "PmEGhlO_2h4")
or
get_channel(channel_id = "UCrpWCVp9FuGA8OnGtA0rgsw")

I get the following error:
Error in tuber_GET("channels", querylist, ...) : Forbidden (HTTP 403).

Could you check this please? Thanks.

text converted to factor: expected behavior?

Thank you for creating the package.

The combination of ldply and rbind used to convert the unlisted object into a data frame, under the default setting of most R environments (options(stringsAsFactors = TRUE)), converts all strings to factors. Therefore, functions such as get_all_comments return all comments as factors. I don't think this is the expected behavior.

I can change the ldply and rbind combo to the following:

conv_unlist_df <- function(simple_res, stringsAsFactors = FALSE) {
  # Convert the unlisted response to a data frame,
  # exposing stringsAsFactors as an option
  plyr::ldply(simple_res,
              function(x) as.data.frame(t(x), stringsAsFactors = stringsAsFactors))
}

and use that for the conversion. It works, but numeric and logical columns are then all left as character strings.

I was wondering if you could clarify whether this is the expected behavior? If it is not, and you don't mind a breaking change, I can send a PR.

Thank you!
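Where the all-character columns from the helper above are a problem, base R can re-infer column types after the fact; a small follow-up sketch (simple_res stands in for whatever list the package produced):

df <- conv_unlist_df(simple_res)
# re-infer numeric/logical types column by column
df[] <- lapply(df, function(x) type.convert(as.character(x), as.is = TRUE))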

Related to issue 43

Hi Soodoku,

I am still getting the same error.

ytcomments<-get_all_comments(video_id = "ZUG9qYTJMsI")
Error in rbind(deparse.level, ...) :
numbers of columns of arguments do not match

Please help me with this.

Retrieve video id in function yt_search

Could you please include the id resource property in the part parameter of the GET request within the yt_search function? It would be nice if we could get the ID of all videos along with the snippet property.

get video id on query and how to see multiple pages?

hey,

I'm trying to search for several search terms via yt_search and then get some stats for each video_id.

I know I have to use simplify = FALSE to get the video ID, and with some parsing I don't think that's difficult. However, I can't see an option to return more than 50 results and/or go to other pages. You mention in the documentation that we can go up to 500, but that doesn't seem to be working at all.

Can you help?

Pull Comments

Is there any way of pulling more than 100 comments from a YouTube video?
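get_all_comments() pages through the commentThreads endpoint for you, so it is not capped at 100 like a single get_comment_threads() call. A minimal example (VIDEO_ID is a placeholder):

comments <- get_all_comments(video_id = "VIDEO_ID")
nrow(comments)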

Error in rbind(deparse.level,...) get_all_comments function

Hello,

first of all I would like to thank you for the awesome package you created; it has already helped me a lot with my master's thesis! I used the get_all_comments() function to extract comments from different videos. For the majority it worked fine, but for some it strangely produced an error.

One example: video_70_comments <-get_all_comments(video_id="zdnybX_qWxY")

"Error in rbind(deparse.level,...): number of columns of arguments do not match"

Do you know a solution to this issue?

Also, a second question: does the get_all_comments() function support running multiple video_ids?

I tried, but it only let me run one video_id at a time. A loop that I wrote did not work either.

Thank you a lot for your help in advance; I would greatly appreciate it!
Best regards from Germany

Trouble getting captions

How can I get the captions of a video using YouTube API v3? I'm not sure what you mean in the documentation when you say we "must specify ID resource" to use get_captions.

When I try, for example, the following
(using video ID)
get_captions(id="OFcXgFBzMlE")
(or, using caption ID)
list_caption_tracks(part = "id, snippet", video_id = "OFcXgFBzMlE", lang = "en")$items[[1]]$id
get_captions(id="OxOi6B-cWrZN4plfs50Uyu7rJ9Ex_Rl-oWRKhd8pmVs=")

I get "Error: HTTP: 404"

Is this a problem with my code, with the package, or with YouTube? Can you provide some sample code to get captions for a video that has them? Many thanks for your help.

HTTP failure 400 in get_captions

Hi,
Tried tuber for the first time today. Got it working for the most part, but I'm getting a malformed-request HTTP error for a video whose captions I'm trying to download.
Any help / suggestion is greatly appreciated.
Thanks!

get_captions(video_id = "uDGLM0FmueI")

Error: HTTP failure: 400

list_caption_tracks(video_id = "uDGLM0FmueI")

  videoId              lastUpdated trackKind language                name audioTrackType  isCC isLarge isEasyReader

1 uDGLM0FmueI 2015-09-25T00:17:37.092Z standard en primary FALSE FALSE FALSE
2 uDGLM0FmueI 2015-09-24T21:29:49.455Z standard en 1443130189264766_V1 primary FALSE FALSE FALSE
isDraft isAutoSynced status id
1 FALSE FALSE serving 7UvEkU_4--WRDZHuRUK3vT7rvPsECvnrqtaF1so4xHw=
2 TRUE FALSE serving TtVWqS2yHoGLGsBsK4oXUgpFeJg4sxyYT23qLsID7uR-iNlazm6kFlNRpz_s_ztX
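A hedged reading of the 400: get_captions() expects a caption-track ID (the id column from list_caption_tracks), not the video ID. A sketch; note that caption downloads can still return 403 for videos you do not own:

tracks <- list_caption_tracks(video_id = "uDGLM0FmueI")
caps <- get_captions(id = tracks$id[1])  # pass the track ID, not the video ID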

list_channel_videos raises 404 error

Hi,

trying to use list_channel_videos() raises a 404 error for me:

list_channel_videos(channel_id = "UCXOKEdfOFxsHO_-Su3K8SHg")
Error: HTTP failure: 404

During the same session other functions work just fine, so I don't think it's related to authorization problems:

 get_stats(video_id="B8ofWFx525s")

$id
[1] "B8ofWFx525s"

$viewCount
[1] "956212"

$likeCount
[1] "12944"

$dislikeCount
[1] "169"

$favoriteCount
[1] "0"

$commentCount
[1] "880"

Urgent - Package installation failing, even though I used it before

My computer crashed, so I needed to reinstall R and tuber, as I have done numerous times in the past without a problem. However, this time I am getting an error that the package installation is failing. Could you please clarify how to fix this issue? I need help soon, as I have to run some code today. Many thanks!!

My code:
install.packages("tuber", repos="http://cran.rstudio.com/")
devtools::install_github("soodoku/tuber", build_vignettes = TRUE)
library(tuber)

Console output:

install.packages("tuber", repos="http://cran.rstudio.com/")
Installing package into ‘C:/Users/Haris/Documents/R/win-library/3.4’
(as ‘lib’ is unspecified)
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.4/tuber_0.9.5.zip'
Content type 'application/zip' length 126324 bytes (123 KB)
downloaded 123 KB

package ‘tuber’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\Haris\AppData\Local\Temp\Rtmp4UXvGq\downloaded_packages

devtools::install_github("soodoku/tuber", build_vignettes = TRUE)
Downloading GitHub repo soodoku/tuber@master
from URL https://api.github.com/repos/soodoku/tuber/zipball/master
Installing tuber
"C:/PROGRA1/R/R-341.2/bin/x64/R" --no-site-file --no-environ --no-save --no-restore --quiet CMD build
"C:\Users\Haris\AppData\Local\Temp\Rtmp4UXvGq\devtools20b854466312\soodoku-tuber-6ae6e48" --no-resave-data --no-manual

  • checking for file 'C:\Users\Haris\AppData\Local\Temp\Rtmp4UXvGq\devtools20b854466312\soodoku-tuber-6ae6e48/DESCRIPTION' ... OK
  • preparing 'tuber':
  • checking DESCRIPTION meta-information ... OK
  • installing the package to build vignettes
    Warning: running command '"C:/PROGRA1/R/R-341.2/bin/x64/Rcmd.exe" INSTALL -l "C:\Users\Haris\AppData\Local\Temp\RtmpiGZUoC\Rinst12c4ec93b88" --no-multiarch "C:/Users/Haris/AppData/Local/Temp/RtmpiGZUoC/Rbuild12c41205221e/tuber"' had status 1
    -----------------------------------
  • installing source package 'tuber' ...
    ** R
    ** inst
    ** preparing package for lazy loading
    Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]) :
    there is no package called 'rlang'
    ERROR: lazy loading failed for package 'tuber'
  • removing 'C:/Users/Haris/AppData/Local/Temp/RtmpiGZUoC/Rinst12c4ec93b88/tuber'
    -----------------------------------
    ERROR: package installation failed
    Installation failed: Command failed (1)

library(tuber)
Error: package or namespace load failed for ‘tuber’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]):
there is no package called ‘rlang’
In addition: Warning message:
package ‘tuber’ was built under R version 3.4.4
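The build log points at the actual problem: the rlang dependency is missing, so lazy loading fails. A sketch of the usual fix, installing it before retrying:

install.packages("rlang")
devtools::install_github("soodoku/tuber", build_vignettes = TRUE)
library(tuber)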

cat() statements in get_stats()

hey soodoku,
What are your thoughts about removing the cat() statements in get_stats()? If you run get_stats() in the RStudio console, you see the results twice: once from the cat() statements and once from the return value of the function.

Thanks for your work on this package! It's very useful.

get_channel_stats Error: HTTP failure: 400

I have this problem when trying to use get_channel_stats; other functions, for example get_all_channel_video_stats, work normally. I have already used get_channel_stats in this same way before and it worked perfectly.

> library(tuber);
> yt_oauth("keykeykeykeykeykeykey", "keykeyykeykeykey")
> c<-get_channel_stats(channel_id="UCwYWoD6KeD0F5LaRCMk00Vw")
Error: HTTP failure: 400
> 

yt_search() not handling spaces correctly in the query term

When doing searches requiring more than one term, the query returns an incorrect number of search results.
Example:
yt_search(term = "Barack Obama") returns 3877 search results

This is very low for the search term "Barack Obama".
Compared with the API Explorer on https://developers.google.com/youtube/v3/docs/search/list, the same search term yields 1,000,000.

I think it has to do with the search term needing to be URL-escaped. Testing with the search term "Barack-Obama" yields a more likely number:
yt_search(term = "Barack-Obama") returns 1000000 search results.

I recommend either documenting that this is the case or, preferably, handling spaces in the function.

Update: I should note that I am using the CRAN version of tuber.
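A possible stopgap until spaces are escaped inside the package: URL-encode the term before passing it in, using base R's URLencode().

res <- yt_search(term = URLencode("Barack Obama", reserved = TRUE))  # "Barack%20Obama"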

Package installation failing

I restarted RStudio and tried to install your package, as I have done numerous times, but this time it is not working for a reason I don't understand. The error message can be found below.

I would greatly appreciate someone's prompt help, as I need to run some code today!

install.packages("tuber", repos="http://cran.rstudio.com/")
Installing package into ‘C:/Users/Haris/Documents/R/win-library/3.4’
(as ‘lib’ is unspecified)
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.4/tuber_0.9.7.zip'
Content type 'application/zip' length 128923 bytes (125 KB)
downloaded 125 KB

package ‘tuber’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\Haris\AppData\Local\Temp\Rtmp8apzzn\downloaded_packages

devtools::install_github("soodoku/tuber", build_vignettes = TRUE)
Downloading GitHub repo soodoku/tuber@master
from URL https://api.github.com/repos/soodoku/tuber/zipball/master
Installing tuber
"C:/PROGRA1/R/R-341.2/bin/x64/R" --no-site-file --no-environ --no-save --no-restore --quiet CMD build
"C:\Users\Haris\AppData\Local\Temp\Rtmp8apzzn\devtools1f20465f168e\soodoku-tuber-20dd342" --no-resave-data --no-manual

  • checking for file 'C:\Users\Haris\AppData\Local\Temp\Rtmp8apzzn\devtools1f20465f168e\soodoku-tuber-20dd342/DESCRIPTION' ... OK
  • preparing 'tuber':
  • checking DESCRIPTION meta-information ... OK
  • installing the package to build vignettes
    Warning: running command '"C:/PROGRA1/R/R-341.2/bin/x64/Rcmd.exe" INSTALL -l "C:\Users\Haris\AppData\Local\Temp\RtmpwLOY5T\Rinst1bcc31f31006" --no-multiarch "C:/Users/Haris/AppData/Local/Temp/RtmpwLOY5T/Rbuild1bcc44351bf2/tuber"' had status 1
    -----------------------------------
  • installing source package 'tuber' ...
    ** R
    ** inst
    ** preparing package for lazy loading
    Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]) :
    there is no package called 'rlang'
    ERROR: lazy loading failed for package 'tuber'
  • removing 'C:/Users/Haris/AppData/Local/Temp/RtmpwLOY5T/Rinst1bcc31f31006/tuber'
    -----------------------------------
    ERROR: package installation failed
    Installation failed: Command failed (1)

library(tuber)
Error: package or namespace load failed for ‘tuber’ in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]):
namespace ‘Rcpp’ 0.12.13 is already loaded, but >= 0.12.15 is required
In addition: Warning message:
package ‘tuber’ was built under R version 3.4.4

get_all_comments does not return max results

Hi, first of all thank you for your awesome R package for scraping YouTube comments. I am using it to analyse some comments, but I ran into the problem that not all comments can be collected. I think this has been mentioned in other issues (on get_comment_threads) as well, but this report focuses on get_all_comments. The original video has 3,040 comments; the function returns only 2,335 records, so approximately 30% are lost. The bigger problem, in my opinion, is the handling of replies: looking at a user in the "top comments" category, the original video shows 34 different replies, while the function returns only 5, so the conversation between different users is lost.

comments <- get_all_comments(video_id = "zz-RpiUFY-I")

Error: HTTP failure: 403

Hi there,

So I've been using and loving tuber, until just now, when all of a sudden my search attempts started coming back with an "Error: HTTP failure: 403" message. I haven't changed a thing: same API key, same secret. But now yt_search with any term simply fails.

I don't see any troubleshooting mechanisms in the package, and the documentation you link to on the YouTube Developer site doesn't yield any additional insights either. I logged into my YouTube account to see if it had been closed, and it hadn't. I checked the Google APIs dashboard to see if my access had been restricted or something, and it doesn't appear to have been.

Any ideas what could be going on? And more importantly, how to fix it?

Thanks!
