blog's Issues

win-builder

post about pkgsearch

To be written once it has been merged with crandb and a new version has been released on CRAN.

  • Comparison with packagefinder: what are the strengths of each package?

  • Some technical details, in particular how the data is queried. It's CRAN data, but it isn't queried via tools::CRAN_package_db(); why?

  • Use cases.

    • Actual search, or a quick reminder of what's available? One will IMO most certainly want to check out packages further before installing them (packagefinder has a function for browsing package URLs 👌).
    • What's new, which packagefinder explicitly supports. CRANberries in your R console.
    • Use for data analysis, e.g. the frequency of package updates?
  • Invite more use cases to be reported (tagging R-hub on Twitter, reporting via Gitter? There's no Disqus yet).
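
A quick sketch of what the search and "what's new" use cases could look like with pkgsearch (function names assume the current pkgsearch API; output omitted):

library(pkgsearch)

# Full-text search of CRAN metadata; returns a ranked data frame of hits
pkg_search("permutation test")

# "CRANberries in your R console": recent CRAN events and trending packages
cran_events()
cran_trending()

# More detail on one hit, e.g. before deciding whether to install it
cran_package("coin")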

post about READMEs

  • Some text about how a README is often the entry point to a package. The take-home message would be that one should prepare a nice README. How: read other READMEs, and have someone else read yours.

  • this quote https://twitter.com/ma_salmon/status/1151779026702352384

  • Sample of top and trending packages (pkgsearch) that have a GitHub repo URL and for which we can find a README (via GitHub's preferred-README API endpoint; see the sketch at the end of this issue) -- so a limited sample. On this sample, look at

    • how many READMEs use the package name as the first header
    • how many READMEs have an install(ation) section
    • the most-used section titles across all READMEs
    • the structure of a few READMEs, displayed as examples
    • the distribution of the number of level-2 headers?
  • Mention usethis' README template.

  • Link to the Write the Docs newsletter about READMEs.

  • Link to https://www.garrickadenbuie.com/blog/dry-vignette-and-readme/ and roxygen2 documentation tags article, to explain how to re-use stuff.

  • https://github.com/ropensci/software-review-meta/issues/55

  • https://devguide.ropensci.org/building.html#readme
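
A minimal sketch of how such a sample could be assembled, assuming the gh package and GitHub's preferred-README endpoint (owner/repo below are just examples):

library(gh)

# The endpoint returns the README whatever its file name
# (README.md, README.rst, ...), base64-encoded
get_readme <- function(owner, repo) {
  res <- tryCatch(
    gh("GET /repos/:owner/:repo/readme", owner = owner, repo = repo),
    error = function(e) NULL  # some repos have no README at all
  )
  if (is.null(res)) return(NA_character_)
  rawToChar(jsonlite::base64_dec(gsub("\n", "", res$content)))
}

readme <- get_readme("r-hub", "rhub")
# first level-1 Markdown header, to compare against the package name
grep("^# ", strsplit(readme, "\n")[[1]], value = TRUE)[1]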

Idea: add links to the documentation and the builder to the blog navbar

Looking for the documentation website, I arrived at the R-hub blog from a Google search.

I was expecting the navbar to contain more links to what is presented in about.md, such as:

  • A link to the documentation
  • A link to the builder

What about adding these to config.toml to provide direct links to these two important resources?

Recent changes on the R-hub package builder

  • E.g. if the checkbashisms PR is eventually merged. Maybe wait for a few other issues to be closed.

  • Mention that the platforms added last year are covered in the blog post about usage.

  • @gaborcsardi where else to look for relevant changes to mention?

Occasion to remind readers of how to give feedback, and where some things live (e.g. the Docker configurations).

add writeup of where to get package binaries

As with #81, I'm always re-researching the whole package binary availability situation across all combinations of:

  • OS
  • distros (for Linux)
  • R versions
  • or generally, old package binaries

Between the (public?) RSPM binaries (for some distros), Michael Rutter's Debian binaries, the sometimes-late (?) CRAN macOS binaries, and renv's semi-related local/global package cache, there seems to be a lot to consider.
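
As a small illustration of the "which binaries exist for my platform and R version" question, base R can already report what a repository offers as binaries; a sketch (this works on Windows and macOS, while on Linux type = "binary" errors, which is part of the problem described above):

# Binaries the default CRAN mirror offers for the running OS and R version
bin <- available.packages(repos = "https://cloud.r-project.org", type = "binary")
"data.table" %in% rownames(bin)

# The source view, for comparison, always exists
src <- available.packages(repos = "https://cloud.r-project.org", type = "source")
nrow(src)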

I'm not the greatest expert on compiled packages, because I've never written a package with compiled code myself, but I have struggled with this from the other end.

Is there a good resource for this already?
If not, might this blog be a place for a write-up about it?

blog post about cranlogs

The usual presentation/release post, but I'm opening the issue so as not to forget to look for the script/package I've seen on Twitter that removes noise/automatic downloads and gives you a clean time series of actual package downloads.
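
For reference, the raw (noisy) series is a one-liner with cranlogs; the de-noising mentioned above would then be applied on top (package names below are just examples):

library(cranlogs)

# Raw daily downloads from the RStudio CRAN mirror, automatic noise included
dl <- cran_downloads("rhub", from = "2019-01-01", to = "2019-12-31")
head(dl)

# Quick totals for the last month, across several packages
cran_downloads(c("rhub", "cranlogs"), when = "last-month")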

Link for Alicia's talk in "Code gen. in R pkgs" post?

Currently the link on the line below goes to vlbuildr on GitHub. This might be what you want (I didn't get to see Alicia's talk ☹️). There is a link to her slides from the conference here, or you might be waiting for the conference talks to go up!

_Miles furthermore mentioned [Alicia Schep's rstudio::conf talk "Auto-magic package development"](https://github.com/vegawidget/vlbuildr#vlbuildr) to us, that was a great watch/read!_

blog post: how to keep up with CRAN

Official communication channels

  • CRAN policy, with the watch services by Dirk Eddelbuettel

  • The R Journal (use bib2df to parse the journal's general .bib file, and show that there is a recurring article, "Changes on CRAN"). These articles contain information about policy changes, but also about changes in the submission pipeline, check setups, etc.
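
A sketch of the bib2df idea (the URL of the R Journal bibliography file is an assumption and should be checked):

library(bib2df)

# Parse the R Journal bibliography into a data frame
rj <- bib2df("https://journal.r-project.org/RJournal.bib")

# The recurring "Changes on CRAN" articles, one per issue
cran_changes <- rj[which(grepl("Changes on CRAN", rj$TITLE)), c("TITLE", "YEAR")]
nrow(cran_changes)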

Other information sources

  • Your own packages, link to our post about CRAN checks, in particular the section about noticing their changes.

  • Only using an up-to-date email address as maintainer!

  • R-package-devel and other venues where folks ask for R package development help, because they will mention CRAN stuff. R-package-devel more than other venues because users know CRAN "listens" to that list.

URLs in CRAN packages

I got curious because I wanted to see how many packages have a GitLab URL.

db <- tools::CRAN_package_db()

db <- tibble::as_tibble(db[, c("Package", "URL")])
db <- dplyr::distinct(db)
nrow(db)
#> [1] 15278
sum(is.na(db$URL))
#> [1] 8050

db <- db[!is.na(db$URL),]

library("magrittr")

url_regex <- function() "(https?://[^\\s,;>]+)"

# Extract all unique URLs from the free-text URL field of one package
find_urls <- function(txt) {
  mch <- gregexpr(url_regex(), txt, perl = TRUE)
  res <- regmatches(txt, mch)[[1]]

  if (length(res) == 0) {
    return(list(NULL))
  } else {
    list(unique(res))
  }
}

# One row per (package, URL), with each URL split into its components
db %>%
  dplyr::group_by(Package) %>%
  dplyr::mutate(actual_url = find_urls(URL)) %>%
  dplyr::ungroup() %>%
  tidyr::unnest(actual_url) %>%
  dplyr::group_by(Package, actual_url) %>%
  dplyr::mutate(url_parts = list(urltools::url_parse(actual_url))) %>%
  dplyr::ungroup() %>%
  tidyr::unnest(url_parts) %>%
  dplyr::mutate(scheme = trimws(scheme)) -> parsed_db


dplyr::count(parsed_db, Package, sort = TRUE)
#> # A tibble: 7,161 x 2
#>    Package           n
#>    <chr>         <int>
#>  1 RcppAlgos         7
#>  2 BIFIEsurvey       5
#>  3 BigQuic           5
#>  4 PGRdup            5
#>  5 vwline            5
#>  6 ammistability     4
#>  7 augmentedRCBD     4
#>  8 dcGOR             4
#>  9 dendextend        4
#> 10 dialr             4
#> # … with 7,151 more rows
dplyr::count(parsed_db, scheme, sort = TRUE)
#> # A tibble: 2 x 2
#>   scheme     n
#>   <chr>  <int>
#> 1 https   5851
#> 2 http    2503
dplyr::count(parsed_db, domain, sort = TRUE)
#> # A tibble: 1,846 x 2
#>    domain                    n
#>    <chr>                 <int>
#>  1 github.com             4631
#>  2 www.r-project.org       165
#>  3 cran.r-project.org      144
#>  4 r-forge.r-project.org    82
#>  5 bitbucket.org            67
#>  6 sites.google.com         54
#>  7 arxiv.org                53
#>  8 gitlab.com               44
#>  9 www.github.com           33
#> 10 docs.ropensci.org        31
#> # … with 1,836 more rows

Created on 2019-11-20 by the reprex package (v0.3.0)
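
To answer the original GitLab question directly, one could keep going from parsed_db (a sketch on the same data):

# Packages with at least one GitLab URL
gitlab_pkgs <- dplyr::filter(parsed_db, domain %in% c("gitlab.com", "www.gitlab.com"))
dplyr::n_distinct(gitlab_pkgs$Package)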

answer some questions with pkgsearch

Use pkgsearch to find % of packages using roxygen2 over time.

Either in a post about package docs, or in a post with other such random facts (or a quiz? :-) ).
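
One way to get at this, sketched for a single package: pkgsearch can return all CRAN releases of a package together with their DESCRIPTION fields, so the appearance of RoxygenNote across releases is visible; scaling this up to all of CRAN would be the actual work:

library(pkgsearch)

# All CRAN releases of one package, with DESCRIPTION fields as columns
hist <- cran_package_history("dplyr")

# RoxygenNote shows up once a release was documented with roxygen2
hist$RoxygenNote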

blog post about rversions

Guest post about packageRank

By @lindbrook, cf. lindbrook/packageRank#4.

Regarding length, the post about rversions has a 4-minute reading time and the post about cranlogs a 6-minute reading time, which I find good. That means 880-1320 words as a ballpark figure.

Regarding timing, ping me again when you're ready to start working on it; no hurry and absolutely no pressure!

I am thinking of the post as a way to describe the motivation and practical use case(s) of your package, as well as to make readers curious to read more in your package docs. Compared to the README, I'd for example be curious to read how you got the idea to start the package, and how it has been useful in your work (or was it just a project out of curiosity about the numbers?).

Note: at the moment the R-hub blog doesn't show post authors, but this will change. For a guest post we'd also add a sentence at the beginning of the post to make sure there are links to your online presence.

Post about Bioconductor

Not well developed.

  • Bioconductor. Maybe its support by R-hub, if that improves; in any case, a description of its release cycles, review process, etc., since it is an interesting system.

  • Why and how to set up a CRAN mirror? Ask a few maintainers of CRAN mirrors about their motivation and experience.

  • Solaris: why does CRAN use it (link to Uwe Ligges' useR! 2017 keynote), and why is it so complicated to maintain the corresponding platform?

Post about retry in httr and crul

Both httr and crul have a function/method for "retries".

The usual case is when you want to get something from an API that might return an error code at first, so you try again. Best practice is to wait a bit longer with each try, and not to retry indefinitely.

Generally it's an interesting example of "rolling your own" (writing your own while-there-is-an-error loop) vs. using a more general implementation. In this case the more general implementation is not even in a third package, so you can use it without taking on one more dependency.
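
On the httr side this is a single call; crul exposes the same idea as a retry method on its client (a sketch with a test URL; see the crul docs for the exact method):

library(httr)

# Retry on error responses with increasing pauses, instead of a
# hand-rolled while() loop
resp <- RETRY(
  "GET",
  "https://httpbin.org/status/503",
  times = 3,          # give up after 3 attempts
  pause_base = 1,     # wait ~1s, then ~2s, ... between attempts
  terminate_on = 404  # some errors are not worth retrying
)
status_code(resp)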

add tags to posts

Not sure whether they'd be visible with the current theme, but eventually we will need them.

add overview for system dependencies

Whenever I have to deal with system dependencies and want to avoid randomly adding apt-gets until "stuff works", I seem to end up researching the same things.

Is this something that other people keep re-researching as well?
Is there a good resource out there already?

If not, I'd be happy to pitch in with a draft article about this, if the R-hub blog is an appropriate venue.
(I thought it might be, since, for now anyway, sysreqs is the answer for me.)
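
As one possible starting point for such a write-up, the declared (free-text) system requirements are already in the CRAN metadata, which shows both the information available and its limits (a sketch):

db <- tools::CRAN_package_db()

# The free-text SystemRequirements field is the only cross-distro hint there is
db[db$Package %in% c("xml2", "rJava", "sf"), c("Package", "SystemRequirements")]

# How many packages declare any system requirement at all
sum(!is.na(db$SystemRequirements))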

Create directory-based archetypes

And change all authors fields to author (to be able to use the blogdown New Post addin), plus create an .Rprofile like the one in ropensci/roweb2.

internal vs external functions

More aimed at folks new to package development.

What is an internal function?
Why not export all functions?
How to document internal functions (@noRd, @keywords internal)? See the sketch below.
Whether to test them?
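
A sketch of the two roxygen2 documentation options, for made-up internal helpers:

#' Collapse a character vector for messages
#'
#' Documented for other developers; an Rd file is still generated,
#' but it is kept out of the package index.
#' @keywords internal
collapse_comma <- function(x) paste(x, collapse = ", ")

#' Strip a trailing slash from a path
#' @noRd
drop_trailing_slash <- function(path) sub("/$", "", path)

With @keywords internal the help page stays reachable via ?, just not listed; with @noRd the comments are in-source documentation only and no Rd file is generated.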

Synonyms.

  • Internal, unexported, helper
  • External, exported, user-facing

In a few packages (the R-hub packages), count the number of exported vs. unexported functions, maybe count the use of internal functions, and note the ones that are similar between packages. But not too much; link to https://rud.is/b/2018/04/08/dissecting-r-package-utility-belts/

