
oo-student-scraper's Introduction

Object-Oriented Student Scraper

Objectives

  • Build two classes: one responsible for scraping data from a web page, and another that uses that data to instantiate new objects.
  • Understand the contents of a third class that is responsible for the command line interface.
  • Use meta-programming to instantiate new instances of a class and add attributes to instances of a class.

Overview

In this lab, you'll be scraping your Learn.co student website. You'll use the index page to grab a list of current students and instantiate a series of Student objects. You'll scrape the individual profile pages of each student to add attributes to each individual student.

We've given you a third class, CommandLineInterface, that is responsible for generating students using both the Student and Scraper classes. As you go through this lab, take some time to read through the code in the CommandLineInterface class. Try to understand how it works and how it uses the code in our other classes to create students, add attributes to them, and display them to the user via the command-line interface.

We've also given you an executable file in bin/run that you can execute once you get your tests passing to see your code in action!

Before You Begin

For this project, we'll be scraping data from the student profiles website. Sort of. We can't really give you a project with a full test suite and base those tests on the real live Learn.co website on the real live internet. Why? Because websites change! They get new styling or new information, or they break because someone did something wrong. It happens! So, any tests we write would be based on the website at a given point in time. The code you write to pass those tests would assume that you are scraping a website that may have since changed. Your scraper would pass tests but fail to actually scrape the web page if you tried to run your code by sending a real web request to the real website. That would be terrible! Then you couldn't see your code in action or view the page you were writing code to scrape. Just awful.

Don't worry! We've very cleverly solved this problem for the purposes of this project. We've created a mock-up student site, available as a GitHub Page.

This copy is being maintained only for the purposes of this project, so we don't have to worry about things like the styling changing or the code breaking and affecting our scraper code.

Instructions

Run bundle install first.

The Scraper Class

Let's start with the Scraper class in lib/scraper.rb. In this class, you are responsible for defining two class methods. The .scrape_index_page method is responsible for scraping the index page that lists all of the students, and the .scrape_profile_page method is responsible for scraping an individual student's profile page to get further information about that student.

Reminder: Nokogiri will take care of parsing the actual HTML page. Once parsed, we can use Nokogiri's .css method to access specific elements based on their class or id. After reading the instructions below, inspect the website in a browser to find the appropriate CSS classes for the elements you need to scrape. For more information, check out Nokogiri's reference materials.

The .scrape_index_page Method

This is a class method that should take in an argument of the URL of the index page. For the purposes of our test the URL will be https://learn-co-curriculum.github.io/student-scraper-test-page/index.html. This method should use Nokogiri and Open-URI to access that page. The return value of this method should be an array of hashes in which each hash represents a single student. The keys of the individual student hashes should be :name, :location and :profile_url.

Here's a look at the desired behavior:

Scraper.scrape_index_page(index_url)
# => [
#    {:name => "Abby Smith", :location => "Brooklyn, NY", :profile_url => "students/abby-smith.html"},
#    {:name => "Joe Jones", :location => "Paris, France", :profile_url => "students/joe-jonas.html"},
#    {:name => "Carlos Rodriguez", :location => "New York, NY", :profile_url => "students/carlos-rodriguez.html"},
#    {:name => "Lorenzo Oro", :location => "Los Angeles, CA", :profile_url => "students/lorenzo-oro.html"},
#    {:name => "Marisa Royer", :location => "Tampa, FL", :profile_url => "students/marisa-royer.html"}
#  ]

Top-Tip: Remember to use the element inspector in your browser's developer tools to examine each element whose value you are trying to scrape. You can also use binding.pry and experiment with different element selectors in your terminal. It takes a lot of trial and error to find the correct selectors for the desired element.

The .scrape_profile_page Method

This is a class method that should take in an argument of a student's profile URL. It should use Nokogiri and Open-URI to access that page. The return value of this method should be a hash in which the key/value pairs describe an individual student. Some students don't have a Twitter or some other social link, so be sure your method can handle that. Once written, the following code should return a hash containing info from the provided profile_url:

Scraper.scrape_profile_page(profile_url)

And the returned hash should look like the following:

{
  :twitter=>"http://twitter.com/flatironschool",
  :linkedin=>"https://www.linkedin.com/in/flatironschool",
  :github=>"https://github.com/learn-co",
  :blog=>"http://flatironschool.com",
  :profile_quote=>"\"Forget safety. Live where you fear to live. Destroy your reputation. Be notorious.\" - Rumi",
  :bio=> "I'm a school"
}

The only attributes you need to scrape from a student's profile page are the ones listed above: the Twitter URL, LinkedIn URL, GitHub URL, blog URL, profile quote, and bio. The hash you build using those attributes should be formatted like the one in the example above.

Why class methods?

Why are our scraping methods being defined as class methods? Well, we don't need to store any information about the Scraper once it has completed the job of scraping. We simply need to scrape some information and pass that information along to our Student class. So, we don't need to produce instances of Scraper that maintain their own attributes.

The Student Class

We've already given you the attr_accessors that you are required to have for each individual student.

The Student class will use the information returned by the above methods of our Scraper class to create students and add attributes to individual students. However, the Student class shouldn't know about the Scraper class. This means the Student class shouldn't directly interact with the Scraper class: it shouldn't call on the Scraper class in any of its methods or take in the Scraper class itself as an argument. Why is this? We want our program to be as flexible as possible.

We can imagine any number of applications that use a Student model. So we don't want our Student model to be dependent on how it gets information regarding the students it creates. It should simply be ready to take in that information, regardless of its source (be it scraping, a .csv file, or a form on a website).

The .create_from_collection Method

This class method should take in an array of hashes. In fact, we will call Student.create_from_collection with the return value of the Scraper.scrape_index_page method as its argument. The .create_from_collection method should iterate over the array of hashes and create a new individual student using each hash. This brings us to the #initialize method on our Student class.
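
A sketch of what that might look like, assuming the hash-accepting #initialize described in the next section:

```ruby
class Student
  attr_accessor :name, :location, :profile_url

  @@all = []

  # Relies on #initialize accepting a hash and pushing the
  # new student into @@all (covered in the next section).
  def initialize(student_hash)
    student_hash.each { |key, value| send("#{key}=", value) }
    @@all << self
  end

  def self.create_from_collection(students_array)
    students_array.each { |student_hash| Student.new(student_hash) }
  end

  def self.all
    @@all
  end
end

# Usage, with sample hashes shaped like Scraper.scrape_index_page's output:
Student.create_from_collection([
  { :name => "Abby Smith", :location => "Brooklyn, NY" },
  { :name => "Joe Jones",  :location => "Paris, France" }
])
Student.all.length      # => 2
Student.all.first.name  # => "Abby Smith"
```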

The #initialize Method

The #initialize method should take in an argument of a hash and use meta-programming to assign the newly created student attributes and values per the key/value pairs of the hash. Use the #send method to achieve this. This method should also add the newly created student to the Student class' @@all array of all students. You'll need to create this class variable and set it equal to an empty array at the top of your class. Push self into the array at the end of the #initialize method.
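
The metaprogramming piece is just Ruby's #send invoking the writer method named after each key. A minimal sketch:

```ruby
class Student
  attr_accessor :name, :location, :profile_url

  @@all = []

  def initialize(student_hash)
    # send("name=", "Abby Smith") is equivalent to
    # self.name = "Abby Smith" -- each key picks its own writer.
    student_hash.each do |key, value|
      send("#{key}=", value)
    end
    @@all << self
  end

  def self.all
    @@all
  end
end

student = Student.new({ :name => "Abby Smith", :location => "Brooklyn, NY" })
student.name      # => "Abby Smith"
Student.all.size  # => 1
```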

The #add_student_attributes Method

This instance method should take in a hash whose key/value pairs describe additional attributes of an individual student. In fact, we will be calling student.add_student_attributes with the return value of the Scraper.scrape_profile_page method as the argument.

The #add_student_attributes method should iterate over the given hash and use meta-programming to dynamically assign the student attributes and values per the key/value pairs of the hash. Use the #send method to achieve this.

Important: The return value of this method should be the student itself. Use the self keyword.
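
A sketch, using only a few of the required accessors for brevity:

```ruby
class Student
  attr_accessor :name, :twitter, :github, :blog

  def add_student_attributes(attributes_hash)
    attributes_hash.each { |key, value| send("#{key}=", value) }
    # Return the student itself -- without this line the method
    # would return the hash, since #each returns its receiver.
    self
  end
end

student = Student.new
student.name = "Abby Smith"
student.add_student_attributes(:twitter => "http://twitter.com/flatironschool")
student.twitter  # => "http://twitter.com/flatironschool"
```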

The .all Method

This class method should return the contents of the @@all array.

Our Code in Action

Now that you have all your tests passing, you can run our executable file, which relies on our CommandLineInterface class.

We've provided you with all of the code in the CommandLineInterface class. Take a few minutes to read through this class and gain a strong understanding of how it uses the code you wrote in your Scraper and Student classes to make a request to the local files and scrape the students.

Now run the executable file with ruby bin/run. You should see all of the students you scraped and instantiated puts-ed out to the terminal. Great job!

oo-student-scraper's People

Contributors

annjohn, aturkewi, bhollan, curiositypaths, dakotalmartinez, danielseehausen, dependabot[bot], devinburnette, drakeltheryuujin, drewprice, ed-karabinus, enoch2k2, gj, ihollander, jmburges, jonbf, kwebster2, littlejeon, lizbur10, lkwlala, maxwellbenton, mendelb, msuzoagu, peterbell, pletcher, rrcobb, sgharms, snags88, sophiedebenedetto, victhevenot


oo-student-scraper's Issues

Tic-Tac-Toe with AI lab: Mystery red bar

Tic-Tac-Toe with AI lab:

I check off all steps of the "checklist", hit "submit", then a blank red bar is shown at the top of the screen:

https://imgur.com/oXSGkJM

The tests are passing. The README asks for a lot of things to be built on top of what's in the spec; I did my best without any guidance. Are there opaque requirements I'm not passing? What is this red bar, what is keeping me from progressing, and what should I do next? Thanks.

Suggestions for Improvement - Student Scraper

As someone who is pretty much brand new to coding, I found getting started on this lab fairly difficult. The main thing I had a big challenge with was the CSS selectors. I did not have a good understanding of how to select different attributes, classes, etc. using CSS selectors, and even after reading the Learn.co labs covering this I still felt lost. I eventually had to resort to asking a technical coach for assistance, and once they walked me through a couple of the attributes from the lab, I finally sort of got the hang of it.

I think adding some more hints and/or examples of CSS selectors would make this lab much more manageable to someone with little to no experience with HTML/CSS like myself. I don't really feel that the biggest learning point of this lab is trying to teach yourself CSS selectors (I assume that's covered in another part of the course). But I feel like I spent a considerable amount of time struggling with it and it could have been a lot easier if there had been a bit more guidance. I think the main learning points from this lab are objects, classes, instances, and iterating through arrays & hashes. While the CSS selector is important, the material covered prior to this lab doesn't give enough preparation for it (in my opinion).

Issue with the scraper text

I'm seeing a problem with the following lines in scraper_spec.rb:

let!(:student_index_array) {[
  {:name=>"Joe Burgess", :location=>"New York, NY", :profile_url=>"./fixtures/student-site/students/joe-burgess.html"},
  {:name=>"Mathieu Balez", :location=>"New York, NY", :profile_url=>"./fixtures/student-site/students/mathieu-balez.html"},
  {:name=>"Diane Vu", :location=>"New York, NY", :profile_url=>"./fixtures/student-site/students/diane-vu.html"}
]}

When I extract the web page HTML, it shows the URLs as students/student-name.html.

The way the directory structure is organized, this would be the correct path to get to the student profiles. The other path is relative to the test file.

When I changed the tests to point to the new path it passed. Do these lines need to be updated?

add description for Nitrous users

You cannot load a server on Nitrous using the regular means; you have to load a Python server instead. The command is:

python -m SimpleHTTPServer 3000

Then you can view the site via a menu at the top of Nitrous.

please review my hack of the rspec file, let me know if I am off track please

I found that the URL showing on my local browser was this:
http://127.0.0.1:4000/fixtures/student-site/

The test files were set to look for this:
http://127.0.0.1:4000/

So I had to hack the scraper_spec.rb file in the following places to get tests to pass:

let!(:student_index_array) {[{:name=>"Joe Burgess", :location=>"New York, NY", :profile_url=>"http://127.0.0.1:4000/fixtures/student-site/students/joe-burgess.html"},
{:name=>"Mathieu Balez", :location=>"New York, NY", :profile_url=>"http://127.0.0.1:4000/fixtures/student-site/students/mathieu-balez.html"},
{:name=>"Diane Vu", :location=>"New York, NY", :profile_url=>"http://127.0.0.1:4000/fixtures/student-site/students/diane-vu.html"}]}

also here:
describe "#scrape_index_page" do
  it "is a class method that scrapes the student index page and a returns an array of hashes in which each hash represents one student" do
    index_url = "http://127.0.0.1:4000/**fixtures/student-site**/"
    scraped_students = Scraper.scrape_index_page(index_url)
    expect(scraped_students).to be_a(Array)
    expect(scraped_students.first).to have_key(:location)
    expect(scraped_students.first).to have_key(:name)
    expect(scraped_students).to include(student_index_array[0], student_index_array[1], student_index_array[2])
  end
end

Create From Collection Issue

Hey, so in the spec for the create_from_collection method in the Student class, it says the method will use the Scraper class:
.create_from_collection uses the Scraper class to create new students with the correct name and location.

But the solution does not use the Scraper class at all.
def self.create_from_collection(students_array)
  students_array.each do |student_hash|
    Student.new(student_hash)
  end
end

Errors in filenames for student profile pages

In the students folder, Jelson-Rodriguez.html needs to be renamed to jelson-rodriguez.html and Minh-Nguyen.html needs to be renamed to minh-nguyen.html for bin/run to work properly after the lab is complete.

Make test to check for use of OpenURI?

To match the README's demand for OpenURI, there should probably be a test that checks whether self.scrape_index_page and self.scrape_profile_page use OpenURI. Currently, it's possible to avoid using OpenURI:

def self.scrape_index_page(index_url)
    html = File.read(index_url)
    Nokogiri::HTML(html)...

Mislabelled file path and lack of example

In the "Before You Begin" section, it says that we'll be using a copy of the student site found in oo-student-scraper/fixtures/student-site, but this directory doesn't exist. The closest thing I found to it is oo-student-scraper/vcr/fixtures/index_page.yml. Also, after the request to run jekyll serve, there is room for an example of what running that specific command should output, but there's no example.

learn submit not working locally

getting this message....

YABC-6:oo-student-scraper-v-000 admin$ learn submit
Adding changes...
Committing changes...
It looks like you have no changes to commit. Will still try updating your submission...
Pushing changes to GitHub...
Submitting lesson...
Hmm...this doesn't appear to be a Learn lesson. Are you in the right directory?

connecting to browser ide

All night (your time) it's been working. Now I can't connect to the IDE for 30 minutes straight. Things like this have been happening for about a week now.

#scrape_profile_page test is too general

The tests in this lab are too general, specifically for #scrape_profile_page. The test passes as long as a hash is returned, irrespective of what is contained within the hash.

Also, with regard to the solution: if you grab the child elements by index ([0], for example) and the profile page does not have a Twitter icon, then the :twitter key will get the value of the next link that does exist, for example a LinkedIn link. So the keys will have values that don't correspond to their names.

Issue running `learn` on nitrous env

A student had an issue running learn in their Nitrous container. Might be impacting local devs as well. The solution was to add rspec to the Gemfile and specify the nokogiri version as 1.6.6.2.

Cannot run the "run" file in bin.

I passed my tests and tried to run the CLI, but got an error with the css ".text.strip". Then I tried downloading the master from the answer and ran the "run" file in bin. Got the error below:
(master) oo-student-scraper
// ♥ ruby ./bin/run
Traceback (most recent call last):
        2: from ./bin/run:6:in `<main>'
        1: from /home/beneficial-archiver-5319/temporary/oo-student-scraper/lib/command_line_interface.rb:11:in `run'
/home/beneficial-archiver-5319/temporary/oo-student-scraper/lib/command_line_interface.rb:21:in `add_attributes_to_students': undefined method `each' for nil:NilClass (NoMethodError)

Typo in #initialize method paragraph...word achieve

Correct spelling is achieve (see below).
The #initialize method should take in an argument of a hash and use metaprogramming to assign the newly created student attributes and values in accordance with the key/value pairs of the hash. Use the #send method to acheive this.

rspec test goes to page not found

While working on the Scraper project I noticed the rspec test for #scrape_index_page uses an index_url variable set to http://127.0.0.1:4000/fixtures/student-site/index.html, which results in a page not found (./spec/scraper_spec.rb:26). The index page can, however, be brought up by using http://127.0.0.1:4000/.

Is this an error within the rspec test? Should the index_url variable be set to http://127.0.0.1:4000/? When changed, the test passes for scrape_index_page class method.

Similar issue for the #scrape_profile_page test.

Program will not run

After passing all tests for this lab, I attempted to run the program but received the following error. I have not changed command_line_interface.rb and even compared it to the master branch just in case.
(screenshot of the error omitted)

From what I can tell, the problem is that #add_attributes_to_students expects there to be a profile-url in each instance of a student. Each student is only initialized with a name and location. There is an attribute accessor for profile_url, but it is never used in the class and returns nil. Here are lines 20-25 from lib/command_line_interface.rb
def add_attributes_to_students
  Student.all.each do |student|
    attributes = Scraper.scrape_profile_page(BASE_PATH + student.profile_url)
    student.add_student_attributes(attributes)
  end
end

Executing code

Even though my code passes all the tests, when I try "ruby bin/run" I get an error message. It doesn't execute the code. I asked for help and another student who submitted his project came across the same problem, even though he passes the tests.

Capital letters throwing off test

Capital letters in the URLs for two students (Minh Nguyen and Duc Pham) don't match the src URLs provided in the index.html fixture. Both students' profile pages should be named with lowercase letters, and their sources in index.html should match. Otherwise you get a 404 and bin/run will break.

can't run bin/run as students.learn.co website changes

@AnnJohn @jmburges

On the students.learn.co website there are some dead links that 404. To fix this, the solution can wrap the requests to the profile pages in error rescuing (begin/rescue/end). But this worries me from a long-term perspective, because that webpage will change over time and this lab depends on it being fixed in terms of style and functionality. The test suite will continue to run okay because it uses VCR and the recorded responses function as our fixtures, but for students to dynamically scrape the web page, run bin/run, interact with the CommandLineInterface class, and understand how the different parts of their code fit together, the scraper they wrote to pass the tests won't work to scrape the webpage if/when it changes.

So we have a couple of options for now:

1. Copy the students.learn.co page and deploy a static version using gh-pages; this would be a version that no one should ever mess with.
2. Replace the usage of VCR with a fixtures subdirectory that contains HTML files, and direct students to File.open and then parse with Nokogiri, as opposed to sending a web request and then parsing, like this lab: https://github.com/learn-co-curriculum/scraping-kickstarter/blob/solution/kickstarter_scraper.rb
3. For now, as a temporary solution, instruct students to use error rescuing as in the solution I just pushed up, or provide them the begin/rescue/end and explain a bit about why they want it in their method. This solution is very temporary, because as soon as any additional change is made to the webpage, the students' ability to scrape it from the real web will deteriorate.

Conceptual disagreement?

I feel like the .scrape_index_page method description doesn't necessarily jibe with what the rspec test expects from us. From reading the description, it feels conceptually like we can get the values for our hash keys just from what we've scraped from the page. We don't really get a hint that we might need to modify those values (namely the :profile_url value). The scraped data in this case does not match the test output; we need to prepend a specific local file path for the test to pass, rather than use the actual URL value that we can scrape from the index.html page. We are not given any explanation as to why this output needs to be different, so the more curious are left to wonder why, for a symbol named profile_url, we need to output a local file path instead.

Just my 2 cents.

Non-functioning spec for #create_from_collection

This spec which is supposed to test Student.create_from_collection passes when the method is blank:

  describe "#create_from_site" do 
    it "uses the Scraper class to create new students with the correct name and location." do
      Student.create_from_collection(student_index_array)
      expect(Student.class_variable_get(:@@all).first.name).to eq("Alex Patriquin")
      expect(Student.class_variable_get(:@@all).last.name).to eq("Sushanth Bhaskarab")
    end
  end

This after clause isn't working properly because Student.all is not getting cleared:

after(:each) do 
    Student.class_variable_set(:@@all, [])
  end

I would also recommend throwing at least one other expectation in there to make the spec more robust:

  describe "#create_from_site" do 
    it "uses the Scraper class to create new students with the correct name and location." do
      Student.create_from_collection(student_index_array)
      expect(Student.class_variable_get(:@@all).first.name).to eq("Alex Patriquin")
      expect(Student.class_variable_get(:@@all).last.name).to eq("Sushanth Bhaskarab")
    end
  end

typo

the word 'nokogiri' should be capitalized, but is lower case throughout the lab text

Errors when running test suite when using ruby versions other than 2.2.1

I had errors when running locally on a Linux machine using Ruby 2.2.2, and also on Nitrous, which uses 2.1.5. The tests worked on Cloud9 using version 2.2.1. I switched my local machine to Ruby 2.2.1, reinstalled the gems, and the tests ran. I still have not been able to get it working on Nitrous.

Learn command throwing file load errors (even on sol'n branch) - Lab may have setup issue

I did a fresh fork, clone and bundle install of this lab and I got the following error message (similar to what a student was seeing):

/Users/jessica/code/oo-student-scraper-v-000/config.rb:1:in `require': cannot load such file -- vcr (LoadError)
    from /Users/jessica/code/oo-student-scraper-v-000/config.rb:1:in `<top (required)>'
    from /Users/jessica/code/oo-student-scraper-v-000/spec/spec_helper.rb:3:in `require_relative'
    from /Users/jessica/code/oo-student-scraper-v-000/spec/spec_helper.rb:3:in `<top (required)>'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration.rb:1072:in `require'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration.rb:1072:in `block in requires='
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration.rb:1072:in `each'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration.rb:1072:in `requires='
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration_options.rb:103:in `block in process_options_into'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration_options.rb:102:in `each'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration_options.rb:102:in `process_options_into'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/configuration_options.rb:22:in `configure'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/runner.rb:95:in `setup'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/runner.rb:84:in `run'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/runner.rb:69:in `run'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/lib/rspec/core/runner.rb:37:in `invoke'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.1.7/exe/rspec:4:in `<top (required)>'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/rspec:23:in `load'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/rspec:23:in `<main>'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `eval'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `<main>'
/Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:1454:in `unlink': No such file or directory @ unlink_internal - .results.json (Errno::ENOENT)
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:1454:in `block in remove_file'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:1459:in `platform_support'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:1453:in `remove_file'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:791:in `remove_file'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:569:in `block in rm'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:568:in `each'
    from /Users/jessica/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/fileutils.rb:568:in `rm'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/learn-test-1.2.23/lib/learn_test/rspec/runner.rb:118:in `cleanup'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/learn-test-1.2.23/lib/learn_test/rspec/runner.rb:28:in `run'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/gems/learn-test-1.2.23/bin/learn-test:71:in `<top (required)>'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/learn-test:23:in `load'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/learn-test:23:in `<main>'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `eval'
    from /Users/jessica/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `<main>'

I checked out the solution branch, did a bundle install and ran learn and got the same error.

cc @abhishekpillai - this is related to the issue you were troubleshooting with DianeCai
