
pylookup's People

Contributors

cofi, mackong, syl20bnr, tkf, tsgates, zwass


pylookup's Issues

Use Sphinx object inventory instead of HTML

Sphinx generates an objects.inv file containing a mapping of object names to URIs. I found this gist for processing the inventory (it depends on intersphinx, but that dependency shouldn't be too hard to factor out): https://gist.github.com/epc/4118456

Apart from not having to scrape the HTML for information, this would have the benefit of working with projects that don't use Sphinx directly but do generate an objects.inv for interop with Intersphinx, such as Twisted: http://twistedmatrix.com/documents/current/api/objects.inv
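For reference, a version-2 objects.inv is four comment header lines followed by a zlib-compressed body of `name domain:role priority uri dispname` records. A minimal sketch of reading one without intersphinx (the `parse_inventory` helper and its handling of the `$` URI abbreviation are illustrative, not part of pylookup):

```python
import zlib

def parse_inventory(data):
    """Parse a Sphinx objects.inv (version 2) byte string into
    {name: (domain_role, uri)}.  `data` is the raw file contents."""
    lines = data.split(b"\n", 4)
    if not lines[0].startswith(b"# Sphinx inventory version 2"):
        raise ValueError("unsupported inventory version")
    payload = zlib.decompress(lines[4]).decode("utf-8")
    entries = {}
    for line in payload.splitlines():
        # Each record: name domain:role priority uri dispname
        parts = line.split(None, 4)
        if len(parts) < 5:
            continue
        name, domain_role, _priority, uri, _dispname = parts
        if uri.endswith("$"):  # '$' abbreviates "ends with the object name"
            uri = uri[:-1] + name
        entries[name] = (domain_role, uri)
    return entries
```

The entries map almost directly onto pylookup's Element(entry, desc, book, url) records, which is what makes this approach attractive compared to HTML scraping.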

pylookup.el and mode hooks

I wanted to use a different browser for Pylookup pages vs. other URLs.
OK, fine, so I'll just make browse-url-browser-function
buffer-local for the "Pylookup Completions" buffer, right? Not so
straightforward, because pylookup-mode doesn't run any hooks. This change

--- pylookup.el.orig 2011-09-15 13:34:49.199019887 -0700
+++ pylookup.el 2011-09-15 13:21:51.593052329 -0700
@@ -70,7 +70,8 @@
   (use-local-map pylookup-mode-map)
   (setq major-mode 'pylookup-mode)
   (setq mode-name "Pylookup")
-  (setq buffer-read-only t))
+  (setq buffer-read-only t)
+  (run-mode-hooks))

 (defun pylookup-move-prev-line ()
   "Move to previous entry"

fixed that problem for me in version 2.7.1.

Support for running under python2

diff --git a/pylookup.py b/pylookup.py
index bb5c8fb..ddf7735 100755
--- a/pylookup.py
+++ b/pylookup.py
@@ -19,7 +19,7 @@ import formatter
 from os.path import join, dirname, exists, abspath, expanduser
 from contextlib import closing

-if sys.version_info.major == 3:
+if sys.version_info[0] == 3:
     import html.parser    as htmllib
     import urllib.parse   as urlparse
     import urllib.request as urllib
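The reason for the change: `sys.version_info` only gained named fields like `.major` in Python 2.7/3.1, while tuple indexing works on every release. A quick sketch of the portable spelling:

```python
import sys

# sys.version_info is a plain tuple on old interpreters (no .major
# attribute), but indexing works everywhere, so [0] is the portable check.
PY3 = sys.version_info[0] == 3
print("running under python%d" % sys.version_info[0])
```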

error building local numpy database

The ZIP download available at http://docs.scipy.org/doc/ for NumPy causes pylookup.py -u to error:

./pylookup.py -u numpy-html/
Wait for a few seconds ..
Fetching htmls from 'file:///Users/MYNAME/Downloads/tsgates-pylookup-3d3151a/numpy-html/genindex-all.html'
Traceback (most recent call last):
File "./pylookup.py", line 306, in <module>
update(opts.db, opts.url, opts.append)
File "./pylookup.py", line 228, in update
print("Error: fetching file from the web: '%s'" % sys.exc_info())
TypeError: not all arguments converted during string formatting
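The TypeError at the end of the traceback is the `%` operator unpacking the 3-tuple returned by `sys.exc_info()`. A sketch of the failure and the usual fix of wrapping the tuple (the IOError here is a stand-in for the real fetch failure):

```python
import sys

try:
    raise IOError("genindex-all.html not found")
except IOError:
    info = sys.exc_info()  # a 3-tuple: (type, value, traceback)
    # "'%s'" % info raises TypeError: % treats the 3-tuple as three
    # separate arguments for a format string with only one placeholder.
    # Wrapping it in a 1-tuple substitutes the whole tuple's repr instead.
    msg = "Error: fetching file from the web: '%s'" % (info,)
```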

add recipe for melpa

Would you mind if I add a recipe for this project to MELPA, so that more users can install this awesome package easily?

Support for running under python3

pylookup.py doesn't run under python3. A simple 2to3 pass can make it work properly, but that breaks python2 support. It would be great if pylookup worked properly under both python2 and python3.
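One route that keeps both versions working is the fallback-import pattern pylookup.py already uses for pickle, extended to the renamed URL modules (a sketch of the pattern, not the project's actual fix):

```python
# Try the Python 2 name first, fall back to the Python 3 module;
# the rest of the code then uses one consistent name.
try:
    import cPickle as pickle   # Python 2
except ImportError:
    import pickle              # Python 3

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2
```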

Broken link in README

The README says:

 Please check, 
    Web  : http://taesoo.org/Opensource/Pylookup

which does not seem to exist.

Warning: Wrong type argument: window-configuration-p, nil

Any help is appreciated...

When I try to look up some words, this warning comes up in the minibuffer:
"Wrong type argument: window-configuration-p, nil"

And the *Messages* buffer shows:
"pylookup-mode-quit-window: Wrong type argument: window-configuration-p, nil"

My Env:

Emacs 23.4 on CentOS 5.8 in VirtualBox

Don't distribute the docs

It would be better for the user of pylookup to download the version of the docs that matches their installation. I propose deleting the HTML directory and adding this to .gitignore (sorry for the bad formatting):

diff --git a/.gitignore b/.gitignore
index 52f6829..cfe148e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,5 @@
 *.elc
 /*.zip
 /makefile
+python-*-docs-html
+pylookup*.db

patch for python 3.2.2

--- /cygdrive/c/Temp/pylookup.py 2011-12-22 11:03:22.802593900 +0100
+++ pylookup.py 2011-12-27 10:18:05.311482700 +0100
@@ -1,311 +1,337 @@
-#!/usr/bin/env python
-
-"""
-Pylookup is to lookup entries from python documentation, especially within
-emacs. Pylookup adopts most of ideas from haddoc, lovely toolkit by Martin
-Blais.
-
-(usage)
-  ./pylookup.py -l ljust
-  ./pylookup.py -u http://docs.python.org
-
-"""
-
-from __future__ import with_statement
-
-import os
-import sys
-import re
-try:
-    import cPickle as pickle
-except:
-    import pickle
-import formatter
-
-from os.path import join, dirname, exists, abspath, expanduser
-from contextlib import closing
-
-if sys.version_info[0] == 3:
-    import html.parser    as htmllib
-    import urllib.parse   as urlparse
-    import urllib.request as urllib
-else:
-    import htmllib, urllib, urlparse
-
-VERBOSE = False
-FORMATS = {
-    "Emacs"    : "{entry}\t({desc})\t[{book}];{url}",
-    "Terminal" : "{entry}\t({desc})\t[{book}]\n{url}"
-    }
-
-def build_book(s, num):
-    """
-    Build book identifier from `s`, with `num` links.
-    """
-    for matcher, replacement in (("library", "lib"),
-                                 ("c-api", "api"),
-                                 ("reference", "ref"),
-                                 ("", "etc")):
-        if matcher in s:
-            return replacement if num == 1 else "%s/%d" % (replacement, num)
-
-def trim(s):
-    """
-    Add any globle filtering rules here
-    """
-    s = s.replace( "Python Enhancement Proposals!", "")
-    s = s.replace( "PEP ", "PEP-")
-    return s
-
-class Element(object):
-    def __init__(self, entry, desc, book, url):
-        self.book  = book
-        self.url   = url
-        self.desc  = desc
-        self.entry = entry
-
-    def __format__(self, format_spec):
-        return format_spec.format(entry=self.entry, desc=self.desc,
-                                  book=self.book, url=self.url)
-
-    def match_insensitive(self, key):
-        """
-        Match key case insensitive against entry and desc.
-
-        `key` : Lowercase string.
-        """
-        return key in self.entry.lower() or key in self.desc.lower()
-
-    def match_sensitive(self, key):
-        """
-        Match key case sensitive against entry and desc.
-
-        `key` : Lowercase string.
-        """
-        return key in self.entry or key in self.desc
-
-    def match_in_entry_insensitive(self, key):
-        """
-        Match key case insensitive against entry.
-
-        `key` : Lowercase string.
-        """
-        return key in self.entry.lower()
-
-    def match_in_entry_sensitive(self, key):
-        """
-        Match key case sensitive against entry.
-
-        `key` : Lowercase string.
-        """
-        return key in self.entry
-
-def get_matcher(insensitive=True, desc=True):
-    """
-    Get `Element.match_*` function.
-
-    >>> get_matcher(0, 0)
-    <unbound method Element.match_in_entry_sensitive>
-    >>> get_matcher(1, 0)
-    <unbound method Element.match_in_entry_insensitive>
-    >>> get_matcher(0, 1)
-    <unbound method Element.match_sensitive>
-    >>> get_matcher(1, 1)
-    <unbound method Element.match_insensitive>
-
-    """
-    _sensitive = "_insensitive" if insensitive else "_sensitive"
-    _in_entry = "" if desc else "_in_entry"
-    return getattr(Element, "match{0}{1}".format(_in_entry, _sensitive))
-
-class IndexProcessor( htmllib.HTMLParser ):
-    """
-    Extract the index links from a Python HTML documentation index.
-    """
-    def __init__( self, writer, dirn):
-        htmllib.HTMLParser.__init__( self, formatter.NullFormatter() )
-
-        self.writer     = writer
-        self.dirn       = dirn
-        self.entry      = ""
-        self.desc       = ""
-        self.list_entry = False
-        self.do_entry   = False
-        self.one_entry  = False
-        self.num_of_a   = 0
-        self.desc_cnt   = 0
-
-    def start_dd( self, att ):
-        self.list_entry = True
-
-    def end_dd( self ):
-        self.list_entry = False
-
-    def start_dt( self, att ):
-        self.one_entry = True
-        self.num_of_a  = 0
-
-    def end_dt( self ):
-        self.do_entry = False
-
-    def start_a( self, att ):
-        if self.one_entry:
-            self.url = join( self.dirn, dict( att )[ 'href' ] )
-            self.save_bgn()
-
-    def end_a( self ):
-        global VERBOSE
-        if self.one_entry:
-            if self.num_of_a == 0 :
-                self.desc = self.save_end()
-
-                if VERBOSE:
-                    self.desc_cnt += 1
-                    if self.desc_cnt % 100 == 0:
-                        sys.stdout.write("%04d %s\r" \
-                                             % (self.desc_cnt, self.desc.ljust(80)))
-
-                # extract fist element
-                #  ex) __and__() (in module operator)
-                if not self.list_entry :
-                    self.entry = re.sub( "\([^)]+\)", "", self.desc )
-
-                    # clean up PEP
-                    self.entry = trim(self.entry)
-
-                    match = re.search( "\([^)]+\)", self.desc )
-                    if match :
-                        self.desc = match.group(0)
-
-                self.desc = trim(re.sub( "[()]", "", self.desc ))
-
-            self.num_of_a += 1
-            book = build_book(self.url, self.num_of_a)
-            e = Element(self.entry, self.desc, book, self.url)
-
-            self.writer(e)
-
-def update(db, urls, append=False):
-    """Update database with entries from `urls`.
-
-    `db`     : filename to database
-    `urls`   : list of URL
-    `append` : append to db
-    """
-    mode = "ab" if append else "wb"
-    with open(db, mode) as f:
-        writer = lambda e: pickle.dump(e, f)
-        for url in urls:
-            # detech 'file' or 'url' schemes
-            parsed = urlparse.urlparse(url)
-            if not parsed.scheme or parsed.scheme == "file":
-                dst = abspath(expanduser(parsed.path))
-                if not os.path.exists(dst):
-                    print("Error: %s doesn't exist" % dst)
-                    exit(1)
-                url = "file://%s" % dst
-            else:
-                url = parsed.geturl()
-
-            # direct to genindex-all.html
-            if not url.endswith('.html'):
-                url = url.rstrip("/") + "/genindex-all.html"
-
-            print("Wait for a few seconds ..\nFetching htmls from '%s'" % url)
-
-            try:
-                index = urllib.urlopen(url).read()
-                if not issubclass(type(index), str):
-                    index = index.decode()
-                parser = IndexProcessor(writer, dirname(url))
-                with closing(parser):
-                    parser.feed(index)
-            except IOError:
-                print("Error: fetching file from the web: '%s'" % sys.exc_info())
-
-def lookup(db, key, format_spec, out=sys.stdout, insensitive=True, desc=True):
-    """Lookup `key` from database and print to `out`.
-
-    `db`          : filename to database
-    `key`         : key to lookup
-    `out`         : file-like to write to
-    `insensitive` : lookup key case insensitive
-    """
-    matcher = get_matcher(insensitive, desc)
-    if insensitive:
-        key = key.lower()
-
-    with open(db, "rb") as f:
-        try:
-            while True:
-                e = pickle.load(f)
-                if matcher(e, key):
-                    out.write('%s\n' % format(e, format_spec))
-        except EOFError:
-            pass
-
-def cache(db, out=sys.stdout):
-    """Print unique entries from `db` to `out`.
-
-    `db`  : filename to database
-    `out` : file-like to write to
-    """
-    with open(db, "rb") as f:
-        keys = set()
-        try:
-            while True:
-                e = pickle.load(f)
-                k = e.entry
-                k = re.sub( "\([^)]*\)", "", k )
-                k = re.sub( "\[[^]]*\]", "", k )
-                keys.add(k)
-        except EOFError:
-            pass
-        for k in keys:
-            out.write('%s\n' % k)
-
-if __name__ == "__main__":
-    import optparse
-    parser = optparse.OptionParser( __doc__.strip() )
-    parser.add_option( "-d", "--db",
-                       help="database name",
-                       dest="db", default="pylookup.db" )
-    parser.add_option( "-l", "--lookup",
-                       help="keyword to search",
-                       dest="key" )
-    parser.add_option( "-u", "--update",
-                       help="update url or path",
-                       action="append", type="str", dest="url" )
-    parser.add_option( "-c", "--cache" ,
-                       help="extract keywords, internally used",
-                       action="store_true", default=False, dest="cache")
-    parser.add_option( "-a", "--append",
-                       help="append to the db from multiple sources",
-                       action="store_true", default=False, dest="append")
-    parser.add_option( "-f", "--format",
-                       help="type of output formatting, valid: Emacs, Terminal",
-                       choices=["Emacs", "Terminal"],
-                       default="Terminal", dest="format")
-    parser.add_option( "-i", "--insensitive", default=1, choices=['0', '1'],
-                       help="SEARCH OPTION: insensitive search "
-                       "(valid: 0, 1; default: %default)")
-    parser.add_option( "-s", "--desc", default=1, choices=['0', '1'],
-                       help="SEARCH OPTION: include description field "
-                       "(valid: 0, 1; default: %default)")
-    parser.add_option("-v", "--verbose",
-                      help="verbose", action="store_true",
-                      dest="verbose", default=False)
-
-    ( opts, args ) = parser.parse_args()
-
-    VERBOSE = opts.verbose
-
-    if opts.url:
-        update(opts.db, opts.url, opts.append)
-    if opts.cache:
-        cache(opts.db)
-    if opts.key:
-        lookup(opts.db, opts.key, FORMATS[opts.format],
-               insensitive=int(opts.insensitive), desc=int(opts.desc))
+#!/usr/bin/env python
+
+"""
+Pylookup is to lookup entries from python documentation, especially within
+emacs. Pylookup adopts most of ideas from haddoc, lovely toolkit by Martin
+Blais.
+
+(usage)
+  ./pylookup.py -l ljust
+  ./pylookup.py -u http://docs.python.org
+
+"""
+
+
+
+import os
+import sys
+import re
+try:
+    import cPickle as pickle
+except:
+    import pickle
+import formatter
+
+from os.path import join, dirname, exists, abspath, expanduser
+from contextlib import closing
+
+if sys.version_info[0] == 3:
+    import html.parser    as htmllib
+    import urllib.parse   as urlparse
+    import urllib.request as urlrequest
+else:
+    import htmllib
+    import urllib.parse as urlparse
+    import urllib.request as urlrequest
+
+VERBOSE = False
+FORMATS = {
+    "Emacs"    : "{entry}\t({desc})\t[{book}];{url}",
+    "Terminal" : "{entry}\t({desc})\t[{book}]\n{url}"
+    }
+
+def build_book(s, num):
+    """
+    Build book identifier from `s`, with `num` links.
+    """
+    for matcher, replacement in (("library", "lib"),
+                                 ("c-api", "api"),
+                                 ("reference", "ref"),
+                                 ("", "etc")):
+        if matcher in s:
+            return replacement if num == 1 else "%s/%d" % (replacement, num)
+
+def trim(s):
+    """
+    Add any globle filtering rules here
+    """
+    s = s.replace( "Python Enhancement Proposals!", "")
+    s = s.replace( "PEP ", "PEP-")
+    return s
+
+class Element(object):
+    def __init__(self, entry, desc, book, url):
+        self.book  = book
+        self.url   = url
+        self.desc  = desc
+        self.entry = entry
+
+    def __format__(self, format_spec):
+        return format_spec.format(entry=self.entry, desc=self.desc,
+                                  book=self.book, url=self.url)
+
+    def match_insensitive(self, key):
+        """
+        Match key case insensitive against entry and desc.
+
+        `key` : Lowercase string.
+        """
+        return key in self.entry.lower() or key in self.desc.lower()
+
+    def match_sensitive(self, key):
+        """
+        Match key case sensitive against entry and desc.
+
+        `key` : Lowercase string.
+        """
+        return key in self.entry or key in self.desc
+
+    def match_in_entry_insensitive(self, key):
+        """
+        Match key case insensitive against entry.
+
+        `key` : Lowercase string.
+        """
+        return key in self.entry.lower()
+
+    def match_in_entry_sensitive(self, key):
+        """
+        Match key case sensitive against entry.
+
+        `key` : Lowercase string.
+        """
+        return key in self.entry
+
+def get_matcher(insensitive=True, desc=True):
+    """
+    Get `Element.match_*` function.
+
+    >>> get_matcher(0, 0)
+    <unbound method Element.match_in_entry_sensitive>
+    >>> get_matcher(1, 0)
+    <unbound method Element.match_in_entry_insensitive>
+    >>> get_matcher(0, 1)
+    <unbound method Element.match_sensitive>
+    >>> get_matcher(1, 1)
+    <unbound method Element.match_insensitive>
+
+    """
+    _sensitive = "_insensitive" if insensitive else "_sensitive"
+    _in_entry = "" if desc else "_in_entry"
+    return getattr(Element, "match{0}{1}".format(_in_entry, _sensitive))
+
+class IndexProcessor( htmllib.HTMLParser ):
+    """
+    Extract the index links from a Python HTML documentation index.
+    """
+    def __init__( self, writer, dirn):
+        htmllib.HTMLParser.__init__( self, formatter.NullFormatter() )
+
+        self.writer     = writer
+        self.dirn       = dirn
+        self.entry      = ""
+        self.desc       = ""
+        self.list_entry = False
+        self.do_entry   = False
+        self.one_entry  = False
+        self.num_of_a   = 0
+        self.desc_cnt   = 0
+        self.data       = None
+
+    def handle_starttag(self, tag, attr):
+        if tag == 'dt':
+            self.start_dt(attr)
+        elif tag == 'dd':
+            self.start_dd(attr)
+        elif tag == 'a':
+            self.start_a(attr)
+
+    def handle_data(self, data):
+        self.data = data
+
+    def handle_endtag(self, tag):
+        if tag == 'dt':
+            self.end_dt()
+        elif tag == 'dd':
+            self.end_dd()
+        elif tag == 'a':
+            self.end_a()
+
+    def start_dd( self, att ):
+        self.list_entry = True
+
+    def end_dd( self ):
+        self.list_entry = False
+
+    def start_dt( self, att ):
+        self.one_entry = True
+        self.num_of_a  = 0
+
+    def end_dt( self ):
+        self.do_entry = False
+
+    def start_a( self, att ):
+        if self.one_entry:
+            self.url = join( self.dirn, dict( att )[ 'href' ] )
+            if sys.version_info[0] == 2:
+                self.save_bgn()
+
+    def end_a( self ):
+        global VERBOSE
+        if self.one_entry:
+            if self.num_of_a == 0 :
+                if sys.version_info[0] == 2:
+                    self.desc = self.save_end()
+                else:
+                    self.desc = self.data
+
+                if VERBOSE:
+                    self.desc_cnt += 1
+                    if self.desc_cnt % 100 == 0:
+                        sys.stdout.write("%04d %s\r" \
+                                             % (self.desc_cnt, self.desc.ljust(80)))
+
+                # extract fist element
+                #  ex) __and__() (in module operator)
+                if not self.list_entry :
+                    self.entry = re.sub( "\([^)]+\)", "", self.desc )
+
+                    # clean up PEP
+                    self.entry = trim(self.entry)
+
+                    match = re.search( "\([^)]+\)", self.desc )
+                    if match :
+                        self.desc = match.group(0)
+
+                self.desc = trim(re.sub( "[()]", "", self.desc ))
+
+            self.num_of_a += 1
+            book = build_book(self.url, self.num_of_a)
+            e = Element(self.entry, self.desc, book, self.url)
+            self.writer(e)
+
+def update(db, urls, append=False):
+    """Update database with entries from `urls`.
+
+    `db`     : filename to database
+    `urls`   : list of URL
+    `append` : append to db
+    """
+    mode = "ab" if append else "wb"
+    with open(db, mode) as f:
+        writer = lambda e: pickle.dump(e, f)
+        for url in urls:
+            # detech 'file' or 'url' schemes
+            parsed = urlparse.urlparse(url)
+            if not parsed.scheme or parsed.scheme == "file":
+                dst = abspath(expanduser(parsed.path))
+                if not os.path.exists(dst):
+                    print("Error: %s doesn't exist" % dst)
+                    exit(1)
+                url = "file://%s" % dst
+            else:
+                url = parsed.geturl()
+
+            # direct to genindex-all.html
+            if not url.endswith('.html'):
+                url = url.rstrip("/") + "/genindex-all.html"
+
+            print("Wait for a few seconds ..\nFetching htmls from '%s'" % url)
+
+            try:
+                index = urlrequest.urlopen(url).read()
+                if not issubclass(type(index), str):
+                    index = index.decode()
+                parser = IndexProcessor(writer, dirname(url))
+                with closing(parser):
+                    parser.feed(index)
+            except IOError:
+                print("Error: fetching file from the web: '%s'" % sys.exc_info())
+
+def lookup(db, key, format_spec, out=sys.stdout, insensitive=True, desc=True):
+    """Lookup `key` from database and print to `out`.
+
+    `db`          : filename to database
+    `key`         : key to lookup
+    `out`         : file-like to write to
+    `insensitive` : lookup key case insensitive
+    """
+    matcher = get_matcher(insensitive, desc)
+    if insensitive:
+        key = key.lower()
+
+    with open(db, "rb") as f:
+        try:
+            while True:
+                e = pickle.load(f)
+                if matcher(e, key):
+                    out.write('%s\n' % format(e, format_spec))
+        except EOFError:
+            pass
+
+def cache(db, out=sys.stdout):
+    """Print unique entries from `db` to `out`.
+
+    `db`  : filename to database
+    `out` : file-like to write to
+    """
+    with open(db, "rb") as f:
+        keys = set()
+        try:
+            while True:
+                e = pickle.load(f)
+                k = e.entry
+                k = re.sub( "\([^)]*\)", "", k )
+                k = re.sub( "\[[^]]*\]", "", k )
+                keys.add(k)
+        except EOFError:
+            pass
+        for k in keys:
+            out.write('%s\n' % k)
+
+if __name__ == "__main__":
+    import optparse
+    parser = optparse.OptionParser( __doc__.strip() )
+    parser.add_option( "-d", "--db",
+                       help="database name",
+                       dest="db", default="pylookup.db" )
+    parser.add_option( "-l", "--lookup",
+                       help="keyword to search",
+                       dest="key" )
+    parser.add_option( "-u", "--update",
+                       help="update url or path",
+                       action="append", type="str", dest="url" )
+    parser.add_option( "-c", "--cache" ,
+                       help="extract keywords, internally used",
+                       action="store_true", default=False, dest="cache")
+    parser.add_option( "-a", "--append",
+                       help="append to the db from multiple sources",
+                       action="store_true", default=False, dest="append")
+    parser.add_option( "-f", "--format",
+                       help="type of output formatting, valid: Emacs, Terminal",
+                       choices=["Emacs", "Terminal"],
+                       default="Terminal", dest="format")
+    parser.add_option( "-i", "--insensitive", default=1, choices=['0', '1'],
+                       help="SEARCH OPTION: insensitive search "
+                       "(valid: 0, 1; default: %default)")
+    parser.add_option( "-s", "--desc", default=1, choices=['0', '1'],
+                       help="SEARCH OPTION: include description field "
+                       "(valid: 0, 1; default: %default)")
+    parser.add_option("-v", "--verbose",
+                      help="verbose", action="store_true",
+                      dest="verbose", default=False)
+
+    ( opts, args ) = parser.parse_args()
+
+    VERBOSE = opts.verbose
+
+    if opts.url:
+        update(opts.db, opts.url, opts.append)
+    if opts.cache:
+        cache(opts.db)
+    if opts.key:
+        lookup(opts.db, opts.key, FORMATS[opts.format],
+               insensitive=int(opts.insensitive), desc=int(opts.desc))

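Aside from the parser changes, the patch keeps pylookup's database format: a file of back-to-back pickles written by update() and read by lookup()/cache() until EOFError. A minimal round-trip of that scheme (with made-up entries, using an in-memory buffer instead of the db file):

```python
import io
import pickle

# Write phase: one pickle.dump per entry, exactly as update() does.
buf = io.BytesIO()
for entry in ("str.ljust", "str.rjust", "PEP-8"):
    pickle.dump(entry, buf)

# Read phase: pickle.load in a loop until the stream is exhausted,
# which pickle signals with EOFError, exactly as lookup() does.
buf.seek(0)
entries = []
try:
    while True:
        entries.append(pickle.load(buf))
except EOFError:
    pass
```

The appeal of this design is that -a/--append can extend the database by simply opening the file in "ab" mode and dumping more records.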
TypeError: __init__() takes exactly 1 positional argument (2 given)

I've just downloaded pylookup and I want to update the database, but I get this error, even with local files:

python3 pylookup.py -u http://docs.python.org/

Wait for a few seconds ..
Fetching htmls from 'http://docs.python.org/'
Traceback (most recent call last):
File "pylookup.py", line 241, in <module>
update(opts.db, opts.url, opts.append)
File "pylookup.py", line 165, in update
parser = IndexProcessor(writer, dirname(url))
File "pylookup.py", line 88, in __init__
htmllib.HTMLParser.__init__( self, formatter.NullFormatter() )
TypeError: __init__() takes exactly 1 positional argument (2 given)

Thanks,
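The error occurs because html.parser.HTMLParser.__init__ on Python 3 accepts no positional formatter argument (the old Python 2 htmllib did), so the parent constructor has to be called bare, and text is collected through handle_* callbacks instead of save_bgn()/save_end(). A minimal sketch of the Python 3 style (the LinkCollector class is illustrative, not pylookup's code):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every <a href=...> target, index-scraper style."""

    def __init__(self):
        HTMLParser.__init__(self)  # no NullFormatter argument on Python 3
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

parser = LinkCollector()
parser.feed('<dt><a href="library/stdtypes.html#str.ljust">ljust</a></dt>')
```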

2.6.6 documentation not found on `make download`

If you're like me, you're too lazy to switch to one of the supported versions.

I found the documentation in a separate section of the site, and modified the makefile instead:

VER := $(shell python --version 2>&1 | grep -o "[0-9].[0-9]")#.[0-9]")                                                                
ZIP := python-${VER}-docs-html.zip
URL := http://docs.python.org/ftp/python/doc/${VER}/${ZIP}
LOC := python-docs-html

download:
        @if [ ! -e $(ZIP) ] ; then     \                                                                                              
                echo "Downloading ${URL}"; \                                                                                          
                wget ${URL};               \                                                                                          
                unzip ${ZIP};              \                                                                                          
        fi
        ./pylookup.py -u ${LOC}

.PHONY: download

This is simply here for people who might get stuck on this issue, or in case you'd like to accommodate those of us who are slow to move up.

pylookup gives error 127

./pylookup.py -u python-2.7.1-docs-html
/usr/bin/env: python2: No such file or directory
make: *** [download] Error 127

Patch for making it work with Python 3.10

The following appears to work for me (based on a very quick check). Pasting in case it's useful to others.

--- pylookup.py~        2022-11-25 01:29:37.643863100 +0530
+++ pylookup.py 2022-11-26 18:11:38.158974986 +0530
@@ -20,7 +20,6 @@
     import cPickle as pickle
 except:
     import pickle
-import formatter

 from os.path import join, dirname, exists, abspath, expanduser
 from contextlib import closing
@@ -126,7 +125,7 @@
     """

     def __init__( self, writer, dirn):
-        htmllib.HTMLParser.__init__( self, formatter.NullFormatter() )
+        html.parser.HTMLParser.__init__( self, formatter.NullFormatter() )

         self.writer     = writer
         self.dirn       = dirn
