
Comments (11)

hakwerk commented on August 9, 2024

The domain under Lockdown in hostname-policy.yaml should not have a leading dot. Try this:

Lockdown:
  - "foo.internal"

And the same for the lockdown value in the file /home/labca/admin/data/config.json.
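For reference, a sketch of what the relevant part of config.json might look like after the change — the surrounding structure of LabCA's config.json may differ; only the key name lockdown and the value are taken from this thread:

```json
{
  "lockdown": "foo.internal"
}
```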

from labca.

DavCh1 commented on August 9, 2024

Changing both of those files resulted in this error:

Thu Aug 19 18:31:53 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: az-ca01.foo.internal
Getting directory...
Directory found!
Registering account...
Already registered!
Creating new order...
Order created!
Verifying az-ca01.foo.internal...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in <module>
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 149, in get_crt
raise ValueError("Challenge did not pass for {0}: {1}".format(domain, authorization))
ValueError: Challenge did not pass for az-ca01.foo.internal: {u'status': u'invalid', u'challenges': [{u'status': u'invalid', u'validationRecord': [{u'url': u'http://az-ca01.foo.internal/.well-known/acme-challenge/HVAF2q-J4JeyIIrM-Q3-n8DFeZyCGxZMzUWrs3aHMZY', u'hostname': u'az-ca01.foo.internal', u'addressUsed': u'192.168.66.72', u'port': u'80', u'addressesResolved': [u'192.168.66.72']}], u'url': u'https://az-ca01.foo.internal/acme/chall-v3/3/0U9jkQ', u'token': u'HVAF2q-J4JeyIIrM-Q3-n8DFeZyCGxZMzUWrs3aHMZY', u'error': {u'status': 400, u'type': u'urn:ietf:params:acme:error:dns', u'detail': u"DNS problem: SERVFAIL looking up CAA for internal - the domain's nameservers may be malfunctioning"}, u'validated': u'2021-08-19T18:31:54Z', u'type': u'http-01'}], u'identifier': {u'type': u'dns', u'value': u'az-ca01.foo.internal'}, u'expires': u'2021-08-26T18:31:54Z'}

Notice that the failing CAA lookup now reports the domain as 'internal', not 'foo.internal' (and there are no CAA records set up on this domain, btw).
So I rolled back the change to /home/labca/admin/data/config.json to put the leading dot back into the lockdown value.

And now I get this error:

Thu Aug 19 18:35:23 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: az-ca01.foo.internal
Getting directory...
Directory found!
Registering account...
Already registered!
Creating new order...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in <module>
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 120, in get_crt
order, _, order_headers = _send_signed_request(directory['newOrder'], order_payload, "Error creating new order")
File "/home/labca/acme_tiny.py", line 59, in _send_signed_request
return _do_request(url, data=data.encode('utf8'), err_msg=err_msg, depth=depth)
File "/home/labca/acme_tiny.py", line 45, in _do_request
raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error creating new order:
Url: https://az-ca01.foo.internal/acme/new-order
Data: {"protected": "eyJ1cmwiOiAiaHR0cHM6Ly9hei1jYTAxLm53ZWguaW50ZXJuYWwvYWNtZS9uZXctb3JkZXIiLCAiYWxnIjogIlJTMjU2IiwgIm5vbmNlIjogInRhcm90eFV5Y3FYSzBUdGF5T2FEUWJ6NU5qU25GSFVySFN1ZHJwMHdvRXZsUmtrIiwgImtpZCI6ICJodHRwczovL2F6LWNhMDEubndlaC5pbnRlcm5hbC9hY21lL2FjY3QvMSJ9", "payload": "eyJpZGVudGlmaWVycyI6IFt7InR5cGUiOiAiZG5zIiwgInZhbHVlIjogImF6LWNhMDEubndlaC5pbnRlcm5hbCJ9XX0", "signature": "idGfSS2MN4Tzo6_LjlSLSh74b3KuW7y0VZXuzscsDzXUdy_Anz7XvbVUMyX4Gdg13yyEeGIuUckMWZuZeYI38z5yfyV540IP66n5qqgggTZNcMY3NoCaK8dHFR41fOru0zwzqIrEsJkMsAVpMvlZwsU-4rDpiuigRUatYYP-QpicoKYTeoK1RAdRbVqxWIYRanX_11k0dV6Vu3fC-XlipBPzZsk84B6SrsTYQ8rmpTjAo_FbDhAb91Qq7gY1G7ChgDGPuD6v4jHdGbWJn31w-F_DDSjlyveXjnBaOEGnL50mGdB05mxs2EN1cuUc-enAGKDOI08kkDG_GlBkxz1LbUYYrRUqM0WE5r1UIHe8gFxM6RopBJwowa_P7KMyCRDGGYLbSR9-oVLdA6uAG8KmhM84BGaNIz5GlGu2qL5Scxbn2tifFoTpUe1csWrQ2B5vUaEf77Qvobv9TLUxi6UH52r14JJIem3IUpSTqdCYM2u4w90pLQ5f6WM_Mt6Azfj8z81AX0gjoacKEbBUQNJemmt9rTv3vusaJcCkwCdRT1IuwHPyZHsCFcU35KLmMnHd8yzo7xw09Yzvbs5s3dovdJJsOsUie2xfAEmyJSVkPaU68VKMl-1v3MuU973SAAfMhmlHMLngBocZHMIiyS0_vvy5ixOf_zz8PigIW75Vez0"}
Response Code: 429
Response: {
"type": "urn:ietf:params:acme:error:rateLimited",
"detail": "Error creating new order :: too many failed authorizations recently: see http://az-ca01.foo.internal/rate-limits",
"status": 429
}

Is there any way to lift the limit? Or is it just a case of waiting?

hakwerk commented on August 9, 2024

The limits are configured in /home/labca/boulder_labca/rate-limit-policies.yml; invalidAuthorizationsPerAccount is set to a threshold of 3 within a 5 minute window.
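A sketch of what that entry in rate-limit-policies.yml typically looks like — the threshold/window field names follow Boulder's rate-limit policy format, but verify against your own file before editing:

```yaml
# Raising the threshold or shortening the window loosens the limit;
# the boulder container most likely needs a restart to pick up changes.
invalidAuthorizationsPerAccount:
  threshold: 3
  window: 5m
```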

The changes to /home/labca/admin/data/config.json do not affect the boulder container directly (even after a docker-compose restart), but are used by the LabCA install script when installing/updating. The next time you run the install script, the value in config.json will overwrite the Lockdown value in hostname-policy.yaml!

The SERVFAIL error (which you'll probably see again after the 'too many failed authorizations' has expired) seems to indicate an issue with connectivity to the DNS server (the one in /home/labca/boulder_labca/config/va.json) from inside the boulder docker container. Are there any more details in the boulder logs (docker-compose logs -f boulder) when the error occurs?

DavCh1 commented on August 9, 2024

It's back to the CAA record error (but, again, with the truncated domain name of 'internal'):

Fri Aug 20 09:24:05 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: az-ca01.foo.internal
Getting directory...
Directory found!
Registering account...
Already registered!
Creating new order...
Order created!
Verifying az-ca01.foo.internal...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in <module>
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 149, in get_crt
raise ValueError("Challenge did not pass for {0}: {1}".format(domain, authorization))
ValueError: Challenge did not pass for az-ca01.foo.internal: {u'status': u'invalid', u'challenges': [{u'status': u'invalid', u'validationRecord': [{u'url': u'http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU', u'hostname': u'az-ca01.foo.internal', u'addressUsed': u'192.168.66.72', u'port': u'80', u'addressesResolved': [u'192.168.66.72']}], u'url': u'https://az-ca01.foo.internal/acme/chall-v3/6/fVxyvg', u'token': u'b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU', u'error': {u'status': 400, u'type': u'urn:ietf:params:acme:error:dns', u'detail': u"DNS problem: SERVFAIL looking up CAA for internal - the domain's nameservers may be malfunctioning"}, u'validated': u'2021-08-20T09:24:06Z', u'type': u'http-01'}], u'identifier': {u'type': u'dns', u'value': u'az-ca01.foo.internal'}, u'expires': u'2021-08-27T09:24:06Z'}

From the docker-compose logs -f boulder output:

boulder_1 | I210820092406 boulder-va 2ZbxRAA [AUDIT] Attempting to validate HTTP-01 for "az-ca01.foo.internal" with GET to "http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU"
boulder_1 | I210820092406 boulder-va x8P-4A0 [AUDIT] Validation result JSON={"ID":"6","Requester":1,"Hostname":"az-ca01.foo.internal","Challenge":{"type":"http-01","status":"invalid","error":{"type":"dns","detail":"DNS problem: SERVFAIL looking up CAA for internal - the domain's nameservers may be malfunctioning","status":400},"token":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","keyAuthorization":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU.yko6sDTDfOMkUj4jNM0oTOuDuhF2uCeX7jB0n3BPUkY","validationRecord":[{"url":"http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","hostname":"az-ca01.foo.internal","port":"80","addressesResolved":["192.168.66.72"],"addressUsed":"192.168.66.72"}]},"ValidationLatency":0.011,"Error":"dns :: DNS problem: SERVFAIL looking up CAA for internal - the domain's nameservers may be malfunctioning"}
boulder_1 | I210820092406 boulder-remoteva 2ZbxRAA [AUDIT] Attempting to validate HTTP-01 for "az-ca01.foo.internal" with GET to "http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU"
boulder_1 | I210820092406 boulder-remoteva 2ZbxRAA [AUDIT] Attempting to validate HTTP-01 for "az-ca01.foo.internal" with GET to "http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU"
boulder_1 | E210820092406 boulder-va 3-b6kAo [AUDIT] Remote VA "va1.boulder:9097".PerformValidation failed: malformed :: rpc error: code = Canceled desc = context canceled
boulder_1 | E210820092406 boulder-va wIallAU [AUDIT] Remote VA "va1.boulder:9098".PerformValidation failed: malformed :: rpc error: code = Canceled desc = context canceled
boulder_1 | I210820092406 boulder-remoteva 0ZKr4Qo [AUDIT] Validation result JSON={"ID":"6","Requester":1,"Hostname":"az-ca01.foo.internal","Challenge":{"type":"http-01","status":"invalid","error":{"type":"connection","detail":"Fetching http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU: Error getting validation data","status":400},"token":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","keyAuthorization":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU.yko6sDTDfOMkUj4jNM0oTOuDuhF2uCeX7jB0n3BPUkY","validationRecord":[{"url":"http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","hostname":"az-ca01.foo.internal","port":"80","addressesResolved":["192.168.66.72"],"addressUsed":"192.168.66.72"}]},"ValidationLatency":0.01,"Error":"connection :: Fetching http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU: Error getting validation data"}
boulder_1 | I210820092406 boulder-remoteva gM7BLQA [AUDIT] Validation result JSON={"ID":"6","Requester":1,"Hostname":"az-ca01.foo.internal","Challenge":{"type":"http-01","status":"invalid","error":{"type":"connection","detail":"Fetching http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU: Error getting validation data","status":400},"token":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","keyAuthorization":"b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU.yko6sDTDfOMkUj4jNM0oTOuDuhF2uCeX7jB0n3BPUkY","validationRecord":[{"url":"http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU","hostname":"az-ca01.foo.internal","port":"80","addressesResolved":["192.168.66.72"],"addressUsed":"192.168.66.72"}]},"ValidationLatency":0.013,"Error":"connection :: Fetching http://az-ca01.foo.internal/.well-known/acme-challenge/b-x_Xy-CbJFv0ha6XVy3hniVJUiXpOYhD-39OzobOtU: Error getting validation data"}

From the docker-compose logs labca output:

labca_1 | 2021/08/20 09:24:08 ERROR: Message from server: 'ERROR! On line 59 in commander script
labca_1 | '
labca_1 | 2021/08/20 09:24:08 errorHandler: ERROR! On line 59 in commander script
labca_1 | main._hostCommand(0xa75880, 0xc0006de000, 0xc0006ba900, 0x9cbc34, 0xc, 0x0, 0x0, 0x0, 0x9f2e00)
labca_1 | /go/src/labca/main.go:1513 +0x596
labca_1 | main.finalHandler(0xa75880, 0xc0006de000, 0xc0006ba900)
labca_1 | /go/src/labca/main.go:1860 +0xf9
labca_1 | net/http.HandlerFunc.ServeHTTP(0x9f24f0, 0xa75880, 0xc0006de000, 0xc0006ba900)
labca_1 | /usr/local/go/src/net/http/server.go:2049 +0x44
labca_1 | main.authorized.func1(0xa75880, 0xc0006de000, 0xc0006ba900)
labca_1 | /go/src/labca/main.go:2316 +0x230
labca_1 | net/http.HandlerFunc.ServeHTTP(0xc00010c0f0, 0xa75880, 0xc0006de000, 0xc0006ba900)
labca_1 | /usr/local/go/src/net/http/server.go:2049 +0x44
labca_1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc0001f20c0, 0xa75880, 0xc0006de000, 0xc0006ba000)
labca_1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0xd3
labca_1 | net/http.serverHandler.ServeHTTP(0xc0006de0e0, 0xa75880, 0xc0006de000, 0xc0006ba000)
labca_1 | /usr/local/go/src/net/http/server.go:2867 +0xa3
labca_1 | net/http.(*conn).serve(0xc0006e6000, 0xa76940, 0xc0006ca180)
labca_1 | /usr/local/go/src/net/http/server.go:1932 +0x8cd
labca_1 | created by net/http.(*Server).Serve
labca_1 | /usr/local/go/src/net/http/server.go:2993 +0x39b
labca_1 | 2021/08/20 09:24:08 http: superfluous response.WriteHeader call from main.finalHandler (main.go:1861)

From the /var/log/labca.err file:

Exception in thread Thread-2:
Traceback (most recent call last):
File "urllib3/response.py", line 436, in _error_catcher
File "urllib3/response.py", line 766, in read_chunked
File "urllib3/response.py", line 719, in _handle_chunk
File "http/client.py", line 626, in _safe_read
http.client.IncompleteRead: IncompleteRead(3880 bytes read, 4272 more expected)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "requests/models.py", line 751, in generate
File "urllib3/response.py", line 571, in stream
File "urllib3/response.py", line 792, in read_chunked
File "contextlib.py", line 130, in __exit__
File "urllib3/response.py", line 454, in _error_catcher
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(3880 bytes read, 4272 more expected)', IncompleteRead(3880 bytes read, 4272 more expected))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "threading.py", line 926, in _bootstrap_inner
File "threading.py", line 870, in run
File "compose/cli/log_printer.py", line 202, in watch_events
File "compose/project.py", line 626, in yield_loop
File "compose/project.py", line 594, in build_container_event
File "compose/container.py", line 44, in from_id
File "docker/utils/decorators.py", line 19, in wrapped
File "docker/api/container.py", line 774, in inspect_container
File "docker/utils/decorators.py", line 46, in inner
File "docker/api/client.py", line 237, in _get
File "requests/sessions.py", line 543, in get
File "requests/sessions.py", line 530, in request
File "requests/sessions.py", line 685, in send
File "requests/models.py", line 829, in content
File "requests/models.py", line 754, in generate
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(3880 bytes read, 4272 more expected)', IncompleteRead(3880 bytes read, 4272 more expected))

Exception in thread Thread-1:
Traceback (most recent call last):
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 426, in _make_request
File "<string>", line 3, in raise_from
File "urllib3/connectionpool.py", line 421, in _make_request
File "http/client.py", line 1369, in getresponse
File "http/client.py", line 310, in begin
File "http/client.py", line 271, in _read_status
File "socket.py", line 589, in readinto
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "requests/adapters.py", line 449, in send
File "urllib3/connectionpool.py", line 727, in urlopen
File "urllib3/util/retry.py", line 410, in increment
File "urllib3/packages/six.py", line 734, in reraise
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 426, in _make_request
File "<string>", line 3, in raise_from
File "urllib3/connectionpool.py", line 421, in _make_request
File "http/client.py", line 1369, in getresponse
File "http/client.py", line 310, in begin
File "http/client.py", line 271, in _read_status
File "socket.py", line 589, in readinto
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "threading.py", line 926, in _bootstrap_inner
File "threading.py", line 870, in run
File "compose/cli/log_printer.py", line 168, in tail_container_logs
File "compose/cli/log_printer.py", line 185, in wait_on_exit
File "compose/container.py", line 268, in wait
File "docker/utils/decorators.py", line 19, in wrapped
File "docker/api/container.py", line 1305, in wait
File "docker/utils/decorators.py", line 46, in inner
File "docker/api/client.py", line 233, in _post
File "requests/sessions.py", line 578, in post
File "requests/sessions.py", line 530, in request
File "requests/sessions.py", line 643, in send
File "requests/adapters.py", line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

hakwerk commented on August 9, 2024

Does your DNS server have a zone for that parent domain 'internal'?

DavCh1 commented on August 9, 2024

No

hakwerk commented on August 9, 2024

I'm pretty sure it needs to exist so boulder can check that there is no CAA record there.
See also https://letsencrypt.org/docs/caa/ for more tips on the SERVFAIL.

hakwerk commented on August 9, 2024

This might help: acmesh-official/acme.sh#266 (comment)
It looks like the parent domain (internal) needs to exist and have NS record(s).
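The climb to the parent zone can be illustrated with a short sketch of the lookup order an ACME CA follows for CAA: per RFC 8659 it queries the FQDN and then each ancestor until it finds a CAA record set or runs out of labels. The function name here is illustrative, not Boulder's actual code:

```python
def caa_lookup_chain(fqdn):
    """Return the names queried for CAA, from the host up to the top label."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels))]

# For the host in this issue, the chain ends at the bare 'internal' zone,
# which is why a SERVFAIL there fails the whole validation:
print(caa_lookup_chain("az-ca01.foo.internal"))
# ['az-ca01.foo.internal', 'foo.internal', 'internal']
```

The search stops at the first name that returns a CAA record set, which is why either making the 'internal' zone resolvable or publishing a CAA record lower down resolves the SERVFAIL.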

DavCh1 commented on August 9, 2024

This looks like it has the potential to break my network.
Would it be easier / safer to create a CAA record for the "foo.internal" domain?

If I understand this correctly, I would need to add the following:

foo.internal. CAA 0 issue "foo.internal"
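In BIND-style zone-file syntax, a sketch of how that record could appear — the zone layout is illustrative; only the CAA line itself comes from this thread:

```
; in the foo.internal zone file (illustrative sketch)
foo.internal.    IN  CAA  0 issue "foo.internal"
```

With a CAA record set present at foo.internal, the upward CAA search stops there and never reaches the broken 'internal' lookup.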

hakwerk commented on August 9, 2024

Yes, I think that should work. There would indeed be no need to look for a CAA record on 'internal' anymore.

DavCh1 commented on August 9, 2024

Yes, that fixed it.
Thanks!
