SlyMarbo / spdy
[deprecated] A full-featured SPDY library for the Go language.
License: BSD 2-Clause "Simplified" License
spdy3_conn: Run() includes a recover block, but it doesn't protect anything. All the work goes on in two other goroutines (send and readFrames). Since send includes its own recover, but readFrames does not, this makes me believe that the recover in Run is meant to be in readFrames.
Given that, Run seems more complicated than it needs to be. It should be able to just call readFrames rather than generating another goroutine (though I'm not completely clear on how the readFrames goroutine gets terminated when the connection is stopped, so there may be more complexity here than I understand so far).
The client API processes HTTP, HTTPS, and SPDY requests equally, but the server API doesn't. Although the server-writer should be able to treat different protocols differently (as with the existing API), it would be nice to provide some way for a single set of handlers (preferably http.Handler, to make adapting existing code quicker) to serve both protocols.
There appears to be another race condition, this time involving f.transferWindow:
f.transferWindow += int64(deltaWindowSize)
Details:
WARNING: DATA RACE
Read by goroutine 10:
github.com/SlyMarbo/spdy.(*flowControl).Write()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/flow.go:255 +0x10d
github.com/SlyMarbo/spdy.(*serverStreamV3).Write()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:80 +0x3a2
io.Copy()
/usr/lib64/golang/src/pkg/io/io.go:350 +0x249
io.CopyN()
/usr/lib64/golang/src/pkg/io/io.go:313 +0xef
net/http.serveContent()
/usr/lib64/golang/src/pkg/net/http/fs.go:236 +0xadc
...
Previous write by goroutine 7:
github.com/SlyMarbo/spdy.(*flowControl).UpdateWindow()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/flow.go:229 +0x243
github.com/SlyMarbo/spdy.(*serverStreamV3).ReceiveFrame()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:194 +0x6cc
github.com/SlyMarbo/spdy.(*connV3).handleWindowUpdate()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:845 +0x2dd
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1055 +0x19f7
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
Another possible race in conn. The connection was closed normally (by the remote end).
WARNING: DATA RACE
Write by goroutine 7:
github.com/SlyMarbo/spdy.(*connV3).handleReadWriteError()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:924 +0x1fb
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1091 +0x20f
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
Previous read by goroutine 6:
github.com/SlyMarbo/spdy.(*connV3).selectFrameToSend()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1195 +0x121
github.com/SlyMarbo/spdy.(*connV3).send()
/home/cpg/fs/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1138 +0x575
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
The http.CloseNotifier interface appears to be the only reliable way to detect client drops with the default Go web server implementations. It would be nice for the spdy package to implement this as well on the returned ResponseWriter.
(Not a bug; just an issue to help track some thoughts and get some input.)
I've been working through the various race conditions in spdy, and I am concerned that the locking may quickly get out of hand. There are so many fields that are accessed on multiple goroutines (usually the readFrames goroutine, and then by a public method like Push and Request) that locking each item at each access is very cumbersome and error-prone.
There are two approaches that seem reasonable. The most straightforward, with the least impact to current code, is to create accessors for all the variables. So you would always use lastPushStreamID() and setLastPushStreamID() rather than lastPushStreamID. That way it would be easier to ensure that any required locking was performed.
The other solution is to move to a more channel-based approach with one main goroutine that is the only one allowed to touch the mutable data. This replaces struct fields with a function-local variable and some number of channels that act as the interface. I use this approach in the code I have on top of spdy, and I think it's effective for certain problems.
In this case, rather than locking streams all over the place, there would be an addStream channel that you could send a new stream request to. The central goroutine would assign it the next stream ID, initialize any other boilerplate stuff, store it, and hand it back on a response channel. Doing it this way would mean that lastRequestStreamID wouldn't have to be available anywhere outside of the main loop (and so wouldn't need additional locks to manage).
Where this is most useful is a "close" channel, because it means that anyone can write to it, but it is guaranteed to only be executed one time and synchronously with other operations (rather than the current system of closing the channel, which creates race conditions, see #56). I believe this could be used to get rid of a lot of code that verifies that streams are still open before processing, since the stream could not change state in the middle of processing a frame.
So I like the channel approach, but it's a much bigger change to your code. Adding a lot of small locks with accessor methods is a smaller conceptual change, but is harder to get right (and doesn't fix the shutdown race conditions). I wanted some of your thoughts on it, and more background on the current concurrency design in case I'm missing how it's supposed to work.
(Thanks. I know I'm throwing a lot of stuff at you right now. Getting a SPDY implementation that is absolutely rock-solid is critical to my current full-time project. Your framework has had exactly the features I need; it's just that it fails intermittently, and I worry that any small race conditions will bite me when I send it out to hundreds of thousands of machines as I intend to.)
So I tried adding gzip to a SPDY server, and on every request I get:
(spdy) 2014/05/05 10:01:18 spdy3_conn.go:1098: Warning: Received PROTOCOL_ERROR on stream 1. Closing connection.
I tried with a vanilla server and then with Martini.
There is a high possibility that this is not an issue with your library, but I could not figure out where else to ask this question.
Q: What's the best way to go about gzipping with a SPDY server?
serverStreamV3.ReceiveFrame is trying to copy the frame to s.requestBody before it is initialized. It looks like that buffer is initialized in serverStreamV3.Run; however, Run is called AFTER ReceiveFrame.
I added code to ReceiveFrame to conditionally init s.requestBody, which worked, but then serverStreamV3.Run was called, blowing the contents of s.requestBody away. I then removed the code from Run that initialized the buffer, and things started to work. I don't know the s.requestBody lifecycle well enough to tell reliably whether this is the correct way to fix the issue or not.
Stack trace below. The debug statement on top contains the contents of the frame ("test") and the contents of s.requestBody (nil)
(spdy) 2014/02/17 18:05:42 spdy3_server_stream.go:179: setting req body [116 101 115 116] <nil>
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x68 pc=0x41ba385]
goroutine 13 [running]:
runtime.panic(0x43f1280, 0x4858d39)
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/panic.c:266 +0xb6
bytes.(*Buffer).Write(0x0, 0xc210000b60, 0x4, 0x4, 0x4348000, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/bytes/buffer.go:126 +0x35
github.com/SlyMarbo/spdy.(*serverStreamV3).ReceiveFrame(0xc21007e080, 0x4b188c8, 0xc21010ee60, 0x0, 0x0)
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:180 +0x5ac
github.com/SlyMarbo/spdy.(*connV3).handleClientData(0xc2100ef540, 0xc21010ee60)
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:484 +0x459
github.com/SlyMarbo/spdy.(*connV3).processFrame(0xc2100ef540, 0x4b188c8, 0xc21010ee60, 0x1)
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1177 +0xcbd
github.com/SlyMarbo/spdy.(*connV3).readFrames(0xc2100ef540)
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1224 +0x4c5
created by github.com/SlyMarbo/spdy.(*connV3).Run
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:393 +0xb3
For unknown reasons, requests using SPDY/2 receive error responses from both https://www.google.com/ and https://www.facebook.com/. As far as I'm aware, the request conforms to the specification, and it matches SPDY/2 requests made by Google Chrome.
Any help would be appreciated.
A significant refactoring is currently underway. As a result:
This will probably take a few days. Your patience is very much appreciated. The final commit will contain a changelog with changes to the public API listed and explained.
This should make the package considerably easier to develop and debug.
Watch this space.
(This might not even matter at all; I was using "go get ..." which I shouldn't. I meant to use "go get ./..."; but leaving up in case you care.)
When building the examples (during "go get ..." for instance), the following errors pop up. Probably just need to name the variables differently so they won't trip up the compiler.
:~/go/src/github.com/SlyMarbo/spdy/examples[master]$ go get
# github.com/SlyMarbo/spdy/examples
./proxy_server.go:12: handle redeclared in this block
previous declaration at ./proxy_client.go:10
./proxy_server.go:42: main redeclared in this block
previous declaration at ./proxy_client.go:20
With the current code (eac796b) I started getting some spurious errors in normal operation (no stressing, streaming, or anything).
Not sure if this helps understand where this is coming from; if not, I will try to do some runs with full debug enabled.
(spdy) 2013/10/03 04:48:45 spdy3_server_stream.go:221: Encountered stream error: bytes.Buffer: truncation out of range
(spdy) 2013/10/03 04:49:46 spdy3_server_stream.go:221: Encountered stream error: bytes.Buffer: truncation out of range
(spdy) 2013/10/03 04:52:07 spdy3_server_stream.go:221: Encountered stream error: runtime error: slice bounds out of range
Ensure proofs and slot indices are correctly utilised. This is a security issue, since the validity of the certificate cannot be established without proper processing of the proof.
Honestly, I don't really understand how CREDENTIAL frames work, so there's nothing I can really do to implement handling them yet. At the moment, receiving one triggers a panic, which is stupid, and I'll be fixing that momentarily, but proper handling is important.
When data is received, WINDOW_UPDATE frames need to be sent to regrow the transfer window.
When SETTINGS frames are sent, their settings are not checked for duplicate IDs.
It would be very useful to get debug logs without ALL the data mixed in in the middle.
Maybe, say, a boolean option to EnableDebugOutput so that if true, the data is shown, and it defaults to false?
Hi,
what's the performance of a SPDY server using this lib? Can it reach 10,000 QPS per core? Can this lib support a SPDY reverse proxy feature?
The client does not yet provide support for receiving server pushes. Receiving the pushes is not difficult, but presenting them in a logical and useful API is less simple.
The new common package leads to the types common.Conn, common.Receiver, etc. being exposed to the caller. That's a very awkward name. The caller expects these to be in the spdy package. I believe everything in the common package should be moved to the top-level spdy.
(I'm aware of the circular dependency. I'm working through some solutions to that.)
As I dig into this more, I realize the problem. Go packages don't exist for source code organization. They only exist for managing independent modules. There shouldn't be a spdy2 directory unless it is possible to build spdy without it. I actually would prefer that (I don't want to support spdy2 and don't want the code), but it's not how the code is currently structured.
The simplest solution is to move the subdirectories back into the root directory to match how the code currently works. I believe that should happen now; the current structure is just too confusing to callers, IMO.
My recommendation for making it work better is this:
There should be a spdy.Protocol interface implemented by spdy2.Protocol and spdy3.Protocol. spdy.New*Conn() should take a slice of Protocols.
In each place that the protocol is checked (in RoundTrip() for instance), each Protocol should be asked if it can handle it, and if it can, it should be handed off.
A default list of all protocols could be returned from spdyutil so that callers don't have to construct their own list, but callers like me could construct their own list if they want to limit the protocols they support.
I can put together a PR for this recommendation over the next week or so.
I've found a small memory leak in the server. I'll see what I can do to find it.
Not really a bug, just information.
The use of log as a global variable fights with the goimports tool, which cannot tell that it's not the log package and so adds an extra import. This is unlikely to be fixed in goimports, since it basically works like gofmt.
Anyone who uses goimports as part of GoSublime, etc. will have trouble modifying spdy files. I have a work-around via an extra wrapper script that looks for "spdy" and passes those files through unchanged, but I wanted to let you know about it.
I'm not 100% sure if I'm reading the code right, but it seems that Response.Body is always a bytes.Buffer that gets fully populated before the client can start reading; this means downloading a 10TB file via this spdy client will be... a disappointing experience.
It seems the Receiver API might be enough to implement this, but of course just behaving like net/http would be better.
Need to make sure that if the setting is received, it is adhered to.
On some machines, I'm occasionally seeing a mismatch in the StatusCode and the headers. The fact that this happens only intermittently makes me think race condition. Here is the calling code:
resp, err := conn.RequestResponse(req, nil, DefaultSpdyPriority)
resp.Body.Close()
if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusNoContent {
log.WithField("resp", resp).Warn("Unexpected result while setting KUID")
return errors.New("Bad result while reading KUID")
}
From this, I'm getting the following log:
2014-05-23.12:08:41-07:00|WARN |kuid.go:58:SetKuid | Unexpected result while setting KUID|{"resp":{"Status":"0 ","StatusCode":0,"Proto":"HTTP/1.1","ProtoMajor":1,"ProtoMinor":1,"Header":{":status":["201"],":version":["HTTP/1.1"]},"Body":{"Reader":{}},"ContentLength":0,"TransferEncoding":null,"Close":true,"Trailer":{},"Request":{"Method":"PUT","URL":{"Scheme":"https","Opaque":"","User":null,"Host":"..konea..","Path":"/agent/kuid","RawQuery":"","Fragment":""},"Proto":"HTTP/1.1","ProtoMajor":1,"ProtoMinor":1,"Header":{":host":["..konea.."],":method":["PUT"],":path":["/agent/kuid"],":scheme":["https"],":version":["HTTP/1.1"],"Content-Length":["36"]},"Body":{"Reader":{}},"ContentLength":36,"TransferEncoding":null,"Close":false,"Host":"..konea..","Form":null,"PostForm":null,"MultipartForm":null,"Trailer":null,"RemoteAddr":"","RequestURI":"","TLS":null}}}
Notice that StatusCode is 0, but the :status header is "201" (which is correct). Most of the time, this code works fine. But sometimes (usually during the production test run), I get this mismatch. I'm investigating further whether this is a race condition on the response.
The output from the race detector on this test case is available at https://gist.github.com/rnapier/2ef6f20ab89684aadccb
Notice the races between spdy.(*response).Response() and spdy.(*response).ReceiveHeader(). This is exactly the kind of race that I would suspect.
I have several other test cases that are intermittently failing in similar ways. By "intermittent" I mean that at least some tests fail on several of the build servers during most builds, so it's quite often (but I haven't reproduced it on my desktop).
Using spdy package I am getting this error message:
spdy3_conn.go:1070: Warning: Received PROTOCOL_ERROR on stream 1. Closing connection.
Here is an interesting race condition reported:
WARNING: DATA RACE
Read by goroutine 14:
github.com/SlyMarbo/spdy.(*serverStreamV3).Header()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:43 +0x38
main.(*responseCopier).ReceiveHeader()
/home/cpg/proxy/src/proxy/service.go:40 +0x5c
...
Previous write by goroutine 19:
github.com/SlyMarbo/spdy.(*serverStreamV3).Close()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:147 +0x1fe
github.com/SlyMarbo/spdy.(*connV3).handleRstStream()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:713 +0x4d0
...
Any requests to twitter.com enter a loop of 301 responses, redirecting between http://twitter.com/ and https://twitter.com/. This is not encountered when connecting with Chrome, using SPDY.
Any help is very welcome.
Add support for SPDY version 2. This should not be visible to the user, apart from telling them for logging purposes.
Since frames received by the connection are passed to the relevant stream by a channel, in cases where a stream receives frames (HEADERS are most likely), but the Handler is busy, the stream can't receive from the channel.
This means that the connection will block and stop processing inbound frames. Since sending occurs in a separate goroutine, the connection will continue to be able to send. However, no streams will receive further frames.
Although the cause of the bug has been identified, the solution is non-trivial. Using a buffered channel would help many cases, but if a stream's Handler is busy for some time and the stream receives lots of inbound frames, the buffer could become filled, blocking the connection.
I'm working on a simple SPDY reverse proxy, and I ran into this bug. The following code sends valid HTTP headers to the reverse-proxied host:
targetURL, _ := url.Parse("http://www.google.com") // url.Parse returns (*url.URL, error)
reverseProxy := httputil.NewSingleHostReverseProxy(targetURL)
http.Handle("/", reverseProxy)
if err = http.ListenAndServeTLS(":5005", "cert.pem", "key.pem", nil); err != nil {
// handle error.
log.Fatal(err)
}
but replacing http.ListenAndServeTLS with spdy.ListenAndServeTLS or spdy.ListenAndServeSPDY produces the following:
GET / HTTP/1.1
User-Agent: spdylay/1.2.3
: host: localhost:5005
: method: GET
: path: /
: scheme: https
: version: HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
X-Forwarded-For: ::1
Host: www.google.com
Any ideas on how to fix this?
One more race condition:
Read by goroutine 39:
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:951 +0x3a3
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
Previous write by goroutine 38:
github.com/SlyMarbo/spdy.(*connV3).send()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1147 +0x723
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
The read method for client streams is currently TODO:
func (s *clientStreamV3) Read(out []byte) (int, error) {
// TODO
return 0, nil
}
It would be helpful to log here at least until implemented. I'm currently trying to read the results of a Request().
I've started hitting spdy with a few hundred simultaneous connections, and I'm seeing a lot of data races and crashes. This is the most common data race from the top-of-tree code. I suspect there's a deeper problem here, because I'm not exactly certain what's causing the read error in the first place. (It's possibly because things are timing out due to a large number of simultaneous connections, though this shouldn't lead to crashes.) The common cause of later crashes is that when I go to read Conn.RemoteAddress(), it is nil due to an in-progress shutdown.
==================
WARNING: DATA RACE
Read by goroutine 333:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).send()
.../github.com/SlyMarbo/spdy/spdy3/io.go:108 +0x5bd
Previous write by goroutine 407:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).shutdown()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:97 +0x822
.../github.com/SlyMarbo/spdy/spdy3.*Conn.(.../github.com/SlyMarbo/spdy/spdy3.shutdown)·fm()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:12 +0x33
sync.(*Once).Do()
/usr/local/Cellar/go/1.3/libexec/src/pkg/sync/once.go:40 +0xb1
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Close()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:12 +0x91
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).handleReadWriteError()
.../github.com/SlyMarbo/spdy/spdy3/error_handling.go:68 +0x262
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).readFrames()
.../github.com/SlyMarbo/spdy/spdy3/io.go:36 +0x214
Goroutine 333 (running) created at:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Run()
.../github.com/SlyMarbo/spdy/spdy3/conn.go:192 +0x4f
Goroutine 407 (finished) created at:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Run()
.../github.com/SlyMarbo/spdy/spdy3/conn.go:196 +0xb3
==================
==================
WARNING: DATA RACE
Read by goroutine 333:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).send()
.../github.com/SlyMarbo/spdy/spdy3/io.go:121 +0x954
Previous write by goroutine 407:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).shutdown()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:85 +0x5f7
.../github.com/SlyMarbo/spdy/spdy3.*Conn.(.../github.com/SlyMarbo/spdy/spdy3.shutdown)·fm()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:12 +0x33
sync.(*Once).Do()
/usr/local/Cellar/go/1.3/libexec/src/pkg/sync/once.go:40 +0xb1
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Close()
.../github.com/SlyMarbo/spdy/spdy3/shutdown.go:12 +0x91
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).handleReadWriteError()
.../github.com/SlyMarbo/spdy/spdy3/error_handling.go:68 +0x262
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).readFrames()
.../github.com/SlyMarbo/spdy/spdy3/io.go:36 +0x214
Goroutine 333 (running) created at:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Run()
.../github.com/SlyMarbo/spdy/spdy3/conn.go:192 +0x4f
Goroutine 407 (finished) created at:
.../github.com/SlyMarbo/spdy/spdy3.(*Conn).Run()
.../github.com/SlyMarbo/spdy/spdy3/conn.go:196 +0xb3
==================
I've been trying to create some unit tests so I can more easily reproduce the race conditions. I'm having a bit of trouble getting spdy to use SPDY when in an httptest server.
This test case works, but it runs over HTTPS. I haven't been able to convince it to run under SPDY (so I'm not testing the code I want to test).
I tried adding NextProtos: []string{"spdy/3.1"} to the tls.Config, which seems to be part of the answer, but I then get "spdy3_conn.go:1018: Error: Encountered error: "Error: Field "flags" recieved invalid data 47, expecting 1." (*spdy.invalidField)". Adding the same thing to the server doesn't improve things.
It's failing at transport.go:227:
if !state.NegotiatedProtocolIsMutual {
At this point, state.NegotiatedProtocol is spdy/3.1, but NegotiatedProtocolIsMutual is false.
Given spdy's design, this feels like it should be pretty straightforward, so I'm not sure if I'm missing something simple or there's a bug.
I based all of this on the http package's tests.
package spdy_test

import (
    "crypto/tls"
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    "github.com/SlyMarbo/spdy"
)

func init() {
    spdy.EnableDebugOutput()
}

func TestClient(t *testing.T) {
    ts := httptest.NewUnstartedServer(robotsTxtHandler)
    defer ts.Close()
    srv := &http.Server{
        Handler: robotsTxtHandler,
        TLSConfig: &tls.Config{
            NextProtos: []string{"spdy/3.1"},
        },
    }
    ts.Config = srv
    spdy.AddSPDY(srv)
    ts.StartTLS()

    tr := spdy.Transport{
        TLSClientConfig: &tls.Config{
            InsecureSkipVerify: true,
        },
    }
    client := http.Client{Transport: &tr}
    r, err := client.Get(ts.URL)
    var b []byte
    if err == nil {
        b, err = ioutil.ReadAll(r.Body)
        r.Body.Close()
    }
    if err != nil {
        t.Error(err)
    } else if s := string(b); !strings.HasPrefix(s, "User-agent:") {
        t.Errorf("Incorrect page body (did not begin with User-agent): %q", s)
    }
}

var robotsTxtHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Last-Modified", "sometime")
    fmt.Fprintf(w, "User-agent: go\nDisallow: /something/")
})
Complete the client API, complete with HTTP integration.
Ideally, spdy should use a more advanced logging system so that logs can easily be written to disk, and that log levels can be used.
spdy3_client_stream: ReceiveFrame generates a goroutine to handle each frame. This means that frames can be processed in any order (there is no promise that goroutines will run in the order they are created). I believe this code would be simpler if the frames were processed synchronously.
This will be needed in SPDY/4 / HTTP/2, and is already useful. Won't take long, though.
There are few instances of this pattern in spdy (from spdy3_client_stream, but a similar pattern exists in spdy3_conn):
select {
case <-s.finished:
default:
close(s.finished)
}
This is a race, especially since there are several other goroutines that can call close(s.finished) (see ReceiveFrame()). There is nothing preventing s.finished from being closed between the time it is checked and when the default leg is run. If this race fails, the calling goroutine will panic.
The closes in ReceiveFrame() are like this:
case *dataFrameV3:
...
go func() {
s.receiver.ReceiveData(s.request, data, frame.Flags.FIN())
if frame.Flags.FIN() {
s.state.CloseThere()
close(s.finished)
}
}()
...
case *headersFrameV3:
go func() {
s.receiver.ReceiveHeader(s.request, frame.Header)
if frame.Flags.FIN() {
s.state.CloseThere()
close(s.finished)
}
}()
If a headers packet includes FIN and is immediately followed by a data packet that includes FIN, they both could be processed (since they run asynchronously), closing s.finished twice and causing the goroutine to panic (or triggering the race with Close()). Since there is no recover in this goroutine, that would crash the program. (Sending two FINs this way is likely a violation of the protocol, but an attacker could potentially use it to crash the server as part of a DoS attack.)
A channel should only be closed from a single goroutine (the sender).
(I'll provide more thoughts on how to address this and other problems in a later issue.)
The current implementation uses one TCP session per request, which should be reduced and pooled.
Make sure that all of the relevant net/http API is presented, such as the various different request methods, client connection reuse, and so on. Ideally, the existing API should not change from this point onwards, although further methods may be added.
SPDY servers currently only serve HTTP requests with http.DefaultServeMux. Server-writers should be able to create custom http.Servers and still use them with SPDY.
This is low-priority.
Add support for the experimental SPDY/4 alpha.
I know you're in the middle of refactoring to split up the Conn interface. The interface used to have a Conn() function on it, which I need to get the address of my peer (RemoteAddr()). Hiding the underlying connection is probably fine, but I need access to RemoteAddr() directly in that case.
Note that this change also broke examples/proxy_server for basically the same reason.
I will likely need LocalAddr() as well. (The more I stare at my code, the more I think I need to switch to LocalAddr() in several cases because RemoteAddr() might not be unique.)
More diagnosis required.
When POST/PUT requests send the request data, it is currently being refused, since the server is not configured to receive data. Working on it.
I built and ran an app using this library with -race, the Go race detector.
It has detected the following race conditions (code base is at the current head 72eeeda):
WARNING: DATA RACE
Read by goroutine 15:
github.com/SlyMarbo/spdy.(*connV3).InitialWindowSize()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:132 +0x3b
github.com/SlyMarbo/spdy.(*clientStreamV3).AddFlowControl()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/flow.go:90 +0xb5
github.com/SlyMarbo/spdy.(*connV3).Request()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:335 +0x1459
...
Previous write by goroutine 9:
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1012 +0x13d6
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
WARNING: DATA RACE
Write by goroutine 33:
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/home/cpg//proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:949 +0x3c5
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
Previous read by goroutine 32:
github.com/SlyMarbo/spdy.(*connV3).selectFrameToSend()
/home/cpg/proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1184 +0x121
github.com/SlyMarbo/spdy.(*connV3).send()
/home/cpg//proxy/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1110 +0x997
gosched0()
/usr/lib64/golang/src/pkg/runtime/proc.c:1218 +0x9f
Hope this helps.
I've got another one for you. I was debugging a concurrency issue with my app and compiled it with race detection enabled. I'm running into a race warning repeatedly in a handler that reads the post body with a
buf, err := ioutil.ReadAll(req.Body)
I'm not spawning any goroutines myself in that handler, and the normal net/http server does not trigger a race warning, so I think this is a library issue.
Edit: An additional note - the race is only detected the first time I hit that handler, so perhaps it's an initialization issue?
WARNING: DATA RACE
Write by goroutine 18:
bytes.(*Buffer).Read()
/usr/local/Cellar/go/1.2/libexec/src/pkg/bytes/buffer.go:252 +0x63
github.com/SlyMarbo/spdy.(*readCloser).Read()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/client_conn.go:1 +0x83
bytes.(*Buffer).ReadFrom()
/usr/local/Cellar/go/1.2/libexec/src/pkg/bytes/buffer.go:169 +0x402
io/ioutil.readAll()
/usr/local/Cellar/go/1.2/libexec/src/pkg/io/ioutil/ioutil.go:32 +0x1ca
io/ioutil.ReadAll()
/usr/local/Cellar/go/1.2/libexec/src/pkg/io/ioutil/ioutil.go:41 +0x4c
github.com/pabbott0/bellows.postPosts()
/Users/liver/devel/go/src/github.com/pabbott0/bellows/posts.go:65 +0xad
runtime.call64()
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/asm_amd64.s:340 +0x31
reflect.Value.Call()
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:345 +0xaa
github.com/codegangsta/inject.(*injector).Invoke()
/Users/liver/devel/go/src/github.com/codegangsta/inject/inject.go:98 +0x394
github.com/codegangsta/martini.(*context).Invoke()
/Users/liver/devel/go/src/github.com/codegangsta/martini/env.go:1 +0x67
github.com/codegangsta/martini.(*routeContext).run()
/Users/liver/devel/go/src/github.com/codegangsta/martini/router.go:264 +0x13e
github.com/codegangsta/martini.(*route).Handle()
/Users/liver/devel/go/src/github.com/codegangsta/martini/router.go:186 +0x10d
github.com/codegangsta/martini.(*router).Handle()
/Users/liver/devel/go/src/github.com/codegangsta/martini/router.go:90 +0x25c
github.com/codegangsta/martini.Router.Handle·fm()
/Users/liver/devel/go/src/github.com/pabbott0/bellows/bellows.go:67 +0x6d
runtime.call64()
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/asm_amd64.s:340 +0x31
reflect.Value.Call()
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:345 +0xaa
github.com/codegangsta/inject.(*injector).Invoke()
/Users/liver/devel/go/src/github.com/codegangsta/inject/inject.go:98 +0x394
github.com/codegangsta/martini.(*context).run()
/Users/liver/devel/go/src/github.com/codegangsta/martini/martini.go:152 +0x110
github.com/codegangsta/martini.(*Martini).ServeHTTP()
/Users/liver/devel/go/src/github.com/codegangsta/martini/martini.go:68 +0x60
github.com/SlyMarbo/spdy.(*serverStreamV3).Run()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:239 +0x1b5
Previous write by goroutine 15:
bytes.(*Buffer).Write()
/usr/local/Cellar/go/1.2/libexec/src/pkg/bytes/buffer.go:126 +0x57
github.com/SlyMarbo/spdy.(*serverStreamV3).ReceiveFrame()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_server_stream.go:179 +0x5a4
github.com/SlyMarbo/spdy.(*connV3).handleClientData()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:485 +0x554
github.com/SlyMarbo/spdy.(*connV3).processFrame()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1182 +0x13ac
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1229 +0x62a
Goroutine 18 (running) created at:
github.com/SlyMarbo/spdy.(*connV3).handleRequest()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:714 +0xa2d
github.com/SlyMarbo/spdy.(*connV3).processFrame()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1061 +0x1672
github.com/SlyMarbo/spdy.(*connV3).readFrames()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:1229 +0x62a
Goroutine 15 (running) created at:
github.com/SlyMarbo/spdy.(*connV3).Run()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/spdy3_conn.go:394 +0x113
github.com/SlyMarbo/spdy.func·014()
/Users/liver/devel/go/src/github.com/SlyMarbo/spdy/server_conn.go:391 +0x172
net/http.(*conn).serve()
/usr/local/Cellar/go/1.2/libexec/src/pkg/net/http/server.go:1116 +0x5f1