- stability - the server should always be online
- concurrency - use cases show that many users may send data to the server concurrently during traffic peaks
- persistence - data must ultimately be persisted safely for further analysis (both offline and online)
- protocol - possibly Protobuf for the protocol definition
- economy - zero-copy, low memory consumption
Goal: asynchronous, non-blocking, event-driven data transfer
- Netty (5.x) - asynchronous, event-driven network application framework; concurrent, non-blocking
- levelDB
- Kafka
- Redis, MongoDB, MySQL etc.
- Flume???
- Buffers
- Codec
- Pipelines and Handlers
- Multiple protocols - HTTP, WebSocket, Protobuf, binary; TCP & UDP
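The pipeline/handler idea above can be illustrated with a toy sketch in plain Java. This is a hypothetical model, not Netty's actual API: each handler transforms an inbound message and passes the result to the next one, the way Netty chains decoders, codecs, and business handlers in a `ChannelPipeline`.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy model of the pipeline idea (hypothetical names, not Netty's API):
// handlers run in insertion order, each transforming the message.
class Pipeline {
    private final List<Function<Object, Object>> handlers = new ArrayList<>();

    Pipeline addLast(Function<Object, Object> handler) {
        handlers.add(handler);
        return this;
    }

    // Push a message through every handler in order, like an inbound event.
    Object fire(Object msg) {
        for (Function<Object, Object> h : handlers) msg = h.apply(msg);
        return msg;
    }
}

public class PipelineSketch {
    public static void main(String[] args) {
        Pipeline p = new Pipeline()
            .addLast(msg -> new String((byte[]) msg, StandardCharsets.UTF_8)) // "decoder": bytes -> String
            .addLast(msg -> ((String) msg).trim().toUpperCase());            // business handler
        System.out.println(p.fire("  hello netty \n".getBytes(StandardCharsets.UTF_8))); // prints HELLO NETTY
    }
}
```

In real Netty the same separation holds: a codec (e.g. a Protobuf decoder) sits early in the pipeline, business handlers sit at the end, and each handler stays small and testable.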
- Tomcat - 1 thread = 1 request; fast for ~1000 clients, but high memory consumption
- Node.js - 1 thread = all requests; highly scalable, but limited error handling
- Netty - 1 thread = many requests; flexible threading model
- Set up thread pools - a boss pool for accepting incoming connections; a worker pool for handling I/O
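The boss/worker split can be sketched without Netty using plain `java.net` and executors. This is an illustrative echo server under assumed names (`BossWorkerSketch` is hypothetical): one boss thread only accepts connections and hands each socket to a worker pool, which is the division of labor Netty's boss and worker `EventLoopGroup`s perform (Netty additionally multiplexes many connections per worker thread via non-blocking I/O).

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the boss/worker pattern (hypothetical, blocking-I/O version):
// the boss accepts, workers do per-connection I/O.
class BossWorkerSketch {
    private final ExecutorService boss = Executors.newSingleThreadExecutor();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final ServerSocket server;

    BossWorkerSketch(int port) throws IOException {
        server = new ServerSocket(port);
        boss.submit(() -> {
            while (!server.isClosed()) {
                try {
                    Socket s = server.accept();     // boss: accept only
                    workers.submit(() -> echo(s));  // hand off I/O to a worker
                } catch (IOException e) {
                    return; // server closed
                }
            }
        });
    }

    // Worker job: echo every byte back to the client.
    private static void echo(Socket s) {
        try (s; InputStream in = s.getInputStream(); OutputStream out = s.getOutputStream()) {
            in.transferTo(out);
        } catch (IOException ignored) { }
    }

    int port() { return server.getLocalPort(); }

    void stop() throws IOException {
        server.close();
        boss.shutdown();
        workers.shutdown();
    }

    public static void main(String[] args) throws Exception {
        BossWorkerSketch srv = new BossWorkerSketch(0); // 0 = ephemeral port
        try (Socket c = new Socket("127.0.0.1", srv.port())) {
            c.getOutputStream().write("ping".getBytes());
            byte[] buf = new byte[4];
            c.getInputStream().readNBytes(buf, 0, 4);
            System.out.println(new String(buf)); // prints ping
        }
        srv.stop();
    }
}
```

The key difference from Netty: here each worker thread blocks on one socket, whereas Netty's worker event loops each serve many channels, which is what makes "1 thread = many requests" possible.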