
nameko-grpc's Introduction

Nameko

[nah-meh-koh]

A microservices framework for Python that lets service developers concentrate on application logic and encourages testability.

A nameko service is just a class:

# helloworld.py

from nameko.rpc import rpc

class GreetingService:
    name = "greeting_service"

    @rpc
    def hello(self, name):
        return "Hello, {}!".format(name)

You can run it in a shell:

$ nameko run helloworld
starting services: greeting_service
...

And play with it from another:

$ nameko shell
>>> n.rpc.greeting_service.hello(name="ナメコ")
'Hello, ナメコ!'

Features

  • AMQP RPC and Events (pub-sub); see the sketch after this list
  • HTTP GET, POST & websockets; also covered in the sketch below
  • CLI for easy and rapid development
  • Utilities for unit and integration testing
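
The first two features can live side by side in a single module. A rough sketch follows; the service names, route and event type below are illustrative, not part of the official examples:

# events_and_http.py (illustrative sketch)

import json

from nameko.events import EventDispatcher, event_handler
from nameko.web.handlers import http

class DispatcherService:
    name = "dispatcher_service"

    dispatch = EventDispatcher()

    @http("POST", "/orders")
    def create_order(self, request):
        payload = json.loads(request.get_data(as_text=True))
        # publish an "order_created" event for any interested service
        self.dispatch("order_created", payload)
        return 201, json.dumps(payload)

class ListenerService:
    name = "listener_service"

    @event_handler("dispatcher_service", "order_created")
    def on_order_created(self, payload):
        print("received order:", payload)

Run both with nameko run events_and_http (a running RabbitMQ broker is needed for the AMQP parts).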

Getting Started

Support

For help, comments or questions, please go to https://discourse.nameko.io/.

For enterprise

Available as part of the Tidelift Subscription.

The maintainers of Nameko and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.

Security contact information

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.

Contribute

  • Fork the repository
  • Raise an issue or make a feature request

License

Apache 2.0. See LICENSE for details.

nameko-grpc's People

Contributors

andyclegg, iky, joechild-pace, mattbennett, samantharosa, stephenc-pace, timbu


nameko-grpc's Issues

ConnectionManager objects remain in pool after close

This is a bug in 1.2.0rc and applies to the ServerConnectionPool and ClientConnectionPool introduced there.

When a ConnectionManager's run_forever method terminates, the connection is no longer considered "alive".

The ClientConnectionPool makes a determination about the health of a connection when one is requested, and discards any that are not "alive". In situations where a client connection regularly dies and is replaced by another one (e.g. a streaming pull where the server occasionally terminates), dead ClientConnectionManager objects can build up if the client is idle.

The ServerConnectionPool is worse: it keeps references to all ServerConnectionManager objects and never purges the dead ones. Dead ServerConnectionManager objects will build up in long-running servers, which constitutes a slow memory leak.
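
For illustration only, the kind of purge the report is asking for might look like the following. The class and method names here are assumptions, not nameko-grpc's actual connection pool implementation:

class ConnectionPool:
    """Toy pool that drops managers whose run_forever loop has exited."""

    def __init__(self):
        self.connections = set()

    def register(self, manager):
        self.connections.add(manager)

    def purge_dead(self):
        # without this step, dead managers accumulate for the life of the process
        self.connections = {m for m in self.connections if m.is_alive()}

    def get(self):
        self.purge_dead()
        # hand out any live connection, or None if the pool is empty
        return next(iter(self.connections), None)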

Server sockets are not closed on graceful client disconnect

Tested with 1.2.0rc; this is probably the case in previous releases as well.

When a client terminates normally, the underlying socket on the server side is not closed until the server is shut down. This leads to a growing number of open file descriptors in long-running servers.
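
A sketch of where such a fix could live is below; the read loop and names are illustrative rather than nameko-grpc's actual code:

import socket

def serve_connection(sock: socket.socket, receive_data, handle_event):
    """Read frames until the peer disconnects, then always close the socket."""
    try:
        while True:
            data = sock.recv(65535)
            if not data:
                break  # graceful disconnect: the peer closed its end
            for event in receive_data(data):
                handle_event(event)
    finally:
        # closing here releases the file descriptor instead of holding it
        # until server shutdown
        sock.close()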

ConnectionResetError on long running service

Hello,

I'm a PhD student working on metrics to assist developers in discovering security vulnerabilities in software. I am using Nameko to develop a number of microservices to collect these metrics from large projects. I am migrating the services I have developed so far to the nameko-grpc extension, but I encountered a ConnectionResetError from one of the services. I have included two source files (service.py and client.py) below, which show code adapted to demonstrate the implementation. trace.txt contains the stack trace from the service when the exception occurred.

The service in question is an implementation of the collaboration centrality metric. The service depends on another nameko-grpc service (called repository) which streams changes (commits) from a git repository.

The exception seems to be thrown only when the service takes a long time to start streaming the results. Is there something I am missing in the implementation?

Thank you,
Nuthan Munaiah

service.py

import logging

import graph_tool
from graph_tool.centrality import betweenness

from nameko.dependency_providers import Config
from nameko_grpc.entrypoint import Grpc
from nameko_grpc.dependency_provider import GrpcProxy

from .centrality_pb2 import CentralityResponse
from .centrality_pb2_grpc import centralityStub
from .repository_pb2 import RepositoryRequest
from .repository_pb2_grpc import repositoryStub

logger = logging.getLogger(__name__)
grpc = Grpc.implementing(centralityStub)

def _build_graph(changes):
    graph = graph_tool.Graph()
    for change in changes:
        graph.add_edge(change.developer, change.path)
    logger.debug('# Nodes: %s', graph.num_vertices())
    logger.debug('# Edges: %s', graph.num_edges())   
    return graph

def _get_centrality(changes):
    graph = _build_graph(changes)
    _, centralities = betweenness(graph)
    for edge, centrality in centralities.items():
        yield CentralityResponse(edge=edge, centrality=centrality)

class CentralityService:
    name = 'centrality'

    config = Config()
    repository_grpc = GrpcProxy('//repository', repositoryStub)

    @grpc
    def get(self, request, context):
        repository_request = RepositoryRequest(project=request.project)
        changes = self.repository_grpc.get_changes(repository_request)
        for centrality in _get_centrality(changes):
            yield centrality

client.py

from nameko_grpc.client import Client

from .centrality_pb2 import CentralityRequest
from .centrality_pb2_grpc import centralityStub

def main():
    request = CentralityRequest(project='firefox')
    with Client('//127.0.0.1', centralityStub) as client:
        response = client.get(request)
        for centrality in response:
            print(centrality.edge, centrality.centrality)

if __name__ == '__main__':
    main()

Ready for production?

Hello there, thanks for your work on Nameko :)

We want to use nameko + grpc in our company and we're wondering about the state of this project. We are still far from a production version ourselves so we don't really need nameko-grpc to be production ready right now, but we would like to know about the roadmap so that we can decide whether to use it or not.

If we do decide to use it, we could report any bugs and suggestions we find along the way, and possibly even contribute PRs at some point.

Thank you!

several problems with grpc context

Hi, I ran into several issues with nameko_grpc context/metadata processing.

  1. I don't see any reason why METADATA_PREFIX is hardcoded. It causes the problem that all our custom headers get this prefix, which is not desired.

  2. Encoding of values does not work correctly: strings are encoded twice (see the illustration after this list).

  3. There is definitely an issue in context_data_from_metadata, because decode_value(value) is called in only one branch of the if/else block. I am not sure what is correct, but in my case there should be no decoding at all; I think the value has already been decoded by that point.
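
As an illustration of point 2, a value that is encoded once round-trips cleanly, while encoding twice and decoding once does not. encode_value and decode_value here are JSON stand-ins, not necessarily nameko_grpc's implementations:

import json

def encode_value(value):
    return json.dumps(value)

def decode_value(value):
    return json.loads(value)

# one encode, one decode: round-trips cleanly
assert decode_value(encode_value("x")) == "x"

# encoding twice but decoding once (the behaviour described above) leaves
# an extra layer of quoting behind
assert decode_value(encode_value(encode_value("x"))) == '"x"'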

generate code from proto

I'm reading your code and your examples, but nowhere do you run the command to generate Python code from the proto files. Is it needed and simply left implicit, or does this library already do that?
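
For reference, the _pb2.py and _pb2_grpc.py modules are normally generated with grpcio-tools; whether nameko-grpc automates this step is exactly what is being asked here (example.proto is a placeholder):

$ pip install grpcio-tools
$ python -m grpc_tools.protoc --proto_path=. --python_out=. --grpc_python_out=. example.proto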

Receiving server timeout from grpcio >= 1.50.0 results in exceptional shutdown

The test test_timeout_while_streaming_result fails intermittently against versions of grpcio >= 1.50.0 and I would expect others to as well.

From debugging, we appear to receive multiple frames from the server with deadline exceeded headers which results in h2 throwing a StreamClosed exception after the first frame.

I've pinned (will pin) the version of grpcio we're testing against to prevent flaky tests for now, but this would cause issues when running against servers that use these versions of grpcio.

As a side note, I'm not sure TestDeadlineExceededAtClient.test_timeout_while_streaming_result is a good test, as the server also times out.

server crash if header grpc-accept-encoding is missing

If the client does not send the "grpc-accept-encoding" header, the server crashes. Below is the stack trace:

  File "/usr/local/lib/python3.7/site-packages/eventlet/greenthread.py", line 181, in wait
    return self._exit_event.wait()
  File "/usr/local/lib/python3.7/site-packages/eventlet/event.py", line 132, in wait
    current.throw(*self._exc)
  File "/usr/local/lib/python3.7/site-packages/eventlet/greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/nameko_grpc/connection.py", line 76, in run_forever
    self.request_received(event)
  File "/usr/local/lib/python3.7/site-packages/nameko_grpc/entrypoint.py", line 56, in request_received
    request_stream.headers.get("grpc-encoding"),
  File "/usr/local/lib/python3.7/site-packages/nameko_grpc/compression.py", line 46, in select_algorithm
    if encoding in acceptable_encodings:
TypeError: argument of type 'NoneType' is not iterable

The problem is within this line and is obvious:

if encoding in acceptable_encodings:

I can contribute a PR if you would like.
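
A minimal sketch of the guard being proposed is below. The signature of select_algorithm and the "identity" fallback are assumptions based on the traceback, not necessarily how nameko-grpc should resolve this:

def select_algorithm(acceptable_encodings, preferred_encoding):
    # the grpc-accept-encoding header may be absent entirely
    if acceptable_encodings is None:
        # fall back to uncompressed messages rather than raising TypeError
        acceptable_encodings = ("identity",)
    if preferred_encoding in acceptable_encodings:
        return preferred_encoding
    return "identity"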
