python-sqs-listener's Introduction

AWS SQS Listener

This package takes care of the boilerplate involved in listening to an SQS queue, as well as sending messages to a queue. Works with Python 2.7 and 3.6+.

Installation

pip install pySqsListener

Listening to a queue

Using the listener is straightforward: just inherit from the SqsListener class and implement the handle_message() method. The queue will be created at runtime if it doesn't already exist. You can also specify an error queue, to which any errors are automatically pushed.

Here is a basic code sample:

Standard Listener

from sqs_listener import SqsListener

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        run_my_function(body['param1'], body['param2'])

listener = MyListener('my-message-queue', error_queue='my-error-queue', region_name='us-east-1')
listener.listen()

Error Listener

from sqs_listener import SqsListener
class MyErrorListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        save_to_log(body['exception_type'], body['error_message'])

error_listener = MyErrorListener('my-error-queue')
error_listener.listen()
The options available as kwargs are as follows (a combined example follows the list):
  • error_queue (str) - name of the queue to which errors are pushed.
  • force_delete (boolean) - delete the message received from the queue whether or not the handler function succeeds. By default, the message is deleted only if the handler function returns without raising an exception.
  • interval (int) - number of seconds between polls. Set to 60 by default.
  • visibility_timeout (str) - number of seconds the message stays invisible ('in flight') after being read. After this interval it reappears in the queue if it wasn't deleted in the meantime. Set to '600' (10 minutes) by default.
  • error_visibility_timeout (str) - same as the previous argument, but for the error queue. Applicable only if the error_queue argument is set and the queue doesn't already exist.
  • wait_time (int) - number of seconds to wait for a message to arrive (for long polling). Set to 0 by default, which means short polling.
  • max_number_of_messages (int) - maximum number of messages to receive per poll. Set to 1 by default; the maximum is 10.
  • message_attribute_names (list) - message attributes by which to filter messages.
  • attribute_names (list) - attributes by which to filter messages (see the boto docs for the difference between these two).
  • region_name (str) - AWS region name (defaults to us-east-1).
  • queue_url (str) - overrides the queue name parameter. Mostly useful for getting around this bug in the boto library.
  • deserializer (function str -> dict) - deserialization function used to parse the message body. Set to Python's json.loads by default.
  • aws_access_key, aws_secret_key (str) - for manually providing AWS credentials.
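
As a combined illustration, here is a minimal sketch using several of these options together; the queue names are placeholders and the handler body is left to you:

from sqs_listener import SqsListener

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        pass  # your processing logic here

# Long polling (wait_time), batches of up to 10 messages, polling every
# 10 seconds, and messages deleted only on successful handling
listener = MyListener('my-message-queue',
                      error_queue='my-error-queue',
                      region_name='us-east-1',
                      interval=10,
                      wait_time=20,
                      max_number_of_messages=10,
                      force_delete=False)
listener.listen()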

Running as a Daemon

Typically, in a production environment, you'll want to listen to an SQS queue with a daemonized process. The simplest way to do this is to run the listener in a detached process. On a typical Linux distribution it might look like this:
nohup python my_listener.py > listener.log &
Save the resulting process ID so you can stop the listener later via the kill command.
A more complete implementation can be achieved easily by inheriting from the package's Daemon class and overriding the run() method.

The sample_daemon.py file in the source root folder provides a clear example for achieving this. Using this example, you can run the listener as a daemon with the command python sample_daemon.py start. Similarly, the command python sample_daemon.py stop will stop the process. You'll most likely need to run the start script using sudo.
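
For orientation, here is a rough sketch along the same lines. This is not the exact contents of sample_daemon.py; the queue name and pid-file path are placeholders:

import sys
from sqs_listener import SqsListener
from sqs_listener.daemon import Daemon

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        pass  # placeholder handler

class MyDaemon(Daemon):
    def run(self):
        # run() is invoked after the process detaches
        listener = MyListener('my-message-queue', region_name='us-east-1')
        listener.listen()

if __name__ == '__main__':
    daemon = MyDaemon('/var/run/sqs_daemon.pid')  # hypothetical pid file
    if sys.argv[1] == 'start':
        daemon.start()
    elif sys.argv[1] == 'stop':
        daemon.stop()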

Logging

The listener and launcher instances push all their messages to a logger instance called 'sqs_listener'. To view the messages, redirect the logger to stdout or to a log file.

For instance:
logger = logging.getLogger('sqs_listener')
logger.setLevel(logging.INFO)

sh = logging.StreamHandler()

formatstr = '[%(asctime)s - %(name)s - %(levelname)s]  %(message)s'
formatter = logging.Formatter(formatstr)

sh.setFormatter(formatter)
logger.addHandler(sh)

Or to a log file:
logger = logging.getLogger('sqs_listener')
logger.setLevel(logging.INFO)

sh = logging.FileHandler('mylog.log')
sh.setLevel(logging.INFO)

formatstr = '[%(asctime)s - %(name)s - %(levelname)s]  %(message)s'
formatter = logging.Formatter(formatstr)

sh.setFormatter(formatter)
logger.addHandler(sh)

Sending messages

To send a message, instantiate an SqsLauncher with the name of the queue. By default, an exception is raised if the queue doesn't exist, but the queue can be created automatically if the create_queue parameter is set to True. In that case, there's also an option to set the newly created queue's VisibilityTimeout via the third parameter. It is possible to provide a serializer function if custom types need to be sent; this function takes a dict and should return a string. If not provided, Python's json.dumps is used by default.

After instantiation, use the launch_message() method to send the message. The message body should be a dict, and additional kwargs can be specified as stated in the SQS docs. The method returns the response from SQS.

Launcher Example

from sqs_launcher import SqsLauncher

launcher = SqsLauncher('my-queue')
response = launcher.launch_message({'param1': 'hello', 'param2': 'world'})
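
And a hedged sketch of the optional behavior described above. The positional order (name, create_queue, visibility timeout) follows the description; the serializer keyword name is an assumption, and the queue name is a placeholder:

import json

from sqs_launcher import SqsLauncher

# Second parameter creates the queue if it doesn't exist; the third sets the
# new queue's VisibilityTimeout. The serializer keyword name is assumed here;
# json.dumps is the documented default in any case.
launcher = SqsLauncher('my-new-queue', True, '600', serializer=json.dumps)
response = launcher.launch_message({'param1': 'hello', 'param2': 'world'})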

Important Notes

  • The environment variable AWS_ACCOUNT_ID must be set, and the environment must have valid AWS credentials (via environment variables or a credentials file), or, when running on an AWS EC2 instance, an attached role with the required permissions.
  • For both the main queue and the error queue, if the queue doesn't exist (in the specified region), it will be created at runtime.
  • The error queue receives only two values in the message body: exception_type and error_message. Both are of type str.
  • If the function the listener executes involves connecting to a database, you should explicitly close the connection at the end of the function (see the sketch after this list). Otherwise, you're likely to get an error like: OperationalError(2006, 'MySQL server has gone away').
  • Either the queue name or the queue URL should be provided. When both are provided, the queue URL is used and the queue name is ignored.
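
A minimal sketch of the database note above, with hypothetical open_db_connection() and save_record() helpers standing in for your own code:

from sqs_listener import SqsListener

class DbListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        conn = open_db_connection()   # hypothetical helper
        try:
            save_record(conn, body)   # hypothetical helper
        finally:
            # Close explicitly so long-lived idle connections don't trigger
            # OperationalError(2006, 'MySQL server has gone away')
            conn.close()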

Contributing

Fork the repo and make a pull request.

python-sqs-listener's People

Contributors

acidbotmaker, arakawamoriyuki, e271828-, eligro91, gabrielnorton, hjpotter92, jegesh, josemlp91, posix4e, priordan83, thundzz, zombrie


python-sqs-listener's Issues

How to run this daemon in docker container?

Hi All,
I'm trying to run this listener in Docker with the following ENTRYPOINT, but the container is not staying alive. Please help me figure out what I'm doing wrong.

ENTRYPOINT python3 /data/gdal_listener.py start &

Allow for queue_url to be passed in

Any reason not to allow queue_url to be passed in rather than queue_name? If you are okay with supporting it, I can draft a pull request.

The address https://queue.amazonaws.com/ is not valid for this endpoint

When running the code below, I'm having trouble with SQS pulling events from the selected queue. The error we get is "The address https://queue.amazonaws.com/ is not valid for this endpoint." It does create the queue for us if we remove the "someQueue" from the stack. This is also being run in pipenv. Let me know if you need any more info.

from sqs_listener import SqsListener
from src import image_handler,  ApiEndpoints

class ImageLabelingListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes, access_token):
        image_handler([body], api)

listener = ImageLabelingListener('someQueue', region_name='us-east-1')
listener.listen()
'<?xml version="1.0"?><ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><Error><Type>Sender</Type><Code>InvalidAddress</Code><Message>The address https://queue.amazonaws.com/ is not valid for this endpoint.</Message><Detail/></Error><RequestId>7b75f49f-f99b-5f81-91ae-a0684f8d1674</RequestId></ErrorResponse>'
I0304 18:25:10.150804 4700857792 hooks.py:210] Event needs-retry.sqs.GetQueueUrl: calling handler <botocore.retryhandler.RetryHandler object at 0x1333e3470>
I0304 18:25:10.150948 4700857792 retryhandler.py:187] No retry needed.
Traceback (most recent call last):
  File "handler.py", line 14, in <module>
    listener = ImageLabelingListener('ImageTagging', region_name='us-east-1')
  File "/Users/timothybrantleyii/.local/share/virtualenvs/imageLabeling-SmPpLJku/lib/python3.7/site-packages/sqs_listener/__init__.py", line 56, in __init__
    self._client = self._initialize_client()
  File "/Users/timothybrantleyii/.local/share/virtualenvs/imageLabeling-SmPpLJku/lib/python3.7/site-packages/sqs_listener/__init__.py", line 116, in _initialize_client
    QueueOwnerAWSAccountId=os.environ.get('AWS_ACCOUNT_ID', None))
  File "/Users/timothybrantleyii/.local/share/virtualenvs/imageLabeling-SmPpLJku/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Users/timothybrantleyii/.local/share/virtualenvs/imageLabeling-SmPpLJku/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAddress) when calling the GetQueueUrl operation: The address https://queue.amazonaws.com/ is not valid for this endpoint.

support of shared-credentials-file

Hi, why is the shared-credentials-file method not valid?

if (not os.environ.get('AWS_ACCOUNT_ID', None) and
        not (boto3.Session().get_credentials().method in
             ['iam-role', 'assume-role', 'assume-role-with-web-identity'])):
    raise EnvironmentError('Environment variable `AWS_ACCOUNT_ID` not set and no role found.')

Invalid Syntax

When using the example I get the following error:

File "t.py", line 7 listener = MyListener('my-message-queue', error_queue='my-error-queue', region_name='us-east-1') ^ SyntaxError: invalid syntax

Using Python 2.7.10 and pySqsListener 0.6.7

handle_message in sample_daemon does not execute?

Hi,

I'm trying to run this as a Daemon, but unfortunately I am having a really hard time making it work. I don't usually code in Python, so apologies if there is a basic mistake at my end.

I have set up all the AWS credentials correctly and have enabled some logging in the sample daemon file by putting the following lines:

import logging

logger = logging.getLogger('sqs_listener')
logger.setLevel(logging.INFO)

sh = logging.FileHandler('/home/ubuntu/python_errors/error.log')
sh.setLevel(logging.INFO)

formatstr = '[%(asctime)s - %(name)s - %(levelname)s]  %(message)s'
formatter = logging.Formatter(formatstr)

sh.setFormatter(formatter)
logger.addHandler(sh)

When I run the sample daemon as

sudo python sample_daemon.py start,

It does run, as I see

Starting listener daemon
Initializing listener

Then, in the log files, I see logs created by the file __init__.py, which are

[2018-02-20 13:51:30,300 - sqs_listener - INFO] 1 messages received
[2018-02-20 13:51:30,300 - sqs_listener - WARNING] Unable to parse message - JSON is not formatted properly

coming from these lines

sqs_logger.info( str(len(messages['Messages'])) + " messages received")

and

sqs_logger.warning("Unable to parse message - JSON is not formatted properly")

in __init__.py

So it is able to read from the queue. All this is fine. However, no code I put in the handle_message() function of the sample_daemon executes.

For example, in my sample_daemon.py, I have added

def handle_message(self, body, attributes, messages_attributes):
        logger.info("all done")
        file = open("testfile.txt","w")
        file.write("Hello World")
        file.close() 

Just some test code, but neither can I see the text "all done" in the logfile (although the same logfile has logs created by __init__.py), nor can I see any testfile.txt created. Please help!

Unable to parse message - JSON is not formatted properly

Hi,
I want to poll an SQS queue for new messages. I run the listener directly, not as a daemon.

I get the following error:

2020-11-13 10:03:26,975 - INFO - 1 messages received
2020-11-13 10:03:26,975 - WARNING - Unable to parse message - JSON is not formatted properly

I used your code from the example you provide:

from sqs_listener import SqsListener

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        print("In handle_message")

listener = MyListener('Orders', error_queue='orders-dead-letter-queue', region_name='eu-central-1')
listener.listen()

Do you know what I am missing in my code?
I read in one of your threads your comment "override in a subclass". Can you give an example of that?
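
For what it's worth, the deserializer kwarg documented in the README above can sidestep this warning when the queue carries non-JSON bodies. A minimal sketch, assuming the messages are plain text:

from sqs_listener import SqsListener

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        # body is whatever the deserializer returned
        print("In handle_message:", body['raw'])

# Wrap each raw body in a dict instead of letting json.loads fail
listener = MyListener('Orders',
                      error_queue='orders-dead-letter-queue',
                      region_name='eu-central-1',
                      deserializer=lambda s: {'raw': s})
listener.listen()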

State of this repository ?

Hi,

This repository has not had any commits in the last 14 months. I would like to know if it is still maintained and accepting new pull requests / feature requests.

Thank you

Fifo Queue

Hi: does python-sqs-listener support FIFO queues? Thanks.

Provide a heartbeat feature

Hi,

It would be nice to have a native heartbeat feature that extends the visibility timeout while the processing of an SQS message is not yet done.

Inspiration could be taken from this node package: https://github.com/BBC/sqs-consumer and their heartbeatInterval param.
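
For reference, a minimal sketch of such a heartbeat using boto3 directly, not this package; the function and its parameters are hypothetical, and credentials/region are assumed to come from the environment:

import threading

import boto3

def start_heartbeat(queue_url, receipt_handle, interval=60, timeout=120):
    # Extend the message's visibility every `interval` seconds while it is
    # still being processed; `timeout` is the new visibility window in seconds.
    sqs = boto3.client('sqs')
    stop = threading.Event()

    def beat():
        while not stop.wait(interval):
            sqs.change_message_visibility(QueueUrl=queue_url,
                                          ReceiptHandle=receipt_handle,
                                          VisibilityTimeout=timeout)

    threading.Thread(target=beat, daemon=True).start()
    return stop  # call stop.set() once the handler finishes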

Unnecessarily fails when AWS credentials method is 'assume-role-with-web-identity'

In EKS environments, each container on a single host can have its own credentials. Authentication is supported via a mechanism of 'serviceAccounts' (a kubernetes term).

boto3.Session().get_credentials().method returns assume-role-with-web-identity which is another form of iam role based credentials.

The current implementation only allows 'iam-role' and 'assume-role', but not 'assume-role-with-web-identity'.

Error queue excluded from list queues prefix in ListQueues check

Say I have two existing queues: a main queue process-queue and an error queue error-queue.

My error queue is never flagged as existing because the ListQueues operation uses the main queue name as a prefix:

queues = sqs.list_queues(QueueNamePrefix=self._queue_name)
main_queue_exists = False
error_queue_exists = False
if 'QueueUrls' in queues:
    for q in queues['QueueUrls']:
        qname = q.split('/')[-1]
        if qname == self._queue_name:
            main_queue_exists = True
        if self._error_queue_name and qname == self._error_queue_name:
            error_queue_exists = True

The listener attempts to create the error queue since it wasn't found, and gets a QueueAlreadyExists error immediately.

If I specify the error queue as process-queue-error and it already exists, I don't get any errors, because it comes back in the ListQueues call that uses the prefix.

There are a couple of solutions to this, but it's a pretty big issue for anyone using pre-created queues.

Make deletion of queue message optional even in success cases

Hi,
I'm currently developing using python-sqs-listener, and I've found myself wishing that deleting a queue message after handle_message was optional even when handle_message runs without raising an exception. I envision using this feature almost exclusively during development, but maybe there are use cases that don't want the message deleted after processing even in production code. I would be more than willing to submit a pull request, but I wanted to check in and see how this idea might be received. Thoughts?

botocore.errorfactory.QueueNameExists when error_queue_name does not share a "prefix" with queue_name

When creating queues via CloudFormation you can allow the queue names to be generated automatically. For example a template with these resources:

  MailDeadLetterQueue:
    Type: AWS::SQS::Queue

  MailQueue:
    Type: AWS::SQS::Queue
    Properties:
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn: !Sub ${MailDeadLetterQueue.Arn}
        maxReceiveCount: 3

Generates two queues named StackName-MailQueue-UPCWLGTA34SP and StackName-MailDeadLetterQueue-1TD3W1IE8HV2C.

Currently, the codebase only looks for queues that match the queue name as a prefix.

https://github.com/nimbusscale/python-sqs-listener/blob/master/sqs_listener/__init__.py#L80

Given we are passing the exact queue names to SqsListener, I'd like to update the code so that it looks for each queue individually. Something like this:

        sqs = self._session.client('sqs', region_name=self._region_name, endpoint_url=self._endpoint_name, use_ssl=ssl)
        try:
            self._queue_url = sqs.get_queue_url(QueueName=self._queue_name)['QueueUrl']
            mainQueueExists = True
        except sqs.exceptions.QueueDoesNotExist:
            mainQueueExists = False

        try:
            self._error_queue_url = sqs.get_queue_url(QueueName=self._error_queue_name)['QueueUrl']
            errorQueueExists = True
        except sqs.exceptions.QueueDoesNotExist:
            errorQueueExists = False

This is just to get the concept across; I realize that there needs to be support for when the AWS_ACCOUNT_ID env var is set, and there may be a few other changes needed to match the rest of the code as well. I'll submit a complete PR assuming this sounds like a direction you are OK with and this project is still active (which it seems to be).

Thanks!

Message delete optional at runtime

Is it possible to decide at run time whether I want to delete a message or not?

i.e.: in my SQS queue, events are injected that should be processed after a given timestamp (this is how it works).
When I listen for messages, all messages get deleted if I use force_delete=True.
By contrast, if I use force_delete=False, no message is ever deleted.

P.S.: I tried returning False from the message handler when I do NOT want the message deleted, but it doesn't seem to change the behaviour.

Thanks for your help!

outdated pip package

Could you please update your pip package? It does not have your latest code changes.

botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://queue.amazonaws.com/ is denied.

I'm trying to follow the tutorial but it shows me an error:

from sqs_listener import SqsListener
import os

os.environ["AWS_ACCOUNT_ID"] = "xxxxx"

import boto3
# create a boto3 client
client = boto3.client('sqs')

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        print('here')


listener = MyListener('https://sqs.us-east-1.amazonaws.com/xxxxxx/yyyyyyyy',
                      error_queue='my-error-queue', region_name='us-east-1', )
listener.listen()

Having difficulty listening to a queue.

Hello,

As part of my POC, I am pushing /var/log/auth.log from a couple of servers to an SQS queue. Now I want to listen to the queue so that I know exactly what is being pushed. I am using the following code as a Python script, and it throws different errors; the latest one was "no logger found". Can you please advise?

from sqs_listener import SqsListener

class MyListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        run_my_function(body['data'], body['SenderId'])

listener = MyListener('my-queue-name', region_name='us-east-1')
listener.listen()

Also, AWS_ACCOUNT_ID is exported and is not a part of the problem.

sqs_listener not found

I've been getting this error saying that SqsListener doesn't exist for some reason, even after running pip install pySqsListener. Below are the error and the code used to run it.

File "./run.py", line 1, in <module>
    from sqs_listener import SqsListener
ModuleNotFoundError: No module named 'sqs_listener'
from sqs_listener import SqsListener
from src import image_handler, dev_log, ApiEndpoints, TrainingFile

# host address
target_address = 'api.foyerapp.io'

# will get access token
api = ApiEndpoints(target_address)

# Build Model Training Files
training = TrainingFile()

class ImageLabelingListener(SqsListener):
    def handle_message(self, body, attributes, messages_attributes):
        image_handler(api, training, *[body])

listener = ImageLabelingListener('ImageTagging', region_name='us-east-1', queue_url='https://sqs.us-east-1.amazonaws.com/618135106065/ImageTagging')
listener.listen()

I've also run pip freeze (output below) and still get the same error.

asn1crypto==0.24.0
boto3==1.9.122
botocore==1.12.122
certifi==2019.3.9
chardet==3.0.4
cryptography==2.1.4
docutils==0.14
enum34==1.1.6
futures==3.2.0
idna==2.8
ipaddress==1.0.17
jmespath==0.9.4
keyring==10.6.0
keyrings.alt==3.0
numpy==1.16.2
pycrypto==2.6.1
pygobject==3.26.1
pySqsListener==0.8.7
python-dateutil==2.8.0
pyxdg==0.25
requests==2.21.0
s3transfer==0.2.0
SecretStorage==2.3.1
six==1.12.0
tf==1.0.0
urllib3==1.24.1

Getting `OSError: [Errno 29] Invalid seek` when daemon starts

Hi,

I'm trying to get this listener up and running in Docker but I'm getting the following error when the daemon starts up:

[2018-10-17 16:43:32 +0000] [9] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/opt/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/opt/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/opt/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/opt/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/opt/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/opt/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/opt/local/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/app/joker/poller/app.py", line 54, in <module>
    daemon.start()
  File "/opt/local/lib/python3.7/site-packages/sqs_listener/daemon.py", line 98, in start
    self.daemonize()
  File "/opt/local/lib/python3.7/site-packages/sqs_listener/daemon.py", line 60, in daemonize
    so = open(self.stdout, 'a+')
OSError: [Errno 29] Invalid seek

Here's my code:

from flask import Flask
from sqs_listener.daemon import Daemon
from sqs_listener import SqsListener
import sys
import logging
from os import getenv
sys.path.insert(0, '..')

app = Flask(__name__)
app.config.from_object('settings.Config.%sConfig' % getenv('FLASK_ENV').title())


class VoteMessagesListener(SqsListener):
    # TODO: send message to endpoint for processing.
    def handle_message(self, body, attributes, messages_attributes):
        return True


class VoteMessagesDaemon(Daemon):
    def run(self):
        print("Initializing listener")
        listener = VoteMessagesListener(
            '{url}{queue_name}'.format(
                url = app.config['SQS_QUEUE_URL'],
                queue_name = app.config['SQS_POLLS_VOTES_QUEUE']
            ),
            error_queue='{url}{queue_name}'.format(
                url = app.config['SQS_QUEUE_URL'],
                queue_name = app.config['SQS_POLLS_ERROR_QUEUE']
            ),
            region_name=app.config['AWS_DEFAULT_REGION']
        )
        listener.listen()


logger = logging.getLogger('sqs_listener')
logger.setLevel(logging.INFO)

sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.INFO)

formatstr = '[%(asctime)s - %(name)s - %(levelname)s]  %(message)s'
formatter = logging.Formatter(formatstr)

sh.setFormatter(formatter)
logger.addHandler(sh)

# Doing this so I can see the daemon running.
daemon = VoteMessagesDaemon('/var/run/sqs_daemon.pid')
daemon.start()

Any thoughts on what is the cause of the problem?

Thank you for your help!
Joycelyn

Exception when handle_message returns an exception

Every time handle_message fails to process a message for some reason, it gets an exception when it tries to send a message to the error queue. I could also be making a mistake.

I'm fairly new to Python; from what I can see, the issue is that the region is not being passed to sqs_launcher. I've fixed it in my fork, but it might not be the correct way. I'll submit a pull request anyway.

Traceback (most recent call last):
  File "/testing.py", line 67, in <module>
    listener.listen()
  File "/usr/local/lib/python3.5/dist-packages/sqs_listener/__init__.py", line 156, in listen
    self._start_listening()
  File "/usr/local/lib/python3.5/dist-packages/sqs_listener/__init__.py", line 140, in _start_listening
    error_launcher = SqsLauncher(self._error_queue_name, True)
  File "/usr/local/lib/python3.5/dist-packages/sqs_launcher/__init__.py", line 39, in __init__
    self._client = boto3.client('sqs')
  File "/usr/local/lib/python3.5/dist-packages/boto3/__init__.py", line 83, in client
    return _get_default_session().client(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/boto3/session.py", line 263, in client
    aws_session_token=aws_session_token, config=config)
  File "/usr/local/lib/python3.5/dist-packages/botocore/session.py", line 836, in create_client
    client_config=config, api_version=api_version)
  File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 71, in create_client
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 281, in _get_client_args
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 45, in get_client_args
    endpoint_url, is_secure, scoped_config)
  File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 111, in compute_client_args
    service_name, region_name, endpoint_url, is_secure)
  File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 354, in resolve
    service_name, region_name)
  File "/usr/local/lib/python3.5/dist-packages/botocore/regions.py", line 122, in construct_endpoint
    partition, service_name, region_name)
  File "/usr/local/lib/python3.5/dist-packages/botocore/regions.py", line 135, in _endpoint_for_partition
    raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
