
yum-s3-iam

This is a yum plugin that allows private AWS S3 buckets to be used as package repositories. The plugin uses AWS Identity and Access Management (IAM) roles for authorization, removing any requirement to define an access and secret key pair anywhere in your repository configuration.

What is an IAM Role?

IAM Roles are used to control access to AWS services and resources.

For further details, take a look at the AWS-provided documentation on IAM roles for Amazon EC2.

Why it's useful: when you assign an IAM role to an EC2 instance, AWS automatically provides temporary credentials to software running on that instance. This removes the need to store, change and/or rotate them, while also providing fine-grained control over what actions can be performed with the credentials.

This particular plugin makes use of the IAM role credentials when accessing the S3 buckets backing a yum repository.

How do I set it up?

There is a great blog post by Jeremy Carroll which explains in depth how to use this plugin: S3 Yum Repos With IAM Authorization (via Wayback Machine).

Notes on S3 buckets and URLs

There are two types of S3 URLs:

  • virtual-hosted–style URLs:
    • https://<bucket>.s3.amazonaws.com/<path> if region is US East (us-east-1)
    • https://<bucket>.s3-<aws-region>.amazonaws.com/<path> in other regions
  • path-style URLs:
    • https://s3.amazonaws.com/<bucket>/<path> if region is US East (us-east-1)
    • https://s3-<aws-region>.amazonaws.com/<bucket>/<path> in other regions

When using HTTPS with a bucket name containing a dot (.), you need to use the path-style URL syntax, because a dotted bucket name does not match the *.s3.amazonaws.com wildcard TLS certificate.
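
For illustration, a minimal repository definition using a virtual-hosted-style baseurl might look like this (the bucket name, region and path are placeholders):

[my-s3-repo]
name=My private S3 repository
baseurl=https://my-bucket.s3-eu-west-1.amazonaws.com/repo/
enabled=1
s3_enabled=1
gpgcheck=0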

Use outside of EC2

Some use-cases (Continuous Integration, Docker) involve S3-hosted yum repositories being accessed from outside EC2. For those cases, two options are available:

  • Use AWS API keys via the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY (and optionally AWS_SESSION_TOKEN) environment variables; see the example below. These are used as a fallback when IAM role credentials cannot be accessed.
  • Setting the environment variable DISABLE_YUM_S3_IAM to 1 disables the yum-s3-iam plugin. This should be used together with S3 bucket IP whitelisting.
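
For example, from a CI host outside EC2 (a sketch; the key values and repo id are placeholders):

export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=exampleSecretKey
yum --enablerepo=my-s3-repo makecache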

Limitations

Currently the plugin does not support:

  • Proxy server configuration
  • Multi-valued baseurl or mirrorlist

Testing

Use make test to run some simple tests.

Testing with Docker Compose:

docker-compose -f docker-compose.tests.yml run yum-s3-iam test
docker-compose -f docker-compose.tests.yml down --volumes --rmi all

License

Apache 2.0 license. See LICENSE.

Maintainers

  • Mathias Brossard
  • Mischa Spiegelmock
  • Sean Edge

yum-s3-iam's People

Contributors

agarfu, dresnick-sf, jlambert121, jonnangle, mattjamison, mbrossard, paulegan, phobos182, ryandb, seporaitis, toned, zackse


yum-s3-iam's Issues

IAM support seems to have broken for me

I'm not sure; maybe I am missing a step here. This used to work for me, but now with a new EC2 instance, new role, and new bucket it no longer works.

For troubleshooting purposes I created the role/policy as:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}

The aws s3 CLI tools work with no need to give keys; they use the IAM role fine and work for every URL I have tried.

So what can I do to troubleshoot this and figure out what's going on?
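
One assumption-free first step is to confirm that the role credentials are actually reachable from the instance by querying the metadata service directly (ROLE_NAME is whatever name the first command returns):

# List the IAM role attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Dump the temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME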

CentOS 7.0 support

Hullo,

We've been using this plugin for months on CentOS 6.5 AMIs without any problem (thank you!) - the IAM EC2 role is always reliably picked up. Today I'm starting to use a CentOS 7.0 AMI and the plugin is failing. When I run yum makecache I can see Loaded plugins: fastestmirror, s3iam, but the yum run fails:

https://sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj.s3.amazonaws.com/repos/puppet/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden

The EC2 instance definitely does have a role defined, since if I use the native awscli (and hence the IAM EC2 roles) I can read the repomd.xml just fine...

[root@bastion-iea61640f ~]# aws s3 cp s3://sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj/repos/puppet/repodata/repomd.xml /tmp
download: s3://sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj/repos/puppet/repodata/repomd.xml to ../tmp/repomd.xml
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
sit-bastion1-SSUO17B1DRF-RootRole-T63LJKQOB5NB

When I run tcpdump -i eth0 -n host 169.254.169.254 while running yum makecache there is no network traffic - the credentials are not being looked up by the plugin. My repo definition is:

name=itv-s3
baseurl=https://sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj.s3.amazonaws.com/repos/puppet
metadata_expire=10s
enabled=1
s3_enabled=1
gpgcheck=0

How can I get more info / debug about what the plugin is doing 'behind the scenes'?

FWIW, I used the official CentOS 7 AMI on the Marketplace: https://aws.amazon.com/marketplace/pp/B00O7WM7QW

Cheers,
Gavin.

Important Update From Author - Jun 5th, 2020

Hello everyone, the author of the plugin would like to make an announcement.

This project has not been updated in a while. I stopped maintaining it in 2014/15 (Issue 38). After that, a few excellent open-source maintainers took over and kept the project up-to-date until late 2017. It is mostly thanks to them that this project is, surprisingly, still alive, still in use, and even finding new users. I don't know the reasons maintenance stopped after 2017, but I am sure there is a valid one, and I appreciate their time maintaining the project for that long. So what is this about?

Lately I have had a few inquiries about this project and, since there have not been any releases, I intend to come back "from retirement" and bring this project up-to-date. I haven't worked with CentOS-flavored Linux for a while (6 years), but in the coming weeks I plan to:

  1. Refamiliarize myself with the plugin code and its usage.
  2. Go through open and closed issues and understand the use-cases this project has been part of.
  3. Go through pull requests, evaluate them, and merge them.

Overall my goal is to:

  1. Bring the code up-to-date (e.g. switch to boto3 if possible; #78 is a very helpful contribution here).
  2. Set up a visible CI pipeline so that it is easier to contribute and test changes.
  3. Update the documentation with concrete examples.

Understandably, I don't want to accidentally break someone's use-case unannounced, so I will be deliberately slow at first. If anyone wants to help steer me and point out potential issues, I would really appreciate that in the form of a comment or code review. Hopefully, together we can keep this plugin enjoyable to work with.

Problems using IAM role

Hey there!

I'm trying to use this plugin (latest master commit) on an EC2 instance, authenticating via an IAM role.
The plugin/yum gives this error, however:

Error: Cannot retrieve repository metadata (repomd.xml) for repository: foo. Please verify its path and try again

The auth/IAM role seems fine, as I can run

aws s3 ls s3://bucket-url/path

on the instance without problem.

Any idea how I might debug the issue?
I was looking at the plugin to see if I could enable logging, but no such luck.

Thanks!

S3 URL deprecation

According to the announcement on the AWS forums ( https://forums.aws.amazon.com/ann.jspa?annID=6776 ), the URL handling will need to be updated to always use virtual-hosted-style URLs (https://<bucket>.s3.<region>.amazonaws.com/<path>) instead of path-style URLs (https://s3.<region>.amazonaws.com/<bucket>/<path>). Currently the virtual-hosted pattern is used when us-east-1 is the region, but not for non-us-east-1 or CN regions.

Amazon S3 currently supports two request URI styles in all regions: path-style (also known as V1) that includes the bucket name in the path of the URI (example: https://s3.amazonaws.com/<bucket>/<key>), and virtual-hosted style (also known as V2) which uses the bucket name as part of the domain name (example: https://<bucket>.s3.amazonaws.com/<key>). In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format. Customers should update their applications to use the virtual-hosted style request format when making S3 API requests before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.

If there is any reason why your application is not able to utilize the virtual-hosted style request format, or if you have any questions or concerns, please reach out to AWS Support.

We may also want to adjust the parse_url logic to return the base AWS URL, removing the case statement that does not work for other regions that use a different AWS endpoint, but that can be covered in another issue.

One issue that may be worth addressing as part of this change: if a bucket name has a . in it, do we properly send a warning/error up to the user of this plugin? Is there a reasonable exception that gets flowed up, or should we manually add the warning to the case? (A parsing sketch follows below.)
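
For illustration only, virtual-hosted-style parsing with such a warning could look roughly like this (a sketch; the regex, names and warning behaviour are assumptions, not the plugin's actual parse_url):

import re

# Hypothetical sketch, not the plugin's actual parse_url: extract bucket,
# region and path from a virtual-hosted-style S3 URL.
VHOST_RE = re.compile(
    r'^https?://(?P<bucket>[^/]+)\.s3[.-](?:(?P<region>[a-z0-9-]+)\.)?'
    r'amazonaws\.com(?:\.cn)?(?P<path>/.*)?$')

def parse_vhost_url(url):
    match = VHOST_RE.match(url)
    if match is None:
        raise ValueError("not a virtual-hosted-style S3 URL: %s" % url)
    bucket = match.group('bucket')
    if '.' in bucket:
        # Dots in the bucket name break the *.s3.amazonaws.com wildcard
        # certificate over HTTPS, so surface a warning to the user here.
        print("warning: bucket %r contains dots; HTTPS will fail" % bucket)
    return bucket, match.group('region'), match.group('path') or '/'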

Getting occasional failures which I would expect retries to handle

I am occasionally getting failures like the following:

Execution of '/usr/bin/yum -d 0 -e 0 -y install vim' returned 1: <urlopen error [Errno 104] Connection reset by peer>

I would expect that retries would handle this, and I have a hard time believing that S3 itself would really do this 10 times in a row. I say 10 because we are using the default value for retries.
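
For anyone experimenting, the kind of retry wrapper I would expect to be in play looks roughly like this (a sketch; the names and delay are assumptions, though 10 matches the default retries mentioned above):

import time
import urllib2

def urlopen_with_retries(request, retries=10, delay=1):
    """Hypothetical sketch: retry transient failures such as
    'Connection reset by peer' before giving up."""
    for attempt in range(retries):
        try:
            return urllib2.urlopen(request)
        except (urllib2.URLError, IOError):
            if attempt == retries - 1:
                raise
            time.sleep(delay)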

Enumerate use-cases.

List things like:

  • Current and deprecated.
  • Uses of credentials
  • Uses of S3 buckets
  • Uses of Proxies
  • Else?

Link to instructions is dead.

In README.md, the link under "How do I set it up?" is dead, because Jeremy Carroll's site is offline. I'd love to use this tool, but there are no instructions anymore.

Yum unable to download package with + in its name (giving http 404 error).

I tried this plugin and it works perfectly, but I ran into one problem with a package that contained a + in its name.
I was able to work around this by adding ".replace('+','%2B')" at the end of line 160, making it:

url = urlparse.urljoin(self.baseurl, path).replace('+','%2B')

This might not be the cleanest way, but it solved my problem. The %2B comes from S3, which changes a + to %2B in the URL.
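
A slightly more general workaround (a sketch, not the plugin's code; the bucket and file names are made up) is to percent-encode the whole relative path before joining, which covers + and other reserved characters:

import urllib
import urlparse

# Hypothetical sketch: percent-encode the relative path so that '+' in a
# package file name reaches S3 as '%2B'.
baseurl = "https://example-bucket.s3.amazonaws.com/repo/"
path = "gcc-c++-4.8.5-44.el7.x86_64.rpm"

url = urlparse.urljoin(baseurl, urllib.quote(path, safe='/'))
print(url)  # .../repo/gcc-c%2B%2B-4.8.5-44.el7.x86_64.rpm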

IAM doesn't support v4 signatures (china, frankfurt)

Hello,

I installed yum-s3-iam and finished the configuration for the repo and AWS permissions in AWS China, but IAM support doesn't work.
The baseurl is https://rio-yum-repo.s3.cn-north-1.amazonaws.com.cn and requests return HTTP Error 400: Bad Request.
AWS support replied to my question:
The "HTTP Error 400: Bad Request" was thrown because s3iam doesn't support AWS Signature Version 4, which is the only supported option in the Beijing region.
s3iam only supports AWS Signature Version 2, so you may need to submit a feature request to its author or modify the source code yourself to implement Signature Version 4. You can find some reference below on how to sign with Version 4:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
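
For reference, the heart of the difference is the SigV4 signing-key derivation (a sketch following the AWS examples linked above; the secret key and date are placeholders):

import hashlib
import hmac

def _sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def sigv4_signing_key(secret_key, date_stamp, region, service='s3'):
    """Derive the scoped SigV4 signing key; SigV2 signs with the raw
    secret key, which is why the Beijing region rejects such requests."""
    k_date = _sign(('AWS4' + secret_key).encode('utf-8'), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, 'aws4_request')

key = sigv4_signing_key('EXAMPLE_SECRET_KEY', '20150830', 'cn-north-1')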

Will it be fixed in the future?

Best Regards

Fails when mirrorlist is present

Nice plugin, thanks for creating it. I hit the following when setting this up today.

The issue is twofold; one part I'd consider a bug, the other an enhancement. If the mirrorlist attribute is set in the .repo file you get the exception below. Even if mirrorlists are not supported, the plugin shouldn't barf.

It would be doubly awesome if mirrorlists were actually supported.

Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 285, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 136, in main
    result, resultmsgs = base.doCommands()
  File "/usr/share/yum-cli/cli.py", line 434, in doCommands
    self._getTs(needTsRemove)
  File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 99, in _getTs
    self._getTsInfo(remove_only)
  File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 110, in _getTsInfo
    pkgSack = self.pkgSack
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 887, in <lambda>
    pkgSack = property(fget=lambda self: self._getSacks(),
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 669, in _getSacks
    self.repos.populateSack(which=repos)
  File "/usr/lib/python2.6/site-packages/yum/repos.py", line 308, in populateSack
    sack.populate(repo, mdtype, callback, cacheonly)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 165, in populate
    if self._check_db_version(repo, mydbtype):
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 223, in _check_db_version
    return repo._check_db_version(mdtype)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1256, in _check_db_version
    repoXML = self.repoXML
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1455, in <lambda>
    repoXML = property(fget=lambda self: self._getRepoXML(),
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1447, in _getRepoXML
    self._loadRepoXML(text=self)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1437, in _loadRepoXML
    return self._groupLoadRepoXML(text, self._mdpolicy2mdtypes())
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1412, in _groupLoadRepoXML
    if self._commonLoadRepoXML(text):
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1230, in _commonLoadRepoXML
    result = self._getFileRepoXML(local, text)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1008, in _getFileRepoXML
    size=102400) # setting max size as 100K
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 823, in _getFile
    result = self.grab.urlgrab(misc.to_utf8(relative), local,
  File "/usr/lib/yum-plugins/s3iam.py", line 95, in grab
    self.grabber = S3Grabber(self)
  File "/usr/lib/yum-plugins/s3iam.py", line 114, in __init__
    "'baseurl' value" % repo.id)

403 Forbidden

I'm back with another fun problem. Again. :) Perhaps by typing it all out in an issue report I'll find a mistake on my end. Here's hoping. :)

I'm still working on mirroring a handful of public repositories to S3. I've got most of CentOS and RepoForge working correctly with yum-s3-iam, as well as our own private repo.

This time I'm trying to mirror the Fedora EPEL repo. Pretty simple script to mirror it:

rsync -vaH --numeric-ids --delete --delete-after --delay-updates rsync://dl.fedoraproject.org/fedora-epel epel/

Seems to work fine:

$ ls -al
drwxr-xr-x 6 root root   4096 Jan 21 22:48 .
drwxr-xr-x 5 root root   4096 Sep 27 12:09 ..
drwxr-xr-x 3 root root   4096 Sep 26 16:11 CentOS
drwxrwsr-x 7  263    263 4096 Jan 24 20:42 epel
drwxrwxr-x 4  101 nobody 4096 Jan 24 20:50 repoforge
drwxr-xr-x 4 root root   4096 Sep 27 12:10 mycompany

I then sync it to S3 (along with the others) with s3cmd:

s3cmd sync --skip-existing --delete-removed ./ s3://example-yum/

Everything shows up in S3 successfully. I updated the /etc/yum.repos.d/epel.repo config:

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
baseurl=http://example-yum.s3.amazonaws.com/epel/6/x86_64
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
s3_enabled=1
exclude=nagios* mongodb*

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
baseurl=http://example-yum.s3.amazonaws.com/epel/6/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
s3_enabled=1
exclude=nagios* mongodb*

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
baseurl=http://example-yum.s3.amazonaws.com/epel/6/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
s3_enabled=1
exclude=nagios* mongodb*

When I try to do a yum info, I get a 403:

Loaded plugins: fastestmirror, s3iam, security
s3iam: found S3 private repository
s3iam: found S3 private repository
s3iam: found S3 private repository
s3iam: found S3 private repository
Determining fastest mirrors
 * base: mirrors.seas.harvard.edu
 * extras: mirrors.rit.edu
 * updates: mirror.cogentco.com
10gen
10gen/primary     
10gen
base
base/primary_db

HTTP Error 403: Forbidden

If I disable the EPEL repo, it works. Meanwhile, CentOS updates and RepoForge work just fine with yum-s3-iam. So there has to be something different about the EPEL repo. Any suggestions on what I might look for?

fails to work in cn-north-1

I was trying to get this to run in our AWS China deployment and was unable to get it to work.

I tried manually setting the region, bucket and path to cn-north-1, mybucket and /, combining them into https://s3.cn-north-1.amazonaws.com.cn/mybucket/, and it failed. The real fix would be to have the regex handle it properly, but I was just trying to get something working quickly.

Sadly I was in the middle of firefighting several issues and had to stop troubleshooting this as I had a faster way out of my immediate problems.

I'd love to contribute a PR to make it work in China, but I may need a little more guidance/help on debugging, or I might just need more sleep. I'm able to test anyone's changes if you get to it faster than I do.

It's been super helpful in our non-China deployments, so thanks for your work.

edit: http://docs.amazonaws.cn/en_us/general/latest/gr/rande.html#s3_region - AWS China is special and more akin to AWS GovCloud: it has unique endpoints and IAM that is separate from normal AWS. tl;dr: it's a pain.

Stop building with variable "Name:" in .spec file

Putting in "Name:" as an undefined value in the .spec file makes the SRPM useless without setting "name" in the rpmbuild command. I've a branch set up to generate ".spec" from a ".spec.in" file, and use that.

I've also switched the syntax to generate the tarball locally and efficiently, to exclude the tarballs and any generated content in .gitignore, to generate the .src.rpm file first and use that for building RPMs, and to use "mock" to generate cleaner, environment-appropriate RPMs as needed for typical deployment. Using "mock" helps avoid requiring the build system to be the same OS version and architecture as the target systems. I'll submit a pull request for these ASAP.

s3iam.py does not honour yum proxy setting

When I install and activate yum-cron on a system that must use a proxy to access yum repositories, any repository configured with s3_enabled triggers a stack trace, which points at line 197 in s3iam.py. Looking at firewall logs, I can also confirm the server is trying to connect directly to an S3 IP address, rather than going through the web proxy.

Yum supports specifying a proxy server in /etc/yum.conf (see man 5 yum.conf for more on this).
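
For reference, the global setting looks something like this (the proxy host and port are placeholders):

# /etc/yum.conf
[main]
proxy=http://proxy.example.com:3128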

A workaround is to modify the cron job to source another file containing http_proxy/https_proxy settings, but it would be good to have this bug fixed, ideally by ensuring the s3iam plugin honours the global and/or repo-specific proxy setting from yum.

s3iam.py fails when running outside of EC2

We are running yum inside Docker images which are run in EC2 for production and staging, and on developers' machines during development and testing. Because of this, s3iam.py is trying to access S3 repos from either:

  • EC2 (via IAM roles), or
  • whitelisted IP addresses (via the S3 bucket policy)

Accessing from EC2 obviously works, but running yum from a whitelisted IP (e.g. VPN, office, etc.) fails with the error: <urlopen error [Errno 111] Connection refused>.

For this use-case, we should only try to retrieve IAM credentials if IAM is accessible, and fall back to normal un-authenticated access if not.

I'll submit the fix we implemented as a PR shortly.

yum check-update --security command is not being handled by s3iam plugin

Hi,
I am really happy that you guys have decided to keep the project alive! I am using this plugin for self-hosted repos in an S3 bucket. I have created repos for Amazon Linux and Amazon Linux 2, both being ports of CentOS. For the install and check-update commands the plugin is doing great, generating HMAC signatures and adding them to yum's HTTP requests. However, when I run yum check-update --security or yum update --security, the requests are not handled by the plugin.
I have analysed the packets sent to the backend when these commands are executed. They are sent directly as ordinary HTTP requests, without HMAC signatures, to the S3 URL, resulting in HTTP Error 403 Forbidden, which is of course no surprise.
Any info in this regard will be really helpful. Thank you

CertificateError for bucket name with dots

Using a bucket whose name includes dots raises a CertificateError:

Loaded plugins: priorities, s3iam, update-motd, upgrade-helper
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 367, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 174, in main
    result, resultmsgs = base.doCommands()
  File "/usr/share/yum-cli/cli.py", line 572, in doCommands
    return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)
  File "/usr/share/yum-cli/yumcommands.py", line 1728, in doCommand
    return base.search(extcmds)
  File "/usr/share/yum-cli/cli.py", line 1441, in search
    for (po, keys, matched_value) in matching:
  File "/usr/lib/python2.7/dist-packages/yum/__init__.py", line 3176, in searchGenerator
    for sack in self.pkgSack.sacks.values():
  File "/usr/lib/python2.7/dist-packages/yum/__init__.py", line 1077, in <lambda>
    pkgSack = property(fget=lambda self: self._getSacks(),
  File "/usr/lib/python2.7/dist-packages/yum/__init__.py", line 782, in _getSacks
    self.repos.populateSack(which=repos)
  File "/usr/lib/python2.7/dist-packages/yum/repos.py", line 383, in populateSack
    sack.populate(repo, mdtype, callback, cacheonly)
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 250, in populate
    if self._check_db_version(repo, mydbtype):
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 342, in _check_db_version
    return repo._check_db_version(mdtype)
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1520, in _check_db_version
    repoXML = self.repoXML
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1706, in <lambda>
    repoXML = property(fget=lambda self: self._getRepoXML(),
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1702, in _getRepoXML
    self._loadRepoXML(text=self.ui_id)
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1693, in _loadRepoXML
    return self._groupLoadRepoXML(text, self._mdpolicy2mdtypes())
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1667, in _groupLoadRepoXML
    if self._commonLoadRepoXML(text):
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1492, in _commonLoadRepoXML
    result = self._getFileRepoXML(local, text)
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1270, in _getFileRepoXML
    size=102400) # setting max size as 100K
  File "/usr/lib/python2.7/dist-packages/yum/yumRepo.py", line 1058, in _getFile
    **kwargs
  File "/usr/lib/yum-plugins/s3iam.py", line 302, in urlgrab
    response = urllib2.urlopen(request)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1242, in https_open
    context=self._context)
  File "/usr/lib64/python2.7/urllib2.py", line 1196, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1053, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1093, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1049, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 893, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 855, in send
    self.connect()
  File "/usr/lib64/python2.7/httplib.py", line 1274, in connect
    server_hostname=server_hostname)
  File "/usr/lib64/python2.7/ssl.py", line 352, in wrap_socket
    _context=self)
  File "/usr/lib64/python2.7/ssl.py", line 579, in __init__
    self.do_handshake()
  File "/usr/lib64/python2.7/ssl.py", line 816, in do_handshake
    match_hostname(self.getpeercert(), self.server_hostname)
  File "/usr/lib64/python2.7/ssl.py", line 271, in match_hostname
    % (hostname, ', '.join(map(repr, dnsnames))))
ssl.CertificateError: hostname 'repo.trv.ams.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com'

Same happened for boto some time ago: boto/boto#2836
The simplest way to fix it is monkey-patching ssl (note that the first variant disables certificate verification entirely):

# https://github.com/boto/boto/issues/2836#issuecomment-68608362
import ssl
if hasattr(ssl, '_create_unverified_context'):
    # Globally disable HTTPS certificate verification (unsafe but simple).
    ssl._create_default_https_context = ssl._create_unverified_context

or

# https://github.com/boto/boto/issues/2836#issuecomment-68682573
import ssl

_old_match_hostname = ssl.match_hostname

def _new_match_hostname(cert, hostname):
    # Strip the dots from the bucket portion of the hostname so that it
    # matches the '*.s3.amazonaws.com' wildcard certificate.
    if hostname.endswith('.s3.amazonaws.com'):
        pos = hostname.find('.s3.amazonaws.com')
        hostname = hostname[:pos].replace('.', '') + hostname[pos:]
    return _old_match_hostname(cert, hostname)

ssl.match_hostname = _new_match_hostname

I've made a pull request here #46

Is this project dead?

No code has been pushed in about a year and there are pending PRs with no responses/feedback on them.

404 Not Found

I'm still struggling with mirroring the CentOS and EPEL repositories on S3 for re-use by our EC2 infrastructure.

For example, with the EPEL repo, take a look at:

http://download.fedoraproject.org/pub/epel/6/x86_64

My baseurl in my epel.repo config for yum looks like:

baseurl=http://example.s3.amazonaws.com/epel/6/x86_64/

When I do a yum clean and then a yum info on a package that I know lives in the EPEL repo, I get an error:

HTTP Error 404: Not Found

Is it something specific about the repo layout? Is it possible I'm creating the mirror incorrectly?

Bucket URL does not always contain S3

On line 210 of the s3iam.py file you state:

TODO: bucket name finding is ugly, I should find a way to support
both naming conventions: http://bucket.s3.amazonaws.com/ and
http://s3.amazonaws.com/bucket/

This is true, but what I always do is create a bucket called mybucket.example.com, and then the URL is mybucket.example.com.s3.amazonaws.com or s3.amazonaws.com/mybucket.example.com/.

But if you create a CNAME like this:
mybucket.example.com. 300 IN CNAME mybucket.example.com.s3-eu-west-1.amazonaws.com.

You are able to address your content in your bucket like this: http://mybucket.example.com/repodata/....

So it would be great if you built in support for that usage as well.

Using the s3 plugin with DNF

I am able to use the plugin successfully on a CentOS 7 based EC2 instance,
but I am struggling to use it on my Fedora 28 laptop.

The current RPM installs the plugin and config in yum related directories.

Am I correct in assuming that the config should go into /etc/dnf/plugins/ and the plugin itself into /usr/lib/python2.7/site-packages/dnf-plugins?

On my Fedora 28 laptop, the current plugins are all Python 3. Would the s3 plugin work in Python 3 or 2.7?
Thanks in advance

Rename git repo to yum-plugin-s3-iam, to match RPM name

The name of the git repo is misleading. The actual RPM is "yum-plugin-s3-iam", so please rename the repo accordingly.

GitHub leaves redirects in place so that references to the old location will still work, so renaming won't break existing branches.

Special value "_none_" for repository proxy is not handled correctly

Hello,

Configuring an RPM repository with the special value _none_ for its proxy is not handled correctly: the plugin uses _none_ as the proxy and fails to resolve it as a proxy FQDN.

As a result, the retrieval of the IAM credentials fails.

According to man(5) yum.conf, that value indicates that NO proxy should be used in the requests made to that repository.

proxy URL to the proxy server for this repository. Set to '_none_' to disable the global proxy setting for this repository. If this is unset it inherits it from the global setting

Here is a patch that fixes the issue:

--- s3iam.py	2017-08-18 18:56:33.000000000 +0000
+++ s3iam.py.fixed	2017-08-22 07:14:22.746504644 +0000
@@ -168,7 +168,8 @@
         if 'http_proxy' in os.environ:
             proxy_config['http'] = os.environ['http_proxy']
         if repo.proxy:
-            proxy_config['https'] = proxy_config['http'] = repo.proxy
+            if repo.proxy != '_none_':
+                proxy_config['https'] = proxy_config['http'] = repo.proxy
         if proxy_config:
             proxy = urllib2.ProxyHandler(proxy_config)
             opener = urllib2.build_opener(proxy)

We would appreciate it if a fix could be included ASAP.

Thank you very much!

plugin breaks skip_if_unavailable=1

If you set both skip_if_unavailable=1 and s3_enabled=1 on the same repo (see the example below), and the instance's IAM role doesn't provide access to the repo, yum does not properly skip the unavailable S3-enabled repo.
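
For reference, the combination in question (the repo id, bucket and path are placeholders):

[my-s3-repo]
name=Private S3 repository
baseurl=https://my-bucket.s3.amazonaws.com/repo/
s3_enabled=1
skip_if_unavailable=1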

I will file a pull request later if I manage to come up with a solution.

S3 will stop accepting requests signed using SigV2 in all regions on June 24, 2019

We will need to make sure that all signing uses SigV4 prior to the June 24th, 2019 cutoff. There may still be a case for adding a SigV2 signing override/configuration (if people are using an S3-compatible service), but that may need to be part of a larger rework of parse_url (mentioned in #75 (comment)).

https://forums.aws.amazon.com/ann.jspa?annID=5816

AWS Signature Version 4 (SigV4) is recommended for signing S3 API requests over AWS Signature
Version 2 (SigV2) as it provides improved security by using a signing key rather than your secret access
key. SigV4 is currently supported in all AWS regions while SigV2 is only supported in regions launched
prior to Jan 2014. S3 will stop accepting requests signed using SigV2 in all regions on June 24, 2019, any
requests signed using SigV2 made after this time will fail. The signature version is set in your software,
please update your applications to use the latest versions of the tools and SDKs to take advantage of the
improved security and avoid impact.

error in v1.1.1 when region is NoneType

I'm getting the following error when trying to run yum with this plugin at v1.1.1.
v1.1.0 is still working fine.

File "/usr/lib/yum-plugins/s3iam.py", line 116, in prereposetup_hook
  replace_repo(repos, repo)
File "/usr/lib/yum-plugins/s3iam.py", line 100, in replace_repo
  repos.add(S3Repository(repo.id, repo))
File "/usr/lib/yum-plugins/s3iam.py", line 136, in __init__
  if 'cn-north-1' in region:
TypeError: argument of type 'NoneType' is not iterable
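
A minimal guard of this shape (a sketch; every name besides region is hypothetical) would avoid the TypeError when no region can be determined:

# Hypothetical sketch: only probe for the China region when a region was
# actually detected; None falls back to the default endpoint.
if region is not None and 'cn-north-1' in region:
    endpoint_suffix = 'amazonaws.com.cn'
else:
    endpoint_suffix = 'amazonaws.com'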

IMDSv2 breaks plugin

Describe the bug
If IMDSv1 is disabled, the plugin doesn't work.

To Reproduce
Steps to reproduce the behavior:

  1. Spin up an EC2 instance with HttpTokens set to required, and confirm with:

aws ec2 describe-instances --instance-ids <instance_id> | jq '.Reservations[0].Instances[0].MetadataOptions.HttpTokens'

Logs
yum repolist
Loaded plugins: extras_suggestions, langpacks, priorities, s3iam, update-motd
Could not access AWS credentials

Expected behavior
S3 repos to work

Additional context
Looking at the code I can't see any support for tokens; it just seems to use IMDSv1:

def get_role(self):
    """Read IAM role from AWS metadata store."""
    request = urllib2.Request(
        urlparse.urljoin(
            "http://169.254.169.254",
            "/latest/meta-data/iam/security-credentials/"
        ))
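
A sketch of what IMDSv2 support could look like with the same urllib2 machinery (the function names are mine, not the plugin's; the token TTL is an arbitrary choice):

import urllib2

METADATA_HOST = "http://169.254.169.254"

def get_imdsv2_token(ttl=21600):
    """Fetch an IMDSv2 session token; required when HttpTokens=required."""
    request = urllib2.Request(
        METADATA_HOST + "/latest/api/token",
        data='',
        headers={'X-aws-ec2-metadata-token-ttl-seconds': str(ttl)})
    request.get_method = lambda: 'PUT'  # urllib2 only issues GET/POST natively
    return urllib2.urlopen(request, timeout=2).read()

def get_role():
    """Read the IAM role name, presenting the IMDSv2 token as a header."""
    token = get_imdsv2_token()
    request = urllib2.Request(
        METADATA_HOST + "/latest/meta-data/iam/security-credentials/",
        headers={'X-aws-ec2-metadata-token': token})
    return urllib2.urlopen(request, timeout=2).read()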

Not working with RPMForge mirror

I've got yum-s3-iam working great with my own custom repository, but I'm also trying to get it working with a mirror of RPMForge. Basically, I'm syncing all of RPMForge to a local folder:

rsync -vai4CHP --safe-links --delay-updates --delete --exclude "local*" rsync://repoforge.eecs.wsu.edu/repoforge/ repoforge/

Then I'm using s3cmd to sync that up to S3:

s3cmd sync ./ s3://example/

Since the sync includes my custom repository, that gives me several top level folders in my S3 bucket:

  • mycustomrepo
  • repoforge

So, my rpmforge.repo file has a line that looks like the following:

baseurl = http://example.s3.amazonaws.com/repoforge/redhat/el6/en/$basearch/rpmforge

Unfortunately, when I enable that, yum gives me a 404 error:

HTTP Error 404: Not Found

The directory layout is a little different between my custom repo and the RPMForge mirror. My custom repo contains my RPM files and a repodata subfolder. The RPMForge mirror contains two folders:

  • RPMS
  • repodata

Is there any reason this shouldn't work?

Allow S3 credentials to be provided for non-EC2 use

I just looked through the code briefly and didn't see what I was looking for, so this may already exist. However…

This plugin looks excellent for use within the EC2 environment, but I develop in a local VM using Vagrant and I'd like to be able to provide AWS credentials as a fall-back. Basically, I think I'm looking for some overlap with yum-s3-plugin; if credentials are provided in the repo description, use them, otherwise fall back to the automagic IAM role.

Getting 403 Access Forbidden when accessing repomd.xml

The exact error I am getting is this:
Error: Cannot retrieve repository metadata (repomd.xml) for repository: samplerepo. Please verify its path and try again

I did some logging and found that the request for the repomd file is being rejected with HTTP Error 403: Forbidden.

I can see that the S3 URL is correct: https://bucket-name.s3.amazonaws.com/path/to/samplerepo/repodata/repomd.xml

All the other values, such as the access key, secret key and token, are being populated as well.
