brefphp / bref
Serverless PHP on AWS Lambda
Home Page: https://bref.sh
License: MIT License
While running the following:
php bref.php bref:invoke # with or without an event
returns:
In InputDefinition.php line 238:
[Symfony\Component\Console\Exception\LogicException]
An option with shortcut "e" already exists.
Any ideas what I might be doing wrong? Or is this a bug?
Result of:
vendor/symfony/console/Input/InputDefinition.php:232 -> throw new LogicException(sprintf('An option named "%s" already exists.', $option->getName()));
vendor/symfony/console/Input/InputDefinition.php:222 -> $this->addOption($option);
@mnapoli I could only use the httpHandler after applying this patch, otherwise `$request` contained a gibberish array. Maybe it's only me.
Try it with any of the JSON examples from https://apex.sh/docs/ping/webhooks/.
diff --git a/src/Bridge/Psr7/RequestFactory.php b/src/Bridge/Psr7/RequestFactory.php
index 8efae90..cd3e490 100644
--- a/src/Bridge/Psr7/RequestFactory.php
+++ b/src/Bridge/Psr7/RequestFactory.php
@@ -17,7 +17,7 @@ class RequestFactory
{
$method = $event['httpMethod'] ?? 'GET';
$query = $event['queryStringParameters'] ?? [];
- parse_str($event['body'] ?? '', $request);
+ $request = $event['body'] ?? '';
$files = [];
$uri = $event['requestContext']['path'] ?? '/';
$headers = $event['headers'] ?? [];
Otherwise, thanks for this wrapper!
Currently we have to run `serverless remove`.
The goal would be to have a `bref remove` command that runs `serverless remove`.
The `bref info` command works like that, so it can be a good example: https://github.com/mnapoli/bref/blob/d4cf7a8470c63213b0c37c33dfd9d675aadc00d1/bref#L119-L121
Hi Matthieu,
My use case is that I'd like to have a Lambda function run different PHPStan versions. Ideally I'd like the ability to invoke different Lambda versions identified by a commit hash. This is possible since the Invoke API has a `Qualifier` field that does exactly this.
But how should I publish a Lambda function under a specific version qualifier with Serverless/Bref?
Thank you.
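For context, a sketch of how the `Qualifier` field of the Invoke API could be used. The function name, alias, and payload below are illustrative assumptions; with the AWS SDK for PHP, the resulting array would be passed to `Aws\Lambda\LambdaClient::invoke()`:

```php
<?php
// Sketch: build Invoke arguments pinned to a specific published version
// or alias via the "Qualifier" field. All names here are illustrative.
function buildInvokeArgs(string $function, string $qualifier, array $payload): array
{
    return [
        'FunctionName' => $function,
        // A published version number (e.g. "42") or an alias; an alias
        // named after a commit hash would let each version be targeted.
        'Qualifier'    => $qualifier,
        'Payload'      => json_encode($payload),
    ];
}

$args = buildInvokeArgs('phpstan-runner', 'commit-a1b2c3', ['level' => 'max']);
```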
As done in #14 with `opcache.so`, I tried to load more extensions dynamically. I was able to compile the Redis and MongoDB extensions with a slightly tweaked version of `bin/php/Dockerfile` and `bin/php/build.sh`. Then I created a tar archive containing those two extensions in the `ext` directory.
And finally I used the following hook to make the PHP binary load those extensions:
hooks:
  build:
    # Add custom extensions to php.ini
    - 'echo "extension=mongodb.so" >> .bref/php.ini'
    - 'echo "extension=redis.so" >> .bref/php.ini'
A `phpinfo()` confirms that those two extensions are correctly loaded.
I think this could be done in a better way, especially the `php.ini` update.
What do you think?
Obviously, with Lambda being totally ephemeral, we need to store sessions in an external data store when functions need to keep state between requests (such as logging in to an admin area).
One such way is to use the Symfony `PdoSessionHandler`: https://symfony.com/doc/current/doctrine/pdo_session_storage.html
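The Symfony documentation linked above wires it roughly like this (a sketch; the `DATABASE_URL` environment variable is the usual placeholder, not something from this issue):

```yaml
# config/services.yaml (sketch)
services:
    Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler:
        arguments:
            - '%env(DATABASE_URL)%'

# config/packages/framework.yaml (sketch)
framework:
    session:
        handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler
```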
When testing locally using the `php -S 127.0.0.1:8000 bref.php` command, this works fine. However, when running in the Lambda environment the session doesn't work. Although the session appears to be created, for some reason the response object (`$symfonyResponse = $this->httpKernel->handle($symfonyRequest);`) in `/src/Bridge/Symfony/SymfonyAdapter.php` doesn't contain the session cookie / `Set-Cookie` header, so the client never sends the `PHPSESSID` cookie back on the next request.
Additionally, when using Symfony, the framework has its own interface for accessing its internal session handling, meaning you can't use `session_id()` or the other `session_...` functions to interact with the session.
Adding the cookie header is easily fixed by adding the following code to `SymfonyAdapter.php`:
$symfonyResponse->headers->setCookie(
    new Cookie(
        session_name(),
        $this->httpKernel->getContainer()->get('session')->getId()
    )
);
However, during the login cycle, requests are redirected using `301` and `302` redirects, and unless the session id is passed to Symfony, a new session id is generated each time and the session is never re-fetched from the data store. This means that the user record is stored as authenticated in the database, but since the client has received a new `PHPSESSID` cookie, the session is never retrieved from the data store and the user is not able to access the logged-in content.
This can also be fixed by adding the following to `SymfonyAdapter.php`:
if (!is_null($symfonyRequest->cookies->get(session_name()))) {
    $this->httpKernel->getContainer()->get('session')->setId(
        $symfonyRequest->cookies->get(session_name())
    );
}
However, this means that in order to work with Symfony's sessions, the `handle` method now looks like this:
public function handle(ServerRequestInterface $request): ResponseInterface
{
    $httpFoundationFactory = new HttpFoundationFactory;
    $symfonyRequest = $httpFoundationFactory->createRequest($request);

    if (!is_null($symfonyRequest->cookies->get(session_name()))) {
        $this->httpKernel->getContainer()->get('session')->setId(
            $symfonyRequest->cookies->get(session_name())
        );
    }

    $symfonyResponse = $this->httpKernel->handle($symfonyRequest);

    $symfonyResponse->headers->setCookie(
        new Cookie(
            session_name(),
            $this->httpKernel->getContainer()->get('session')->getId()
        )
    );

    if ($this->httpKernel instanceof TerminableInterface) {
        $this->httpKernel->terminate($symfonyRequest, $symfonyResponse);
    }

    $psr7Factory = new DiactorosFactory;
    $response = $psr7Factory->createResponse($symfonyResponse);

    return $response;
}
This looks quite ugly to me, and contains quite a lot of code just to solve a single implementation problem. However, without it, sessions won't work when using Symfony.
Is it better to fix this in the `mnapoli/bref` repo itself, or to document that when using the `PdoSessionHandler` class you should extend `SymfonyAdapter` with the session-specific stuff?
It would be great if Phalcon (https://phalconphp.com) could be supported.
Unlike most frameworks (Slim etc.) it is compiled as a PHP extension. Given that, I wondered whether the commands from their example Dockerfile, such as https://github.com/phalcon/dockerfiles/blob/master/php-fpm/7.2-min/Dockerfile (there are a bunch more if that's not the right one), could be added.
It is then enabled like other PHP extensions by adding the line `extension=phalcon.so` to `php.ini`.
Thanks.
Would you be averse to me investigating/creating a pull request for this?
This is how I use `serverless package` for CI/CD deployments.
I use `serverless package` so that I can build the package(s) as artefacts in my CI system (currently Bitbucket Pipelines).
All commit builds get automatically deployed to dev. All tagged commit builds get deployed to stage, and can be manually deployed to prod.
To achieve this, I package all three options at build time. This is not ideal, but it does mean we are at least deploying to prod the same npm/composer versions etc. that we tested on dev/stage.
Then I just run `serverless deploy --package` to deploy the relevant package.
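The flow above could be sketched in Bitbucket Pipelines roughly like this (step names and artifact paths are illustrative assumptions, not from the original post; `--package` is the Serverless flag for packaging to, or deploying from, a pre-built directory):

```yaml
# bitbucket-pipelines.yml (sketch)
pipelines:
  default:
    - step:
        name: Package all stages, deploy to dev
        script:
          - serverless package --stage dev --package artifacts/dev
          - serverless package --stage stage --package artifacts/stage
          - serverless package --stage prod --package artifacts/prod
          - serverless deploy --stage dev --package artifacts/dev
  tags:
    '*':
      - step:
          name: Deploy to stage
          script:
            - serverless deploy --stage stage --package artifacts/stage
      - step:
          name: Deploy to prod
          trigger: manual
          script:
            - serverless deploy --stage prod --package artifacts/prod
```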
I have downloaded the new version of the framework and I get this error in the last step of the deploy:
vendor/bin/bref deploy
Uploading the lambda
7/8 [========================>   ] 1 sec
In Process.php line 1154:
The process "serverless deploy" exceeded the timeout of 60 seconds.
deploy
It seems the application is deployed, but I am not sure what might be missing.
I can run `bref init`, but when running `bref deploy`:
PHP Fatal error: Uncaught Error: Class 'Bref\Console\Deployer' not found in /home/ubuntu/test/demo/vendor/bin/bref:114
Any clues?
Thanks in advance.
Encountered the following error in CloudWatch after following the Symfony instructions:
Fatal error: Uncaught Symfony\Component\Debug\Exception\ClassNotFoundException: Attempted to load class "Dotenv" from namespace "Symfony\Component\Dotenv".
Did you forget a "use" statement for another namespace? in /var/task/bref.php:14
Stack trace:
#0 {main}
thrown in /var/task/bref.php on line 14
Anyone experiencing the same issue?
We could set the maximum concurrency by default to a lower value than its default of 1000 parallel executions.
The goal would be to avoid scenarios where a lambda could be DoSed by a malicious third party and create a huge AWS bill in a short amount of time.
For example, let's say the lambda executes in 100ms. Its execution could be triggered at most 10,000 times per second (1000 lambdas in parallel). That means a bill of $1613.09 for a whole day of this (not counting API Gateway, which is even more expensive!).
I guess we could set it explicitly to 20, for example (that would mean up to $30 per day). It could be set by default in serverless.yml (https://github.com/mnapoli/bref/blob/master/template/serverless.yml). The feature seems to be implemented, see serverless/serverless#4555
Users would of course be free to change that value or remove the limit.
WDYT?
Since we know which extensions are activated, shouldn't we remove those which are not? That way, the generated archive would be lighter.
Hi @mnapoli,
Just saw a video of you on YouTube explaining serverless and found this Bref project interesting, so I gave it a try.
I know that you are not working on Windows but maybe someone faced this issue and has a solution:
PS C:\Users\xxxxx\Desktop\test> vendor/bin/bref deploy
0/8 [>---------------------------] < 1 sec
1/8 [===>------------------------] < 1 sec
Building the project in the `.bref/output` directory
1/8 [===>------------------------] < 1 sec
Building the project in the `.bref/output` directory
2/8 [=======>--------------------] < 1 sec
Downloading PHP in the `.bref/bin/` directory
2/8 [=======>--------------------] < 1 sec
Downloading PHP in the `.bref/bin/` directory
3/8 [==========>-----------------] < 1 sec
Installing the PHP binary
3/8 [==========>-----------------] < 1 sec
In Process.php line 256:
The command "tar -xzf .bref/bin/php/php-7.2.5.tar.gz -C .bref/output/.bref/bin" failed.
Exit Code: 2(Misuse of shell builtins)
Working directory: C:\Users\xxxxx\Desktop\test
Output:
================
Error Output:
================
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
deploy [--dry-run] [--stage STAGE]
Thanks
Currently we have to run `serverless invoke -f main`.
The goal would be to have a `bref invoke` command that runs `serverless invoke -f main`.
The `bref info` command works like that, so it can be a good example: https://github.com/mnapoli/bref/blob/d4cf7a8470c63213b0c37c33dfd9d675aadc00d1/bref#L119-L121
Given the message, this issue is with the underlying Serverless framework, but having edited a single character in Bref (related to the progress bar), deploys now all fail. It mentions a Process.php, so is that the file that calls serverless to do the deploy on Bref's behalf?
Not very helpfully, it does not show any error output, it just complains about malformed XML. Given it happens at the end, at the point where the file is uploaded to S3, it could be an issue with that. Strange. This is what happens, in case anyone else has had this. If not, I can close this and maybe take it up with Serverless. Strange, because I haven't changed its version and it was working earlier!
Hi, how should I assume an AWS IAM role in a Lambda running with Bref?
I found this documentation article https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials_assume_role.html but I'm not sure if it applies here. It feels like it should be easier.
The `serverless deploy` command has a `--stage` option. Adding this option to `bref deploy` and forwarding its value to `serverless deploy` could be useful to deploy lambdas to different stages.
This is a follow-up of #35. This needs to be documented, but I'm not sure what the best option for that is yet.
I think I want to make a whole separate guide for deployment and all its options, otherwise the README will become overcrowded (it's already quite long).
AWS announced the possibility to use any programming language on Lambda. This is awesome! It means a simplification of Bref, (probably) better performance and more official support for PHP.
Stackery announced they are working on the PHP runtime and this is available in this repository: https://github.com/stackery/php-lambda-layer
The questions are:
Let's use this issue to track information about all this.
At the moment I have been trying Stackery's PHP layer and here is what I noted:
- `json`, `intl`, etc.
Update: this runtime does not seem to be made or maintained by PHP developers, judging from the discussions in the issues/PRs. I don't consider it viable at the moment.
What's interesting is that creating a runtime for AWS is in the end pretty easy. Our build script is almost ready, and more powerful than what can be found there.
I'll be trying out more things; if you have info to share, feel free to post it here.
It could be a great idea to proxy the `rollback` functionality. See https://serverless.com/framework/docs/providers/aws/cli-reference/rollback/
Here: if the handler doesn't return any output, the `null` value will be JSON-encoded, resulting in the string `null`. I think that's strange behaviour. The output should be empty if the handler just returns `null`.
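For illustration, a minimal sketch of the behaviour and a possible guard (the helper name is hypothetical, not Bref code):

```php
<?php
// json_encode(null) produces the 4-character string "null", which is why
// a handler that returns nothing still emits output.

// Hypothetical guard: only encode when the handler actually returned a value.
function encodeHandlerOutput($result): string
{
    return $result === null ? '' : json_encode($result);
}
```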
Just reporting a bug I'm working on right now: I have trouble getting the correct URL called in the browser when using CloudFront in front of API Gateway.
CloudFront -> API Gateway -> AWS Lambda
CloudFront is useful to be able to serve both assets from S3 and dynamic content from Lambda on the same website.
However, the URL detected by Bref in the lambda is not the one actually loaded through CloudFront, because CloudFront URLs do not contain the stage prefix (e.g. `/dev`).
I have asked a question on Stack Overflow: https://stackoverflow.com/questions/52582792/how-to-get-the-original-url-called-in-aws-lambda Here is a description of the problem:
I have a lambda set up with the proxy integration in API Gateway. I can call my lambda with a URL like https://7kbw9fcfa4.execute-api.us-east-1.amazonaws.com/dev/foo The path here is `/dev/foo`.
However, if I set up CloudFront in front of that, the URL becomes https://a45ex7tnds5r5o.cloudfront.net/foo and the path is `/foo`.
In both cases I have the same keys in the `event` variable:
- `path` contains `/foo`
- `requestContext.path` contains `/dev/foo`
So I don't know yet how to reliably retrieve the correct URL of the original request.
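One possible workaround, sketched under the assumption that the event shape described above is stable: prefer the stage-less `path` key, which matches what the client requested in both setups. The helper name is hypothetical:

```php
<?php
// Sketch: derive the client-facing path from an API Gateway proxy event.
// Assumption (from the observation above): 'path' never contains the
// stage prefix, while 'requestContext.path' does on direct calls.
function clientPath(array $event): string
{
    return $event['path'] ?? '/';
}

// Direct API Gateway call:
$direct = ['path' => '/foo', 'requestContext' => ['path' => '/dev/foo']];
// Call through CloudFront:
$cloudfront = ['path' => '/foo', 'requestContext' => ['path' => '/foo']];
```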
Are there any plans to add other versions of php? i.e. 7.0, 7.1?
If not, how would one go about adding one?
After successfully completing the entire Laravel setup and receiving my API endpoints when running `vendor/bin/bref deploy`, I keep getting the following error when hitting my endpoint:
{
"message": "Internal server error"
}
The best thing I can find in the logs is this:
Process exited before completing request
Any ideas to fix this?
APCu does not make sense on Lambda because each lambda boots a new CLI process (there is no shared memory like with PHP-FPM). This needs to be explained in the documentation to avoid surprises.
The idea would be to recommend using a shared cache like Memcached or Redis instead (AWS supports both, for example).
PHP preloading recently landed in php-src@master.
It might make sense to prototype, at least, how Bref can/should work with php-src preloading and how it can/will affect runtime performance.
The feature is rather new and requires a cutting-edge PHP build, but I guess the target audience of Bref also works on pretty recent versions and is curious/willing to use such things and take this risk.
Take this not as a real feature request but more as a start-the-discussion thing.
Thanks
This is needed for (some?) watchers in Laravel Telescope (at least the `EventWatcher`).
Still working on getting Telescope usable, so not an urgent request as may not be possible anyway (or at least not for the watchers that need it).
I'm mostly getting Telescope working as a reasonably complex app (along with Nova) to see where the pain points are with Bref.
Bref is currently using the serverless framework internally to deploy lambdas. I want to explore whether using AWS SAM could be an alternative.
| | Serverless | AWS SAM |
|---|---|---|
| Cross-provider | ✅ | ❌ (limited to AWS) |
| Easy deployment from CLI | ✅ | ✅ |
| Deployment using a CloudFormation stack | ✅ | ✅ |
| Allow extra CloudFormation resources | ✅ | ✅ |
| Simpler syntax than CloudFormation | ✅ (serverless plugins) | ✅ (SAM resources) |
| Allow to run hooks before deployment | ✅ (plugins, but runs in the project's directory so messes up the local project) | ❌ |
| Run preview locally | ✅ | ✅ |
| Run preview in Docker | ❌ (not natively) | ✅ |
| Run API Gateway locally | ❌ | ✅ |
| Run Lambda API locally | ❌ | ✅ |
| Invoke locally with fake S3/SQS/... events | ❌ | ✅ |
| Integration with CodePipeline/CodeDeploy | ❌ | ✅ |
| Deployment strategies (blue/green, canary deployment…) | ❌ | ✅ |
| Integration with AWS SAR (serverless application repository) | ❌ | ✅ |
| Deployment of different stages | ✅ | ❌ aws/aws-sam-cli#814 |
| Deployment of different stages to different accounts | ✅ | ❌ |
| Auto-creates a S3 bucket for packaging the function | ✅ | ❌ |
| Deploy multiple functions at once | ✅ | ✅ |
| Deploy multiple functions that use different languages | ✅ | ✅ |
| View logs in CLI | ✅ | ✅ |
| Exclude some directories when deploying | ✅ | ❌ |
Other lines to come, feel free to comment to add questions.
This comes with 2 questions:
should Bref only target AWS?
I'm leaning toward a very strong yes here, considering how everything even today with serverless (which is generic) is strictly tied to AWS. AWS is also the leading FaaS provider today, and supporting other providers would be a huge effort. By targeting only one we can provide the best DX, features and performance.
should Bref hide the details of the deployment tools, or should Bref integrate into the deployment tool?
What I'm getting at is that Bref could be an "all in one" solution, hiding serverless/SAM (so that we control the details and can provide a very simple user experience). Or Bref could be the "PHP part" that fits inside serverless/SAM (so that we benefit from everything those frameworks have to offer).
From #22 (comment)
`bref deploy --dry-run`
That would run `bref deploy` except it would not call `serverless deploy`. The goal of that new option would be to be able to inspect the generated archive in `.bref/output`.
Hi, can anybody help me with the setup of AWS? It's probably a permissions problem.
I know this is not exactly an issue for Bref, but maybe someone can help.
I am trying to execute the example hello world code from the readme. The code was successfully uploaded, but I get this error:
{
"errorMessage": "RequestId: 5132a39e-662b-11e8-bd86-f374b2af9362 Process exited before completing request"
}
START RequestId: 5132a39e-662b-11e8-bd86-f374b2af9362 Version: $LATEST
2018-06-02T06:08:06.724Z 5132a39e-662b-11e8-bd86-f374b2af9362 Error: spawn EACCES
at exports._errnoException (util.js:1018:11)
at ChildProcess.spawn (internal/child_process.js:319:11)
at exports.spawn (child_process.js:378:9)
at exports.handle (/var/task/handler.js:32:18)
END RequestId: 5132a39e-662b-11e8-bd86-f374b2af9362
REPORT RequestId: 5132a39e-662b-11e8-bd86-f374b2af9362 Duration: 71.31 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 19 MB
RequestId: 5132a39e-662b-11e8-bd86-f374b2af9362 Process exited before completing request
Line 32: let script = spawn('php', ['bref.php', JSON.stringify(event)]);
Currently the PDO extension is not built into the compiled PHP binary (mysqlnd is, though). The goal of this issue is to make sure a PHP application can connect to a database using PDO.
Hello,
I see you can add a build hook within `.bref.yml`. I was wondering about using that to run grunt to prepare the static files (CSS, JS, etc.). So I would run a hook like `grunt production`, and presumably those generated (minified) files would then get deployed.
... which got me thinking ...
Can you access variables within that .yml file in the same way Serverless says you can in its yml (https://serverless.com/framework/docs/providers/aws/guide/variables/)? If so, would it be possible to set the build hook to something like `grunt ${opt:stage}`? It could then dynamically run `grunt staging` or `grunt production`, or whatever stage was being deployed, and so do a different set of operations (like uploading to a different location). Thanks!
Following the documentation for Symfony / Bref config, an exception is thrown by the application and logged in CloudWatch as:
Fatal error: Uncaught UnexpectedValueException: /var/task/translations defined in translator.paths does not exist or is not a directory in /var/task/vendor/symfony/framework-bundle/DependencyInjection/FrameworkExtension.php:989
Stack trace:
#0 /var/task/vendor/symfony/framework-bundle/DependencyInjection/FrameworkExtension.php(247): Symfony\Bundle\FrameworkBundle\DependencyInjection\FrameworkExtension->registerTranslatorConfiguration(Array, Object(Symfony\Component\DependencyInjection\Compiler\MergeExtensionConfigurationContainerBuilder), Object(Symfony\Component\DependencyInjection\Loader\XmlFileLoader))
#1 /var/task/vendor/symfony/dependency-injection/Compiler/MergeExtensionConfigurationPass.php(76): Symfony\Bundle\FrameworkBundle\DependencyInjection\FrameworkExtension->load(Array, Object(Symfony\Component\DependencyInjection\Compiler\MergeExtensionConfigurationContainerBuilder))
#2 /var/task/vendor/symfony/http-kernel/DependencyInjection/MergeExtensionConfigurationPass.php(39): Symfony\Component\DependencyInjection\Com in /var/task/vendor/symfony/framework-bundle/DependencyInjection/FrameworkExtension.php on line 989
The default `config/packages/translation.yml` file in the `symfony/website-skeleton` application looks as follows:
framework:
    default_locale: '%locale%'
    translator:
        paths:
            - '%kernel.project_dir%/translations'
        fallbacks:
            - '%locale%'
Changing it to the following fixes the problem, but reduces flexibility.
framework:
    default_locale: 'en'
    translator:
        fallbacks: ['en']
From what I've tried, this isn't as simple as just setting an environment variable in the Lambda console (unless I'm doing it wrong?). Maybe a note in the documentation would be helpful?
Hi, I have a Laravel application that runs with a PostgreSQL database. Is it possible to use it? On AWS I will use RDS; how do I integrate with it?
Thanks in advance.
Hello
Or is it not an issue, as presumably there isn't an equivalent of try_files where a file will be served, so perhaps someone can't request /path/to/something.php to access settings etc.?
Thanks.
Hi Matthieu,
thank you very much for this package, you've done a really good job helping people deploy PHP to Lambda seamlessly! I will be bothering you with some issues and findings from my recent experience with Bref.
I'm using `simpleHandler` to provide a Lambda function that is used by other Lambda functions (those other functions are written in JS and provide a layer for authentication etc.). I'd like to indicate that the `simpleHandler` wasn't successful, in as native a way as possible. What should I use for that?
I'm invoking the Lambda in TypeScript like this:
const lambdaResult = await lambda.invoke({
    FunctionName: 'phpstan-runner-prod-main',
    Payload: JSON.stringify({
        code: json.code,
        level: json.level,
    }),
}).promise();
Since it's not an HTTP request, I don't think `StatusCode` will be populated with anything useful. Ideally, the `invoke` promise call would throw an AWSError so I can `catch` it.
I'm aware that I can always implement a custom protocol, having the returned JSON contain something like an `error` key, but I'd like to avoid that.
Thank you!
Currently we have to run `serverless logs -f main`.
The goal would be to have a `bref logs` command that runs `serverless logs -f main`.
The `bref info` command works like that, so it can be a good example: https://github.com/mnapoli/bref/blob/d4cf7a8470c63213b0c37c33dfd9d675aadc00d1/bref#L119-L121
If our lambda reaches the maximum execution time, it gets shut down. Since we use an external log system, we don't get any log in that case and we still have to check on CloudWatch what happened.
We could use `set_time_limit()` or `ini_set()` with the `max_execution_time` directive. But the issue with those functions is that you can't take control back when the limit is reached: the PHP process is immediately stopped. The other issue is that system calls are not taken into account (e.g. `sleep()`, `file_get_contents()`, etc.).
So I'm trying to find a solution to handle that case.
I had the following idea: we could configure a specific timeout (obviously shorter than the lambda max execution time) which, when reached, triggers a signal to the PHP process (e.g. `SIGTERM`).
This would give the PHP process some time to gracefully shut down and do whatever is required in the time left (e.g. log what happened).
This should obviously be handled in the parent process, the JavaScript one (and I'm not sure I'm skilled enough in JavaScript to achieve that).
This is just an idea; any other idea is very welcome.
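A sketch of what the PHP side of that idea could look like. Assumptions: the `pcntl` extension is available in the CLI build, and `signalDeadline()` is a hypothetical helper, not Bref code:

```php
<?php
// Trap SIGTERM (which the parent JS process would send shortly before
// the Lambda deadline) so PHP can flush logs before exiting.
if (function_exists('pcntl_async_signals')) {
    pcntl_async_signals(true);
    pcntl_signal(SIGTERM, function () {
        error_log('Execution time limit almost reached, shutting down');
        exit(1);
    });
}

// Hypothetical helper: compute when the parent should send the signal,
// leaving a safety margin before the real Lambda timeout.
function signalDeadline(int $lambdaTimeoutSeconds, int $marginSeconds = 2): int
{
    return max(1, $lambdaTimeoutSeconds - $marginSeconds);
}
```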
Following on from issue #61, when using PHP 7.0 I get the following:
Parse error: syntax error, unexpected 'const' (T_CONST), expecting variable (T_VARIABLE) in /var/task/vendor/mnapoli/bref/src/Application.php on line 27
The goal of this task is to make it easy to configure applications with secret keys. That may be through documentation or through tooling.
See https://serverless.com/blog/serverless-secrets-api-keys/ because that will help.
Following the example code for Symfony and deploying, the request fails with the response `{"message": "Internal server error"}`.
Checking the CloudWatch logs shows two relevant entries:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes) in /var/task/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 171
and
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 73728 bytes) in /var/task/vendor/symfony/debug/DebugClassLoader.php on line 145
Altering `Kernel.php` to include the following makes the problem go away:
public function getCacheDir()
{
    // When on the lambda only /tmp is writable
    if (getenv('LAMBDA_TASK_ROOT') !== false) {
        return '/tmp/cache/'.$this->environment;
    }

    return $this->getProjectDir().'/var/cache/'.$this->environment;
}
However... presumably setting the cache dir to Lambda's local `/tmp` kind of defeats the point of pre-warming the cache with the hooks in the `.bref.yml` file? The object is to avoid launching the application without the cache already in place, right?
`.bref.yml`:
hooks:
  build:
    - 'APP_ENV=prod php bin/console cache:clear --no-debug --no-warmup'
    - 'APP_ENV=prod php bin/console cache:warmup'
`serverless.yml`:
service: test
provider:
  name: aws
  runtime: nodejs6.10
package:
  exclude:
    - '*'
    - '**'
  include:
    - bref.php
    - 'src/**'
    - 'vendor/**'
    - composer.json # Symfony uses it to figure out the root directory
    - 'bin/**'
    - 'config/**'
    - 'var/cache/prod/**' # We want to deploy the production caches
functions:
  # By default we create one "main" function
  main:
    handler: handler.handle
    timeout: 20 # Timeout in seconds, the default is 6 seconds
    # The function will match all HTTP URLs
    events:
      - http: 'ANY /'
      - http: 'ANY {proxy+}'
    environment:
      APP_ENV: 'prod'
      APP_DEBUG: '0'
`bref.php`:
<?php
use App\Kernel;
use Bref\Bridge\Symfony\SymfonyAdapter;
use Symfony\Component\Debug\Debug;
use Symfony\Component\Dotenv\Dotenv;
require __DIR__.'/vendor/autoload.php';
Debug::enable();
// The check is to ensure we don't use .env in production
if (!isset($_SERVER['APP_ENV'])) {
    (new Dotenv)->load(__DIR__.'/.env');
}
if ($_SERVER['APP_DEBUG'] ?? ('prod' !== ($_SERVER['APP_ENV'] ?? 'dev'))) {
    umask(0000);
}
$kernel = new Kernel($_SERVER['APP_ENV'] ?? 'dev', (bool) ($_SERVER['APP_DEBUG'] ?? ('prod' !== ($_SERVER['APP_ENV'] ?? 'dev'))));
$app = new \Bref\Application;
$app->httpHandler(new SymfonyAdapter($kernel));
$app->cliHandler(new \Symfony\Bundle\FrameworkBundle\Console\Application($kernel));
$app->run();
I get the same behaviour when cloning the `mnapoli/bref-symfony-demo` repo and running `bref deploy`, and also when adding Bref to a `symfony/website-skeleton` project.
Hi,
I've just updated Bref to version 0.2.13 with the opcache support added by #14. And now the `bref local` command isn't working anymore.
Before the update, `handler.js` ran the `php` command without any argument. It used to work fine, but since you've added a `php.ini` defining a specific extension directory, my local binary fails to load extensions, including `opcache.so`.
I am wondering if I'm using the `local` command the right way or if I'm missing something? Or is there really an issue here?
Here is an output example:
$ vendor/bin/bref local
Invoking the lambda
7/7 [============================] 20 secs
In Process.php line 223:
The command "serverless invoke local -f 'main'" failed.
Exit Code: 1(General error)
Working directory: .bref/output
Output:
================
[STDERR] PHP Warning: Failed loading Zend extension 'opcache.so' (tried: /var/task/.bref/bin/ext/opcache.so (/var/task/.bref/bin/ext/opcache.so: cannot open shared object file: No such file or directory), /var/task/.bref/bin/ext/opcache.so.so (/var/task/.bref/bin/ext/opcache.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0
We can deploy to a specific stage using the `--stage=` option of the `deploy` command.
This option should be available on the other commands:
- `cli`
- `info`
- `remove`
- `logs`
- `invoke`
Spent ages debugging the errors that happened because the `.env` file wasn't included (you get an error with the DebugServer, which then errors with the translation class being missing).
The easy fix was to add `.env` to the list of packages to include.
Might also be worth mentioning that APP_KEY and APP_URL need to be configured.
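The fix described above would look roughly like this in `serverless.yml` (a sketch, assuming the package section from the default Bref template):

```yaml
package:
  include:
    - .env   # ship the environment file so the framework can boot
```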
In the serverless Laravel guide (https://mnapoli.fr/serverless-laravel/), I keep having trouble when running the `vendor/bin/bref init` command. I keep getting this error:
[ERROR] AWS credentials not found.
Please follow the instructions on https://github.com/mnapoli/bref#setup
I have tried this:
serverless config credentials --provider aws --key keyhere --secret secret here
and also running:
set AWS_ACCESS_KEY_ID=id here
set AWS_SECRET_ACCESS_KEY=secret here
but I am still getting the above error. Can you help, please? I'm stuck.
A post-deploy event would be very useful.
For instance, we'd like to run a cache warm-up command after deploying our lambda.
First of all, thanks for a great library.
I have two issues.
First one: I can't change the AWS region to anything other than `us-east-1`. I want `eu-west-1`. I've tried setting `AWS_DEFAULT_REGION=eu-west-1`, and even editing the `.serverless/` files and hardcoding `eu-west-1` wherever I found `us-east-1`, but no luck. I've also tried adding `region: eu-west-1` to the serverless.yml file, but it's always overridden during deployment.
Here is my other issue. The documentation says "If you do not want the application to be accessible through HTTP, simply edit your `serverless.yml` file and deploy again", but when I remove the `events` lines, they are brought back during the next deployment.
Running `vendor/bin/bref deploy` makes a progress-bar placeholder appear; however, it does not update. It just sits empty. Even after the deploy successfully completes a few minutes later, it is still empty. It happens on the first deploy and also on subsequent ones (updates).
This is on Ubuntu 16.04, if that helps.
The time shown next to it also does not update during an upload. That would be good to have too.
Thanks.
Hello,
I have been experimenting with trying a build hook. I wanted to use grunt to prepare CSS, JS etc separately, so they could be minified etc before deployment. So after setting that up with node, I now have a /node_modules folder in my main project folder, as a sibling of bref.php etc. That /node_modules folder is only needed locally and does not need to be deployed as part of the Lambda.
I looked at the serverless.yml and saw it was already set to exclude * and then only include the specific folders like /app and the bref.php. All fine.
So I tried setting that up, and ran the deploy. It took a while and I looked at the .bref/output folder and noticed it had copied the /node_modules folder into it. Or was in the process of doing so - it's huge so it was still copying when I cancelled it. That was despite me not including it in the serverless.yml "include" section.
I took a look at the copyProjectToOutputDirectory function in Deployer.php and it seems not to respect the include/exclude part of serverless.yml, as it copies the whole project folder (well, apart from files it knows not to, like vendor). I assume that include/exclude only gets processed when deploying the zip?
Maybe with an SSD drive it would be instant to copy the thousands of files in node_modules across, but I'm wondering what the best way to avoid that would be. Apart from using the include/exclude setting to build the output folder, I'm not sure how else to exclude it.