
posthog-php's Issues

PostHog::alias() returns a different type depending on batch_size

It seems a bit inconsistent that when batch_size = 1, PostHog::alias() returns a raw JSON string, while when batch_size > 1 it returns a boolean. Is this expected behavior?

Example:

# When batch_size > 1
use Illuminate\Support\Str;
use PostHog\PostHog;
$uuid = Str::uuid();
$key = config('services.posthog.api_key');
$host = config('services.posthog.api_host');
PostHog::init($key, ['host' => $host, 'batch_size' => 7]);
var_dump(PostHog::alias(["distinctId" => "user-$uuid", "alias" => "guest-$uuid"]));

bool(true)

# When batch_size == 1
use Str;
use PostHog\PostHog;
$uuid = Str::uuid();
$key = config('services.posthog.api_key');
$host = config('services.posthog.api_host');
PostHog::init($key, ['host' => $host, 'batch_size' => 1]);
var_dump(PostHog::alias(["distinctId" => "user-$uuid", "alias" => "guest-$uuid"]));

string(13) "{"status": 1}"

I'm using posthog-php version 3.0.8.

Unexpected behaviour with "maximumBackoffDuration" in HttpClient

In this snippet of code in HttpClient.php:

$backoff = 200;

while ($backoff < $this->maximumBackoffDuration) {
    ...

    // retry failed requests just once to diminish impact on performance
    $httpResponse = $this->executePost($ch);
    $responseCode = $httpResponse->getResponseCode();

    // close connection
    curl_close($ch);

    if (200 != $responseCode) {
        // log error
        $this->handleError($ch, $responseCode);

        if (($responseCode >= 500 && $responseCode <= 600) || 429 == $responseCode) {
            // If status code is greater than 500 and less than 600, it indicates server error
            // Error code 429 indicates rate limited.
            // Retry uploading in these cases.
            usleep($backoff * 1000);
            $backoff *= 2;
        } elseif ($responseCode >= 400) {
            break;
        } elseif ($responseCode == 0) {
            break;
        }
    } else {
        break;  // no error
    }
}

If a request fails with a 5xx or 429 status code, the code will retry the request (adding a sleep in the process).

However, I think there is a fundamental logic bug in this process, in this area of the code:

usleep($backoff * 1000);
$backoff *= 2;

Let's assume we have an API that constantly returns a 503 status code.

The first time it runs, we'd sleep for 0.2 seconds, then double $backoff to 400
The second time it runs, we'd sleep for 0.4 seconds, then double $backoff to 800
The third time it runs, we'd sleep for 0.8 seconds, then double $backoff to 1600
The fourth time it runs, we'd sleep for 1.6 seconds, then double $backoff to 3200
The fifth time it runs, we'd sleep for 3.2 seconds, then double $backoff to 6400
The sixth time it runs, we'd sleep for 6.4 seconds, then double $backoff to 12800

The loop would then bail out, based on the default $maximumBackoffDuration of 10000.

This means we have taken 12.6 seconds of sleep time, plus the request time, before the script bails, all in a synchronous fashion, blocking any other script in the stack in the meantime.

I propose a solution which deprecates the maximumBackoffDuration parameter and instead uses an integer maxRetryCount, with a retrySleep parameter to give users the option of a sleep in between each retry, as sketched below.
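A minimal sketch of what that could look like (maxRetryCount and retrySleep are proposed names, not part of the current API):

// Hypothetical replacement loop: bounded by attempt count rather than backoff duration.
// $this->maxRetryCount and $this->retrySleep (milliseconds) are proposed options.
for ($attempt = 0; $attempt <= $this->maxRetryCount; $attempt++) {
    $httpResponse = $this->executePost($ch);
    $responseCode = $httpResponse->getResponseCode();

    if (200 == $responseCode) {
        break; // success
    }

    $this->handleError($ch, $responseCode);

    $retryable = ($responseCode >= 500 && $responseCode <= 600) || 429 == $responseCode;
    if (!$retryable || $attempt == $this->maxRetryCount) {
        break; // client error, dead connection, or retries exhausted
    }

    usleep($this->retrySleep * 1000); // fixed, predictable pause between attempts
}

curl_close($ch); // close the handle once, after we are done retrying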

Alternatively, we could strip out the retry logic altogether and pass the responsibility to the user (we return an HttpResponse object, so users could determine the responseCode from it and retry themselves if needed).

Opinions on this?

Client doesn't work with Laravel Queue

Hi,

We have this package installed in a Laravel application. The app listens for some specific events, which are queued and processed in the background. When the queued jobs are processed we also submit events to PostHog, e.g.:

PostHog::capture([
    'distinctId' => 'user:1',
    'event' => 'some-event'
]);

But we found out that the event is never sent to PostHog. There are no errors; it just does nothing. We tested the same app locally (without the queue) and it works as expected. We also tested making an HTTP request directly to the API through the Guzzle client, and that works fine. After some debugging I think there may be an issue with the consumer clients (the default is lib_curl), and I suspect it's related to how the __destruct function behaves within a queued job in a Laravel app.
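A workaround that may help (assuming the client buffers events until its consumer is destructed, and that PostHog::flush() is available in the installed version) is to flush explicitly at the end of the job handler:

PostHog::capture([
    'distinctId' => 'user:1',
    'event' => 'some-event'
]);

// Long-running queue workers may never trigger the consumer's __destruct,
// so force any buffered events out explicitly.
PostHog::flush();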

Laravel Version 8.75.0
Queue jobs are processed with the command php artisan queue:work.

Looking forward to a solution. Thank you.

TODO

1. Handle a 'host' that already contains a scheme (https://www.test.com) together with 'use_ssl' => true; currently the result becomes https://https://www.test.com (a possible normalization is sketched below).
2. In LibCurl, the batch API endpoint is 'capture' rather than 'batch'.
3. Fix the endless loop when the HTTP response code is 0.
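For item 1, a possible normalization (a sketch only, with $options standing in for the parsed configuration):

// Strip any scheme the caller provided before applying use_ssl, so that
// 'https://www.test.com' plus 'use_ssl' => true does not become 'https://https://www.test.com'.
$host = preg_replace('#^https?://#i', '', $options['host']);
$url = (($options['use_ssl'] ?? true) ? 'https://' : 'http://') . $host;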

PostHog::init crashes when adding a personalAPIKey

I have been trying to integrate the Experiments feature into our site. I have set up a new experiment at https://app.posthog.com/experiments and it is running.

I am currently using "posthog/posthog-php": "^3.0" and PHP 8.1.

I have been trying to get PostHog::getFeatureFlag('sell-your-classic-test', 'some distinct id'); to work but it would keep returning null. I tried:

if (PostHog::isFeatureEnabled('sell-your-classic-test', 'some distinct id')) {
    // do something here
}

But this always returns false. I then read about Local Evaluation, which requires a personal API key. I created one and added it to PostHog::init, but when I do this I get the error below:

array_key_exists(): Argument #2 ($array) must be of type array, null given

The error is thrown in vendor/posthog/posthog-php/lib/Client.php:356.

This is the code that I'm running:

PostHog::init(
    apiKey: "phc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    personalAPIKey: "phx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
);

I have also tried:

PostHog::init(
    "phc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    [],
    null,
    "phx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
);

Any help would be appreciated.

Slowness in PostHog server response causes general slowness

We're using a self-hosted instance of PostHog; some events are recorded server-side using this library.

We recently experienced general slowness of the application on some endpoints.

After investigation, the slowness was correlated with our PostHog instance having performance issues (more precisely, it was timing out due to a full disk, so the client was waiting for the connection to time out, and each request took a +10s hit).

The question is: is there a way to avoid that? (i.e. the same as on the client side, where the call is non-blocking and a timeout or failure does not affect the business code)
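Until the library offers a non-blocking mode, one mitigation (a sketch only, and it assumes PHP-FPM, where fastcgi_finish_request() is available) is to move the capture out of the request's critical path by finishing the HTTP response first:

use PostHog\PostHog;

// Finish the HTTP response before talking to PostHog, so a slow or
// timing-out PostHog host can no longer delay the end user.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}

PostHog::capture([
    'distinctId' => 'user:1',
    'event' => 'some-event'
]);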

Support feature flags API

Via the PHP library, I would like to be able to use the decide endpoint to check the status of a feature for a user.
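For example, something along these lines (the method name mirrors other PostHog client libraries; the flag name is made up):

use PostHog\PostHog;

// Desired usage: ask the decide endpoint whether a flag is enabled for this user.
if (PostHog::isFeatureEnabled('new-onboarding-flow', 'user:1')) {
    // show the new behaviour
}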

Proposal: add optional cache for feature flags

I'm trialing PostHog for our SaaS app. For us, the lack of caching for feature flags is a total no-go. (We're currently canary-releasing a feature that requires a decide call on every single page view.)

So I've added caching by extending PostHog\Client and overriding fetchFeatureVariants.

<?php

use PostHog\Client;
use PostHog\HttpClient;
use Psr\Cache\CacheItemPoolInterface;

class CachingClient extends Client
{
    private const DEFAULT_EXPIRY_DURATION = 'PT5M'; // five minutes

    private ?CacheItemPoolInterface $cache;
    private \DateInterval $cacheExpiryInterval;

    public function __construct(
        string $apiKey,
        array $options = [],
        ?HttpClient $httpClient = null,
        ?string $personalAPIKey = null,
        ?CacheItemPoolInterface $cache = null
    ) {
        parent::__construct($apiKey, $options, $httpClient, $personalAPIKey);

        $this->cache = $cache;
        $this->cacheExpiryInterval = new \DateInterval($options['cacheDuration'] ?? self::DEFAULT_EXPIRY_DURATION);
    }

    public function fetchFeatureVariants(string $distinctId, array $groups = [], array $personProperties = [], array $groupProperties = []): array
    {
        $callParent = fn() => parent::fetchFeatureVariants($distinctId, $groups, $personProperties, $groupProperties);

        if (!$this->cache) {
            return $callParent();
        }

        // yes, md5 is totally suitable here: it's fast and provides a good distribution
        $query = substr(md5(json_encode([$groups, $personProperties, $groupProperties])), 0, 8);
        $cacheItem = $this->cache->getItem("FeatureFlags.dId=$distinctId.q=$query");

        if ($cacheItem->isHit()) {
            return $cacheItem->get();
        }

        $result = $callParent();

        $this->cache->save(
            $cacheItem
                ->expiresAfter($this->cacheExpiryInterval)
                ->set($result)
        );

        return $result;
    }
}
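For reference, wiring it up with any PSR-6 pool would look roughly like this (this assumes symfony/cache is installed; FilesystemAdapter is used purely as an example, and the key is a placeholder):

use Symfony\Component\Cache\Adapter\FilesystemAdapter;

// Any PSR-6 CacheItemPoolInterface implementation works here.
$client = new CachingClient(
    'phc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    ['cacheDuration' => 'PT10M'], // optional override of the five-minute default
    null,
    null,
    new FilesystemAdapter()
);

$variants = $client->fetchFeatureVariants('user:1');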

I'd be happy to create a PR to add this to the base class.

posthog-php does not respect http protocol

This came up when developing this locally.

When passing a custom HTTP host to the library (e.g. http://localhost:8000), the library internally strips the http:// prefix and by default assumes that https is used.

This makes the bin/posthog script unusable locally.

Dependabot friendly releases

Hi,

I've noticed that when changes are released and Dependabot opens a PR on our platform to update the library, the release notes don't get included.

I think this is because your changes are captured in History.md.

Would it be possible to change this to CHANGELOG.md, which I believe Dependabot will pick up?

Thanks!

Message size is larger than 32KB

Why is the message size limited to 32KB? The API documentation states that the limit is 20MB:

There is no limit on the number of events you can send in a batch, but the entire request body must be less than 20MB by default.

FeatureFlags: PHP issues a warning for non-local feature flags that are not enabled for the user

In PHP.

If:

  1. you have defined a feature flag and
  2. it is not locally computable and
  3. it is not enabled for the user you are checking

(Or, a rarer case, you are checking for an undefined feature flag)

Then PHP will throw a warning of the form:
PHP Warning: Undefined array key "<feature flag name>" in /var/www/vendor/posthog/posthog-php/lib/Client.php on line 234

This can generate a lot of log spam if a feature flag is only enabled for a small % of users.

We could suppress that kind of warning in our PHP config, but it is helpful to have it exist to highlight cases where the key should have been defined.

The fix is easy, and is shown in this (unfortunately outdated) PR: #42. It is essentially a null-coalescing guard, sketched below.
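A sketch of the idea (not the exact PR diff; $flags and $key are illustrative names):

// Indexing directly emits a warning when the key is absent:
// $value = $flags[$key];
// Guarding with the null coalescing operator returns null silently instead:
$value = $flags[$key] ?? null;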

I can update the PR, but before I do, was there a reason it was not merged? Is this in some way expected behaviour?

Possibility to "accumulate" properties?

I would like to use PostHog in a Symfony PHP application. Due to the way it's architected, I would like to do the following:

  1. when the "User" is constructed, set some of its properties in PostHog's internal state (the same way ::init() saves the API key)
  2. later, in some place where it's not convenient to access the user object, be able to capture events with the pre-saved user properties

Is this possible to do?
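As far as I can tell nothing in the library does this today; a minimal application-side sketch (the wrapper class and its method names are made up) could look like:

use PostHog\PostHog;

// Hypothetical wrapper that stores user properties once and merges them
// into every later capture call.
final class PostHogContext
{
    private static ?string $distinctId = null;
    private static array $properties = [];

    public static function setUser(string $distinctId, array $properties): void
    {
        self::$distinctId = $distinctId;
        self::$properties = $properties;
    }

    public static function capture(string $event, array $properties = []): void
    {
        PostHog::capture([
            'distinctId' => self::$distinctId,
            'event' => $event,
            'properties' => array_merge(self::$properties, $properties),
        ]);
    }
}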
