
OpenAI PHP for Laravel: Introduction



OpenAI PHP for Laravel is a community-maintained PHP API client that allows you to interact with the OpenAI API. If you or your business relies on this package, consider supporting the developers who have contributed their time and effort to create and maintain this valuable tool.

Note: This repository contains the integration code of the OpenAI PHP for Laravel. If you want to use the OpenAI PHP client in a framework-agnostic way, take a look at the openai-php/client repository.

Get Started

Requires PHP 8.1+

First, install OpenAI via the Composer package manager:

composer require openai-php/laravel

Next, execute the install command:

php artisan openai:install

This will create a config/openai.php configuration file in your project, which you can modify to your needs using environment variables. Blank environment variables for the OpenAI API key and organization id are already appended to your .env file.

OPENAI_API_KEY=sk-...
OPENAI_ORGANIZATION=org-...

Finally, you may use the OpenAI facade to access the OpenAI API:

use OpenAI\Laravel\Facades\OpenAI;

$result = OpenAI::chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

echo $result->choices[0]->message->content; // Hello! How can I assist you today?

Configuration

Configuration is done via environment variables or directly in the configuration file (config/openai.php).

OpenAI API Key and Organization

Specify your OpenAI API Key and organization. This will be used to authenticate with the OpenAI API - you can find your API key and organization on your OpenAI dashboard, at https://openai.com.

OPENAI_API_KEY=
OPENAI_ORGANIZATION=

Request Timeout

The timeout may be used to specify the maximum number of seconds to wait for a response. By default, the client will time out after 30 seconds.

OPENAI_REQUEST_TIMEOUT=
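For reference, the published configuration file is where this value lands; a minimal sketch of the relevant entries (key names assumed from the file that `php artisan openai:install` publishes — verify against your own `config/openai.php`):

```php
<?php

// config/openai.php (excerpt, sketch)
return [
    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),

    // Maximum number of seconds to wait for a response (defaults to 30).
    'request_timeout' => env('OPENAI_REQUEST_TIMEOUT', 30),
];
```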

Usage

For usage examples, take a look at the openai-php/client repository.

Testing

The OpenAI facade comes with a fake() method that allows you to fake the API responses.

The fake responses are returned in the order they are provided to the fake() method.

Every response class also has a fake() method of its own, which lets you build a response object by providing only the parameters relevant to your test case.

use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Responses\Completions\CreateResponse;

OpenAI::fake([
    CreateResponse::fake([
        'choices' => [
            [
                'text' => 'awesome!',
            ],
        ],
    ]),
]);

$completion = OpenAI::completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'PHP is ',
]);

expect($completion['choices'][0]['text'])->toBe('awesome!');

After the requests have been sent there are various methods to ensure that the expected requests were sent:

use OpenAI\Resources\Completions;

// assert a completion create request was sent
OpenAI::assertSent(Completions::class, function (string $method, array $parameters): bool {
    return $method === 'create' &&
        $parameters['model'] === 'gpt-3.5-turbo-instruct' &&
        $parameters['prompt'] === 'PHP is ';
});

For more testing examples, take a look at the openai-php/client repository.


OpenAI PHP for Laravel is an open-sourced software licensed under the MIT license.

People

Contributors

askdkc, butschster, cosmastech, gehrisandro, krishnahimself, nunomaduro, pb30, peterfox, trippo, xenon87


Issues

How to use HTTP Proxy?

How to use HTTP Proxy or other GuzzleHttp Options?
Can provide a method like OpenAi::requestOptions(['proxy' => 'http://...', ...])?
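One possible approach, sketched here as an assumption rather than a documented feature of this package: build a client through the underlying factory and hand it a Guzzle client configured with Guzzle's standard `proxy` request option. The proxy URL below is a placeholder.

```php
<?php

use GuzzleHttp\Client as GuzzleClient;

// Sketch: pass a pre-configured Guzzle client to the factory.
// withHttpClient() comes from the underlying openai-php/client factory.
$client = \OpenAI::factory()
    ->withApiKey(config('openai.api_key'))
    ->withHttpClient(new GuzzleClient([
        'proxy' => 'http://127.0.0.1:8080', // hypothetical proxy address
        'timeout' => 30,
    ]))
    ->make();
```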

Do you have a plan to support functions?

Hi,
I am trying to use the new function-calling feature of the OpenAI API.
Here is a simple Laravel call:

$result = OpenAI::chat()->create([
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => "What's the weather like in Boston?"],
    ],
    'functions' => [
        [
            'name' => 'get_current_weather',
            'description' => 'Get the current weather in a given location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => [
                        'type' => 'string',
                        'description' => 'The city and state, e.g. San Francisco, CA',
                    ],
                    'unit' => [
                        'type' => 'string',
                        'enum' => ['celsius', 'fahrenheit'],
                    ],
                ],
                'required' => ['location'],
            ],
        ],
    ],
    'function_call' => 'auto',
]);

return serialize($result);

but I am getting:

OpenAI\Responses\Chat\CreateResponseMessage::__construct(): Argument #2 ($content) must be of type string, null given, called in /Users/nuri/Sites/cognicode/vendor/openai-php/client/src/Responses/Chat/CreateResponseMessage.php on line 20

Any help appreciated

OpenAI API Errors when Streaming Chat Completions

Hi, thank you for providing such a nice library!

I am currently using the streamed chat interface OpenAI::chat()->createStreamed([...]); and I was wondering if there is any way to capture error messages in case the OpenAI API returns an error like this:

{
    "error": {
        "message": "The model `gpt-3.5-turboo` does not exist",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

Currently, it looks to me like the stream is only closed immediately but no exception seems to be thrown in case of an error.

Fakeable's buildAttributes doesn't play nicely with batch embeddings

The embeddings endpoint can take a batch of inputs.

However, when trying to fake this in a test, only the first input in the batch is taken into account, the rest are ignored.

OpenAI::fake([
    EmbeddingsCreateResponse::fake([
        'data' => [
            [
                'embedding' => [0.1, 0.2, 0.3], // only 0.1 and 0.2 are used; 0.3 is ignored
            ],
            [
                'embedding' => [0.1, 0.2, 0.3], // this is completely ignored
            ],
        ],
    ]),
]);

This is due to the specific implementation of buildAttributes:

foreach ($original as $key => $entry) {
    $new[$key] = is_array($entry) ?
        self::buildAttributes($entry, $override[$key] ?? []) :
        $override[$key] ?? $entry;
}

This is a problem for tests because any downstream code won't have the correct number of items in the batch coming out of the fake.

Tonen

{
    "id": "cmpl-GERzeJQ4lvqPk8SkZu4XMIuR",
    "object": "text_completion",
    "created": 1586839808,
    "model": "text-davinci:003",
    "choices": [
        {
            "text": "\n\nThis is indeed a test",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 99999,
        "completion_tokens": 7,
        "total_tokens": 99999
    }
}

How to Mock?

How can I mock the facade or underlying class in tests?

Because the underlying OpenAI client (and other classes) is marked 'final', I'm unable to use mocking.

I've tried with the standard Laravel facade mocking

OpenAi::shouldReceive('completions')->andReturn([])

Am I missing something simple here? Is there a way to mock API calls without wrapping all API calls in my own methods?
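Rather than mocking the final client classes, the package's own OpenAI::fake() (described in the Testing section of this README) sidesteps the problem; a minimal sketch:

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Responses\Completions\CreateResponse;

// Fake the API instead of mocking the final client class.
OpenAI::fake([
    CreateResponse::fake([
        'choices' => [
            ['text' => 'faked!'],
        ],
    ]),
]);

$completion = OpenAI::completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'PHP is ',
]);

// $completion['choices'][0]['text'] now holds the faked text.
```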

Make organization configuration more clear

Make it more clear, that the organization ID is required, not the name.


The problem here lies in the env variable naming.

In OpenAI there are actually two similar things: one is called the display name and the other is the ID. The env file in Laravel misled me, so I wrote "Coding Wisely" as the organisation name; when I removed it and changed it to the organisation ID provided by OpenAI, it worked. Maybe just update the environment variable name to make clear that it is the ID.

Here is what I am talking about:
OPENAI_ORGANIZATION=
I would rename it to OPENAI_ORGANIZATION_ID and point the user to where to get it.

In OpenAI you have both an organization name and an organization ID that starts with org-....

Originally posted by @nezaboravi in #47 (comment)

Unrecognized request argument supplied: messages

I'm not sure what I'm doing wrong here:

$result = OpenAI::completions()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'you are a professional assistant.'],
        ['role' => 'user', 'content' => 'Who are you?'],
    ],
]);

It spits out the error "Unrecognized request argument supplied: messages".
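The likely cause, judging from the Get Started example earlier in this README: `messages` is a chat-endpoint parameter, while completions() expects a `prompt`. A sketch of the chat form:

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

// 'messages' belongs to the chat endpoint, so use chat() instead of completions().
$result = OpenAI::chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a professional assistant.'],
        ['role' => 'user', 'content' => 'Who are you?'],
    ],
]);
```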

Undefined array key "permission"

I have installed the package, published the configuration file, and then used the example in the controller as provided:

use OpenAI\Laravel\Facades\OpenAI;

$result = OpenAI::chat()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

echo $result->choices[0]->message->content; // Hello! How can I assist you today?

I am getting the following error:

[2023-11-06 07:09:24] local.ERROR: Undefined array key "permission" {"exception":"[object] (ErrorException(code: 0): Undefined array key \"permission\" at ***\\vendor\\openai-php\\client\\src\\Responses\\Models\\RetrieveResponse.php:51)

How can I use factory for Azure

Can I use

$client = OpenAI::factory()
    ->withBaseUri('{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}')
    ->withHttpHeader('api-key', '{your-api-key}')
    ->withQueryParam('api-version', '{version}')
    ->make();

for Azure as described here?

I don't know exactly how to implement this with OpenAI PHP for Laravel.
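One way to wire this into Laravel, sketched as an assumption (the container binding and the `{…}` values are illustrative, not documented package behavior): build the Azure-flavoured client once in a service provider and resolve it from the container where needed.

```php
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use OpenAI\Client;

class AppServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Sketch: bind an Azure-configured client; all {…} values are placeholders.
        $this->app->singleton(Client::class, function () {
            return \OpenAI::factory()
                ->withBaseUri('{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}')
                ->withHttpHeader('api-key', '{your-api-key}')
                ->withQueryParam('api-version', '{version}')
                ->make();
        });
    }
}
```

You could then type-hint `OpenAI\Client` in controllers or jobs instead of using the facade.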

Streaming with GPT4 Turbo Vision Preview in Laravel

$result = $client->chat()->createStreamed([
    'model' => $openai_model,
    'messages' => $messages,
]);

foreach ($result as $response) {
    error_log(json_encode(['message' => $response]));
    echo "event: data\n";
    echo "data: " . json_encode(['message' => $response->choices[0]->delta->content]) . "\n\n";
    flush();
}

echo "event: stop\n";
echo "data: stopped\n\n";

This code works with all models except the gpt-4 turbo vision preview model.

With the gpt-4 turbo vision preview model, it logs:
Undefined array key 'finish_reason'

New GPT API Endpoints.

The newly available GPT API doesn't use the 'completions' endpoint. What's the easiest way for us to use the GPT-compliant endpoints? It looks like the 'client' package, which is the more general PHP one, is pretty close to being updated. Should those of us on the Laravel package switch, or are the two packages wrappers kept in sync?

Call to undefined method of OpenAI::completions()

In version 0.4.2, using any of the methods like completions, images, or chat does not work and throws Call to undefined method OpenAI::xxxxx(). Reverting to 0.4.1 seems to solve it. This was tested on a fresh Laravel install.

Laravel version: 10.0
PHP: 8.1
OpenAI-laravel: 0.4.2

Add test fake

Hi @nunomaduro

While using the package I ran into the problem that testing / mocking is too complicated, and I tried to make it more Laravel-like. So I tinkered a bit and came up with a solution inspired by Laravel's Mail::fake() and Http::fake() capabilities.

As the code is still a terrible mess I didn't submit a PR yet; instead, here is a test snippet which should explain how it works:

  public function test_open_ai_fake(): void
  {
      // switch to fake mode and pass the fake response
      OpenAI::fake([
          CreateResponse::from([
              'choices' => [
                  [
                      'text' => 'awesome!',
                      'index' => 0,
                      'logprobs' => null,
                      'finish_reason' => 'length',
                  ],
              ],
              // ... all the other required response params
          ]),
      ]);

      // execute a request on the api
      $response = OpenAI::completions()->create([
          'model' => 'text-davinci-003',
          'prompt' => 'PHP is ',
      ]);

      // verify the (fake) response
      $this->assertEquals('awesome!', $response['choices'][0]['text']);

      // do various asserts if the expected request was sent
      // all asserts can be done on the facade by passing the resource class ...
      OpenAI::assertSent(Completions::class, function ($method, $parameters) {
          return $method === 'create' &&
              $parameters['model'] === 'text-davinci-003' &&
              $parameters['prompt'] === 'PHP is ';
      });

      // ... or directly on the (faked) resource
      OpenAI::completions()->assertSent(function ($method, $parameters) {
          return $method === 'create' &&
              $parameters['model'] === 'text-davinci-003' &&
              $parameters['prompt'] === 'PHP is ';
      });

      OpenAI::assertSent(Completions::class);
      OpenAI::completions()->assertSent();

      OpenAI::assertSent(Completions::class, 1);
      OpenAI::completions()->assertSent(1);

      OpenAI::assertNotSent(Completions::class, function ($method, $parameters) {
          return $parameters['prompt'] === 'Python is ';
      });
      OpenAI::completions()->assertNotSent(function ($method, $parameters) {
          return $parameters['prompt'] === 'Python is ';
      });

      OpenAI::assertNotSent(Chat::class);
      OpenAI::chat()->assertNotSent();

      OpenAI::assertNothingSent(); // this one does fail because something was sent
  }

Do you think this is worth continuing?

At least in my current project it already helped me a lot 😉

OPEN AI GPT turbo 3.5 (text-davinci-003)

I am trying to get a response using the GPT-3.5 turbo language model. It returns a JSON response, but it adds \n characters inside the JSON. How can I get a proper JSON response?
Example here:
Example here:
"\n\n{\n status : "initiate",\n response : "start",\n price : 7500,\n message: "Hello, I am the negotiation bot. I am here to help you get the best deal on this 7500 watch. Are you interested in a discount?"\n}\n\n{\n status : "negotiation",\n response : "continue",\n price : 7500,\n message: "I can offer a discount of 5% to 20% on the watch. How much of a discount would you like?"\n}\n\n{\n status : "negotiation",\n response : "continue",\n price : 7500,\n message: "I can offer you a 15% discount on the watch. Does that work for you?"\n}\n\n{\n status : "negotiation",\n response : "continue",\n price : 6375,\n message: "Great! I've applied a 15% discount to the watch, bringing the price down to 6375. Does that sound like a fair price?"\n}\n\n{\n status : "close",\n response : "success",\n price : 6375,\n message: "Wonderful! I'm glad we were able to reach an agreement. The watch is now yours for 6375. Thanks for shopping with us!"\n}"

Getting an error using model "gpt-3.5-turbo"

It gives this error:
"Undefined array key "choices""

My code is:

$title = $request->youtube_title;

$result = OpenAI::completions()->create([
    'model' => 'gpt-3.5-turbo',
    'temperature' => 0.7,
    'top_p' => 1,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
    'max_tokens' => 500,
    'prompt' => sprintf('write a lesson plan on %s', $title),
]);

Can we figure this out?

Laravel-zero

Is the dependency "laravel/framework: ^9.46.0|^10.7.1" necessary? Could it be "laravel/support: ^9.46.0|^10.7.1"?

I've tried installing with laravel-zero (it gave errors with "pestphp/pest" but I was able to update it), but composer warns that there are ambiguous class resolutions in Illuminate\Foundation\* classes (since the openai dependencies install into vendor/laravel/framework/src/Illuminate/Foundation and laravel-zero's into vendor/laravel-zero/foundation/src/Illuminate/Foundation).

Character Encoding Issue

Not sure if this issue has to do with this package, or with the API itself (if not related to this package, please let me know and I will delete this issue from here).

But basically, all characters with tildes are being omitted.

For example, some Spanish words like "canción" are returning as "cancin".

Timeout

Thank you for this great package!

I have been having a lot of jobs timing out with the following exception:

Symfony\Component\HttpClient\Exception\TimeoutException: Idle timeout reached for "https://api.openai.com/v1/chat/completions".

I see in the main PHP client there is a method to set the timeout limit, how can this be set using the Laravel facade? Do you think that method will address the issue?
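As noted in the Configuration section of this README, the Laravel package exposes the timeout through the OPENAI_REQUEST_TIMEOUT environment variable; a sketch of raising it for long-running chat jobs (the config key name is assumed from the published config file):

```php
<?php

// config/openai.php (excerpt, sketch)
// Set OPENAI_REQUEST_TIMEOUT=90 in .env to allow up to 90 seconds per request.
return [
    'request_timeout' => env('OPENAI_REQUEST_TIMEOUT', 30),
];
```

Whether a higher timeout fixes the idle-timeout exception depends on whether the underlying HTTP client applies it to idle time as well as total time.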

Fine Tuning a model in laravel not working

Can you give an example in php when using the fineTunes resource - this is not working:

model.jsonl

{"prompt": "Who is Jack Black?", "completion": "A comedian"}
{"prompt": "Who is Jack Black?", "completion": "A funny guy"}

test

public function test_open_ai_fineTuning_test(): void
{
    try {
        $model = 'curie';
        $result = OpenAI::fineTunes()->create([
            'training_file' => storage_path('/ai/model.jsonl'),
            'model' => $model,
            'n_epochs' => 4,
            'batch_size' => null,
            'learning_rate_multiplier' => null,
            'prompt_loss_weight' => 0.01,
            'compute_classification_metrics' => false,
            'classification_n_classes' => null,
            'classification_positive_class' => null,
            'classification_betas' => [],
            'suffix' => null,
        ]);
        $result = OpenAI::completions()->create([
            'model' => $model,
            'prompt' => 'Who is Jack Black?',
        ]);
    } catch (ErrorException $e) {
        $this->assertFalse(true, message: 'code is not working: ' . $e->getMessage());
    }

    if (
        str_contains(haystack: $result['choices'][0]['text'], needle: 'funny guy') ||
        str_contains(haystack: $result['choices'][0]['text'], needle: 'comedian')
    ) {
        $this->assertTrue(condition: true, message: 'model was finetuned');
    } else {
        $this->fail('Model Was not fine tuned');
    }
}

// result: code is not working None is not of type 'string' - 'suffix'

Problem creating the client

Hi there,
I am sending the request to OpenAI API with this code:

    $response = OpenAI::chat()
    ->create([
        'model' => $conversation->model,
        'messages' => $list_of_messages
    ]);

And it works like a charm.
Now I'd like to use the API Key for each call, but if I try to create the client with this:

    $yourApiKey = getenv('OPENAI_API_KEY');
    $client = OpenAI::client($yourApiKey);

I get the error: Call to undefined method OpenAI\Client::client()

What am I doing wrong?

Thank you.

Documentation issues in README.md

Hi, thanks for making this package.

I ran into two issues when trying to run your example code in readme.md.

1. I needed to add:

   use OpenAI\Laravel\Facades\OpenAI;

   Otherwise it was pulling the OpenAI object from OpenAI\Client.

2. It should be:

   $result = OpenAI::completions()->create([

   rather than

   $client = OpenAI::completions()->create([

Once I made those changes it worked great.

Audio transcribe undefined transient index

Hi,

I have a new issue, which I didn't have before last week, when I send an audio mp3 file to transcribe:

{message: "Undefined array key "transient"", exception: "ErrorException",…}
exception: "ErrorException"
file: "/var/www/html/vendor/openai-php/client/src/Responses/Audio/TranscriptionResponseSegment.php"
line: 56
message: "Undefined array key \"transient\""

Any ideas?

Readme is ahead of the current version

The tagged version is not up to date. For this reason the readme is ahead of the release, and the install command doesn't work because it doesn't exist in vendor.

Why do we need the full laravel/framework?

"laravel/framework": "^9.46.0|^10.7.1",

This package requires the full Laravel framework, which causes issues in frameworks like laravel-zero, lumen, etc. We could instead require just illuminate/support or only the other specific packages needed. I also found an issue pointing to the same problem, #34, caused by the same Foundation class.

Switch to Laravel HTTP client for compatibility with Pulse, Telescope, Sentry etc

The package currently uses Guzzle directly, which means requests won't appear in core tools like Pulse and Telescope. It would be great if Laravel's HTTP client could be used instead to ensure compatibility with the wider Laravel ecosystem.

It also doesn't work with things like Sentry's Laravel HTTP client integration: getsentry/sentry-laravel#797

As a side note, it would also provide a solution to this issue since Laravel's HTTP client has built in retry functionality.

Fluent method to specify API key (multi tenancy)

Hi guys

Is there a clean way to use multiple api keys with this package? Use case is a multi-tenant app where users supply their own keys

Overriding config values feels a bit hacky and doesn’t always work well with Octane.
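A sketch of one approach, assuming the factory API of the underlying client (shown elsewhere in this README): build a short-lived client per request with the tenant's own key instead of mutating shared config. The `openai_api_key` attribute on the user is illustrative.

```php
<?php

use OpenAI\Client;

// Sketch: construct a per-tenant client instead of overriding global config.
function clientForTenant(string $tenantApiKey): Client
{
    return \OpenAI::factory()
        ->withApiKey($tenantApiKey)
        ->make();
}

// $user->openai_api_key is a hypothetical per-tenant column.
$result = clientForTenant($user->openai_api_key)->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);
```

Because nothing global is mutated, this pattern also avoids the state-bleed problems Octane can cause.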

Idle timeout

I had to use the PHP OpenAI client instead of this package and set:

\OpenAI::factory()
    ->withHttpClient(new Client(['timeout' => 90, 'connect_timeout' => 90]))
    ->withApiKey(config('openai.api_key'))
    ->make()
    ->chat()
    ->create([
        'model' => $gptModel,
        'messages' => [
            ['role' => 'user', 'content' => $data['prompt']],
        ],
        'temperature' => (float) $data['temperature'],
        'max_tokens' => (int) $data['max_tokens'],
    ]);

How can we set the timeout?
I'm using GPT-4, and it's usual to have this kind of error.
I think it's related #36

Bug with GPT Vision ?

Hello,

I don't know whether it's me who has a problem with my code or whether the API isn't yet working with GPT Vision:

Here is my code

$result = OpenAI::chat()->create([
    'model' => 'gpt-4-vision-preview',
    'messages' => [
        [
            'role' => 'user',
            'content' => [
                ['type' => 'text', 'text' => 'Describe image'],
                ['type' => 'image_url', 'image_url' => "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"],
            ],
        ],
    ],
    'max_tokens' => 900,
]);

And here is the error returned by Laravel:
Undefined array key "finish_reason"

Thanks for your help.

How can I support streaming in Laravel?

How can I support streaming in Laravel?
I tried this, but maybe something went wrong:

return response([
    'text' => $result,
])->header('Content-type', 'text/event-stream')
  ->header('Cache-Control', 'no-cache');

Mocking in Tests any suggestions?

Typically I can mock a facade like this

    public function test_gets_data() {
        $user = User::factory()->create();
        $data = get_fixture("barbershop_results.json");
        OpenAI::shouldReceive('completions->create')
            ->once()
            ->andReturn($data);

        $this->actingAs($user)->post(route('examples.ask'), [
            'question' => "PHP Is"
        ])
            ->assertStatus(200);
    }

but I am getting this error.

Mockery\Exception: The class \OpenAI\Client is marked final and its methods cannot be replaced. Classes marked final can be passed in to \Mockery::mock() as instantiated objects to create a partial mock, but only if the mock is not subject to type hinting checks.

Any suggestions? Sorry if this is obvious, but if not I will submit a PR to update the README.md to help the next person.

Thanks for the library

retry after

At times, ChatGPT may display the following error message: "That model is currently overloaded with other requests." This issue arises due to the high volume of users attempting to access the API.

To address this problem, referring to the documentation, it is recommended to retry the request after a certain period of time. Adding a 'retryAfter' method, akin to Laravel's 'Http::retry()', could potentially resolve this issue.
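Until something like that exists in the package, Laravel's global retry() helper can wrap the call; a sketch (the attempt count and backoff values are illustrative):

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

// Sketch: retry up to 3 times with a 2-second pause between attempts,
// using Laravel's built-in retry() helper. Any exception triggers a retry.
$result = retry(3, function () {
    return OpenAI::chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello!'],
        ],
    ]);
}, 2000);
```

retry() also accepts a fourth argument, a callback deciding whether a given exception should be retried, which would let you retry only on overloaded-model errors.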

Do we have stream completion (SSE) support?

It is giving me the error: Call to undefined method OpenAI\Resources\Completions::createStreamed()

I have tried this as well but still get the same error.

BTW: I have managed it with curl, without the support of this package.

Please help!

GPT 3.5-turbo/4

Is there any plan to add the new endpoints for these models?

Completions also request moderation

I'm using the completions method like this:

use OpenAI\Laravel\Facades\OpenAI;

...

$result = OpenAI::completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $prompt,
    'max_tokens' => 250,
    'temperature' => 0.2,
]);

For some reason, moderation requests are also being sent:

[Screenshot 2023-02-12 at 18:40:54 — OpenAI usage dashboard showing the moderation requests]

This is the only call that I've done in the past 30 minutes to the OpenAI API, so it shouldn't be merging various usages.

Could it be that we're automatically making moderation requests when making completion requests?

Update the Open AI Client

The OpenAI client is out of date; it's giving an error in the fake method for tests. It's missing the fake trait on the responses class.

Problem with v0.5.1 at composer require - http-interop/http-factory-guzzle

Hi there,

I'm probably being dim, but I was going to try this out so just did a laravel new test-openai to create a fresh project, cd'd into it and ran the composer require openai-php/laravel and I get :

Using version ^0.5.1 for openai-php/laravel
./composer.json has been updated
Running composer update openai-php/laravel
Loading composer repositories with package information
Updating dependencies
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - http-interop/http-factory-guzzle 0.1.0 requires guzzlehttp/psr7 ^1.3.1 -> found guzzlehttp/psr7[1.3.1, ..., 1.9.1] but the package is fixed to 2.5.0 (lock file version) by a partial update and that version does not match. Make sure you list it as an argument for the update command.
    - openai-php/laravel v0.5.1 requires http-interop/http-factory-guzzle ^0.1.0 -> satisfiable by http-interop/http-factory-guzzle[0.1.0].
    - Root composer.json requires openai-php/laravel ^0.5.1 -> satisfiable by openai-php/laravel[v0.5.1].

Use the option --with-all-dependencies (-W) to allow upgrades, downgrades and removals for packages currently locked to specific versions.
You can also try re-running composer require with an explicit version constraint, e.g. "composer require openai-php/laravel:*" to figure out if any version is installable, or "composer require openai-php/laravel:^2.1" if you know which you need.

Installation failed, reverting ./composer.json and ./composer.lock to their original content.

I tried dropping back to specifically v0.5.0 and that worked a-ok. Might just be me of course! PHP 8.2 and the Laravel project is at 10.12.0.
