
client's People

Contributors

adrianpaiva1, arnebr, benbjurstrom, bobbrodie, careybaird, elodieirdor, filipstojkovski13, gehrisandro, georgebohnisch, godruoyi, grahamcampbell, gromnan, haydar, ibotpeaches, jeffreyway, karlerss, laureano, lucianotonet, lucidpolygon, mattsmilin, mpociot, nunomaduro, ordago, paulber33, ruud68, sandermuller, sergiy-petrov, shcherbanich, thoasty-dev, trippo


client's Issues

Getting an error when I try to run the chat() method

Thank you for providing this code. When I try ..., I get this error:

Fatal error: Uncaught Error: Call to undefined method OpenAI\Client::chat()

I've been able to run the code snippets fine as I work down the page. I get this error when I try the chat method. Any suggestions? Thanks!

Why not support stream responses?

Why aren't streamed responses supported? Streaming is so much nicer for users: it shows them that the AI is working. If they have to wait a long time with no output, users may think the AI is not working.
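For reference, newer releases of openai-php/client expose a createStreamed() method on the resources that yields partial responses as they arrive. A minimal sketch, assuming a version that ships this method (check your installed version's README):

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client($yourApiKey); // $yourApiKey is a placeholder

// createStreamed() returns an iterable of partial responses.
$stream = $client->completions()->createStreamed([
    'model' => 'text-davinci-003',
    'prompt' => 'PHP is',
    'max_tokens' => 100,
]);

foreach ($stream as $response) {
    // Each iteration delivers a chunk; echo it immediately so the user
    // sees the model "typing" instead of waiting for the full result.
    echo $response->choices[0]->text;
    flush();
}
```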

all results truncated

Hi,

All my results to prompts are getting truncated. Typically less than a sentence is returned. Any idea why? Example below:

Any help is much appreciated.

Wyatt

My prompt:

"Write me a story."

Result:

[model] => text-davinci-003
[choices] => Array
    (
        [0] => OpenAI\Responses\Completions\CreateResponseChoice Object
            (
                [text] =>

Once upon a time, there was a young girl named Daisy who was
                [index] => 0
                [logprobs] =>
                [finishReason] => length
            )
    )
[usage] => OpenAI\Responses\Completions\CreateResponseUsage Object
    (
        [promptTokens] => 4
        [completionTokens] => 16
        [totalTokens] => 20
    )
)
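The [finishReason] => length and [completionTokens] => 16 values in the dump indicate the completion hit the max_tokens limit (which defaults to 16 for the completions endpoint), so the output is cut off rather than broken. A sketch of the fix; the value 500 is illustrative:

```php
// Raise max_tokens so the model can finish instead of being cut off
// at the default limit of 16 tokens.
$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'Write me a story.',
    'max_tokens' => 500, // allow up to ~500 tokens of output
]);

// When the model stops on its own, finishReason is "stop", not "length".
echo $result['choices'][0]['text'];
```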

Unable to mock anything due to everything being `final`

Wanted to start with thanks a lot for this great package! Really appreciate the work you've done here. ❤️


I'm currently trying to mock responses from OpenAI, but this is not easily done because everything is marked final, which prevents mocking anything. That makes testing a very painful developer experience.

Maybe we can use a factory or something in the OpenAI::client() static method, and remove final on the Client, so it's at least possible to mock the client itself?

final class OpenAIClientFactory
{
    public function make(string $apiToken, string $organization = null): Client
    {
        // ...
    }
}
final class OpenAI
{
    /**
     * Creates a new Open AI Client with the given API token.
     */
    public static function client(string $apiToken, string $organization = null): Client
    {
        return app(OpenAIClientFactory::class)->make($apiToken, $organization);
    }
}
$this->app->bind(OpenAIClientFactory::class);
// TestCase

use OpenAI\Client;

$client = Mockery::mock(Client::class);

app()->bind(OpenAIClientFactory::class, function () use ($client) {
    $mock = Mockery::mock(OpenAIClientFactory::class);

    $mock->shouldReceive('make')->andReturn($client);

    return $mock;
});

$client->shouldReceive('...')->andReturn('...');

Let me know your thoughts, thanks!

Response error: Your access was terminated due to violation of our policies?

Request

$client = OpenAI::client($key);
$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $input->getArgument('question'),
    'temperature' => 0.7,
    'top_p' => 1,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
    'max_tokens' => 600,
]);

Response

 Your access was terminated due to violation of our policies, please check your email for more information. If you believe this is
   in error and would like to appeal, please contact [email protected].

Add Proxy Support for HTTP Request

Can you add support for configuring a proxy on the HTTP client? For now, all classes in openai-php/client are final, so we can't extend them to customize this.
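As a possible workaround rather than extending the final classes: recent versions of the library let you inject your own PSR-18 HTTP client, so a Guzzle client configured with a proxy can be passed in. A hedged sketch, assuming a version that ships OpenAI::factory() (method names may differ between releases; the proxy address is a placeholder):

```php
// Guzzle accepts a 'proxy' option as a client-wide default.
$httpClient = new \GuzzleHttp\Client([
    'proxy' => 'http://127.0.0.1:8080', // placeholder proxy address
]);

// Inject the preconfigured client instead of subclassing final classes.
$client = OpenAI::factory()
    ->withApiKey($yourApiKey)
    ->withHttpClient($httpClient)
    ->make();
```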

HttpTransporter object to return error instead of throwing it.

Hope everyone is doing well.
Currently the architecture of the component is such that src/Transporters/HttpTransporter.php, in its requestObject method, checks for the presence of $response['error']; if there is none, it returns the response object (passing it to the corresponding CreateResponse class).
I suggest redefining the behavior of the transporter so that it returns any error and passes it along to the CreateResponse class. I propose that the CreateResponse class process the response attributes, including those with errors, and ultimately forward the error details to the application that initiated the API call.
The benefits of my idea:

  1. In line with TDD approach, the developer can easily check for the presence of the error message, statusCode, and deal with them accordingly;
  2. The developer will be able to log those errors easily and use it for overall enhancement and debugging of his host application;
  3. The developer can provide a more user-friendly error message to the end-user based on the returned/received error;
  4. The developer will be able to provide translation of the errors to the end-user.

Currently, I don't see how these four things are possible under the current behavior of the transporter.
The following test can show better what I mean:

public function test_client_handles_error_response_correctly(): void
{
    $client = OpenAI::client('sk-````');
    $response = $client->completions()->create([
        'prompt' => 'PHP is',
        'model' => 'wrongModel', // invoke error
        'max_tokens' => 20,
        'temperature' => 0,
    ]);
    // Make assertions
    $this->assertNotEmpty($response->error["message"]);
    $this->assertEquals(500, $response->error["status_code"]);
}

cURL error 60: SSL certificate problem: certificate has expired

Is anyone else having this issue? This is a brand-new Laravel project on Windows, running through php artisan serve. I'm just running the code from the example in the docs.

My code:

Route::get('/', function () {
    $client = OpenAI::client(config('app.open-ai-key'));

    $prompt = <<<TEXT
Extract the requirements for this job offer as a list.
 
"We are seeking a PHP web developer to join our team. The ideal candidate will have experience with PHP, MySQL, HTML, CSS, and JavaScript. They will be responsible for developing and managing web applications and working with a team of developers to create high-quality and innovative software. The salary for this position is negotiable and will be based on experience."
TEXT;

    $result = $client->completions()->create([
        'model' => 'text-davinci-002',
        'prompt' => $prompt,
    ]);

    ray($result);
});

Flare exception:
https://flareapp.io/share/xPQoaD25#F47

Model gpt-3.5-turbo not matching settings in usage report

Hi 👋

I set up the OpenAI client to use the gpt-3.5-turbo model, however, in the usage report, it appears as gpt-3.5-turbo-0301.

(screenshot: usage report showing gpt-3.5-turbo-0301)

My configurations are set to use gpt-3.5-turbo:
(screenshot)

Although they are almost the same, the documentation states the following:
(screenshot)

In my tests, I noticed that it is really not following the system instructions.

I could not find the part of the code responsible, so I can't fix it and submit a pull request myself; how can we investigate this?

Error creating fine tune with default params

I'm facing a problem with the default params in the OpenAI fine-tunes API.
Trying to create a fine-tune with this code:

 $responseFineTuning = $openAIClient->fineTunes()->create([
    'training_file' => 'my_file_id',
    'model' => 'davinci',
]);

results in a throw:

 TypeError 

  OpenAI\Responses\FineTunes\RetrieveResponseHyperparams::__construct(): Argument #1 ($batchSize) must be of type int, null given, called in /code/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php on line 39

  at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php:20
     16▕      * @use ArrayAccessible<array{batch_size: int, learning_rate_multiplier: float, n_epochs: int, prompt_loss_weight: float}>
     17▕      */
     18▕     use ArrayAccessible;
     19▕ 
  ➜  20▕     private function __construct(
     21▕         public readonly int $batchSize,
     22▕         public readonly float $learningRateMultiplier,
     23▕         public readonly int $nEpochs,
     24▕         public readonly float $promptLossWeight,

      +3 vendor frames 
  4   app/App/Console/Commands/GenerateFineTuning.php
      OpenAI\Resources\FineTunes::create(["file-XXXXXXXXXXXXX", "davinci"])

      +13 vendor frames 
  18  artisan:37
      Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))

Looking at the API, $batchSize is an optional param and null by default.

(screenshot: API docs showing batch_size as optional)

Make base URI configurable

Hi,

what do you think about making the base URI configurable? At the moment, the base URI is hardcoded to https://api.openai.com/v1.

Making it configurable would make end-to-end testing of applications using the OpenAI client easier, as one could point the client at a mock server in the test environment.

I would implement this as a non-breaking change via the following steps:

  1. Extract interface from OpenAI\ValueObjects\Transporter\BaseUri
  2. Change parameter type of $baseUri in OpenAI\ValueObjects\Transporter\Payload::toRequest to extracted interface
  3. Add optional parameter BaseUriInterface $baseUri = null to OpenAI::client
  4. Modify OpenAI::client, so that it handles the default value for $baseUri like that:
public static function client(string $apiToken, string $organization = null, BaseUriInterface $baseUri = null): Client
{
    ...
    $baseUri = $baseUri ?? BaseUri::from('api.openai.com/v1');
    ...
}

What do you think about that? If you don't object, I would implement that.

Error: NULL finish_reason for completions

Sometimes getting error:

OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /var/www/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 24

$attributes from \OpenAI\Responses\Completions\CreateResponseChoice::from:

array(4) {
  ["text"]=>
  string(972) " ... somthing here ... "
  ["index"]=>
  int(0)
  ["logprobs"]=>
  NULL
  ["finish_reason"]=>
  NULL
}

Answers are truncated

I can't find it in the documentation, maybe I'm missing something. But the answers are truncated. What could be the reason?

TypeError when constructing OpenAI response choice object with null finish reason.

TypeError: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 26
  File "/app/Actions/Document/CreateNewContent.php", line 64, in App\Actions\Document\CreateNewContent::complete
    $result = OpenAI::completions()->create($parameters);
  File "/app/Actions/Document/CreateNewContent.php", line 39, in App\Actions\Document\CreateNewContent::App\Actions\Document\{closure}
    return $this->complete($template, $document, $data);
  File "/app/Actions/Document/CreateNewContent.php", line 42, in App\Actions\Document\CreateNewContent::create
    });
  File "/app/Http/Controllers/App/DocumentController.php", line 182, in App\Http\Controllers\App\DocumentController::writeForMe
    $choices = $contentCreator->create($template, $document, $data);
  File "/public/index.php", line 52
    $request = Request::capture()
...
(77 additional frame(s) were not displayed)

If the API Key contains a newline, an incorrect error is thrown

I am reading my API key from a file. My editor was adding a newline if the file was open. The result was that any call made via the client would error with the message "you must provide a model parameter", even though a model parameter was being sent.

I fixed my issue by simply trimming the result of my call to get the file's contents, but the error message from the API was very confusing. Maybe just trim the key before sending it to the API?
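The workaround described above is a one-liner; the file path below is illustrative:

```php
// Trim whitespace (including a trailing newline added by an editor)
// from the key file before handing it to the client.
$apiKey = trim(file_get_contents('/path/to/api-key')); // path is illustrative

$client = OpenAI::client($apiKey);
```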

OpenAI::client() return error

Hello, $client = OpenAI::client($yourApiKey); return {"error":"Parse error: syntax error, unexpected '?', expecting function (T_FUNCTION) or const (T_CONST)"}

Any possible reasons? Thanks!

Authorization issue (html 500)

I am trying to call OpenAi in a test website for future projects, but encountered a problem.

On my website I have a button and output field.
When clicking the button, the following code is executed via JS:
`<script>
console.log("hello");
window.onload = function() {
    document.getElementById("submit-request").addEventListener("click", function() {
        var prompt = document.getElementById("prompt-input").value;
        var xhr = new XMLHttpRequest();

        xhr.open("GET", "/wp-admin/admin-ajax.php?action=make_request", true);

        xhr.onreadystatechange = function() {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var response = JSON.parse(xhr.responseText);
                document.getElementById("response-output").innerHTML = JSON.stringify(response);
            }
        };
        xhr.send();
    });
};
</script>`

The PHP function that is behind the "make_request" is the following:
`
add_action( 'wp_ajax_make_request', 'make_request' );
add_action( 'wp_ajax_nopriv_make_request', 'make_request' );

function make_request() {
    $client = OpenAI::client('sk-xxx');

    $result = $client->completions()->create([
        'model' => 'text-davinci-003',
        'prompt' => 'PHP is',
        'max_tokens' => 6,
    ]);

    echo $result['choices'][0]['text'];

    wp_die();
}
`

As you can see it is just the basic example from the Readme for the moment. The API key is removed for obvious reasons.
This is the error I got in the webbrowser console:
"GET https://xx.host.com/wp-admin/admin-ajax.php?action=make_request&prompt=kn 500"

What is the problem? Is there some extra authorization step I need to do for openai-php/client?

Add Timeout Param

The official Python library allows a timeout to be set on requests. It would be really helpful for production applications to be able to set a timeout on requests so we don't keep our web workers hanging if there are hiccups in connections or issues on the OpenAI side.
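Until a first-class timeout option exists, one possible workaround is injecting a preconfigured Guzzle client where the library supports a custom PSR-18 client. A hedged sketch, assuming a version that ships OpenAI::factory() (method names may differ between releases; the timeout values are illustrative):

```php
// Guzzle supports per-client timeout defaults.
$httpClient = new \GuzzleHttp\Client([
    'connect_timeout' => 5,  // seconds to establish the connection
    'timeout'         => 30, // seconds for the whole request
]);

$client = OpenAI::factory()
    ->withApiKey($yourApiKey)
    ->withHttpClient($httpClient)
    ->make();
```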

Request with context

I could not figure out how to make a request that includes the context preceding the conversation. Is this functionality not implemented, or am I looking at the wrong function? Maybe you call it something other than context?

The problem with KeyAPI.

Hello,
I created and changed my key today and yesterday, but I still get the error below. What can I do?

Fatal error: Uncaught OpenAI\Exceptions\ErrorException: You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys. in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php:61 Stack trace: #0 D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Resources\Completions.php(26): OpenAI\Transporters\HttpTransporter->requestObject() #1 D:\OpenServer\domains\localhost\openai\test.php(9): OpenAI\Resources\Completions->create() #2 {main} thrown in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php on line 61

Error Call to undefined method OpenAI\Client::chat()

Hi!!!

I'm doing some tests, and for the chat I'm getting this error; the other resources I tested worked, except the chat.

The code is exactly as in the example:

    $api_key = getenv('OPENAI_KEY');
    $organization = getenv('ORGANIZATION');

    $client = \OpenAI::client($api_key, $organization);

    $response = $client->chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello!'],
        ],
    ]);

    var_dump($response);

Thoughts on concurrent/async requests?

Hello!

I was attempting to replace some of the underlying concrete implementations of this project in order to send concurrent API requests to OpenAI to generate multiple completions at once, but due to the architecture of the Resources, they will always make a Request and generate a Response.

For example, 10 synchronous requests to the /completions endpoint with this library can take up to 50 seconds, depending on what's being generated.

I did a basic implementation using Laravel's Http client utilizing pooling (basically Guzzle Async), and I can generate the same 10 completions in ~4-5 seconds.

Any thoughts on adding concurrent/async support in the future, or at least some way of collecting a pool of Requests, so developers could process them on their own?

Pay-as-you-go users can make up to 3,000 requests/minute after 48 hours.

Thanks!
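The pooling approach described above can be sketched with plain Guzzle promises, bypassing the library's Resources entirely. This is a hedged illustration, not the library's API; the endpoint and payload mirror the OpenAI REST API, and the prompts are placeholders:

```php
use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$http = new Client([
    'base_uri' => 'https://api.openai.com/v1/',
    'headers'  => ['Authorization' => 'Bearer ' . $yourApiKey],
]);

$prompts  = ['PHP is', 'Laravel is', 'Guzzle is'];
$promises = [];

// Kick off all requests without waiting for any of them to finish.
foreach ($prompts as $i => $prompt) {
    $promises[$i] = $http->postAsync('completions', [
        'json' => [
            'model'      => 'text-davinci-003',
            'prompt'     => $prompt,
            'max_tokens' => 20,
        ],
    ]);
}

// The requests run concurrently, so the total wall time is roughly
// that of the slowest single request rather than the sum of all.
$responses = Utils::unwrap($promises);
```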

An issue on Fine Tune List API: when the status_details of a RetrieveResponseFile is an exception message (string)

Hi, I encountered an issue with retrieving the list of fine tunes.

  1. I uploaded a JSONL file.
  2. Used the file to create a new fine-tune.
  3. Tried to retrieve the list of fine-tunes.
  4. Got the error below.

I suspect this is because the status details show that the file I uploaded was invalid. That is not really an issue with the package itself, but it would be great if the package could also handle this scenario.

I hope this can be resolved soon. Thanks!

OpenAI\Responses\FineTunes\RetrieveResponseFile::__construct(): Argument #8 ($statusDetails) must be of type ?array, string given, called in /var/www/html/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseFile.php on line 50

The exception upon checking:
(screenshot)

The stack trace I received:
(screenshot)

Symfony Bundle

Hi, thank you for this library that is very easy to use.

I created a Symfony bundle by copying things from the Laravel integration. You can find it here: https://github.com/GromNaN/openai-symfony (work in progress).

What do you think of moving this project into the openai-php organisation? It would be good for this project to provide an integration with a 2nd major framework. The bundle is not published on Packagist yet, so that it would be a clean start.

PHP 8.1+

Not really an issue, but why does it need PHP 8.1+ to run? I would like to use the official client. Anyway, besides that, I really love GPT-3.

How to remember previous chat when using `gpt-3.5-turbo`?

Hi guys,

I am using gpt-3.5-turbo. Whenever I use this to get an answer, it forgets the previous one.

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Message here'],
    ],
]);

How do I link it to the previous message? Using id from response or something?
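The chat API is stateless: there is no id-based linking, so to "remember" earlier turns you resend the prior messages yourself with each request. A minimal sketch; the example messages are placeholders:

```php
// Accumulated conversation so far.
$history = [
    ['role' => 'user', 'content' => 'My name is Wyatt.'],
    ['role' => 'assistant', 'content' => 'Nice to meet you, Wyatt!'],
];

// Append the new question to the history before each call.
$history[] = ['role' => 'user', 'content' => 'What is my name?'];

$response = $client->chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => $history, // the whole history, not just the last turn
]);

// Store the reply so the next request can include it as context.
$history[] = [
    'role'    => 'assistant',
    'content' => $response->choices[0]->message->content,
];
```

Note that the full history counts toward the model's token limit, so long conversations eventually need truncation or summarization.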

new chat completion endpoint (api version 1.2.0)

Does this support ChatCompletion? gpt-3.5-turbo

import openai

openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
)
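Assuming a library version that ships the chat() resource (added for the chat completion endpoint; check the changelog for the exact release), the Python example above would map to something like:

```php
// PHP equivalent of the Python ChatCompletion example above.
$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Who won the world series in 2020?'],
        ['role' => 'assistant', 'content' => 'The Los Angeles Dodgers won the World Series in 2020.'],
        ['role' => 'user', 'content' => 'Where was it played?'],
    ],
]);

echo $response->choices[0]->message->content;
```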

Timeout with GPT-4, stream = true required

It seems that with GPT-4 it takes too long to receive a response from the API. The API reference mentions stream = true to start receiving the first tokens immediately and avoid a timeout.

Parameters in the Fines Tunes Resource

Sorry for my English, but I want to know whether $parameters is an array of arrays or what kind of structure I need to use, because the documentation says I need a JSONL file, so I don't know how to build that here in PHP.

$client->fineTunes()->create($parameters);

Thanks

Undefined array key "events"

I get the following error: Undefined array key "events"

Here is the stack trace:


  Undefined array key "events"

  at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponse.php:52
     48▕     public static function from(array $attributes): self
     49▕     {
     50▕         $events = array_map(fn (array $result): RetrieveResponseEvent => RetrieveResponseEvent::from(
     51▕             $result
  ➜  52▕         ), $attributes['events']);
     53▕
     54▕         $resultFiles = array_map(fn (array $result): RetrieveResponseFile => RetrieveResponseFile::from(
     55▕             $result
     56▕         ), $attributes['result_files']);

Here is the code I am running: $response = $client->fineTunes()->list();

change Endpoint to Azure

Since the OpenAI API is available in Azure, is there a possibility to change the endpoint to Azure? Or is there a plan to add this feature?

fully typed responses and requests

Hi @nunomaduro

First of all, thank you for reviewing and merging my previous PRs so quickly! 👍
It's a pleasure to help you with this package. I've already learned a lot about (Open)AI, and even more from the way you build a clean package.

I am a huge fan of using fully typed responses and requests. Therefore I gave it a try with the moderations endpoint to see how it could work.

What I ended up with is the following:

$client = OpenAI::client('TOKEN');

$request = new ModerationCreateRequest(input: 'I want to kill them.', model: ModerationModel::TextModerationLatest);

$response = $client->moderations()->create($request);

dump($response->id); // modr-5vvCuUd3dRjgIumIZIu0yBepv5qwL
dump($response->model); // text-moderation-003
dump($response->results[0]->flagged); // true
dump($response->results[0]->categories[0]->toArray()); // ["category" => "hate", "violated" => true, "score" => 0.40681719779968 ]

In my opinion this gives developers a better UX than plain arrays.

More or less I took the approach Steve McDougall described here: https://laravel-news.com/working-with-data-in-api-integrations

I also implemented request factories to give the user various options for creating the request instance:

// create the request directly
$request = new ModerationCreateRequest(
    input: 'I want to kill them.',
    model: ModerationModel::TextModerationLatest,
);

// pass an array to a factory instance
$request = (new ModerationCreateRequestFactory)->make([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);

// pass an array to a static factory method
$request = ModerationCreateRequestFactory::new([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);

If you want to have a look, I pushed the POC here: https://github.com/gehrisandro/openai-php-client/tree/poc-strong-typed-requests-and-responses

($finishReason) must be of type string

local.ERROR: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 {"exception":"[object] (TypeError(code: 0): OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 at /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php:9)
[stacktrace]

Rate limiter

Hi,

Quick question, does this package include a rate limiter or do we need to do it ourselves?

Thanks,

Authorization error, BearerAuthentication

Hello,
I have a persistent error and can't get over it.

require __DIR__ . '/vendor/autoload.php';

use OpenAI\Client;
use OpenAI\Api\Authentication\BearerAuthentication;
use OpenAI\Resources\Completions\Create as CompletionCreate;

$apiKey = 'sk-fEp........';
$client = new Client(new BearerAuthentication($apiKey));

function generateText($client, $model, $prompt, $length, $temperature = 0.5) {
    $response = $client->completions()->create(
        $model,
        (new CompletionCreate())
            ->setPrompt($prompt)
            ->setMaxTokens($length)
            ->setTemperature($temperature)
    );
    return $response->getChoices()[0]->getText();
}


Fatal error: Uncaught Error: Class "OpenAI\Api\Authentication\BearerAuthentication" not found in D:\OpenServer\domains\localhost\openai\test.php:10 Stack trace: #0 {main} thrown in D:\OpenServer\domains\localhost\openai\test.php on line 10
