
doc's Introduction

Npgsql - the .NET data provider for PostgreSQL


What is Npgsql?

Npgsql is the open source .NET data provider for PostgreSQL. It allows you to connect to and interact with a PostgreSQL server from .NET.

For the full documentation, please visit the Npgsql website. For the Entity Framework Core provider that works with this provider, see Npgsql.EntityFrameworkCore.PostgreSQL.

Quickstart

Here's a basic code snippet to get you started:

using Npgsql;

var connString = "Host=myserver;Username=mylogin;Password=mypass;Database=mydatabase";

var dataSourceBuilder = new NpgsqlDataSourceBuilder(connString);
var dataSource = dataSourceBuilder.Build();

await using var conn = await dataSource.OpenConnectionAsync();

// Insert some data
await using (var cmd = new NpgsqlCommand("INSERT INTO data (some_field) VALUES (@p)", conn))
{
    cmd.Parameters.AddWithValue("p", "Hello world");
    await cmd.ExecuteNonQueryAsync();
}

// Retrieve all rows
await using (var cmd = new NpgsqlCommand("SELECT some_field FROM data", conn))
await using (var reader = await cmd.ExecuteReaderAsync())
{
    while (await reader.ReadAsync())
        Console.WriteLine(reader.GetString(0));
}

Key features

  • High-performance PostgreSQL driver. Regularly figures in the top contenders on the TechEmpower Web Framework Benchmarks.
  • Full support of most PostgreSQL types, including advanced ones such as arrays, enums, ranges, multiranges, composites, JSON, PostGIS and others.
  • Highly-efficient bulk import/export API.
  • Failover, load balancing and general multi-host support.
  • Great integration with Entity Framework Core via Npgsql.EntityFrameworkCore.PostgreSQL.

For the full documentation, please visit the Npgsql website at https://www.npgsql.org.


doc's Issues

Look into merging the two doc repositories

We currently have two doc repositories:

  • npgsql/doc which is where the conceptual doc markdown files live, and all the configuration for building the docs site
  • npgsql/livedocs which is where generated HTML/CSS/JS gets pushed from npgsql/doc

The latter could simply be a branch within the former, simplifying the whole setup.

Keyword not supported: max pool size

The issue

I'm running into this issue and have been unable to track down why it happens. The "Max Pool Size" connection string keyword should be supported but somehow isn't.

Exception has occurred: CLR/System.ArgumentException
Exception thrown: 'System.ArgumentException' in System.Data.Common.dll: 'Keyword not supported: max pool size'
   at Npgsql.NpgsqlConnectionStringBuilder.GetProperty(String keyword)
   at Npgsql.NpgsqlConnectionStringBuilder.set_Item(String keyword, Object value)
   at System.Data.Common.DbConnectionStringBuilder.set_ConnectionString(String value)
   at Npgsql.NpgsqlConnection.GetPoolAndSettings()

Further technical details

Npgsql version: 4.1.1
PostgreSQL version: 11
Operating system: Linux
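
For reference, a minimal sketch of setting the pool size through NpgsqlConnectionStringBuilder instead of a raw keyword (host, credentials and values below are illustrative); building the string this way sidesteps keyword-spelling issues:

using Npgsql;

var builder = new NpgsqlConnectionStringBuilder("Host=localhost;Username=postgres;Database=mydb")
{
    MaxPoolSize = 50,  // corresponds to the "Max Pool Size" connection string keyword
    MinPoolSize = 1
};

await using var conn = new NpgsqlConnection(builder.ConnectionString);
await conn.OpenAsync();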

How to get extra output information about a MERGE operation

Hello guys!

How can I get access to information about the updated/deleted/inserted/default-insert/no-action rows for a MERGE operation using Npgsql?

Quoting from the official documentation:

The number of rows updated, inserted or deleted is not available as part of the command tag. An optional NOTIFY message can be generated to present this information, if desired.

NOTIFY: 34 rows processed: 11 updated, 5 deleted, 15 inserted, 3 default inserts, 0 no action

As far as I understand, I can force PostgreSQL to output this information, including in JSON form, and then deserialize it.
How do I do that?
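
Not from the original thread, but one hedged sketch of a client-side hook: if the MERGE is wrapped in a server-side DO block or function that RAISEs a NOTICE with the per-action counts, Npgsql's Notice event can surface that message (the wrapper SQL itself is assumed and not shown):

// Print any server-side notices (e.g. a RAISE NOTICE with per-action counts).
conn.Notice += (_, e) =>
    Console.WriteLine($"Server notice: {e.Notice.MessageText}");

// mergeWrapperSql is hypothetical: a DO block or function that runs the MERGE
// and raises a NOTICE (or returns JSON) with the per-action row counts.
await using var cmd = new NpgsqlCommand(mergeWrapperSql, conn);
var totalAffected = await cmd.ExecuteNonQueryAsync();  // rows-affected as reported by the command tag, if any
Console.WriteLine($"Command tag: {totalAffected} rows affected");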

Please document the new replication API

Hi,

We have been using the logical replication feature with Npgsql 5.x quite successfully. The release notes for Npgsql 6.x say:

The logical replication API has been redone

As part of the redesign around logical replication support, some major changes to the API had to be done. Applications will have to be changed to use the new API.

It would be helpful if there was some documentation for this API, even a migration guide. I searched the manual, and the best I could find is here, which is for version 5 and doesn't compile with Npgsql 6.x.

A migration guide would be helpful, since PgOutputReplicationOptions now expects a protocol version where previously none was required. I will read the Postgres manual to find out which replication protocol version my v13.6 instance supports, but it would be nice if this change (and others) were covered.
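
For what it's worth, a rough sketch of the Npgsql 6 pgoutput API shape, pieced together from the release notes rather than from official docs (connection string, slot and publication names are made up; exact signatures should be verified against the version in use):

using System.Threading;
using Npgsql.Replication;
using Npgsql.Replication.PgOutput;

await using var conn = new LogicalReplicationConnection("Host=localhost;Username=postgres;Database=mydb");
await conn.Open();

var slot = new PgOutputReplicationSlot("my_slot");                  // assumed to exist already
var options = new PgOutputReplicationOptions("my_publication", 1);  // publication name + protocol version

await foreach (var message in conn.StartReplicationAsync(slot, options, CancellationToken.None))
{
    Console.WriteLine($"Received {message.GetType().Name} (WAL end: {message.WalEnd})");

    // Acknowledge the message so the server can advance the slot.
    conn.SetReplicationStatus(message.WalEnd);
}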

I hope to use the logical replication API in Npgsql v6 as successfully as v5 ... I just need a little help.

Sorry if this information exists elsewhere and I have not found it.

Thank you for Npgsql.

How to get information about primary keys of table?

So, example:

            var cmd = new NpgsqlCommand($"SELECT * FROM \"{SchemaName}\".\"{TargetTableName}\" LIMIT 0", pgsql);
            var reader = cmd.ExecuteReader();
            var columnSchema = reader.GetColumnSchema();
            
            for (int i = 0; i < columnSchema.Count; i++)
            {
                Console.WriteLine(columnSchema[i].IsKey); // ALWAYS IS NULL
            }
            Console.WriteLine("++++++++++++++++++++");

No info:

            var columnSchema2 = pgsql.GetSchema("Columns", new string[] { null, null, TargetTableName });
            for (int i = 0; i < columnSchema2.Rows.Count; i++)
            {
                for (int j = 0; j < columnSchema2.Columns.Count; j++)
                {
                    Console.WriteLine("{0}: {1}", columnSchema2.Columns[j], columnSchema2.Rows[i][j]);
                }
                Console.WriteLine("********************");
            }
            Console.WriteLine("++++++++++++++++++++");

table_catalog: TestCatalog
table_schema: public
table_name: TestTable
column_name: Id
ordinal_position: 1
column_default:
is_nullable: NO
data_type: integer
character_maximum_length:
character_octet_length:
numeric_precision: 32
numeric_precision_radix: 2
numeric_scale: 0
datetime_precision:
character_set_catalog:
character_set_schema:
character_set_name:
collation_catalog:

Also no info:

            for (int i = 0; i < columnSchema2.PrimaryKey.Length; i++)
            {
                Console.WriteLine(columnSchema2.PrimaryKey[i].ColumnName);
                Console.WriteLine("********************");
            }
            Console.WriteLine("++++++++++++++++++++");

Is there a workaround?
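
One possible workaround, offered here as a sketch rather than a confirmed answer: key-related columns of GetColumnSchema() are typically only populated when the reader is opened with CommandBehavior.KeyInfo, so requesting that behavior explicitly may fill IsKey (schema and table names reuse the ones above):

using System.Data;

var cmd = new NpgsqlCommand($"SELECT * FROM \"{SchemaName}\".\"{TargetTableName}\" LIMIT 0", pgsql);
using var reader = cmd.ExecuteReader(CommandBehavior.KeyInfo);
foreach (var column in reader.GetColumnSchema())
    Console.WriteLine($"{column.ColumnName}: IsKey = {column.IsKey}");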

Move conceptual docs to the doc repo

At the moment our conceptual (non-API) docs lives in each specific project's repo, under a doc directory. The doc repo (this repo) uses git submodules to include the three projects and access those directories. The logic here was that documentation changes should be part of the same commit which implements the feature.

In practice, this doesn't seem to have worked well:

  • Documentation is frequently written later and in a separate commit: devs don't necessarily have time when developing the feature to also write full docs.
  • Doc commits are done in the work branch (dev), and then have to be cherry-picked to master, pushed to github, and only then can the git submodule be updated in the doc repo. This is a very heavy and cumbersome flow.
  • Our project's git history becomes cluttered with doc commits.

Instead, I propose we just move all conceptual docs to this repo and be done with it. Updates can be done directly in the repo, without touching the project repos, and the flow will be much lighter.

Note that we'll still use git submodules to access the code in the main repos, in order to generate API docs.

Any objections?

/cc @YohDeadfall @austindrenski

Document the new timeout and cancellation behavior

  • Soft (server-side) vs. hard (client-side) cancellation
  • Cancellation timeout (in the connection string parameters page as well)
  • Timeouts are per-operation; the cancellation token isn't stored internally.
    • This means you must call Read/NextResult with the token to actually trigger the cancellation (see npgsql/npgsql#4542).
    • But a single Read/NextResult might not be enough: if the data is already buffered, the token is ignored entirely.
    • So the guidance should always be to consume the reader until the end (see the sketch after this list).
  • Timeouts are not aggregated across operations (npgsql/npgsql#1041).
  • Nothing is supported (yet) for multiplexing.
  • Describe statement_timeout, where it's problematic, and also how to configure it on the connection string (npgsql/efcore.pg#2486 (comment)).
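
A minimal sketch of the per-operation point above (command text and token name are illustrative): the token has to be passed to every Read/NextResult call, and the reader should be consumed to the end so buffered rows don't hide the cancellation:

await using var cmd = new NpgsqlCommand("SELECT some_field FROM data", conn);
await using var reader = await cmd.ExecuteReaderAsync(cancellationToken);
while (await reader.ReadAsync(cancellationToken))
{
    // process the row
}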

Can't write CLR type System.DateTimeOffset with handler type TimestampHandler

Steps to reproduce

  • Checkout https://github.com/SeppPenner/SerilogSinkForPostgreSQL.

  • Setup a local PostgreSQL database.

  • Check the SerilogSinksPostgreSQL.IntegrationTests project

  • Check the class DbWriteTests with the method AutoCreateTableIsTrueShouldCreateTable()

  • Adjust the private const string ConnectionString = "User ID=postgres;Password=postgres;Host=localhost;Port=5432;Database=Serilog"; property and run the method AutoCreateTableIsTrueShouldCreateTable()

The issue

DateTimeOffset cannot be mapped to Timestamp (NpgsqlType). I do not know if this is a bug because https://www.npgsql.org/doc/types/basic.html shows:

NpgsqlDbType: Timestamp
DbType: DateTime, DateTime2
PostgreSQL type: timestamp
Accepted .NET types: DateTime, DateTimeOffset, NpgsqlDateTime

for the write mapping.

Exception:

{System.InvalidCastException: Can't write CLR type System.DateTimeOffset with handler type TimestampHandler
   at Npgsql.TypeHandling.NpgsqlSimpleTypeHandler`1.ValidateObjectAndGetLength(Object value, NpgsqlLengthCache& lengthCache, NpgsqlParameter parameter) in C:\projects\npgsql\src\Npgsql\TypeHandling\NpgsqlSimpleTypeHandler.cs:line 236
   at Npgsql.NpgsqlParameter.ValidateAndGetLength() in C:\projects\npgsql\src\Npgsql\NpgsqlParameter.cs:line 553
   at Npgsql.NpgsqlCommand.ValidateParameters() in C:\projects\npgsql\src\Npgsql\NpgsqlCommand.cs:line 793
   at Npgsql.NpgsqlCommand.<ExecuteDbDataReader>d__100.MoveNext() in C:\projects\npgsql\src\Npgsql\NpgsqlCommand.cs:line 1140}

My query:

INSERT INTO "Logs" ("Message", "MessageTemplate", "Level", "RaiseDate", "Exception", "Properties", "PropertyTest", "IntPropertyTest", "MachineName") VALUES (@Message, @MessageTemplate, @Level, @RaiseDate, @Exception, @Properties, @PropertyTest, @IntPropertyTest, @MachineName)

The parameter mapping:

Parameters:

DbType, Value, .Net type, NpgsqlType

String, "Test0: TestObjectType1 { IntProp: 42, StringProp: "Test" } test2: TestObjectType2 { DateProp: 05/12/2019 18:21:41, NestedProp: TestObjectType1 { IntProp: 42, StringProp: "Test" } } testStr: stringValue", string, Text
String, "Test{testNo}: {@testObject} test2: {@testObj2} testStr: {@testStr:l}", string, Text
String, "Information", Enum, Varchar
DateTime, 12.05.2019 18:21:41 +02:00, DateTimeOffset, Timestamp
String, {}, DbNull, Text
Object, "{"Timestamp":"2019-05-12T18:21:41.2106317+02:00","Level":"Information","MessageTemplate":"Test{testNo}: {@testObject} test2: {@testObj2} testStr: {@testStr:l}","Properties":{"testNo":0,"testObject":{"_typeTag":"TestObjectType1","IntProp":42,"StringProp":"Test"},"testObj2":{"_typeTag":"TestObjectType2","DateProp":"2019-05-12T18:21:41.1639858+02:00","NestedProp":{"_typeTag":"TestObjectType1","IntProp":42,"StringProp":"Test"}},"testStr":"stringValue","MachineName":"MyPC"},"Renderings":{"testStr":[{"Format":"l","Rendering":"stringValue"}]}}\r\n", string, Jsonb
String, "{"testNo":0, "testObject":{"IntProp":42,"StringProp":"Test","_typeTag":"TestObjectType1"}, "testObj2":{"DateProp":"2019-05-12T18:21:41.1639858+02:00","NestedProp":{"IntProp":42,"StringProp":"Test","_typeTag":"TestObjectType1"},"_typeTag":"TestObjectType2"}, "testStr":"stringValue", "MachineName":"MyPC"}", string, Text
Int32, 0, int, Integer
String, "MyPC", string, Text

Further technical details

Npgsql version: 4.0.6 (Latest)
PostgreSQL version: PostgreSQL 11.2 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, Built by MSYS2 project) 4.9.2, 64-bit
Operating system: Windows 10

Other details about my project setup

I'm using Serilog with the Serilog.Sinks.PeriodicBatching sink.
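
Not part of the original report, but two hedged workaround sketches: either bind the value explicitly as timestamp with time zone, or convert the DateTimeOffset to a UTC DateTime before binding (the parameter name reuses @RaiseDate from the query above):

using NpgsqlTypes;

cmd.Parameters.Add(new NpgsqlParameter("RaiseDate", NpgsqlDbType.TimestampTz)
{
    Value = DateTimeOffset.Now
});

// or, converting before binding:
cmd.Parameters.AddWithValue("RaiseDate", DateTimeOffset.Now.UtcDateTime);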

Misleading documentation about connection strings and Unix sockets

If you look at the page on connection string parameters, you will see that the Host parameter says "If the value begins with a slash, it is used as the directory for the Unix-domain socket." This implies, at least to me, that Npgsql can connect over a Unix socket (and indeed I've spent much of the last hour trying to do just that); however, if you go to the security page we find that in fact "Npgsql does not support logging in via Unix-domain sockets (auth method local)."

The connection string page needs to be much clearer about this than it currently is. I would also like to see a link to the security page. I would also suggest adding a note to the description of the username and password parameters making it clear that pg_hba.conf needs to be configured to allow that user to connect to that database using password or md5. It should also be made clear that it is not necessary to change your password parameter if the method is set to md5 (unless it is necessary, in which case you should let us know that fact and how to change the password accordingly).
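
For reference, a hedged sketch of what a Unix-domain socket connection string looks like when it is supported (socket directory and credentials are illustrative, and pg_hba.conf still has to permit the chosen auth method for local connections):

// A sketch only; whether this works depends on the Npgsql version in use.
var connString = "Host=/var/run/postgresql;Username=myuser;Database=mydb";
await using var conn = new NpgsqlConnection(connString);
await conn.OpenAsync();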

Keyword not supported: includeerrordetail

Hi,

I need to use the IncludeErrorDetail parameter in my connection string, and even with the latest Npgsql version (5.0.7 at this time) I kept receiving a Keyword not supported: includeerrordetail error.

Since there is a typo in this title having a trailing t at the end, I even gave that a shot.

Turns out the correct variant is IncludeErrorDetails (that is, with an s ).

Please correct the name in the official documentation and everywhere else it is referenced.

Thanks

Document not blocking the thread after multiplexing execution

[Test]
public async Task GetDataTypeName_enum()
{
	var csb = new NpgsqlConnectionStringBuilder(ConnectionString)
	{
		MaxPoolSize = 1,
		Multiplexing = true
	};
	await using var conn = await OpenConnectionAsync(csb);
	await using var _ = await GetTempTypeName(conn, out var typeName);
	await conn.ExecuteNonQueryAsync($"CREATE TYPE {typeName} AS ENUM ('one')");
	conn.ReloadTypes();
	await using var cmd = new NpgsqlCommand($"SELECT 'one'::{typeName}", conn);
	await using var reader = await cmd.ExecuteReaderAsync(Behavior);
	await reader.ReadAsync();
	Assert.That(reader.GetDataTypeName(0), Is.EqualTo($"public.{typeName}"));
}

@roji @NinoFloris

[Doc]: Item types supported for arrays

Steps to reproduce

I've been reading about arrays, and I was wondering if there is a list of the item types supported for arrays and ranges.

I can't find any... I mean, the documentation can make you imagine that you can actually define an array of arrays, or arrays of IP addresses... which I doubt can work. It's very likely only 1D arrays, right?

But reading further it gets a little fuzzy:

Some places talk about multi-dimensional arrays, others just about 1D arrays of a given base type... it's all confusing, so I would like to know what is supported.

The issue

No clear documentation about what types are supported as array items.
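
For reference, a short sketch of how array values round-trip in practice (table and column names are made up); both one-dimensional and multidimensional .NET arrays of a supported base type can be bound:

await using (var cmd = new NpgsqlCommand("INSERT INTO data (tags, grid) VALUES (@tags, @grid)", conn))
{
    cmd.Parameters.AddWithValue("tags", new[] { "a", "b", "c" });        // text[]
    cmd.Parameters.AddWithValue("grid", new[,] { { 1, 2 }, { 3, 4 } });  // two-dimensional integer[]
    await cmd.ExecuteNonQueryAsync();
}

await using (var cmd = new NpgsqlCommand("SELECT tags FROM data", conn))
await using (var reader = await cmd.ExecuteReaderAsync())
{
    while (await reader.ReadAsync())
        Console.WriteLine(string.Join(", ", reader.GetFieldValue<string[]>(0)));
}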

Further technical details

Npgsql version: 5.0.1.1
PostgreSQL version: PostgreSQL 10.5 (Ubuntu 10.5-2.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit
Operating system: (Client) Microsoft Windows 10 Enterprise

Github Repo Link

When looking around the Npgsql website I sometimes want to go straight to the Github repo to look at code, or find related issues I'm running across. There seems to be no visible way to do this.

Thoughts on adding links to the Github repos?

Could maybe just put the link to the org (https://github.com/npgsql) on the homepage under contributors? Or add a link on each page (npgsql/efcore/ef6) to their respective repos?

Add documentation to NpgsqlLogger to make it easier to create custom loggers.

Version 3.2.5.

The class NpgsqlLogger and some interfaces define methods such as IsEnabled and Log.

It seems that a certain connectorId value means "no connector". Maybe it would be better to make it int?.

But in general, it would be easier for developers writing a logger if some documentation were added to the methods and their arguments.

Right now it requires downloading Npgsql itself and looking at the code to see what a given argument means.

Also, it might be handy if the Log method accepted an object of a LogEvent class or similar. That class might define something like a unique message code to ease identification of what a message means.

Also, it might be handy to check whether bool IsEnabled is ever actually called.
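
A sketch of what a custom logger looks like against that legacy (pre-ILogger) API, with member signatures as I understand them for 3.2.x; everything here should be double-checked against the version in use:

using System;
using Npgsql.Logging;

class ConsoleLoggingProvider : INpgsqlLoggingProvider
{
    public NpgsqlLogger CreateLogger(string name) => new ConsoleLogger(name);
}

class ConsoleLogger : NpgsqlLogger
{
    readonly string _name;
    internal ConsoleLogger(string name) => _name = name;

    public override bool IsEnabled(NpgsqlLogLevel level) => level >= NpgsqlLogLevel.Info;

    // connectorId identifies the physical connection the message relates to, if any.
    public override void Log(NpgsqlLogLevel level, int connectorId, string msg, Exception exception = null)
        => Console.WriteLine($"[{level}] ({_name}, connector {connectorId}) {msg} {exception}");
}

// Registered once at startup, before the first connection is opened:
// NpgsqlLogManager.Provider = new ConsoleLoggingProvider();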

Doc resource on MSSQL->PG migrations

From time to time users port an existing codebase from MSSQL to PostgreSQL and run into issues; it would be useful to have a doc page that lists common pitfalls and issues. The following is a probably-partial list that needs to be completed:

  • Lack of MARS support (#462)
var myQueryable1 = myContext.MyDbset1.OrderByDescending(m => m.Timestamp);
var myQueryable2 = myContext.MyDbset2.Where(m => m.IsActive);

foreach (var q1 in myQueryable1) {
  foreach (var q2 in myQueryable2) { //Exception occurs here
    ...
  }
}
  • No DateTimeOffset database type (timestamptz is not a good representation)
  • Stored functions and procedures differences
    • Cannot return multiple resultsets from functions, need to manually use cursors (#1785)
    • No out params separate from the resultset
    • If you use the new PG 11 stored procedures, you have to call them manually, as there is no CommandType.StoredProcedure support (see the sketch after this list)
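
A minimal sketch of the "call them manually" point above: a PostgreSQL stored procedure is invoked with a plain CALL text command rather than via CommandType.StoredProcedure (procedure name and parameter are illustrative):

await using var cmd = new NpgsqlCommand("CALL my_procedure(@p1)", conn);
cmd.Parameters.AddWithValue("p1", 42);
await cmd.ExecuteNonQueryAsync();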

Clarification of SSL parameters

I've been reviewing https://github.com/npgsql/doc/blob/main/conceptual/Npgsql/security.md/#L1 and I'm not sure of one key point: are the environment variables/connection string parameters meant to contain only file paths, or can the contents of the certs themselves be used as the value? I'm on a sub-5.0 version.

Related: is the Client Certificate Key not exposed as an environment variable? And if not, can it be used with the root and client certs being specified through the environment variables?

Contribution info on EFCore.PG point to Npgsql

Our release notes page for EFCore.PG 3.x contains contribution information, but it seems to refer to Npgsql instead of EFCore.PG.

Note that to me it makes sense to unify 3.0 and 3.1 into the same 3.x page (including all their patch version contribution info) - 3.0 is out of support and there's no reason for anyone to use it anymore (plus 3.1 doesn't really improve that much over it).

@Brar do you think you can take a look at this?

Full text search examples with EF Core

I'm trying to do the following but cannot find any help:

  • sorted by relevance
  • different weight per property (title, description)
  • ignore accents

I have tried many ways and only managed to make this work (which only returns matches):

Entries = Db.Posts.Where(p => p.SearchVector.Matches(Query));

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Post>()
        .HasGeneratedTsVectorColumn(
            p => p.SearchVector, "english",
            p => new { p.Name, p.Description })
        .HasIndex(p => p.SearchVector).HasMethod("GIN");
}

I think it would be worth adding this to the docs.
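
Not an authoritative answer, but a hedged sketch of relevance-sorted matching using the same property names as above (Rank and ToTsQuery come from the EF Core provider); per-property weighting and accent-insensitive search would additionally need SetWeight on the tsvector and the unaccent extension, which are not shown here:

// Rank results by ts_rank against the same query used for matching.
Entries = Db.Posts
    .Where(p => p.SearchVector.Matches(EF.Functions.ToTsQuery("english", Query)))
    .OrderByDescending(p => p.SearchVector.Rank(EF.Functions.ToTsQuery("english", Query)))
    .ToList();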

Missing Npgsql Windows installer referenced in doc

On https://www.npgsql.org/doc/installation.html#gac-installation, there is the text:

"... you can download the Npgsql Windows installer from our Github releases page"

However, there is no Npgsql Windows installer, only source releases.

  1. If a binary Windows installer is available somewhere, the link should be updated to point to it.
  2. If no binary exists but the Windows installer is still supported, perhaps this should be a separate project, with a link to that?
  3. If the Npgsql Windows installer is no longer supported, I think Microsoft would like to know that, since they reference it on: https://support.microsoft.com/en-us/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a

Clarify transactional behaviour for bulk copies

I want to import multiple CSV files within a single transaction into my Postgres database, i.e., if any of the CSV file imports fails, all imports should be rolled back.

Bulk copy for CSV is explained here. However, I am not sure if this happens all within the same transaction, especially when using different writers. So let's assume I do this:

NpgsqlTransaction transaction = conn.BeginTransaction();
using (var fooWriter = conn.BeginTextImport("COPY foo (field_text, field_int4) FROM STDIN")) {
    fooWriter.Write("HELLO FOO\t1\n");
    fooWriter.Write("GOODBYE\t2\n");
}
using (var barWriter = conn.BeginTextImport("COPY bar (field_text, field_int4) FROM STDIN")) {
    barWriter.Write("HELLO BAR\t1\n");
    barWriter.Write("GOODBYE\t2\n");
}
transaction.Commit();

in order to import data to a table called foo and a table called bar.

When either the import to bar fails, or transaction.Rollback() is called, is the import to foo rolled back, too? I think it would help to clarify this in the docs.

Furthermore, for the Binary Copy the explanation about writer.Complete():

When you've finished, you must call Complete() to save the data; not doing so will cause the COPY operation to be rolled back when the writer is disposed (this behavior is important in case an exception is thrown).

seems to imply that either a commit or a rollback happens already when the Writer is disposed. How does this relate to my question above and what is the relationship between Writer lifecycle / scope and the NpgsqlTransaction instance in the context of binary bulk copies?
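
For what it's worth, a hedged sketch of the binary COPY path under an explicit transaction, reusing the table and columns from above; the COPY runs on the same connection, so it should participate in whatever transaction is open there, while Complete() only ends the COPY operation itself:

var transaction = conn.BeginTransaction();

using (var writer = conn.BeginBinaryImport("COPY foo (field_text, field_int4) FROM STDIN (FORMAT BINARY)"))
{
    writer.StartRow();
    writer.Write("HELLO FOO");
    writer.Write(1);
    writer.Complete();   // without this, the COPY itself is rolled back when the writer is disposed
}

transaction.Commit();    // the imported rows only become permanent here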

Prepare Documentation Needs Improvement

https://www.npgsql.org/doc/prepare.html shows an example of two UPDATE statements. It explains that the individual statements are prepared. However, it doesn't clearly state if using multiple statements with a single prepare in one command is actually the best practice.

What if I want to do 200 Updates or 200 Inserts? Do I put all 200 Updates in a single parameterized query and call Prepare() once followed by ExecuteQuery()? Is this the optimal way to do it?

Would it perform differently if I created 1 single Update statement in one query, prepared it, executed.... and then put the remaining 199 in a second query without preparing it? I feel like it would help if the documentation explained better what's happening under the hood.

Another example on the same page demonstrates Preparing a query, Executing it, then changing the parameters and Executing it again. Is this meant to be in a loop?? The code is lacking explanation.

Maybe some guidance on how to use prepare based on what KIND of operation(s) you're doing would be helpful.
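
A short sketch of the prepare-once, execute-many pattern the page seems to be hinting at (table, column and parameter names are made up); the same command object is prepared a single time and then re-executed with new parameter values:

using NpgsqlTypes;

await using var cmd = new NpgsqlCommand("INSERT INTO data (some_field) VALUES (@p)", conn);
var p = cmd.Parameters.Add("p", NpgsqlDbType.Text);
await cmd.PrepareAsync();

for (var i = 0; i < 200; i++)
{
    p.Value = $"row {i}";
    await cmd.ExecuteNonQueryAsync();   // reuses the same prepared statement
}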

Detailed pooling page

We should have a Pooling page under "advanced topics", which details how the pool works and all the various tweaks we support.
