
anorm's Introduction

Anorm


Anorm is a simple data access layer that uses plain SQL to interact with the database and provides an API to parse and transform the resulting datasets.

Usage

In a project built with sbt, the Anorm dependency can be added as follows:

libraryDependencies ++= Seq(
  "org.playframework.anorm" %% "anorm" % ReplaceByAnormVersion)

Build manually

Anorm can be built from this source repository.

sbt publishLocal

To run the tests, use:

sbt test

Documentation

To run the documentation server, run:

sbt docs/run

To test the documentation code samples, run:

sbt docs/test

Releasing a new version

See https://github.com/playframework/.github/blob/main/RELEASING.md

anorm's People

Contributors

alexitc, cchantep, dependabot[bot], dotta, gmethvin, guillaumebort, huntc, ihostage, jroper, kaplanbar, ljungqvist, marcospereira, mkurz, othp, ph2734, pschichtel, pvlugter, richdougherty, rrramiro, sadache, scala-steward, schmitch, sullis, toastbroad, tommycli, vdebergue, waxzce, willtrnr, wsargent, xuwei-k


anorm's Issues

license

This project is missing a license, so it's hard to tell whether it's "safe" to use. What license do you plan to use?

Release for Scala 2.13.0-M3

Noticed that Anorm does not have a 2.13.0-M3 build while trying to add support for it in Omnidoc:

playframework/omnidoc#20

We may want to remove the duplication of publishing the Anorm docs there (since they are already published here), but applications using Play and Anorm may still need Scala 2.13 support.

java.time.LocalDate is not accepted as a statement parameter

java.time.LocalDate support was introduced in 2.5, but creating a statement with this type as a parameter, like:

SQL( "update some_table set some_date = {date}" )
  .on( 'date -> someLocalDateVariable )
  .executeUpdate()

...fails with the following error:
type mismatch; found : (Symbol, java.time.LocalDate) required: anorm.NamedParameter

It seems that the JavaTimeToStatement trait does not support it yet.
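If the instance is indeed missing in your Anorm version, a custom ToStatement can bridge the gap in the meantime. A minimal sketch of such a workaround (not the library's own fix), binding through java.sql.Date:

import java.time.LocalDate
import anorm.ToStatement

// Workaround sketch: bind java.time.LocalDate through java.sql.Date.
implicit val localDateToStatement: ToStatement[LocalDate] =
  new ToStatement[LocalDate] {
    def set(s: java.sql.PreparedStatement, index: Int, v: LocalDate): Unit =
      s.setDate(index, java.sql.Date.valueOf(v))
  }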

Anorm 2.6.3 depends on scala-arm 2.1-SNAPSHOT, which does not exist

We recently attempted to build one of our projects against the new Anorm 2.6.3 and found that we were unable to resolve one of its dependencies:

[error] coursier.lmcoursier.ResolutionException: Encountered 1 error(s) in dependency resolution:
[error]     com.jsuereth:scala-arm_2.12:2.1-SNAPSHOT:

It appears that there is no scala-arm 2.1-SNAPSHOT release. On further investigation, it appears that a fork was published to this repository: https://github.com/cchantep/tatami

We could add a resolver for that repository to our build.sbt, but for now we're probably just going to stick with Anorm 2.6.2, since it frankly feels kind of weird to depend on some undocumented fork of an otherwise unmaintained library. Is there a plan to remove the scala-arm dependency at some point, or at least to fork it properly and open source any of the changes?

Also, the README and other documentation should be updated to reflect that this new resolver is needed. Otherwise people are going to have trouble resolving this.
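For reference, adding such a resolver to build.sbt would look like the sketch below; the URL is purely a placeholder and would need to point at wherever the fork's artifacts are actually hosted:

// Hypothetical resolver entry; the URL is illustrative only.
resolvers += "tatami-fork" at "https://example.org/tatami/releases"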

Anorm AkkaStream throwing NPE

Are you looking for help?

I posted this issue in Play Framework Google Group but didn't get a response.

https://groups.google.com/forum/#!topic/play-framework/OIscPmFpH5I

Anorm Version (2.5.x / etc)

2.5.3

Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10)

Linux ubuntu 4.4.0-66-generic #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

JDK (Oracle 1.8.0_72, OpenJDK 1.8.x, Azul Zing)

openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

Library Dependencies

Akka Streams version: 2.4.16
Play version: 2.4.3
DB: MySQL 5.6 on Amazon RDS
mysql-connector-java version: 5.1.38

Expected Behavior

  1. Materialize a source with Akka Streams Anorm integration.

Actual Behavior

I got the following exception when running an Akka Stream using Anorm as the Source. I've seen this happen twice so far. Most of the time it works.

java.lang.NullPointerException: null
        at anorm.AkkaStream$ResultSource$$anon$1$$anonfun$release$1.apply(AkkaStream.scala:128)
        at anorm.AkkaStream$ResultSource$$anon$1$$anonfun$release$1.apply(AkkaStream.scala:127)
        at scala.Option.foreach(Option.scala:257)
        at anorm.AkkaStream$ResultSource$$anon$1.release(AkkaStream.scala:127)
        at anorm.AkkaStream$ResultSource$$anon$1.postStop(AkkaStream.scala:116)
        at akka.stream.impl.fusing.GraphInterpreter.finalizeStage(GraphInterpreter.scala:801)
        at akka.stream.impl.fusing.GraphInterpreter.afterStageHasRun(GraphInterpreter.scala:780)
        at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:664)
        at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
        at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:381)
        at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:538)
        at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:586)
        at akka.actor.Actor$class.aroundPreStart(Actor.scala:504)
        at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:529)
        at akka.actor.ActorCell.create(ActorCell.scala:590)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
        at akka.dispatch.Mailbox.run(Mailbox.scala:223)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

My use case is very basic:

  def taskSource()(implicit m: Materializer): Source[Task, Future[Int]]#ReprMat[Task, Unit] = {
    implicit val connection = db.getConnection()

    val sql = SQL("SELECT * FROM tasks")
    AkkaStream.source(sql, Task.parser).mapMaterializedValue(_.onComplete { _ =>
      connection.close()
    })
  }

I believe the problematic code is that ResultSet.getStatement() is returning null (not sure how this happens), and then the null is wrapped in a Some() and hence the NPE happens.

        override def postStop() = release()

        private def release() {
          val stmt: Option[java.sql.Statement] = {
            if (resultSet != null && !resultSet.isClosed) {
              val s = resultSet.getStatement
              resultSet.close()
              Some(s)                  // <---- s is null somehow
            } else None
          }

          stmt.foreach { s =>
            if (!s.isClosed) s.close() // <---- NPE thrown on this line
          }
        }

The Some(s) should be changed to an Option(s), since s can be null.
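Applied to the excerpt above, the suggested fix would look like this sketch (against the quoted code, not a released patch):

        private def release(): Unit = {
          val stmt: Option[java.sql.Statement] = {
            if (resultSet != null && !resultSet.isClosed) {
              val s = resultSet.getStatement
              resultSet.close()
              Option(s)                // Option(null) == None, avoiding the NPE
            } else None
          }

          stmt.foreach { s =>
            if (!s.isClosed) s.close()
          }
        }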


Scapegoat Compile Dependency?

It looks like com.sksamuel.scapegoat » scalac-scapegoat-plugin_2.12 is now a compile dependency.
I guess it should only be used for analysis and not be included in the final build, or am I wrong?

use a different Calendar for Date/Timestamp/Time get & set

Is there a way to set the Calendar used by Anorm when dealing with Date objects?

I believe that a method such as rs.getTime(x) will use the default java.util.Calendar for the local timezone, but what if I would like to use a Calendar set to UTC?
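At the raw JDBC level, this is what the Calendar overload of the getter does; a minimal sketch (assuming rs is a java.sql.ResultSet already in scope):

import java.util.{Calendar, TimeZone}

// Ask the driver to interpret the stored value in UTC instead of the
// default local-timezone Calendar.
val utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"))
val time: java.sql.Time = rs.getTime(1, utc)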

Support aliasing in macros

Either via field annotations or some other mechanism, allow aliasing the columns of nested parsers used by a macro-generated parser. This is especially useful for a parser of a type with a self reference.

Issue with Lower case Alias names

I see you guys are going to release v2.5.1, and I wanted to check that this version would work for us and allow us to upgrade (see #94). The good news is that it works fine with our current implementation, but if we change our aliases back to lower case, we run into an issue.

We changed to upper case due to this:
http://stackoverflow.com/a/36551747/1413470

Seems like the solution for this would be to upper-case the a parameter here:
https://github.com/playframework/anorm/blob/master/core/src/main/scala/anorm/Row.scala#L127
as all the column names in the metaData are uppercase.

An exception or error caused a run to abort: ColumnName(test_users.test_users_email,Some(user_email)) 
anorm.AnormException: ColumnName(test_users.test_users_email,Some(user_email))

How to insert curly braces when using Anorm?

Hi,

I posted that question on Stackoverflow a few days ago, but no answer. So trying here.

I've been using Anorm for a while, and now comes the moment when I need to insert curly braces. Curly braces are special characters in Anorm, as they are used to indicate placeholders to be replaced by dynamic values.

I've tried escaping curly braces with \, but haven't managed to make the SQL query work so far.

Can someone provide an example of a SQL query inserting curly braces?
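One workaround sketch (assuming an implicit java.sql.Connection in scope; the docs table is hypothetical): bind the brace-containing text as a parameter, so it is passed through JDBC as a plain value and Anorm never parses the braces as a placeholder.

// The braces inside `payload` are never seen by Anorm's statement parser.
val payload = """{"key": "value"}"""

SQL("INSERT INTO docs(payload) VALUES ({payload})")
  .on("payload" -> payload)
  .executeUpdate()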

The executeInsert can't return a string type primary key

Anorm Version (2.5.x / etc)

2.5.0

Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10)

Windows 10

JDK (Oracle 1.8.0_72, OpenJDK 1.8.x, Azul Zing)

scala 2.11
play 2.5.9

Actual Behavior


val list = SQL"INSERT INTO wordnote.word (word,translate) VALUES ({word}, {translate})"
              .on("word" -> input.word, "translate" -> input.translate)
              .executeInsert(scalar[String].singleOpt)

The result is always None; I changed to a few parsers like scalar[String].*, to no avail. I'm certain the primary key in the database is a character type.
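A possible workaround, assuming a later Anorm version where executeInsert1 is available: name the generated column explicitly, so the driver is asked for the string key rather than the default generated keys.

// Sketch only; word and translate are assumed to be in scope, and depending
// on the Anorm version the result may be wrapped in a Try.
val generated =
  SQL"INSERT INTO wordnote.word (word, translate) VALUES ($word, $translate)"
    .executeInsert1("word")(scalar[String].singleOpt)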

Allow to set the transaction isolation level

At the moment there is no way to specify the transaction isolation level for a single transaction. It would be great to have a syntax like this:

database.withTransaction(TransactionIsolationLevel.Serializable) { implicit conn => ... }

The code changes to support that are trivial, would you accept a pull request?
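Until such an API exists, a manual sketch of the same effect (assuming db is a play.api.db.Database): set the isolation level on the connection and manage the transaction by hand.

import java.sql.Connection

db.withConnection { implicit conn =>
  conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE)
  conn.setAutoCommit(false)
  try {
    // ... run SQL statements here ...
    conn.commit()
  } catch {
    case e: Throwable =>
      conn.rollback()
      throw e
  }
}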

Short and Integer Mapping

According to the doc (and the code), a Short can't be parsed as an Int.
As mentioned in the doc, that could cause compatibility issues between H2 and MySQL.

I don't understand the reason behind that, especially knowing that an Int can be parsed as a Long.

The workaround is either to use BigDecimal or to define a custom parser.
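The custom-parser workaround could look like this sketch, widening a SMALLINT (Short) to Int where an Int is expected:

import anorm.{Column, TypeDoesNotMatch}

// Accept both Short and Int JDBC values for Int fields.
implicit val shortOrIntColumn: Column[Int] = Column.nonNull { (value, meta) =>
  value match {
    case s: Short => Right(s.toInt)
    case i: Int   => Right(i)
    case other    => Left(TypeDoesNotMatch(
      s"Cannot convert $other (${other.getClass}) to Int for column ${meta.column}"))
  }
}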

[Feature] Add a concat / + method to SimpleSql[T]

It would be great to have a + method in SimpleSql[T], so we could write something like this:

(SQL"SELECT foo FROM test1 WHERE id = $id" + SQL" AND bar = $bar").as(SqlParser.scalar[String].single)

This could be really helpful when concatenating multiple statements together.
I actually made an implementation:

  def +(other: SimpleSql[T]): SimpleSql[T] = {
    val tokens = (ss.sql.stmt.tokens ++ other.sql.stmt.tokens).zipWithIndex.map {
      case (tokenGroup, idx) => tokenGroup.copy(placeholder = tokenGroup.placeholder.map(_ => s"_$idx") )
    }
    val names = tokens.map(_.placeholder).filter(_.isDefined).map(_.get)
    val params = ss.params.valuesIterator.toList ++ other.params.valuesIterator.toList
    val mapped = (names zip params).map{ case (name, param) => name -> param}(collection.breakOut): Map[String, ParameterValue]

    SimpleSql(SqlQuery.prepare(TokenizedStatement(tokens, names), names), mapped, ss.defaultParser)
  }

However, my sbt somehow won't resolve all the dependencies.

And I wondered where to add the test:

  "SimpleSql"{
    "should be concatable" in withH2Database { implicit c =>
      createTest1Table()
      SQL"""insert into test1(id, foo, bar) values (${10L}, ${"Hello"}, ${20})""".execute()

      val id = 10L
      val bar = 20L
      val foo = (SQL"SELECT foo FROM test1 WHERE id = $id" + SQL" AND bar = $bar").as(SqlParser.scalar[String].single)
      foo must be_==("Hello")
    }
  }

I couldn't test it, since my test dependencies won't resolve.

Use table two times in query

Table currency

id  | title
-------------------------------
USD | United States dollar
BYR | Belarusian ruble

Table rate

from | to  
--------------
USD  | BYR

I try to use one table two times in a query:

SELECT f.id, f.title, t.id, t.title
FROM rate JOIN currency AS f ON rate.from = f.id
          JOIN currency AS t ON rate.to = t.id

and try to parse it using this parser:

val parser: RowParser[Rate] = {
  get[String]("f.id")~
  get[String]("f.title")~
  get[String]("t.id")~
  get[String]("t.title") map {
  case f_id~f_title~t_id~t_title =>
    Rate(Currency(f_id, f_title), Currency(t_id, t_title))
  }
}

I receive this exception:

anorm.AnormException: 'f.id' not found, available columns: currency.id, id, currency.title, title, currency.id, id, currency.title, title

If I instead use aliased column names, as in:

SELECT f.id AS f_id, f.title AS f_title, t.id AS t_id, t.title AS t_title
FROM rate JOIN currency AS f ON rate.from = f.id
          JOIN currency AS t ON rate.to = t.id

and try to parse it using this parser:

val parser: RowParser[Rate] = {
  get[String]("f_id")~
  get[String]("f_title")~
  get[String]("t_id")~
  get[String]("t_title") map {
  case f_id~f_title~t_id~t_title =>
    Rate(Currency(f_id, f_title), Currency(t_id, t_title))
  }
}

Result is

Rate(Currency(BYR,Belarusian ruble),Currency(BYR,Belarusian ruble))

i.e. the last columns, t_id and t_title, are used.
If I swap the columns in the query, then the result is

Rate(Currency(USD,United States dollar),Currency(USD,United States dollar))

I am using Anorm 2.5.0.

Connection in Column.nonNull

It would be great if there were a way to get the current connection inside a Column.nonNull block (or maybe consider a Column.nonNullWithConnection variant).

At the moment there are various conversions that could be made directly by the driver if somebody wants their own Column types. Consider, for instance, supporting a range type with LocalDateTime, where PGConnection actually has a converter from the String value to LocalDateTime: https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/main/java/org/postgresql/jdbc/TimestampUtils.java#L425

Your error messages suck

How am I ever supposed to debug the following error:

anorm.AnormException: 'body' not found, available columns: article.id, id, article.time_stamp, time_stamp, article.body, body, article.scrape, scrape, article.scrape_temp, scrape_temp, article.title, title, article.web_address, web_address

I have a future that looks like this:

var insert_row = Future{
      database.db.withConnection { implicit connection =>
        val insert_id = SQL("""
          INSERT INTO ARTICLE(body, scrape, scrape_temp, title, web_address, time_stamp)
          VALUES ({body}, {scrape}, {scrape_temp}, {title}, {web_address}, {time_stamp});
        """)
        .on("body"->"", "scrape"->"", "scrape_temp"->"", "title"->"", "web_address"->web_address, "time_stamp"->article_timestamp.toString)
        .executeUpdate()

        val insert_val_LINKTABLE = SQL("""
          INSERT INTO LINKTABLE(article_id)
          VALUES ({article_id});
        """)
        .on("article_id"->insert_id)
        .executeUpdate()

        val returnVal = new WriteGenericClass(
          SQL("""
            SELECT * FROM ARTICLE
            ORDER BY id DESC
            LIMIT 50;
          """)
          .as(parser.*), 
          SQL("SELECT COUNT(*) FROM ARTICLE")
          .as(scalar[Long].single), 
          "NO ERROR"
        )
        returnVal;
      }
    }(ec)

    val mapped: Future[WriteGenericClass[T]] = insert_row.transform {
      case scala.util.Success(e) => scala.util.Try({
        println("inside success of mapped and e: ");
        println(e)
        e.asInstanceOf[WriteGenericClass[T]]
      })
      case scala.util.Failure(e) => scala.util.Try({
        println("inside failure of mapped (in insert_article in protectedCommands) and e: ");
        println(e)
        new WriteGenericClass(List(), 0.toLong, e.toString)
      })
    }
    println("value of mapped")
    println(mapped)
    mapped

And as far as I can tell, the error is somewhere in this first SQL query:

val insert_id = SQL("""
          INSERT INTO ARTICLE(body, scrape, scrape_temp, title, web_address, time_stamp)
          VALUES ({body}, {scrape}, {scrape_temp}, {title}, {web_address}, {time_stamp});
        """)
        .on("body"->"", "scrape"->"", "scrape_temp"->"", "title"->"", "web_address"->web_address, "time_stamp"->article_timestamp.toString)
        .executeUpdate()

Which looks fine.

I have zero to go on with this message. "body" is *wrong*...great...

Support of ZonedDateTime with H2

Anorm Version (2.5.x / etc)

2.6.5

Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10)

macOS 10.15.2

JDK (Oracle 1.8.0_72, OpenJDK 1.8.x, Azul Zing)

openjdk version "1.8.0_232"
OpenJDK Runtime Environment (Zulu 8.42.0.21-CA-macosx) (build 1.8.0_232-b18)
OpenJDK 64-Bit Server VM (Zulu 8.42.0.21-CA-macosx) (build 25.232-b18, mixed mode)

Library Dependencies

  • Play: 2.8.0
  • H2: 1.4.200

Expected Behavior

  1. Parse an H2 TIMESTAMP WITH TIMEZONE field to Java ZonedDateTime

Actual Behavior

I got the following exception when I run a SELECT query:

anorm.AnormException: TypeDoesNotMatch(Cannot convert 2020-01-19 12:52:47.924+01: org.h2.api.TimestampWithTimeZone to Java8 ZonedDateTime for column ColumnName(user.login_date,Some(login_date)))

The table definition is:

create table user
(
    id         identity primary key,
    username   varchar(32)             not null,
    login_date timestamp with timezone not null
)

And my query is:

def findLoginDate(userID: Long): Future[Option[ZonedDateTime]] =
  Future(db.withConnection { implicit connection =>
    SQL"""
         select login_date
         from user
         where id = $userID
       """.as(scalar[ZonedDateTime].singleOpt)
  })(ec)

Release for Scala 2.13

Now that Scala 2.13.0 is out, and there is a Play Framework release that supports it, Anorm is one of the few dependencies blocking me from upgrading to 2.13.

I took a look at upgrading in the build.sbt file, but pretty quickly ran into a failed dependency on sbt-scapegoat.

Let me know if I can help (such as by filing issues for dependencies that need to be upgraded).

Handling deprecation of play.api.db.DB in Anorm 2.5

Hi,

I noticed that my usual Anorm code now triggers some deprecation warnings with Anorm 2.5. That code is based on the latest Anorm documentation, which is in fact outdated for Anorm 2.5, due to its usage of play.api.db.DB.

My old code, triggering the deprecation warnings:

import play.api.db.DB

object MyDatabaseService {
    def addSomeData {
        DB.withConnection { implicit c =>
            /* ... */
        }
    }
}

My updated code, no deprecation warnings:

import javax.inject.{Inject, Singleton}
import play.api.db.Database

@Singleton
class MyDatabaseService @Inject() (db: Database) {
    def addSomeData {
        db.withConnection { implicit c =>
            /* ... */
        }
    }
}

When I noticed this problem, I first planned to update the doc and create a pull request on GitHub, but I didn't manage to do it. The main issue is that I don't know how to update https://github.com/playframework/anorm/blob/master/docs/manual/working/scalaGuide/main/sql/code/ScalaAnorm.scala to use the new API.

So instead of creating a pull request, I opened this issue. Hope it helps still!

Why is BatchSql(String, Seq[Seq[NamedParameter]]) deprecated? The suggested alternative seems more fragile.

What is the reasoning behind deprecating the apply method in BatchSql that accepts a Seq[Seq[NamedParameter]]?

The alternative signature

def apply(sql: String, first: Seq[NamedParameter], other: Seq[NamedParameter]*)

seems very awkward and error-prone. For instance, where before I could do

stuff.map( x => Seq[NamedParameter]('id -> x.id, 'a -> x.a) )

this now triggers the deprecation warning, but the intuitive "fix" of changing map to flatMap is actually incorrect despite satisfying the type checker (it results in only one record being generated). And there seems to be no straightforward way of feeding a Seq[Seq[NamedParameter]] to the new apply, as the :_* annotation does not work because there are in fact two arguments.
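For what it's worth, one way to feed a Seq[Seq[NamedParameter]] to the new apply is to split the head off, as in this sketch (assuming stuff is non-empty; the statement is illustrative):

val params: Seq[Seq[NamedParameter]] =
  stuff.map(x => Seq[NamedParameter]("id" -> x.id, "a" -> x.a))

val batch = BatchSql(
  "INSERT INTO t(id, a) VALUES ({id}, {a})",
  params.head,
  params.tail: _*
)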

Publish 2.5.3

Publish a minor release 2.5.3 from the current master, including the Scala 2.12 artifact.

RowParsers on multiple joins of the same table don't resolve to the expected field in the result.

I have a pair of MySQL tables:

Table A { "id" varchar }
Table B { "a_ref" varchar, "discriminator" number, "value" varchar }

populated with:

A {
'abc'
}
B {
'abc', 0, 'a-0'
'abc', 1, 'a-1'
}

I use the following scala code:

SQL("""
select A.id, c.value cval, d.value dval
from A, B c, B d
where c.a_ref = A.id and c.discriminator = 0
and d.a_ref = A.id and d.discriminator = 1
""").as( str("id") ~ str("cval") ~ str("dval") map(flatten) *)

What I am getting back is:

List((abc,a-0,a-0))

What I am expecting is:

List((abc,a-0,a-1))

If I wrap the query in an inner query, this problem goes away. For example:

SQL("""
select id, cval, dval from (
select A.id, c.value cval, d.value dval
from A, B c, B d
where c.a_ref = A.id and c.discriminator = 0
and d.a_ref = A.id and d.discriminator = 1) t
""").as( str("id") ~ str("cval") ~ str("dval") map(flatten) *)

I am currently using version 2.4.0 of anorm.

Thank you for your time. Best regards.

Document changes in member visibility since 2.3

In a4c9031, the visibility of Row.metaData was changed from public to private[anorm]. However, the 2.4 Migration Guide makes no mention of this change, which was breaking in my case.

Since it was exposed as a public member in previous versions of the API, I imagine I'm not the only one who encountered this issue when upgrading to Play 2.4. It would be helpful to add a note, however brief, that explains the change and how best to refactor around it.

Mysql JSON_UNQUOTE support

JSON_UNQUOTE is a useful MySQL function for extracting values from JSON without double quotes (JSON_EXTRACT returns them with the quotes). It appears that Anorm does not support its result type. We got the following error

anorm.AnormException: TypeDoesNotMatch(Cannot convert [B@2845af: [B to String for column ColumnName(.JSON_UNQUOTE(payload->'$.data[0]')),Some(JSON_UNQUOTE(payload->'$.data[0]')))))

for a selection query like

SELECT JSON_UNQUOTE(payload->'$.data[0]')

It would be appreciated if you could consider adding support for it. Thank you!
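Based on the error above showing the driver returning a byte array ([B) where a String is expected, a workaround sketch is a custom Column that accepts the bytes:

import anorm.{Column, TypeDoesNotMatch}

// Decode byte-array values (as returned for JSON_UNQUOTE) as UTF-8 text.
implicit val bytesToStringColumn: Column[String] = Column.nonNull { (value, meta) =>
  value match {
    case s: String          => Right(s)
    case bytes: Array[Byte] => Right(new String(bytes, "UTF-8"))
    case other              => Left(TypeDoesNotMatch(
      s"Cannot convert $other to String for column ${meta.column}"))
  }
}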

Support for async database access

I do understand that Anorm builds on top of JDBC, which is inherently synchronous, just like most of the underlying drivers and protocols. However, some databases, such as PostgreSQL, natively support asynchronous queries, and there are new database APIs like jasync-sql coming up that try to support this.
I like the simplicity of Anorm, but I'd also like to use the asynchronous query support of PostgreSQL.

Are there any plans for supporting asynchronous querying?

sealed ResultSetParser

Hi there,
When upgrading from Play 2.3 to 2.4 I encountered some issues with my code that was using ResultSetParser. In particular, I had a utility function

def vector[A](parser: RowParser[A]): ResultSetParser[Vector[A]] =
  new ResultSetParser[Vector[A]] { /* ... */ }

since I prefer Vector over List.

Also, I implemented a custom ResultSetParser in order to read the returned id from SQL("...").executeUpdate when doing, on MySQL, an "insert into ... on duplicate key update ...".

I'm wondering, now that ResultSetParser has gone private, is there a way to provide a custom ResultSetParser?
Thanks for your help
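One workaround sketch, now that ResultSetParser is sealed: parse to a List with the built-in * combinator and convert afterwards, instead of hand-writing a ResultSetParser.

import java.sql.Connection
import anorm.{Row, RowParser, SimpleSql}

// Collect all rows as a Vector without a custom ResultSetParser.
def asVector[A](query: SimpleSql[Row], parser: RowParser[A])(implicit c: Connection): Vector[A] =
  query.as(parser.*).toVector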

columnsDictionary contains null value for aliased columns with self join

Attempting to upgrade from Anorm 2.4.0 to 2.5.0, we ran into this issue.

For all aliased columns we get a null value inside columnsDictionary in Row.scala#L128, resulting in the exception below:

An exception or error caused a run to abort: ColumnName(test_users.test_users_email,Some(USER_EMAIL)) anorm.AnormException: ColumnName(test_users.test_users_email,Some(USER_EMAIL))
  private[anorm] def get(a: String): MayErr[SqlRequestError, (Any, MetaDataItem)] = for {
    m <- MayErr(metaData.get(a).toRight(ColumnNotFound(a, this)))
    data <- MayErr(columnsDictionary.get(m.column.qualified.toUpperCase()).
      toRight(ColumnNotFound(m.column.qualified, metaData.availableColumns)))

  } yield (data, m)

This issue stems from the workaround we used in: http://stackoverflow.com/questions/36431356/unexpectednullablefound-on-left-join

Likely solved here, but still an issue in 2.5:
https://github.com/playframework/anorm/blob/2.4.x/core/src/main/scala/anorm/Row.scala#L128
&
https://github.com/playframework/anorm/blob/master/core/src/main/scala/anorm/Row.scala#L128

val simple = {
get[Long]("test_users_id") ~
  get[String]("USER_EMAIL") ~
  get[String]("test_users_first_name") ~
  get[String]("test_users_second_name") ~
  get[String]("USER_COMPANY") ~
  get[String]("useas") ~
  get[Option[String]]("Stripe_CUSTOMER_ID") ~
  get[Boolean]("is_invoice_customer") ~
  get[Option[String]]("test_users_invoice_email") ~
  get[Array[Byte]]("password_hash") ~
  get[Array[Byte]]("salt") ~
  get[String]("invoice_address") ~
  get[Option[String]]("time_zone") ~
  get[DateTime]("USER_LAST_PAYMENT_DATE") ~
  get[DateTime]("sign_up_date") ~
  get[Option[String]]("USER_SPIKE_ROLE") ~
  get[Option[Int]]("num_team") ~
  get[Int]("paid_for_team") ~
  get[Option[Long]]("test_users_admin_id") ~
  get[Option[DateTime]]("ADMIN_LAST_PAYMENT_DATE") ~
  get[Option[String]]("ADMIN_SPIKE_ROLE") ~
  get[Option[DateTime]]("FIRST_PAYMENT_DATE") ~
  get[Option[String]]("ADMIN_EMAIL") ~
  get[Option[DateTime]]("ADMIN_FIRST_PAYMENT_DATE") ~
  get[Option[String]]("utm_source") ~
  get[Option[String]]("utm_medium") ~
  get[Option[String]]("utm_campaign") ~
  get[Option[String]]("sf_exec") ~
  get[Option[Long]]("syndication_access_manager") ~
  get[Option[String]]("vat_number") ~
  get[Boolean]("bounced_email") ~
  get[Option[String]]("ADMIN_COMPANY") map {
  case testUsersId ~ userEmail ~ testUsersFirstName ~ testUsersSecondName
    ~ company ~ useAs ~ stripeCustomerId ~ isInvoiceCustomer ~ xeroContactEmail ~ passwordHash ~ salt ~
    invoiceAddress ~ timeZone ~ lastPaymentDate ~ signUpDate ~ testRole ~ numTeam ~ paidForTeam ~ testUsersAdminId ~ adminLastPaymentDate ~ adminTestRole ~ firstPaymentDate ~
    adminEmail ~ adminFirstPaymentDate ~ utmSource ~ utmMedium ~ utmCampaign ~ sfExec ~ syndicationAccessManagerId ~ vatNumber ~ bouncedEmail ~ adminCompany =>
    TestUser(testUsersId, userEmail, testUsersFirstName,
      testUsersSecondName, company, useAs, invoiceAddress,
      stripeCustomerId,
      isInvoiceCustomer,
      xeroContactEmail,
      passwordHash.toVector, salt.toVector,
      teamLastPaymentDate(lastPaymentDate, adminLastPaymentDate, testUsersAdminId, stripeCustomerId), new DateTime(signUpDate, DateTimeZone.UTC),
      testRoleFromTeam(testRole, testUsersAdminId, stripeCustomerId, adminTestRole), paidByOtherUser(testUsersAdminId, stripeCustomerId), numTeam.getOrElse[Int](0), //
      getSeatCount(paidForTeam), timeZone.flatMap(getSafeTimeZone),
      firstPaymentDateFromTeam(testUsersAdminId, stripeCustomerId, firstPaymentDate, adminFirstPaymentDate), adminEmail, utmSource, utmMedium, utmCampaign,
      sfExec, syndicationAccessManagerId, vatNumber, getCompanyCustomer(testUsersAdminId, adminCompany, adminFirstPaymentDate, adminTestRole, testUsersId, company, firstPaymentDate, testRole), bouncedEmail)
}
  }

   def createCompanyCustomer(testUsersAdminId: Option[Long], adminCompany: Option[String], adminFirstPaymentDate: Option[DateTime], adminTestRole: Option[String]): Option[NewsWhipCustomer] = {
for {
  id <- testUsersAdminId
  company <- adminCompany
  firstPaymentDate <- adminFirstPaymentDate
  testRole <- adminTestRole
} yield NewsWhipCustomer(id, company, firstPaymentDate, testRole)
}

def getCompanyCustomer(testUsersAdminId: Option[Long], adminCompany: Option[String], adminFirstPaymentDate: Option[DateTime], adminTestRole: Option[String], testUserId: Long, company: String, firstPaymentDate: Option[DateTime], testRole: Option[String]): Option[NewsWhipCustomer] = {
if (testUsersAdminId.isDefined) {
  createCompanyCustomer(testUsersAdminId, adminCompany, adminFirstPaymentDate, adminTestRole)
} else {
  createCompanyCustomer(Some(testUserId), Some(company), firstPaymentDate, testRole)
}
}


def findByUserId(userId: Long): TestUser = {
DB.withConnection { implicit connection =>
  SQL"""
      SELECT s.test_users_id, s.test_users_email AS USER_EMAIL, s.test_users_first_name, s.test_users_second_name, s.company AS USER_COMPANY, s.useas, s.Stripe_CUSTOMER_ID,
  s.is_invoice_customer, s.test_users_invoice_email,
  s.password_hash, s.salt, s.invoice_address,
        s.time_zone, s.last_payment_date AS USER_LAST_PAYMENT_DATE, s.sign_up_date, s.test_role AS USER_SPIKE_ROLE, s.num_team, (SELECT COUNT(*) FROM test_users_paid_for WHERE test_users_admin_id = $userId) AS paid_for_team,
  s.syndication_access_manager, p.test_users_admin_id, pe.last_payment_date AS ADMIN_LAST_PAYMENT_DATE, pe.test_role AS ADMIN_SPIKE_ROLE,
  s.FIRST_PAYMENT_DATE AS FIRST_PAYMENT_DATE, pe.FIRST_PAYMENT_DATE AS ADMIN_FIRST_PAYMENT_DATE, pe.test_users_email AS ADMIN_EMAIL, s.utm_source, s.utm_medium, s.utm_campaign, s.sf_exec, s.bounced_email,
  s.vat_number, pe.company AS ADMIN_COMPANY

  FROM test_users AS s
  LEFT JOIN test_users_paid_for AS p ON email_users_email = s.test_users_email
  LEFT JOIN test_users AS pe ON p.test_users_admin_id = pe.test_users_id

    WHERE s.test_users_id = $userId
      """.as(simple.single)
}
}

This is the content of my columnsDictionary and aliasesDictionary

columnsDictionary:
0 = {Tuple2@9987} "(TEST_USERS.FIRST_PAYMENT_DATE,2016-04-11 11:05:44.0)"
1 = {Tuple2@9988} "(TEST_USERS.STRIPE_CUSTOMER_ID,null)"
2 = {Tuple2@9989} "(TEST_USERS.SALT,[B@70ecf57b)"
3 = {Tuple2@9990} "(TEST_USERS.SIGN_UP_DATE,2014-07-07 11:24:55.0)"
4 = {Tuple2@9991} "(TEST_USERS.IS_INVOICE_CUSTOMER,false)"
5 = {Tuple2@9992} "(TEST_USERS.SF_EXEC,null)"
6 = {Tuple2@9993} "(TEST_USERS.UTM_CAMPAIGN,null)"
7 = {Tuple2@9994} "(TEST_USERS.TEST_USERS_SECOND_NAME,d)"
8 = {Tuple2@9995} "(.PAID_FOR_TEAM,0)"
9 = {Tuple2@9996} "(TEST_USERS.INVOICE_ADDRESS,)"
10 = {Tuple2@9997} "(TEST_USERS.UTM_SOURCE,null)"
11 = {Tuple2@9998} "(TEST_USERS.VAT_NUMBER,null)"
12 = {Tuple2@9999} "(TEST_USERS_PAID_FOR.TEST_USERS_ADMIN_ID,4806)"
13 = {Tuple2@10000} "(TEST_USERS.NUM_TEAM,0)"
14 = {Tuple2@10001} "(TEST_USERS.LAST_PAYMENT_DATE,2016-08-06 10:43:32.0)"
15 = {Tuple2@10002} "(TEST_USERS.COMPANY,LeleCreation)"
16 = {Tuple2@10003} "(TEST_USERS.USEAS,general-interest)"
17 = {Tuple2@10004} "(TEST_USERS.PASSWORD_HASH,[B@4e96cb04)"
18 = {Tuple2@10005} "(TEST_USERS.TIME_ZONE,Atlantic/Cape_Verde)"
19 = {Tuple2@10006} "(TEST_USERS.UTM_MEDIUM,null)"
20 = {Tuple2@10007} "(TEST_USERS.SYNDICATION_ACCESS_MANAGER,null)"
21 = {Tuple2@10008} "(TEST_USERS.TEST_USERS_ID,4800)"
22 = {Tuple2@10009} "(TEST_USERS.TEST_USERS_FIRST_NAME,test2)"
23 = {Tuple2@10010} "(TEST_USERS.TEST_USERS_EMAIL,[email protected])"
24 = {Tuple2@10011} "(TEST_USERS.TEST_USERS_INVOICE_EMAIL,null)"
25 = {Tuple2@10012} "(TEST_USERS.BOUNCED_EMAIL,false)"
26 = {Tuple2@10013} "(TEST_USERS.TEST_ROLE,TestTeam)"

aliasesDictionary
0 = {Tuple2@10123} "(SF_EXEC,null)"
1 = {Tuple2@10124} "(BOUNCED_EMAIL,false)"
2 = {Tuple2@10125} "(USER_COMPANY,test2test2)"
3 = {Tuple2@10126} "(SYNDICATION_ACCESS_MANAGER,null)"
4 = {Tuple2@10127} "(ADMIN_LAST_PAYMENT_DATE,2016-08-06 10:43:32.0)"
5 = {Tuple2@10128} "(PAID_FOR_TEAM,0)"
6 = {Tuple2@10129} "(INVOICE_ADDRESS,)"
7 = {Tuple2@10130} "(ADMIN_EMAIL,[email protected])"
8 = {Tuple2@10131} "(TEST_USERS_ADMIN_ID,4806)"
9 = {Tuple2@10132} "(USER_LAST_PAYMENT_DATE,2014-07-09 13:17:06.0)"
10 = {Tuple2@10133} "(TEST_USERS_INVOICE_EMAIL,null)"
11 = {Tuple2@10134} "(ADMIN_FIRST_PAYMENT_DATE,2016-04-11 11:05:44.0)"
12 = {Tuple2@10135} "(TEST_USERS_SECOND_NAME,d)"
13 = {Tuple2@10136} "(STRIPE_CUSTOMER_ID,null)"
14 = {Tuple2@10137} "(TEST_USERS_FIRST_NAME,test2)"
15 = {Tuple2@10138} "(UTM_MEDIUM,null)"
16 = {Tuple2@10139} "(IS_INVOICE_CUSTOMER,false)"
17 = {Tuple2@10140} "(USER_EMAIL,[email protected])"
18 = {Tuple2@10141} "(TEST_USERS_ID,4800)"
19 = {Tuple2@10142} "(VAT_NUMBER,null)"
20 = {Tuple2@10143} "(UTM_CAMPAIGN,null)"
21 = {Tuple2@10144} "(TIME_ZONE,Atlantic/Cape_Verde)"
22 = {Tuple2@10145} "(UTM_SOURCE,null)"
23 = {Tuple2@10146} "(SALT,[B@70ecf57b)"
24 = {Tuple2@10147} "(NUM_TEAM,0)"
25 = {Tuple2@10148} "(ADMIN_COMPANY,LeleCreation)"
26 = {Tuple2@10149} "(FIRST_PAYMENT_DATE,null)"
27 = {Tuple2@10150} "(USEAS,general-interest)"
28 = {Tuple2@10151} "(PASSWORD_HASH,[B@4e96cb04)"
29 = {Tuple2@10152} "(ADMIN_TEST_ROLE,TestTeam)"
30 = {Tuple2@10153} "(SIGN_UP_DATE,2014-07-07 11:24:55.0)"
31 = {Tuple2@10154} "(USER_TEST_ROLE,null)"

Why streaming results are all loaded into memory

I'm working with a big table with more than 4 GB of data (approximately 2 million rows). So I tried to process rows one at a time (without storing them in memory), as described in https://github.com/playframework/anorm/blob/master/docs/manual/working/scalaGuide/main/sql/ScalaAnorm.md#streaming-results, but I got a GC overhead error.

This is my code

SQL("select * from Banner").withResult(write)

@tailrec
def write(op: Option[Cursor])(implicit writer: Writer): Unit = op match {
    case Some(cursor) =>
      writer.append(cursor.row[Int]("Id").toString).append(',')
        .append(cursor.row[String]("Content").toString).append(',')
      write(cursor.next)
    case _ =>
  }
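A possible cause worth checking (an assumption, not confirmed here): MySQL Connector/J buffers the entire ResultSet in memory unless the fetch size is Integer.MIN_VALUE, its "streaming" mode. Anorm exposes the fetch size, so the sketch would be:

SQL("select * from Banner")
  .withFetchSize(Some(Integer.MIN_VALUE)) // MySQL streaming mode
  .withResult(write)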

Anorm converts Oracle LONG type to String

Hi all.

My new project uses Anorm in Play Scala 2.4.3, but I have a problem.
It seems to convert an Oracle LONG to a String.

For example, the Oracle table schema:

ID   NULL  LONG
DATA NULL  VARCHAR2(200)

data class

case class Test(id: Long, data: String)


object Test {

  val test = {
    get[Long]("id") ~
      get[String]("data") map {
      case a ~ b => Test(a, b)
    }
  }

  def find = {
    DB.withConnection { implicit c =>
      val results = SQL("SELECT id, data FROM test").as(test *)
    }
  }
}

It throws the exception below:

[RuntimeException: TypeDoesNotMatch(Cannot convert 0: class java.lang.String to Long for column ColumnName(.ID,Some(ID)))]

Please let me know how I can solve it :)

Regards.
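A workaround sketch, assuming the Oracle LONG column really does arrive as a String (as the error message suggests): read the column as text and convert it, reusing the imports from the snippet above.

val test = {
  get[String]("id") ~ get[String]("data") map {
    case a ~ b => Test(a.toLong, b)
  }
}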

Lower-case aliases pick same column in joins with multiple references to same table

Anorm Version (2.5.x)

Expected Behavior

In a query where the same table is joined with itself, aliases don't work properly.

  1. Execute e.g. query select t1.name as name1, t2.name as name2 from test t1 join test t2 on t1.id != t2.id
  2. Refer to a Row returned by query.
  3. Observe that row.apply[String]("name1") is different from row.apply[String]("name2").

Actual Behavior

  1. Same as above
  2. Same as above
  3. Observe that row.apply[String]("name1") is always equal to row.apply[String]("name2").

Note that it works as expected when aliases in query are upper-case:

select t1.name as NAME1, t2.name as NAME2 from test t1 join test t2 on t1.id != t2.id

Analysis and proposed solution

The problem can be traced to the get method in Row.scala:

    data <- MayErr(m.column.alias.flatMap(aliasesDictionary.get(_)).
      orElse(columnsDictionary.get(m.column.qualified.toUpperCase())).

So the qualified name is upper-cased before being looked up in the columnsDictionary, but the alias is not upper-cased when looked up in the aliasesDictionary. This also explains why it works when the table is referenced only once: it falls back to the qualified name. However, when there are multiple references to a column in a table, it always picks the first one it encounters, bypassing the aliases. I think the alias should either be upper-cased when constructing the MetaDataItem or, to be more consistent with how qualified is used in the code fragment, the lookup should be changed to this:

    data <- MayErr(m.column.alias.flatMap(a => aliasesDictionary.get(a.toUpperCase())).

Validate RowParser from query by metadata without executing it

It should be possible to emulate doobie's query checking since all the necessary types are already encoded in the parser. This is an example of how I imagine this would look:

val rowParser = str("name") ~ int("population")
val query = SQL("SELECT * FROM Country")

query.validate(rowParser)

It might return an Either or similar showing the validation errors.

Using this feature it should be much easier to check simple type mapping errors and whether the schema matches the types expected in the code. Since no code is executed, it should be safe to perform even on a production database.

Update the documentation

Have a place where the Anorm documentation can be kept up to date between two Play releases (as the Play documentation is for now the only place where the Anorm documentation can be found).

I'm thinking of publishing this documentation through the gh-pages of this repository, e.g. the Anorm documentation as GitHub Pages.

  • CONTRIBUTING.md
  • Highlights: Highlights24.md, Highlights25.md, Highlights26.md, Migration24.md, Migration25.md, Migration26.md
  • Try Cayman

exception during macro expansion error with openjdk11, sbt 1.3.3 and scala 2.12.10

I'm receiving a lot of errors like the following:

[error] /root/something-updates/src/main/scala/Something.scala:67:92: exception during macro expansion:


[error] java.lang.AssertionError: assertion failed: List(package anorm, package anorm)
... // I skipped long long messages
[error] at anorm.macros.ToParameterListImpl$.caseClass(ToParameterListImpl.scala:80)
[error] at anorm.Macro$.configuredParameters(Macro.scala:430)

[error] /root/something-updates/src/main/scala/com/something/infrastructure/SomeUpdatesRepository.scala:74:89: exception during macro expansion:
[error] java.lang.AssertionError: assertion failed: import failure: cannot determine unique overloaded method alternative from
[error] final package anorm
[error] final package anorm
[error] that matches package anorm:anorm.type
....
[error] implicit private def ToParams: ToParameterList[MyCaseClass] = Macro.toParameters(


[error] /root//src/main/scala/something/updates/MyRepository.scala:66:92: exception during macro expansion:
[error] java.lang.AssertionError: assertion failed:
[error] anorm package anorm
[error] while compiling:
[error] during phase: globalPhase=, enteringPhase=typer
[error] library version: version 2.12.10
[error] compiler version: version 2.12.10
[error] reconstructed args:
[error] last tree to typer: EmptyTree
[error] tree position:
[error] tree tpe:
[error] symbol: null
[error] call site: package in
[error] == Source file context for tree position ==
[error] at scala.reflect.internal.SymbolTable.throwAssertionError(SymbolTable.scala:184)

This is the ToParameterList definition for my case class:

implicit private def toParamsFlightInfo: ToParameterList[FlightInfo] = Macro.toParameters(
  Macro.ParameterProjection(propertyName = "startDatetime", parameterName = "start_datetime"),
  Macro.ParameterProjection(propertyName = "arrivalDatetime", parameterName = "arrival_datetime"),
  Macro.ParameterProjection(propertyName = "classCode", parameterName = "class_code"),
  Macro.ParameterProjection(propertyName = "airportCode", parameterName = "airport_code")
)


Are these errors related to the Java version and sbt version?

call-stack overflow on large datasets in WithResult.apply()

When using apply() to get a Stream of results from a large dataset, I ran into the call-stack overflow described in the title.

I have posted a solution on StackOverflow here: http://stackoverflow.com/questions/13270825/is-there-an-easy-way-to-get-a-stream-as-output-of-a-rowparser/#31643766

But, this solution is obviously not ideal. Perhaps it'll provide clues on the direction you wish to take for this package.

I know that apply() is currently deprecated, but it would be very nice if you could use something akin to this approach to reinstate it.

Regression error between Anorm 2.6.2 and 2.6.5

Anorm Version 2.6.2 vs. 2.6.4/5

Operating System MacOS 10.15.3

JDK

% java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.242-b08, mixed mode)

Library Dependencies

% postgres --version
postgres (PostgreSQL) 12.1

running locally on MacOS 10.15.3

Expected Behavior

The following code behaves differently under Anorm version 2.6.2 and version 2.6.4/2.6.5. It's running as part of a Play app.

try {
  dbapi.database("default").withConnection { implicit c ⇒
    import anorm.SqlParser.{str, int}
    val parser = int("billy") ~ str("bob") map { case a ~ b ⇒ (a → b) }
    val b =
      SQL"""
          SELECT 1 as billy, null as bob
          """.as(parser.single)
  }
} catch {
  case e: Exception ⇒
    println(s"\n${e}\n")
}

When my scala project uses the dependency "org.playframework.anorm" %% "anorm" % "2.6.2" it prints

anorm.AnormException: UnexpectedNullableFound(ColumnName(.bob,Some(bob)))

I would expect it to do the same with the dependency "org.playframework.anorm" %% "anorm" % "2.6.5"

Actual Behavior

When depending on "org.playframework.anorm" %% "anorm" % "2.6.5" the code above prints

anorm.AnormException: 'bob' not found, available columns: billy, billy, bob, bob

Reproducible Test Case

Sorry, I tried, but I just couldn't figure out the testing framework in a reasonable amount of time.

[PSQLException: Multiple ResultSets were returned by the query.] with Anorm 2.4.0

Note
When I use the following dependencies (i.e., use an older version of Anorm) the below works as expected (no exception is thrown).

 addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.8")
 "org.postgresql" % "postgresql" % "9.4-1202-jdbc42"

Issue

In PostgreSQL I have a user defined function

CREATE OR REPLACE FUNCTION pPersonGet(p_personId bigint)
RETURNS TABLE (
    id bigint,
    shortname character varying,
    longname  character varying,
    avatarURL character varying,
    isActive boolean) AS
$$
BEGIN

return QUERY
select p.id, p.shortname, p.longname, p.avatarURL, p.isActive
 From person p
 where p_personId is null or p.id = p_personId;

END
$$ LANGUAGE plpgsql;

When executing the following find function with Anorm 2.4.0

 val selectStmt =
    """
      select id, shortname, longname, avatarURL, isActive from pPersonGet({id});
    """
  ....
  ....
  val simple = {
    get[PersonID]("id") ~
      str("shortname") ~
      str("longname") ~
      str("avatarurl") ~
      get[Boolean]("isActive") map {
      case id~shortname~longname~avatarurl~isActive
      => Person(Some(id),
        Name(shortname, longname),
        avatarurl,
        isActive)
    }
  }
  ....
  ....
  def find(id:Option[PersonID]) : List[Person] = {
    DB.withConnection { implicit conn =>
      anorm.SQL(selectStmt).on("id" -> id).as(simple *)
    }
  }

I get the following exception

[PSQLException: Multiple ResultSets were returned by the query.]

Dependencies:

 addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.0")
 "org.postgresql" % "postgresql" % "9.4-1202-jdbc42"
 "com.typesafe.play" %% "anorm" % "2.4.0"

Scala Version

scala-sdk-2.11.2
