
Selda

Join the chat at https://gitter.im/selda-hs/Lobby

What is Selda?

Selda is a Haskell library for interacting with SQL-based relational databases. It was inspired by LINQ and Opaleye.

Features

  • Monadic interface.
  • Portable: backends for SQLite and PostgreSQL.
  • Generic: easy integration with your existing Haskell types.
  • Creating, dropping and querying tables using type-safe database schemas.
  • Typed query language with products, filtering, joins and aggregation.
  • Inserting, updating and deleting rows from tables.
  • Conditional insert/update.
  • Transactions, uniqueness constraints and foreign keys.
  • Type-safe, backend-specific functionality, such as JSON lookups.
  • Seamless prepared statements.
  • Lightweight and modular: few dependencies, and non-essential features are optional or split into add-on packages.

Getting started

Install the selda package from Hackage, as well as at least one of the backends:

$ cabal update
$ cabal install selda selda-sqlite selda-postgresql

Then, read the tutorial. The API documentation will probably also come in handy.

Requirements

Selda requires GHC 8.0+, as well as SQLite 3.7.11+ or PostgreSQL 9.4+. To build the SQLite backend, you need a C compiler installed. To build the PostgreSQL backend, you need the libpq development libraries installed (libpq-dev on Debian-based Linux distributions).

Hacking

Contributing

All forms of contributions are welcome!

If you have a bug to report, please try to include as much information as possible, preferably including:

  • A brief description (one or two sentences) of the bug.
  • The version of Selda+backend where the bug was found.
  • A step-by-step guide to reproduce the bug.
  • The expected result from following these steps.
  • What actually happens when following the steps.
  • Which component contains the bug (selda, selda-sqlite or selda-postgresql), if you're reasonably sure about where the bug is.

Bonus points for a small code example that illustrates the problem.

If you want to contribute code, please consult the following checklist before sending a pull request:

  • Does the code build with a recent version of GHC?
  • Do all the tests pass?
  • Have you added any tests covering your code?

If you want to contribute code but don't really know where to begin, issues tagged good first issue are a good start.

Setting up the build environment

From the repository root:

  • Install libpq-dev from your package manager. This is required to build the PostgreSQL backend.
  • Make sure you're running a cabal version that supports v2-style commands.
  • Familiarise yourself with the various targets in the makefile. The dependencies between Selda, the backends and the tests are slightly complex, so straight-up cabal is too quirky for day-to-day hacking.

PostgreSQL backend testing with Docker

To test the PostgreSQL backend, use the provided pgtest-compose.yml docker-compose file:

sudo docker-compose -f pgtest-compose.yml up -d
make pgtest
sudo docker-compose -f pgtest-compose.yml down

TODOs

Features that would be nice to have but are not yet implemented.

  • Monadic if/else
  • Streaming
  • MySQL/MariaDB backend
  • MSSQL backend

selda's People

Contributors

acentelles, alexmingoia, alicemaz, anisjonischkeit, aphonicchaos, benny-medflyt, cdepillabout, dalaing, divarvel, dmjio, gitter-badger, hadronized, immae, jchia, ludat, pamu, paolino, pkamenarsky, revskill10, reygoch, tomberek, valderman, waiting-for-dev, ypares


selda's Issues

"too many SQL variables" error with SQLite backend

Happens when I try to insert too many things. The same problem used to happen in persistent too; see yesodweb/persistent#527.

Full output:

        ┃     │ SqlError "SQLite3 returned ErrorError while attempting to perform prepare \"INSERT INTO \\\"logs\\\" (\\\"who\\\", \\\"when\\\", \\\"log\\\") VALUES ($1, $2, $3), ($4, $5, $6), ($7, $8, $9), ($10, $11, $12), ($13, $14, $15), ($16, $17, $18), ($19, $20, $21), ($22, $23, $24), ($25, $26, $27), ($28, $29, $30), ($31, $32, $33), ($34, $35, $36), ($37, $38, $39), ($40, $41, $42), ($43, $44, $45), ($46, $47, $48), ($49, $50, $51), ($52, $53, $54), ($55, $56, $57), ($58, $59, $60), ($61, $62, $63), ($64, $65, $66), ($67, $68, $69), ($70, $71, $72), ($73, $74, $75), ($76, $77, $78), ($79, $80, $81), ($82, $83, $84), ($85, $86, $87), ($88, $89, $90), ($91, $92, $93), ($94, $95, $96), ($97, $98, $99), ($100, $101, $102), ($103, $104, $105), ($106, $107, $108), ($109, $110, $111), ($112, $113, $114), ($115, $116, $117), ($118, $119, $120), ($121, $122, $123), ($124, $125, $126), ($127, $128, $129), ($130, $131, $132), ($133, $134, $135), ($136, $137, $138), ($139, $140, $141), ($142, $143, $144), ($145, $146, $147), ($148, $149, $150), ($151, $152, $153), ($154, $155, $156), ($157, $158, $159), ($160, $161, $162), ($163, $164, $165), ($166, $167, $168), ($169, $170, $171), ($172, $173, $174), ($175, $176, $177), ($178, $179, $180), ($181, $182, $183), ($184, $185, $186), ($187, $188, $189), ($190, $191, $192), ($193, $194, $195), ($196, $197, $198), ($199, $200, $201), ($202, $203, $204), ($205, $206, $207), ($208, $209, $210), ($211, $212, $213), ($214, $215, $216), ($217, $218, $219), ($220, $221, $222), ($223, $224, $225), ($226, $227, $228), ($229, $230, $231), ($232, $233, $234), ($235, $236, $237), ($238, $239, $240), ($241, $242, $243), ($244, $245, $246), ($247, $248, $249), ($250, $251, $252), ($253, $254, $255), ($256, $257, $258), ($259, $260, $261), ($262, $263, $264), ($265, $266, $267), ($268, $269, $270), ($271, $272, $273), ($274, $275, $276), ($277, $278, $279), ($280, $281, $282), ($283, $284, $285), ($286, $287, $288), ($289, $290, $291), 
($292, $293, $294), ($295, $296, $297), ($298, $299, $300), ($301, $302, $303), ($304, $305, $306), ($307, $308, $309), ($310, $311, $312), ($313, $314, $315), ($316, $317, $318), ($319, $320, $321), ($322, $323, $324), ($325, $326, $327), ($328, $329, $330), ($331, $332, $333), ($334, $335, $336), ($337, $338, $339), ($340, $341, $342), ($343, $344, $345), ($346, $347, $348), ($349, $350, $351), ($352, $353, $354), ($355, $356, $357), ($358, $359, $360), ($361, $362, $363), ($364, $365, $366), ($367, $368, $369), ($370, $371, $372), ($373, $374, $375), ($376, $377, $378), ($379, $380, $381), ($382, $383, $384), ($385, $386, $387), ($388, $389, $390), ($391, $392, $393), ($394, $395, $396), ($397, $398, $399), ($400, $401, $402), ($403, $404, $405), ($406, $407, $408), ($409, $410, $411), ($412, $413, $414), ($415, $416, $417), ($418, $419, $420), ($421, $422, $423), ($424, $425, $426), ($427, $428, $429), ($430, $431, $432), ($433, $434, $435), ($436, $437, $438), ($439, $440, $441), ($442, $443, $444), ($445, $446, $447), ($448, $449, $450), ($451, $452, $453), ($454, $455, $456), ($457, $458, $459), ($460, $461, $462), ($463, $464, $465), ($466, $467, $468), ($469, $470, $471), ($472, $473, $474), ($475, $476, $477), ($478, $479, $480), ($481, $482, $483), ($484, $485, $486), ($487, $488, $489), ($490, $491, $492), ($493, $494, $495), ($496, $497, $498), ($499, $500, $501), ($502, $503, $504), ($505, $506, $507), ($508, $509, $510), ($511, $512, $513), ($514, $515, $516), ($517, $518, $519), ($520, $521, $522), ($523, $524, $525), ($526, $527, $528), ($529, $530, $531), ($532, $533, $534), ($535, $536, $537), ($538, $539, $540), ($541, $542, $543), ($544, $545, $546), ($547, $548, $549), ($550, $551, $552), ($553, $554, $555), ($556, $557, $558), ($559, $560, $561), ($562, $563, $564), ($565, $566, $567), ($568, $569, $570), ($571, $572, $573), ($574, $575, $576), ($577, $578, $579), ($580, $581, $582), ($583, $584, $585), ($586, $587, $588), ($589, $590, $591), 
($592, $593, $594), ($595, $596, $597), ($598, $599, $600), ($601, $602, $603), ($604, $605, $606), ($607, $608, $609), ($610, $611, $612), ($613, $614, $615), ($616, $617, $618), ($619, $620, $621), ($622, $623, $624), ($625, $626, $627), ($628, $629, $630), ($631, $632, $633), ($634, $635, $636), ($637, $638, $639), ($640, $641, $642), ($643, $644, $645), ($646, $647, $648), ($649, $650, $651), ($652, $653, $654), ($655, $656, $657), ($658, $659, $660), ($661, $662, $663), ($664, $665, $666), ($667, $668, $669), ($670, $671, $672), ($673, $674, $675), ($676, $677, $678), ($679, $680, $681), ($682, $683, $684), ($685, $686, $687), ($688, $689, $690), ($691, $692, $693), ($694, $695, $696), ($697, $698, $699), ($700, $701, $702), ($703, $704, $705), ($706, $707, $708), ($709, $710, $711), ($712, $713, $714), ($715, $716, $717), ($718, $719, $720), ($721, $722, $723), ($724, $725, $726), ($727, $728, $729), ($730, $731, $732), ($733, $734, $735), ($736, $737, $738), ($739, $740, $741), ($742, $743, $744), ($745, $746, $747), ($748, $749, $750), ($751, $752, $753), ($754, $755, $756), ($757, $758, $759), ($760, $761, $762), ($763, $764, $765), ($766, $767, $768), ($769, $770, $771), ($772, $773, $774), ($775, $776, $777), ($778, $779, $780), ($781, $782, $783), ($784, $785, $786), ($787, $788, $789), ($790, $791, $792), ($793, $794, $795), ($796, $797, $798), ($799, $800, $801), ($802, $803, $804), ($805, $806, $807), ($808, $809, $810), ($811, $812, $813), ($814, $815, $816), ($817, $818, $819), ($820, $821, $822), ($823, $824, $825), ($826, $827, $828), ($829, $830, $831), ($832, $833, $834), ($835, $836, $837), ($838, $839, $840), ($841, $842, $843), ($844, $845, $846), ($847, $848, $849), ($850, $851, $852), ($853, $854, $855), ($856, $857, $858), ($859, $860, $861), ($862, $863, $864), ($865, $866, $867), ($868, $869, $870), ($871, $872, $873), ($874, $875, $876), ($877, $878, $879), ($880, $881, $882), ($883, $884, $885), ($886, $887, $888), ($889, $890, $891), 
($892, $893, $894), ($895, $896, $897), ($898, $899, $900), ($901, $902, $903), ($904, $905, $906), ($907, $908, $909), ($910, $911, $912), ($913, $914, $915), ($916, $917, $918), ($919, $920, $921), ($922, $923, $924), ($925, $926, $927), ($928, $929, $930), ($931, $932, $933), ($934, $935, $936), ($937, $938, $939), ($940, $941, $942), ($943, $944, $945), ($946, $947, $948), ($949, $950, $951), ($952, $953, $954), ($955, $956, $957), ($958, $959, $960), ($961, $962, $963), ($964, $965, $966), ($967, $968, $969), ($970, $971, $972), ($973, $974, $975), ($976, $977, $978), ($979, $980, $981), ($982, $983, $984), ($985, $986, $987), ($988, $989, $990), ($991, $992, $993), ($994, $995, $996), ($997, $998, $999), ($1000, $1001, $1002), ($1003, $1004, $1005), ($1006, $1007, $1008), ($1009, $1010, $1011), ($1012, $1013, $1014), ($1015, $1016, $1017), ($1018, $1019, $1020), ($1021, $1022, $1023), ($1024, $1025, $1026), ($1027, $1028, $1029), ($1030, $1031, $1032), ($1033, $1034, $1035), ($1036, $1037, $1038), ($1039, $1040, $1041), ($1042, $1043, $1044), ($1045, $1046, $1047), ($1048, $1049, $1050), ($1051, $1052, $1053), ($1054, $1055, $1056), ($1057, $1058, $1059), ($1060, $1061, $1062), ($1063, $1064, $1065), ($1066, $1067, $1068), ($1069, $1070, $1071), ($1072, $1073, $1074), ($1075, $1076, $1077), ($1078, $1079, $1080), ($1081, $1082, $1083), ($1084, $1085, $1086), ($1087, $1088, $1089), ($1090, $1091, $1092), ($1093, $1094, $1095), ($1096, $1097, $1098), ($1099, $1100, $1101), ($1102, $1103, $1104), ($1105, $1106, $1107), ($1108, $1109, $1110), ($1111, $1112, $1113), ($1114, $1115, $1116), ($1117, $1118, $1119), ($1120, $1121, $1122), ($1123, $1124, $1125), ($1126, $1127, $1128), ($1129, $1130, $1131), ($1132, $1133, $1134), ($1135, $1136, $1137), ($1138, $1139, $1140), ($1141, $1142, $1143), ($1144, $1145, $1146), ($1147, $1148, $1149), ($1150, $1151, $1152), ($1153, $1154, $1155), ($1156, $1157, $1158), ($1159, $1160, $1161), ($1162, $1163, $1164), ($1165, $1166, 
$1167), ($1168, $1169, $1170), ($1171, $1172, $1173), ($1174, $1175, $1176), ($1177, $1178, $1179), ($1180, $1181, $1182), ($1183, $1184, $1185), ($1186, $1187, $1188), ($1189, $1190, $1191), ($1192, $1193, $1194), ($1195, $1196, $1197), ($1198, $1199, $1200), ($1201, $1202, $1203), ($1204, $1205, $1206), ($1207, $1208, $1209), ($1210, $1211, $1212), ($1213, $1214, $1215), ($1216, $1217, $1218), ($1219, $1220, $1221), ($1222, $1223, $1224), ($1225, $1226, $1227), ($1228, $1229, $1230), ($1231, $1232, $1233), ($1234, $1235, $1236), ($1237, $1238, $1239), ($1240, $1241, $1242), ($1243, $1244, $1245), ($1246, $1247, $1248), ($1249, $1250, $1251), ($1252, $1253, $1254), ($1255, $1256, $1257), ($1258, $1259, $1260), ($1261, $1262, $1263), ($1264, $1265, $1266), ($1267, $1268, $1269), ($1270, $1271, $1272), ($1273, $1274, $1275), ($1276, $1277, $1278), ($1279, $1280, $1281), ($1282, $1283, $1284), ($1285, $1286, $1287), ($1288, $1289, $1290), ($1291, $1292, $1293), ($1294, $1295, $1296), ($1297, $1298, $1299), ($1300, $1301, $1302), ($1303, $1304, $1305), ($1306, $1307, $1308), ($1309, $1310, $1311), ($1312, $1313, $1314), ($1315, $1316, $1317), ($1318, $1319, $1320), ($1321, $1322, $1323), ($1324, $1325, $1326), ($1327, $1328, $1329), ($1330, $1331, $1332), ($1333, $1334, $1335), ($1336, $1337, $1338), ($1339, $1340, $1341), ($1342, $1343, $1344), ($1345, $1346, $1347), ($1348, $1349, $1350), ($1351, $1352, $1353), ($1354, $1355, $1356), ($1357, $1358, $1359), ($1360, $1361, $1362), ($1363, $1364, $1365), ($1366, $1367, $1368), ($1369, $1370, $1371), ($1372, $1373, $1374), ($1375, $1376, $1377), ($1378, $1379, $1380), ($1381, $1382, $1383), ($1384, $1385, $1386), ($1387, $1388, $1389), ($1390, $1391, $1392), ($1393, $1394, $1395), ($1396, $1397, $1398), ($1399, $1400, $1401), ($1402, $1403, $1404), ($1405, $1406, $1407), ($1408, $1409, $1410), ($1411, $1412, $1413), ($1414, $1415, $1416), ($1417, $1418, $1419), ($1420, $1421, $1422), ($1423, $1424, $1425), ($1426, 
$1427, $1428), ($1429, $1430, $1431), ($1432, $1433, $1434), ($1435, $1436, $1437), ($1438, $1439, $1440), ($1441, $1442, $1443), ($1444, $1445, $1446), ($1447, $1448, $1449), ($1450, $1451, $1452), ($1453, $1454, $1455), ($1456, $1457, $1458), ($1459, $1460, $1461), ($1462, $1463, $1464), ($1465, $1466, $1467), ($1468, $1469, $1470), ($1471, $1472, $1473), ($1474, $1475, $1476), ($1477, $1478, $1479), ($1480, $1481, $1482), ($1483, $1484, $1485), ($1486, $1487, $1488), ($1489, $1490, $1491), ($1492, $1493, $1494), ($1495, $1496, $1497), ($1498, $1499, $1500), ($1501, $1502, $1503), ($1504, $1505, $1506), ($1507, $1508, $1509), ($1510, $1511, $1512), ($1513, $1514, $1515), ($1516, $1517, $1518), ($1519, $1520, $1521), ($1522, $1523, $1524), ($1525, $1526, $1527), ($1528, $1529, $1530), ($1531, $1532, $1533), ($1534, $1535, $1536), ($1537, $1538, $1539), ($1540, $1541, $1542), ($1543, $1544, $1545), ($1546, $1547, $1548), ($1549, $1550, $1551), ($1552, $1553, $1554), ($1555, $1556, $1557), ($1558, $1559, $1560), ($1561, $1562, $1563), ($1564, $1565, $1566)\": too many SQL variables"

Question about Generics

Hi 👋

Thanks for this awesome library!
I'm a bit stuck on the generics part though.

I was wondering what the type signature would look like for this function:

query theQuery =
    theQuery
        |> Selda.query
        |> fmap Selda.Generic.fromRels

Any ideas? Thanks 🙏

BUG: result with unknown type oid

I have this error message whenever I use a type that is not supported by selda-postgresql.

I first had it when I declared a column as INTEGER (oid 23). In that case I solved the issue by changing the column type to BIGINT (oid 20). Related to #51.

Now I am doing an aggregate query on a BIGINT column, and somehow the resulting type has oid 1700, which is not supported by selda-postgresql. So I have to use a patched version for the moment.

Either I am doing something wrong, or else I do not understand how I am supposed to work with so few types. I can do a PR to add more types if needed. But I want to be sure that the problem is not somewhere else.

Access RowID value

This is especially important for serialization purposes, and for APIs that require passing the ID of the value we want to query or perform an operation on.

Wrong handling of AutoIncrement for PostgreSQL

I have the following code:

data User = User
  { id :: ID User
  , admin :: Bool

  , username :: Text
  , password :: Text

  , dateCreated :: UTCTime
  , dateModified :: UTCTime
  } deriving ( Eq, Show, Generic )

g_user :: GenTable User
g_user = genTable "User"
  [ ( id :: User -> ID User ) :- autoPrimaryGen
  ]

When I try to insert new user into that table I get this error message:

*** Exception: SqlError "error executing query `INSERT INTO \"User\" (\"id\", \"admin\", \"username\", \"password\", \"dateCreated\", \"dateModified\") VALUES (DEFAULT, $1, $2, $3, $4, $5)': ERROR:  null value in column \"id\" violates not-null constraint\nDETAIL:  Failing row contains (null, t, reygoch, $2y$14$daB6zQcjAbQRIys9k1QnI.I4JABb28J1F4KI0iFLM780jBjaP1q5m, 2017-12-22 19:39:13.581346, 2017-12-22 19:39:13.581346).\n"

Generated SQL with defPPConfig shows this:

CREATE TABLE IF NOT EXISTS "User"
  ( "id" INT PRIMARY KEY AUTOINCREMENT NOT NULL UNIQUE
  , "admin" BOOLEAN NOT NULL
  , "username" TEXT NOT NULL
  , "password" TEXT NOT NULL
  , "dateCreated" DATETIME NOT NULL
  , "dateModified" DATETIME NOT NULL
  )

AUTOINCREMENT should be SERIAL for PostgreSQL. I know that defPPConfig is more SQLite-oriented, but when I looked at the pgBackend implementation, AutoIncrement is simply filtered out, which prevents me from inserting new rows into my database.

There should be a variant of this function that renders AutoIncrement as SERIAL.

Multiple aggregates: column reference "name_3_5" is ambiguous

Installed from hackage selda-0.1.9.0 and selda-postgresql-0.1.5.1

Modified the countHomes query from the tutorial to the following:

countHomes :: Query s (Col s Text :*: Col s Int :*: Col s Int)
countHomes = do
  (name :*: _ :*: _) <- select people

  (owner :*: homes) <- aggregate $ do
    (owner :*: city) <- select addresses
    owner' <- groupBy owner
    return (owner' :*: count city)  
  restrict (owner .== name)

  (owner2 :*: homesInTokyo) <- aggregate $ do 
    (owner :*: city) <- select addresses
    restrict (city .== "Tokyo")
    owner' <- groupBy owner
    return (owner' :*: count city)  
  restrict (owner2 .== name)

  order homes descending
  return (owner :*: homes :*: homesInTokyo)

When running with PostgreSQL I get the following database error:

ERROR:  column reference "name_3_5" is ambiguous

Here is the SQL that Selda generated:

SELECT "name_3_5", 
       "tmp_6", 
       "tmp_6" 
FROM   (SELECT "name_3_5", 
               "tmp_6", 
               "name_3_5", 
               "tmp_6", 
               "name_0" 
        FROM   (SELECT "name_3_5", 
                       "tmp_6", 
                       "name_0" 
                FROM   (SELECT "name" AS "name_0" 
                        FROM   "people") AS q1, 
                       (SELECT "name_3"        AS "name_3_5", 
                               COUNT("city_4") AS "tmp_6" 
                        FROM   (SELECT "name" AS "name_3", 
                                       "city" AS "city_4" 
                                FROM   "addresses") AS q0 
                        GROUP BY "name_3") AS q2 
                WHERE  ( "name_3_5" = "name_0" )) AS q5, 
               (SELECT "name_3"        AS "name_3_5", 
                       COUNT("city_4") AS "tmp_6" 
                FROM   (SELECT "name_3", 
                               "city_4" 
                        FROM   (SELECT "name" AS "name_3", 
                                       "city" AS "city_4" 
                                FROM   "addresses") AS q3 
                        WHERE  ( "city_4" = $1 )) AS q4 
                GROUP BY "name_3") AS q6 
        WHERE  ( "name_3_5" = "name_0" ) 
        ORDER BY "tmp_6" DESC) AS q7 

Attachment: Full program:

{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Database.Selda
import Database.Selda.PostgreSQL

people :: Table (Text :*: Int :*: Maybe Text)
people = table "people" $ primary "name" :*: required "age" :*: optional "pet"

addresses :: Table (Text :*: Text)
addresses = table "addresses" $ required "name" :*: required "city"

db :: PGConnectInfo
db = PGConnectInfo
  { pgHost = "localhost"
  , pgPort = 5432
  , pgDatabase = "seldatest"
  , pgUsername = Just "seldatest"
  , pgPassword = Just ""
  }

setup :: SeldaM ()
setup = do
  createTable people
  createTable addresses

teardown :: SeldaM ()
teardown = do
  tryDropTable people
  tryDropTable addresses

main :: IO ()
main = withPostgreSQL db $ do
  teardown
  setup
  homes <- query countHomes
  liftIO (print homes)

countHomes :: Query s (Col s Text :*: Col s Int :*: Col s Int)
countHomes = do
  (name :*: _ :*: _) <- select people

  (owner :*: homes) <- aggregate $ do
    (owner :*: city) <- select addresses
    owner' <- groupBy owner
    return (owner' :*: count city)
  restrict (owner .== name)

  (owner2 :*: homesInTokyo) <- aggregate $ do
    (owner :*: city) <- select addresses
    restrict (city .== "Tokyo")
    owner' <- groupBy owner
    return (owner' :*: count city)
  restrict (owner2 .== name)

  order homes descending
  return (owner :*: homes :*: homesInTokyo)

Caching with triggers?

Disclaimer: I do not use Selda; I'm just evaluating second-level cache solutions for Postgres.

The docs state that Selda keeps track of changes and evicts cached queries. But what if there is a trigger that updates/inserts/deletes some tables in the same transaction? Would Selda know about those changes?

More disciplined error handling

Currently, backends just throw whichever exceptions they feel like and expect the user to just deal with it somehow. Instead, the backend API should provide a (very small) well-documented set of exceptions, and backends should restrict themselves to only throw those exceptions.

This also applies to table validation. If some aspect of a table isn't controlled by the type system, an error is thrown as soon as the table is evaluated. While we shouldn't encourage developers to actually catch this error and do something with it (in this situation, the program should explode, loudly and irrecoverably, as the schema is simply broken beyond repair), it would still be useful to throw a specific exception instead.

GenTable

Was curious, is there something about a table created with GHC.Generics that needs to be distinct from one specified manually?

newtype GenTable a = GenTable {gen :: Table (Relation a)}

If not, we could probably get rid of GenTable, but I might be missing something.

Cannot order by on several columns

The current behavior of order seems to erase the previous call to order, which is unfortunate, because we cannot express something like this:

order forename ascending
order lastname ascending

Either make order monoidal – that is, have it accumulate the ordering rules – or change its type so that it takes a list of orderings.
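The accumulating semantics can be sketched at the list level: the Semigroup instance for comparison functions already composes orderings in exactly this "first key, then tie-break" fashion (the data here is illustrative, not Selda's API):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Hypothetical rows: (forename, lastname).
rows :: [(String, String)]
rows = [("Bob", "Smith"), ("Alice", "Jones"), ("Alice", "Adams")]

-- Accumulating two orderings: compare forenames first, and fall back
-- to lastnames on ties. The Semigroup instance for
-- `a -> a -> Ordering` gives exactly this "monoidal" behaviour.
sorted :: [(String, String)]
sorted = sortBy (comparing fst <> comparing snd) rows
-- [("Alice","Adams"),("Alice","Jones"),("Bob","Smith")]
```

An accumulating order would behave like <> here: a later rule only breaks ties left by the earlier ones.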

Thank you very much!

Indexable for (:*:)

Inductively defined tuples can have lenses like normal tuples.

instance Field1 (a :*: b) (a' :*: b) a a' where
  _1 k ~(a :*: b) = indexed k (0 :: Int) a <&> \a' -> (a' :*: b)

result :: String :*: Int 
result = someTuple & _1 .~ "wat"
  where
    someTuple = ("foo" :*: 0) 

-- > result
-- > "wat" :*: 0

These would be nice to have for updates. But unless we want to depend on lens, it's unclear how to define these instances without introducing orphans. Of course, we could ignore the warnings.

An alternative would be to have selector return lenses, and have Selda implement minimal lens functionality.

Returning product type from JOIN?

Say I'd like to write the following:

do
  a <- select tableA
  b <- select tableB
  pure (a, b)

Right now, that's not possible, because (,) is not an instance of Result. However, without being able to return products from queries, one would have to flatten the results using app and then destructure everything at the call site, which isn't really scalable (i.e. tableA and tableB may have dozens of columns each).

Alternatively, return only specific fields, but this also gets unwieldy fast, since it amounts to basically programming with tuples (error prone, since multiple fields may have the same type and order becomes very important).

Am I missing something?

Feature Requests

Selda is pretty amazing, it's probably the best database interface for Haskell yet. I have a few feature requests:

  • Naming the columns returned from a query (at the moment they're just called tmp_#), so that I can read the result into a dataframe and not have to rename the columns after.

  • Using the Labels library or something similar so that we can name columns when we return them, instead of having

(x, y, z) = do
...
...
return (a :*: b :*: c)

we could have

result = do
...
...
return ( "x" := a, "y" := b, "z":= c)

This would also allow duplicate record fields.

  • A function that turns a join into a left join (I know this may not be possible). To motivate this, suppose I have a function that gives the addresses of each person. Then If I want the function to give null when the person has no addresses, I need another function. And then if I have a function for the latitude of each address, I need a left join version if I want to apply it to the left joined addresses. So I have to write a separate left join version of every relationship.

Then lj should be a function that just converts things to a left join, i.e. a function that's the query equivalent of lj :: [a] -> NonEmpty a with lj [] = [null]. This actually fits the intuition of a left join being just a function that turns an empty list into [null]. For example, suppose I have

do
  p <- select people
  a <- addresses p
  return (p :*: a)

Here it works exactly like it does with lists. Then the left join of lists is written exactly as

do
  p <- select people
  a <- lj $ addresses p
  return (p :*: a)

Since if addresses p is empty, then lj turns it into a null instead.
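The list intuition can be made concrete in plain Haskell, with Maybe standing in for NULL (all names here are illustrative, not Selda's API):

```haskell
-- List-level model of the proposed `lj`: an inner join drops people
-- with no addresses, while `lj` keeps them by producing a single
-- Nothing (the list analogue of a NULL row).
lj :: [a] -> [Maybe a]
lj [] = [Nothing]
lj xs = map Just xs

people :: [String]
people = ["Link", "Zelda"]

addressesOf :: String -> [String]
addressesOf "Link" = ["Kakariko"]
addressesOf _      = []

innerJoin :: [(String, String)]
innerJoin = [ (p, a) | p <- people, a <- addressesOf p ]
-- [("Link","Kakariko")]  -- Zelda disappears

leftJoin :: [(String, Maybe String)]
leftJoin = [ (p, a) | p <- people, a <- lj (addressesOf p) ]
-- [("Link",Just "Kakariko"),("Zelda",Nothing)]
```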

  • Binding to an aggregate should automatically group by. For example, if I have a function that finds the first page visit for a given user:
firstvisit :: Column UID -> Query (Column LocalTime)
firstvisit uid = do
    v <- select visittable
    restrict (v!user_id .== uid)
    return $ min (v!visit_timestamp)

which generates the SQL:

SELECT min(visit_timestamp) 
FROM visittable
where visittable.user_id = uid

Now if I want to find the first visits for a whole table of users (usertable >>= firstvisit), then this should become:

SELECT min(visit_timestamp) 
FROM visittable, usertable
where visittable.user_id = usertable.user_id
group by usertable.user_id
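The intended "binding an aggregate implies grouping" behaviour can be modelled with plain lists and Data.Map, where min over timestamps gives the first visit per user (illustrative data, not Selda's API):

```haskell
import qualified Data.Map.Strict as Map

-- Illustrative data: (user id, visit timestamp as an Int).
visits :: [(Int, Int)]
visits = [(1, 50), (2, 30), (1, 10), (2, 70)]

-- `fromListWith min` plays the role of
-- `GROUP BY user_id` + `min(visit_timestamp)`: aggregating per user
-- automatically groups by the user id.
firstVisits :: Map.Map Int Int
firstVisits = Map.fromListWith min visits
-- fromList [(1,10),(2,30)]
```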

Typed migration support

Given two table types t1 and t2 and two functions t1 -> t2 and t2 -> t1, Selda should be able to migrate a table from t1 to t2 or vice versa when asked. Initially, the easiest way to go about this is probably to add SELECT INTO support and simply copy over the table contents (with modifications). This is probably too slow for very large data sets however, so in the long run we should probably strive to use ALTER TABLE etc. instead.

For bonus points, these migrations should be nicely composable and we should ideally be able to just look at the database to determine whether some particular migration needs to be carried out, or if it's already been applied. May require polluting the user's database with a __selda_migration_bookkeeping table or similar.

Strongly related to #50.

Parameterise Query over backend

Different backends often support unique features that'd be nice to have. (#47, #31, #61, possibly others.) However, we don't want other backends to blow up when encountering those features.

One possible solution to this dilemma would be to parameterise the Query monad over the backend, giving all the common functionality the type Query a, PostgreSQL-specific features Query PG, etc. This way, we're not tied down to the lowest common denominator features, but can still generate nicely portable code safely.

More PostgreSQL result types

As per discussion on #71, the PostgreSQL backend should support a wider variety of result types, even when those types aren't supported by the Selda language, to make adding new types easier for users.

Which additional types do we need (inb4 "all of them"), and how should we support this? Should we take the opportunity to rework the marshalling bits entirely, and if so, what other changes should be implemented there while we're at it?

Multiple primary keys

This is a standard design for databases, and I fail to see how I could have one with Selda.

Bug: `isIn` returns incorrect results

I believe I have found a bug. I expect the following query to return only the people who have an address in "Kakariko". But it actually returns all of the people.

Expected: ["Link"]
Actual: ["Kobayashi","Link","Miyu","Velvet"]

  res <- query $ do
    (name :*: _ :*: _) <- select people
    restrict $ (int 1) `isIn` (do
        (name2 :*: age :*: _) <- select people
        (name3 :*: city) <- select addresses
        restrict (name3 .== name2)
        restrict (name .== name2)
        restrict (city .== "Kakariko")
        return (int 1)
      )
    return name
  liftIO $ print res

Here is the SQL that selda generated:

SELECT "name_0" 
FROM   (SELECT "name_0" 
        FROM   (SELECT "name" AS "name_0" 
                FROM   "people") AS q0 
        WHERE  ( $1 IN (SELECT $2 
                        FROM   (SELECT "name_4", 
                                       "city_5", 
                                       "name_0" 
                                FROM   (SELECT "name" AS "name_0" 
                                        FROM   "people") AS q1, 
                                       (SELECT "name" AS "name_4", 
                                               "city" AS "city_5" 
                                        FROM   "addresses") AS q2 
                                WHERE  ( "name_4" = "name_0" ) 
                                       AND ( "name_0" = "name_0" ) 
                                       AND ( "city_5" = $3 )) AS q3) )) AS q4 

It looks like the problem is that the name_0 column in the inner query is shadowing the name_0 column of the outer query.
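For reference, the expected semantics can be reproduced with plain lists: keep a person only when the correlated subquery (their addresses in "Kakariko") is non-empty. The data below mirrors the report and is illustrative:

```haskell
people :: [String]
people = ["Kobayashi", "Link", "Miyu", "Velvet"]

addresses :: [(String, String)]
addresses = [("Link", "Kakariko"), ("Miyu", "Fuefuki")]

-- A person is kept when they have at least one address in Kakariko;
-- the inner list is the correlated subquery.
expected :: [String]
expected =
  [ name | name <- people
         , not (null [ () | (n, city) <- addresses
                          , n == name, city == "Kakariko" ]) ]
-- ["Link"]  -- matching the "Expected" result in the report
```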

Cannot derive instances of SqlType

Inference fails when using GeneralizedNewtypeDeriving with SqlType. StandaloneDeriving is similar.

newtype TodoId = TodoId { unTodoId :: Int }
  deriving (Show, Eq, FromJSON, ToJSON, SqlType) 
 Main.hs:62:41-47: error: …  
    • Couldn't match type ‘Int’ with ‘TodoId’
        arising from the coercion of the method ‘mkLit’
          from type ‘Int -> Lit Int’ to type ‘TodoId -> Lit TodoId’ 
    • When deriving the instance for (SqlType TodoId)

SqlType instance for ByteString?

Is it feasible to provide an SqlType instance for ByteString? Many popular backends (e.g. SQLite) have support for storing blobs.

Creating table with Bool column creates a wrong type.

I have the following table definition:

type UserRow = RowID :*: Text :*: Text :*: Bool :*: UTCTime :*: UTCTime

user :: Table UserRow
user = table "User"
  $   autoPrimary "id"
  :*: required "username"
  :*: required "password"
  :*: required "admin"
  :*: required "dateCreated"
  :*: required "dateModified"

Running tryCreateTable user creates a new table in the database, but with the admin column's data type set to integer.

And when I try to insert a new user I get the following error:

*** Exception: SqlError "error executing query `INSERT INTO "User" ("id", "username", "password", "admin", "dateCreated", "dateModified") VALUES (DEFAULT, $1, $2, $3, $4, $5)':

ERROR:  column "admin" is of type integer but expression is of type boolean

LINE 1: ...reated", "dateModified") VALUES (DEFAULT, $1, $2, $3, $4, $5...
HINT:  You will need to rewrite or cast the expression."

Exception safety issues

I was just reading the source code, and saw functions like this:

-- | Perform the given computation over an SQLite database.
--   The database is guaranteed to be closed when the computation terminates.
withSQLite :: (MonadIO m, MonadMask m) => FilePath -> SeldaT m a -> m a
withSQLite file m = do
  conn <- sqliteOpen file
  runSeldaT m conn `finally` seldaClose conn

This function is not async-exception-safe: if an asynchronous exception arrives between conn <- ... and runSeldaT ..., you leak the file descriptor.

Another example:

-- | Run a Selda transformer. Backends should use this to implement their
--   @withX@ functions.
runSeldaT :: (MonadIO m, MonadMask m) => SeldaT m a -> SeldaConnection -> m a
runSeldaT m c = do
    liftIO $ takeMVar (connLock c)
    go `finally` liftIO (putMVar (connLock c) ())
  where
    ...

This is worse: if you get an async exception between those two lines, all other threads block indefinitely.
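The standard fix for both cases is the bracket pattern: acquisition and registration of the cleanup happen under mask, so the leak window disappears. A minimal self-contained demonstration using plain Control.Exception with a dummy resource (not Selda itself):

```haskell
import Control.Exception (ErrorCall (..), SomeException, bracket, throwIO, try)
import Data.IORef

main :: IO ()
main = do
  closed <- newIORef False
  -- bracket masks async exceptions between acquiring the resource and
  -- registering the release action, so the handle cannot leak:
  _ <- (try (bracket (pure "conn")
                     (\_ -> writeIORef closed True)  -- release: always runs
                     (\_ -> throwIO (ErrorCall "boom")))
        :: IO (Either SomeException ()))
  readIORef closed >>= print  -- True: the "connection" was closed
```

The withSQLite quoted above could likewise be written as `bracket (sqliteOpen file) seldaClose (runSeldaT m)`.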

Create a combinator that switches off foreign key checking

I created a development version of my application that automatically drops the tables on startup and re-creates them to work correctly when my schema changes. However I ran into SQL exceptions when I introduced foreign keys. The problem was that by dropping a table that is referenced from another one, the foreign key checker triggered.

I propose a combinator that temporarily switches off the foreign key check, to allow such migration operations. It would be used like this:

cleanDatabase = withForeignCheckTurnedOff $ do
   tryDropTable table1
   createTable table1
   -- ...
                                             

I've written a simple version of it for sqlite using the Selda backend. It might need to be defined differently to also work with other backends:

withForeignCheckTurnedOff :: (MonadMask m, MonadIO m, MonadThrow m) => SeldaT m a -> SeldaT m a
withForeignCheckTurnedOff act = do
   backend <- Backend.seldaBackend
   let setForeignKeys v = do
         (resCode, _) <- liftIO $ Backend.runStmt backend (pack ("PRAGMA foreign_keys = " ++ v)) [] -- TODO: something different for postgre
         when (resCode /= 0) . throwM . Backend.SqlError $
           "withForeignCheckTurnedOff: setting foreign_keys " ++ v ++ " was not successful."
   setForeignKeys "OFF"
   -- restore the check even if the action throws:
   act `finally` setForeignKeys "ON"

Synopsis compilation failed

The latest Selda, cloned from github, fails to compile the synopsis example:

  update_ people
    (\person -> person ! pName .== "Velvet")
    (\person -> person `with` [pPet := just "orthros"])
error:
          • Couldn't match expected type ‘NotNullable’
                        with actual type ‘IsNullable a9’
            The type variable ‘a9’ is ambiguous
          • In the first argument of ‘(:=)’, namely ‘pPet’
            In the expression: pPet := just "orthros"
            In the second argument of ‘with’, namely ‘[pPet := just "orthros"]’

Is this expected? (Work in progress etc)?

Foreign key examples?

The readme mentions foreign key support, but I can't see anything relevant in the Haddocks or in the tutorial. Are foreign keys supported?

Function to validate database schema

First of all, thank you very much for this awesome library.

I was thinking about having a function to check that the database has a given table; for example, I'd like to be certain that some table exists before bringing my application up. I suppose it's not too much work. If I can make some time I'll implement it, but I'd like to get your feedback first.

Allow foreign keys to be null

First of all, thank you for developing Selda, it is a very nice database abstraction layer!

In my application, I built a tree-like structure, where each node keeps a reference to its parent. The representation looks like this:

nodes :: Table (RowID :*: Maybe RowID)
nodes = table "nodes" $ autoPrimary "node_id" 
                          :*: optional "parent_id" -- `fk` (nodes, node_id)
( node_id :*: node_parent ) = selectors nodes

Unfortunately, I cannot create the foreign key, because fk requires the column to have the same value type as the referenced column (RowID and Maybe RowID do not match).

It would be nice to have an opt_fk combinator to attach optional foreign keys to columns.

BUG: result with unknown type oid: Oid 23

I have a table with the following types of columns: TIMESTAMP, TEXT, INTEGER, SERIAL. When I try to interact with the table, I get an error about Oid 23 (PostgreSQL's int4 type; Selda apparently only recognises int8, Oid 20). If I use a patched version of Selda that treats 23 as an integer type in addition to 20, things work fine. Am I using the wrong types, an incompatible version of Postgres, or something like that? This seems like something that should work.

(Edited to note I have tried various sizes of integers on the DB-side.)

More efficient encoding / decoding

We could probably use postgresql-binary as a drop-in replacement for what we're currently doing here:

 | t == boolType    = SqlBool $ readBool val
 | t == intType     = SqlInt $ read (BS.unpack val)
 | t == doubleType  = SqlFloat $ read (BS.unpack val)
 | t `elem` textish = SqlString (decodeUtf8 val)

This would also allow us to be more specific about time and other textish data.

valueParser :: Value a -> ByteString -> Either Text a

Same story for encoding too:

fromSqlValue :: Lit a -> Maybe (Oid, BS.ByteString, Format)
fromSqlValue (LitB b)    = Just (boolType, if b then "true" else "false", Text)
fromSqlValue (LitI n)    = Just (intType, BS.pack $ show n, Text)
fromSqlValue (LitD f)    = Just (doubleType, BS.pack $ show f, Text)
fromSqlValue (LitS s)    = Just (textType, encodeUtf8 s, Text)
fromSqlValue (LitTS s)   = Just (timestampType, encodeUtf8 s, Text)
fromSqlValue (LitTime s) = Just (timeType, encodeUtf8 s, Text)
fromSqlValue (LitDate s) = Just (dateType, encodeUtf8 s, Text)
fromSqlValue (LitNull)   = Nothing
fromSqlValue (LitJust x) = fromSqlValue x

w/ postgresql-binary

PostgreSQL.Binary.Encoding.encodingBytes :: Encoding -> ByteString

We should probably also move away from [[SqlValue]], since that's unlikely to be the most efficient representation.

MySQL backend support

Currently, a few things are missing to be able to support a MySQL backend:

  • Support for ? as a parameter placeholder.
  • Probably more.

This is probably a good time to restructure the pretty-printer to be explicitly parameterized over some configuration, instead of over different functions in different places.

build failure: Proxy not in scope

Build failure with GHC 8.2.1 as seen on the Stackage build server.

[ 3 of 24] Compiling Database.Selda.Table ( src/Database/Selda/Table.hs, dist/build/Database/Selda/Table.o )

src/Database/Selda/Table.hs:55:34: error:
    Not in scope: type constructor or class ‘Proxy’
   |
55 |   , colType  = sqlType (Proxy :: Proxy a)
   |                                  ^^^^^

src/Database/Selda/Table.hs:123:17: error:
    Not in scope: type constructor or class ‘Proxy’
    |
123 |   mergeSpecs :: Proxy a -> ColSpecs a -> [ColInfo]
    |                 ^^^^^

src/Database/Selda/Table.hs:125:62: error:
    Not in scope: type constructor or class ‘Proxy’
    |
125 |   mergeSpecs _ (ColSpec a :*: b) = a ++ mergeSpecs (Proxy :: Proxy b) b
    |                                                              ^^^^^

src/Database/Selda/Table.hs:137:59: error:
    Not in scope: type constructor or class ‘Proxy’
    |
137 |     tcs = validate name $ map tidy $ mergeSpecs (Proxy :: Proxy a) cs
    |                                                           ^^^^^
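The likely fix is simply an explicit import, since Proxy is not in the Prelude and whatever module previously re-exported it into scope evidently stopped doing so under GHC 8.2.1:

```haskell
import Data.Proxy (Proxy (..))

main :: IO ()
main = print (Proxy :: Proxy Int)  -- prints "Proxy"
```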

Is the "Updating Rows" Section Correct?

[screenshot of the "Updating Rows" section of the tutorial]

Given the text in the screenshot, I'd expect the order of the two functions to be flipped. The first function appears to filter for those persons whose name is Link, yet the text says "The first is a mapping over table columns", which describes what the second function is doing here.

Support for UUID primary keys in PostgreSQL

I was wondering if this feature exists or is planned. I took a look at the code and it doesn't look like it would be very hard to handle the UUID type, but UUID primary keys (and foreign keys) seem more complicated.

(Thanks a lot for the awesome library, I'm really enjoying working with it!)

ifThenElse with nullable columns

ifThenElse (#37) is on its way into Selda, but it doesn't work well with common scenarios involving nullable columns.

For example:
You have a nullable integer column A, and you want to write an expression like: "If A is NULL then return the value 0, otherwise return the value of A + 1"

We need something like Opaleye's matchNullable

Something like this could be implemented entirely in user land, using ifThenElse together with Selda's unsafe cast function (to cast away the nullability). But I believe it is common enough that it belongs as a core function in Selda.

Should we add something like matchNullable to Selda? If so, what should it look like? I personally think that calling it matchNullable and having it look exactly like Opaleye's would be good. I can maybe do a pull request.
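For reference, Opaleye's combinator has (roughly) the shape `matchNullable :: Column b -> (Column a -> Column b) -> Column (Nullable a) -> Column b`. A plain-Maybe analogue shows the intended semantics, since it is just `maybe` with the arguments in that order; a Selda version would operate on `Col s (Maybe a)` and use isNull plus the unsafe cast internally:

```haskell
-- Plain-Haskell analogue of the proposed matchNullable:
-- a default for the NULL case, a function for the non-null case.
matchNullable :: b -> (a -> b) -> Maybe a -> b
matchNullable = maybe

main :: IO ()
main = do
  -- "If A is NULL then return 0, otherwise return A + 1":
  print (matchNullable 0 (+ 1) (Nothing :: Maybe Int))  -- 0
  print (matchNullable 0 (+ 1) (Just 41 :: Maybe Int))  -- 42
```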

Optional fk on gen tables

Hi! Thanks for such a good library, I really like it! But I have a problem.

For example: I have a table with a self-referencing fk, and by default this fk always comes with a not-null constraint. Is there a way to make it optional? Right now I cannot create a category without a parent. I inspected the sources of the Generic module, but did not find any way to mark a field in a generated table declaration as optional.

Code: a Categories table with nested categories (an fk on the table that is currently being defined). The intended solution is commented out; I have not found any way to do it. Maybe you know a workaround?

[screenshot of the table definition]

Thanks!

P.S. I can research and write some code in a PR for this feature, if there is no workaround and it is possible to implement without too much work.

Cache invalidation race during transaction

Currently, transactions don't do anything in particular with the in-process cache. This is a bit problematic, since a table that is involved in a query gets invalidated as soon as it is touched, not upon completion of the transaction. This may cause an inconsistent view between the cache and the database if another thread accesses a table after it's been invalidated but before the transaction completes: the cache will be populated with the pre-transaction value, and we already invalidated the table so we're not going to do it again.

A possible solution to this is to turn off cache updates during transactions. Items already in the cache may be read and items may be invalidated, but no new items must be added to the cache until after the transaction completes.

Another possible solution would be to collect all tables that are modified by a transaction, instead of actually invalidating them, and then invalidate them all once the transaction completes. This would have the advantage of keeping the cache consistent even when doing out of process caching, with memcache or similar.
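The second scheme can be sketched in a few lines. The names noteModified and commitInvalidations are invented for illustration; the real implementation would live behind Selda's transaction machinery:

```haskell
import Data.IORef
import Data.List (nub)

type TableName = String

-- During a transaction, record the touched table instead of
-- invalidating it immediately:
noteModified :: IORef [TableName] -> TableName -> IO ()
noteModified pending t = modifyIORef' pending (t :)

-- At commit time, invalidate everything that was touched, exactly once:
commitInvalidations :: IORef [TableName] -> (TableName -> IO ()) -> IO ()
commitInvalidations pending invalidate = do
  ts <- nub . reverse <$> readIORef pending
  mapM_ invalidate ts
  writeIORef pending []

main :: IO ()
main = do
  pending <- newIORef []
  mapM_ (noteModified pending) ["people", "addresses", "people"]
  commitInvalidations pending (putStrLn . ("invalidating " ++))
```

Because invalidations are deferred, other threads keep a consistent (pre-transaction) cache view until the commit point, which also works for out-of-process caches.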

More type safe RowID

I was trying to make my code more type safe and self documenting, so I did the following:

newtype ID a = ID { unID :: Int }
  deriving ( Eq, Show, Enum, Read, Generic )

instance Bounded (ID a) where
  minBound = ID minBound
  maxBound = ID maxBound

...

data User = User
  { userId       :: ID User
  , userType     :: UserType
  , userUsername :: Text
  , userPassword :: Text
  } deriving ( Eq, Show, Generic )

user :: Table (Relation User)
user = table "User"
  $   autoPrimary  "id"
  :*: required "type"
  :*: required "username"
  :*: required "password"

...

data Recipe = Recipe
  { recipeId               :: ID Recipe
  , recipeUserId           :: ID User
  , recipeRecipeCategoryId :: ID RecipeCategory
  , recipeName             :: Text
  , recipeDescription      :: Text
  } deriving ( Eq, Show, Generic )

Here it is much more obvious what ID is actually referencing so I'd suggest we replace current RowID with something like ID a.

The problem I currently have with this approach is that it doesn't work with auto-incrementing keys, mostly because I can't implement my own version of the autoPrimary function: I can't import the ColName constructor and newCol from the Database.Selda.Table module.

Is this intentional? It might be worth adopting the Internal naming convention and making the Table module available to developers.
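The gain from the phantom parameter is that mixing up IDs of different entities becomes a compile-time error. A minimal self-contained illustration (no Selda involved):

```haskell
newtype ID a = ID { unID :: Int } deriving (Eq, Show)

-- Phantom tags; no values needed.
data User
data Recipe

lookupUser :: ID User -> String
lookupUser (ID n) = "user #" ++ show n

main :: IO ()
main = do
  let uid = ID 7 :: ID User
  -- lookupUser (ID 7 :: ID Recipe)  -- rejected: ID Recipe /= ID User
  putStrLn (lookupUser uid)
```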

Semantics of aggregation

Could you tell me how the following should behave? (I'd try it myself, but believe it or not, I'm stuck on GHC 7.6.)

-- A table containing rows of the form
-- (adult_id)
adults :: Query s (Col s Int)
adults = ...

-- A table containing rows of the form
-- (adult_id, childs_age)
childrenAge :: Query s (Col s Int :*: Col s Int)
childrenAge = ...

youngestChild :: Col (Inner s) Int -> Query s (Col s Int)
youngestChild adult = aggregate $ do
    adult' :*: childAge <- childrenAge
    restrict (adult .== adult')
    return (min_ childAge)

myQuery :: Query s (Col s Int)
myQuery = adults >>= youngestChild

Am I right in thinking that myQuery will return a single row regardless of how many rows there are in adults?

Natural transformation and SeldaT

In my code base, I use the Handler monad from servant. It’s basically a ReaderT IO. All the code of servant runs in such a monad. However, in order to work for other monads as well, servant advises you to use a natural transformation m ~> Handler so that you can use all the servant functions on your own stack.


My problem kicks in because of withSQLite. It has a constraint on its base monad, MonadMask m. However, if you try to implement the natural transformation SeldaT CustomMonad :~> Handler, you quickly see there’s a problem, because Handler, from servant, doesn’t provide you with a MonadMask instance.

I strongly believe there’s something related to MonadBaseControl / MonadTransControl here, but I cannot see what.

type Foo = 
type Handler = ReaderT Foo IO

myNat :: SeldaT Handler :~> Handler
myNat = NT $ \selda -> do
  -- here, we’re in Handler
  withSQLite "foo.db" selda -- this must run in (MonadIO m, MonadMask m) => m: not possible here
  
  -- so the solution would be to liftIO or something like that? heeeeeelp?

Prepared statements

Currently, prepared statements are only used to make sure that input data is properly sanitized. It would be beneficial to allow prepared statements to be reused as well for performance reasons. It would be nice if statements could be prepared on the top level and reused throughout the application.

A scheme to enable this:

  • Add a function prepared :: MonadSelda m => (a -> ... -> Query s b) -> a -> ... -> m [Res b].
  • Replace (or augment) the current functions provided by backends with a function prepare :: Text -> [Param] -> IO PreparedQuery. This may require us to introduce a type representing each backend, and a type family PreparedQuery which lets backends define their own representation for prepared queries.
  • MonadSelda instances must provide a globally unique connection identifier.
  • prepared produces a function which, when fully applied, prepares the given query, caches the result keyed on the connection identifier used to prepare the query, and then executes the prepared query. If the query is already prepared for the current connection identifier, just execute the prepared query. This should probably be implemented using weakrefs.

A possible drawback of this scheme is that the user has no ability to force preparation of queries beforehand. If this is a real problem, an alternative version of prepare could be added which produces a "preparation handle" in addition to the function, which could be used to force preparation using a function forcePrepared. See the implementation of automatic tuning in Aplite for more information about how this could work.
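The caching step of this scheme can be sketched as follows. All names here are invented; a real version would hold a backend statement handle instead of the query text and use weak references as suggested above:

```haskell
import Data.IORef

type ConnId = Int
type Sql = String

-- Stand-in for a backend's prepared-statement handle:
newtype Prepared = Prepared Sql

-- Prepare on first use, keyed on (connection, query text); reuse afterwards.
getPrepared :: IORef [((ConnId, Sql), Prepared)] -> ConnId -> Sql -> IO Prepared
getPrepared cache cid sql = do
  entries <- readIORef cache
  case lookup (cid, sql) entries of
    Just p  -> pure p
    Nothing -> do
      let p = Prepared sql  -- a real backend would compile the statement here
      modifyIORef' cache (((cid, sql), p) :)
      pure p

main :: IO ()
main = do
  cache <- newIORef []
  _ <- getPrepared cache 1 "SELECT 1"
  _ <- getPrepared cache 1 "SELECT 1"  -- cache hit: not prepared again
  _ <- getPrepared cache 2 "SELECT 1"  -- new connection: prepared again
  length <$> readIORef cache >>= print
```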

`make license` is not executed on OSX

Because the OSX filesystem is case-insensitive, the presence of a LICENSE file at the root of the project misleads make, which considers that the license target has already been executed.

This issue is particularly pernicious when using stack, because the missing file will cause it to fail with the unhelpful message:

/Users/me/.stack/setup-exe-cache/x86_64-osx/Cabal-simple_mPHDZzAJ_2.0.0.2_ghc-8.2.1:
startProcess: invalid argument (Bad file descriptor)

Maybe license should be made .PHONY?
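Presumably the one-line fix, declaring the target phony so make never consults the filesystem for it:

```make
# The LICENSE file (which matches "license" case-insensitively on OSX)
# no longer satisfies the target once it is declared phony:
.PHONY: license
```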

Code re-usability issues

In my project I have code that is repeated for every entity in the database, for example getting an entity by id. For the User entity:

getUser :: SeldaPool -> ID User -> IO (Maybe User)
getUser pool id = do
  users <- fmap ( fmap fromRel ) $ runSelda pool $ query $ do
    user <- select D.user
    restrict ( user ! D.s_user_id .== literal id )
    pure user
  pure $ listToMaybe users

where D.user is gen generic_user.

I have tried to do something like this:

getEntity :: ToDyn (Cols () t) => Table raw_entity -> Selector t (ID entity) -> SeldaPool -> ID entity -> IO (Maybe entity)
getEntity table selector pool id = do
  entities <- fmap ( fmap fromRel ) $ runSelda pool $ query $ do
    entity <- select table
    restrict ( entity ! selector .== literal id )
    pure entity
  pure $ listToMaybe entities

Which doesn't work. It does work when I remove the type signature, but then I have to enable AllowAmbiguousTypes, and the inferred type is very impractical:

 getEntity :: (Cols s t ~ Cols s a2,
                    Database.Selda.Compile.Res (Cols s a2)
                    ~
                    Database.Selda.Generic.Rel (GHC.Generics.Rep a1),
                    Database.Selda.Types.ToDyn (Cols () t),
                    Database.Selda.Types.ToDyn
                      (Database.Selda.Generic.Rel (GHC.Generics.Rep a1)),
                    Database.Selda.SqlType.SqlType a3,
                    Database.Selda.Column.Columns (Cols s a2),
                    Database.Selda.Compile.Result (Cols s a2),
                    Control.Monad.Catch.MonadMask m, Control.Monad.IO.Class.MonadIO m,
                    Control.Monad.Trans.Control.MonadBaseControl IO m,
                    Database.Selda.SQL.Insert
                      (Database.Selda.Generic.Rel (GHC.Generics.Rep a1)),
                    Database.Selda.Generic.GFromRel (GHC.Generics.Rep a1),
                    Database.Selda.Generic.GRelation (GHC.Generics.Rep a1),
                    Generic a1) =>
                   Table a2
                   -> Selector t a3
                   -> Data.Pool.Pool Database.Selda.Backend.Internal.SeldaConnection
                   -> a3
                   -> m (Maybe a1)

Am I missing something here, or could this be material for future improvements in Selda?

BIGSERIAL SQL error

ERROR:  type "bigserial" does not exist

That type exists in PostgreSQL, so I don't understand the error. Is there something specific to do on the psql side? The error occurs whenever I try to insert, query, or do anything else.

Updating with generics and `toRel`

Currently I have this code for updating user info:

updateUserQ :: User -> SeldaM Int
updateUserQ User{..} = S.update user
  (\(userId :*: _) -> userId .== literal id)
  (\( userId :*: _ ) ->
    userId :*: literal username :*: literal password :*: literal admin :*: literal dateCreated :*: literal dateModified
  )

Since user is generic and I can use fromRel to convert a query result to my User data type, I was expecting toRel to work in reverse, like this:

updateUserQ :: User -> SeldaM Int
updateUserQ u@(User{..}) = S.update user
  (\(userId :*: _) -> userId .== literal id)
  (\_ -> toRel u  )

But I'm getting the following error:

src\Cookster\Repository\PostgreSQL\Selda.hs:37:10: error:
    * Couldn't match type `UTCTime' with `Col s0 UTCTime'
      Expected type: Cols
                       s0 (Int :*: (Text :*: (Text :*: (Bool :*: (UTCTime :*: UTCTime)))))
        Actual type: Database.Selda.Generic.Relation User
    * In the expression: toRel u
      In the third argument of `S.update', namely `(\ _ -> toRel u)'
      In the expression:
        S.update
          user (\ (userId :*: _) -> userId .== literal id) (\ _ -> toRel u)
   |
37 |   (\_ -> toRel u)
   |          ^^^^^^^

Do I have to perform some additional transformation on the toRel result, or did I miss the point here?

Generics feature

@valderman I saw you added Generics to the feature list :)

Had some ideas / implementations around type level DDL statements. Suppose we have a type-level representation of a Table that looks like:

data Table (name :: Symbol) (cols :: [(Symbol, *)])

The goal would be to get from here

data Person = Person { name :: String, age :: Int }
  deriving (Show, Generic) 

to here

table :: Proxy (Table "Person" '[ '("name", String), '("age", Int) ]) 
table = toTable (Proxy @ Person) 

We can make two passes over Rep Person to extract the columns and table name.

For the table name:

type family ToTableName a :: Symbol where
  ToTableName (D1 a next) = ToTableName next
  ToTableName (C1 ('MetaCons name x y) z) = name
  ToTableName a = TypeError ('Text "Must have at least one constructor")

For the columns:

type family Append xs ys :: [k] where
  Append '[] '[] = '[]
  Append '[] ys  = ys
  Append xs '[]  = xs
  Append (x ': xs) ys = x ': Append xs ys

type family ToCols a :: [(Symbol, *)] where
  ToCols (D1 x d) = ToCols d
  ToCols (C1 x c) = ToCols c
  ToCols (a :*: b) = ToCols a `Append` ToCols b
  ToCols (a :+: b) = TypeError ('Text "Sum types are not supported")
  ToCols (S1 ('MetaSel ('Just x) a b c) (Rec0 y)) = '[ '(x, y) ]
  ToCols x = '[]

And finally, application of the functions above.

-- | Type application
toTable
  :: ( ToTableName (Rep a) ~ tableName
     , ToCols (Rep a) ~ columns
     , Generic a
     ) => Proxy a -> Proxy (Table tableName columns)
toTable Proxy = Proxy

Some things to note:

  • In GHC 8.2.0, we will have type level ++ for Symbol, so we could probably put all of the SQL at the type level.
    • symbolVal would still require conversion from String to ByteString though.
  • We could construct ByteStrings directly using Builder when reifying Table.
  • Could this representation solve the need for GenTable (#5) ? Since the tableName should always be unique, it could preserve distinctness, and therefore injectivity.

A few problems / issues with this approach

  • How do we encode required / optional ? (Just pattern match on Maybe ?)
  • How do we handle PRIMARY KEY (FOREIGN?) etc.
  • I'm unsure how the above representation would work with joins. (This is where you come in :) !

Full source:

https://gist.github.com/dmjio/63dc22d6ea72ec8574f10ae09a262a70
