
conduit's Introduction

Conduit is a framework for dealing with streaming data, such as reading raw bytes from a file, parsing a CSV response body from an HTTP request, or performing an action on all files in a directory tree. It standardizes various interfaces for streams of data, and allows a consistent interface for transforming, manipulating, and consuming that data.

Some of the reasons you'd like to use conduit are:

  • Constant memory usage over large data
  • Deterministic resource usage (e.g., promptly close file handles)
  • Easily combine different data sources (HTTP, files) with data consumers (XML/CSV processors)

Want more motivation for using conduit? Check out this presentation on conduit. Feel free to ignore the Yesod section.

NOTE As of March 2018, this document has been updated to be compatible with version 1.3 of conduit. This is available in Long Term Support (LTS) Haskell version 11 and up. For more information on changes between versions 1.2 and 1.3, see the changelog.

Table of Contents

  1. Synopsis
  2. Libraries
  3. Conduit as a bad list
  4. Interleaved effects
  5. Terminology and concepts
  6. Folds
  7. Transformations
  8. Monadic composition
  9. Primitives
  10. Evaluation strategy
  11. Resource allocation
  12. Chunked data
  13. ZipSink
  14. ZipSource
  15. ZipConduit
  16. Forced consumption
  17. FAQs
  18. More exercises
  19. Legacy syntax
  20. Further reading

Synopsis

Basic examples of conduit usage, much more to follow!

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main = do
    -- Pure operations: summing numbers.
    print $ runConduitPure $ yieldMany [1..10] .| sumC

    -- Exception safe file access: copy a file.
    writeFile "input.txt" "This is a test." -- create the source file
    runConduitRes $ sourceFileBS "input.txt" .| sinkFile "output.txt" -- actual copying
    readFile "output.txt" >>= putStrLn -- prove that it worked

    -- Perform transformations.
    print $ runConduitPure $ yieldMany [1..10] .| mapC (+ 1) .| sinkList

Libraries

There are a large number of packages relevant to conduit, just search for conduit on the LTS Haskell package list page. In this tutorial, we're going to rely mainly on the conduit library itself, which provides a large number of common functions built-in. There is also the conduit-extra library, which adds in some common extra support, like GZIP (de)compression.

You can run the examples in this tutorial as Stack scripts.

Conduit as a bad list

Let's start off by comparing conduit to normal lists. We'll be able to compare and contrast with functions you're already used to working with.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE ExtendedDefaultRules #-}
import Conduit

take10List :: IO ()
take10List = print
    $ take 10 [1..]

take10Conduit :: IO ()
take10Conduit = print $ runConduitPure
    $ yieldMany [1..] .| takeC 10 .| sinkList

main :: IO ()
main = do
    putStrLn "List version:"
    take10List
    putStrLn ""
    putStrLn "Conduit version:"
    take10Conduit

Our list function is pretty straightforward: create an infinite ascending list starting from 1, take the first 10 elements, and then print the list. The conduit version does the exact same thing, but:

  • In order to convert the [1..] list into a conduit, we use the yieldMany function. (And note that, like lists, conduit has no problem dealing with infinite streams.)
  • We're not just doing function composition, and therefore we need to use the .| composition operator. This combines multiple components of a conduit pipeline together.
  • Instead of take, we use takeC. The Conduit module provides many functions matching common list functions, but appends a C to disambiguate the names. (If you'd prefer to use a qualified import, check out Data.Conduit.Combinators).
  • To consume all of our results back into a list, we use sinkList
  • We need to explicitly run our conduit pipeline to get a result from it. Since we're running a pure pipeline (no monadic effects), we can use runConduitPure.
  • And finally, the data flows from left to right in the conduit composition, as opposed to right to left in normal function composition. There's nothing deep to this; it's just intended to make conduit feel more like common streaming abstractions from other places. For example, notice how similar the code above looks to piping in a Unix shell: ps | grep ghc | wc -l.

Alright, so what we've established is that we can use conduit as a bad, inconvenient version of lists. Don't worry, we'll soon start to see cases where conduit far outshines lists, but we're not quite there yet. Let's build up a slightly more complex pipeline:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE ExtendedDefaultRules #-}
import Conduit

complicatedList :: IO ()
complicatedList = print
    $ takeWhile (< 18) $ map (* 2) $ take 10 [1..]

complicatedConduit :: IO ()
complicatedConduit = print $ runConduitPure
     $ yieldMany [1..]
    .| takeC 10
    .| mapC (* 2)
    .| takeWhileC (< 18)
    .| sinkList

main :: IO ()
main = do
    putStrLn "List version:"
    complicatedList
    putStrLn ""
    putStrLn "Conduit version:"
    complicatedConduit

Nothing more magical is going on; we're just using more functions. For our last bad-list example, let's move from a pure pipeline to one which performs some side effects. Instead of printing the whole result list, let's use mapM_C to print each value individually.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE ExtendedDefaultRules #-}
import Conduit

complicatedList :: IO ()
complicatedList = mapM_ print
    $ takeWhile (< 18) $ map (* 2) $ take 10 [1..]

complicatedConduit :: IO ()
complicatedConduit = runConduit
     $ yieldMany [1..]
    .| takeC 10
    .| mapC (* 2)
    .| takeWhileC (< 18)
    .| mapM_C print

main :: IO ()
main = do
    putStrLn "List version:"
    complicatedList
    putStrLn ""
    putStrLn "Conduit version:"
    complicatedConduit

For the list version, all we've done is add mapM_ at the beginning. In the conduit version, we replaced print $ runConduitPure with runConduit (since we're no longer generating a result to print, and our pipeline now has effects), and replaced sinkList with mapM_C print. We're no longer reconstructing a list at the end; instead we stream the values one at a time into the print function.

Interleaved effects

Let's make things a bit more difficult for lists. We've played to their strengths until now: a pure series of composed functions, with effects performed only at the end (either print or mapM_ print). Suppose we have some new function:

magic :: Int -> IO Int
magic x = do
    putStrLn $ "I'm doing magic with " ++ show x
    return $ x * 2

And we want to use this in place of the map (* 2) that we were doing before. Let's see how the list and conduit versions adapt:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE ExtendedDefaultRules #-}
import Conduit

magic :: Int -> IO Int
magic x = do
    putStrLn $ "I'm doing magic with " ++ show x
    return $ x * 2

magicalList :: IO ()
magicalList =
    mapM magic (take 10 [1..]) >>= mapM_ print . takeWhile (< 18)

magicalConduit :: IO ()
magicalConduit = runConduit
     $ yieldMany [1..]
    .| takeC 10
    .| mapMC magic
    .| takeWhileC (< 18)
    .| mapM_C print

main :: IO ()
main = do
    putStrLn "List version:"
    magicalList
    putStrLn ""
    putStrLn "Conduit version:"
    magicalConduit

Notice how different the list version looks: we needed to break out >>= to allow us to have two different side-effecting actions (mapM magic and mapM_ print). Meanwhile, in conduit, all we did was replace mapC (* 2) with mapMC magic. This is where we begin to see the strength of conduit: it allows us to build up large pipelines of components, and each of those components can be side-effecting!

However, we're not done with the differences yet. Try to guess what the output will be, and then ideally run it on your machine and see if you're correct. For those who won't be running it, here's the output:

List version:
I'm doing magic with 1
I'm doing magic with 2
I'm doing magic with 3
I'm doing magic with 4
I'm doing magic with 5
I'm doing magic with 6
I'm doing magic with 7
I'm doing magic with 8
I'm doing magic with 9
I'm doing magic with 10
2
4
6
8
10
12
14
16

Conduit version:
I'm doing magic with 1
2
I'm doing magic with 2
4
I'm doing magic with 3
6
I'm doing magic with 4
8
I'm doing magic with 5
10
I'm doing magic with 6
12
I'm doing magic with 7
14
I'm doing magic with 8
16
I'm doing magic with 9

In the list version, we apply the magic function to all 10 elements in the initial list, printing all the output at once and generating a new list. We then use takeWhile on this new list and exclude the values 18 and 20. Finally, we print out each element in our new 8-value list. This has a number of downsides:

  • We had to force all 10 items of the list into memory at once. For 10 items, not a big deal. But if we were dealing with massive amounts of data, this could cripple our program.
  • We did "more magic" than was strictly necessary: we applied magic to 10 items in the list. However, our takeWhile knew when it looked at the 9th result that it was going to ignore the rest of the list. Nonetheless, because our two components (magic and takeWhile) are separate from each other, we couldn't know that.

Let's compare that to the conduit version:

  • From the output, we can see that the calls to magic are interleaved with the calls to print. This shows that our data flows through the whole pipeline one element at a time, and never needs to build up an intermediate list. In other words, we get constant memory usage in this pipeline, a huge selling point for conduit.
  • Notice that we only perform "magic" 9 times: once we run magic on 9, get a result of 18, and find out that it fails our takeWhileC (< 18), the conduit pipeline doesn't demand any more values, and therefore magic isn't run again. We'll describe in more detail later how conduit is consumer-driven, but this is your first taste of this.

To be clear, it's entirely possible to get this behavior with a list-based program. What you'll lose is easy composition. For example, here's one way to get the same behavior as was achieved with conduit:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
magic :: Int -> IO Int
magic x = do
    putStrLn $ "I'm doing magic with " ++ show x
    return $ x * 2

main :: IO ()
main = do
    let go [] = return ()
        go (x:xs) = do
            y <- magic x
            if y < 18
                then do
                    print y
                    go xs
                else return ()

    go $ take 10 [1..]

Notice how we've had to reimplement the behavior of takeWhile, mapM, and mapM_ ourselves, and the solution is less compositional. Conduit makes it easy to get the right behavior: interleaved effects, constant memory, and (as we'll see later) deterministic resource usage.

Terminology and concepts

Let's take a step back from the code and discuss some terminology and concepts in conduit. Conduit deals with streams of data. Each component of a pipeline can consume data from upstream, and produce data to send downstream. For example:

runConduit $ yieldMany [1..10] .| mapC show .| mapM_C print

In this snippet, yieldMany [1..10], mapC show, and mapM_C print are each components. We use the .| operator—a synonym for the fuse function—to compose these components into a pipeline. Then we run that pipeline with runConduit.

From the perspective of mapC show, yieldMany [1..10] is its upstream, and mapM_C print is its downstream. When we look at yieldMany [1..10] .| mapC show, what we're actually doing is combining these two components into a larger component. Let's look at the streams involved:

  • yieldMany consumes nothing from upstream, and produces a stream of Ints
  • mapC show consumes a stream of Ints, and produces a stream of Strings
  • When we combine these two components together, we get something which consumes nothing from upstream, and produces a stream of Strings.

To add some type signatures into this:

yieldMany [1..10] :: ConduitT ()  Int    IO ()
mapC show         :: ConduitT Int String IO ()
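
Combining these with the .| operator (described just below) gives the fused type:

yieldMany [1..10] .| mapC show :: ConduitT () String IO ()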

There are four type parameters to ConduitT:

  • The first indicates the upstream value, or input. For yieldMany, we're using (), though really it could be any type since we never read anything from upstream. For mapC, it's Int.
  • The second indicates the downstream value, or output. For yieldMany, this is Int. Notice how this matches the input of mapC, which is what lets us combine these two. The output of mapC is String.
  • The third indicates the base monad, which tells us what kinds of effects we can perform. A ConduitT is a monad transformer, so you can use lift to perform effects. (We'll learn more about conduit's monadic nature later.) We're using IO in our example.
  • The final indicates the result type of the component. This is typically only used for the most downstream component in a pipeline. We'll get into this when we discuss folds below.

Let's also look at the type of our .| operator:

(.|) :: Monad m
     => ConduitT a b m ()
     -> ConduitT b c m r
     -> ConduitT a c m r

This shows us that:

  • The output from the first component must match the input from the second
  • We ignore the result type from the first component, and keep the result of the second
  • The combined component consumes the same type as the first component and produces the same type as the second component
  • Everything has to run in the same base monad

Exercise Work through what happens when we add .| mapM_C print to the mix above.

Finally, let's look at the type of the runConduit function:

runConduit :: Monad m => ConduitT () Void m r -> m r

This gives us a better idea of what a pipeline is: just a self-contained component, which consumes nothing from upstream (denoted by ()) and produces nothing to downstream (denoted by Void)*. When we have such a stand-alone component, we can run it to extract a monadic action that will return a result (the m r).

* The choice of () and Void instead of, say, both () or both Void, is complicated. For now, I recommend just accepting that this makes sense. The short explanation is that the input is in negative position whereas the output is in positive position, and therefore we can give the stronger Void guarantee in the output case. The long explanation can be found here.

Finally, we talked about pure pipelines before. Those are just pipelines with Identity as the base monad:

runConduitPure :: ConduitT () Void Identity r -> r

Folds

A common activity with lists is folding down to a single result. This concept translates directly into conduit, and works nicely at ensuring constant memory usage. If you're familiar with folding over lists, the concepts here should be pretty straightforward, so this will mostly just be a collection of examples.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..100 :: Int] .| sumC

Summing is straightforward; if you want, you can also spell it out with the foldlC function:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..100 :: Int] .| foldlC (+) 0

You can use foldMapC to fold monoids together:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Monoid (Sum (..))

main :: IO ()
main = print $ getSum $ runConduitPure $ yieldMany [1..100 :: Int] .| foldMapC Sum

Or you can use foldC as a shortened form of foldMapC id:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = putStrLn $ runConduitPure
     $ yieldMany [1..10 :: Int]
    .| mapC (\i -> show i ++ "\n")
    .| foldC

Though if you want to make that easier you can use unlinesC:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = putStrLn $ runConduitPure
     $ yieldMany [1..10 :: Int]
    .| mapC show
    .| unlinesC
    .| foldC

You can also do monadic folds:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Monoid (Product (..))

magic :: Int -> IO (Product Int)
magic i = do
    putStrLn $ "Doing magic on " ++ show i
    return $ Product i

main :: IO ()
main = do
    Product res <- runConduit $ yieldMany [1..10] .| foldMapMC magic
    print res

Or with foldMC:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

magic :: Int -> Int -> IO Int
magic total i = do
    putStrLn $ "Doing magic on " ++ show i
    return $! total * i

main :: IO ()
main = do
    res <- runConduit $ yieldMany [1..10] .| foldMC magic 1
    print res

There are plenty of other fold functions available in the conduit library (see the Data.Conduit.Combinators module). We won't be covering all of them in this tutorial, but hopefully this crash course will give you an idea of what kinds of things you can do and help you understand the API docs.
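
For instance, here's a sketch of a slightly larger fold: counting character frequencies into a strict Map (Data.Map.Strict comes from the containers package):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import qualified Data.Map.Strict as Map

main :: IO ()
main = print $ runConduitPure
     $ yieldMany "conduit is consumer driven"
    .| foldlC (\m c -> Map.insertWith (+) c (1 :: Int) m) Map.empty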

Transformations

When learning lists, one of the first functions you'll see is map, which transforms each element of the list. We've already seen mapC, above, which does the same thing for conduit. This is just one of many functions available for performing transformations. Like the folds, these functions are named after and behave like their list counterparts, so we'll just blast through some examples.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| mapC (* 2) .| mapM_C print

We can also filter out values:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| filterC even .| mapM_C print

Or if desired we can add some values between each value in the list:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| intersperseC 0 .| mapM_C print

It's also possible to "flatten out" a conduit, converting a stream of chunks of data (like lists or vectors) into a stream of the individual values.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit
     $ yieldMany (map (replicate 5) [1..10])
    .| concatC
    .| mapM_C print

NOTE This is our first exposure to "chunked data" in conduit. This is actually a very important and common use case, especially around ByteStrings and Texts. We'll cover it in much more detail in its own section later.

You can also perform monadic actions while transforming. We've seen mapMC being used already, but other such functions exist:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE OverloadedStrings #-}
import Conduit

evenM :: Int -> IO Bool
evenM i = do
    let res = even i
    print (i, res)
    return res

main :: IO ()
main = runConduit
     $ yieldMany [1..10]
    .| filterMC evenM
    .| mapM_C print

Or you can use the iterMC function, which performs a monadic action on the upstream values without modifying them:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

main :: IO ()
main = do
    res <- runConduit $ yieldMany [1..10] .| iterMC print .| sumC
    print res

EXERCISE Implement iterMC in terms of mapMC.

Monadic composition

We've so far only really explored half of the power of conduit: being able to combine multiple components together by connecting the output of the upstream to the input of the downstream (via the .| operator or the fuse function). However, there's another way to combine simple conduits into more complex ones, using the standard monadic interface (or do-notation). Let's start with some examples, beginning with a data producer:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

source :: Monad m => ConduitT i Int m ()
source = do
    yieldMany [1..10]
    yieldMany [11..20]

main :: IO ()
main = runConduit $ source .| mapM_C print

We've created a new conduit, source, which combines together two calls to yieldMany. Try to guess at intuitively what this will do before reading the explanation.

As you may have guessed, this program will print the numbers 1 through 20. What we've seen here is that, when you use monadic composition, the output from the first component is sent downstream, and then the output from the second component is sent downstream. Now let's look at the consuming side. Again, try to guess what this program will do before you read the explanation following it.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

sink :: Monad m => ConduitT Int o m (String, Int)
sink = do
    x <- takeC 5 .| mapC show .| foldC
    y <- sumC
    return (x, y)

main :: IO ()
main = do
    let res = runConduitPure $ yieldMany [1..10] .| sink
    print res

Let's first analyze takeC 5 .| mapC show .| foldC. This bit will take 5 elements from the stream, convert them to Strings, and then combine those Strings into one String. So if we actually have 10 elements on the stream, what happens to the other 5? Well, up until now, the answer would have been "disappears into the aether." However, we've now introduced monadic composition. In this world, those values are still sitting on the stream, ready to be consumed by whatever comes next. In our case, that's sumC.

EXERCISE Rewrite sink to not use do-notation. Hint: it'll be easier to go Applicative.

So we've seen how monadic composition works with both upstream and downstream, but in isolation. We can just as easily combine these two concepts together, and create a transformer using monadic composition.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

trans :: Monad m => ConduitT Int Int m ()
trans = do
    takeC 5 .| mapC (+ 1)
    mapC (* 2)

main :: IO ()
main = runConduit $ yieldMany [1..10] .| trans .| mapM_C print

Here, we've set up a conduit that takes the first 5 values it's given, adds 1 to each, and sends the result downstream. Then, it takes everything else, multiplies it by 2, and sends it downstream.

EXERCISE Modify trans so that it does something different for the first 3, second 3, and final 3 values from upstream, and drops all other values.

The only restriction we have in monadic composition is exactly what you'd expect from the types: the first three type parameters (input, output, and monad) must be the same for all components.
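
To make that concrete, here's the previous example again with the two pieces of trans pulled out as named components. All of them share input Int, output Int, and the same m; only the result types could differ (a sketch equivalent to the version above):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

firstFive :: Monad m => ConduitT Int Int m ()
firstFive = takeC 5 .| mapC (+ 1)

rest :: Monad m => ConduitT Int Int m ()
rest = mapC (* 2)

trans :: Monad m => ConduitT Int Int m ()
trans = firstFive >> rest

main :: IO ()
main = runConduit $ yieldMany [1..10] .| trans .| mapM_C print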

Primitives

We've worked with high-level functions in conduit so far. However, at its core conduit is built on top of a number of simple primitives. Combined with monadic composition, we can build up all of the more advanced functions from these primitives. Let's start with probably the most expected one: yield. It's just like the yieldMany function we've been using until now, except it works on a single value instead of a collection of them.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

main :: IO ()
main = runConduit $ yield 1 .| mapM_C print

Of course, we're not limited to using just a single call to yield:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21

import Conduit

main :: IO ()
main = runConduit $ (yield 1 >> yield 2) .| mapM_C print

EXERCISE Reimplement yieldMany for lists using the yield primitive and monadic composition.

Given that yield sends an output value downstream, we also need a function to get an input value from upstream. For that, we'll use await. Let's start really simple:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE ExtendedDefaultRules #-}
import Conduit

main :: IO ()
main = do
    -- prints: Just 1
    print $ runConduitPure $ yield 1 .| await
    -- prints: Nothing
    print $ runConduitPure $ yieldMany [] .| await

    -- Note that the above is equivalent to the following. Work out
    -- why this works:
    print $ runConduitPure $ return () .| await
    print $ runConduitPure await

await will ask for a value from upstream, and return a Just if there is a value available. If not, it will return a Nothing.

NOTE I was specific in my phrasing of "await will ask." This has to do with the evaluation of a conduit pipeline, and how it is driven by downstream. We'll cover this in more detail in the next section.

Of course, things get much more interesting when we combine both yield and await together. For example, we can implement our own mapC function:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

myMapC :: Monad m => (i -> o) -> ConduitT i o m ()
myMapC f =
    loop
  where
    loop = do
        mx <- await
        case mx of
            Nothing -> return ()
            Just x -> do
                yield (f x)
                loop

main :: IO ()
main = runConduit $ yieldMany [1..10] .| myMapC (+ 1) .| mapM_C print

EXERCISE Try implementing filterC and mapMC. For the latter, you'll need to use the lift function.

The next primitive requires a little motivation. Let's look at a simple example of using the takeWhileC function:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| do
    x <- takeWhileC (<= 5) .| sinkList
    y <- sinkList
    return (x, y)

As you may guess, this will result in the output ([1,2,3,4,5],[6,7,8,9,10]). Awesome. Let's go ahead and try to implement our own takeWhileC with just await and yield.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

myTakeWhileC :: Monad m => (i -> Bool) -> ConduitT i i m ()
myTakeWhileC f =
    loop
  where
    loop = do
        mx <- await
        case mx of
            Nothing -> return ()
            Just x
                | f x -> do
                    yield x
                    loop
                | otherwise -> return ()

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| do
    x <- myTakeWhileC (<= 5) .| sinkList
    y <- sinkList
    return (x, y)

I'd recommend looking over myTakeWhileC and making sure you're comfortable with what it's doing. When you've done that, run the program and compare the output. To make it easier, I'll put the output of the original (with the real takeWhileC) vs this program:

takeWhileC:
([1,2,3,4,5],[6,7,8,9,10])
myTakeWhileC:
([1,2,3,4,5],[7,8,9,10])

What happened to 6? Well, in the otherwise branch of the case statement, we've determined that the value that we received from upstream does not match our predicate function f. So what do we do with it? Well, we just throw it away! In our program, the first value to fail the predicate is 6, so it's discarded, and then our second sinkList usage grabs the next value, which is 7.

What we need is a primitive that lets us put a value back on the stream. And we have one that does just that: leftover. Let's fix up our myTakeWhileC:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

myGoodTakeWhileC :: Monad m => (i -> Bool) -> ConduitT i i m ()
myGoodTakeWhileC f =
    loop
  where
    loop = do
        mx <- await
        case mx of
            Nothing -> return ()
            Just x
                | f x -> do
                    yield x
                    loop
                | otherwise -> leftover x

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| do
    x <- myGoodTakeWhileC (<= 5) .| sinkList
    y <- sinkList
    return (x, y)

As expected, this has the same output as using the real takeWhileC function.

EXERCISE Implement a peek function that gets the next value from upstream, if available, and then puts it back on the stream.

We can also call leftover as many times as we want, and even use values that didn't come from upstream, though this is a fairly unusual use case. Just to prove it's possible:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = print $ runConduitPure $ return () .| do
    mapM_ leftover [1..10]
    sinkList

There are two semi-advanced concepts to get across in this example:

  1. If you run this, the result is a descending list from 10 to 1. This is because using leftover works in a LIFO (last in first out) fashion.
  2. If you take off the return () .| bit, this example will fail to compile. That's because, by using leftover, we've stated that our conduit actually takes some input from upstream. If you remember, when you use runConduitPure, the complete pipeline cannot expect any input (its input type must be ()). Adding return () .| connects an empty upstream component producing the right type (but never yielding anything), which satisfies the type system.

Evaluation strategy

Let's talk about the evaluation strategy of a conduit pipeline. The most important thing to remember is that everything is driven by downstream. To see what I mean, consider this example:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| iterMC print .| return ()

This program will generate no output. The reason is that the most downstream component is return (), which never awaits any values from upstream and immediately exits. Once it exits, the entire pipeline exits. As a result, the two upstream components are never run at all. If you wanted to instead force all of the values and just discard them, you could use sinkNull:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| iterMC print .| sinkNull

Now try and guess what the following program outputs:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit $ yieldMany [1..10] .| iterMC print .| return () .| sinkNull

Answer: nothing! sinkNull awaits values from its immediate upstream. But its immediate upstream is return (), which never yields any value, causing sinkNull to exit immediately.

Alright, let's tweak this slightly: what will this one output?

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduit
     $ yieldMany [1..10]
    .| iterMC print
    .| liftIO (putStrLn "I was called")
    .| sinkNull

In this case, sinkNull calls await, which forces execution to defer to the next upstream component (the liftIO ... bit). In order to see if it yields, that component must be evaluated until it either (1) exits, (2) yields, or (3) awaits. We see that it exits after calling liftIO, causing the pipeline to terminate, but not before it prints its "I was called" message.

There's really not too much to understanding conduit evaluation. It mostly works the way you'd expect, as long as you remember that downstream drives.
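
As one last check of that intuition, here's a small sketch where an infinite upstream is evaluated only as far as the downstream takeC 3 demands; it prints 1, 2, and 3, then stops:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

-- Once takeC 3 finishes, nothing downstream awaits again, so the infinite
-- yieldMany source is never asked for a fourth value.
main :: IO ()
main = runConduit
     $ yieldMany [1..]
    .| iterMC print
    .| (takeC 3 .| sinkNull)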

Resource allocation

Let's copy a file with conduit:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import qualified System.IO as IO

main :: IO ()
main = IO.withBinaryFile "input.txt" IO.ReadMode $ \inH ->
       IO.withBinaryFile "output.txt" IO.WriteMode $ \outH ->
       runConduit $ sourceHandle inH .| sinkHandle outH

This works nicely, and follows the bracket pattern we typically expect in Haskell. However, it's got some downsides:

  • You have to allocate all of your resources outside of the conduit pipeline. (This is because conduit is coroutine based, and coroutines/continuations cannot guarantee a cleanup action is called.)
  • You will sometimes end up needing to allocate too many resources, or holding onto them for too long, if you allocate them in advance instead of on demand.
  • Some control flows are impossible. For example, if you want to write a function to traverse a directory tree, you can't open all of the directory handles before you enter your conduit pipeline.

One slight improvement we can make is to switch over to the withSourceFile and withSinkFile helper functions, which handle the calls to withBinaryFile for you:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = withSourceFile "input.txt" $ \source ->
       withSinkFile "output.txt" $ \sink ->
       runConduit $ source .| sink

However, this only slightly improves ergonomics; most of the problems above remain. To solve those (and some others), conduit provides built-in support for a related package (resourcet), which allows you to allocate resources and be guaranteed that they will be cleaned up. The basic idea is that you'll have a block like:

runResourceT $ do
    foo
    bar
    baz

Any resources that foo, bar, or baz allocate have a cleanup function registered in a mutable map. When the runResourceT call exits, all of those cleanup functions are called, regardless of whether the exiting occurred normally or via an exception.
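
Outside of conduit, that pattern looks roughly like this (a minimal sketch using the allocate function from the resourcet package's Control.Monad.Trans.Resource module):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Resource (allocate, runResourceT)
import qualified System.IO as IO

main :: IO ()
main = runResourceT $ do
    -- allocate registers the cleanup action (hClose) in that mutable map;
    -- it runs when runResourceT exits, normally or via an exception
    (_releaseKey, handle) <- allocate
        (IO.openBinaryFile "input.txt" IO.ReadMode)
        IO.hClose
    liftIO $ IO.hGetLine handle >>= putStrLn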

In order to do this in a conduit, we have the built-in function bracketP, which takes an allocation function and a cleanup function, and provides you a resource. Putting this all together, we can rewrite our example as:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import qualified System.IO as IO
import Data.ByteString (ByteString)

sourceFile' :: MonadResource m => FilePath -> ConduitT i ByteString m ()
sourceFile' fp =
    bracketP (IO.openBinaryFile fp IO.ReadMode) IO.hClose sourceHandle

sinkFile' :: MonadResource m => FilePath -> ConduitT ByteString o m ()
sinkFile' fp =
    bracketP (IO.openBinaryFile fp IO.WriteMode) IO.hClose sinkHandle

main :: IO ()
main = runResourceT
     $ runConduit
     $ sourceFile' "input.txt"
    .| sinkFile' "output.txt"

But that's certainly too tedious. Fortunately, conduit provides the sourceFile and sinkFile functions built in, and defines a helper runConduitRes which is just runResourceT . runConduit. Putting all of that together, copying a file becomes absolutely trivial:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduitRes $ sourceFile "input.txt" .| sinkFile "output.txt"

Let's get a bit more inventive though. Let's traverse an entire directory tree and write the contents of all files with a .hs file extension into the file "all-haskell-files".

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import System.FilePath (takeExtension)

main :: IO ()
main = runConduitRes
     $ sourceDirectoryDeep True "."
    .| filterC (\fp -> takeExtension fp == ".hs")
    .| awaitForever sourceFile
    .| sinkFile "all-haskell-files"

What's great about this example is:

  • It guarantees that only two file handles are open at a time: the all-haskell-files destination file and whichever file is being read from.
  • It will only open as many directory handles as needed to traverse the depth of the file structure.
  • If any exceptions occur, all resources will be cleaned up.

Chunked data

I'd like to read a file, convert all of its characters to upper case, and then write it to standard output. That looks pretty straightforward:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import qualified Data.Text as T
import Data.Char (toUpper)

main :: IO ()
main = runConduitRes
     $ sourceFile "input.txt"
    .| decodeUtf8C
    .| mapC (T.map toUpper)
    .| encodeUtf8C
    .| stdoutC

This works just fine, but it's inconvenient: isn't that nested mapC (T.map ...) just jarring? The issue is that instead of having a stream of Char values, we have a stream of Text values, and our mapC function works on the Texts. But our toUpper function works on the Chars inside of the Text. We want to use Text (or ByteString, or sometimes Vector) because it's a more efficient representation of data, but we don't want to deal with this overhead.

This is where the chunked functions in conduit come into play. In addition to functions that work directly on the values in a stream, we have functions that work on the elements inside those values. These functions get a CE suffix instead of C, and are very straightforward to use. To see it in action:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Char (toUpper)

main :: IO ()
main = runConduitRes
     $ sourceFile "input.txt"
    .| decodeUtf8C
    .| omapCE toUpper
    .| encodeUtf8C
    .| stdoutC

NOTE We also had to prepend o to get the monomorphic mapping function, since Text is a monomorphic container.
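
For chunks that are polymorphic containers (a Functor, like a list), the plain CE variants apply the function to each element directly. A small sketch, assuming the mapCE combinator exported alongside omapCE:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

-- Each chunk is a list of Ints; mapCE maps over the elements inside each chunk.
main :: IO ()
main = runConduit
     $ yieldMany [[1, 2, 3], [4, 5, 6 :: Int]]
    .| mapCE (* 10)
    .| mapM_C print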

We can use this for other things too. For example, let's get just the first line of content:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Char (toUpper)

main :: IO ()
main = runConduitRes
     $ sourceFile "input.txt"
    .| decodeUtf8C
    .| takeWhileCE (/= '\n')
    .| encodeUtf8C
    .| stdoutC

Or just the first 5 bytes:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduitRes
     $ sourceFile "input.txt"
    .| takeCE 5
    .| stdoutC

There are many other functions available for working on chunked data. In fact, most non-chunked functions have a chunked equivalent. This means that most of the intuition you've built up for working with streams of values will automatically translate to dealing with chunked streams, a big win for binary and textual processing.

EXERCISE Try to implement the takeCE function on ByteStrings. Hint: you'll need to use leftover to make it work correctly!

ZipSink

So far we've had very linear pipelines: a component feeds into exactly one downstream component, and so on. However, sometimes we may wish to allow for multiple consumers of a single stream. As a motivating example, let's consider taking the average of a stream of Doubles. In the list world, this may look like:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
doubles :: [Double]
doubles = [1, 2, 3, 4, 5, 6]

average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = print $ average doubles

However, performance aficionados will quickly point out that this has a space leak: the list will be traversed once for the sum, kept in memory, and then traversed a second time for the length. We could work around that by using lower-level functions, but we lose composability. (Though see the foldl package for composable folding.)
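
For reference, here's a sketch of that lower-level, single-pass workaround: a single strict left fold carries both the running sum and the count, at the cost of the nice compositional style. Compare it with the conduit version that follows.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

doubles :: [Double]
doubles = [1, 2, 3, 4, 5, 6]

-- One pass over the list, accumulating the sum and the length together.
average :: [Double] -> Double
average xs = total / fromIntegral len
  where
    (total, len) = foldl' step (0, 0 :: Int) xs
    step (!s, !n) x = (s + x, n + 1)

main :: IO ()
main = print $ average doubles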

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

doubles :: [Double]
doubles = [1, 2, 3, 4, 5, 6]

average :: Monad m => ConduitT Double Void m Double
average =
    getZipSink (go <$> ZipSink sumC <*> ZipSink lengthC)
  where
    go total len = total / fromIntegral len

main :: IO ()
main = print $ runConduitPure $ yieldMany doubles .| average

ZipSink is a newtype wrapper which provides a different Applicative instance than the standard one for ConduitT. Instead of sequencing the consumption of a stream, it allows two components to consume in parallel. Now, our sumC and lengthC are getting values at the same time, and those values can be thrown away immediately. This leads to easy composition and constant memory usage.

NOTE Both the list and conduit versions of this are subject to a divide-by-zero error. You'd probably in practice want to make average return a Maybe Double.
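
Here's a minimal sketch of that safer variant (averageMay is a name of my own choosing):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

averageMay :: Monad m => ConduitT Double Void m (Maybe Double)
averageMay =
    getZipSink (go <$> ZipSink sumC <*> ZipSink lengthC)
  where
    go :: Double -> Int -> Maybe Double
    go _     0   = Nothing
    go total len = Just (total / fromIntegral len)

main :: IO ()
main = do
    print $ runConduitPure $ yieldMany [1..6 :: Double] .| averageMay  -- Just 3.5
    print $ runConduitPure $ yieldMany ([] :: [Double]) .| averageMay  -- Nothing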

Another real world example of ZipSink is when you want to both consume a file and calculate its cryptographic hash. Working with the cryptonite and cryptonite-conduit libraries:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Crypto.Hash.Conduit (sinkHash)
import Crypto.Hash (Digest, SHA256)

main :: IO ()
main = do
    digest <- runConduitRes
            $ sourceFile "input.txt"
           .| getZipSink (ZipSink (sinkFile "output.txt") *> ZipSink sinkHash)
    print (digest :: Digest SHA256)

Or we can get slightly more inventive, and read from an HTTP connection instead of a local file:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import Crypto.Hash.Conduit (sinkHash)
import Crypto.Hash (Digest, SHA256)
import Network.HTTP.Simple (httpSink)

main :: IO ()
main = do
    digest <- runResourceT $ httpSink "http://httpbin.org"
              (\_res -> getZipSink (ZipSink (sinkFile "output.txt") *> ZipSink sinkHash))
    print (digest :: Digest SHA256)

This provides a convenient and efficient method to consume data over a network connection.

ZipSource

Let's keep a good thing going. In addition to consuming in parallel, we may wish to produce in parallel. For this, we'll use the ZipSource newtype wrapper, which is very similar in concept to ZipList, for those familiar with it. As a simple example, let's create a stream of the Fibonacci numbers, together with each one's index in the sequence:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

fibs :: [Int]
fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

indexedFibs :: ConduitT () (Int, Int) IO ()
indexedFibs = getZipSource
    $ (,)
  <$> ZipSource (yieldMany [1..])
  <*> ZipSource (yieldMany fibs)

main :: IO ()
main = runConduit $ indexedFibs .| takeC 10 .| mapM_C print

ZipConduit

To round out the collection of newtype wrappers, we've got ZipConduit, which is certainly the most complicated of the bunch. It lets you combine several transformers in such a way that the result will:

  • Drain all of the ZipConduits of all yielded values, until they are all awaiting
  • Grab the next value from upstream, and feed it to all of the ZipConduits
  • Repeat

Here's a silly example of using it, which demonstrates its most common use case: focusing in on a subset of a stream. We split a stream of numbers into evens (Left) and odds (Right). Then we have two transformers that each look at only half the stream, and combine those two transformers together into a single transformer that looks at the whole stream:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

tagger :: Monad m => ConduitT Int (Either Int Int) m ()
tagger = mapC $ \i -> if even i then Left i else Right i

evens, odds :: Monad m => ConduitT Int String m ()
evens  = mapC $ \i -> "Even number: " ++ show i
odds   = mapC $ \i -> "Odd  number: " ++ show i

left :: Either l r -> Maybe l
left = either Just (const Nothing)

right :: Either l r -> Maybe r
right = either (const Nothing) Just

inside :: Monad m => ConduitT (Either Int Int) String m ()
inside = getZipConduit
    $ ZipConduit (concatMapC left  .| evens)
   *> ZipConduit (concatMapC right .| odds)

main :: IO ()
main = runConduit $ enumFromToC 1 10 .| tagger .| inside .| mapM_C putStrLn

In my experience, the most useful of the three newtype wrappers is ZipSink, but your mileage may vary.

Forced consumption

Remember that, in our evaluation method for conduit, we stop processing as soon as downstream stops. There are some cases where this is problematic, specifically when we want to ensure a specific amount of data is consumed. Consider:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

withFiveSum :: Monad m
            => ConduitT Int o m r
            -> ConduitT Int o m (r, Int)
withFiveSum inner = do
    r <- takeC 5 .| inner
    s <- sumC
    return (r, s)

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| withFiveSum sinkList

Our withFiveSum function will let the provided inner conduit work on the first five values in the stream, then take the sum of the rest. All seems well, but now consider if we replace sinkList with return (). Our takeC 5 .| return () will no longer consume any of the first five values, and sumC will end up consuming them. Depending on your use case, this could be problematic, and very surprising.

We can work around this by forcing all other values to be dropped, e.g.:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

withFiveSum :: Monad m
            => ConduitT Int o m r
            -> ConduitT Int o m (r, Int)
withFiveSum inner = do
    r <- takeC 5 .| do
        r <- inner
        sinkNull
        return r
    s <- sumC
    return (r, s)

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| withFiveSum (return ())

However, there's also a convenience function which captures this pattern: takeExactlyC:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

withFiveSum :: Monad m
            => ConduitT Int o m r
            -> ConduitT Int o m (r, Int)
withFiveSum inner = do
    r <- takeExactlyC 5 inner
    s <- sumC
    return (r, s)

main :: IO ()
main = print $ runConduitPure $ yieldMany [1..10] .| withFiveSum (return ())

Notice that there's no .| operator between takeExactlyC 5 and inner. That's not a typo! takeExactlyC isn't actually a conduit, it's a combinator which, when given a conduit, will generate a conduit.

EXERCISE Try to write takeExactlyC as a conduit itself, and/or convince yourself why that's impossible.

This same kind of pattern is used to deal with the stream-of-streams problem. As a motivating example, consider processing a file, and wanting to work on it one line at a time. One possibility is to simply break the stream into one Text per line, but this can be dangerous if your input is untrusted and may contain unboundedly long lines. Instead, we can just do:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduitRes $ sourceFile "input.txt" .| decodeUtf8C .| do
    len <- lineC lengthCE
    liftIO $ print len

This program will print out the length of the first line of the input file. However, by combining with the peekForeverE combinator - which will continuously run a conduit as long as there is some input available in a chunked stream - we can print out the length of each line:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = runConduitRes $ sourceFile "input.txt" .| decodeUtf8C .| peekForeverE (do
    len <- lineC lengthCE
    liftIO $ print len)

FAQs

  • How do you deal with an upstream conduit that has a return value? Use the special fusion functions provided for this; see the Haddocks for Data.Conduit.
  • How do you capture unconsumed leftover values? Again, use the special fusion functions; see the Haddocks.
  • How do I run a source, take some of its output, and then run the rest of it later? Use connect-and-resume (see the sketch below).
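
For the last item, here's a minimal sketch of connect-and-resume, assuming the $$+ and $$+- operators exported by Data.Conduit:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Conduit (($$+), ($$+-))

main :: IO ()
main = do
    -- $$+ runs the source just far enough to satisfy the first sink and
    -- hands back a sealed source holding the rest of the stream
    (sealedSource, firstFive) <- yieldMany [1..10 :: Int] $$+ (takeC 5 .| sinkList)
    print firstFive               -- [1,2,3,4,5]
    -- $$+- finishes off the sealed source with a final sink
    rest <- sealedSource $$+- sinkList
    print rest                    -- [6,7,8,9,10]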

More exercises

Write a conduit that consumes a stream of Ints. It takes the first Int from the stream, and then multiplies all subsequent Ints by that number and sends them back downstream. You should use the mapC function for this.

Take a file and, for each line, print out the number of bytes in the line (try using bytestring directly and then conduit).

Further exercises are wanted; please feel free to send PRs!

Legacy syntax

As of version 1.2.8 of conduit, released September 2016, the operators and function names used above are the recommended ones. However, prior to that, an alternate set of functions and operators was used instead. You may still find code and documentation out there which follows the legacy syntax, so it's worth being aware of it. Basically:

  • Instead of .|, we had three operators: $=, =$, and =$=. These were all synonyms, and existed for historical reasons.
  • The $$ operator is a combination of runConduit and .|.

To put it simply in code:

x $=  y = x .| y
x =$  y = x .| y
x =$= y = x .| y
x $$  y = runConduit (x .| y)

If the old operators seem needlessly confusing/redundant... well, that's why we have new operators :).

Prior to the 1.3.0 release in February 2018, there were different data types and type synonyms available. In particular, instead of ConduitT, we had ConduitM, and we also had the following synonyms:

type Source     m o   =           ConduitM () o    m ()
type Sink     i m   r =           ConduitM i  Void m r
type Conduit  i m o   =           ConduitM i  o    m ()
type Producer   m o   = forall i. ConduitM i  o    m ()
type Consumer i m   r = forall o. ConduitM i  o    m r

These older names are all still available, but they've been deprecated to simplify the package.
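
For example, expanding a few concrete legacy signatures (these are just the synonyms above instantiated):

Source IO Int         = ConduitT ()         Int  IO ()
Conduit Text IO Text  = ConduitT Text       Text IO ()
Sink ByteString IO () = ConduitT ByteString Void IO ()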

Further reading

Some blog posts making heavy use of conduit:

If you have other articles to include, please send a PR!


conduit's Issues

Cabal categories

It's confusing to find attoparsec in the Parsing category on the Hackage homepage, but not attoparsec-conduit. I'd also argue that the "Data" category isn't really very descriptive.

invalid preprocessing directive

Unfortunate mixing of multiline comments and hash symbols gives this when compiling:

Data/Conduit/Internal.hs:607:4:
     error: invalid preprocessing directive
       #-}
      ^

Data/Conduit/Internal.hs:340:4:
     error: invalid preprocessing directive
      #-}
       ^

for conduit 1.0.9
Glasgow Haskell Compiler, Version 7.6.3, stage 2 booted by GHC version 7.4.2
on mac os x Maverick (10.9)

License inconsistencies

Hi, I think I noticed some inconsistencies in some of the *-conduit packages' licensing:

  • attoparsec-conduit claims BSD but src header says MIT
  • (minor) conduit/Data/Conduit/Text.hs says it is MIT
  • filesystem-conduit says BSD but src file is MIT
  • network-conduit-tls: LICENSE and .cabal disagree
  • unix-process-conduit: LICENSE and .cabal disagree

Add sourceHandle and sinkHandle to Data.Conduit.Binary

Michael,

For full-duplex socket operations (e.g., a telnet client) it's necessary to use forkIO like this:

hSetBuffering stdin NoBuffering
hSetBuffering stdout NoBuffering
_ <- forkIO $ runResourceT $ sourceHandle stdin $$ sinkHandle hsock
runResourceT $ sourceHandle hsock $$ sinkHandle stdout

but that requires functions sourceHandle and sinkHandle. If you like this then I'll whip up a patch (I already have working code for these functions).

Named pipes and other non-regular files cause filesystem-conduit to hang

Hello, I was able to reproduce a hanging bug in filesystem-conduit's traverse function, in which it attempts to open a fifo/named pipe and then proceeds to block/hang on the pipe.

[paul@imaskar:~] :) $ stat .icedteaplugin/icedtea-plugin-to-appletviewer 
  File: ‘.icedteaplugin/icedtea-plugin-to-appletviewer’
  Size: 0           Blocks: 0          IO Block: 4096   fifo
Device: fd04h/64772d    Inode: 4543662     Links: 1
Access: (0700/prwx------)  Uid: ( 1000/    paul)   Gid: ( 1000/    paul)
Access: 2009-08-12 01:41:30.000000000 -0700
Modify: 2009-08-12 01:58:37.000000000 -0700
Change: 2009-08-12 01:58:37.000000000 -0700
 Birth: -

Above is a plain stat on the fifo/named pipe, and below is a minimal example of this case in Haskell.

import Data.Conduit.Filesystem (traverse)
import qualified Data.Conduit.List as CL
import Filesystem.Path.CurrentOS (decodeString)
import Data.Conduit (($$), runResourceT)

main :: IO ()
main = do
    let p = "/home/paul/.icedteaplugin/"
    runResourceT $ traverse False (decodeString p) $$ CL.consume
    return ()

-- runghc test.hs
--
-- # This blocks and hangs
--
-- bash(8033)───runghc(12339)───ghc(12340)─┬─{ghc}(12341)
--                                         └─{ghc}(12342)
--
-- strace -p 12342
-- Process 12342 attached
-- open("/home/paul/.icedteaplugin/icedtea-plugin-to-appletviewer", O_RDONLY
--
-- Now the icedtea-plugin-to-appletviewer is a NamedPipe

More Conduit.Binary lazy ByteString functions

Could we have:

  • sinkHandleLbs
  • sinkIOHandleLbs
  • sinkFileLbs

I keep getting some ByteStrings in, appending things to them, and writing them back to files. Conduit gives me sinkLbs, but that does another full copy.

join/squash for ResourceT

I feel like it should be possible to implement this:

ResourceT (ResourceT m) a -> ResourceT m a

Though I'm not sure. This came up when I was playing with error handling in WAI and came upon a case where I wanted to reach inside ResourceT with transResourceT to map my EitherT to something else. When I have an error handler that lives in IO this is fine, because the base monad moves from EitherT IO to IO and everyone is happy. The problem is when I want to use an error handler in ResourceT IO (useful so that I can just use any Application as an error handler), I then end up with ResourceT (ResourceT IO) and at that point I'm stuck.

I don't need the extra power for now (all of my error handlers can live in IO directly, and most of them are pure), but it would be nice.

Tee with conduit internals

@snoyberg Do you think it is possible to write a function

tee2 :: Monad m => Sink a m () -> ConduitM a a m b

The original tee function is in http://hackage.haskell.org/package/conduit-extra-0.1.1/docs/src/Data-Conduit-Extra-Pipes.html#tee, but it has type

tee :: Monad m => Sink a (ConduitM a a m) b -> ConduitM a a m b

which does not compose as well - at least as far as I understand, since I cannot plug any existing sink into tee's first argument.

I have the intuition that writing tee2 using conduit internals, similar to zipSinks, should be possible. Semantics would be:

  • Let's give some names: tee2 innerSink :: ConduitM a a m b
  • The outer ConduitM would await a value x, then stuff it into innerSink.
  • Then innerSink will be done or demand a value; we would now yield x to the next outer conduit.
  • Repeat forever.
  • The return type of the inner sink would be unit so that the outer ConduitM does not need to run it to completion in order to terminate.

So it is pretty much the same as tee, but without the monad stack (and thus needing internals like NeedInput and Done).

Does this make any sense?

Failed Windows build

Hackage version:

System\Win32File.hsc:3:8:
    File name does not match module name:
    Saw: `System.Win32FileRead'
    Expected: `System.Win32File'

Deallocation order of ResourceT

I just recently learned that resources are deallocated strictly in FIFO order when runResourceT terminates, which is exactly the opposite of what I expected and would need for my project.

I'm not entirely clear on what the reasoning behind this was, and would appreciate if there was a way to specify LIFO deallocation if desired. It's not immediately obvious to me, however, how resourceForkIO could be kept consistent with its enclosing ResourceT.

Do you see a way to provide such a feature, or have any concerns?

Make it possible to lift ResourceT IO to any MonadResource

As discussed with Michael via e-mail, this would require changing the MonadResource class. I think a lot of code would benefit from being written in ResourceT IO and only lifted when necessary. The resource operations (register, release, allocate, resourceMask) would be changed to operate on ResourceT directly, and MonadResource would mainly consist of a liftResourceT operation. For compatibility, there would also have to be lifted resource operations, of course.

Perhaps there are even performance benefits to be had, because fewer dictionary lookups are involved?

@snoyberg If you want, we can attach the e-mail discussion here.

Missing lift in tee

tee haddocks in http://hackage.haskell.org/package/conduit-extra-0.1.1/docs/Data-Conduit-Extra-Pipes.html#v:tee:

each [1..3] >-> tee (P.mapM_ f) >-> P.mapM_ f

It took me a bit of time to figure out what this does / how it works, but I think I understand it now; explaining it a bit in the docs would be nice.

It would also be interesting to discuss how this compares to zipSinks. By the way, I currently think that this concept of "branching out" gives rise to something somewhere between pipe-streaming libraries and FRP, namely libraries that describe trees of computation (one input streamed into multiple computations), while usual pipes are linear. The difference from FRP is that it doesn't allow loops.

Anyway, I think that there is a lift missing in that example above. This more concrete example compiles for me:

CL.sourceList [1..3] =$= tee (lift $ CL.mapM_ print) $$ CL.mapM_ print

However, it doesn't do what I expect; the output is

1
2
3

What's going on?

(Adding this kind of non-pipes example would also be helpful.)
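
As a point of comparison, the current conduit API expresses this kind of "branching out" with ZipSink; a minimal sketch:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Conduit (ZipSink (..), getZipSink)

-- every element from the source is handed to both sinks before the
-- next element is pulled
main :: IO ()
main = runConduit $
    yieldMany [1 .. 3 :: Int]
        .| getZipSink (ZipSink (mapM_C print) *> ZipSink (mapM_C print))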

Generalize sinkState to GLSink

I was reworking some code from a custom sink to use something based on Data.Conduit.Util.sinkState. This didn't turn out well though, since sinkState creates a "Sink input m output", whilst my original code resulted in a "GLSink input m output".

I duplicated sinkState for now, slightly altering it to create the GLSink (basically, changing "() -> close foo" into "_ -> close foo").

Am I doing something completely wrong here, and should sinkState indeed construct a Sink and not a GLSink, or could this be generalized upstream as well?

lazyConsume stack overflows on forever (yield x)

The following produces a stack overflow, instead of printing 'x' over and over:

import Control.Monad
import Control.Monad.IO.Class
import Data.Conduit
import Data.Conduit.Lazy (lazyConsume)

main = (lazyConsume $ forever $ yield 'x') >>= mapM_ print

These variants also don't work (binding return () before and after):

(lazyConsume $ forever $ yield 'x' >> return ()) >>= mapM_ print
(lazyConsume $ forever $ return () >> yield 'x') >>= mapM_ print

But these variants do work (binding liftIO (return ()) before and after):

(lazyConsume $ forever $ liftIO (return ()) >> yield 'x') >>= mapM_ print
(lazyConsume $ forever $ yield 'x' >> liftIO (return ())) >>= mapM_ print

If I understand correctly, forever $ yield 'x' expands to:

HaveOutput (HaveOutput (HaveOutput (HaveOutput (HaveOutput ( ...

lazyConsume does not insert an unsafeInterleaveIO when it recurses here.

Extending network-conduit

Hi, Michael. I'm interested in extending network-conduit, in particular adding UDP support. This may require some changes to the sourceSocket and sinkSocket signatures (sourceSocket currently relies on connection-oriented socket behavior: it indicates a closed connection by returning a zero-length bytestring; for a connectionless protocol this does not hold, since a zero-length bytestring is just as acceptable as any other). I've found 5 libraries that depend on network-conduit, and I'm afraid my changes may break them, especially mighttpd2, which does not specify a particular version of network-conduit. I'd like to know your opinion on my idea and your plans for network-conduit - should I just fork and proceed with the changes, or something else?
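
For what it's worth, a rough sketch of what a UDP source might look like (sourceUdp is a hypothetical name, not an existing function; the real design question remains how it should relate to sourceSocket's end-of-stream convention):

import Conduit
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import qualified Data.ByteString as BS
import Network.Socket (SockAddr, Socket)
import Network.Socket.ByteString (recvFrom)

-- hypothetical UDP source: each datagram (with its sender) becomes one
-- stream element; a zero-length datagram is a normal message, not EOF
sourceUdp :: MonadIO m => Socket -> Int -> ConduitT i (BS.ByteString, SockAddr) m ()
sourceUdp sock bufSize = forever $ do
    (bytes, addr) <- liftIO (recvFrom sock bufSize)
    yield (bytes, addr)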

Why is ($$) implemented using connectResume?

I hope I'm not abusing the issue mechanism, but I just have some questions where I'm not sure whether I'm misunderstanding things or whether it's a bug.

Assuming #62 is implemented, could ($$) not be implemented like this?

x $$ y = runPipe (x `pipeL` y)

Allow concurrently-running conduits.

Often, I want to run different parts of a pipeline in parallel, very similar to unix pipes. I find myself avoiding conduits for tasks such as:

loadTexture "myPicture.png" $$ uploadToGraphicsCard

What I really want to do with this is something like:

concurrently (loadTexture "myPicture.png") $$ uploadToGraphicsCard

Where the "concurrently" combinator would run the "loadTexture" task in a forkIO'd thread, putting the results into a Chan, and then read from the Chan when uploadToGraphicsCard pulls a result.

This would currently be relatively annoying to implement, since (afaik) you need a concurrently combinator for sources, sinks, and conduits. Is there any way to unify these?

I think with this change, conduit will be a formidable force in resource management. I don't currently know of any other library which does this, but I find myself wishing for it on many occasions.
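
For illustration, here is a rough sketch of the source half of such a combinator, using a plain Chan and the async package; concurrentlySource is a hypothetical name, and a production version would want a bounded queue and exception handling:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Control.Concurrent.Async (concurrently)
import Control.Concurrent.Chan
import Control.Monad.IO.Class (liftIO)

-- run a producer in its own thread, feed its output through a Chan, and
-- hand the consumer a source that reads from that Chan
concurrentlySource :: ConduitT () o IO () -> (ConduitT () o IO () -> IO r) -> IO r
concurrentlySource producer withSource = do
    chan <- newChan
    let feeder = do
            runConduit (producer .| mapM_C (writeChan chan . Just))
            writeChan chan Nothing            -- end-of-stream marker
        source = do
            mx <- liftIO (readChan chan)
            case mx of
                Nothing -> pure ()
                Just x  -> yield x >> source
    snd <$> concurrently feeder (withSource source)

main :: IO ()
main = concurrentlySource (yieldMany [1 .. 5 :: Int]) $ \src ->
    runConduit (src .| mapM_C print)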

Clearly document intended usage of transPipe

I recently helped someone on #haskell IRC who misunderstood the behavior of transPipe. See http://hpaste.org/73538 for the buggy code and the annotated fix. Basically, the documentation for transPipe should indicate that the function it is given may be used repeatedly, so giving it something like flip evalStateT 1 will probably not produce the expected result.
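
To make the pitfall concrete, here is a small sketch against the current API: transPipe applies the supplied function to each lifted action separately, so the state below restarts at 1 for every element instead of counting up.

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Conduit (transPipe)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- a counter that would yield 1, 2, 3 if the state were threaded through
counter :: ConduitT () Int (StateT Int IO) ()
counter = yieldMany [(1 :: Int) .. 3] .| mapMC (\_ -> do
    n <- get        -- ignore the incoming element; just step the counter
    put (n + 1)
    pure n)

main :: IO ()
main = runConduit $
    transPipe (`evalStateT` 1) counter .| mapM_C print
    -- prints 1, 1, 1 (each step runs under a fresh evalStateT)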

Why not make MonadResource a subclass of Applicative?

In older versions of conduit, the ResourceT data type was an instance of Applicative, so we could write:

sourceTChan :: ResourceIO m => TChan a -> Source m a
sourceTChan ch = sourceIO (return ()) return puller
  where
    puller () = IOOpen <$> (liftIO . atomically $! readTChan ch)

In the new versions, conduit-0.3 and resourcet-0.3, however, the API for manipulating conduits was changed to use the MonadResource class instead of the ResourceT data type. So I rewrote my own library to follow those rules, like below:

sourceTChan :: MonadResource m => TChan a -> Source m a
sourceTChan ch = sourceIO (return ()) return puller
  where
    puller () = IOOpen <$> (liftIO . atomically $! readTChan ch)

but this code does not compile because MonadResource is not a subclass of Applicative.
To make this code work, I have to add Applicative m to the type constraints or use liftM instead of (<$>).

So, this is my proposal: why not make MonadResource a subclass of Applicative?

Because every Monad is also an Applicative and most libraries provide Applicative instances for their own monads, I think this change would not cause a tragic break in libraries and would reduce the cost of rewriting applicative-style old code.

Documentation of Source out of sync.

The documentation of type Source m a = Pipe Void a m () states that:

"...The input parameter is set to () instead of Void since there is no way to statically guarantee that the NeedInput constructor will not be used..."

Should the docs or the definition be updated? I guess the latter...

Memory usage increases when processing long sequences

Conduit 0.4 seems to contain a space leak (maybe just a memory leak).

The following code consumes a large amount of memory (/tmp/tmp is a large file):

import Control.Monad                                    
import Data.Conduit as C                                
import Data.Conduit.Binary as CB                        
import Data.Conduit.List as C                           

foo :: FilePath -> IO ()                                
foo path = runResourceT $ do
  sourceFile path $= C.sequence (CB.take 1) $$ sinkNull 

main :: IO ()                                           
main = forM_ [1..2] $ \_ -> foo "/tmp/tmp"                

The following code does not leak; I think GHC's optimizer does a good job here:

main :: IO ()                                           
main = foo "/tmp/tmp"                

I think the bind function of the Pipe monad (https://github.com/snoyberg/conduit/blob/master/conduit/Data/Conduit/Internal.hs#L131) creates a new, slightly larger early-close function each time,
and that early-close function is not executed until all of the data has been consumed,
so eventually a very large early-close function is built up
(typically, most of it is meaningless: (return ()) >> (return ()) >> (return ()) >> ...).
(conduit < 0.4 has the same problem, but consumes less memory.)
I don't think this can be avoided with the current representation of conduit (Pipe),
and it is a critical problem for processing large amounts of data (or infinite streams, for example the Twitter streaming API).

To fix it, the close functions could, for example, be made Maybe values:

data Pipe i o m r =
    HaveOutput (Pipe i o m r) (Maybe (m r)) o
  | NeedInput (i -> Pipe i o m r) (Pipe i o m r)
  | Done (Maybe i) r
  | PipeM (m (Pipe i o m r)) (Maybe (m r))

This would get rid of the meaningless (return ()) chain, but then how do I construct the return value of Pipe?
Which approach to avoiding this problem would you prefer?

ResourceIO alternative?

Conduit 0.2 had a wonderful class, ResourceIO. How do I replace it in conduit 0.3?

For example, I can't just replace it with (MonadResource m, MonadBaseControl IO m) => ... like this. When I do that, I get the following:

No instance for (Control.Monad.Trans.Resource.MonadResource IO)
  arising from a use of `runCouch'
Possible fix: add an instance declaration for
  (Control.Monad.Trans.Resource.MonadResource IO)
In the expression: runCouch conn
In the expression: runCouch conn $ couchDeleteDB n
In an equation for `tearDB': tearDB n = runCouch conn $ couchDeleteDB n
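
For what it's worth, the general shape of the fix looks like the sketch below (the names are hypothetical, not from couchdb-conduit): IO itself has no MonadResource instance, so a MonadResource action has to be run under runResourceT first.

import Control.Monad.Trans.Resource

-- hypothetical action standing in for runCouch-style code
needsResources :: MonadResource m => m ()
needsResources = () <$ register (putStrLn "cleanup")

-- run it in plain IO by introducing the ResourceT layer explicitly
runIt :: IO ()
runIt = runResourceT needsResources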

Naming of 'with' and 'withIO'.

Michael,

I think the names of these functions are a little too general. What do you think of the idea of:

  • Renaming them to withResourceT and withResourceIOT
  • Making 'with' and 'withIO' aliases of the above, but marked as deprecated.

If you like this idea, I'll do the hacking and provide you with a pull request.

Cheers,
Erik

GHC 7.4.2 cannot build conduit 1.0.9.2

Here is the error message:

Building conduit-1.0.9.2...
Preprocessing library conduit-1.0.9.2...
[2 of 8] Compiling Data.Conduit.Internal ( Data/Conduit/Internal.hs, dist/build/Data/Conduit/Internal.o )

Data/Conduit/Internal.hs:211:105:
    Cannot instantiate unification variable ‛b0’
    with a type involving foralls:
      (forall a. m1 a -> n1 a)
      -> ConduitM i o m1 b1 -> ConduitM i o n1 b1
      Perhaps you want ImpredicativeTypes
    In the expression:
        GHC.Exts.coerce
          (hoist ::
             (forall (a :: *). m a -> n a)
             -> Pipe i i o () m b -> Pipe i i o () n b) ::
          forall (m :: * -> *) (b :: *) (n :: * -> *). Monad m =>
          (forall (a :: *). m a -> n a)
          -> ConduitM i o m b -> ConduitM i o n b
    In an equation for ‛hoist’:
        hoist
          = GHC.Exts.coerce
              (hoist ::
                 (forall (a :: *). m a -> n a)
                 -> Pipe i i o () m b -> Pipe i i o () n b) ::
              forall (m :: * -> *) (b :: *) (n :: * -> *). Monad m =>
              (forall (a :: *). m a -> n a)
              -> ConduitM i o m b -> ConduitM i o n b

Using Conduit.List.drop in a pipeline has unexpected consequences

Consider following code on conduit 1.0.4.2:

import Data.CSV.Conduit
import Data.Conduit
import Data.Conduit.List as C
import Data.Default
import System.IO


test1 :: FilePath -> IO [a]
test1 fp = runResourceT $
    C.sourceFile fp $$
    intoCSV def =$
    (C.drop 1 :: Monad n => Conduit (Row ByteString) n (Row ByteString)) =$
    C.mapM_ (\ (_ :: Row ByteString) -> liftIO $ print "blerg") =$
    C.consume

This typechecks fine, yet if the "drop" is there, the mapM_ never gets called, no matter how big a CSV file you supply to the function. You will not see a single line of "blerg" printed. Comment out the drop 1 line and it'll work as expected, with one "blerg" per row in your CSV input.

In fact, you can replace the 'a' type parameter with ANY type you want and it will continue to typecheck!!! Weird!

It is also annoying to use combinators like take/drop, as they require you to specify the types passing through them (if they contain any type-class-constrained parameters) even when a later pipeline stage makes it unambiguous. This is a minor issue, however, as one can always quickly jot down the expected type.

I haven't looked into it enough, but I suspect some combination of the latest Consumer/Producer existentially quantified synonyms with rewrite rules (?) is causing odd behavior around the edges.
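
For comparison, the same consume-only behavior of drop shows up with dropC in the current API; a small sketch (not the original CSV reproduction):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit

main :: IO ()
main = do
    -- dropC only consumes: it drops one element, finishes, and nothing
    -- ever reaches the downstream print
    runConduit $ yieldMany [1 .. 3 :: Int] .| dropC 1 .| mapM_C print

    -- to drop and keep going, follow it with a stage that re-yields
    -- the remaining input (prints 2 and 3)
    runConduit $ yieldMany [1 .. 3 :: Int] .| (dropC 1 >> mapC id) .| mapM_C print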

Update the README.

The README hasn't been updated for the last release. Conduits are a lot more badass than it makes them seem.

For example, it still talks about mutable state in the conduit implementation. I think it just needs a quick readthrough and editing session with one of the authors.

Missing mtl instances for ResourceT

ResourceT doesn't have instances for the mtl monad transformer classes, like MonadReader. Is there a reason for this, or can I add them? In particular, is there an instance for MonadError that is safe? If not, how can I safely get error-like behavior in a stack containing a ResourceT? Do I have to throw normal exceptions?
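
Until such instances exist, one workable arrangement (a sketch, not the only option) is to put the error layer outside ResourceT, so that runResourceT runs, and finalizers fire, before the Either is inspected:

import Control.Monad.Trans.Except (ExceptT (..), runExceptT)
import Control.Monad.Trans.Resource

-- cleanup registered inside ResourceT still runs on the error path,
-- because the Left is only produced as the ResourceT block's result
action :: ExceptT String IO ()
action = ExceptT $ runResourceT $ do
    _ <- register (putStrLn "cleanup runs here")
    pure (Left "boom")

main :: IO ()
main = runExceptT action >>= print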

Arrow instance for conduits

I'm pretty new to conduits, but it seems like an arrow interface would be much more natural than a monadic one. Are there any plans to add an arrow instance?

Adding Functionality to `Data.Conduit.Text`

There's a lot to wish for in Data.Conduit.Text. Some shortcomings (in my opinion) of the text API make a few of these a bit difficult to implement (see haskell/text#25).

The most evident to me is that there should be a way to sink and source Text in a similar way to ByteStrings. However, you would have to use one of the specific decodings or manually detect the system locale, which is ugly. We could do it that way (which is what the patch, when I finish it hopefully tonight, will do), but it would be preferable to avoid the ByteStrings altogether.

I propose the following, in conflict with the Data.Conduit.ByteString API (which makes sense given that Data.Text and Data.ByteString also conflict with each other):

sourceFile :: MonadResource m => FilePath -> Source m Text
sourceHandle :: MonadResource m => Handle -> Source m Text
sourceIOHandle :: MonadResource m => IO Handle -> Source m Text
sinkFile :: MonadResource m => FilePath -> Sink Text m ()
sinkHandle :: MonadResource m => Handle -> Sink Text m ()
sinkIOHandle :: MonadResource m => IO Handle -> Sink Text m ()
conduitFile :: MonadResource m => FilePath -> Conduit Text m Text
lines :: Monad m => Conduit Text m Text

Notably I left out sourceFileRange because that seems more specialized toward binary data.

(Also, lack of conduitHandle and friends seems odd. Any reason for this?)

Any comments are welcome.
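
In the meantime, a sketch of how the same goal is reachable with the current API by going through ByteString with an explicit decoding step (UTF-8 here, which sidesteps the locale-detection question above):

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import qualified Data.Text as T

main :: IO ()
main = do
    writeFile "input.txt" "hello text conduit"
    runConduitRes $
        sourceFile "input.txt"
            .| decodeUtf8C          -- ByteString -> Text
            .| mapC T.toUpper       -- work on Text in the middle
            .| encodeUtf8C          -- Text -> ByteString
            .| sinkFile "output.txt"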

Add support for client side code to network-conduit-tls.

(It's not something I'd need right now. I'm experimenting with different approaches to using IMAP+TLS in Haskell. Using conduit seemed like a nice way to separate IMAP from TLS, but I can do it with other, more low-level packages as well.)

Scary types

I started to translate the couchdb-enumerator library from http-enumerator to http-conduit. However, several data types scare me.

For example, compare the data types of two otherwise identical functions, old and new.

But I need something like

MonadCouch m =>
   HT.Method
-> String
-> HT.RequestHeaders
-> HT.Ascii
-> H.ResponseConsumer m b
-> H.RequestBody m
-> ResourceT m b

How can I achieve this?

"The mutable state is being accessed after cleanup" exception

I have a concurrent program, which depending on some settings spits out: Control.Monad.Trans.Resource.register': The mutable state is being accessed after cleanup. Please contact the maintainers.

I am highly unsure of where it's coming from, and whether it's caused by a mistake of mine, a bug in conduit, or even by this GHC bug, which the same program of mine suffers from under certain conditions.

Here is a short screencast describing the issue in detail. Could you please check it out?

Resumable conduits

IIRC someone mentioned this idea on haskell-cafe once. I thought this idea could be useful so I started some experiments with it. My current experiments are here. The idea is that running a (resumable) conduit against a sink produces another conduit whose result includes the next resumable conduit:

type ResumableSource = ResumableConduit ()

data ResumableConduit i m o =
    ResumableConduit (Conduit i m o) (m ())

connectResume :: Monad m
              => ResumableConduit i m o
              -> Sink o m r
              -> Sink i m (ResumableConduit i m o, r)

I kept the resumable operators in Conduit.hs with the same types as now and they're currently defined as

($$++) :: Monad m
       => ResumableSource m a
       -> Sink a m b
       -> m (ResumableSource m a, b)
($$++) src sink = mempty $$ connectResume src sink

etc. I suppose the next step would be to add new operators like =$++, =$+ and =$+- that would produce sinks instead of monadic values.

So far I haven't tested the idea, only checked that it compiles. Is it worth pursuing further?
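
For context, the source-only half of this machinery already exists; a small usage sketch with the connect-and-resume operators from Data.Conduit:

#!/usr/bin/env stack
-- stack script --resolver lts-12.21
import Conduit
import Data.Conduit (($$+), ($$+-))

main :: IO ()
main = do
    (resumable, firstTwo) <- yieldMany [1 .. 5 :: Int] $$+ (takeC 2 .| sinkList)
    print firstTwo                    -- [1,2]
    rest <- resumable $$+- sinkList   -- finish off the sealed source
    print rest                        -- [3,4,5]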

GHC 6.12.1

I am working on backporting this library to Debian stable, and am running in to the following:

Data/Conduit/List.hs:102:13:
    Inferred type is less polymorphic than expected
      Quantified type variable `i' escapes
    In the expression: Prelude.mapM_ yield
    In the definition of `sourceList': sourceList = Prelude.mapM_ yield

I sort of know where this error is from, but not why I get it while newer GHCs do not. Was this a known bug with RankNTypes in older GHC?

Compile fail

The version on Hackage won't compile currently. The API in tls-1.0.0 has changed. The error is:

(http_testing)rode:code/haskell/http_testing: cabal install network-conduit-tls 14:24:43
Resolving dependencies...
Configuring network-conduit-tls-0.6.0...
Building network-conduit-tls-0.6.0...
Preprocessing library network-conduit-tls-0.6.0...
[1 of 2] Compiling Data.Conduit.Network.TLS.Internal ( Data/Conduit/Network/TLS/Internal.hs, dist/build/Data/Conduit/Network/TLS/Internal.o )
[2 of 2] Compiling Data.Conduit.Network.TLS ( Data/Conduit/Network/TLS.hs, dist/build/Data/Conduit/Network/TLS.o )

Data/Conduit/Network/TLS.hs:66:20: Not in scope: `TLS.serverWith'

Data/Conduit/Network/TLS.hs:89:15:
`TLS.pWantClientCert' is not a (visible) constructor field name
cabal: Error: some packages failed to install:
network-conduit-tls-0.6.0 failed during the building phase. The exception was:
ExitFailure 1

It requires tls<=0.9.11 (which in turn limits to tls-extra<=0.4.7) to compile.

test failed: text utf8 raw bytes

Here is the test log:

1) text utf8 raw bytes
Exception: 'Cannot decode byte '\x0': Data.Text.Encoding.streamDecodeUtf8With: Invalid UTF-8 stream' (after 34 tests and 2 shrinks): 
[194,0]

Randomized with seed 4611685479682440334

Finished in 2.8940 seconds
134 examples, 1 failure
Test suite test: FAIL
Test suite logged to: dist/test/conduit-1.0.9.3-test.log

conduit-0.5.2.3 fails to build with GHC-7.6.1

When trying to build conduit with the recently released GHC-7.6.1, the following compile errors are emitted:

Building conduit-0.5.2.3...
Preprocessing library conduit-0.5.2.3...
[ 1 of 11] Compiling System.PosixFile ( dist/build/System/PosixFile.hs, dist/build/System/PosixFile.o )
[ 2 of 11] Compiling Data.Conduit.Internal ( Data/Conduit/Internal.hs, dist/build/Data/Conduit/Internal.o )
[ 3 of 11] Compiling Data.Conduit.Util.Source ( Data/Conduit/Util/Source.hs, dist/build/Data/Conduit/Util/Source.o )
[ 4 of 11] Compiling Data.Conduit.Util.Sink ( Data/Conduit/Util/Sink.hs, dist/build/Data/Conduit/Util/Sink.o )
[ 5 of 11] Compiling Data.Conduit.Util.Conduit ( Data/Conduit/Util/Conduit.hs, dist/build/Data/Conduit/Util/Conduit.o )
[ 6 of 11] Compiling Data.Conduit.Util ( Data/Conduit/Util.hs, dist/build/Data/Conduit/Util.o )
[ 7 of 11] Compiling Data.Conduit     ( Data/Conduit.hs, dist/build/Data/Conduit.o )
[ 8 of 11] Compiling Data.Conduit.List ( Data/Conduit/List.hs, dist/build/Data/Conduit/List.o )
[ 9 of 11] Compiling Data.Conduit.Binary ( Data/Conduit/Binary.hs, dist/build/Data/Conduit/Binary.o )

Data/Conduit/Binary.hs:55:15:
    Illegal polymorphic or qualified type: GSource m S.ByteString
    Perhaps you intended to use -XRankNTypes or -XRank2Types
    In the type signature for `sourceFile':
      sourceFile :: MonadResource m => FilePath -> GSource m S.ByteString

Data/Conduit/Binary.hs:75:17:
    Illegal polymorphic or qualified type: GSource m S.ByteString
    Perhaps you intended to use -XRankNTypes or -XRank2Types
    In the type signature for `sourceHandle':
      sourceHandle :: MonadIO m => IO.Handle -> GSource m S.ByteString
...

Support base-4.2.0.0

Would it be very hard to add support for this? I'm trying to build a WAI/Warp app for deployment on Debian stable, and this is the version of base easily available there.

Doc issue

The link in the Description: field of the conduit cabal-file 404s.

Seeming bug in UTF8 decoding

After merging in #105 by @rycee, I now get a test suite failure when compiling project-template. To reproduce:

cabal install project-template-0.1.3 --enable-tests --constraint 'conduit == 1.0.7.2'

Test output:

1) Text.ProjectTemplate.create/unpack is idempotent FAILED
Exception: 'Data.Conduit.Text.utf8: expected non-continuation byte' (after 21 tests): 
Helper (fromList [(FilePath "96f\246\203J","\DEL\130\180\EOT\169\243\202\144f\235\201\129\213\191\240)-"),(FilePath "M","L\NAK\138"),(FilePath "StMc\250d","_:\133r\ETBL\235\n%"),(FilePath "foo","\205\147\158\237\214P\233\175w\129\184\202"),(FilePath "g\178bm","\243\251\bW\150\213\207\221?\NAK"),(FilePath "q","\208\166\SUB\232"),(FilePath "r\255PVbm\195L\236Wd","\152*D\147=^U\RS\228\ETB\180U\153\235\217"),(FilePath "u1A\203PET",">j\FS\b\ETX\140\189\199T!3\226\244\131"),(FilePath "\218\196\200f9\189n","\180\DEL\244\DEL\USc\163\247k\253")])

I have not had a chance to research this yet.

conduit: warnings about deprecated functions

Just did a fresh clone, cabal configure && cabal build of conduit (0.2.2). Here are the deprecation warnings:

Control.Monad.Trans.Resource.hs

Control/Monad/Trans/Resource.hs:50:1:
    Warning: In the use of `mkTyCon'
             (imported from Data.Typeable):
             Deprecated: "either derive Typeable, or use mkTyCon3 instead"

Control/Monad/Trans/Resource.hs:66:30:
    Warning: In the use of `unsafeIOToST'
             (imported from Control.Monad.ST):
             Deprecated: "Please import from Control.Monad.ST.Unsafe instead; This will be removed in the next release"

Control/Monad/Trans/Resource.hs:67:1:
    Warning: In the use of `Lazy.unsafeIOToST'
             (imported from Control.Monad.ST.Lazy):
             Deprecated: "Please import from Control.Monad.ST.Lazy.Unsafe instead; This will be removed in the next release"

System.PosixFile.hsc

System/PosixFile.hsc:30:1:
    Warning: newtype `CInt' is used in an FFI declaration,
             but its constructor is not in scope.
             This will become an error in GHC 7.6.1.
    When checking declaration:
      foreign import ccall safe "static open" c_open
        :: CString -> Flag -> IO CInt

System/PosixFile.hsc:33:1:
    Warning: newtype `CInt' is used in an FFI declaration,
             but its constructor is not in scope.
             This will become an error in GHC 7.6.1.
    When checking declaration:
      foreign import ccall safe "static read" c_read
        :: FD -> Ptr Word8 -> CInt -> IO CInt

System/PosixFile.hsc:36:1:
    Warning: newtype `CInt' is used in an FFI declaration,
             but its constructor is not in scope.
             This will become an error in GHC 7.6.1.
    When checking declaration:
      foreign import ccall safe "static close" close :: FD -> IO ()
