
๐Ÿ“ A step-by-step example/tutorial showing how to build a Phoenix (Elixir) App where all data is immutable (append only). Precursor to Blockchain, IPFS or Solid!

License: GNU General Public License v2.0



Phoenix Ecto Append-only Log Example



A step-by-step example to help anyone learn how to build Phoenix Apps where all data is stored in an append-only log.

Why?

Read/learn this if you want:

  • Confidence in your mission-critical code; know exactly what's going on!
  • Debugging your app to be much easier as you can trace a request/change all the way through your app!
  • Analytics built-in to your App so you can effortlessly derive user behaviour metrics, e.g. cohort analysis
  • All history of changes to data/records (and who made them) so users of your App can "undo" changes with ease.

If you have ever used the "undo" functionality in a program, you have experienced the power of an Append-only Log.


When data is stored in an Append-only (immutable) Log, any change to the data creates a new state (without altering history). This makes it easy to return/rewind to a previous state.

Most functional programming languages (e.g: Elixir, Elm, Lisp, Haskell, Clojure) have an "immutable data" pattern; data is always "transformed" never mutated. This makes it much faster to build reliable/predictable apps. Code is simpler and debugging is considerably easier when state is always known and never over-written.
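
For example, in Elixir, "updating" a map returns a new map; the original is untouched:

iex> thor = %{name: "Thor", city: "Asgard"}
iex> Map.put(thor, :city, "Oslo")
%{city: "Oslo", name: "Thor"}
iex> thor
%{city: "Asgard", name: "Thor"}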

The "immutable data" principal in the Elm Architecture is what enables the "Time Travelling Debugger" which is an incredibly powerful way to understand and debug an app. By using an Append-only Log for all data stored by our Elixir/Phoenix apps, we get a "time-travelling debugger" and complete "analytics" built-in!

It also means we are never confused about how data/state was transformed.


Note: If any of these terms are unclear to you now, don't worry, we will be clarifying them below.
The main thing to remember is that using an Append-only Log to store your app's data makes it much easier to build the app, because records are never modified and history is preserved and can easily be referred to.

Once you overcome the initial learning curve, you will see that your apps become easy to reason about and you will "unlock" many other possibilities for useful features and functionality that will delight the people using your product/service!

You will get your work done much faster and more reliably, users will be happier with the UX and Product Owners/Managers will be able to see how data is transformed in the app; easily visualise the usage data and "flow" on analytics charts/graphs in realtime.

Who?

This example/tutorial is for all developers who have a basic understanding of Phoenix, general knowledge of database storage in web apps, and want to "level up" their knowledge/skills.
It is for people who want to improve the reliability of the product they are building, and for those who want to understand more ("advanced") "distributed" application architecture, including the ability to (optionally/incrementally) build on this by using IPFS and/or Blockchain in the future!

What?

Using an Append Only Log is an alternative to using Ecto's regular "CRUD", which allows overwriting and deleting data without "rollback" or "recoverability". In a "regular" Phoenix app each update over-writes the state of the record, so it's impossible to retrieve its history without digging through a backup, which is often a time-consuming process or simply unavailable.

Append-only Logs are an excellent approach to data storage because:

  • Data is never over-written therefore it cannot be corrupted or "lost".
  • Field-level version control and accountability for all changes is built-in.
  • All changes to columns are non-destructive additions; columns are never deleted or altered so existing code/queries never "break". This is essential for "Zero Downtime Continuous Deployment".
    • A database migration can be applied before the app server is updated/refreshed and the existing/current version of the app can continue to run like nothing happened.
  • Data is stored as a "time series" therefore it can be used for analytics. 📊 📈
  • "Realtime backups" are hugely simplified (compared to standard SQL/RDBMS); you simply stream the record updates to multiple storage locations/zones and can easily recover from any "outage".

Examples where an Append-only Log is useful:

  • CMS/Blog - being able to "roll back" content means you can invite your trusted readers / stakeholders to edit/improve your content without "fear" of it decaying. 🔐
  • E-Commerce - both for cart tracking and transaction logging. ๐Ÿ›’
    • Also, the same applies for the Product catalog (which is a specific type of CMS); having version history dramatically increases confidence in the site both from an internal/vendor perspective and from end-users. This is especially useful for the reviews on e-commerce sites/apps where we want to be able to detect/see where people have updated their review following extended usage. e.g: did the product disintegrate after a short period of time? Did the user give an initially unfavourable review and over time come to realise that the product is actually exceptionally durable, well-designed and great value-for-money because it has lasted twice as long as any previous product they purchased to perform the same "job to be done"? ⭐️ ⭐️ ⭐️ ⭐️ ⭐️
  • Chat - a chat system should allow editing of previously sent messages for typos/inaccuracies, but that edit/revision history should be transparent not just a "message edited" banner (with no visibility of what changed). ✍️
  • Social Networking - not allowing people to delete a message without leaving a clarifying comment to promote accountability for what people write. In many cases this can reduce hate speech. 😡 💬 😇
  • Healthcare: a patient's medical data gets captured/recorded once as a "snapshot" in time. The doctor or ECG machine does not go back and "update" the value of the patient's heart rate or electrophysiologic pattern. A new value is sampled at each time interval.
  • Analytics is all append-only time-series events streamed from the device to server and saved in a time-series data store.
    • Events in Analytics systems are often aggregated (using "views") into charts/graphs. The "views" of the data are "temporary tables" which store the aggregated or computed data but do not touch the underlying log/stream.
  • Banking/Finance - all transactions are append-only ledgers. If they were not, accounting would be chaos and the world economy would collapse! When the "available balance" of an account is required, it is calculated from the list/log of debit/credit transactions. (A summary of the data in an account may be cached in a database "view" but it is never mutated.) See the sketch after this list.
  • CRM - where customer data is updated and can be incorrectly altered, having the complete history of a record and being able to "time travel" through the change log is a really good idea. 🕙 ↩️ 🕤 ✅
  • Most Other Web/Mobile Applications - you name the app, there is always a way in which an append-only log is applicable/useful/essential to the reliability/confidence users have in that app. 💖
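
To make the banking example concrete, here is a minimal Elixir sketch (the data is illustrative, not part of this tutorial) of deriving an account balance from an append-only list of transactions:

# transactions are only ever appended, never mutated
transactions = [
  %{type: :credit, amount: 100},
  %{type: :debit, amount: 30},
  %{type: :credit, amount: 25}
]

# the "available balance" is derived from the log, never stored and over-written
balance =
  Enum.reduce(transactions, 0, fn
    %{type: :credit, amount: amount}, acc -> acc + amount
    %{type: :debit, amount: amount}, acc -> acc - amount
  end)

# balance == 95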

Append-only Using PostgreSQL...?

This example uses "stock" PostgreSQL and does not require any plugins. This is a deliberate choice and we use this approach in "production". This means we can use all of the power of Postgres, and deploy our app to any "Cloud" provider that supports Postgres.

Will it Scale?

Your PostgreSQL database will not be the "bottleneck" in your app/stack.

Using an Append-only Log with UUIDs as Primary Keys is all the "ground work" needed
to ensure that any app we build is prepared to scale both Vertically and Horizontally. ✅ 🚀

For example: an AWS RDS (PostgreSQL) db.m4.16xlarge instance has 256GB of RAM and can handle 10GB/sec of "throughput". The instance has been benchmarked at 200k writes/second.

If/when our/your app reaches 10k writes/sec and needs to use one of these instances it will be insanely "successful" by definition. 🦄 🎉

Don't worry about storing all the data; the insight it will give you will more than pay for itself! Once your app is successful you can hire a team of database experts to fine-tune storing record history in a cheaper object store.

Bottom line: embrace Postgres for your app, you are in good company.
Postgres can handle whatever you throw at it and loves append-only data!

If your app ever "outgrows" Postgres, you can easily migrate to CitusDB.

Prerequisites?

The only prerequisites for understanding this example/tutorial are a basic understanding of Elixir/Phoenix and of how web apps store data in a database (see "Who?" above).

We recommend that you follow the Phoenix Chat Example (tutorial): https://github.com/dwyl/phoenix-chat-example for additional practice with Phoenix, Ecto and testing before (or after) following this example.

How?

Before you start

Make sure you have the following installed on your machine: Elixir, Phoenix and PostgreSQL.

Make sure you have a non-default PostgreSQL user, with no more than CREATEDB privileges. If not, follow the steps below:

  • Open psql by typing psql into your terminal
  • In psql, type:
    • CREATE USER append_only;
    • ALTER USER append_only WITH PASSWORD 'postgres'; (optional, only if you want to define the password for the new user)
    • ALTER USER append_only CREATEDB;

Default users are Superusers who cannot have core actions like DELETE and UPDATE revoked. But with an additional user we can revoke these actions to ensure mutating actions don't occur accidentally (we will do this in step 2).
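
You can verify the new user at any time with the \du meta-command in psql:

\du append_only

The append_only role should be listed without the Superuser attribute, but with Create DB.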

1. Getting started

Make a new Phoenix app:

mix phx.new append

Type y when asked if you want to install the dependencies, then follow the instructions to change directory:

cd append

Then open your generated config files. In config/dev.exs and config/test.exs you should see a section that looks like this:

# Configure your database
config :append, Append.Repo,
  username: "postgres",
  password: "postgres",
  ...

Change the username to your non-default PostgreSQL user:

  ...
  username: "append_only",
  ...

Also define the datetime type for the timestamps, so they are stored with microsecond precision:

config :append, Append.Repo,
  migration_timestamps: [type: :naive_datetime_usec],
  username: "append_only",
  password: "postgres",
  ...

Once you've done this, create the database for your app:

mix ecto.create

2. Create the Schema

We're going to use an address book as an example. Run the following generator command to create our schema:

mix phx.gen.schema Address addresses name:string address_line_1:string address_line_2:string city:string postcode:string tel:string

This will create two new files:

  • lib/append/address.ex
  • priv/repo/migrations/{timestamp}_create_addresses.exs

Before you follow the instructions in your terminal, we'll need to edit the generated migration file.

The generated migration file should look like this:

defmodule Append.Repo.Migrations.CreateAddresses do
  use Ecto.Migration

  def change do
    create table(:addresses) do
      add(:name, :string)
      add(:address_line_1, :string)
      add(:address_line_2, :string)
      add(:city, :string)
      add(:postcode, :string)
      add(:tel, :string)

      timestamps()
    end

  end
end

We need to edit it to remove update and delete privileges for our user:

defmodule Append.Repo.Migrations.CreateAddresses do
  use Ecto.Migration

  # Get name of our Ecto Repo module from our config
  @repo :append |> Application.get_env(:ecto_repos) |> List.first()
  # Get username of Ecto Repo from our config
  @db_user Application.get_env(:append, @repo)[:username]

  def change do
    ...
    execute("REVOKE UPDATE, DELETE ON TABLE addresses FROM #{@db_user}")
  end
end

For reference, this is what your migration file should look like now: priv/repo/migrations/20180912142549_create_addresses.exs
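
Putting the generated table definition and the REVOKE statement together, the complete migration looks like this:

defmodule Append.Repo.Migrations.CreateAddresses do
  use Ecto.Migration

  # Get name of our Ecto Repo module from our config
  @repo :append |> Application.get_env(:ecto_repos) |> List.first()
  # Get username of Ecto Repo from our config
  @db_user Application.get_env(:append, @repo)[:username]

  def change do
    create table(:addresses) do
      add(:name, :string)
      add(:address_line_1, :string)
      add(:address_line_2, :string)
      add(:city, :string)
      add(:postcode, :string)
      add(:tel, :string)

      timestamps()
    end

    execute("REVOKE UPDATE, DELETE ON TABLE addresses FROM #{@db_user}")
  end
end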

In the lib/append/address.ex file, define the timestamps option to use the naive_datetime_usec type:

@timestamps_opts [type: :naive_datetime_usec]
schema "addresses" do
  ...
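
For context, with the fields from the generator command, the schema in lib/append/address.ex should now look something like this:

defmodule Append.Address do
  use Ecto.Schema
  import Ecto.Changeset

  @timestamps_opts [type: :naive_datetime_usec]
  schema "addresses" do
    field(:name, :string)
    field(:address_line_1, :string)
    field(:address_line_2, :string)
    field(:city, :string)
    field(:postcode, :string)
    field(:tel, :string)

    timestamps()
  end
  ...
end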

Once this is done, run:

mix ecto.migrate

and you should see the following output:

[info] == Running Append.Repo.Migrations.CreateAddresses.change/0 forward
[info] create table addresses
[info] execute "REVOKE UPDATE, DELETE ON TABLE addresses FROM append_only"
[info] == Migrated in 0.0s

Note: if you followed the terminal instructions and ran mix ecto.migrate before updating the migration file, you will need to run mix ecto.drop, then update the migration file (as per the instructions above) and run: mix ecto.create && mix ecto.migrate.

3. Defining our Interface

Now that we have no way to delete or update the data, we need to define the functions we'll use to access and insert the data. To do this we'll define an Elixir behaviour with some predefined functions.

The first thing we'll do is create the file for the behaviour. Create a file called lib/append/append_only_log.ex and add to it the following code:

defmodule Append.AppendOnlyLog do
  defmacro __using__(_opts) do
    quote do
      @behaviour Append.AppendOnlyLog

    end
  end
end

Here, we're creating a macro and defining it as a behaviour. The __using__ macro is a callback that will be injected into any module that calls use Append.AppendOnlyLog. We'll define some functions in here that can be reused by different modules. See https://elixir-lang.org/getting-started/alias-require-and-import.html#use for more info on the __using__ macro.

The next step in defining a behaviour is to declare the callbacks it requires:

defmodule Append.AppendOnlyLog do
  alias Append.Repo

  @callback insert
  @callback get
  @callback all
  @callback update
  @callback delete

  defmacro __using__(_opts) do
    ...
  end
end

These are the functions we'll define in this macro to interface with the database. (Note: a bare @callback like this won't actually compile; we'll add the full specs next.) You may think it odd that we're defining an update function for our append-only database, but we'll get to that later.

Callback definitions are similar to typespecs, in that you can provide the types that the functions expect to receive as arguments, and what they will return.

defmodule Append.AppendOnlyLog do
  alias Append.Repo

  @callback insert(struct) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
  @callback get(integer) :: Ecto.Schema.t() | nil | no_return()
  @callback all() :: [Ecto.Schema.t()]
  @callback update(Ecto.Schema.t(), struct) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
  @callback delete(Ecto.Schema.t()) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}

  defmacro __using__(_opts) do
    quote do
      @behaviour Append.AppendOnlyLog

      def insert(attrs) do
      end

      def get(id) do
      end

      def all() do
      end

      def update(item, attrs) do
      end
    end
  end
end

The next step is to define the functions themselves, but first we'll write some tests.

4. Implementing our Interface

4.1 Insert

The first thing we'll want to do is insert something into our database, so we'll put together a simple test for that. Create a directory called test/append/ and a file called test/append/address_test.exs.

defmodule Append.AddressTest do
  use Append.DataCase
  alias Append.Address

  test "add item to database" do
    assert {:ok, item} = Address.insert(%{
      name: "Thor",
      address_line_1: "The Hall",
      address_line_2: "Valhalla",
      city: "Asgard",
      postcode: "AS1 3DG",
      tel: "0800123123"
    })

    assert item.name == "Thor"
  end
end

This test will assert that an item has been correctly inserted into the database. Run mix test now, and you should see it fail.

1) test add item to database (Append.AddressTest)
     test/append/address_test.exs:5
     ** (UndefinedFunctionError) function Append.Address.insert/1 is undefined or private
     code: assert {:ok, item} = Address.insert(%{
     stacktrace:
       (append) Append.Address.insert(%{address_line_1: "The Hall", address_line_2: "Valhalla", city: "Asgard", name: "Thor", postcode: "AS1 3DG", tel: "0800123123"})
       test/append/address_test.exs:6: (test)

Now we'll go and write the code to make the test pass. The first thing we need is the actual insert/1 function body:

defmodule Append.AppendOnlyLog do
  alias Append.Repo
  ...
  defmacro __using__(_opts) do
    quote do
      @behaviour Append.AppendOnlyLog

      def insert(attrs) do
        %__MODULE__{}
        |> __MODULE__.changeset(attrs)
        |> Repo.insert()
      end
      ...
    end
  end
end

Now, because we're using a macro, everything inside the quote do block will be injected into the module that uses this macro, and so will access its context. So in this case, where we use __MODULE__, it will be replaced with the calling module's name (Append.Address).
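
In other words, once the macro is expanded inside Append.Address, the injected function behaves as if we had written:

def insert(attrs) do
  %Append.Address{}
  |> Append.Address.changeset(attrs)
  |> Repo.insert()
end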

In order to now use this function, we need to include the macro in lib/append/address.ex, which we generated earlier:

defmodule Append.Address do
  use Ecto.Schema
  import Ecto.Changeset

  use Append.AppendOnlyLog #include the functions from this module's '__using__' macro.

  schema "addresses" do
    ...
  end

  @doc false
  def changeset(address, attrs) do
    ...
  end
end

Now run the tests again.

** (CompileError) lib/append/address.ex:4: Append.Address.__struct__/1 is undefined, cannot expand struct Append.Address
    (stdlib) lists.erl:1354: :lists.mapfoldl/3
    (elixir) expanding macro: Kernel.|>/2

Ah, an error.

Now this error may seem a little obtuse. The error is on line 4 of address.ex? That's:

use Append.AppendOnlyLog

That's because at compile time, this line is replaced with the contents of the macro, meaning the compiler isn't sure exactly which line of the macro is causing the error. This is one of the disadvantages of macros, and why they should be kept short (and used sparingly).

Luckily, there is a way we can see the stack trace of the macro. Add location: :keep to the quote do:

defmodule Append.AppendOnlyLog do
  ...
  defmacro __using__(_opts) do
    quote location: :keep do
      @behaviour Append.AppendOnlyLog

      def insert(attrs) do
        ...
      end
      ...
    end
  end
end

Now, if we run mix test again, we should see where the error actually is:

** (CompileError) lib/append/append_only_log.ex:20: Append.Address.__struct__/1 is undefined, cannot expand struct Append.Address
    (stdlib) lists.erl:1354: :lists.mapfoldl/3
    (elixir) expanding macro: Kernel.|>/2

Line 20 of append_only_log.ex:

%__MODULE__{}

So we see that trying to access the Append.Address struct is causing the error. Now this function Append.Address.__struct__/1 should be defined when we call:

schema "addresses" do

in the Address module. The problem lies in the way macros are injected into modules, and the order functions are evaluated. We could solve this by moving the use Append.AppendOnlyLog after the schema:

defmodule Append.Address do
  ...

  schema "addresses" do
    ...
  end

  use Append.AppendOnlyLog #include the functions from this module's '__using__' macro.

  ...
end

Now run mix test and it should pass! But something doesn't quite feel right. We shouldn't need to include a 'use' macro halfway down a module to get our code to compile. And we don't! Elixir provides some fine-grained control over the compile order of modules: https://hexdocs.pm/elixir/Module.html#module-module-attributes

In this case, we want to use the @before_compile attribute.

defmodule Append.AppendOnlyLog do
  ...
  defmacro __using__(_opts) do
    quote do
      @behaviour Append.AppendOnlyLog
      @before_compile unquote(__MODULE__)
    end
  end

  defmacro __before_compile__(_env) do
    quote do
      def insert(attrs) do
        %__MODULE__{}
        |> __MODULE__.changeset(attrs)
        |> Repo.insert()
      end

      def get(id) do
      end

      def all() do
      end

      def update(item, attrs) do
      end
    end
  end
end

So here we add @before_compile unquote(__MODULE__) to __using__.

unquote(__MODULE__) here just means we want to use the __before_compile__ macro defined in this module (AppendOnlyLog), not the calling module (Address).

Then, the code we put inside __before_compile__ will be injected at the end of the calling module, meaning the schema will already be defined, and our tests should pass.

Finished in 0.1 seconds
4 tests, 0 failures

4.2 Get/All

Now that we've done the hard parts, we'll implement the rest of the functionality for our Append Only Log.

The get and all functions should be fairly simple: we just need to forward the requests to the Repo. But first, some tests.

defmodule Append.AddressTest do
  ...
  describe "get items from database" do
    test "get/1" do
      {:ok, item} = insert_address()

      assert Address.get(item.id) == item
    end

    test "all/0" do
      {:ok, _item} = insert_address()
      {:ok, _item_2} = insert_address("Loki")

      assert length(Address.all()) == 2
    end
  end

  def insert_address(name \\ "Thor") do
    Address.insert(%{
      name: name,
      address_line_1: "The Hall",
      address_line_2: "Valhalla",
      city: "Asgard",
      postcode: "AS1 3DG",
      tel: "0800123123"
    })
  end
end

You'll see we've refactored the insert call into a function so we can reuse it, and added some simple tests. Run mix test and make sure they fail, then we'll implement the functions.

defmodule Append.AppendOnlyLog do
  ...
  defmacro __before_compile__(_env) do
    quote do
      ...
      def get(id) do
        Repo.get(__MODULE__, id)
      end

      def all do
        Repo.all(__MODULE__)
      end
      ...
    end
  end
end

Run mix test again, and we should be all green.

4.3 Update

Now we come to the update function.
"But I thought we were only appending to the database?" I hear you ask.
This is true, but we still need to relate our existing data to the new, updated data we add.

To do this, we need to be able to reference the previous entry somehow. The simplest (conceptually) way of doing this is to provide a unique id to each entry. Note that the id will be used to represent unique entries, but it will not be unique in the table, as all revisions of an entry will share the same id. This is the simplest way we can link our entries, but there may be some disadvantages, which we'll look into later.
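
For example (illustrative values), two revisions of the same entry share one entry_id but have different primary keys:

| id | entry_id                             | name | tel        | inserted_at         |
| 1  | b0cccc58-f3ea-4b9d-9b57-9b79c2bbf69a | Thor | 0800123123 | 2018-09-14 13:05:16 |
| 2  | b0cccc58-f3ea-4b9d-9b57-9b79c2bbf69a | Thor | 0123444444 | 2018-09-14 13:07:42 |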

So, first we'll need to edit our schema to add a shared id to our address entries:

defmodule Append.Address do
  ...
  schema "addresses" do
    ...
    field(:entry_id, :string)

    ...
  end

  def changeset(address, attrs) do
    address
    |> insert_entry_id()
    |> cast(attrs, [:name, :address_line_1, :address_line_2, :city, :postcode, :tel, :entry_id])
    |> validate_required([:name, :address_line_1, :address_line_2, :city, :postcode, :tel, :entry_id])
  end

  def insert_entry_id(address) do
    case Map.fetch(address, :entry_id) do
      {:ok, nil} -> %{address | entry_id: Ecto.UUID.generate()}
      _ -> address
    end
  end
end

We've added a function here that will generate a unique id, and add it to our item when it's created.
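
Ecto.UUID.generate/0 returns a random version 4 UUID as a string (your value will differ):

iex> Ecto.UUID.generate()
"b0cccc58-f3ea-4b9d-9b57-9b79c2bbf69a"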

Then we create a migration file:

mix ecto.gen.migration add_entry_id

This creates a file called priv/repo/migrations/20180914130516_add_entry_id.exs (your timestamp will differ), containing the following "blank" migration:

defmodule Append.Repo.Migrations.AddEntryId do
  use Ecto.Migration

  def change do

  end
end

Add an alter definition to the change function body:

defmodule Append.Repo.Migrations.AddEntryId do
  use Ecto.Migration

  def change do
    alter table("addresses") do
      add :entry_id, :string
    end

  end
end

This is fairly self-explanatory: it alters the addresses table to add the entry_id field.

Run the migration:

mix ecto.migrate

You should see the following in your terminal:

[info] == Running Append.Repo.Migrations.AddEntryId.change/0 forward
[info] alter table addresses
[info] == Migrated in 0.0s

Now, we'll write a test for the update function.

defmodule Append.AddressTest do
  ...
  test "update item in database" do
    {:ok, item} = insert_address()

    {:ok, updated_item} = Address.update(item, %{tel: "0123444444"})

    assert updated_item.name == item.name
    assert updated_item.tel != item.tel
  end
  ...
end

If you attempt to run this test with mix test, you will see:

.....

  1) test update item in database (Append.AddressTest)
     test/append/address_test.exs:25
     ** (MatchError) no match of right hand side value: nil
     code: {:ok, updated_item} = Address.update(item, %{tel: "0123444444"})
     stacktrace:
       test/append/address_test.exs:28: (test)

..

Finished in 0.2 seconds
8 tests, 1 failure

Let's implement the update function to make the test pass.

The update function itself will receive two arguments: the existing item and a map of updated attributes.

Then we'll use the insert function to create a new entry in the database, rather than update, which would overwrite the old one.

In your /lib/append/append_only_log.ex file add the following def update:

defmodule Append.AppendOnlyLog do
  ...
  defmacro __before_compile__(_env) do
    quote do
      ...
      def update(%__MODULE__{} = item, attrs) do
        item
        |> __MODULE__.changeset(attrs)
        |> Repo.insert()
      end
    end
  end
end

If we try to run our tests, we will see the following error:

1) test update item in database (Append.AddressTest)
     test/append/address_test.exs:25
     ** (Ecto.ConstraintError) constraint error when attempting to insert struct:

         * unique: addresses_pkey

This is because we can't insert the item again with the same primary key (addresses_pkey), as it already exists in the database.

We need to "clear" the :id field before attempting to update (insert):

def update(%__MODULE__{} = item, attrs) do
  item
  |> Map.put(:id, nil)
  |> Map.put(:inserted_at, nil)
  |> Map.put(:updated_at, nil)
  |> __MODULE__.changeset(attrs)
  |> Repo.insert()
end

So here we remove the original autogenerated id from the existing item, preventing us from duplicating it in the database.

We also remove the :inserted_at and :updated_at fields. Again, if we leave those in, they'll be copied over from the old item, instead of being newly generated.
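
Conceptually (with illustrative values), an "update" now appends a brand-new row that shares the original row's entry_id:

{:ok, item} = Address.insert(%{name: "Thor", ...})
# item.id == 1, item.entry_id == "b0cccc58-..."

{:ok, updated} = Address.update(item, %{tel: "0123444444"})
# updated.id == 2, updated.entry_id == "b0cccc58-..."; the old row is untouched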

Now we'll add some more tests, making sure our code so far is working as we expect it to:

defmodule Append.AddressTest do
  ...
  test "get updated item" do
    {:ok, item} = insert_address()

    {:ok, updated_item} = Address.update(item, %{tel: "0123444444"})

    assert Address.get(item.id) == updated_item
  end

  test "all/0 does not include old items" do
    {:ok, item} = insert_address()
    {:ok, _} = insert_address("Loki")
    {:ok, _} = Address.update(item, %{postcode: "W2 3EC"})

    assert length(Address.all()) == 2
  end
  ...
end

Here we're testing that the items we receive from our 'get' and 'all' functions are the new, updated items.

Run this test and...

1) test get updated item (Append.AddressTest)
     test/append/address_test.exs:34
     Assertion with == failed
     code:  assert Address.get(item.id) == updated_item
     left:  %Append.Address{... tel: "0800123123"}
     right: %Append.Address{... tel: "0123444444"}
     stacktrace:
       test/append/address_test.exs:39: (test)

2) test all/0 does not include old items (Append.AddressTest)
     test/append/address_test.exs:43
     Assertion with == failed
     code:  assert length(Address.all()) == 2
     left:  3
     right: 2
     stacktrace:
       test/append/address_test.exs:48: (test)

We're still getting the old items.

To fix this we'll have to revisit our get function.

def get(id) do
  Repo.get(__MODULE__, id)
end

The first issue is that we're still using the id to get the item. As we know, this id will always reference the same version of the item, meaning no matter how many times we update it, the id will still point to the original, unmodified item.

Luckily, we have another way to reference the item. Our entry_id that we created earlier. Let's use that in our query:

defmodule Append.AppendOnlyLog do
  alias Append.Repo
  require Ecto.Query

  ...
  defmacro __before_compile__(_env) do
    quote do
      import Ecto.Query

      ...
      def get(entry_id) do
        query =
          from(
            m in __MODULE__,
            where: m.entry_id == ^entry_id,
            select: m
          )

        Repo.one(query)
      end
    ...
    end
  end
end

You'll notice that we're now importing Ecto.Query. We have to make sure we import Ecto.Query inside our macro, so the scope matches where we end up calling it.

Don't forget to update the tests too:

...
test "get/1" do
  {:ok, item} = insert_address()

  assert Address.get(item.entry_id) == item
end
...
test "get updated item" do
  {:ok, item} = insert_address()

  {:ok, updated_item} = Address.update(item, %{tel: "0123444444"})

  assert Address.get(item.entry_id) == updated_item
end
...

Then we'll run the tests:

test get updated item (Append.AddressTest)
     test/append/address_test.exs:34
     ** (Ecto.MultipleResultsError) expected at most one result but got 2 in query:

Another error: "expected at most one result but got 2 in query"

Of course this makes sense: we have two items with that entry id, but we only want one, the most recent one. Let's modify our query further:

def get(entry_id) do
  query =
    from(
      m in __MODULE__,
      where: m.entry_id == ^entry_id,
      order_by: [desc: :inserted_at],
      limit: 1,
      select: m
    )

  Repo.one(query)
end

This will order our items in descending order by the inserted date, and take the most recent one.

We'll use the same query in our all function, but replacing the limit: 1 with distinct: m.entry_id:

def all do
  query =
    from(m in __MODULE__,
      distinct: m.entry_id,
      order_by: [desc: :inserted_at],
      select: m
    )

  Repo.all(query)
end

This returns all the items, but only the most recent revision of each entry_id.
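
If you're curious what Ecto sends to Postgres, you can inspect the generated SQL in iex; Ecto's distinct: compiles to PostgreSQL's DISTINCT ON (output abbreviated, your table aliases may differ):

iex> import Ecto.Query
iex> query = from(m in Append.Address, distinct: m.entry_id, order_by: [desc: :inserted_at], select: m)
iex> Ecto.Adapters.SQL.to_sql(:all, Append.Repo, query)
{"SELECT DISTINCT ON (a0.\"entry_id\") ... FROM \"addresses\" AS a0 ORDER BY a0.\"entry_id\", a0.\"inserted_at\" DESC", []}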

4.4 Get history

A useful part of our append-only database will be the functionality to see the entire history of an item.

As usual, we'll write a test first:

defmodule Append.AddressTest do
  ...
  test "get history of item" do
    {:ok, item} = insert_address()

    {:ok, updated_item} = Address.update(item, %{
      address_line_1: "12",
      address_line_2: "Kvadraturen",
      city: "Oslo",
      postcode: "NW1 SCA",
    })

    history = Address.get_history(updated_item)

    assert length(history) == 2
    assert [h1, h2] = history
    assert h1.city == "Asgard"
    assert h2.city == "Oslo"
  end
  ...
end

Then the function:

defmodule Append.AppendOnlyLog do
  alias Append.Repo
  require Ecto.Query

  ...
  @callback get_history(Ecto.Schema.t()) :: [Ecto.Schema.t()] | no_return()

  ...
  defmacro __before_compile__(_env) do
    quote do
      import Ecto.Query

      ...
      def get_history(%__MODULE__{} = item) do
        query = from m in __MODULE__,
        where: m.entry_id == ^item.entry_id,
        select: m

        Repo.all(query)
      end
      ...
    end
  end
  ...
end

You'll notice the new callback definition at the top of the file.

Run your tests again, and you'll see that we're able to view the whole history of the changes to any item in our database.
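
Note: the get_history query above has no order_by, so Postgres is free to return the revisions in any order (the test happens to rely on oldest-first). If you want a guaranteed chronological history, a small optional tweak is to sort explicitly:

def get_history(%__MODULE__{} = item) do
  query =
    from(m in __MODULE__,
      where: m.entry_id == ^item.entry_id,
      order_by: [asc: :inserted_at],
      select: m
    )

  Repo.all(query)
end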

4.5 Delete

As you may realise, even though we are using an append only database, we still need some way to "delete" items.

Of course they won't actually be deleted, merely marked as "inactive", so they don't show anywhere unless we specifically want them to (for example, in our history function).

To implement this functionality, we'll need to add a field to our schema, and to the cast function in our changeset.

defmodule Append.Address do
  schema "addresses" do
    ...
    field(:deleted, :boolean, default: false)
    ...
  end

  def changeset(address, attrs) do
    address
    |> insert_entry_id()
    |> cast(attrs, [
      ...,
      :deleted
    ])
    |> validate_required([
      ...
    ])
  end
end

and a new migration:

mix ecto.gen.migration add_deleted

Then edit the generated migration to add the new field:

defmodule Append.Repo.Migrations.AddDeleted do
  use Ecto.Migration

  def change do
    alter table("addresses") do
      add(:deleted, :boolean, default: false)
    end
  end
end

This adds a boolean field, with a default value of false. We'll use this to determine whether a given item is "deleted" or not.

As usual, before we implement it, we'll add a test for our expected functionality.

describe "delete:" do
    test "deleted items are not retrieved with 'get'" do
      {:ok, item} = insert_address()
      {:ok, _} = Address.delete(item)

      assert Address.get(item.entry_id) == nil
    end

    test "deleted items are not retrieved with 'all'" do
      {:ok, item} = insert_address()
      {:ok, _} = Address.delete(item)

      assert length(Address.all()) == 0
    end
  end

Our delete function is fairly simple:

def delete(%__MODULE__{} = item) do
  item
  |> Map.put(:id, nil)
  |> Map.put(:inserted_at, nil)
  |> Map.put(:updated_at, nil)
  |> __MODULE__.changeset(%{deleted: true})
  |> Repo.insert()
end

It acts just the same as the update function, but adds a value of deleted: true. But this is only half of the story.

We also need to make sure we don't return any deleted items when they're requested. So again, we'll have to edit our get and all functions:

def get(entry_id) do
  sub =
    from(
      m in __MODULE__,
      where: m.entry_id == ^entry_id,
      order_by: [desc: :inserted_at],
      limit: 1,
      select: m
    )

  query = from(m in subquery(sub), where: not m.deleted, select: m)

  Repo.one(query)
end

def all do
  sub =
    from(m in __MODULE__,
      distinct: m.entry_id,
      order_by: [desc: :inserted_at],
      select: m
    )

  query = from(m in subquery(sub), where: not m.deleted, select: m)

  Repo.all(query)
end

What we're doing here is taking our original query, then performing another query on the result of that, only returning the item if it has not been marked as deleted.

So now, when we run our tests, we should see that we're successfully ignoring "deleted" items.

phoenix-ecto-append-only-log-example's People

Contributors

bmartin2015, cleop, danwhy, geekoftheweek, iteles, nelsonic, simonlab


phoenix-ecto-append-only-log-example's Issues

Refactor Append-only Example from UUID to use CID

@nelsonic based on the following comment you made in here...

The work to make Phoenix append-only is "on-going". Once CID is complete we will need to integrate it into our Example: https://github.com/dwyl/phoenix-ecto-append-only-log-example and then add it to Alog (which will need to be re-worked to make the log "generic" ... tbd.)

Would I be correct in assuming that now that CID has been published, updating this example is the next priority (one of the next priorities) on the list?

Use `.city` instead of `[:city]` in history test

In the get history of an item test, I got the following:

     test/append/address_test.exs:56
     ** (UndefinedFunctionError) function Append.Address.fetch/2 is undefined (Append.Address does not implement the Access behaviour)
     code: assert h1[:city] == "Asgard"
     stacktrace:
       (append) Append.Address.fetch(%Append.Address{__meta__: #Ecto.Schema.Metadata<:loaded, "addresses">, address_line_1: "The Hall", address_line_2: "Valhalla", city: "Asgard", entry_id: "b0cccc58-f3ea-4b9d-9b57-9b79c2bbf69a", id: 100, inserted_at: ~N[2018-12-29 20:29:29.782939], name: "Thor", postcode: "AS1 3DG", tel: "0800123123", updated_at: ~N[2018-12-29 20:29:29.782939]}, :city)
       (elixir) lib/access.ex:318: Access.get/3
       test/append/address_test.exs:70: (test)

In my own code I fixed this by changing the assert statements to check h1.city instead of h1[:city] (and likewise for h2).

ERROR 42P07 (duplicate_table) relation "addresses" already exists

While attempting to run the tests on localhost I got the following error:

Generated append app
[debug] QUERY OK source="schema_migrations" db=2.8ms
SELECT s0."version"::bigint FROM "schema_migrations" AS s0 FOR UPDATE []
[info] == Running 20180912142549 Append.Repo.Migrations.CreateAddresses.change/0 forward
[info] create table addresses
** (Postgrex.Error) ERROR 42P07 (duplicate_table) relation "addresses" already exists
    (ecto_sql) lib/ecto/adapters/sql.ex:595: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir) lib/enum.ex:1314: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql) lib/ecto/adapters/sql.ex:682: Ecto.Adapters.SQL.execute_ddl/4

Someone who has only just cloned the repo and installed the deps is unlikely to see this error because they won't have a duplicate project in the same namespace the way I do ...
https://github.com/nelsonic/append-only-log-ex

I need to DROP the database and re-create it.

Coverage ...

@dwyl we have a "Gold Standard" for examples/tutorials which must be followed without exception.
Tutorials aimed at beginners should always have "complete tests" to eliminate any "excuse" people may have for not writing tests in their projects.

An example is incomplete without having 100% test coverage because
it either means that there is superfluous code (which can be removed)
OR there is untested code; functionality that is "magic". Both are "below expectations".

At present this example has 56% coverage:
https://codecov.io/github/dwyl/phoenix-ecto-append-only-log-example?branch=master


Todo

  • fix this: codecov.io

Do we need behaviours?

The example is using behaviours to define the structure of the API interface:

@callback insert(struct) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
@callback get(integer) :: Ecto.Schema.t() | nil | no_return()
@callback all() :: [Ecto.Schema.t()]
@callback update(Ecto.Schema.t(), struct) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
@callback get_history(Ecto.Schema.t()) :: [Ecto.Schema.t()] | no_return()

The implementation of the API is then done in the before_compile macro:

defmacro __before_compile__(_env) do
  quote do
    import Ecto.Query

    def insert(attrs) do
      %__MODULE__{}
      |> __MODULE__.changeset(attrs)
      |> Repo.insert()
    end

    def get(entry_id) do
      sub =
        from(
          m in __MODULE__,
          where: m.entry_id == ^entry_id,
          order_by: [desc: :inserted_at],
          limit: 1,
          select: m
        )

      query = from(m in subquery(sub), where: not m.deleted, select: m)

      Repo.one(query)
    end

...
However I'm not sure behaviours are needed for this append-only example.
From my understanding it makes sense to use behaviours when multiple modules need to provide the same functions but with a different logic of implementation. In our example only the AppendOnlyLog module is implementing the behaviours.

For learning purposes it makes sense to see how behaviours work 👍 but I'm not sure they provide a lot of value for this repository?

Should we still keep them to demonstrate how they can be used?

ref: https://elixir-lang.org/getting-started/typespecs-and-behaviours.html#behaviours

Enable Travis-CI for this Project to Confirm Tests are Passing

Tests for #2 (Daniel's PR) currently pass on my localhost.

But we cannot "know" this without checking out the branch and running the tests on localhost ...

Todo

  • Enable Travis-CI for this project so the tests run automatically

Thanks! 👍

Create a UI to allow people to interact with the example?

Reading the example/tutorial code is nice, 👍
but I feel that having an example app on Heroku would be much more beginner-friendly. 🤔
Who wants to make this happen? :-)

Example: https://github.com/dwyl/phoenix-chat-example

Todo

  • Create a basic Form for interacting with the Address Book
    • The form should allow new records to be created
    • Should allow existing records to be "updated" (append only of course)
  • The List of Addresses should be the default "show" view
    • List should be ordered alphabetically by name. (should this be first or last name?)
  • Display the number of revisions for each address as a counter
  • Clicking the counter (number) should display the version history for that record.

`require Ecto.Query` must appear earlier

In 4.3 Update, when get(id) is first changed to get(entry_id), the use of from/2 causes compilation errors unless Ecto.Query is required and imported. This require and import get added later in 4.4 Get History, but running into these unexpected error messages was very confusing for me as an Elixir n00b trying to follow along.

Dealing with associations

An issue that arose when implementing this into a project was 'how do we deal with associated schemas in an append only way?'.

The main problem was that associating two schemas relies on unique primary keys. So we couldn't use our UUIDs, because there are multiple of those per table. But if we used the auto-incrementing primary key, when a record was updated the id would change, and it would no longer be associated with the record in the other table.

The answer to this was to update the associations table by appending a new record to it whenever we update either of the linked records. This means we always have the latest associations, and also the full history of all associations before this; meaning we can see exactly what version of record A was first associated with record B, which could be very useful for analytics.

One more thing to note is how we only access the latest associations when getting items from the database. We use a custom query passed to Ecto.preload:

query = from(s in schema,
  distinct: s.entry_id,
  order_by: [desc: :inserted_at],
  select: s
)

item
# :some_assoc stands for the name of the association being preloaded
|> Project.Repo.preload(some_assoc: query)

I'll add a full writeup of how to do this in the tutorial soon.

Notes and questions on logs from reading

As someone who is new to logs I could follow the example. However, discussing the subject area with others brought up key terms and subject areas that were new to me. I think it would be useful to include some of this context in the readme for those who may stumble upon this repo without knowing what it is first.

What is a log?

AKA write-ahead log, commit log, transaction log. In this repo it will not refer to 'application logging', the kind of logging you might see for error messages.

A log is one of the simplest possible storage abstractions: an append-only, totally-ordered sequence of records, ordered by time. They are visualised horizontally, from left to right.

They're not all that different from a file or a table: if we consider a file as an array of bytes and a table as an array of records, then a log can be thought of as a kind of table where the records are sorted by time.

Logs are event driven: they continuously record what happened and when. Because the records are stored in the order that the changes occurred, at any point you can revert to a given point in time by finding it in your records. They can do this in near real-time, making them ideal for analytics. They are also helpful in the event of crashes or errors, as their record of the state of the data at all times means data can easily be restored. Keeping an immutable log of the history of your data means your data is kept clean and is never lost or changed. The log is added to by publishers of data and used / acted upon by subscribers, but the records themselves cannot be mutated.

Keywords

Time series database: a database system optimised for handling time series data (arrays of numbers indexed by time). They handle queries for historical data / time zones better than relational DBs.

Data integration: making all the data an organisation has available in all its services and systems.

Log compaction: methods to tidy up a log by deleting no longer needed data.

Questions

  • Are we performing physical (the data itself) or logical (the command or calculation which results in the data) logging?
  • Are all logs time series databases?
  • Does a log contain all of the fields of data that would be captured in a relational DB schema? Does this lead to there being a lot of empty rows because some columns aren't applicable to everything?
  • Are all logs append only?
  • What is a record in the context of a log? Is it the equivalent of a row in a table? Does it have column titles so all records have the same field titles?
  • Do all logs run horizontally?
  • Is a timestamp value mandatory in a record in a log? Do they act as the unique identifier for records or does a numeric id?
  • When are horizontal scaling partitions useful? I didn't understand in what context they'd be used from the article... Would you have a separate log per user and the partition is made for each user ID?
  • The article referred a lot to 'distributed data systems' - if a log is a move away from them, what kind of system would you call a log system? Does it have a name? Is it an integrated system?

These notes and questions came from reading:
https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying

Why? What? Who? How?

Why?

Having an append-only log is incredibly useful in way more situations than most people realise.
Anywhere you would need accountability in data is an excellent candidate for immutability.

  • CRM - where customer data is updated and can be incorrectly altered, having the complete history of a record and being able to "time travel" through the change log is a really good idea.
  • CMS/Blog - being able to "roll back" content means you can invite your trusted readers / stakeholders to edit/improve your content without "fear" of it decaying.
  • E-Commerce - both for journey/story tracking and transaction reliability. Also, the same applies for the Product catalog (which is a specific type of CMS); having version history dramatically increases confidence in the site both from an internal/vendor perspective and from end-users.
    • This is especially useful for the Reviews on e-commerce sites/apps where we want to be able to detect/see where people have updated their review following extended usage. e.g: did the product disintegrate after a short period of time? Did the user give an initially unfavourable (e.g. 3/5 stars) review and with time come to realise that the product is actually exceptionally durable, well-designed and great value-for-money because it has lasted twice as long as any previous product they purchased to perform the same "job to be done"?
  • Chat - a Chat system should allow editing of previously sent messages for typos/inaccuracies,
    but that edit/revision history should be transparent, not just "message edited" (with no visibility of what changed), and if a person deletes a message they should have to provide a comment indicating why they are "breaking" the chain. (more on this later).
  • Most other Consumer Web/Mobile Applications - you name the app, I can illustrate exactly how an append-only log is applicable/useful/essential to the reliability/confidence in that app.
    • Forums - any sort of user-generated content where accountability matters.
    • Social Networking - not allowing people to delete a message without leaving a clarifying comment promotes accountability for what people write and in many cases avoids most hate speech.

When a system/db does not have (field/record level) "version control" each update over-writes the state of the record so it's impossible to retrieve it without having to go digging through a backup which is often a multi-day process, cost/time prohibitive or simply unavailable.

We propose that all apps should be built with an Append Only Log at the core by default.
This is not a "new" idea. Several large companies have used the "Lambda" or "Kappa" architecture in production with superb results for reliability and scalability.
see: http://milinda.pathirage.org/kappa-architecture.com

What?

Instead of using Ecto's standard "CRUD", which allows overwriting and deleting data without "rollback" or "recoverability", we propose writing a thin "proxy" layer between the application code and the PostgreSQL database.

Who?

All developers who have a basic understanding of web development where data is stored in a DB,
and want to "level up" their knowledge/skills and the reliability of the product they are building,
with a view to understanding more ("advanced") "distributed" application architecture,
including the ability to (optionally/incrementally) use IPFS and Blockchain.

How?

The purpose of this tutorial is to:

  • Make it easy for anyone to build a Phoenix/Ecto based app with an append-only log at its core.
  • Write a step-by-step guide that shows how to:
    • create a content type using standard Phoenix generators
      • Take your pick of what you think is the simplest/most beginner friendly content type. e.g:
        • Address Book is easy to show value of tracking updates.
  • no additional DB plugins/"add-ons" should be required to make this work,
    just "stock" PostgreSQL as downloaded or available through a DB-as-a-service provider. e.g. Heroku.

Open Questions:

  • How to reference the previous version of a record from the latest one (keen to hear feedback on this). Happy to explore using a hash of the data as the "parent id", thus giving a "merkle tree" structure for all data, i.e. "Blockchain" but without the "proof of work" factor; just a chain with the history of a record for accountability/rollback purposes, no need to waste CPU cycles.
  • How to delete data (mark an item as deleted) without "destroying" the data.
  • Note: we would still have to have a "batch process" that is able to "really delete" data to comply with GDPR/"Right to be forgotten". But from the User's perspective we only need to "unlink" the data in the UI and then the batch process will delete it after a specified expiry. Similar to the recycling bin on a Desktop OS.

Similar to: (please use these as a reference when writing the doc(s))

How to deal with deletions?

Although we are never deleting any records from our append only database, we still need a way to mark data as 'deleted', so it isn't shown to users.

One solution could be to have an extra column in each table for 'deleted', and to mark that as true every time a user 'deletes' something. This would have the advantage of being easily reversible, i.e. we could have a page that lists all 'deleted' items, and a button that 'undeletes' them, simply by marking 'deleted' as false.
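
Such an "undelete" could be a near-mirror of the existing delete function; a hypothetical sketch (not implemented in this repo):

def undelete(%__MODULE__{} = item) do
  item
  |> Map.put(:id, nil)
  |> Map.put(:inserted_at, nil)
  |> Map.put(:updated_at, nil)
  # append a new revision with deleted: false, "restoring" the item
  |> __MODULE__.changeset(%{deleted: false})
  |> Repo.insert()
end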

The main issue we are facing is how to deal with deleting associations. We've come up with a way to keep associations among updated records, which should also work for deleted records (see #6). But this doesn't account for simply removing an association between two records.

The proposed solution above could work, but it might mean re-implementing a lot of what we already have. Ecto uses 'many to many' fields in schemas to automatically update composite tables. These tables currently consist of just two ids. It would probably be worth looking into how Ecto goes about updating these tables, and how easily we could instead do it ourselves.

Any suggestions or thoughts on the above two issues are welcome.

Basic Address Book Example/Explanation

At present this tutorial does a good job of diving straight into the code example. ✅
We feel that explaining the context of the example would help people understand it.

Most basic address books, like the one you have on your mobile phone, do not preserve history. This makes sense because most people only want the latest (up-to-date) version of a person's address.

But using our imagination for a bit we can easily demonstrate that having history in addresses can be highly useful.

Intro

In a "normal" Phoenix App, when a schema is generated using phx.gen ("generator") command e.g:

mix phx.gen.schema Address addresses name:string address_line_1:string 
address_line_2:string city:string postcode:string tel:string

or

mix phx.gen.html Accounts Address addresses name:string address_line_1:string 
address_line_2:string city:string postcode:string tel:string

A standard PostgreSQL Table called addresses is created with the following schema:

(screenshot: the addresses table schema with columns id, name, address_line_1, address_line_2, city, postcode, tel, inserted_at and updated_at)

A standard PostgreSQL Table does not store the history of a record so when the record/row gets updated, we have no way of "undoing" the update. Let's consider a basic example.

Basic Example

We have a basic Address Book app for storing the addresses of our friends & family.

If we insert a record into the addresses table (see #17 (comment)) we get the following row:

| id | name | address_line_1 | address_line_2 | city   | postcode | tel    | inserted_at         |
| 1  | Thor | The Hall       | Valhalla       | Asgard | AS1 3DG  | 123123 | 2019-04-25 10:01:42 |

This is very much "traditional CRUD" approach; the primary key (unique identifier) of the record is 1 and if we were to update this record, it would overwrite the previous version (and any history would be lost).

| id | name | address_line_1       | address_line_2  | city     | postcode | tel   | inserted_at         | updated_at          |
| 1  | Thor | 177A Bleecker Street | c/o Dr. Strange | New York | NY 10012 | 98765 | 2019-03-14 10:01:42 | 2019-04-17 01:03:13 |

In our scenario above, we start out with our friend Thor's "home" address in Asgard.
Thor moves to Earth and is temporarily staying with his buddy Dr. Strange in New York.

After finishing his "job" on Earth, Thor moves back to Asgard to take a break from the chaos of NY. Thor forgets to leave a forwarding address, assuming that everyone just knows where to send his mail.

Sadly, because we lost Thor's previous address when we updated the record, we have no idea how to contact him. Without record history, we lose contact with our friends. 😞

Note: I have attempted to give this example in #5 (comment) but it appears to have been lost in that thread. My intention is to include this example in the "What?" section of the main README.md to help people understand the benefit of immutable data with a simple example.

Real World Example

If you have ever used an E-commerce shopping website, most of them allow you to have multiple addresses which are effectively an address "history".

When you update your address on Amazon, you are actually inserting a new version of your address. The way you can check this is that your previous orders that went to the previous address have not been altered.

As an end-user you have no visibility of the underlying data structure, but the reality is that all changes to your address are carefully recorded by Amazon to ensure full accountability and prevent fraud.

If a criminal were to gain access to your Amazon account, add their own address, send parcels to themselves and then attempt to delete their address, it's not going to help them: their address is very much recorded in the account history and will be passed to the fraud investigation team.

Try it yourself: temporarily change your address to your work or a friend's address and send an order to them. Then delete the address and go to "Order Reports"; the address is still there.


You might not think about address history as a "consumer", but if your account was ever hacked, you would be very grateful for the history.

Questions/thoughts on ALOG style functions using CID

relates to #22

Insert

From what I can tell insert would remain largely unchanged. We would just swap inserting the UUID for a CID we make.

Please add thoughts on this if I have overlooked anything

Get

How do we want to handle someone calling get with an old/previous CID?

Do we want to...

  • give a user the old version of the data?
  • give the user the latest version of the data?
    • how do we want to access that data? Imagine the table below but with the same field edited 100 times. With the current fields in the table, if we were to try and serve the user the most recent version we would have to make about 100 get calls, where each one would return the 'next iteration' until the end of the line.
    • Do we keep the UUID as well as adding the CID? This way we could still link all entries together and get the most recent version in one call. If we keep the UUID, would this make the prev column redundant (as we would be able to tell the previous version from the timestamps)?
  • give the user an error and have a separate function called get_previous (name tbd) that will handle returning old data?

Update

How do we want to handle someone calling the update function with an old/previous CID?

  • do we update the most recent version? (same point as get when it comes to retrieving the old data)
  • do we update the old version (I guess this would be similar to branching off)




| inserted   | cid (PK)       | name       | address                                                         | prev           |
| 1541609554 | gVSTedHFGBetxy | Bruce Wane | 1007 Mountain Drive, Gotham                                     | null           |
| 1541618643 | smnELuCmEaX42  | Bruce Wane | Rua Goncalo Afonso, Vila Madalena, Sao Paulo, 05436-100, Brazil | gVSTedHFGBetxy |

Questions on approach taken

  • Why is everything done using macros? We could achieve the same result by making them regular functions which take the relevant module as an argument. Are there any benefits of one approach vs the other? (see the sketch after this list)

  • Is the insert function just allowing the user to skip the step where they call the relevant changeset function? Is this a good idea?

    • When inserting into the database in this way, will a user only ever need one changeset function? In a regular Phoenix application there may be multiple changesets depending on what needs to be inserted/updated in the database. If a user needs to change their password, for example, there could be a changeset that deals specifically with just this. (Just to be clear, I cannot actually think of a situation with this approach where a user might need more than one changeset function, but thought it best to raise the point in case I have missed something)
  • The get and get_by functions seem to just call their Repo equivalents. Are they needed?
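
For illustration, the macro-free alternative raised in the first question might look roughly like this (a hypothetical sketch; the explicit module argument replaces the injected __MODULE__):

defmodule Append.AppendOnlyLog do
  alias Append.Repo

  # the schema module is passed in explicitly instead of being injected via a macro
  def insert(module, attrs) do
    module
    |> struct()
    |> module.changeset(attrs)
    |> Repo.insert()
  end
end

# usage: Append.AppendOnlyLog.insert(Append.Address, %{name: "Thor", ...})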
