
Introduction

I'm Alexander 'Vijo' Ott, a Software Engineer 👨‍💻. My portfolio website can be found here.

I primarily work in the Backend, but I also enjoy doing Fullstack work.

I work on everything from conception, through development and improvement, up to the deployment of software.

Programming languages

Over the years I've worked with many programming languages and I'm fine with using most of them; however, I'd like to give a special mention to the following ones.

Backend languages

  • Python
  • C#
  • Java
  • Go

I really enjoy writing smaller programs in Python, but anything bigger is usually reserved for Java/C#.

Frontend languages & frameworks

  • React
  • Vue
  • TypeScript/JavaScript

Fields I focus on

  • Databases
  • Security
  • Testing


Testing guidelines

Notes for self:

Enterprise application: An enterprise application aims at automating or assisting an organization's inner processes.
Branch coverage: Like code coverage, but focused on the different branches (e.g. if and switch statements): branch coverage = branches traveled / total number of branches.
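The formula in a toy Python sketch (the function and the numbers are hypothetical):

```python
def classify(n):
    # Two branches: the "if" arm and the "else" arm.
    if n < 0:
        return "negative"
    else:
        return "non-negative"

# A suite that only ever calls classify(5) travels 1 of the 2 branches:
branches_traveled = 1
total_branches = 2
branch_coverage = branches_traveled / total_branches
print(branch_coverage)  # 0.5, i.e. 50% branch coverage
```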

Goal of this document:
-> Unit Testing, Automated Testing, Integration Testing (+ integration in build pipelines)
-> Nico Kuhn, Katalin as support for other languages and for the test guide
-> Have Flip read over it, maybe he has some more input

Testing guidelines

Well designed API: Expose the absolute minimum of operations and states. Code that directly helps the client should be public, everything else is private and an implementation detail.

Goal of testing code

First off: Tests, and code in general, are a liability, not an asset. The more code you have, the more time you'll have to spend on fixing things. Bad tests can do more harm than good in this case.

If said code is hard to test, it's a strong sign that you should refactor it. The code might be too tightly coupled and would be easier to understand if you broke it apart. Note that the converse doesn't hold: easily testable code is not necessarily of good quality.

The goal of a Unit Test (UT): not verifying units of code, but rather units of behaviour. Such a unit can be a few lines of code, or span multiple classes.

A good UT can be measured by the following criteria:

  • It verifies a small piece of code.
  • It does so quickly.
  • It does so in an isolated manner.

Quality of test code matters as much as production code!

//TODO: expand paragraph above, see chapter 2 "Tests shouldn’t verify units of code"

Coverage

While a decent tool to get into the mindset of UTing, focusing on coverage will cause issues in the long run. If developers focus on reaching f.ex. 90% code coverage, they will not focus on writing proper tests. Instead they will try to reach this arbitrary goal by writing tests that cover a large number of lines (and in the absolute worst case, tests that don't even assert properly). This can give you a false sense of security, since your "codebase is 90% covered by tests".

A funny metaphor I heard regarding focusing on code coverage is the following:

Think of a patient in a hospital. Their high temperature might indicate a fever and is a helpful observation. But the hospital shouldn't make the proper temperature of this patient a goal to target by any means necessary. Otherwise, the hospital might end up with the quick and "efficient" solution of installing an air conditioner next to the patient and regulating their temperature by adjusting the amount of cold air flowing onto their skin. Of course, this approach doesn't make any sense.

It's not worthwhile to test trivial code. You get coverage (which doesn't tell you anything about the quality of your tests anyway), but you won't find a regression error in a test that validates a 'setName' method or similar.

The most important code to test is complex (i.e. it has many paths (if/switch/etc)) and/or domain significant (relevant to the end user's goal). Code that has these qualities AND has many collaborators is overcomplicated and should be refactored into smaller parts that can be tested more easily (e.g. split it up into a controller and algorithms).

Types of Tests

Type: Goal
End-to-end: Tests the entire application in a single routine. It incorporates every single component of the application (e.g. UI, backend, database, other outside sources, etc).
Integration: Uses out-of-process dependencies - but not all of them (e.g. only a database and an outside source, but not the UI).
UT: An automated test verifying a small piece of code, quickly, in an isolated manner.

End-to-end

This type of test encompasses every component of the application in one routine: the UI, the backend, the database and any other outside sources.

Integration

Not the same as a component test.
//TODO: write something

Unit Tests

Avoid if's in Tests: An if statement indicates that this test verifies too many behaviours at once and should be split up into multiple tests.
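A sketch of such a split, in Python for brevity (the `parse_age` function is hypothetical):

```python
def parse_age(text):
    # Hypothetical function under test: returns the age as an int,
    # or None for invalid input.
    try:
        age = int(text)
    except ValueError:
        return None
    return age if age >= 0 else None

# Anti-pattern: one test whose if covers two behaviours at once.
def test_parse_age():
    for text, valid in [("42", True), ("abc", False)]:
        result = parse_age(text)
        if valid:  # this if is the smell
            assert result == 42
        else:
            assert result is None

# Better: one test per behaviour, no branching.
def test_a_valid_age_is_parsed():
    assert parse_age("42") == 42

def test_a_non_numeric_age_is_invalid():
    assert parse_age("abc") is None
```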

Do not initialize test fixtures in the constructor: Some parts of tests you will need to reuse. For these, create private factory functions that create the objects, rather than putting them into a shared initialization function. If all tests used a single shared object, one test might change the state of that object and thereby affect the others.
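A minimal Python sketch of such a private factory function (the `ShoppingCart` class is hypothetical):

```python
class ShoppingCart:
    # Hypothetical class under test.
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

# Private factory function: every test gets a fresh fixture,
# so no test can observe state left behind by another one.
def create_cart_with(*items):
    cart = ShoppingCart()
    for item in items:
        cart.add(item)
    return cart

def test_adding_an_item_grows_the_cart():
    cart = create_cart_with("apple")
    cart.add("pear")
    assert cart.items == ["apple", "pear"]

def test_a_new_cart_is_empty():
    assert create_cart_with().items == []
```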

AAA Pattern

This is the same as the Given-When-Then pattern, although the latter is easier to read for non-programmers.

The Arrange, Act, Assert (AAA) pattern is very useful for structuring UTs: it makes them more readable and is a very easy adjustment to make.

Instead of simply writing a test, you structure your test to have 3 different sections. In the first one you prepare your test and set it up (Arrange), then you execute code that you want to test (Act) and lastly you check whether the behaviour of said code is correct (Assert).

This makes it very easy to skim over a test if you know what you're looking for. In practice it might look like this:

[Fact]
public void Sum_of_two_numbers()
{
    // Arrange
    double first = 10;
    double second = 20;
    var calculator = new Calculator();

    // Act
    double result = calculator.Sum(first, second);

    // Assert
    Assert.Equal(30, result);
}

The AAA pattern also guides you toward better UTs: with its 3 sections you should rarely, if ever, have multiple Act and Assert sections in 1 UT, as that makes it harder to understand which part of the test failed. An exception to this rule is a test that takes a long time to set up. In this case it is fine to have multiple Act/Assert sections, although you should still try to avoid them.

Arrange will usually be the largest section. It can be a little bigger than the Act and Assert sections together, but if it's significantly larger you might want to move some parts of it into private factory/setup functions.

Act should only be a single line of code - if it's more, the system under test might need to be refactored.

Assert can contain multiple assert statements (you should use only one Assert section, but it may hold several statements). Since we're testing units of behaviour, a unit may exhibit multiple outcomes - thus you should check every relevant one.
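A minimal Python sketch of a single Assert section with several statements (the `Account` class is hypothetical):

```python
class Account:
    # Hypothetical class: withdrawing returns the withdrawn amount
    # and lowers the balance.
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount
        return amount

def test_a_withdrawal_lowers_the_balance():
    # Arrange
    account = Account(balance=100)

    # Act - a single line
    withdrawn = account.withdraw(30)

    # Assert - one section, but two outcomes of the same behaviour
    assert withdrawn == 30
    assert account.balance == 70
```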

Naming

The following naming scheme is not useful for a test:

[FunctionUnderTest]_[Scenario]_[ExpectedResult]

[FunctionUnderTest] only provides a marginal benefit: naming the function you're testing. If you use the AAA pattern, you already have a good way of navigating to that function anyway. Thus, the scheme carries mostly downsides: if you rename the function, you have to rename every test for it - and you're testing behaviour, not code.
All in all, this naming scheme encourages you to focus on implementation details instead of behaviour.

The above convention is fine for utility code, since it only contains simple auxiliary functions - but that is, once again, an exception.

Another anti-pattern is:

Should be

When writing tests you're dealing with facts. Replace 'should be' with 'is' and nothing changes, except the test is more readable.

Additionally, you can also use grammar to make the test easier to understand.

Guidelines

  • Do not follow a rigid naming policy. You simply can’t fit a high-level description of a complex behaviour into the narrow box of such a policy.
  • Name the tests as if you were describing them to a non-programmer, thus making it easier to focus on behaviour, rather than code.
  • Separate words by underscores to increase readability.

For example, a good test name would be similar to:

Delivery_with_a_past_date_is_invalid()

Even without knowing the code, you have a rough idea of what is supposed to happen.
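Such a name could belong to a test like the following Python sketch (the `Delivery` class and its validity rule are hypothetical):

```python
from datetime import date, timedelta

class Delivery:
    # Hypothetical domain class: a delivery date in the past is invalid.
    def __init__(self, delivery_date):
        self.delivery_date = delivery_date

    def is_valid(self):
        return self.delivery_date >= date.today()

def test_delivery_with_a_past_date_is_invalid():
    yesterday = date.today() - timedelta(days=1)

    delivery = Delivery(yesterday)

    assert delivery.is_valid() is False
```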

Parameterized

Some testing frameworks allow the use of parameters to reduce duplicated test code.

[Theory]
[InlineData(-1, false)]
[InlineData(0, false)]
[InlineData(1, false)]
[InlineData(2, true)]
public void Can_detect_an_invalid_delivery_date(
    int daysFromNow,
    bool expected)
{
    // Completed sketch - the original snippet ended after the signature.
    DateTime deliveryDate = DateTime.Now.AddDays(daysFromNow);
    bool isValid = new Delivery(deliveryDate).IsValid();

    Assert.Equal(expected, isValid);
}
A downside of parameterized tests is that it's harder to grasp what's going on - especially when mixing happy and unhappy/edge cases. When using them, it's a good idea to split those cases into separate tests, and if the behaviour is too complicated, refrain from using parameterized tests at all.

Different schools of Unit Testing

There are different approaches/schools of UTing. To keep it short: I prefer the Classical School (CS), but I'll list the benefits and downsides of both of them here.

School: Goal
London: Focus exclusively on the single class being tested, by removing all outside dependencies.
Classical: Run the tests themselves in an isolated manner. Outside dependencies may affect these tests, but Test A must not change the behaviour or outcome of Test B.

London

While not specific to the London School, Mocks and Stubs are primarily used here.

Mocks: Help to emulate and examine outgoing interactions (e.g. SendMail()).
Stubs: Help to emulate incoming interactions (e.g. GetFromDatabase()). A Stub has return values, a Mock does not.
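A minimal hand-rolled sketch in Python (all names hypothetical): the stub supplies incoming data and is never asserted on; the mock records the outgoing call and is:

```python
class DatabaseStub:
    # Stub: emulates an incoming interaction (it returns data).
    def get_user_email(self, user_id):
        return "user@example.com"

class MailerMock:
    # Mock: emulates and examines an outgoing interaction (it records calls).
    def __init__(self):
        self.sent_to = []

    def send_mail(self, address):
        self.sent_to.append(address)

def greet_user(user_id, database, mailer):
    # Hypothetical code under test.
    mailer.send_mail(database.get_user_email(user_id))

def test_greeting_sends_a_mail_to_the_user():
    database = DatabaseStub()
    mailer = MailerMock()

    greet_user(42, database, mailer)

    # Assert on the mock (the outcome) - but never on the stub.
    assert mailer.sent_to == ["user@example.com"]
```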

Anti-pattern: Asserting interactions with Stubs. You already know what the return value is, you don't need to assert it.

London Downsides

  • Mocks hide architectural problems.

London Upsides

  • Very easy to point to a bug, since you only need to consider the class under test, rather than a construct of outside dependencies, other classes and potentially other tests that affect the outcome of this broken test.

Classical

Classical Downsides

  • In order to test something, you might need to go several layers deep into dependency code - just to set the test up.

Classical Upsides

  • That same setup pain is a useful indicator: it tells you that your dependencies are getting too large and hard to grasp - thus you should refactor your code to make it easier to understand (and test).

Test-Driven Development

  1. Write a failing test, with the functionality that should be added and how it should behave.
  2. Implement just enough to make the test pass. YAGNI is very important here!
  3. Refactor the code and then repeat the process of TDD.
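The three steps in miniature, as a Python sketch with a hypothetical `leap_year` requirement:

```python
# 1. Red: this test is written first and fails, because leap_year
#    does not exist yet (running it surfaces that error, which is
#    part of the feedback loop).
def test_a_year_divisible_by_four_is_a_leap_year():
    assert leap_year(2024) is True

# 2. Green: implement just enough to make the test pass - YAGNI,
#    so no century rules until a failing test demands them.
def leap_year(year):
    return year % 4 == 0

# 3. Refactor, then loop back to step 1 with the next failing test
#    (e.g. one asserting that leap_year(1900) is False).
```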

TDD with CS: Start from the domain model and work outward until you have fully functioning software. This can be harder, since you might not yet know exactly what you want to implement.

TDD with LS: Start from the highest level and go down.

A good Unit Test

A UT has the following properties:

  1. Protection against regression
    • How well a test protects against bugs
      • Protects against false negatives
  2. Resistant to refactoring
    • How well a test can withstand refactoring of the underlying code, without failing
      • The more the test is coupled to the implementation details, the more false alarms it will generate.
        -> A test should verify the observable behaviour, not the steps that the code takes to get there: Let your test tell a story about the problem.
      • Protects against false positives
  3. Fast feedback
    • The faster your tests are, the more you can have and the more often they can run.
  4. Maintainability
    • How hard it is to understand the test: Fewer lines == more readable.
    • How hard it is to run the test: The more out of process dependencies you have, the harder it is to run the test.

Not all of them can be achieved at the same time, so you might need to make concessions in certain areas.

Which parts to focus on

Resistance to refactoring: Non-negotiable - the most important part of this list.

The choice between fast feedback and protection against regressions is a slider, focusing on one sacrifices the other and vice-versa - you'll need to find a balance for this yourself.

Maintainability is an odd one, since you can't do a lot to focus on it. The larger a test, and the more out of process dependencies it uses, the harder it is to maintain it. Sometimes these factors are unavoidable, but you can at least keep them in the back of your mind.

List of Abbreviations

  • AAA: Arrange, Act, Assert Pattern
  • CS: Classical School
  • LS: London School
  • TDD: Test-Driven Development
  • UT: Unit Test

Final notes

If you want a deeper dive into most of these topics, read "Unit Testing Principles, Practices, and Patterns" by Vladimir Khorikov.

//TODO:
TDD:

Focus on one feature at a time

Red -> Green -> Refactor.

  1. Write failing test (even without Intellisense)
    -> Run Test and check what the compiler error is
  2. Write just enough code to have a passing test
-> E.g. write only the code that fixes the compiler errors, and afterwards make the test turn green
  3. Refactor
  4. Go back to write failing test

--
//TODO: integrate into text above

Properties of production code:

  • Complexity
    -> Number of decision-making/branching points: Higher number = more complex code
  • Domain significance
    -> How important is the code for the problem domain of the project (Utility code f.ex. has no domain significance)?
    -> Complex+Domain significant code benefit the most from unit testing.
  • Number of collaborators
    -> Collaborators are out-of-process or mutable dependencies for a class/method
    -> Thus we can define quadrants for every type of combination:
    • Both: Overcomplicated code
      Example: Fat controllers
      -> Too hard to test, but too risky to leave untested. Thus, you should try to break it apart into code with a high complexity and code with many collaborators.
    • Very complex/domain significant, but few collabs: Domain model, algorithms
      -> This should be tested. It's either complex code, or important to the domain, so it should be made sure that it works as intended.
    • Many collabs, not very complex: Controllers
      -> Test them in Component Tests/overarching tests, but not specifically.
    • Neither: Trivial code
      -> Don't test it. It's a waste of time.
      ==> Goal: Get rid of overcomplicated code, and test only complex/domain significant code for a highly valuable testing suite.

Component/Integration tests:
One happy path per business scenario + any edge cases that can't be covered by unit tests.

Interfaces (YAGNI): Interfaces with a single implementation don't provide any benefit - only use them if there's a chance you'll need multiple of them. HOWEVER, they can be used to mock out-of-process dependencies.
-> Can be used to mock a connection to get a token from a third party - if you don't need to mock that, you'll probably not need to use an interface for a single implementation.

Long tests (tests with a lengthy Arrange) are allowed to have multiple Acts, since not grouping them together would result in a test suite that is too slow. If that isn't the case, write a single test for each Act to make them easier to understand and modify when necessary.

Testing logging:
Logging must be tested if it's relevant for the customer, client, etc. - anybody who isn't a developer. If only developers see the logs, then they are an implementation detail and shouldn't be tested.

Cleanup of Tests:
Best done before every test: if cleanup runs after a test, it might be skipped (e.g. you're debugging and stop the test, or the build crashes).
-> No need for a teardown phase; simply integrate the cleanup into the setup phase.

Database testing:
Using in-memory databases purely for testing:
For automated pipeline testing, where you can't/don't want to set up everything every time a test runs, you can use in-memory databases, but I don't recommend it:
- It wastes time on tests that are not beneficial for production.
- Most of the time you're stuck fixing the in-memory database, instead of actual code, when the pipeline throws an error.
- Developers might forget to test both the in-memory and the standard database.
It can be beneficial and catch errors, but it can also simply result in additional work.
Which operations to test?
Writes are crucial to test, because the stakes are high here!
Reads are less critical; it's fine if you only test the most important/complex ones. You can test every read operation, but your main focus should lie on the writes, since an incorrect write operation can cause anything between mild confusion and a very expensive database operation.

Anti-patterns:
- Testing private methods: Since we're only testing observable behaviour, we don't care about private methods. Exposing them to test them would couple tests to implementation details and thus damage your suite by making your tests harder to refactor.
- Leaking implementation details: Don't imply any specific implementation when writing tests. Instead of duplicating the implementation details, hard-code the expected value (f.ex.: calculator.Add(2, 2) returns 4. In the test, instead of writing expected = 2 + 2, just write expected = 4 and compare it to the result of calculator.Add(2, 2)).
- Code pollution: Adding code to your production code that is only needed for testing. Tests should help you code - in this case, you are making your code more confusing by adding irrelevant details to your production code, for the sake of testing.
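The calculator example from the leaking-details point above, as a Python sketch (the `Calculator` class is hypothetical):

```python
class Calculator:
    # Hypothetical class under test.
    def add(self, first, second):
        return first + second

def test_sum_of_two_numbers():
    calculator = Calculator()

    # Anti-pattern: expected = 2 + 2 would duplicate the implementation.
    # Hard-code the expected value instead:
    expected = 4

    assert calculator.add(2, 2) == expected
```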

Skills

Git, LaTeX, Linux, DevOps, CI/CD, SQL (Postgres)

Go, Java, C# (Spring, SpringBoot, DotNet), Python (Python Frameworks)

C, C++

JS/React/TS/Vue

Projects I've worked on: GraphQL, EF6
