typeddevs / bashunit

Home Page: https://bashunit.typeddevs.com/

License: MIT License

Topics: bash, bash-script, testing, testing-framework, unittesting, hacktoberfest, assertions, mocking, tdd, test-driven-development, unittest, unittest-framework, unittesting-library

bashunit's Introduction

bashunit

A simple testing framework for bash scripts

Test your bash scripts in the fastest and simplest way; discover the most modern bash testing framework.

Description

bashunit is a comprehensive and lightweight testing framework for Bash, focused on the development experience. It boasts hundreds of assertions and functionalities like spies, mocks, providers and more, offers concise and clear documentation, and has a very active community.

Documentation

You can find the complete documentation for bashunit online, including installation instructions and the various features it provides, in the official bashunit documentation.

Contribute

You are welcome to contribute by reporting issues, sharing ideas, or opening pull requests.

Make sure to read our contribution guide where you will find, among other things, how to set up your environment with the various tools we use to develop this framework.

Contributors

adatamas, antonio-gg-dev, apotterri, ceejatec, chemaclass, cmayo, cosmevalera, djpohly, fabriziofs, github-actions[bot], h0adp0re, harsh-br0, herbygillot, jesusvalera, khru, leocavalcante, maheshmeleti, miki471151, raphaelstolt, valerangel

bashunit's Issues

Listing not tests but assertions

In the list of tests being executed, we are not showing the tests that are running but the assertions that are happening:

This bug could be related to issue #18.

bashunit command docs

Add a section to the documentation describing the usage of the arguments that our command can receive.

Early return assertions

Right now, all of our assertions follow an if-else structure. I believe that both readability and the development of future, more complex assertions would be simplified if we refactored them to apply the early-return pattern.
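
A minimal sketch of the idea (the function and counter names are illustrative, not bashunit's actual internals):

# Before: the result handling is nested in an if-else structure.
function assert_example() {
  if [[ "$1" == "$2" ]]; then
    ((_ASSERTIONS_PASSED++))
  else
    ((_ASSERTIONS_FAILED++))
    print_failed_test "$1" "$2"
  fi
}

# After: the happy path returns early, keeping the failure path flat.
function assert_example() {
  if [[ "$1" == "$2" ]]; then
    ((_ASSERTIONS_PASSED++))
    return 0
  fi

  ((_ASSERTIONS_FAILED++))
  print_failed_test "$1" "$2"
}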

Assert counting not working

Summary

When running an assertion inside another assertion, the assertion count is not incremented.

Current behavior

If we have two assertions nested, it only counts one.

How to reproduce

function test_xxx() {
  assertEmpty "$(assertEquals "hello world" "hello world")"
}
./bashunit ./tests/**/*.sh --filter xxx

This test is written as a single statement but contains two assertions.

function test_xxx1() {
  assertEquals "hello world" "hello world"
}

This test has only one assertion, yet it reports the same assertion count.

Expected behavior

Each nested assertion should increment the assertion count.

Watcher not scrolling down

Q                 A
OS                MacOS / Linux / Windows
Shell             sh / bash / ...
bashunit version  0.0.5

Summary

Current behavior

The watch command is not capable of scrolling down, so we never see the end of the test output. This is a clear limitation for anyone with many tests, as in our case.

How to reproduce

run make test/watch

Expected behavior

Be able to see all the tests and their full output.

parameterized test

It is quite often desirable to run the same test logic in various scenarios. At the moment there is no dedicated facility in bats to make that easy, so I resorted to placing the test logic in a dedicated function and ugly copy/pasting the invocations with different parameters where I needed them.

I am not yet sure what the "bats way" to specify parametric tests could be, but since parsing and preparation of the test code happens anyway, it could be something like:

BACKEND=( docker singularity )
NJOBS=( 1 10 )

@test "description" BACKEND NJOBS {
   echo "Here I am with backend=$backend njobs=$njobs"
   ...
}

and running it would result in 4 tests with adjusted descriptions:

  • description [backend=docker,njobs=1]
  • description [backend=docker,njobs=10]
  • description [backend=singularity,njobs=1]
  • description [backend=singularity,njobs=10]

and still be subject to --filter selection (so, e.g., I could select only the ones for backend=docker).

Possible implementations

  • Pass only arrays to the test
  • On the runner, check whether the test has arguments
    • If so, check that they are arrays of the same length; if not, throw an error
    • Iterate over the values of each argument
    • Add the parameters passed to the name of the test
    • Execute the test, passing the values being iterated (see the sketch below)
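
A rough bash sketch of that expansion (all function and variable names are hypothetical, and the nameref syntax requires bash 4.3+):

# Hypothetical sketch: expand two parameter arrays into one test run per
# combination, appending the parameter values to the displayed test name.
function run_parameterized_test() {
  local test_fn="$1"
  local -n _params_a="$2"  # nameref to the first parameter array
  local -n _params_b="$3"  # nameref to the second parameter array

  local backend njobs
  for backend in "${_params_a[@]}"; do
    for njobs in "${_params_b[@]}"; do
      echo "• ${test_fn} [backend=${backend},njobs=${njobs}]"
      "$test_fn"  # $backend and $njobs are visible via bash dynamic scoping
    done
  done
}

BACKEND=(docker singularity)
NJOBS=(1 10)

function test_description() {
  echo "Here I am with backend=$backend njobs=$njobs"
}

run_parameterized_test test_description BACKEND NJOBS  # produces 4 runs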

Example

https://www.baeldung.com/parameterized-tests-junit-5

A similar feature request in another tool, not yet implemented:

bats-core/bats-core#241

"Next page" button not working

πŸ“š Description

Being on the Getting Started page, when you scroll down to the bottom and try to go to the next page, instead of going to the Assertions page, you are taken to #installation, which is a subsection of the Getting Started page.

Environment docs

Write detailed documentation on how to configure the library with the .env files and on the use of each of its properties.

Implement fake function

fake function

fake <command> [replacement code]

Fakes <command>, replacing it with the replacement code (if specified) for the rest of your test's execution. If no replacement code is specified, the command is replaced by one that echoes the stdin of fake. This may be useful if you need to simulate an environment for your code under test.

Example

Source: https://github.com/pgrange/bash_unit

For instance:

fake ps echo hello world
ps

will output:

hello world

We can do the same using the stdin of fake:

fake ps << EOF
hello world
EOF
ps
hello world

It has been asked whether using fake creates actual fakes, or stubs, or mocks, or maybe spies, or maybe dummies. The first answer to this question is: it depends. The second is: read the great and detailed literature on this subject.

Add documentation

Improve the documentation in the README.md, add more examples, and update the old ones in the repo.

Total assertions count not working when parallel run enabled

Q                 A
OS                Linux
Shell             bash
bashunit version  0.4.0

Summary

I detected this error during the development of issue #26. I kept running into problems until I discovered that they were caused by a bug in the parallel run; the bug is present in the current version available on main.

Current behavior

When you launch the tests in parallel, the total assertion counter always reads 0.

How to reproduce

To reproduce it, you only need to set PARALLEL_RUN=true in your .env file.

Expected behavior

It's expected to count the assertions correctly just like when the tests are run in series. The final output summary of running the tests in parallel and in series should match (even if the order in which the tests are run doesn't).
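
A plausible cause, offered as an assumption rather than a confirmed diagnosis: counters incremented inside a child process never propagate back to the parent. A minimal demonstration:

# Increments made in a background subshell are lost to the parent shell,
# which would leave the summary counter at 0 when tests run in parallel.
_ASSERTIONS_PASSED=0

( ((_ASSERTIONS_PASSED++)) ) &  # the child gets its own copy of the variable
wait

echo "$_ASSERTIONS_PASSED"      # prints 0, not 1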

Find all tests in a folder

Currently

The only way to call it is by passing the names of the files to execute, which is bad developer experience. For example:

Trying to run all tests in a folder:

For example, to run all .sh files in a folder, why should I have to match all of them with a glob? Shouldn't the framework know which files to find and execute?

Error on count of assertions and failures

Summary

I have created a test that should fail due to human error. In the error list I see the error, but it is not being counted in the final counter.

Current behavior

How to reproduce

If you create a test like

function test_bug_on_assert_equals() {
  function fake_function() {
    return 1
  }

  assertEquals \
    "$SUCCESSFUL_EMPTY_MESSAGE" "$(assertSuccessfulCode "$(fake_function)")"

  assertGeneralError "$(fake_function)"
}

The output shows:

Running tests/unit/assert_test.sh
βœ— Failed: Bug on assert equals
    Expected ''
    but got 'βœ— Failed: Bug on assert equals
    Expected '1'
    to be exactly '0''

but the counters for failed tests and failed assertions are not increasing.

Expected behavior

Count the real number of assertions

This issue could be related to: #50

beforeEach/setUp afterEach/tearDown functions

Add auxiliary functions beforeEach/setUp and afterEach/tearDown that execute before and after each test, if they exist, to set up and tear down fixtures or other requirements for arranging our tests.
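
A usage sketch of how such hooks might look in a test file (the hook names follow the issue's wording; the final implementation may differ):

function setUp() {
  _TEST_DIR=$(mktemp -d)  # arrange a fresh fixture before each test
}

function tearDown() {
  rm -rf "$_TEST_DIR"     # clean the fixture up after each test
}

function test_creates_a_file() {
  touch "$_TEST_DIR/file.txt"
  assertEquals "file.txt" "$(ls "$_TEST_DIR")"
}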

Behavior of PARALLEL_RUN

Both @Chemaclass and I found it odd that if we leave PARALLEL_RUN empty in the .env file, it runs in parallel. This is because the code evaluates PARALLEL_RUN = true to run in parallel, while env_configuration.sh defines PARALLEL as true if it's empty.

Both @Chemaclass and I find it more intuitive that if it's empty, it should run synchronously (in the vast majority of languages, the absence of a value/null/undefined evaluates to false).

If we wanted to keep the current behavior, I believe we should invert the conditions in the code (i.e., use PARALLEL_RUN != false instead of PARALLEL_RUN = true) so that the code expresses the actual behavior more directly.
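
A minimal sketch of that inversion (illustrative, not the actual bashunit source):

# Current style of check: only an explicit "true" enables parallelism, so an
# empty value relies on env_configuration.sh defaulting it to true elsewhere.
if [[ "$PARALLEL_RUN" == true ]]; then
  run_tests_in_parallel
fi

# Proposed inversion: only an explicit "false" disables parallelism, making
# the "empty means parallel" behavior visible in the code itself.
if [[ "$PARALLEL_RUN" != false ]]; then
  run_tests_in_parallel
fi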

For now, we've changed it so that the GitHub actions work correctly due to issue #39.

Another distinct matter is whether we want the GitHub actions to run in parallel (once the issue #39 is resolved). In that case, I think it would be interesting to have a separate .env for GitHub actions from the .env.example.

To summarize, I want to discuss three points in this issue.

  1. What behavior do we expect when PARALLEL_RUN is empty?
    • My opinion is that it should be evaluated as false.
  2. If we want an empty PARALLEL_RUN to evaluate as true, do we invert the code checks so they don't cause confusion when reading them?
    • My opinion is that they should be inverted if we decide that an empty PARALLEL_RUN evaluates as true.
  3. How do we want it to run in GitHub actions, and how do we proceed?
    • My opinion is that they should run in parallel as soon as issue #39 is fixed, and that we should have a separate .env from the .env.example for GitHub actions (i.e. .env.github).

What is your opinion?

Create a filter by test name

πŸ“š Description

Add the possibility to filter the test(s) to run by function name/pattern.

πŸ§ͺ Example

Having these tests:

# tests/some_tests.sh

function test_foo_1() {...}

function test_bar() {...}

# tests/more_tests.sh

function test_foo_2() {...}

function test_baz() {...}

Running bashunit tests/* --filter foo should trigger only test_foo_1 and test_foo_2, ignoring the other tests.
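
A minimal sketch of the matching step, assuming a simple pattern match over the discovered function names (all names are illustrative):

# Keep only the test functions whose name matches the given filter.
function filter_test_functions() {
  local filter="$1"; shift
  local fn
  for fn in "$@"; do
    [[ "$fn" == *"$filter"* ]] && echo "$fn"
  done
}

# filter_test_functions foo test_foo_1 test_bar test_foo_2
# -> test_foo_1
#    test_foo_2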

Getting Started docs

Revise the Getting Started section and add precise first steps with commands and examples, especially for installation, the first tests, and the first run.

Create assertExitCode

Create an assert function that verifies the return/exit code of a command:

  • like assertExitCode
  • and assertNotExitCode
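
A hypothetical usage sketch, assuming the expected code is passed as the first argument and the actual code is taken from $? (as in the snippets further down this page):

function test_exit_codes() {
  function fails_with_two() { return 2; }

  fails_with_two
  assertExitCode "2"     # would pass: the last command exited with 2

  fails_with_two
  assertNotExitCode "0"  # would pass: the exit code was not 0
}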

Installer should use latest not main

The installer script uses main, but in order to allow installs through git modules it should use latest.

It should be as simple as changing main to latest:

DIR=${1-lib}
TAG=${2-main}

cd "$(dirname "$0")"
rm -f "$DIR"/bashunit
[ -d "$DIR" ] || mkdir "$DIR"
cd "$DIR"

if [[ $TAG == main ]]; then
  echo "> Using main branch"
  git clone https://github.com/TypedDevs/bashunit temp_bashunit
  cd temp_bashunit
  ./build.sh
  cd ..
  cp temp_bashunit/bin/bashunit bashunit
  rm -rf temp_bashunit
else
  echo "> Using a concrete tag '$TAG'"
  curl -L -O -J "https://github.com/TypedDevs/bashunit/releases/download/$TAG/bashunit"
  chmod +x "bashunit"
fi
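
The proposed change is then a single line in the defaults:

DIR=${1-lib}
TAG=${2-latest}  # default to the latest release instead of the main branch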

Display total test functions

πŸ“š Description

Currently, it doesn't count the total number of tests but rather the total number of assertions that were found.

πŸ§ͺ Expected behaviour

Similar to PHPUnit, I would like to display the total number of tests and the total number of assertions as two different numbers.

Projects root README.md

It would be nice if the README.md introduced you to the project and then acted as an index to the rest of the documents, without duplicating information.

That is, add links to the official documentation at bashunit.typeddevs.com, or to other relevant documents such as CONTRIBUTING.md or CHANGELOG.md...

Add assert_not_matches

Example:

Use:

assert_not_matches <unexpected-regex> <actual> [message]

On a real test

test_obvious_matching_with_assert_not_matches(){
  assert_not_matches "a str.*" "a string" "'a string' should not match 'a str.*'"
}
test_obvious_notmatching_with_assert_not_matches(){
  assert_not_matches "a str.*" "another string"
}

Output:

	Running test_obvious_matching_with_assert_not_matches ... FAILURE
'a string' should not match 'a str.*'
 expected regex [a str.*] should not match but matched [a string]

Source: https://github.com/pgrange/bash_unit

Add option --stop-on-failure

Currently

If a test fails, the runner continues until all tests have finished.

Expected

Add a new option --stop-on-failure that will force the runner to stop right after encountering one failing test.

Inspired by PHPUnit: https://docs.phpunit.de/en/10.3/configuration.html#the-stoponfailure-attribute

HINT

  1. Look at how the _SIMPLE_OUTPUT is implemented
  2. Look at the existing command options in the entry point of bashunit: https://github.com/TypedDevs/bashunit/blob/main/bashunit
# for example:
while [[ $# -gt 0 ]]; do
  argument="$1"
  case $argument in
    ...
    --stop-on-failure)      
      _STOP_ON_FAILURE=true
      shift
      ;;
    ...
  esac
done

Then, we could check that global variable inside the function console_results::print_failed_test(), and if it's true, force an exit 1.
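
A minimal sketch of that check (using the function and variable names from the hint; the real internals may differ):

function console_results::print_failed_test() {
  # ... existing failure rendering ...

  if [[ "$_STOP_ON_FAILURE" == true ]]; then
    exit 1  # abort the runner right after the first failing test
  fi
}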

⭐ Introducing Mush: A Perfect Fit for bashunit! ⭐

Hey there, amazing bashunit maintainers! πŸ‘‹

We've got something exciting to discuss. πŸš€

Have you ever considered adding Mush to your toolkit? The structure of bashunit seems perfectly aligned with modern Rust projects, which is precisely what Mush is all about! 🦀

Here's why it could be a fantastic match:

πŸ“‚ File Structure: The file structure of bashunit already seems inspired by the organization seen in modern Rust projects, which is a core principle behind Mush.

⭐ GitHub Love: We kindly invite you to give Mush a star on GitHub if you find it intriguing. Your support would mean the world to us! 🌍❀️

🍴 Fork and Collaborate: Feel free to fork Mush and propose any valuable changes you think could benefit both projects. This could be the beginning of a fruitful collaboration! 🌟

Let's explore how Mush can enhance the bashunit experience and work together to create a more powerful tool for the developer community. πŸ’ͺπŸ’¬

Looking forward to your thoughts and potential synergies! πŸ™Œ

Display also the total of failed tests

Scenario

Currently, we are not counting the number of tests that fail, but rather the number of asserts that fail. We are forgetting that there can be more than one assert per test, and that if even one assert of a test fails, that test is already a failed test.

Add assert_matches

To implement "Display the execution time" (#19), I think this kind of assertion will help:

assert_matches <expected-regex> <actual> [message]
test_obvious_notmatching_with_assert_matches(){
  assert_matches "a str.*" "another string" "'another string' should not match 'a str.*'"
}
test_obvious_matching_with_assert_matches(){
  assert_matches "a[nN].t{0,1}.*r str.*" "another string"
}

Example output:

Running test_obvious_notmatching_with_assert_matches ... FAILURE
'another string' should not match 'a str.*'
 expected regex [a str.*] to match [another string]
doc:2:test_obvious_notmatching_with_assert_matches()

Source of the example: https://github.com/pgrange/bash_unit

Add SIMPLE_OUTPUT env var

Status quo

Currently, you can use the option -s|--simple when running ./bashunit to display dots instead of the test names.

Expected

Allow defining SIMPLE_OUTPUT in the .env with a default of false; when set to true, it will have the same behaviour as the --simple option.

Consider

The .env configuration must be overridden by the command-line configuration. That is, if it is disabled in the .env but --simple is used, then the output should be displayed in simple mode.
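
A minimal sketch of that precedence rule (variable names are illustrative): the .env value seeds the default, and the CLI flag overrides it:

_SIMPLE_OUTPUT="${SIMPLE_OUTPUT:-false}"  # seeded from .env, default false

while [[ $# -gt 0 ]]; do
  case "$1" in
    -s|--simple)
      _SIMPLE_OUTPUT=true  # the command-line flag always wins over .env
      ;;
  esac
  shift
done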

Display dots instead of tests names

Current behaviour

Currently, the only output from bashunit is the test names, rendered in a pretty way.

Acceptance Criteria

We want to allow rendering dots (a dot per test?). We can enable this feature by running bashunit with a --dots flag. E.g.:

./bashunit tests/* --dots
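
A sketch of the rendering switch this implies (flag and variable names are hypothetical):

# Print a dot per test instead of the full test name when dots are enabled.
if [[ "$_DOTS_OUTPUT" == true ]]; then
  printf '.'
else
  printf '✓ Passed: %s\n' "$test_name"
fi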

Funky behavior on the assertion count

Q                 A
OS                MacOS / Linux / Windows
Shell             sh / bash / ...
bashunit version  x.y.z

Summary

While adding a new assertion I did not increment the assertion counter, yet the assertion was counted anyway.

Current behavior

Adding an assertion like:

function assertExitCode() {
  local actual_exit_code=$?
  local expected_exit_code="$1"
  local label="${3:-$(normalizeFunctionName "${FUNCNAME[1]}")}"
  if [ $actual_exit_code -eq "$expected_exit_code" ]; then
    return 0
  else
    ((_ASSERTIONS_FAILED++))
    printFailedTest  "${label}" "${actual_exit_code}" "to not match" "${expected_exit_code}"
    return 1
  fi
}

Without incrementing $_ASSERTIONS_PASSED, it is somehow still counted.

How to reproduce

Create a new assertion like:

function assertExitCode() {
  local actual_exit_code=$?
  local expected_exit_code="$1"
  local label="${3:-$(normalizeFunctionName "${FUNCNAME[1]}")}"
  if [ $actual_exit_code -eq "$expected_exit_code" ]; then
    #((_ASSERTIONS_PASSED++))
    return 0
  else
    ((_ASSERTIONS_FAILED++))
    printFailedTest  "${label}" "${actual_exit_code}" "to not match" "${expected_exit_code}"
    return 1
  fi
}

Run the tests and note the number of assertions; then uncomment #((_ASSERTIONS_PASSED++)), run them again, and you will see that the number does not change.

Expected behavior

Assertions that never increment the counters should not be counted.

Add --upgrade option

Expected

Add a new option --upgrade that will upgrade bashunit to the latest release.

Ideas

Using latest branch

  • You could think about using the latest branch and building bashunit from it.
    • The latest branch will/should always point to the latest stable release.

Using latest release

You could consider using the latest release: https://github.com/TypedDevs/bashunit/releases/latest

  • which points to the release page, from which the bashunit file can be downloaded
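
A sketch of the release-based approach (the releases/latest/download URL is a standard GitHub pattern; the self-replacement logic is illustrative):

function upgrade_bashunit() {
  local url="https://github.com/TypedDevs/bashunit/releases/latest/download/bashunit"

  curl -L -o bashunit.tmp "$url" \
    && chmod +x bashunit.tmp \
    && mv bashunit.tmp "$0"  # replace the current executable in place
}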

Output of duplicated tests is not understandable

Summary

Whenever you are developing a new test, if you duplicate one, the output is impossible to understand.

make test
Running tests/acceptance/bashunit_test.sh
βœ“ Passed: Bash unit when a test fail
βœ“ Passed: Bash unit when a test passes
Running tests/functional/logic_test.sh
βœ“ Passed: Other way of using the exit code
βœ“ Passed: Should validate a non ok exit code
βœ“ Passed: Should validate an ok exit code
βœ“ Passed: Text should be equal
βœ“ Passed: Text should contain
βœ“ Passed: Text should match a regular expression
βœ“ Passed: Text should not contain
βœ“ Passed: Text should not match a regular expression
Running tests/unit/assert_test.sh
βœ“ Passed: Successful assertArrayContains
βœ“ Passed: Successful assertArrayNotContains
βœ“ Passed: Successful assertCommandNotFound
βœ“ Passed: Successful assertContains
βœ“ Passed: Successful assertEmpty
βœ“ Passed: Successful assertEquals
βœ“ Passed: Successful assertExitCode
βœ“ Passed: Successful assertGeneralError
βœ“ Passed: Successful assertMatches
βœ“ Passed: Successful assertNotContains
βœ“ Passed: Successful assertNotEmpty
βœ“ Passed: Successful assertNotEquals
βœ“ Passed: Successful assertNotMatches
βœ“ Passed: Successful assertSuccessfulCode

βœ— Failed: Successful fake
    Expected 'to be empty'
    but got 'βœ— Failed: Successful fake
    Expected '1'
    to be exactly '0''
βœ“ Passed: Successful return assertExitCode
βœ“ Passed: Unsuccessful assertArrayContains
βœ“ Passed: Unsuccessful assertArrayNotContains
βœ“ Passed: Unsuccessful assertCommandNotFound
βœ“ Passed: Unsuccessful assertContains
βœ“ Passed: Unsuccessful assertEmpty
βœ“ Passed: Unsuccessful assertEquals
βœ“ Passed: Unsuccessful assertExitCode
βœ“ Passed: Unsuccessful assertGeneralError
βœ“ Passed: Unsuccessful assertMatches
βœ“ Passed: Unsuccessful assertNotContains
βœ“ Passed: Unsuccessful assertNotEmpty
βœ“ Passed: Unsuccessful assertNotEquals
βœ“ Passed: Unsuccessful assertNotMatches
βœ“ Passed: Unsuccessful assertSuccessfulCode
βœ“ Passed: Unsuccessful return assertExitCode
Running tests/unit/console_results_test.sh
βœ“ Passed: Not render all tests passed when failed tests
βœ“ Passed: Not render failed assertions when not failed tests
βœ“ Passed: Not render failed tests when not failed tests
βœ“ Passed: Not render passed assertions when no passed tests nor assertions
βœ“ Passed: Not render passed tests when no passed tests nor assertions
βœ— Failed: Render all tests passed when not failed tests
    Expected '> Duplicate test functions found'
    to match '[42mAll tests passed[0m'
βœ— Failed: Render failed assertions when failed tests
    Expected '> Duplicate test functions found'
    to match 'Assertions:[^
]*[31m0 failed[0m[^
]*total'
βœ— Failed: Render failed tests when failed tests
    Expected '> Duplicate test functions found'
    to match 'Tests:[^
]*[31m1 failed[0m[^
]*total'
βœ— Failed: Render passed assertions when passed assertions
    Expected '> Duplicate test functions found'
    to match 'Assertions:[^
]*[32m1 passed[0m[^
]*total'
βœ— Failed: Render passed assertions when passed tests
    Expected '> Duplicate test functions found'
    to match 'Assertions:[^
]*[32m0 passed[0m[^
]*total'
βœ— Failed: Render passed tests when passed assertions
    Expected '> Duplicate test functions found'
    to match 'Tests:[^
]*[32m0 passed[0m[^
]*total'
βœ— Failed: Render passed tests when passed tests
    Expected '> Duplicate test functions found'
    to match 'Tests:[^
]*[32m1 passed[0m[^
]*total'
βœ— Failed: Render time of execution when all assertions passed
    Expected '> Duplicate test functions found'
    to match 'Time taken: [[:digit:]]+ ms'
βœ— Failed: Render time of execution when not all assertions passed
    Expected '> Duplicate test functions found'
    to match 'Time taken: [[:digit:]]+ ms'
βœ“ Passed: Should not render time of execution when all assertions passed on mac
βœ“ Passed: Should not render time of execution when not all assertions passed on mac
βœ— Failed: Total asserts is the sum of passed and failed asserts
    Expected '> Duplicate test functions found'
    to match 'Assertions:[^\n]*4 total'
βœ— Failed: Total tests is the sum of passed and failed tests
    Expected '> Duplicate test functions found'
    to match 'Tests:[^\n]*6 total'
Running tests/unit/helpers_test.sh
βœ“ Passed: CheckDuplicateFunctions with duplicates
βœ“ Passed: CheckDuplicateFunctions without duplicates
βœ“ Passed: DummyFunction is executed with execute function if exists
βœ“ Passed: GetFunctionsToRun fail when duplicates
βœ“ Passed: GetFunctionsToRun filter no matching functions should return empty
βœ“ Passed: GetFunctionsToRun no filter should return all functions
βœ“ Passed: GetFunctionsToRun with filter should return matching functions
βœ“ Passed: No function is executed with execute function if exists
βœ“ Passed: NormalizeTestFunctionName camel case
βœ“ Passed: NormalizeTestFunctionName empty
βœ“ Passed: NormalizeTestFunctionName one word
βœ“ Passed: NormalizeTestFunctionName snake case
βœ“ Passed: Successful unsetIfExists
βœ“ Passed: Unsuccessful unsetIfExists
Running tests/unit/setup_teardown_test.sh
βœ“ Passed: Counter is decremented and incremented after teardown and setup
βœ“ Passed: Counter is incremented after setup before script and setup
> Duplicate test functions found

Current behavior

Duplicating a test with the same name makes the failure output impossible to understand.

How to reproduce

Duplicate the name of a test.

Expected behavior

The output should say which test is duplicated, instead of printing all the tests.
