neotest's Issues

summary short output focus on floating window

from the summary window:

  • the 'o' mapping opens the full test output, keeps focus on the summary, and hides the floating output window on cursor move
  • if I then type the 'O' mapping, I expect the full output to close and the short output to open, but focus ends up on the full-output floating window

Thanks for this plugin!

Debugging setup

When running

require("neotest").run.run({strategy = "dap"})

I keep getting

The selected configuration references adapter `nil`, but dap.adapters.nil is undefined

This is using neotest-go. I'm guessing this is because that adapter does not support DAP yet? How would I be able to tell whether that's the case?

Or, if it does support DAP, have I misconfigured something?

Here is my configuration:

require("neotest").setup({
  adapters = {
    require("neotest-python")({
      dap = { justMyCode = false },
    }),
    require("neotest-go"),
    require("neotest-rspec")
  }
})

Bug: using the vim "change window" command whilst in the toggled output floating window causes crash

When viewing the output of a test result using:
require("neotest").output.open({ enter = true })
(all other settings for the plugin are default, so this opens in a floating window)

Sometimes, through muscle memory, I try to leave the floating window with a window-movement command (<C-w>h etc.) instead of pressing q. This often "crashes" the nvim instance. I'm unsure how to get more debugging info on this, so please give me some instructions if I can provide more information.

NVIM v0.7.2
Build type: Release
LuaJIT 2.1.0-beta3

Neotest git commit: b86e558

question mark appearing next to failed tests in summary window

I have a test file called test_mytest.py with the following contents:

    3 import unittest
    2
    1
✖ 4   class MyTest(unittest.TestCase):
✔   1     def test_my_test_case(self) -> None:
    2         self.assertTrue(True)
    3
✖   4     def test_my_test_case2(self) -> None:
E   5         self.assertTrue(True)     ■ Traceback (most recent call last):    File "/root/host/test/test_mytest.py", line 9, in test_my_test_case2      self.assertTrue(False)  AssertionError: False is not true

However, when I open the summary window with :lua require("neotest").summary.toggle() the failed test shows up with a question mark (?):

neotest-python
├─ ? .local
├─ ? .vim
╰╮ ? test
 ╰╮ ? test_mytest.py
  ╰╮ ? MyTest
   ├─ ✔ test_my_test_case
   ╰─ ? test_my_test_case2

I'm using the CaskaydiaCove NF font through Windows Terminal. I'm fairly sure this isn't a font issue since I tried CodeNewRoman NF and it was the same. Besides, the tick and cross both work fine in the code window.

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Missing package.json file.

A package.json file at the root of your project is required to release on npm.

Please follow the npm guideline to create a valid package.json file.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Summary mappings

Could you please provide a quick example of how to use the summary window mappings?

The help file lists a table of mappings but I don't know how to trigger/map them properly.

I've tried lua require("neotest").summary.run() but it doesn't work.
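For reference, a sketch of how I understand it: the mappings table in setup() binds keys inside the summary window itself; it does not expose functions to call from outside. The key names below (expand, run, output) are my assumption of the defaults:

require("neotest").setup({
  summary = {
    mappings = {
      -- pressed *inside* the summary window:
      expand = { "<CR>", "<2-LeftMouse>" }, -- expand/collapse the node under the cursor
      run = "r",                            -- run the position under the cursor
      output = "o",                         -- show output for the position under the cursor
    },
  },
})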

Ruby support

The plugin looks awesome! Thanks a lot 🔥

Do you have any plans to add Ruby/RSpec functionality? Just curious 🙂

Configure status icons

How do I configure the status icons in the sign column?

The running icon is broken on my setup and I can't replace it with anything. Which font has this symbol? I'm using Fira Code Nerd Font.
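A sketch of overriding the icons through setup(); the icons table and these key names are my best guess at the config surface, and any glyph the font renders (including plain ASCII) should work:

require("neotest").setup({
  icons = {
    passed = "✔",
    failed = "✖",
    running = "●", -- replace the broken glyph with one your font has
    skipped = "-",
    unknown = "?",
  },
})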

`Error executing luv callback` in custom Neotest plugin

I'm working on a Neotest plugin for Rust and I've gotten it to the point where it can run the first test fine, but on every subsequent test it fails with:

|| Error executing luv callback:
|| ...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:14: The coroutine failed with this message: vim/_editor.lua:0: E5560: nvim_echo must not be called in a lua loop callback
|| stack traceback:
|| 	[C]: in function 'error'
|| 	...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:14: in function 'callback_or_next'
|| 	...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:40: in function <...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:39>

Do you have any idea what might be causing that? I don't see the error if I comment out the file writing on lines 105-107, but without that the tests don't run. I've tried running directly with vim.loop.fs_* in sync and async modes:

    -- synchronous version:
    local fd = assert(vim.loop.fs_open(tmp_nextest_config, "a", 438))
    assert(vim.loop.fs_write(fd, '[profile.neotest.junit]\npath = "' .. junit_path .. '"'))
    assert(vim.loop.fs_close(fd))

    -- asynchronous version:
    vim.loop.fs_open(tmp_nextest_config, "a", 438, function(err, fd)
        assert(not err, err)
        vim.loop.fs_write(fd, '[profile.neotest.junit]\npath = "' .. junit_path .. '"', function(err, _)
            assert(not err, err)
            vim.loop.fs_close(fd, function(err)
                assert(not err, err)
            end)
        end)
    end)

and both produce the same error...
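For what it's worth, a workaround sketch: E5560 is raised when Vim API functions run inside a libuv callback (a "fast event" context), and error() from a failed assert ends up calling nvim_echo there. Deferring all error reporting to the main loop with vim.schedule avoids that; the paths below are placeholders mirroring the snippet above, and this is illustrative rather than a confirmed fix:

local uv = vim.loop
local tmp_nextest_config = "/tmp/nextest.toml" -- placeholder
local junit_path = "/tmp/junit.xml"            -- placeholder

local function fail(msg)
  -- error()/vim.notify() must not run in a fast-event context,
  -- so hop back onto the main loop first
  vim.schedule(function()
    vim.notify(msg, vim.log.levels.ERROR)
  end)
end

uv.fs_open(tmp_nextest_config, "a", 438, function(err, fd)
  if err then return fail("fs_open: " .. err) end
  local content = '[profile.neotest.junit]\npath = "' .. junit_path .. '"'
  uv.fs_write(fd, content, function(werr)
    if werr then return fail("fs_write: " .. werr) end
    uv.fs_close(fd, function(cerr)
      if cerr then fail("fs_close: " .. cerr) end
    end)
  end)
end)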

E5108: Error executing lua ...plenary/async/async.lua:14: The coroutine failed with this message: Vim:E117: Unknown function: test#test_file

Hi there, migrating from vim-ultest and I'm hitting the following error when calling lua require('neotest').run.run():

[screenshot]

Here's my current config:

    use {
      'nvim-neotest/neotest',
      requires = {
        'nvim-lua/plenary.nvim',
        'nvim-treesitter/nvim-treesitter',
        'antoinemadec/FixCursorHold.nvim',
        'nvim-neotest/neotest-vim-test',
      },
      config = function()
        require('neotest').setup {
          adapters = {
            require 'neotest-vim-test',
          },
        }
      end,
    }

Any idea what I'm missing?
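An educated guess rather than a confirmed diagnosis: Vim:E117 means the VimL function test#test_file is undefined, and that function comes from vim-test, which neotest-vim-test wraps but which is missing from the requires list above. A sketch with the dependency added:

    use {
      'nvim-neotest/neotest',
      requires = {
        'nvim-lua/plenary.nvim',
        'nvim-treesitter/nvim-treesitter',
        'antoinemadec/FixCursorHold.nvim',
        'vim-test/vim-test', -- provides test#test_file and friends
        'nvim-neotest/neotest-vim-test',
      },
      config = function()
        require('neotest').setup {
          adapters = {
            require 'neotest-vim-test',
          },
        }
      end,
    }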

Feature request: Show output for running, in-progress tests

The output window showing the test runner's output isn't available until the test has finished. Therefore we cannot see any output for a test that is still running (possibly for quite a long time): "No output for test_xxxxx".

It'd be great if there were a way to see progress, or some intermediate output streamed to the terminal, for tests that are still running.

Setting up virtual text for diagnostics

It wasn't super clear what needs to be done for this.

I tried to vim.diagnostics.config({ neotest = true }) but that didn't actually end up showing failures in the virtual text...
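If I understand the diagnostics integration correctly, neotest publishes through vim.diagnostic under its own namespace, so the configuration has to target that namespace rather than a neotest key (the namespace name "neotest" is my assumption):

-- scope the virtual_text setting to neotest's diagnostic namespace
local neotest_ns = vim.api.nvim_create_namespace("neotest")
vim.diagnostic.config({ virtual_text = true }, neotest_ns)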

vim-ultest test position jumps

Hi, just wondering whether it is possible to do something like this from vim-ultest?

nmap ]t <Plug>(ultest-next-fail)
nmap [t <Plug>(ultest-prev-fail)

Also, not sure what the highlight group is for the background of the floating window. Is it possible to make this partially transparent (winblend)?
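A sketch of equivalent mappings using neotest's jump consumer, assuming jump.next/jump.prev accept a status filter; the float should use the standard NormalFloat highlight, so a winblend on the window ought to apply:

vim.keymap.set("n", "]t", function()
  require("neotest").jump.next({ status = "failed" })
end, { desc = "Jump to next failed test" })

vim.keymap.set("n", "[t", function()
  require("neotest").jump.prev({ status = "failed" })
end, { desc = "Jump to previous failed test" })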

Incorrect tree hierarchy when a directory and file have the same name

When a file and a directory within the same directory have the same name (excluding the file's extension), neotest flattens the tree.

With this file tree:

src
├── mod_a
│   ├── mod_b
│   │   └── mod_c.rs
│   └── mod_b.rs
└── mod_a.rs

Summary window looks like this:

╰╮  src
 ├─  mod_a
 ├─  mod_a.rs
 ├─  mod_b
 ├─  mod_b.rs
 ╰─  mod_c.rs

When it should look like this:

╰╮ src
 ├╮ mod_a
 │├╮ mod_b
 ││╰─ mod_c.rs
 │╰─ mod_b.rs
 ╰─ mod_a.rs

The issue isn't just in the summary window, return values of :parent() nodes also have the same flattened structure.

Can't run tests on describe blocks, only single tests

In a JS test file, let's say I have a bunch of grouped tests

  describe('Test group', () => {
    test('test 1', () => {
      expect(1).toBeTruthy();
    });

    test('test 2', () => {
      expect(0).toBeTruthy();
    });
  });

So I can hover over test 1 and run it, no problem. Same for test 2.

However, if I want to run the whole describe block Test group, it does not work.

It puts the running indicator on the group and all subtests, but it never runs.

Attach says "no running process found" and output open says "no output for test group".

Same behaviour when running from the summary window.

Edit: using neotest-vim-test. This behaviour worked in ultest.

Feature request: support for Google Test

Hello!

First of all, thanks for all the work put into this plugin, it's great and I never want to run tests in a separate tmux pane ever again :)

On to the issue: I'm working on creating an adapter for the Google Test framework for C++. It is not supported by vim-test, as far as I know, due to architectural difficulties. There is a separate plugin that ports that support, but the functionality is limited.

The plugin that I'm writing can be found in my repo. It works, but many features are still WIP, so I decided to open this issue in case somebody else wants to work on this (which would be much appreciated!), and to bring up some issues and suggestions.

So far I've only discovered a single issue: test discovery breaks down when opening a file outside the current project.
If I'm working on /home/me/myproject/foo.cpp and go-to-definition takes me into /usr/include/something.h, the whole filesystem tree gets parsed, which breaks neovim in a number of ways, from going over the open-file ulimit to straight-up freezing while trying to parse every test file it finds. The discovery mechanic seems way too eager to discover :) Similar behavior has already been mentioned here, and is probably being worked on, but if I can help, I would love to.

Furthermore, if it's okay, I would also like to suggest a couple of minor improvements. If you think they are a good fit for the plugin, I think I can add them myself.

  1. Support canceling a run from build_spec.
    If an adapter returns nil from build_spec, an error happens. With google test, the adapter has to find the executable to run. Sometimes, that may require user input, and I would like to give the user an opportunity to cancel during that input (i.e., "enter path to the executable, empty to cancel"). Otherwise the user has to press <C-C> and see some errors they have nothing to do with.
  2. Consider supporting errors in a different file.
    Consider the following use case: a test runs a function in another file, and that file throws an error. Do we want to put a diagnostic at the line where the error happened? Currently the only way to report this would be to display such an error in the test's short summary. However, printing that error in a different file could result in hundreds of tests reporting the same error in that one file, so maybe it's best left as is.
  3. Keeping previous results would be helpful (in a text file somewhere).
    I think pytest does this best, creating a /tmp/pytest-of-username/pytest-run-<counter> directory. I implemented something similar myself for google test (code here); perhaps it would be generally useful? I sometimes check old test runs to see when it all went so wrong.
  4. Providing an interface for an adapter to store a persistent state to disk would be nice.
    Adapters may want some tiny state. Of course, they can store it themselves under stdpath('data'), but it would be nice to store all test states in a centralized fashion. My particular adapter wants to store a simple JSON associating test files to executables they are compiled into.

Finally, I need some guidance with parametrized tests: is there a definitive way to work with them? E.g., @pytest.mark.parametrize in pytest or TEST_P in Google Test. This is really multiple tests masquerading as one, and I'm not sure how to report them - should they be different nodes in the tree? Or should it be one test, with the adapter reporting that all errors happened in that one test whenever it's run?

Sorry for jamming all this into a single issue; if you think any of these should be worked on, I'll create separate ones.

Test reports using neotest-vim-test with rust give false positives

Running: lua require("neotest").run.run(vim.fn.expand("%")) on a rust source code file shows both false positives and false negatives.

A minimal test file is below. In this example both the function and the test get a "tick", although the silly_test test should fail.
The neotest output shows that no tests have been run; however, running with vim-test's TestFile correctly runs the tests and shows the failing test.

fn silly_function() {}

#[cfg(test)]
mod tests {

    #[test]
    fn silly_test() {
        assert!(false);
    }
}

Some issues

I've been really enjoying ultest, so I am very excited about neotest. Thanks for building such great software.

Here are a few things I've noticed on latest neotest:

  1. lua require("neotest").output.open({ enter = true }) after running a test does not seem to work

[screenshot]

  2. Running a test and then doing lua require("neotest").run.attach() will open the test output, but it immediately closes after the test finishes. How do I keep this window open?

  3. The status signs in the gutter aren't rendering properly and I don't know how to change them.

[screenshot]

The Readme says to do a :h neotest.status but all it says is

A consumer that displays the results of tests as signs beside their
declaration. This consumer is completely passive and so has no interface.

That doesn't tell me how to change the symbols.

Latest commit breaks python test discovery

The latest commit 05a700f breaks test discovery in python for me. Everything works fine with the commit before it. I'll test a bit to see whether it affects all tests or just specific strings, maybe even pytest decorators.

Performance problems on large repos

When I try to open the summary on a test file in a repo at work, nvim freezes for a very long time.
The issue is this line:
https://github.com/rcarriga/neotest/blob/aaf2107a0d639935032d60c49a5bdda2de26b4d6/lua/neotest/client/init.lua#L408
The find_files call actually completes within ~2 seconds, but then running adapter.is_test_file 250k+ times takes several minutes (vim-test adapter). After that freeze, there is a second freeze while it tries to write all the events to the logfile (it hasn't finished yet).

In my opinion, test discovery should definitely be optional, and possibly configurable. My previous job had a monorepo so big it was only served as a virtual filesystem, so any kind of crawling operation would have terrible side effects. Making discovery configurable (e.g. only these dirs, only this depth) might be nice, but would probably make more sense per-adapter than globally. If adapters need to control the search path, their API would have to change from testing individual files that neotest gives them to calling back into neotest with whatever their search config is. I don't know if that refactor is worth it, which is why I could go either way on making the search configurable.

Another direction could be to customize the search on a per-directory basis instead of per-adapter. That avoids the need for an adapter API refactor, and in general this kind of functionality would be incredibly useful. I often work with many different types of repos on the same machine, sometimes in the same vim instance (using :tcd and one tab per project), and these projects will sometimes use the same language but require different configurations to run tests. I'd love to be able to configure my test adapters on a per-project basis. A rough proposal, it could look like:

require('neotest').setup({
  adapters = ...,
  summary = ...,
  discovery = {
    enabled = true,
  },
  projects = {
    ["~/work/big"] = {
      adapters = ...,
      discovery = {
        enabled = true,
        dirs = {'tests/unit', 'frontend/tests'},
        depth = 2
      },
    },
    ["~/personal/project"] = {
      adapters = ...,
      discovery = {
        enabled = false,
      },
    },
  },
})

I am happy to submit a PR for any parts of this once we align on a solution

Unrelated question: I'll probably be making more proposals, requests, and questions. Is filing an issue the best way to start a discussion, or would you prefer some other channel?

No tests found

I've attempted to follow the instructions to install neotest for running python unittest tests:

call plug#begin('~/.config/nvim/plugged')
Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'antoinemadec/FixCursorHold.nvim'
Plug 'nvim-neotest/neotest'
Plug 'nvim-neotest/neotest-python'
call plug#end()

lua << EOF
require("neotest").setup({
  adapters = {
    require("neotest-python")({
        -- Extra arguments for nvim-dap configuration
        dap = { justMyCode = false },
        -- Command line arguments for runner
        -- Can also be a function to return dynamic values
        args = {"--log-level", "DEBUG"},
        -- Runner to use. Will use pytest if available by default.
        -- Can be a function to return dynamic value.
        runner = "unittest",

        -- Returns if a given file path is a test file.
        -- NB: This function is called a lot so don't perform any heavy tasks within it.
        is_test_file = function(file_path)
        end
    })
  }
})
EOF

Then I created this python test file:

import unittest


class MyTest(unittest.TestCase):
    def test_my_test_case(self) -> None:
        self.assertTrue(True)

Then I load vim, navigate to "my_test_case", and type :lua require("neotest").run.run(), and I just get "No tests found".
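A hunch rather than a confirmed diagnosis: the is_test_file override above returns nothing (i.e. nil) for every path, which would mark no file as a test file. Either drop the option to keep the adapter's default detection, or return an actual boolean; a sketch:

require("neotest-python")({
  runner = "unittest",
  is_test_file = function(file_path)
    -- treat test_*.py files as test files
    local name = vim.fn.fnamemodify(file_path, ":t")
    return vim.startswith(name, "test_") and vim.endswith(name, ".py")
  end,
})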

Re-run tests from custom strategy

Hi again! I recently polished up my task runner plugin enough for general release, and I've written a custom neotest strategy for it that allows neotest tests to run within a task. This is already working for simple cases (including for the streaming results!)

The one thing I haven't gotten working yet is the ability to restart a test run from a task. There are some affordances for "restart task", "restart task on buffer save", "restart task on failure", etc. and at the moment they re-run the test, but neotest doesn't get the new results. The issue is that the strategy results are pull-based, where the runner calls into it and expects to receive results. I have two ideas for how to rework this, and I'd love to get your opinion on whether either of them would be an appropriate change, or if you have a better idea.

  1. Pass the tree and args into the strategy. Since the tree and args are all that's needed by the client to kick off a new run, giving them to the strategy would allow it to repeat the same run. On the strategy side, I could then do some magic to detect when a task should be re-used instead of creating a new one. This would be a pretty minimal change for neotest, but one thing I'm not sure of is whether the tree parameter can become "stale" after mutations.
  2. Refactor neotest to make strategy results push-based. The runner would kick off the strategy, and then the strategy would be responsible for calling back into neotest to notify the client of new results. This would require more extensive changes within neotest and a refactoring of existing strategies.

Is it possible to transform names shown in test summary?

I am writing an adapter for dart tests and have an issue with test names - there can be valid names with any variation of single/double/triple quotes:

[screenshot]

Is it possible to transform the namespace/test names that are shown in the test summary? I would want to remove the surrounding quotes.

Here is a tree-sitter query:

  local query = [[
  ;; group blocks
  (expression_statement 
    (identifier) @group (#eq? @group "group")
    (selector (argument_part (arguments (argument (string_literal) @namespace.name )))))
    @namespace.definition

  ;; tests blocks
  (expression_statement 
    (identifier) @testFunc (#any-of? @testFunc "test" "testWidgets")
    (selector (argument_part (arguments (argument (string_literal) @test.name))))) 
    @test.definition
  ]]
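Not an official answer, but one possibility while waiting: post-process the captured name in the adapter before building positions. A tiny hypothetical helper:

-- hypothetical helper: strip the quotes that tree-sitter captures as part
-- of a string_literal ('name', "name", '''name''')
local function strip_quotes(name)
  return (name:gsub("^['\"]+", ""):gsub("['\"]+$", ""))
end

assert(strip_quotes([['''my test''']]) == "my test")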

feature request: Lazy load

I used to lazy load vim-ultest using keys and by cmd packer properties this way:

    use {
      "rcarriga/vim-ultest",
      opt = true,
      run = ":UpdateRemotePlugins",

---- LIKE THIS
      cmd = { "Ultest", "UltestNearest", "UltestSummary" },
      keys = {
        "<Plug>(ultest-run-nearest)",
        "<Plug>(ultest-run-file)",
        "<Plug>(ultest-summary-toggle)",
      },


      requires = {
        {
          "vim-test/vim-test",
          cmd = { "TestNearest", "TestFile" },
          opt = true,
        }
      },
    }

But I checked out the documentation and there's no <Plug>(mappingToRunTest) or :CommandToRunTest. Maybe we should add a few commands and a few mappings.

Is there another way to lazy load this plugin?
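Until dedicated commands/mappings exist, a sketch using packer's module option, which (if I read packer's docs right) loads the plugin the first time its module is require()d:

    use {
      "nvim-neotest/neotest",
      module = "neotest", -- lazy-load on require("neotest")
      requires = {
        "nvim-lua/plenary.nvim",
        "nvim-treesitter/nvim-treesitter",
        "antoinemadec/FixCursorHold.nvim",
      },
    }

    -- the first use of a mapping like this pulls the plugin in
    vim.keymap.set("n", "<leader>tn", function()
      require("neotest").run.run()
    end, { desc = "Run nearest test" })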

Capture and interpolate dynamic tests

I want to implement in neotest-jest the ability to capture and interpolate dynamic tests such as:

test.each([1, 2, 3])("case %u", () => {
  // implementation
});

This code should generate the following tests:

  • "case 1"
  • "case 2"
  • "case 3"

For this to work, I've created a query like:

((call_expression
  function: (call_expression
    function: (member_expression
      object: (identifier) @func_name (#any-of? @func_name "it" "test")
    )
    arguments: (arguments (_) @test.args)
  )
  arguments: (arguments (string (string_fragment) @test.name) (arrow_function))
)) @test.definition

where @test.args are the arguments passed to the each method.

To support this, I need to make some changes in neotest:

diff --git a/lua/neotest/lib/treesitter/init.lua b/lua/neotest/lib/treesitter/init.lua
index 26b7e8c..8b44f6a 100644
--- a/lua/neotest/lib/treesitter/init.lua
+++ b/lua/neotest/lib/treesitter/init.lua
@@ -40,11 +40,13 @@ local function collect(file_path, query, source, root)
       ---@type string
       local name = vim.treesitter.get_node_text(match[query.captures[type .. ".name"]], source)
       local definition = match[query.captures[type .. ".definition"]]
+      local args = match[query.captures[type .. ".args"]]
 
       nodes:push({
         type = type,
         path = file_path,
         name = name,
+        args = args and vim.treesitter.get_node_text(args, source) or nil,
         range = { definition:range() },
       })
     end

But even with this, I need a method to generate more tests based on these args in neotest.

Is this the best way or do you have something in mind for these cases?

Feature Request: List running processes

To stop a test process, you sometimes need to know its ID, especially when running multiple test processes and you want to stop a specific one.

It would be nice to have a list of currently running processes with their respective IDs.
Ideally, calling neotest.run.stop({ interactive = true }) or something similar would do the following:

  • If there is only one process, just stop it
  • If there are multiple processes, select which one to stop

Running last test

Is it possible to implement a feature to re-run the last test execution, similar to what exists for vim-test?
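In the meantime, a user-side sketch that wraps run.run and remembers its arguments (last_args is my own name, nothing from neotest):

local neotest = require("neotest")
local last_args -- arguments from the previous run

local function run(args)
  last_args = args
  neotest.run.run(args)
end

local function run_last()
  -- falls back to running the nearest test if nothing was run yet
  neotest.run.run(last_args)
end

vim.keymap.set("n", "<leader>tt", run, { desc = "Run nearest test" })
vim.keymap.set("n", "<leader>tl", run_last, { desc = "Re-run last test" })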

Feature Request - Cut the summary tree

Hi ✌

I'm still in the process of switching from vim-ultest, and I'm wondering if you could add options for the summary provider to produce a leaner output. Having the full directory tree is a bit too much for me; I already have a file tree on the left side, and open files are highlighted etc. The whole tree in the summary window also needs much more horizontal space. The test names/descriptions sometimes (depending on the project) start after 15 empty characters, leading to many lines being wrapped, or, if wrapping is disabled, most of the text being hidden.

I liked the old summary. But I see your aim to improve here with a new concept. Do you think there is a middle ground?

Thanks in advance!

Color on Linux vs Mac

Using the exact same init.lua on macOS and Arch Linux, I get different symbol colors.

On macOS, they're colored (green check mark, yellow running, red failed).
On Arch, it's just all white/light grey...

How can I get them to have the same behavior?
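A sketch for forcing consistent colors, assuming the signs use neotest's own highlight groups (NeotestPassed, NeotestRunning, NeotestFailed are my guess at the names); if the colorscheme on one machine doesn't define them, setting them explicitly should make both machines match:

vim.api.nvim_set_hl(0, "NeotestPassed", { fg = "#96f291" })
vim.api.nvim_set_hl(0, "NeotestRunning", { fg = "#ffec63" })
vim.api.nvim_set_hl(0, "NeotestFailed", { fg = "#f70067" })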

Can't get examples working

Can't get this working.
First I tried the latest stable version from the ubuntu ppa, 0.6.x.
Then I updated neovim from the unstable ppa to 0.8.0-dev, and none of the examples work, though they give slightly different output.

:lua require("neotest").run.run(vim.fn.expand("%"))

E5108: Error executing lua [string ":lua"]:1: attempt to index field 'run' (a nil value)
stack traceback:                                                                                                                                                   
        [string ":lua"]:1: in main chunk
Press ENTER or type command to continue

fzf floating window hangs/is slow to start after running Python tests

I'm not sure if this is a neotest or neotest-python issue, so I decided to open it here. I'm using fzf and fzf.vim with a GitFiles command:

command! -bang -nargs=? -complete=dir GitFiles
      \ call fzf#vim#files(FugitiveWorkTree(), fzf#vim#with_preview({'source': 'git ls-files || fd --type f'}), <bang>0)
nnoremap <C-p> :GitFiles<Cr>

After running neotest.run.run() or neotest.run.run(vim.fn.expand("%")), the fzf floating terminal window takes 10-15 seconds to populate after the window appears. Navigating the window also lags by seconds once it opens, but if you wait the lag disappears. I can reproduce this on any Python file in my work repository, but not a Rust file in the same repository.

LICENSE file

I noticed that this repo and all of the repos in this organization are missing LICENSE files. Can you add those?

Run All?

Is there a command to run the entire test suite of a project?
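Not documented on this page, but run.run() accepts a path, so passing the project root should run everything under it (using getcwd as the root is an assumption about your layout):

-- run every test found under the current working directory
require("neotest").run.run(vim.fn.getcwd())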

Text output render issue

I see some strange behavior in the test output popup. When I open it from the test file, it cuts off text that does not fit in the popup:

[screenshot]

But when I open it from the test summary split, it does not cut off text when I scroll:

[screenshot]

Feature request: Select subset of tests to execute

It would be cool if there were a way to mark tests for execution - for example via a toggle in the summary window - and then run them with a command like run.run_selected(). This would increase the flexibility of test execution.

No tests found C#

Hi,
I have used vim-ultest for a while but wanted to switch to neotest.
However, I am facing issues with it not detecting my tests.

Config is the following:

call plug#begin("~/.vim/plugged")

Plug 'vim-test/vim-test'

Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'antoinemadec/FixCursorHold.nvim'
Plug 'nvim-neotest/neotest'
Plug 'nvim-neotest/neotest-vim-test'

call plug#end()


lua << EOF
require("neotest").setup({
  adapters = {
    require("neotest-vim-test")
  },
})
EOF

I didn't add the ignore_file_types or allow_file_types options, as I only use it for C#.

When I run require("neotest").run.run() or require("neotest").run.run(vim.fn.expand("%")),
I get a cmd popup running through a lot of files (can't really tell, as it's going too fast) and then an output inside vim stating "No tests found".

I can run the tests from vim-test, however. I.e. I can call :TestNearest or :TestFile from vim-test and it works.
I have also run TSInstall c_sharp, so that can't be it either.

Edit: I am running Win10 and Nvim inside PowerShell.

Tests reporting as passed when they should fail

Hey thanks for this plugin, looks like it will be awesome.

I am using the vim-test adapter, and when running my jest tests, even failing tests show as passing when I run them with this plugin. If I run them with vim-test they correctly show as failing.

Below is my config

local status_ok, neotest = pcall(require, "neotest")

if not status_ok then
	return
end

neotest.setup({
	adapters = {
		require("neotest-vim-test")({
			ignore_file_types = { "python", "vim", "lua" },
		}),
	},
})

Here is a screenshot
[screenshot]

Thanks

Help setting up a Golang adapter

Hi @rcarriga,

Great work on this plugin 🚀, really excited to see how it evolves.

I've been trying to get it working with go and have been having some trouble, having read the explanation in the README and also dug through the code of this repo and the existing adapters.

I've got the project over here https://github.com/akinsho/neotest-go (happy to move it under a more general namespace if you intend to create a neotest org since I'm sure other people might want to contribute eventually, and I might stop working with go 🤷🏿)

I've added a subfolder (neogit_go) within the repo which contains two simple example tests, and I'm trying to get things working using those.

Reference: https://pkg.go.dev/testing

Issues

  • Detecting multiple test functions doesn't always work: sometimes it only works if there is one test in the file, and sometimes only if there are multiple
  • It's not entirely clear how results should be returned. I assumed a list of neotest.Result[], but tests still show as pending when I return this list, so something seems to be missing
  • How to handle test cases specified using a table, list, or struct (map), e.g. the snippet below
    (I'm guessing this case might not be possible, since it's not clear how to consistently determine what counts as a test case)
package add
import "testing"
func TestAdd(t *testing.T) {
    cases := []struct {
        desc     string
        a, b     int
        expected int
    }{
        {"TestBothZero", 0, 0, 0},
        {"TestBothPositive", 2, 5, 7},
        {"TestBothNegative", -2, -5, -7},
    }
    for _, tc := range cases {
        actual := add(tc.a, tc.b)
        if actual != tc.expected {
            t.Fatalf("%s: expected: %d got: %d for a: %d and b %d",
                tc.desc, actual, tc.expected, tc.a, tc.b)
        }
    }
}

Any advice or pointers would be much appreciated
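On the second bullet, an observation rather than an authoritative answer: as far as I can tell from the adapter interface, results() should return a table keyed by position id, not a list — returning a list would leave every position unmatched, which fits the "still pending" symptom. A sketch with an illustrative id format:

local Adapter = {}

---@return table<string, neotest.Result> -- keyed by position id, not a list
function Adapter.results(spec, result, tree)
  return {
    ["/abs/path/add_test.go::TestAdd"] = {
      status = "passed", -- "passed" | "failed" | "skipped"
      short = "TestAdd passed",
      errors = {},       -- e.g. { { message = "...", line = 12 } } on failure
    },
  }
end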

Feature request: Jest runner

Found you on reddit. Thanks a lot for working on this.

I'm a big fan of testing and debugging, too. I've played around with a similar plugin, but you take it to a different level.

I want to make a Jest runner and I'll probably need some help. Would you mind helping me along the way?

colorcolumn/line numbers tearing caused by test results

I use neotest with python and have noticed that when I run tests the colorcolumn gets moved to the right by one character on rows where the tick/cross is displayed. The line numbers also get messed up:

E.g. this test:

[screenshot]

Then after running :lua require("neotest").run.run({suite=true}):

[screenshot]

Feature Request - Status as virtual text like in Ultest

Hey 👋🏾

I'm a long-time vim-ultest user, starting to try out neotest and adapt everything to how it was before (mostly). One thing that is "missing" is the option to display the test status icons as virtual text instead of in the sign column. For me this works much better, as the sign column is already pretty overloaded. It would be amazing if you could add an option to this status consumer that toggles this behavior.

Thank you! 🙃

User manual problems

Hello,

The user manual has several problems:

  • pressing gO opens an empty location list
  • the manual does not follow the format as outlined in help-writing
  • missing topics, e.g. how to create adapters

It looks like you are generating the manual the same way Telescope does, which has the same problems.

vimL function must not be called in a lua loop callback

Hi,

I tried setting up this plugin for lua testing, but I'm not able to make it work. I'm getting this error:

Error executing luv callback:                                                                                                                                                                                                         
...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:14: The coroutine failed with this message: ...nux64/share/nvim/runtime/lua/vim/treesitter/language.lua:17: E5560: vimL function must not be called in a lua loop callback
stack traceback:
        [C]: in function 'error'
        ...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:14: in function 'callback_or_next'
        ...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:40: in function <...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:39>

Here's how I reproduce it:

Use this minimal.lua:

vim.cmd([[set runtimepath=$VIMRUNTIME]])
vim.cmd([[set packpath=/tmp/nvim/site]])

local package_root = "/tmp/nvim/site/pack"
local install_path = package_root .. "/packer/start/packer.nvim"

local function load_plugins()
  require("packer").startup({
    {
      "wbthomason/packer.nvim",
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
      "antoinemadec/FixCursorHold.nvim",
      "rcarriga/neotest",
      "rcarriga/neotest-plenary",
    },
    config = {
      package_root = package_root,
      compile_path = install_path .. "/plugin/packer_compiled.lua",
    },
  })
end

_G.load_config = function(is_initial)
  if is_initial then
    vim.cmd([[runtime plugin/nvim-treesitter.lua]])
    vim.cmd([[TSUpdateSync lua]])
  end

  require("neotest").setup({
    adapters = {
      require("neotest-plenary"),
    },
  })
end

if vim.fn.isdirectory(install_path) == 0 then
  vim.fn.system({ "git", "clone", "https://github.com/wbthomason/packer.nvim", install_path })
  load_plugins()
  require("packer").sync()
  vim.cmd([[autocmd User PackerCompileDone ++once lua load_config(true)]])
else
  load_plugins()
  load_config()
end

Steps to reproduce:

  • Clone neotest: git clone https://github.com/rcarriga/neotest && cd neotest
  • Open up the events_spec.lua file with the minimal config above: nvim -u /path/to/minimal.lua tests/unit/client/events_spec.lua
  • Try running tests for the whole file: :lua require("neotest").run.run(vim.fn.expand("%"))

Screenshot:

[screenshot]

System info:

  • OS: Manjaro Linux
  • Neovim version: Tried both v0.7.0 and latest nightly, behaves the same

`output.open_on_run` doesn't?

When I set output.open_on_run = true, I expected Neotest to open the output window after running a test, even if the test passes, but no window opened. Is this expected behavior?

This is my Neotest config:

local neotest = require("neotest")
neotest.setup({
    adapters = {
        require("neotest-python")({}),
    },
    output = {
        enabled = true,
        open_on_run = true,
    },
})
