nvim-neotest / neotest
An extensible framework for interacting with tests within NeoVim.
License: MIT License
Thanks for this plugin!
My terminal comes bundled with https://github.com/ryanoasis/powerline-extra-symbols and https://www.google.com/get/noto/help/emoji/ but it seems neither of those have all the font glyphs that this project uses...
Any recommendations on a font set that does include all the required glyphs?
When running
require("neotest").run.run({strategy = "dap"})
I keep getting
The selected configuration references adapter `nil`, but dap.adapters.nil is undefined
This is using neotest-go. I'm guessing this is because that adapter does not support DAP yet? How would I be able to tell if that were the case?
Or, in the situation where it does support DAP, I must have misconfigured something?
Here is my configuration:
require("neotest").setup({
adapters = {
require("neotest-python")({
dap = { justMyCode = false },
}),
require("neotest-go"),
require("neotest-rspec")
}
})
When viewing the output of a test result using:
require("neotest").output.open({ enter = true })
(all other settings for the plugin are default, so this opens in a floating window)
Sometimes, through muscle memory, I will try to leave the floating window using h etc., instead of pressing q. This will often "crash" the nvim instance. I'm unsure how to get more debugging info on this, so please give me some instructions if I can provide more information.
NVIM v0.7.2
Build type: Release
LuaJIT 2.1.0-beta3
Neotest git commit: b86e558
Are the examples supposed to work like this?
https://github.com/rcarriga/neotest/blob/6dede28d9a08157ed955b04278cddc4faa9d8708/doc/neotest.txt#L253
When I run them like this:
lua require("neotest").run.run(vim.fn.expand("%"))
I get this error:
Error executing lua [string ":lua"]:1: attempt to call field 'run' (a table value)
I have a test file called test_mytest.py
with the following contents:
   import unittest

✖  class MyTest(unittest.TestCase):
✔      def test_my_test_case(self) -> None:
           self.assertTrue(True)

✖      def test_my_test_case2(self) -> None:
E          self.assertTrue(False)
           ■ Traceback (most recent call last):
               File "/root/host/test/test_mytest.py", line 9, in test_my_test_case2
                 self.assertTrue(False)
             AssertionError: False is not true
However, when I open the summary window with :lua require("neotest").summary.toggle()
the failed test shows up with a question mark (?):
neotest-python
├─ ? .local
├─ ? .vim
╰╮ ? test
╰╮ ? test_mytest.py
╰╮ ? MyTest
├─ ✔ test_my_test_case
╰─ ? test_my_test_case2
I'm using the CaskaydiaCove NF
font through Windows Terminal. I'm fairly sure this isn't a font issue since I tried CodeNewRoman NF
and it was the same. Besides, the tick and cross both work fine in the code window.
The automated release from the master branch failed. 🚨 I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Missing package.json file
A package.json file at the root of your project is required to release on npm.
Please follow the npm guideline to create a valid package.json file.
Good luck with your project ✨
Your semantic-release bot 📦🚀
Could you please provide a quick example of how to use the summary window mappings?
The help file lists a table of mappings but I don't know how to trigger/map them properly.
I've tried lua require("neotest").summary.run() but it doesn't work.
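For what it's worth, the summary mappings are buffer-local keys rather than Lua functions: with the cursor on a node inside the summary window, pressing the mapped key triggers the action. A minimal sketch of how they are typically configured in setup (the key choices below are just examples):

```lua
require("neotest").setup({
  summary = {
    mappings = {
      -- pressed inside the summary window, on the node under the cursor
      run = "r",        -- run the test or group
      expand = "<CR>",  -- expand/collapse the node
      output = "o",     -- open the node's output
      attach = "a",     -- attach to the running process
    },
  },
})
```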
The plugin looks awesome! Thanks a lot 🔥
Do you have any plans to add Ruby/RSpec functionality? Just curious 🙂
How do I configure the status signs in the signcolumn?
The "running" status symbol is broken on my setup and I can't replace it with anything. Which font has this symbol? I'm using Fira Code Nerd Font.
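One way around missing glyphs is to redefine the signs with plain ASCII; a sketch, assuming the sign names below are what neotest registers by default (verify with :sign list in your setup):

```lua
-- redefine neotest's status signs with ASCII fallbacks;
-- sign names are an assumption -- check `:sign list` for the exact names
vim.fn.sign_define("neotest_passed",  { text = "P", texthl = "NeotestPassed" })
vim.fn.sign_define("neotest_failed",  { text = "F", texthl = "NeotestFailed" })
vim.fn.sign_define("neotest_running", { text = "~", texthl = "NeotestRunning" })
vim.fn.sign_define("neotest_skipped", { text = "S", texthl = "NeotestSkipped" })
```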
I'm working on a Neotest plugin for Rust and I've gotten it to the point where it can run the first test fine, but on every subsequent test it fails with:
|| Error executing luv callback:
|| ...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:14: The coroutine failed with this message: vim/_editor.lua:0: E5560: nvim_echo must not be called in a lua loop callback
|| stack traceback:
|| [C]: in function 'error'
|| ...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:14: in function 'callback_or_next'
|| ...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:40: in function <...re/nvim/plugged/plenary.nvim/lua/plenary/async/async.lua:39>
Do you have any idea what might be causing that? I don't see the error if I comment out the file writing on lines 105-107, but without that the tests don't run. I've tried running directly with vim.loop.fs_*
in sync and async modes:
local fd = assert(vim.loop.fs_open(tmp_nextest_config, "a", 438))
assert(vim.loop.fs_write(fd, '[profile.neotest.junit]\npath = "' .. junit_path .. '"'))
assert(vim.loop.fs_close(fd))
vim.loop.fs_open(tmp_nextest_config, "a", 438, function(err, fd)
  assert(not err, err)
  vim.loop.fs_write(fd, '[profile.neotest.junit]\npath = "' .. junit_path .. '"', function(err, _)
    assert(not err, err)
    vim.loop.fs_close(fd, function(err)
      assert(not err, err)
    end)
  end)
end)
and those both see the same error...
Hi there - migrating from vim-ultest, and I'm hitting the following error when calling lua require('neotest').run.run():
Here's my current config:
use {
'nvim-neotest/neotest',
requires = {
'nvim-lua/plenary.nvim',
'nvim-treesitter/nvim-treesitter',
'antoinemadec/FixCursorHold.nvim',
'nvim-neotest/neotest-vim-test',
},
config = function()
require('neotest').setup {
adapters = {
require 'neotest-vim-test',
},
}
end,
}
Any idea what I'm missing?
The output window that would show the output of the test runner won't be available until the test has completed running. Therefore we cannot see outputs for a test that is running (for quite a long time): "No output for test_xxxxx".
It'd be great if there is a way to see the progress, or some intermediate outputs streamed in the terminal for tests that are still running and in progress.
It wasn't super clear what needs to be done for this.
I tried to vim.diagnostics.config({ neotest = true })
but that didn't actually end up showing failures in the virtual text...
Hi, just wondering whether it is possible to do something like this from vim-ultest?
nmap ]t <Plug>(ultest-next-fail)
nmap [t <Plug>(ultest-prev-fail)
Also, not sure what the highlight group is for the background of the floating window. Is it possible to make this partially transparent (winblend)?
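A sketch of an equivalent for the ]t/[t mappings, under the assumption that the jump consumer with a status filter is available in your neotest version:

```lua
-- jump between failed tests, mirroring the vim-ultest <Plug> mappings;
-- the `status = "failed"` filter is an assumption -- see :h neotest.jump
vim.keymap.set("n", "]t", function()
  require("neotest").jump.next({ status = "failed" })
end, { desc = "Next failed test" })
vim.keymap.set("n", "[t", function()
  require("neotest").jump.prev({ status = "failed" })
end, { desc = "Previous failed test" })
```

As for the float: floating windows draw with the NormalFloat highlight group by default, and per-window transparency is controlled by the 'winblend' option.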
When a file and a directory within the same directory have the same name (excluding the file's extension), neotest flattens the tree.
With this file tree:
src
├── mod_a
│ ├── mod_b
│ │ └── mod_c.rs
│ └── mod_b.rs
└── mod_a.rs
Summary window looks like this:
╰╮ src
├─ mod_a
├─ mod_a.rs
├─ mod_b
├─ mod_b.rs
╰─ mod_c.rs
When it should look like this:
╰╮ src
├╮ mod_a
│├╮ mod_b
││╰─ mod_c.rs
│╰─ mod_b.rs
╰─ mod_a.rs
The issue isn't just in the summary window; the return values of :parent() on nodes also have the same flattened structure.
In a JS test file, let's say I have a bunch of grouped tests
describe('Test group', () => {
test('test 1', () => {
expect(1).toBeTruthy();
});
test('test 2', () => {
expect(0).toBeTruthy();
});
});
So I can hover over test 1 and run it, no problem. Same for test 2.
However, if I want to run the whole Test group describe block, it does not work.
It will put the running indicator on the group and all subtests, but it never runs.
Attach says "no running process found" and output open says "no output for test group".
Same behaviour when running from the summary window.
Edit: using neotest-vim-test. This behaviour worked in ultest.
Hello!
First of all, thanks for all the work put into this plugin, it's great and I never want to run tests in a separate tmux pane ever again :)
On to the issue: I'm working on creating an adapter for the Google Test framework for C++. It is not supported by vim-test, as far as I know, due to architectural difficulties. There is a separate plugin that ports that support, but the functionality is limited.
The plugin that I'm writing can be found in my repo. It works, but many features are still WIP, so I decided to open this issue in case somebody else wants to work on this (which would be much appreciated!), and to bring up some issues and suggestions.
So far I've only discovered a single issue: test discovery breaks down when opening a file outside the current project.
If I'm working on /home/me/myproject/foo.cpp
and go-to-definition in /usr/include/something.h
, the whole filesystem tree gets parsed, which breaks neovim in a number of ways, from going over ulimit
with open files to straight up freezing while trying to parse every test file it finds. The discovering mechanic seems way too eager to discover :) Similar behavior has already been mentioned here, and is probably being worked on, but if I can help, I would love to
Furthermore, if it's okay, I would also like to suggest a couple of minor improvements. If you think they are a good fit for the plugin, I think I can add them myself.
- Allow returning nil from build_spec: currently, when nil is returned from build_spec, an error happens. With Google Test, the adapter has to find the executable to run. Sometimes that may require user input, and I would like to give the user an opportunity to cancel during that input (i.e., "enter path to the executable, empty to cancel"). Otherwise the user has to press <C-C> and see some errors they have nothing to do with.
- A standard location for per-run artifacts: pytest does this best, creating a /tmp/pytest-of-username/pytest-run-<counter> directory. I implemented something similar myself for Google Test (code here); perhaps it would be generally useful? I sometimes check old test runs to see when it all went wrong.
- Centralized storage for adapter state: adapters can write to stdpath('data'), but it would be nice to store all test states in a centralized fashion. My particular adapter wants to store a simple JSON associating test files with the executables they are compiled into.

Finally, I need some guidance with parametrized tests: is there a definitive way to work with them? E.g., @pytest.mark.parametrize in pytest or TEST_P in Google Test. These are really multiple tests masquerading as one, and I'm not sure how to report them - should they be different nodes in the tree? Should it be one test, with the adapter reporting that all errors happened in that one test?
Sorry for jamming all this into a single issue, if you think any of these should be worked on, I'll create separate ones.
Running: lua require("neotest").run.run(vim.fn.expand("%"))
on a rust source code file shows both false positives and false negatives.
A minimal test file is below. In this example, both the function and the test produce a "tick" (the silly_test test should fail).
The neotest output shows that no tests have been run; however, running with vim-test's TestFile correctly runs the tests and shows the failed test.
fn silly_function() {}
#[cfg(test)]
mod tests {
#[test]
fn silly_test() {
assert!(false);
}
}
I've been really enjoying ultest, so I am very excited about neotest. Thanks for building such great software.
Here are a few things I've noticed on latest neotest:
- lua require("neotest").output.open({ enter = true }) after running a test does not seem to work.
- Running a test and then doing lua require("neotest").run.attach() will open the test output, but it will immediately close after the test finishes. How do I keep this window open?
The status signs in the gutter area aren't rendering properly and I don't know how to change them.
The README says to do :h neotest.status, but all it says is:
A consumer that displays the results of tests as signs beside their
declaration. This consumer is completely passive and so has no interface.
That doesn't tell me how to change the symbols.
The latest commit 05a700f breaks test discovery in python for me. Everything works fine with the commit before that. I'll test a bit if it affects all tests or just specific strings, maybe even pytest decorators.
When I try to open the summary on a test file in a repo at work, nvim freezes for a very long time.
The issue is this line:
https://github.com/rcarriga/neotest/blob/aaf2107a0d639935032d60c49a5bdda2de26b4d6/lua/neotest/client/init.lua#L408
The find_files actually completes within ~2 seconds, but then running adapter.is_test_file
250k+ times takes several minutes (vim-test adapter). After that freeze, there is a second freeze while it tries to write all the events to the logfile (hasn't finished yet).
My personal opinion: test discovery should definitely be optional, and possibly configurable. My previous job had a monorepo so big it was only served as a virtual filesystem, so running any type of crawling operation would have terrible side effects. Making it configurable (e.g. only these dirs, only this depth) might be nice, but would probably make more sense per-adapter instead of globally. If adapters need to control the search path, their API would have to change from testing individual files that neotest gives them to calling back into neotest with whatever their search config is. I don't know if that refactor is worth it, which is why I could go either way on making the search configurable.
Another direction could be to customize the search on a per-directory basis instead of per-adapter. That avoids the need for an adapter API refactor, and in general this kind of functionality would be incredibly useful. I often work with many different types of repos on the same machine, sometimes in the same vim instance (using :tcd
and one tab per project), and these projects will sometimes use the same language but require different configurations to run tests. I'd love to be able to configure my test adapters on a per-project basis. A rough proposal, it could look like:
require('neotest').setup({
adapters = ...,
summary = ...,
discovery = {
enabled = true,
},
projects = {
["~/work/big"] = {
adapters = ...,
discovery = {
enabled = true,
dirs = {'tests/unit', 'frontend/tests'},
depth = 2
},
},
["~/personal/project"] = {
adapters = ...,
discovery = {
enabled = false,
},
},
},
})
I am happy to submit a PR for any parts of this once we align on a solution
Unrelated question: I'll probably be making more proposals, requests, and questions. Is filing an issue the best way to start a discussion, or would you prefer some other channel?
I've attempted to follow the instructions to install neotest for running python unittest tests:
call plug#begin('~/.config/nvim/plugged')
Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'antoinemadec/FixCursorHold.nvim'
Plug 'nvim-neotest/neotest'
Plug 'nvim-neotest/neotest-python'
call plug#end()
lua << EOF
require("neotest").setup({
adapters = {
require("neotest-python")({
-- Extra arguments for nvim-dap configuration
dap = { justMyCode = false },
-- Command line arguments for runner
-- Can also be a function to return dynamic values
args = {"--log-level", "DEBUG"},
-- Runner to use. Will use pytest if available by default.
-- Can be a function to return dynamic value.
runner = "unittest",
-- Returns if a given file path is a test file.
-- NB: This function is called a lot so don't perform any heavy tasks within it.
is_test_file = function(file_path)
end
})
}
})
EOF
Then I created this python test file:
import unittest
class MyTest(unittest.TestCase):
def test_my_test_case(self) -> None:
self.assertTrue(True)
Then I load vim, navigate to "my_test_case" and type :lua require("neotest").run.run() and I just get "No tests found".
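One likely cause (an assumption on my part): the is_test_file override in the config above has an empty body, so it returns nil for every path and every file is treated as a non-test file. A minimal sketch of a working override for unittest-style file names:

```lua
require("neotest-python")({
  runner = "unittest",
  -- treat files named test_*.py as test files;
  -- alternatively, omit is_test_file entirely to use the adapter's default
  is_test_file = function(file_path)
    local name = vim.fn.fnamemodify(file_path, ":t")
    return vim.endswith(name, ".py") and vim.startswith(name, "test_")
  end,
})
```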
Is there a way to do this? I couldn't figure out a way from reading :h neotest.Config.summary.mappings.
Hi again! I recently polished up my task runner plugin enough for general release, and I've written a custom neotest strategy for it that allows neotest tests to run within a task. This is already working for simple cases (including for the streaming results!)
The one thing I haven't gotten working yet is the ability to restart a test run from a task. There are some affordances for "restart task", "restart task on buffer save", "restart task on failure", etc. and at the moment they re-run the test, but neotest doesn't get the new results. The issue is that the strategy results are pull based, where the runner calls into it and expects to receive results. I have two ideas for how to rework this, and I'd love to get your opinion on if either of them would be an appropriate change, or if you have a better idea.
I am writing an adapter for dart tests and have an issue with test names - there can be valid names with variations of single/double/triple quotes:
Is it possible to transform namespace/test names that are shown in the test summary? I would want to remove the surrounding quotes.
Here is a tree-sitter query:
local query = [[
;; group blocks
(expression_statement
(identifier) @group (#eq? @group "group")
(selector (argument_part (arguments (argument (string_literal) @namespace.name )))))
@namespace.definition
;; tests blocks
(expression_statement
(identifier) @testFunc (#any-of? @testFunc "test" "testWidgets")
(selector (argument_part (arguments (argument (string_literal) @test.name)))))
@test.definition
]]
I used to lazy-load vim-ultest using the keys and cmd packer properties this way:
use {
"rcarriga/vim-ultest",
opt = true,
run = ":UpdateRemotePlugins",
---- LIKE THIS
cmd = { "Ultest", "UltestNearest", "UltestSummary" },
keys = {
"<Plug>(ultest-run-nearest)",
"<Plug>(ultest-run-file)",
"<Plug>(ultest-summary-toggle)",
},
requires = {
{
"vim-test/vim-test",
cmd = { "TestNearest", "TestFile" },
opt = true,
}
},
}
But I checked the documentation and there's no <Plug>(mappingToRunTest) or :CommandToRunTest. Maybe we should add a few commands and a few mappings?
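In the meantime, thin wrappers around the Lua API are easy to define yourself, and packer can lazy-load on them via cmd/keys; a sketch (the command names here are made up):

```lua
-- hypothetical user commands wrapping the documented neotest Lua API
vim.api.nvim_create_user_command("NeotestNearest", function()
  require("neotest").run.run()
end, { desc = "Run nearest test" })
vim.api.nvim_create_user_command("NeotestFile", function()
  require("neotest").run.run(vim.fn.expand("%"))
end, { desc = "Run current file" })

vim.keymap.set("n", "<leader>tn", "<cmd>NeotestNearest<CR>")
vim.keymap.set("n", "<leader>tf", "<cmd>NeotestFile<CR>")
```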
I want to implement in neotest-jest the possibility to capture and interpolate dynamic tests as:
test.each([1, 2, 3])("case %u", () => {
// implementation
});
This code should generate the following tests:
"case 1"
"case 2"
"case 3"
For this to work, I've created a query like:
((call_expression
function: (call_expression
function: (member_expression
object: (identifier) @func_name (#any-of? @func_name "it" "test")
)
arguments: (arguments (_) @test.args)
)
arguments: (arguments (string (string_fragment) @test.name) (arrow_function))
)) @test.definition
where @test.args are the arguments passed to the each method.
For this to work, I need to make some changes in neotest:
diff --git a/lua/neotest/lib/treesitter/init.lua b/lua/neotest/lib/treesitter/init.lua
index 26b7e8c..8b44f6a 100644
--- a/lua/neotest/lib/treesitter/init.lua
+++ b/lua/neotest/lib/treesitter/init.lua
@@ -40,11 +40,13 @@ local function collect(file_path, query, source, root)
---@type string
local name = vim.treesitter.get_node_text(match[query.captures[type .. ".name"]], source)
local definition = match[query.captures[type .. ".definition"]]
+ local args = match[query.captures[type .. ".args"]]
nodes:push({
type = type,
path = file_path,
name = name,
+ args = args and vim.treesitter.get_node_text(args, source) or nil,
range = { definition:range() },
})
end
But even with this, I need a way for neotest to generate more tests based on these args.
Is this the best way, or do you have something in mind for these cases?
To stop a test process, you sometimes need to know its ID, especially when running multiple test processes and you want to stop a specific one.
It would be nice to have a list of currently running processes with their respective IDs.
Ideally, calling neotest.run.stop({ interactive = true }) or something similar would do the following:
Is it possible to implement a feature to run last test execution, similar to what exists for vim-test?
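If your neotest version doesn't expose a run-last function, a small wrapper can emulate vim-test's :TestLast; a sketch (every name below is hypothetical):

```lua
-- remember the arguments of the last run and replay them on demand;
-- wrap in a table so that a nil argument (nearest test) still counts as a run
local last_run

local function neotest_run(args)
  last_run = { args }
  require("neotest").run.run(args)
end

local function neotest_run_last()
  if last_run then
    require("neotest").run.run(last_run[1])
  else
    vim.notify("No previous test run", vim.log.levels.WARN)
  end
end
```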
How to reproduce:
:lua require("neotest").output.open()
Hi ✌
I'm still in the process of switching from vim-ultest, and I'm wondering if you could add options for the summary provider to produce a leaner output. Having the full directory tree is a bit too much for me: I already have a file tree on the left side, where open files are highlighted, etc. The whole tree in the summary window also needs much more horizontal space - the test names/descriptions sometimes (depending on the project) start after 15 empty characters, leading to many wrapped lines, or, if wrapping is disabled, most of the text being hidden.
I liked the old summary. But I see your aim to improve here with a new concept. Do you think there is a middle ground?
Thanks in advance!
Using the exact same init.lua
on macOS and Arch Linux, I have different symbols colors.
On macOS, they're colored (green check mark, yellow running, red failed).
On Arch, it's just all white/light grey...
How can I get them to have the same behavior?
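One possible explanation (an assumption): the symbols are colored via highlight groups such as NeotestPassed, which the colorscheme or terminal on the Arch machine may not define, especially without 'termguicolors'. A sketch of forcing the colors explicitly:

```lua
-- enable 24-bit color, then define the status highlights directly;
-- the group names are an assumption -- verify with :highlight
vim.opt.termguicolors = true
vim.api.nvim_set_hl(0, "NeotestPassed",  { fg = "#00af5f" })
vim.api.nvim_set_hl(0, "NeotestFailed",  { fg = "#d70000" })
vim.api.nvim_set_hl(0, "NeotestRunning", { fg = "#d7af00" })
```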
Can't get this working.
First I tried the latest stable version from the Ubuntu PPA, 0.6.x.
Then I updated neovim from the unstable PPA to 0.8.0-dev, and none of the examples work, just giving slightly different output:
:lua require("neotest").run.run(vim.fn.expand("%"))
E5108: Error executing lua [string ":lua"]:1: attempt to index field 'run' (a nil value)
stack traceback:
[string ":lua"]:1: in main chunk
Press ENTER or type command to continue
I'm not sure if this is a neotest or neotest-python issue, so I decided to open it here. I'm using fzf and fzf.vim with a GitFiles
command:
command! -bang -nargs=? -complete=dir GitFiles
\ call fzf#vim#files(FugitiveWorkTree(), fzf#vim#with_preview({'source': 'git ls-files || fd --type f'}), <bang>0)
nnoremap <C-p> :GitFiles<Cr>
After running neotest.run.run()
or neotest.run.run(vim.fn.expand("%"))
, the fzf floating terminal window takes 10-15 seconds to populate after the window appears. Navigating the window also lags by seconds once it opens, but if you wait the lag disappears. I can reproduce this on any Python file in my work repository, but not a Rust file in the same repository.
Is anyone working on this adapter?
I noticed that this repo and all of the repos in this organization are missing LICENSE
files. Can you add those?
Is there a command to run the entire test suite of a project?
It would be cool if there were a way to mark tests for execution - for example via a toggle in the summary window - and then run them with a command like run.run_selected(). This would increase the flexibility of test execution.
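The suite question has a simple answer if run.run accepts a directory path (it takes a position, file, or directory argument); marking is sketched below under the assumption that the summary consumer exposes mark/run_marked mappings in your version:

```lua
-- run every test under the project root
require("neotest").run.run(vim.fn.getcwd())

-- summary mappings for marking tests;
-- the "m"/"R"/"M" keys and option names are assumptions -- check :h neotest.summary
require("neotest").setup({
  summary = {
    mappings = {
      mark = "m",         -- toggle a mark on the node under the cursor
      run_marked = "R",   -- run all marked nodes
      clear_marked = "M", -- clear all marks
    },
  },
})
```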
Hi,
I used vim-ultest for a while but wanted to switch to neotest.
However, I am facing issues with it not detecting my tests.
Config is the following:
call plug#begin("~/.vim/plugged")
Plug 'vim-test/vim-test'
Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'antoinemadec/FixCursorHold.nvim'
Plug 'nvim-neotest/neotest'
Plug 'nvim-neotest/neotest-vim-test'
call plug#end()
lua << EOF
require("neotest").setup({
adapters = {
require("neotest-vim-test")
},
})
EOF
I didn't add the ignore_file_types or allow_file_types options as I only use it for C#.
When I run require("neotest").run.run() or require("neotest").run.run(vim.fn.expand("%")), I get a cmd popup running through a lot of files (I can't really tell, as it's going too fast) and then an output inside vim stating "No tests found".
I can run the tests from vim-test, however; i.e. :TestNearest and :TestFile both work.
I have also run TSInstall c_sharp, so that can't be it either.
Edit: I am running Win10 and Nvim inside PowerShell.
Hey thanks for this plugin, looks like it will be awesome.
I am using the vim-test adapter, and when running my jest tests, even failing tests show as passing when I run them with this plugin. If I run them with vim-test, they correctly show as failing.
Below is my config
local status_ok, neotest = pcall(require, "neotest")
if not status_ok then
return
end
neotest.setup({
adapters = {
require("neotest-vim-test")({
ignore_file_types = { "python", "vim", "lua" },
}),
},
})
Thanks
Hi @rcarriga,
Great work on this plugin 🚀, really excited to see how it evolves.
I've been trying to get it working with go
and have been having some trouble, having read the explanation in the README and also dug through the code of this repo and the existing adapters.
I've got the project over here https://github.com/akinsho/neotest-go (happy to move it under a more general namespace if you intend to create a neotest
org since I'm sure other people might want to contribute eventually, and I might stop working with go 🤷🏿)
I've added a sub folder (neogit_go) within the repo which adds two simple examples of tests and am trying to get things working using those.
Reference: https://pkg.go.dev/testing
I return a list of neotest.Result[], but tests seem to still show as pending when I return this list, so something seems to be missing.

package add

import "testing"
func TestAdd(t *testing.T) {
cases := []struct {
desc string
a, b int
expected int
}{
{"TestBothZero", 0, 0, 0},
{"TestBothPositive", 2, 5, 7},
{"TestBothNegative", -2, -5, -7},
}
for _, tc := range cases {
actual := add(tc.a, tc.b)
if actual != tc.expected {
t.Fatalf("%s: expected: %d got: %d for a: %d and b %d",
tc.desc, actual, tc.expected, tc.a, tc.b)
}
}
}
Any advice or pointers would be much appreciated
Found you on reddit. Thanks a lot for working on this.
I'm a big fan of testing and debugging, too. I've played around with similar plugin but you take it to a different level.
I want to make a Jest runner and I'll probably need some help. Would you mind helping me along the way?
I see the guide on writing adapters, and I may do the work myself. Step 1, for me, is to record the task :)
Hey 👋🏾
I'm a long-time vim-ultest user, starting to try out neotest and adapt everything to how it was before (mostly). One thing that is "missing" is the option to display the test status icons as virtual text instead of in the sign column. For me this works much better, as the sign column is already pretty overloaded. It would be amazing if you could add an option to the status consumer that toggles this behaviour.
Thank you! 🙃
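For reference, a sketch of what such a toggle could look like in setup (the virtual_text flag is an assumption about the status consumer's config; check :h neotest.Config for what your version supports):

```lua
require("neotest").setup({
  status = {
    enabled = true,
    signs = false,       -- drop the sign-column icons
    virtual_text = true, -- show status as virtual text instead
  },
})
```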
Hello,
The user manual has several problems:
- gO opens an empty location list
- it does not follow the conventions in :h help-writing

It looks like you are generating the manual the same way Telescope does, which has the same problems.
Hi,
I tried setting up this plugin for lua testing, but I'm not able to make it work. I'm getting this error:
Error executing luv callback:
...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:14: The coroutine failed with this message: ...nux64/share/nvim/runtime/lua/vim/treesitter/language.lua:17: E5560: vimL function must not be called in a lua loop callback
stack traceback:
[C]: in function 'error'
...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:14: in function 'callback_or_next'
...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:40: in function <...ck/packer/start/plenary.nvim/lua/plenary/async/async.lua:39>
Here's how I produce it:
Use this minimal.lua:
vim.cmd([[set runtimepath=$VIMRUNTIME]])
vim.cmd([[set packpath=/tmp/nvim/site]])
local package_root = "/tmp/nvim/site/pack"
local install_path = package_root .. "/packer/start/packer.nvim"
local function load_plugins()
require("packer").startup({
{
"wbthomason/packer.nvim",
"nvim-lua/plenary.nvim",
"nvim-treesitter/nvim-treesitter",
"antoinemadec/FixCursorHold.nvim",
"rcarriga/neotest",
"rcarriga/neotest-plenary",
},
config = {
package_root = package_root,
compile_path = install_path .. "/plugin/packer_compiled.lua",
},
})
end
_G.load_config = function(is_initial)
if is_initial then
vim.cmd([[runtime plugin/nvim-treesitter.lua]])
vim.cmd([[TSUpdateSync lua]])
end
require("neotest").setup({
adapters = {
require("neotest-plenary"),
},
})
end
if vim.fn.isdirectory(install_path) == 0 then
vim.fn.system({ "git", "clone", "https://github.com/wbthomason/packer.nvim", install_path })
load_plugins()
require("packer").sync()
vim.cmd([[autocmd User PackerCompileDone ++once lua load_config(true)]])
else
load_plugins()
load_config()
end
1. git clone https://github.com/rcarriga/neotest && cd neotest
2. Open the events_spec.lua file with the minimal config above: nvim -u /path/to/minimal.lua tests/unit/client/events_spec.lua
3. Run :lua require("neotest").run.run(vim.fn.expand("%"))
System info: v0.7.0 and the latest nightly; both behave the same.

When I set output.open_on_run = true, I expected Neotest to open the output window after running a test, even if the test passes, but no window opened. Is this expected behavior?
This is my Neotest config:
local neotest = require("neotest")
neotest.setup({
adapters = {
require("neotest-python")({}),
},
output = {
enabled = true,
open_on_run = true,
},
})