morganstanley / testplan

Testplan, a multi-testing framework, because unit tests can only go so far..

Home Page: http://testplan.readthedocs.io

License: Apache License 2.0

Python 81.64% Shell 0.08% CMake 0.19% C++ 0.45% HTML 0.03% JavaScript 17.50% CSS 0.01% PEG.js 0.04% Gherkin 0.05%
testplan testing testing-framework blackbox-testing integration-testing multitest

testplan's Introduction

Lifecycle: Active

a multi-testing framework

..because unit tests can only go so far..

Testplan is a Python package that can start a local live environment, set up mocks and connections to services, and run tests against them. It provides:

  • MultiTest, an extensive functional testing system with a rich set of assertions and report-rendering logic.
  • Built-in inheritable drivers to create a local live environment.
  • A configurable, diverse and extensible test execution mechanism, including parallel execution capability.
  • Test tagging for flexible filtering and selective execution, as well as generation of multiple reports (one for each tag combination).
  • Integration with other unit testing frameworks (like GTest).
  • Rich, unified reports (JSON/PDF/XML), with HTML/UI support coming soon.

Basic example

This is what a very basic Testplan application looks like.

import sys

from testplan import test_plan
from testplan.testing.multitest import MultiTest, testcase, testsuite

def multiply(numA, numB):
    return numA * numB


@testsuite
class BasicSuite(object):

    @testcase
    def basic_multiply(self, env, result):
        result.equal(multiply(2, 3), 6, description='Passing assertion')
        result.equal(multiply(2, 2), 5, description='Failing assertion')


@test_plan(name='Multiply')
def main(plan):
    test = MultiTest(name='MultiplyTest',
                     suites=[BasicSuite()])
    plan.add(test)


if __name__ == '__main__':
    sys.exit(not main())

Example execution:

$ python ./test_plan.py -v
        Passing assertion - Pass
          6 == 6
        Failing assertion - Fail
          File: .../test_plan.py
          Line: 18
          4 == 5
      [basic_multiply] -> Fail
    [BasicSuite] -> Fail
  [MultiplyTest] -> Fail
[Multiply] -> Fail

System integration testing example

Testing communication between a server and a client.

import sys

from testplan import test_plan
from testplan.testing.multitest import MultiTest, testsuite, testcase
from testplan.testing.multitest.driver.tcp import TCPServer, TCPClient
from testplan.common.utils.context import context


@testsuite
class TCPTestsuite(object):
    """Testsuite for server client connection testcases."""

    def setup(self, env):
        env.server.accept_connection()

    @testcase
    def send_and_receive_msg(self, env, result):
        """Basic send and receive hello message testcase."""
        msg = env.client.cfg.name
        result.log('Client is sending its name: {}'.format(msg))
        bytes_sent = env.client.send_text(msg)

        received = env.server.receive_text(size=bytes_sent)
        result.equal(received, msg, 'Server received client name')

        response = 'Hello {}'.format(received)
        result.log('Server is responding: {}'.format(response))
        bytes_sent = env.server.send_text(response)

        received = env.client.receive_text(size=bytes_sent)
        result.equal(received, response, 'Client received response')


@test_plan(name='TCPConnections')
def main(plan):
    test = MultiTest(name='TCPConnectionsTest',
                     suites=[TCPTestsuite()],
                     environment=[
                         TCPServer(name='server'),
                         TCPClient(name='client',
                                   host=context('server', '{{host}}'),
                                   port=context('server', '{{port}}'))])
    plan.add(test)


if __name__ == '__main__':
    sys.exit(not main())

Example execution:

$ python ./test_plan.py -v
        Client is sending its name: client
        Server received client name - Pass
          client == client
        Server is responding: Hello client
        Client received response - Pass
          Hello client == Hello client
      [send_and_receive_msg] -> Pass
    [TCPTestsuite] -> Pass
  [TCPConnectionsTest] -> Pass
[TCPConnections] -> Pass

A persistent and human-readable test evidence PDF report:

$ python ./test_plan.py --pdf report.pdf
  [TCPConnectionsTest] -> Pass
[TCPConnections] -> Pass
PDF generated at report.pdf


Documentation

For complete documentation, including downloadable examples, visit http://testplan.readthedocs.io.

Contribution

A step-by-step guide to contributing to the Testplan framework can be found in the project's contributing guide.

License

Testplan is released under the Apache License 2.0.

testplan's People

Contributors

attakay78, bingenito, canbascilms, cpages, cpagescognex, dcm, dependabot[bot], dobragab, husi, ja-louis, johnchiotis, kelliott55, kn-ms, lambchr, m6ai, mariam-abbas-ms, markokitonjics, nakjemmyms, nober97, pyifan, raoyitao, rnemes, ryan-collingham, ryanc414, tjisanams, yuxuan-ms, zhenyu-ms


testplan's Issues

Need better unit-test coverage

We have good coverage with our functional tests but our unit-testing is currently better in some areas than others. Ideally we should aim for as close to 100% code coverage as reasonably possible with unit tests alone, with a unit test corresponding to every non-trivial testplan module.

Interactive example in jupyter notebook is not up-to-date

How to reproduce: clean clone of master; pip install jupyterlab; execute the Basic/test_plan_notebook.ipynb in the interactive examples directory.

I suspect that, due to the recent interactive refactoring, the notebook has not been kept up to date. The plan.run() step doesn't detect interactive mode and instead says 'No tests were run', and plan.i.test raises an AttributeError.

Trying to change interactive=True to interactive_port=8082, for instance, then triggers another issue in the reload logic:

Traceback (most recent call last):
  File "/Users/frans/Documents/GitHub/testplan/testplan/common/entity/base.py", line 860, in run(<testplan.runnable.base.TestRunner object at 0x118d39fd0>)
    target=self, http_port=self.cfg.interactive_port)
  File "/Users/frans/Documents/GitHub/testplan/testplan/runnable/interactive/base.py", line 113, in __init__(
        <testplan.runnable.interactive.base.TestRunnerIHandler object at 0x10a33e828>,
        <testplan.runnable.base.TestRunner object at 0x118d39fd0>,
        10,
        8082)
    self._reloader = ModuleReloader(extra_deps=self.cfg.extra_deps)
  File "/Users/frans/Documents/GitHub/testplan/testplan/runnable/interactive/reloader.py", line 40, in __init__(<testplan.runnable.interactive.reloader.ModuleReloader object at 0x118d39f28>, [])
    self._build_dependencies(extra_deps))
  File "/Users/frans/Documents/GitHub/testplan/testplan/runnable/interactive/reloader.py", line 81, in _build_dependencies(<testplan.runnable.interactive.reloader.ModuleReloader object at 0x118d39f28>, [])
    main_module_file = sys.modules['__main__'].__file__
AttributeError: module '__main__' has no attribute '__file__'

install-testplan-ui fails

In master, the install-testplan-ui script is currently failing. The build is complaining about eslint-plugin-react not being available, which was recently moved into the devDependencies. I'm not quite sure how a lint plugin is being required for the production build, but the simplest fix would be to just move it back into the main dependencies.

log matcher support

Hi team,
I was a morganer before, I remembered there was a very useful feature called log matcher when I used it, but I could not find it in this version
Do you have plans to add it, or has it been changed to other equivalent stuff?

Thanks,
Tai

Interactive mode doesn't call after_start

When MultiTest's constructor is given an after_start callable, it is called only when the test is run from the command line, not when the environment is started via a click in the UI.
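
A minimal sketch of the setup being described, based on the README examples above (the suite class and the after_start signature are assumptions, not confirmed API details):

from testplan.testing.multitest import MultiTest
from testplan.testing.multitest.driver.tcp import TCPServer

# after_start receives the live environment once all drivers are up; the
# issue is that it should fire regardless of how the environment started.
test = MultiTest(
    name='ExampleTest',
    suites=[TCPTestsuite()],  # assumed defined as in the README example
    environment=[TCPServer(name='server')],
    after_start=lambda env: env.server.accept_connection(),
)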

Enable App driver to write to process stdin

Currently there is no way to interact with the stdin of subprocesses spawned by the App driver, without subclassing App and overriding the starting() method. It would be useful if App could support writing to stdin.

I think we can just set stdin=subprocess.PIPE by default to allow writing to stdin via the proc.stdin attribute.
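
A minimal sketch of the proposed behaviour using the standard library directly (the "cat" child process is purely illustrative):

import subprocess

# Spawning the child with stdin=subprocess.PIPE makes its stdin writable
# through proc.stdin -- the same mechanism the App driver could expose.
proc = subprocess.Popen(
    ['cat'],                    # illustrative child process
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate(input=b'hello\n')
assert out == b'hello\n'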

PyUnit test runner does not yet work with interactive mode

As expected, the PyUnit test runner does not yet work with the interactive test runner. It needs to implement both the dry_run() and run_testcases_iter() methods correctly to work with interactive mode.

Current behaviour: an exception is raised on setup. It looks like dry_run() is implemented on the PyUnit runner but is not returning the expected type:

Starting TestRunner[PyUnit] in interactive mode
Traceback (most recent call last):
  File "C:\Users\Ryan\Documents\code\testplan\testplan\common\entity\base.py", line 861, in run
    self._ihandler = self.cfg.interactive_handler(
  File "C:\Users\Ryan\Documents\code\testplan\testplan\runnable\interactive\base.py", line 61, in __init__
    self.report = self._initial_report()
  File "C:\Users\Ryan\Documents\code\testplan\testplan\runnable\interactive\base.py", line 675, in _initial_report
    test_report = test.dry_run().report
AttributeError: 'TestGroupReport' object has no attribute 'report'

Propagate sys.path down into process pool workers

Currently when process pool workers are started they have a completely different sys.path from the parent process. This is confusing for users, who might reasonably expect to be able to schedule modules that are already on their sys.path without explicitly specifying the path. Worse, if a task is scheduled from a module "tasks" it will actually try to find that task under testplan.runners.pools.tasks, since the directory of the child.py script is placed first on the child's sys.path!

As an enhancement we should try to replicate the parent's sys.path in the child at the point the child is spawned. We can do this simply by setting the PYTHONPATH environment variable before spawning the child process.
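
A minimal sketch of that enhancement, assuming a hypothetical child.py entry point:

import os
import subprocess
import sys

# Export the parent's sys.path as PYTHONPATH so that imports resolve the
# same way in the child process as they do in the parent.
env = dict(os.environ)
env['PYTHONPATH'] = os.pathsep.join(p for p in sys.path if p)
subprocess.Popen([sys.executable, 'child.py'], env=env)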

LogMatcher looks like it might be very slow with large log files.

Hi Ken,

I’m not quite sure why the tell/seek mechanism has been abandoned in favour of reading the entire file into memory, but I agree it seems quite inefficient for large log files. Possibly we can optimize this by porting the old mechanism over.

Feel free to raise an issue on the github repo for this. Not sure how soon we would get around to looking at it exactly, but if you want to change it yourself we’d appreciate a PR!

Thanks,
Ryan


I seem to remember that it was possible to push stuff into some process under test and kind of stay in lockstep by waiting for the process log to be updated. I think some of our sniffer tests did that.

Trying to set up something similar I noticed that this version of LogMatcher appears to re-read the entire log file each time a match is attempted. I see it keeps a line number so that the next match continues from where the last one left off, but the re-reading is expensive. A bit of ftell/fseek might save a lot of time and memory for large logs.

class LogMatcher(object):
    """
    Single line matcher for text files (usually log files). Once matched, it
    remembers the line number of the match and subsequent matches are scanned
    from the current line number. This can be useful when matched lines are not
    unique for the entire log file.
    """

    def __init__(self, log_path):
        """
        :param log_path: Path to the log file.
        :type log_path: ``str``
        """
        self.log_path = log_path
        self.line_no = 0

    def match(self, regex):
        """
        Matches each line in the log file from the current line number to the
        end of the file. If a match is found the line number is stored and the
        match is returned. If no match is found an Exception is raised.

        :param regex: compiled regular expression (``re.compile``)
        :type regex: ``re.Pattern``

        :return: The regex match or raise an Exception if no match is found.
        :rtype: ``re.Match``
        """
        with open(self.log_path, 'r') as log:
            lines = log.readlines()
            for line_no, line in enumerate(lines[self.line_no:], self.line_no):
                match = regex.match(line)
                if match:
                    self.line_no = line_no + 1
                    return match

        raise ValueError('No matches found')
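
For comparison, here is a rough sketch of the tell/seek approach suggested above - an illustration only, not the actual implementation:

class SeekingLogMatcher(object):
    """Sketch: remember a byte offset instead of a line number, so each
    match resumes where the previous one stopped instead of re-reading
    the whole file."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.position = 0  # byte offset of the first unscanned line

    def match(self, regex):
        with open(self.log_path, 'r') as log:
            log.seek(self.position)
            line = log.readline()  # readline() keeps tell() usable
            while line:
                found = regex.match(line)
                if found:
                    self.position = log.tell()  # resume here next time
                    return found
                line = log.readline()
        raise ValueError('No matches found')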

Interactive reload test is unstable

The interactive mode reload test is currently unstable and has been disabled. The following traceback is seen:

14:03:58 Traceback (most recent call last):
14:03:58   File "interactive_executable.py", line 67, in <module>
14:03:58     plan.i.reload()
14:03:58   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-custom-rhel6-python-3.4.2/ets/testplan/testplan/src/lib/testplan/oss/runnable/interactive/base.py", line 489, in reload
14:03:58     self._reloader.reload(tests, rebuild_dependencies=rebuild_dependencies)
14:03:58   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-custom-rhel6-python-3.4.2/ets/testplan/testplan/src/lib/testplan/oss/runnable/interactive/reloader.py", line 219, in reload
14:03:58     self._reload_deps(filepath, reloaded, suite_dict)
14:03:58   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-custom-rhel6-python-3.4.2/ets/testplan/testplan/src/lib/testplan/oss/runnable/interactive/reloader.py", line 194, in _reload_deps
14:03:58     reload_module(to_reload)
14:03:58   File "/ms/dist/python/PROJ/core/3.4.4/exec/lib/python3.4/importlib/__init__.py", line 149, in reload
14:03:58     methods.exec(module)
14:03:58   File "<frozen importlib._bootstrap>", line 1134, in exec
14:03:58 AttributeError: 'NoneType' object has no attribute 'name'

So far this has only been seen when running with python 3, but the test has been disabled entirely until we understand why it is failing better.

Report webpage is not printable

The print button on the toolbar is not very useful, because it only prints the current testcase.
We need a printable webpage for printing the whole report.

Test Reports to include comments and attachments

We are using JUnit test reporting, but it is missing the result.log() output used in test plans, and we are also unable to see any attachments.
What are some suggested reporting plugins that cover the above details (for Jenkins)?
Thanks.

test_timeout fails on python 3.4

In test_timeout we read the Testplan output JSON from a tempfile.NamedTemporaryFile() - by default this file is opened in binary mode (specifically mode "w+b"). This causes an issue with Python versions between 3.0 and 3.5, since the json library requires files to be opened in text mode - otherwise it throws an error:

TypeError: the JSON object must be str, not 'bytes'
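
A minimal sketch of the fix - open the tempfile in text mode so that json.load() receives str rather than bytes (the payload is illustrative):

import json
import tempfile

with tempfile.NamedTemporaryFile(mode='w+') as report_file:  # text mode, not 'w+b'
    json.dump({'status': 'passed'}, report_file)             # illustrative payload
    report_file.seek(0)
    report = json.load(report_file)                          # reads str, not bytes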

Add a single combined install script

Currently we have two main install scripts: the setup.py for the python dependencies, and the install_testplan_ui.py script for the npm dependencies. While it is useful to have flexibility to not install the UI if not required, it's a bit cumbersome to have to run two separate install scripts - it should be possible to install all of testplan with a single command.

Implement additional assertions for reporting

We are missing some report-focused assertions.
The assertion itself needs implementing, and then support in the GUI & PDF reports too.

  1. Markdown: accepts a snippet of markdown-formatted text that will be rendered in the output report
  2. Graph: used for plotting data sets from a test execution for visualization in the report. NB a contender is the JavaScript flot library, but let's consider alternatives
  3. Attach: attach a file, or all files in a directory, to the report.
  4. Codelog: log a code snippet in the report, with syntax highlighting appropriate for the language.

install-testplan-ui raises exception on MacOS Catalina

The virtualenv install on MacOS raises an exception:

  File "/Users/blah/projectName/test/functional/testplan-oss/lib/python3.8/site-packages/testplan/runners/pools/remote.py", line 668, in RemotePoolConfig
    default_hostname = socket.gethostbyname(socket.gethostname())
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

I fixed it by specifying "localhost" instead of the gethostname() method:

--- a/testplan/runners/pools/remote.py
+++ b/testplan/runners/pools/remote.py
@@ -665,7 +665,10 @@ class RemotePoolConfig(PoolConfig):
     resource entity.
     """

-    default_hostname = socket.gethostbyname(socket.gethostname())
+    if sys.platform == "darwin":
+        default_hostname = socket.gethostbyname("localhost")
+    else:
+        default_hostname = socket.gethostbyname(socket.gethostname())
     default_workspace_root = workspace_root()

     @classmethod

solution reference

Should I raise a PR or can you update the code? Thanks.

Add testplan to PyPI and npm to simplify install

Currently, users have to install testplan directly from the github archive (or git clone the source), and run a separate script to build the UI from source. It would be great if we could leverage the standard python/JS package distribution services and make the Testplan python package installable from PyPI via pip install testplan and the UI installable from npm via npm install -g testplan-ui.

Potentially the setup.py could automatically run the npm install command, making the entire install as simple as pip install testplan - though possibly it's better to leave the UI install as a separate step, so that users who don't want the UI can install just the Python Testplan framework.

Non-existent parameters to Testplan class do not cause error

I noticed that adding complete garbage parameters to the Testplan class constructor does not cause an error. E.g.:

import testplan
testplan.Testplan(
    name="MyPlan",
    foobarbaz=True)

I would expect foobarbaz to cause a TypeError, but it does not. This may cause confusion when a minor typo in a parameter leads to unexpected behaviour.

I haven't checked, but this problem likely isn't specific to the Testplan class itself and might be due to the common configuration validation. It should be set up to raise an exception if unrecognised parameters are passed.
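
A minimal sketch of the expected behaviour, with illustrative option names - not Testplan's actual validation code:

KNOWN_OPTIONS = {'name', 'description', 'runnable'}  # illustrative set

def validate_options(**options):
    """Raise instead of silently ignoring unrecognised parameters."""
    unknown = set(options) - KNOWN_OPTIONS
    if unknown:
        raise TypeError(
            'Unrecognised parameters: {}'.format(', '.join(sorted(unknown)))
        )
    return options

validate_options(name='MyPlan')                    # passes
# validate_options(name='MyPlan', foobarbaz=True)  # raises TypeError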

install testplan ui fails with not-very-clear error message

Description

Hi everyone, how are you doing?

How to install the UI is not very well documented (minimum requirements, for instance). It was relatively easy to find out that I needed yarn and pegjs, but even after that install-testplan-ui doesn't work as expected. I suspect I may just be running unsupported versions (on Debian 10.5 and RHEL 6).

Note that a version of testplan I downloaded on June 23, 2020 can be installed with the same node/yarn setup, but then I have another issue where some missing keys in dictmatch lead the JS to crash, and I wanted to try a more recent version to see whether it may have been solved before filing an issue for that one.

Steps to Reproduce

(venv) frans@nas:~$python3 -m venv venv
(venv) frans@nas:~$source venv/bin/activate
(venv) frans@nas:~$git clone https://github.com/Morgan-Stanley/testplan.git
(venv) frans@nas:~$sudo apt-get install npm
(venv) frans@nas:~$sudo npm install -g yarn

(venv) frans@nas:~$ node --version
v10.21.0

(venv) frans@nas:~$ install-testplan-ui
INFO:Installing Testplan UI dependencies...
INFO:Installing to path: /data/frans/home/venv/lib/python3.7/site-packages/testplan/web_ui/testing
INFO:Building Testplan UI...
Traceback (most recent call last):
  File "/data/frans/home/venv/bin/install-testplan-ui", line 117, in <module>
    install_and_build_ui(path=args.path, dev=args.dev, verbose=args.verbose)
  File "/data/frans/home/venv/bin/install-testplan-ui", line 110, in install_and_build_ui
    stderr=subprocess.STDOUT,
  File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'yarn run build' returned non-zero exit status 127.

(venv) frans@nas:~$sudo npm install -g pegjs
...
(venv) frans@nas:~$ install-testplan-ui
INFO:Installing Testplan UI dependencies...
INFO:Installing to path: /data/frans/home/venv/lib/python3.7/site-packages/testplan/web_ui/testing
INFO:Building Testplan UI...
Traceback (most recent call last):
  File "/data/frans/home/venv/bin/install-testplan-ui", line 117, in <module>
    install_and_build_ui(path=args.path, dev=args.dev, verbose=args.verbose)
  File "/data/frans/home/venv/bin/install-testplan-ui", line 110, in install_and_build_ui
    stderr=subprocess.STDOUT,
  File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'yarn run build' returned non-zero exit status 1.
(venv) frans@nas:~$ cd venv/lib/python3.7/site-packages/testplan/web_ui/testing/
(venv) frans@nas:~/venv/lib/python3.7/site-packages/testplan/web_ui/testing$
(venv) frans@nas:~/venv/lib/python3.7/site-packages/testplan/web_ui/testing$ /usr/local/bin/yarn run build
yarn run v1.22.10
$ pegjs --format es src/Parser/SearchFieldParser.pegjs
Module format must be one of "amd", "commonjs", "globals", and "umd".
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

(venv) frans@nas:~/venv/lib/python3.7/site-packages/testplan/web_ui/testing$ cat /etc/debian_version
10.5
(venv) frans@nas:~/venv/lib/python3.7/site-packages/testplan/web_ui/testing$ pegjs --version
PEG.js 0.10.0


Expected behavior:

Either a successful install, or an error message that clearly says my dependencies are wrong (or something along those lines).

Actual behavior:

An unclear error message.

Software

  • PEG.js: 0.10.0
  • Node.js: 10.21.0
  • NPM or Yarn: npm 5.8.0
  • Browser: N/A
  • OS: Linux debian 10.5
  • Editor: vi

Timeout test leaks threads/processes

The test_timeout functional testcase does not clean up all threads/processes after it finishes. In particular, process pool workers are still running and may try to communicate with a completely different Pool after they finish executing!

profiling my plan

Hi guys, thanks again for open-sourcing testplan.
Having written a testplan that starts a few processes and uses the TCP server etc., I notice some of my tests are slow. I tried to use the built-in profiler in IntelliJ, or cProfile, to find out where and why, and I noticed that the testplan is actually executed in a separate thread, which escapes the default profilers. For profiling purposes, I'd like to be able to avoid that and have a mode where I can run single-threaded. I couldn't find anything in the docs in that respect. Do you have a canonical way to do that? Would you be interested in a pull request?

FXConverter example is unstable

FXConverter example has started failing and blocking CI jobs. Looks like the driver is failing to start correctly:

While starting resource [converter]
14:43:20 E                 Traceback (most recent call last):
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/testplan/common/entity/base.py", line 143, in start(
14:43:20 E                         Environment[[('server', <testplan.testing.multitest.driver.tcp.server.TCPServer object at 0x2b73c5c6e590>), ('con...)
14:43:20 E                     resource.wait(resource.STATUS.STARTED)
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/testplan/common/entity/base.py", line 435, in wait(<driver.FXConverter object at 0x2b73c5c6e8d0>, 'STARTED', 3600)
14:43:20 E                     self._wait_handlers[target_status](timeout=timeout)
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/testplan/testing/multitest/driver/base.py", line 170, in _wait_started(<driver.FXConverter object at 0x2b73c5c6e8d0>, 3600)
14:43:20 E                     self.started_check(timeout=timeout)
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/examples/App/FXConverter/driver.py", line 25, in started_check(<driver.FXConverter object at 0x2b73c5c6e8d0>, 3600)
14:43:20 E                     super(FXConverter, self).started_check(timeout=timeout)
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/testplan/testing/multitest/driver/base.py", line 152, in started_check(<driver.FXConverter object at 0x2b73c5c6e8d0>, 3600)
14:43:20 E                     raise_on_timeout=True)
14:43:20 E                   File "/var/tmp/pjenkslv/jenkins/workspace/ets-testplan-continuous-rhel6-python-2.7.9/ets/testplan/testplan/src/oss/testplan/common/utils/timing.py", line 150, in wait(<function <lambda> at 0x2b73c5c990c8>, 60, 0.05, True)
14:43:20 E                     raise TimeoutException(msg)
14:43:20 E                 TimeoutException: Timeout after 60 seconds.
14:43:20 E                 Timed out starting FXConverter(converter): unmatched log_regexps in /var/tmp/pjenkbld/testplan/fxconverter/TestFXConverter/converter/stdout.
14:43:20 E                  log_regexps matched: []
14:43:20 E                 Unmatched: ["REGEX('Converter started.')", "REGEX('.*Listener on: (?P<listen_address>.*)')"]

RunnableManager is useless on its own

I would think that RunnableManager can take an object derived from Runnable and run it. However, that is far from true; RunnableManager is intended to be used like this:

  • Create a config class derived from RunnableManagerConfig, including ConfigOption('runnable'): is_subclass(Runnable)
  • Create your class derived from RunnableManager with config class of your config

Then this piece of code in RunnableManager creates your Runnable instance from the __init__ of RunnableManager (?!):

    def _initialize_runnable(self, **options):
        runnable_class = self._cfg.runnable
        runnable_config = dict(**options)
        return runnable_class(**runnable_config)

Meaning that your derived Runnable must have the same config options as RunnableManager (?!), including the runnable option, which is completely useless.

The initialization of the runnable happens in the init; I can't see why it is deferred and hidden. Instead, the RunnableManager API should look like this:

  runnable = YourRunnable(**args)
  FeasibleRunnableManager(runnable=runnable).run()

Which is not hard to achieve, but this should be the default implementation.

class FeasibleRunnableManagerConfig(RunnableManagerConfig):
    @classmethod
    def get_options(cls):
        return {
            ConfigOption('runnable'): lambda obj: isinstance(obj, Runnable),
        }


class FeasibleRunnableManager(RunnableManager):
    CONFIG = FeasibleRunnableManagerConfig

    def _initialize_runnable(self, **options):
        return self._cfg.runnable

LogMatcher is slow for matching large files

When matching moderately large log files (thousands of lines or more), the LogMatcher takes a very long time to match lines. This is because it sleeps for a fixed time between each match iteration. The fix is simple: only sleep when no line was read (i.e. when we are waiting for the application to produce a log line), not when there are unread lines still to process.
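
A sketch of the described fix, with illustrative names rather than the actual LogMatcher internals:

import time

def wait_for_match(log, regex, poll_interval=0.05, deadline=60.0):
    """Scan new lines as fast as they arrive; sleep only when idle."""
    end = time.time() + deadline
    while time.time() < end:
        line = log.readline()
        if not line:
            time.sleep(poll_interval)  # nothing new: wait for the app
            continue
        match = regex.match(line)      # a line was read: no sleep
        if match:
            return match
    raise RuntimeError('No match found before the deadline')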

roadmap and restapi test support

Hello, I would like to ask if you might provide a (feature) Roadmap.md. Also, do you plan to support OpenAPI endpoints with JSON payloads, or is it already possible and I didn't see it? We work on an Open Bank API project, and I am checking whether we could make use of your test framework in this context, too.

WebServerExporter should be able to display an existing JSON

The WebServerExporter.export method does two things: it exports the JSON and then starts a web server to display that JSON.

If these two things were separated into two functions, we could use WebServerExporter to display a JSON that already exists, either as a standalone app or in a custom test_plan. That would be extremely useful for viewing reports generated by CI.

Required changes are small, see master...dobragab:webserver-display
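
A generic illustration of the idea (plain standard library on Python 3.7+, not the actual Testplan exporter API): once display is decoupled from export, showing an existing report is just serving its directory:

import functools
import http.server

def display_existing_report(report_dir, port=8000):
    """Serve an already-generated report directory over HTTP."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=report_dir
    )
    http.server.HTTPServer(('localhost', port), handler).serve_forever()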

Hotfix - add additional type checks for binary test data serialization

A user got the large stacktrace at the end of this post while his test was trying to serialize binary data. The last three lines are instructive:

  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 90, in _binary_to_hex_list
    for b in bytearray(binary_obj)
TypeError: an integer is required

... which points us to testplan/report/testing/schemas.py:90:

class EntriesField(fields.Field):
    ...
    def _binary_to_hex_list(binary_obj):
        # make sure the hex repr is capitalized and leftpad'd with a zero
        # because '0x0C' is better than '0xc'.
        return [
            "0x{}".format(hex(b)[2:].upper().zfill(2))
            for b in bytearray(binary_obj)
        ]

Delving into what additional type check we should add here that's Python 2 & 3 compatible, we see that there are a number of different outputs for various inputs between the two versions:

| bytearray input | Python 2 output / error | Python 3 output / error |
| --- | --- | --- |
| None | TypeError: 'NoneType' object is not iterable | TypeError: 'NoneType' object is not iterable |
| True | bytearray(b'\x00') | bytearray(b'\x00') |
| False | bytearray(b'') | bytearray(b'') |
| 100 | bytearray(b'\x00\x00\x00\x00\x00\x0...') | bytearray(b'\x00\x00\x00\x00\x00\x0...') |
| 0 | bytearray(b'') | bytearray(b'') |
| -1000 | ValueError: negative count | ValueError: negative count |
| 1000.0 | TypeError: 'float' object is not iterable | TypeError: 'float' object is not iterable |
| 0.0 | TypeError: 'float' object is not iterable | TypeError: 'float' object is not iterable |
| -100.0 | TypeError: 'float' object is not iterable | TypeError: 'float' object is not iterable |
| '' | bytearray(b'') | TypeError: string argument without an encoding |
| b'' | bytearray(b'') | bytearray(b'') |
| u'' | TypeError: unicode argument without an encoding | TypeError: string argument without an encoding |
| 'a' | bytearray(b'a') | TypeError: string argument without an encoding |
| b'a' | bytearray(b'a') | bytearray(b'a') |
| u'a' | TypeError: unicode argument without an encoding | TypeError: string argument without an encoding |
| [] | bytearray(b'') | bytearray(b'') |
| {} | bytearray(b'') | bytearray(b'') |
| [ None ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ True ] | bytearray(b'\x01') | bytearray(b'\x01') |
| [ False ] | bytearray(b'\x00') | bytearray(b'\x00') |
| [ 100 ] | bytearray(b'd') | bytearray(b'd') |
| [ 0 ] | bytearray(b'\x00') | bytearray(b'\x00') |
| [ -1000 ] | ValueError: byte must be in range(0, 256) | ValueError: byte must be in range(0, 256) |
| [ 1000.0 ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ 0.0 ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ -100.0 ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ '' ] | ValueError: string must be of size 1 | TypeError: an integer is required |
| [ b'' ] | ValueError: string must be of size 1 | TypeError: an integer is required |
| [ u'' ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ 'a' ] | bytearray(b'a') | TypeError: an integer is required |
| [ b'a' ] | bytearray(b'a') | TypeError: an integer is required |
| [ u'a' ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ [] ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| [ {} ] | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { None: None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { True: None } | bytearray(b'\x01') | bytearray(b'\x01') |
| { False: None } | bytearray(b'\x00') | bytearray(b'\x00') |
| { 100: None } | bytearray(b'd') | bytearray(b'd') |
| { 0: None } | bytearray(b'\x00') | bytearray(b'\x00') |
| { -1000: None } | ValueError: byte must be in range(0, 256) | ValueError: byte must be in range(0, 256) |
| { 1000.0: None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { 0.0: None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { -100.0: None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { '': None } | ValueError: string must be of size 1 | TypeError: an integer is required |
| { b'': None } | ValueError: string must be of size 1 | TypeError: an integer is required |
| { u'': None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { 'a': None } | bytearray(b'a') | TypeError: an integer is required |
| { b'a': None } | bytearray(b'a') | TypeError: an integer is required |
| { u'a': None } | TypeError: an integer or string of size 1 is required | TypeError: an integer is required |
| { []: None } | TypeError: unhashable type: 'list' | TypeError: unhashable type: 'list' |
| { {}: None } | TypeError: unhashable type: 'dict' | TypeError: unhashable type: 'dict' |

This breaks down to:

  • no error for bool, List[bool], and Dict[bool, Any] types
  • no error for int types in the range 0 to 256, or List[int] /
    Dict[int, Any] where the ints are in that range
  • no error for the bytes type (which encompasses str in Python 2)
  • error for List[bytes] / Dict[bytes, Any]
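
One possible guard, sketched here as an illustration rather than the shipped fix, is to reject anything that is not already bytes-like before iterating:

def _binary_to_hex_list(binary_obj):
    # Only genuinely binary objects reach the hex renderer; everything
    # else in the matrix above fails fast with a clear message.
    if not isinstance(binary_obj, (bytes, bytearray)):
        raise TypeError(
            'expected a bytes-like object, got {}'.format(
                type(binary_obj).__name__
            )
        )
    # capitalized and zero-padded: '0x0C' is better than '0xc'
    return ['0x{:02X}'.format(b) for b in bytearray(binary_obj)]
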
Original stacktrace reported
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 145, in _serialize
    json.dumps(value, ensure_ascii=True)
  File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 119, in _render_unencodable_bytes_by_callable
    json.dumps(datacp, ensure_ascii=True)
  File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 119, in _render_unencodable_bytes_by_callable
    json.dumps(datacp, ensure_ascii=True)
  File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 119, in _render_unencodable_bytes_by_callable
    json.dumps(datacp, ensure_ascii=True)
  File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 119, in _render_unencodable_bytes_by_callable
    json.dumps(datacp, ensure_ascii=True)
  File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/testplan/common/exporters/__init__.py", line 22, in run_exporter
    exporter.export(source)
  File "/usr/local/lib/python3.7/site-packages/testplan_ms/exporters/testing/testdb.py", line 213, in export
    data = test_plan_schema.dump(source).data
  File "/usr/local/lib/python3.7/site-packages/marshmallow/schema.py", line 439, in dump
    **kwargs
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 147, in serialize
    index=(index if index_errors else None)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
    value = getter_func(data)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 141, in <lambda>
    getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 252, in serialize
    return self._serialize(value, attr, obj)
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in _serialize
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in <listcomp>
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 317, in _serialize
    nested_obj, many=False, update_fields=False
  File "/usr/local/lib/python3.7/site-packages/marshmallow/schema.py", line 439, in dump
    **kwargs
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 147, in serialize
    index=(index if index_errors else None)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
    value = getter_func(data)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 141, in <lambda>
  getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 252, in serialize
    return self._serialize(value, attr, obj)
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in _serialize
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in <listcomp>
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 317, in _serialize
    nested_obj, many=False, update_fields=False
  File "/usr/local/lib/python3.7/site-packages/marshmallow/schema.py", line 439, in dump
    **kwargs
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 147, in serialize
    index=(index if index_errors else None)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
    value = getter_func(data)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 141, in <lambda>
    getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 252, in serialize
    return self._serialize(value, attr, obj)
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in _serialize
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 304, in <listcomp>
    return [self._serialize(nobj, attr, obj) for nobj in nested_obj]
  File "/usr/local/lib/python3.7/site-packages/testplan/common/serialization/fields.py", line 317, in _serialize
    nested_obj, many=False, update_fields=False
  File "/usr/local/lib/python3.7/site-packages/marshmallow/schema.py", line 439, in dump
    **kwargs
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 147, in serialize
    index=(index if index_errors else None)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
    value = getter_func(data)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/marshalling.py", line 141, in <lambda>
    getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 252, in serialize
    return self._serialize(value, attr, obj)
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 570, in _serialize
    return [self.container._serialize(each, attr, obj) for each in value]
  File "/usr/local/lib/python3.7/site-packages/marshmallow/fields.py", line 570, in <listcomp>
    return [self.container._serialize(each, attr, obj) for each in value]
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 149, in _serialize
    data=value, binary_serializer=self._binary_to_hex_list
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 127, in _render_unencodable_bytes_by_callable
    recurse_lvl=(recurse_lvl + 1),
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 135, in _render_unencodable_bytes_by_callable
    recurse_lvl=(recurse_lvl + 1),
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 135, in _render_unencodable_bytes_by_callable
    recurse_lvl=(recurse_lvl + 1),
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 138, in _render_unencodable_bytes_by_callable
    return {self._BYTES_KEY: binary_serializer(datacp)}
  File "/usr/local/lib/python3.7/site-packages/testplan/report/testing/schemas.py", line 90, in _binary_to_hex_list
    for b in bytearray(binary_obj)
TypeError: an integer is required

Interactive mode broken

This example runs fine with python test_plan.py -v, but it fails in interactive mode (after enabling interactive mode in the testplan decorator, as it didn't work from the command line) with the following HTTP error:

Error response

Error code 501.
Message: Unsupported method ('GET').
Error code explanation: 501 = Server does not support this operation.

Logs:

WARNING: Command line argument for "file_log_level" will be overridden by the one programmatically defined in Testplan[972fd3a3-afd9-404d-840b-eee838cd1627] constructor
WARNING: Command line argument for "shuffle_seed" will be overridden by the one programmatically defined in Testplan[972fd3a3-afd9-404d-840b-eee838cd1627] constructor
WARNING: Command line argument for "interactive" will be overridden by the one programmatically defined in Testplan[972fd3a3-afd9-404d-840b-eee838cd1627] constructor
Starting TestRunner[Multiply] in interactive mode
Starting TestRunnerIHandler for TestRunner[Multiply]
TestRunnerHTTPHandler listening on: 127.0.0.1:58274
127.0.0.1 - - [03/Sep/2019 13:31:48] code 501, message Unsupported method ('GET')
127.0.0.1 - - [03/Sep/2019 13:31:48] "GET / HTTP/1.1" 501 -
127.0.0.1 - - [03/Sep/2019 13:31:48] code 501, message Unsupported method ('GET')
127.0.0.1 - - [03/Sep/2019 13:31:48] "GET /favicon.ico HTTP/1.1" 501 -

The same happens for other examples, even interactive.Basic example.

Better unicode support for python2

Many classes have a type schema that validates constructor params. Currently a lot of these validate strings against str, which in Python 2 means that unicode strings are not allowed. Most of the time there is no good reason to disallow unicode strings.

In Python 2 the basestring type can be used to check whether a string is either str or unicode. For Python 3, basestring can be used via from past.builtins import basestring.
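
A small compatibility shim along those lines (a sketch, not the project's actual validation code):

try:
    string_types = (basestring,)  # Python 2: covers both str and unicode
except NameError:
    string_types = (str,)         # Python 3: str is always unicode

def validate_name(value):
    if not isinstance(value, string_types):
        raise TypeError('name must be a string, got {!r}'.format(value))
    return value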

Primary devs are not mentioned in the doc

I was reading through the docs (really nice docs BTW) and noticed that the currently active committers are missing from the Authors section. Is it just that the docs are out of date?

Authors
The authors of Testplan framework are:

John Chiotis
Can Bascil
Christopher Lamb
Kevin Elliott
Testplan was originally developed by developers for developers to test software. It was created inside Morgan Stanley by a small dev team out of their own need to test internal C++ infrastructure libraries. It has since grown to become widely used for all manner of system testing needs, and we’re now very proud to see it released as open source.

We would like to make a special acknowledgment to Amaury C, Adam C, Francois V. and Oliver M. for their pioneering work and contribution!

npm audit reports security vulnerabilities and deprecated packages

Running npm audit reports a number of security vulnerabilities in dependencies of the web UI, and running npm install reports a number of packages are marked as deprecated. We should update dependencies to silence these warnings.

In particular, we are currently using a very old version of react-scripts (v1) and should upgrade to latest v3 if possible to fix most of the audit warnings.

FixServer misleading error message

In testplan/common/utils/sockets/fix/server.py:451 the error message states 'there are more connections active' even when there are zero active connections.
Ideally it should say 'more or no connections active', or better still, distinguish three cases: 0, 1, and more.
Happy to provide a pull request if you would like one :-)

How does interactive reload work?

I saw there's an interactive reloader class, with a corresponding test.

However, I couldn't make it work in a real world example. Expected:

  1. start the interactive environment
  2. run a test that fails a simple assertion like result.equal(5, 2 + 2)
  3. change the test code to fix it
  4. click some button on the interactive UI
  5. run the test again, now it passes

I tried adding a dummy testcase that calls plan.i.reload() as a substitute for step 4, but it didn't work.

install-testplan-ui works on Windows

The documentation states that install-testplan-ui can't be called on Windows if installed from an archive (zip).

However, it should be noted that it can actually be run, and will run fine if node and npm are installed, when invoked via its full path (which depends on the Python and package installation):

python "C:\Program Files\Python37\Scripts\install-testplan-ui"

install-testplan-ui installed in virtualenv fails to find the TESTPLAN_UI_DIR

When installing testplan's UI in a virtual environment using pip install <testplan url> and then attempting to run install-testplan-ui, TESTPLAN_UI_DIR evaluates to venvroot/bin/testplan/web_ui/testing or venvroot\scripts\testplan\web_ui\testing. Those paths don't exist when set up that way.

Running install-testplan-ui from the unzipped package source root directory does work, however.

How to add a testcase to a testsuite programmatically

Dear there,
Our test cases are created dynamically from external conditions and constraints, so it is difficult to code all of the testcase functions beforehand.

How can we create a testcase through an API, rather than using @testcase to annotate a member function? And how can we add a testcase to a testsuite programmatically?

Thanks for your attention.
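
In the meantime, one hypothetical workaround - assuming the @testcase and @testsuite decorators can be applied as plain functions to dynamically created callables, which is not a confirmed API:

from testplan.testing.multitest import testcase, testsuite

def make_case(case_name, value, expected):
    # Build a testcase method on the fly; each name must be unique.
    def case(self, env, result):
        result.equal(value * 2, expected, description=case_name)
    case.__name__ = case_name
    return testcase(case)  # assumption: decorator works as a plain call

attrs = {
    'double_of_{}'.format(i): make_case('double_of_{}'.format(i), i, i * 2)
    for i in range(3)
}
DynamicSuite = testsuite(type('DynamicSuite', (object,), attrs))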

Abstract classes should use the abc module

Testplan is structured to make heavy use of inheritance between abstract base classes and concrete subclasses. To ease the development effort of creating a new subclass, we should leverage the standard library abc module to mark classes as abstract via the ABCMeta metaclass and to mark particular methods as abstract via the abstractmethod decorator.

Not only will this make it clearer visually which classes and methods are abstract, it will also enable a runtime check to ensure that all abstract methods are overridden by subclasses, therefore making it less likely to hit a NotImplementedError later at runtime.
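
A standard-library sketch of the pattern, using illustrative class names:

import abc

class Driver(abc.ABC):  # abc.ABC is shorthand for metaclass=abc.ABCMeta
    @abc.abstractmethod
    def starting(self):
        """Start the resource."""

class TCPDriver(Driver):
    def starting(self):
        return 'started'

# Driver() raises TypeError at instantiation time, while TCPDriver()
# works because it overrides every abstract method.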

scikit-learn latest version requires python 3

When installing for Python 2:

RuntimeError: Scikit-learn requires Python 3.5 or later. The current Python version is 2.7.15 installed in /home/travis/virtualenv/python2.7.15/bin/python.

We need to restrict the version of scikit-learn to a version that supports Python 2. From their docs:

"Scikit-learn 0.20 was the last version to support Python2.7"

Also, scikit-learn is not a dependency of the testplan package - it is only needed for a couple of the downloadable examples. So we should move it out of setup.py and into requirements.txt.

Update schema to >=0.7.0

Currently we cannot update to the latest schema 0.7.0 because of an incompatible change: if a class or other callable is specified as a default value, schema will try to instantiate that class rather than just setting the class itself as the default. In several places we rely on classes or other callables being passed as default values. So that we can update to the latest schema, this would need to change.
