Test Junkie Documentation

This project is in alpha. If you want it to succeed, consider giving it a star on GitHub!

command line interface

Test Junkie comes with a powerful but extremely intuitive set of command line utilities that let you:
  1. Create test configurations which you can reuse and check in to your codebase
  2. Run tests using those configurations or set values explicitly via the terminal at runtime
  3. Audit & search through existing tests, which can be integrated with Git hooks or used to gain more insight into your current test coverage
In order to invoke Test Junkie CLI, type tj in your terminal.
Use tj -h to see the currently supported set of sub-commands. Each sub-command has its own help menu available. For example, to see what is available for the config command, type tj config -h.

You can use tj version to see which Test Junkie version and which Python interpreter is being used to process inputs.

The run command of the CLI allows you to execute tests without requiring any additional code.


There are many supported arguments that allow for flexible run configurations. Here are some examples:

  1. tj run - will run everything it can find in the --sources with the default, conservative settings.
  2. tj run -c 2FA OAuth -T 10 -S 2 - Tells Test Junkie to run up to 2 suites with no more than 10 tests in parallel and only run tests that cover the two components: 2FA & OAuth.
  3. tj run -q -m --html_report "/path/to/file.html" - will run everything it can find in the --sources with the default, conservative settings but will enable system resource monitoring, suppress all test output, and save test results in an HTML report.
  4. tj run --config "/path/to/your_config.cfg" - will run according to the settings defined in the custom config. Other settings can still be passed in explicitly even with --config; they will take precedence over what's defined in the config.


Commands presented above assume you have --sources saved in the config. See config section for more.
For a complete list of supported arguments: tj run -h.

The audit command of the CLI allows you to quickly scan an existing codebase and aggregate test data based on the audit query.


Here are some of the audits that you can do:

  1. Show me all tests broken down by individual qa/dev owner: tj audit owners
  2. Show me all tests that are not tagged: tj audit tags --no-tags
  3. Show me test owners for specific application component: tj audit owners --components 2FA OAuth
  4. Show me regression coverage for each application component: tj audit components
  5. Show me test suites that don't have owners defined: tj audit suites --no-owners


Commands presented above assume you have --sources saved in the config. See config section for more.
For a complete list of possible audit commands: tj audit -h.
All audit commands share arguments; for a complete list of supported arguments: tj audit suites -h

The main goal of the config command is to let you easily persist test configuration for later use with other commands like run & audit.


The config command exposes the following sub-commands:

  1. show - Displays the current configuration
  2. update - Allows you to update configuration settings
  3. restore - Allows you to restore configuration settings to their original state

show

tj config show --all will display all of the settings that are currently saved in the config. It will also tell you where the config is saved so you can copy it or make edits to the file directly.

update

Consider the following: tj run --sources /path/to/tests /another/path/to/tests. There is no need to pass the paths to the test directories/files every time the run command is used; save them in the config: tj config update --sources /path/to/tests /another/path/to/tests. Now use the run command, tj run, or the audit command, tj audit suites, and it will automatically run against the sources that were saved in the config.


You can always override config values by passing settings through the command line directly as well. For instance: tj run --sources /path/to/tests/suite.py will override the previously saved settings.

restore

tj config restore --all will restore all of the settings to their original values.
Settings can also be restored individually: tj config restore --sources --owners.

@suite

Test Junkie enforces a suite-based test architecture. Thus, all tests must be defined within a class and that class must be decorated with @Suite(). See Basic Usage for an example. This decorator supports many different properties which allow you to optimize the execution of your tests.

from test_junkie.decorators import Suite

@Suite()
class ExampleTestSuite:
    ...
property | usage example
meta | @Suite(meta=meta(name="Suite Name", known_bugs=[11111, 22222, 33333]))
retry | @Suite(retry=2)
skip | @Suite(skip=True)
listener | @Suite(listener=YourListenerClass)
rules | @Suite(rules=YourRulesClass)
parameters | @Suite(parameters=[{"user": "joe@example.com", "pass": "example"}, {...}, {...}])
parallel restriction | @Suite(pr=[TestSuiteObject])
parallelized | @Suite(parallelized=False)
priority | @Suite(priority=1)
feature | @Suite(feature="Demo")
owner | @Suite(owner="John Doe")

@beforeclass

Use this decorator to prioritize execution of a decorated function at the very beginning of a test @Suite().

If an exception is thrown within the decorated function, it will trigger the on_ignore event for all of the tests in scope of the current @Suite() and none of the tests will run.

from test_junkie.decorators import beforeClass

...
@beforeClass()
def before_class():
    ...

This decorator does not support any special properties. However, the decorated function can benefit from class level parametrization.

@beforetest

This decorator will prioritize execution of a decorated function before every @test() case in the @Suite().

If an exception is thrown within the decorated function, it will trigger either the on_failure or the on_error event for the test in scope.

from test_junkie.decorators import beforeTest

...
@beforeTest()
def before_test():
    ...

This decorator does not support any special properties. However, the decorated function can benefit from class level parametrization.

This decorator can be configured to be skipped by individual tests in the suite. See @test() for usage examples.

@test

This decorator tells Test Junkie that the decorated function is a test; all tests must be defined within a class. See Basic Usage for an example.

If the code in the decorated function produces an exception, it will trigger either the on_failure or the on_error event. Otherwise, if the test is successful, the on_success event will trigger.

from test_junkie.decorators import test

...
@test()
def example_test():
    ...

This decorator supports many different properties which allow you to optimize execution of your tests.

property | usage example
meta | @test(meta=meta(name="Test Name", known_bugs=[11111, 22222, 33333]))
retry | @test(retry=2)
skip | @test(skip=True)
parallelized_parameters | @test(parallelized_parameters=True)
parameters | @test(parameters=[{"user": "joe@example.com", "pass": "example"}, {...}, {...}])
parallel restriction | @test(pr=[TestSuiteObject.test_function_object])
parallelized | @test(parallelized=False)
priority | @test(priority=1)
tags | @test(tags=["critical", "pre-deploy", "post-deploy"])
component | @test(component="Authentication")
owner | @test(owner="John Doe")
retry_on | @test(retry_on=[AssertionError])
no_retry_on | @test(no_retry_on=[TimeoutError])
skip_before_test | @test(skip_before_test=True)
skip_before_test_rule | @test(skip_before_test_rule=True)
skip_after_test | @test(skip_after_test=True)
skip_after_test_rule | @test(skip_after_test_rule=True)

@aftertest

Use this decorator to de-prioritize execution of a decorated function for the very end of a @test() case.

If the code in the decorated function produces an exception, it will trigger either the on_failure or the on_error event.

@afterTest() runs even when @test() produces an exception. If @test() has already thrown an exception and @afterTest() produces one as well, you can access both tracebacks and exception objects in the event listener, but in the general console output only the @test() traceback will be displayed.

from test_junkie.decorators import afterTest

...
@afterTest()
def after_test():
    ...

This decorator does not support any special properties. However, the decorated function can benefit from class level parametrization.

This decorator can be configured to be skipped by individual tests in the suite. See @test() for usage examples.

@afterclass

Use this decorator to de-prioritize execution of a decorated function for the very end of a test suite. If suite retry or parameters are configured, @afterClass() will run for each retry/suite parameter.

If the code in the decorated function produces an exception, it won't have any effect on the status of the tests and it won't show up in the console output. However, depending on the exception thrown, either the on_after_class_failure or the on_after_class_error event will get triggered, so the ability to capture the exception is there if required.

from test_junkie.decorators import afterClass

...
@afterClass()
def after_class():
    ...

This decorator does not support any special properties. However, the decorated function can benefit from class level parametrization.

@grouprules

This decorator allows you to define rules for a particular set of test suites and it must be used from within a subclass of Rules.

Inside a function that is decorated with the @GroupRules() decorator, you can import test suites and define groups which you can then use to create @beforeGroup() & @afterGroup() rules. You can define as many rules as you like. For a more complete example, see the Rules section.

@beforegroup

As soon as any one of the suites in the group is about to start running its tests, this group rule will be executed. In the event that an exception is produced, all of the suites in the group will be ignored and they will not be retried.

@GroupRules()
def group_rules(self):

    from ... import ExampleSuite1
    from ... import ExampleSuite2
    @beforeGroup([ExampleSuite1, ExampleSuite2])
    def before_group_a():
        # your code
        pass

    from ... import ExampleSuite3
    from ... import ExampleSuite4
    @beforeGroup([ExampleSuite3, ExampleSuite4])
    def before_group_b():
        # your code
        pass

@aftergroup

As soon as all of the suites in the group are done running, this group rule will be executed. This rule applies only to suites that were used to initiate the Runner instance. Meaning, even if you have 20 suites but are running only 1 of them, and that 1 suite is part of an @afterGroup() rule that was configured for all 20 suites, the rule will still get executed upon completion of that 1 suite; the other 19 suites will never run, so the condition is considered satisfied.

If the code in the decorated function produces an exception, it won't have any effect on the status of the tests and it won't show up in the console output. However, depending on the exception thrown, either the on_after_group_failure or the on_after_group_error event will get triggered, so the ability to capture the exception is there if required.

@GroupRules()
def group_rules(self):

    from ... import ExampleSuite1
    from ... import ExampleSuite2
    @afterGroup([ExampleSuite1, ExampleSuite2])
    def after_group_a():
        # your code
        pass

    from ... import ExampleSuite3
    from ... import ExampleSuite4
    @afterGroup([ExampleSuite3, ExampleSuite4])
    def after_group_b():
        # your code
        pass

retry

With Test Junkie you can retry tests in a generic way at test and suite level. In addition, you can further control retries based on the type of exceptions!

Suite retries - once the initial execution of tests is completed, assuming there are unsuccessful tests and the suite retry is higher than 1, Test Junkie will re-run the suite up to the number of retries that you have set or until all of the tests pass (whichever comes first). With suite retries, @beforeClass() & @afterClass() will run again and test level retries will apply again.

Test retries - once the initial execution of a test is completed, assuming that the test was unsuccessful, it will be retried immediately up to the number of retries that you have set or until the test passes (whichever comes first).

from test_junkie.decorators import Suite, test

@Suite(retry=2)  # default is 1
class ExampleSuite:

    @test(retry=2)  # default is 1
    def example_test(self):
        # this test will run 4 times in total
        # suite_retry x test_retry = total test executions
        assert True is False
Every single test execution is logged in memory and can be viewed in the HTML report or you can access the raw data via the JSON report!

retry on specific exception

...
@test(retry=2, retry_on=[AssertionError])
def example_test(self):
    # will be retried
    assert True is False
...

no retry on specific exception

...
@test(retry=2, no_retry_on=[AssertionError])
def example_test(self):
    # won't be retried
    assert True is False
...

parametrization

Test Junkie has the best parametrization engine of any test runner. Parametrization is allowed at suite and test level.

Suite parametrization - In order to parametrize a suite, you need to use the parameters property of the @Suite() decorator. Parametrization at suite level can be configured to apply to all of the specially decorated functions, like:

  1. @beforeClass()
  2. @beforeTest()
  3. @test()
  4. @afterTest()
  5. @afterClass()
By default, none of those functions are parameterized. However, the moment you add suite_parameter to the function's signature, that function becomes parameterized.

@Suite(parameters=[1, 2])
class ExampleSuite:

    @beforeClass()
    def before_class(self, suite_parameter):
        print(suite_parameter)

    @beforeTest()
    def before_test(self, suite_parameter):
        print(suite_parameter)

Test parametrization - While you can certainly use suite level parameters in the test, you can also parameterize the test with its own set of parameters. In order to parametrize a test, you need to use the parameters property of the @test() decorator and add parameter to the function's signature.

Tests can run with test and suite level parameters at the same time, with no parameters at all, or with only suite or only test level parameters, as you see fit.

@Suite(parameters=[1, 2])
class ExampleSuite:

    @test(parameters=[10, 20])
    def example_test(self, parameter, suite_parameter):
        print(parameter)
        print(suite_parameter)

Inputs - for both @Suite() & @test(), the parameters property takes a list of any data types. However, it can also take a function or a method object, in which case the object will be called when the time comes to execute the suite. This, coupled with parallel execution, can potentially save you a lot of time when running test suites which have slow-running functions that generate parameters.
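
For example, a minimal sketch, assuming a hypothetical get_users() helper that builds the parameter list at runtime:

from test_junkie.decorators import Suite, test


def get_users():
    # hypothetical helper - could query a database or an API when the suite is executed
    return ["user1@example.com", "user2@example.com"]


@Suite(parameters=get_users)  # the function object itself is passed, it is not called here
class ExampleSuite:

    @test(parameters=get_users)
    def example_test(self, parameter, suite_parameter):
        print(parameter, suite_parameter)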

If you are using retries in a parameterized test, only parameters that the test did not pass with will be retried.

parallel execution

Test Junkie supports multi-threading out of the box; you can allocate N threads to be used as a thread pool for your suites and/or tests. Thread allocation is done through the run() method of the Runner instance.
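
For instance, a minimal sketch, assuming ExampleSuite1 & ExampleSuite2 are suites of your own:

from test_junkie.runner import Runner
from ... import ExampleSuite1
from ... import ExampleSuite2

runner = Runner([ExampleSuite1, ExampleSuite2])
# up to 2 suites and up to 5 tests will be running in parallel
runner.run(suite_multithreading_limit=2, test_multithreading_limit=5)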

enable parallelized parameters

In addition, parameterized tests can be executed in parallel, meaning that each parameter of that test can be running in parallel. This option is only available for test cases, not test suites. By default, this is turned off for every single test, but you can set the parallelized_parameters property of the @test() decorator to True in order to enable it.

...
@test(parallelized_parameters=True, parameters=[...])
def example_test(parameter):
    ...

disable parallelized mode

When running in multi-threaded mode, you have the option to disable it for specific suites and/or tests. Set the parallelized property of the @test() or @Suite() decorator to False and it will get executed sequentially with other similar suites/tests after all parallelized suites/tests have finished.

@Suite(parallelized=False)
class ExampleSuite:

    @test(parallelized=False)
    def example_test():
        ...

avoiding suite/test conflicts

It's not uncommon for tests to run into conflicts if executed at the same time; you can see a hypothetical example of it in the advanced example. Test Junkie allows you to configure specific suites/tests so that they never get executed at the same time by leveraging the pr (parallel restriction) property of the @test() or @Suite() decorator. It accepts either a list of class objects (for the suite decorator) or method objects (for the test decorator).

@Suite(pr=[TestSuiteA])
class TestSuiteB:

    @test(pr=[TestSuiteC.example_test])
    def example_test():
        ...
pr is bi-directional: if you set pr in TestSuiteB for TestSuiteA, then you don't need to set it in TestSuiteA.

rules

Rules allow for reusable logic that can be applied to @Suite() and/or group of suites, much like the Listeners.

In order to create rules, you need to create a new class that inherits from Rules. After that, override the desired functions and assign the newly created class to a suite of your choice using the @Suite() decorator (see the sketch after the example below).

from test_junkie.decorators import GroupRules, beforeGroup, afterGroup
from test_junkie.rules import Rules


class TestRules(Rules):

    def __init__(self, **kwargs):

        Rules.__init__(self, **kwargs)

    def before_class(self):
        """
        @beforeClass() event handling applies
        """
        # your code
        pass

    def before_test(self, **kwargs):
        """
        @beforeTest() event handling applies
        through kwargs you can access a copy of the TestObject in the current context
        kwargs.get("test")
        """
        # your code
        pass

    def after_test(self, **kwargs):
        """
        @afterTest() event handling applies
        through kwargs you can access a copy of the TestObject in the current context
        kwargs.get("test")
        """
        # your code
        pass

    def after_class(self):
        """
        @afterClass() event handling applies
        """
        # your code
        pass

    @GroupRules()
    def group_rules(self):

        from my.suites.examples.ExampleSuite1 import ExampleSuite1
        from my.suites.examples.ExampleSuite2 import ExampleSuite2
        @beforeGroup([ExampleSuite1, ExampleSuite2])
        def before_group_a():
            # your code
            pass
        @afterGroup([ExampleSuite1, ExampleSuite2])
        def after_group_a():
            # your code
            pass

        from my.suites.examples.ExampleSuite3 import ExampleSuite3
        from my.suites.examples.ExampleSuite4 import ExampleSuite4
        @beforeGroup([ExampleSuite3, ExampleSuite4])
        def before_group_b():
            # your code
            pass
        @afterGroup([ExampleSuite3, ExampleSuite4])
        def after_group_b():
            # your code
            pass
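
Once a rules class like this exists, assign it to a suite through the rules property of the @Suite() decorator. A minimal sketch, assuming a hypothetical ExampleSuite5:

from test_junkie.decorators import Suite

@Suite(rules=TestRules)
class ExampleSuite5:
    ...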

priority

Test Junkie allows you to influence execution priority for suites & tests. Note that I said influence. Priority cannot be guaranteed when running in threaded mode if any of your suites have the parallelized property set to False or have a parallel restriction configured. Test Junkie will always try to find a suite or a test to run in its current scope. So if it comes to a suite/test that is currently restricted and cannot be executed, it will move on without waiting for the condition to clear. It was implemented this way on purpose, to increase performance.

Priority is integer based; it starts at 1, which is the highest priority index, and goes up from there. To set priority, use the priority property of the @Suite() or the @test() decorator (see the sketch after the list below).

  1. Suites & Tests without priority and with parallelized execution disabled get de-prioritized the most
  2. Suites & Tests without priority and with parallelized execution enabled get prioritized higher
  3. Suites & Tests with priority get prioritized according to the priority that was set. However, they are always prioritized above those that do not have any priority
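
A minimal sketch of how priority can be declared, using hypothetical suite and test names:

from test_junkie.decorators import Suite, test


@Suite(priority=1)  # Test Junkie will try to schedule this suite ahead of suites with a higher index or no priority
class CriticalSuite:

    @test(priority=1)
    def most_important_test(self):
        pass

    @test(priority=2)
    def less_important_test(self):
        pass


@Suite(priority=2)
class SecondarySuite:
    ...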

listeners

Test Junkie allows you to define listeners, which let you execute code on specific suite/test events. Defining listeners is optional. This feature is typically useful when building large frameworks as it allows for seamless integration of reporting, post-processing of errors, calculation of test metrics, alerts, artifact collection etc.

In order to create a listener, create a new subclass of Listener. After that, override the desired functions (events) to use in your framework. Every event that you override must accept **kwargs in the function signature.

Listeners can be assigned to specific suites and are supported by the @Suite() decorator, much like the Rules.

from test_junkie.listener import Listener

class MyTestListener(Listener):

    def __init__(self, **kwargs):

        Listener.__init__(self, **kwargs)
    ...
Depending on the event, you will have access to:
  1. Suite & test meta if applicable
  2. Suite & test parameters if applicable
  3. Exception Object & Formatted Traceback - only for events that are triggered due to an exception
  4. SuiteObject - available in all of the events
  5. TestObject - available only for test related events
Those will be accessible through the function's kwargs.

on in progress

Event is triggered as soon as Test Junkie begins processing the @test(). If the test is parameterized, this event will be triggered for each of the parameters as they get executed.

...
def on_in_progress(self, **kwargs):
    # your code
    print(kwargs)
    ...

on success

Event is triggered after a test has executed successfully, which means @beforeTest() (if applicable), @test(), and @afterTest() (if applicable) have run without producing an exception.

...
def on_success(self, **kwargs):
    # your code
    print(kwargs)
    ...

on failure

Event is triggered after a test has produced an AssertionError. The AssertionError must be unhandled and thrown during code execution in functions decorated with either @beforeTest(), @test(), or @afterTest().

...
def on_failure(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on error

Event is triggered after a test has produced any exception other than AssertionError. The exception must be unhandled and thrown during code execution in functions decorated with either @beforeTest(), @test(), or @afterTest().

...
def on_error(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on ignore

Event is triggered when a function decorated with @beforeClass() produces an exception or when incorrect inputs are passed in for the @test() decorator properties.

...
def on_ignore(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on cancel

Event is triggered sometime after test execution has been canceled for an active suite(s). This event will fire for all of the remaining tests in the suite.

...
def on_cancel(self, **kwargs):
    # your code
    print(kwargs)
    ...

on skip

Event is triggered when tests are skipped. Skip is supported by the @test() decorator. Tests can also be skipped when regression is triggered for specific Features, Components, Tags, or Assignees.

...
def on_skip(self, **kwargs):
    # your code
    print(kwargs)
    ...

on complete

Event is triggered regardless of the outcome of the @test(). If the test is parameterized, this event will be triggered for each of the parameters as they get processed and the tests finish.

...
def on_complete(self, **kwargs):
    # your code
    print(kwargs)
    ...

on class in progress

Event is triggered as soon as Test Junkie starts processing a @Suite().

...
def on_class_in_progress(self, **kwargs):
    # your code
    print(kwargs)
    ...

on before class failure

Event is triggered only when a function decorated with @beforeClass() produces an AssertionError. It will also trigger on_ignore for all of the tests in scope of that suite.

...
def on_before_class_failure(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on before class error

Event is triggered only when a function decorated with @beforeClass() produces an exception other than AssertionError. It will also trigger on_ignore for all of the tests in scope of that suite.

...
def on_before_class_error(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on after class failure

Event is triggered only when a function decorated with @afterClass() produces an AssertionError.

...
def on_after_class_failure(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on after class error

Event is triggered only when a function decorated with @afterClass() produces an exception other than AssertionError.

...
def on_after_class_error(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on class skip

Event is triggered when suites are skipped. Skip is supported by the @Suite() decorator. Suites can also be skipped when regression is triggered for specific Features, Components, Tags, or Assignees.

...
def on_class_skip(self, **kwargs):
    # your code
    print(kwargs)
    ...

on class cancel

Event is triggered sometime after test execution has been canceled. This event will trigger for all of the test suites that were still in queue and not in progress at that time.

...
def on_class_cancel(self, **kwargs):
    # your code
    print(kwargs)
    ...

on class ignore

Event is triggered when Test Junkie detects bad arguments being used for @Suite() properties.

For example, if you pass in an empty parameters list, it does not make sense to run any tests in the suite because it's assumed that either the setup functions or the tests rely on those parameters. Since they are empty, the test scenarios will not be able to complete, thus Test Junkie will ignore the @Suite().

...
def on_class_ignore(self, **kwargs):
    # exception & traceback available
    # your code
    print(kwargs)
    ...

on class complete

Event is triggered when Test Junkie is done running all of the tests within the @Suite(); that includes retrying any failed tests and running all suite level parameters, if applicable. If the suite was skipped or canceled, this event won't trigger.

...
def on_class_complete(self, **kwargs):
    # your code
    print(kwargs)
    ...

on before group failure

Event is triggered when a @beforeGroup() rule produces an AssertionError.

...
def on_before_group_failure(self, **kwargs):
    # your code
    print(kwargs)
    ...

on before group error

Event is triggered when a @beforeGroup() rule produces an exception other than AssertionError.

...
def on_before_group_error(self, **kwargs):
    # your code
    print(kwargs)
    ...

on after group failure

Event is triggered when an @afterGroup() rule produces an AssertionError.

...
def on_after_group_failure(self, **kwargs):
    # your code
    print(kwargs)
    ...

on after group error

Event is triggered when an @afterGroup() rule produces an exception other than AssertionError.

...
def on_after_group_error(self, **kwargs):
    # your code
    print(kwargs)
    ...

meta

Meta has absolutely no effect on how Test Junkie runs the tests and can be of any data type. Meta is supported by the @Suite() & @test() decorators.

Why is this useful? You can use meta to set properties such as:

  1. Test name, suite name, description, expected results and anything else that can be useful in reports
  2. Test case IDs - if you have a test management system, leverage it to link test scripts directly to the test cases and further integrations can be implemented from there
  3. Bug ticket IDs - if you have a bug tracking system (like Jira), leverage it to link your test cases with issues that are already known and allow you to process failures in a different manner (as a known failure for instance).
Test Junkie can do a lot in terms of reporting, but no cookie-cutter tool will ever be better than a custom reporting solution built in-house by an experienced programmer. Thus, Test Junkie aims to aid in getting useful data for such reports.

from test_junkie.decorators import Suite, test
from test_junkie.meta import meta


@Suite(listener=MyTestListener, meta=meta(name="some value", id=123))
class ExampleSuite:

    @test(meta=meta(name="some value", expected="some value",
                    known_bugs=[111, 222, 333], id=321))
    def example_test(self):
        pass

accessing meta through listeners

Meta data that was set in the code above can be accessed through the Listeners like so:

from test_junkie.listener import Listener


class MyTestListener(Listener):

    def __init__(self, **kwargs):

        Listener.__init__(self, **kwargs)

    def on_success(self, **kwargs):

        class_meta = kwargs.get("properties").get("class_meta")
        test_meta = kwargs.get("properties").get("test_meta")

        print("Suite name: {name}".format(name=class_meta["name"]))
        print("Suite ID: {id}".format(id=class_meta["id"]))
        print("Test name: {name}".format(name=test_meta["name"]))
        print("Test ID: {id}".format(id=test_meta["id"]))
        print("Expected result: {expected}".format(expected=test_meta["expected"]))
        print("Known bugs: {bugs}".format(bugs=test_meta["known_bugs"]))

updating meta

Meta data can be updated and/or added from within a @test() using Meta.update(). Keep in mind, only @test() level meta can be updated; @Suite() level meta should never change. Meta.update() takes 3 arguments which are required in order to locate the correct TestObject:

  1. self - class instance of the current test
  2. parameter - if the test is parameterized with test level parameters. If the test does not have test level parameters, do not pass anything.
  3. suite_parameter - if the test is parameterized with suite level parameters. If the test does not have suite level parameters, do not pass anything.
Any other arguments that are passed in to Meta.update() will be pushed to the test's meta definition.

from test_junkie.meta import Meta
...
@test(parameters=[...])
def example_test(self, parameter, suite_parameter):
    # this particular test is running with test and suite level parameters, thus to update the meta...
    Meta.update(self, parameter=parameter, suite_parameter=suite_parameter,
                name="new value", expected="new value")
...
@test(parameters=[...])
def example_test(self, parameter):
    # this particular test is running only with test level parameters, thus to update the meta...
    Meta.update(self, parameter=parameter, name="new value", expected="new value")
...
@test()
def example_test(self):
    # this particular test has no parameters, thus to update the meta...
    Meta.update(self, name="new value", expected="new value")
...

assign tests

Have a large QA team that uses the same framework? Test Junkie allows you to assign suites and tests to specific members of the team. If you are using the HTML report, it will allow you to see performance metrics broken down by assignees. In addition, assignees can be accessed in the Listeners, which can be of use for custom reporting.

Both the @Suite() & @test() decorators support assignees. Using the @Suite() decorator to set the owner will set the assignee for all of the tests in scope of that suite. To override the assignee of any particular test, set the owner using the @test() decorator.
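
A minimal sketch, using hypothetical owners and test names:

from test_junkie.decorators import Suite, test


@Suite(owner="Jane Doe")  # Jane Doe becomes the default assignee for every test in this suite
class ExampleSuite:

    @test()
    def inherits_suite_owner(self):
        pass

    @test(owner="John Doe")  # overrides the suite level assignee for this test only
    def owned_by_john(self):
        pass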

categorize suites by features

Categorizing the suites by the features that those suites are testing not only allows you to see KPI metrics broken down by features when using the HTML Report, but also allows you to run feature-specific regression tests.

Features can be defined using the @Suite() decorator.
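
A minimal sketch, assuming a hypothetical Login feature:

from test_junkie.decorators import Suite


@Suite(feature="Login")
class LoginSuite:
    ...

# later, feature specific regression can be run with: runner.run(features=["Login"])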

categorize tests by components

Categorizing the tests by the components that those tests are covering not only allows you to see KPI metrics broken down by components when using the HTML Report, but also allows you to run component-specific regression tests.

Components can be defined using the @test() decorator.
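
A minimal sketch, assuming a hypothetical OAuth component:

from test_junkie.decorators import Suite, test


@Suite()
class AuthSuite:

    @test(component="OAuth")
    def validate_oauth_redirect(self):
        pass

# later, component specific regression can be run with: runner.run(components=["OAuth"])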

tags

Similar to the Features & Components, but tags have a one-to-many mapping concept. Meaning a single test can cover multiple areas of the platform, including more than one component and maybe even more than one feature. Ideally, tests should be as small as possible and test one tiny piece of functionality, but in the real world this is often not the case, especially if we are talking about end to end tests. So in those cases where a test covers multiple areas of the platform, you can use tags to tell Test Junkie and your peers what the test is covering at a glance.

Tags are supported only by the @test() decorator. Tags also allow you to see KPI metrics broken down by tags when using the HTML Report and to run tag-specific regression.

Using tag_config, which the run() method of the Runner instance can take, allows you to pick which tags to run tests for and which to skip, giving you a lot of flexibility.

tag config

  1. run_on_match_all - Will run test cases that match all of the tags in the list
  2. run_on_match_any - Will run test cases that match at least one tag in the list
  3. skip_on_match_all - Will skip test cases that match all of the tags in the list
  4. skip_on_match_any - Will skip test cases that match at least one tag in the list

All of the configs can be used at the same time. However, this is the order that will be honored: skip_on_match_all -> skip_on_match_any -> run_on_match_all -> run_on_match_any; whichever matches first will be executed or skipped. If something is skipped, it will trigger the on_skip event.

from test_junkie.runner import Runner

runner = Runner([ExampleSuite1, ExampleSuite2, ExampleSuite3])
runner.run(tag_config={"run_on_match_all": ["...", "..."],
                       "run_on_match_any": ["...", "..."],
                       "skip_on_match_all": ["...", "..."],
                       "skip_on_match_any": ["...", "..."]})

html report

Test Junkie is tracking a number of metrics during test execution and it can generate HTML reports based on those metrics. Here you can see a live demo.

In order to generate a report like that:

from test_junkie.runner import Runner
runner = Runner(suites=[...], html_report="/path/to/file/report.html", monitor_resources=True)
runner.run()

monitor_resources=True enables monitoring of the MEM & CPU usage during test execution which will be rendered in the HTML report as a trending graph.

xml report

Test Junkie can also produce basic, Jenkins-friendly XML reports.

from test_junkie.runner import Runner
runner = Runner(suites=[...], xml_report="/path/to/file/report.xml")
runner.run()

json report

JSON reports are used under the hood for all of the other reports produced by Test Junkie. You can use JSON reports to slice the data in the way that is meaningful to you.

JSON reports can be extracted from a number of objects, all of which will be accessible after the tests have finished executing:

from test_junkie.runner import Runner
runner = Runner(suites=[...])
aggregator = runner.run()

raw metrics

You can access raw metrics data from the SuiteObject and/or from the TestObject:

suite_objects = runner.get_executed_suites()
for suite in suite_objects:
    test_objects = suite.get_test_objects()
    print(suite.metrics.get_metrics())
    for test in test_objects:
        print(test.metrics.get_metrics())

reports

You can also access the reports that are created by Test Junkie in their JSON form via the Aggregator object.

print(aggregator.get_report_by_tags())
print(aggregator.get_report_by_features())
print(aggregator.get_basic_report())
print(aggregator.get_report_by_owner())

advanced test suite example

The following snippet shows how to leverage decorator options in order to optimize the execution of your tests.

All of the suites are shown as if they are in one file. This is just for demonstration purposes; in reality you should have them all in separate files.

from test_junkie.decorators import Suite, test, afterTest, beforeTest, beforeClass, afterClass
from test_junkie.meta import meta, Meta


@Suite(parameters=[{"login": "mike@example.com", "password": "example", "admin": True},
                   {"login": "sally@example.com", "password": "example", "admin": False}])
class LoginSuite:

    @beforeClass()
    def before_class(self, suite_parameter):  # yes, we just parameterized this function, seen that anywhere else?
        # Lets assume we have some code here to login with
        # username . . . suite_parameter["login"]
        # password . . . suite_parameter["password"]
        # This is our, hypothetical, pre-requirement before we run the tests
        # If this step were to fail, the tests would have been ignored
        pass

    @afterClass()
    def after_class(self):
        # Here, generally, we would have clean up logic.
        # For the sake of this example, lets assume we logout
        # from the account that we logged into during @beforeClass()
        # no `suite_parameter` in method signature,
        # logout process would likely be the same irrespective of the account
        pass

    @test(parameters=["page_url_1", "page_url_2", "page_url_3"])
    def validate_user_login_in_header(self, parameter, suite_parameter):
        # Lets assume that in this test case we are going to be validating
        # the header. We need to make sure that email that user logged in with
        # is displayed on every page so we will make this test parameterized.

        # By doing so we will know exactly which pages pass/fail without
        # writing any extra logic in the test itself to log all the failures
        # and complete testing all the pages which would be required if you
        # were to use a loop inside the test case for instance.

        # Now we would use something like Webdriver to open the parameter in order to land on the page
        # and assert that suite_parameter["login"] is in the expected place
        pass

    @test(parameters=["admin_page_url_1", "admin_page_url_2"])
    def validate_access_rights(self, parameter, suite_parameter):
        # Similar to the above test case, but instead we are validating
        # access right privileges for different user groups.
        # Using the same principle with the parameterized test approach.

        # Now we would also use Webdriver to open the parameter in order to land on the page
        # and assert that the page is accessible if suite_parameter["admin"] is True
        pass


@Suite(pr=[LoginSuite],
       parameters=[{"login": "mike@example.com", "password": "example", "user_id": 1},
                   {"login": "sally@example.com", "password": "example", "user_id": 2}])
class EditAccountCredentialsSuite:
    """
    It is risky to run this suite with the LoginSuite above because if
    the suites happen to run in parallel and credentials get updated
    it can cause the LoginSuite to fail during the login process.

    Therefore, we are going to restrict this suite using the `pr` property. This will ensure that
    LoginSuite and EditAccountCredentialsSuite will never run in parallel, thus removing any risk
    when you run Test Junkie in multi-threaded mode.
    """

    @test(priority=1, retry=2)  # this test, in case of failure, will be retried twice
    def reset_password(self, suite_parameter):  # this test is now parameterized with parameters from the suite
        # Lets assume in this test we will be resetting password of the
        # username . . . suite_parameter["login"]
        # and then validate that the hash value gets updated in the database

        # We will need to know the login when submitting the password reset request, thus we need to make sure that
        # we don't run this test in parallel with the edit_login() test below.
        # We will use decorator properties to prioritize this test over anything else in this suite
        # which means it will get kicked off first and then we will disable parallelized mode for the
        # edit_login() test so it will have to wait for this test to finish.
        pass

    @test(parallelized=False, meta=meta(expected="Able to change account login"))
    def edit_login(self, suite_parameter):
        # Lets assume in this test we will be changing login for . . . suite_parameter["login"]
        # with the current number of tests and settings, this test will run last

        Meta.update(self, suite_parameter=suite_parameter, name="Edit Login: {}".format(suite_parameter["login"]))
        # Making this call, gives you option to update meta from within the test case
        # make sure, when making this call, you did not override suite_parameter with a different value
        # or update any of its content

    @afterClass()
    def after_class(self, suite_parameter):
        # Will reset everything back to default values for the
        # user . . . suite_parameter["user_id"]
        # and we know the original value based on suite_parameter["login"]
        # This will ensure other suites that are using the same credentials won't be at risk
        pass


@Suite(listener=MyTestListener,  # This will assign a dedicated listener that you created
       retry=2,  # Suite will run up to 2 times but only for tests that did not pass
       owner="Chuck Norris",  # defines the owner of this suite, has effects on the reporting
       feature="Analytics",  # defines a feature that is being tested by the tests in this suite,
                             # has effects on the reporting and can be used by the Runner
                             # to run regression only for this feature
       meta=meta(name="Example",  # sets meta, most useful for custom reporting, accessible in MyTestListener
                 known_failures_ticket_ids=[1, 2, 3]))  # can be used to reference bug tickets, for instance, in your reporting
class ExampleTestSuite:

    @beforeTest()
    def before_test(self):
        pass

    @afterTest()
    def after_test(self):
        pass

    @test(component="Datatable",  # defines the component that this test is validating,
                                  # has effects on the reporting and can be used by the Runner
                                  # to run regression only for this component
          tags=["table", "critical", "ui"],  # defines the tags for this test,
                                             # has effects on the reporting and can be used by the Runner
                                             # to run regression only for specific tags
          )
    def something_to_test1(self):
        pass

    @test(skip_before_test=True,  # if you don't want to run before_test for a specific test in the suite, no problem
          skip_after_test=True)  # also no problem, you are welcome!
    def something_to_test2(self):
        pass

running tests

This section shows examples on how to programmatically run tests with different configurations.

running in threaded mode

from test_junkie.runner import Runner
from ... import ExampleTestSuite1
from ... import ExampleTestSuite2
from ... import ExampleTestSuite3

runner = Runner([ExampleTestSuite1, ExampleTestSuite2, ExampleTestSuite3])
runner.run(suite_multithreading_limit=5, test_multithreading_limit=5)

running regression for a feature

Out of the suites used to initiate the runner instance, Test Junkie will run those that match the features.

runner = Runner(suites=[...])
runner.run(features=["Login"])

running regression for a component

Out of the suites used to initiate the runner instance, Test Junkie will run those that match the components.

runner = Runner(suites=[...])
runner.run(components=["OAuth"])

running regression for tags

Out of the suites used to initiate the runner instance, Test Junkie will run those that match the configuration of the tags.

runner = Runner(suites=[...])
runner.run(tag_config={"run_on_match_all": ["component_a", "critical"],
                       "skip_on_match_any": ["trivial", "known_failure"]})

running tests assigned to a specific person

Out of the suites used to initiate the runner instance, Test Junkie will run those that match the assignees.

runner = Runner(suites=[...])
runner.run(owners=["John Doe", "Jane Doe"])

running specific tests

Out of the suites used to initiate the runner instance, Test Junkie will run specific tests; all you have to do is specify the test objects to run.

runner = Runner(suites=[...])
runner.run(tests=[ExampleTestSuite1.example_test_1, ExampleTestSuite1.example_test_2])
If you don't want to use test objects, you can also use strings.
runner = Runner(suites=[...])
runner.run(tests=["example_test_1", "example_test_2"])

canceling tests

If you are integrating Test Junkie into a bigger framework, it's possible that you would like to programmatically stop test execution. The good news is that Test Junkie allows you to do just that, gracefully. If you call cancel() on the Runner object, the Runner will start canceling suites and tests, which will trigger the respective event listeners:

  1. on_cancel
  2. on_class_cancel
Canceling execution does not abruptly stop the Runner - all of the suites will still "run", but it will be similar to skipping, which allows suites & tests to quickly, but in their natural fashion, finish running without locking up any of the resources on the machine where it runs.

from test_junkie.runner import Runner
from ... import ExampleTestSuite1
from ... import ExampleTestSuite2
from ... import ExampleTestSuite3

runner = Runner([ExampleTestSuite1, ExampleTestSuite2, ExampleTestSuite3])
runner.run(suite_multithreading_limit=5, test_multithreading_limit=5)
runner.cancel()

runner

The Runner object is used to run the tests. The Runner can be initialized with a number of different properties:

  1. suites - required parameter, takes a list of class objects decorated with @Suite(). This is how you tell Test Junkie which suites to care about.
  2. monitor_resources - optional parameter, takes a bool. This turns on CPU & MEM monitoring. See HTML Report.
  3. html_report - optional parameter, takes a str. This enables HTML Report generation.
  4. xml_report - optional parameter, takes a str. This enables XML Report generation.
  5. config - optional parameter, takes a path to a file as a str. A config can be created using the CLI and saved to a location of your choice. Arguments passed explicitly to the Runner & Runner.run() will take precedence over all other settings; otherwise, the settings in the config will be used.
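
A minimal sketch combining a few of these properties; the suites and file paths are placeholders:

from test_junkie.runner import Runner
from ... import ExampleSuite1
from ... import ExampleSuite2

runner = Runner(suites=[ExampleSuite1, ExampleSuite2],
                monitor_resources=True,
                html_report="/path/to/report.html",
                xml_report="/path/to/report.xml",
                config="/path/to/your_config.cfg")
runner.run()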

exposed methods

Runner exposes 3 methods:

  1. run() - This method is special. Not only, as the name suggests, does it initiate the actual test cycle, but it also allows you to define more configurations for running your tests. All of which are optional:
    1. features - takes a list of features. aka: runner.run(features=["Login"]).
    2. components - takes a list of components. aka: runner.run(components=["2FactorAuth"]).
    3. owners - takes a list of owners. aka: runner.run(owners=["John Cena"]).
    4. tag_config - takes a dict such as runner.run(tag_config={"run_on_match_all": ["pre_deploy", "critical"]}). See Tags for more.
    5. tests - takes a list of @test() objects or a list of strings. example:
      • runner.run(tests=[LoginSuite.positive_login, LoginSuite.negative_login])
      • runner.run(tests=["positive_login", "negative_login"])
    6. suite_multithreading_limit - takes an int which enables parallel execution for test suites.
    7. test_multithreading_limit - takes an int which enables parallel execution for tests.
    8. quiet - boolean, will silence all output of the tests.
  2. cancel() - read more about canceling tests here.
  3. get_executed_suites() - This will return a list of test_junkie.objects.SuiteObject instances. A SuiteObject can be used to analyze anything from test results to the performance of tests and much, much more.

limiter

The Limiter allows you to control the throttling and truncation mechanisms of Test Junkie. Note that the Limiter has a "master switch": if Limiter.ACTIVE is set to False, it will disable all of the limits, including the default ones.
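
For example, to turn every limit off:

from test_junkie.objects import Limiter

# master switch - when False, throttling and truncation limits are disabled entirely
Limiter.ACTIVE = False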

throttling

There are cases when you don't want to kick off suites and/or tests back to back when running tests in parallel. Test Junkie gives you the ability to throttle how frequently a new @test() and/or @Suite() can get kicked off.

from test_junkie.objects import Limiter

# Suites will be kicked off on a 3 second interval
Limiter.SUITE_THROTTLING = 3

# Tests will be kicked off on a 1 second interval
Limiter.TEST_THROTTLING = 1

truncation

Test Junkie will truncate long exception messages to keep tracebacks to a sensible size, which by default is 3000 characters. You can control the threshold limit. Note that this also applies to the tracebacks/exceptions you see in the HTML report.

from test_junkie.objects import Limiter

# Will increase char limit to 10000 for all exception messages
Limiter.EXCEPTION_MESSAGE_LIMIT = 10000

# Will increase char limit to 10000 for all tracebacks
Limiter.TRACEBACK_LIMIT = 10000

suiteobject

Throughout a test suite execution, as tests pass, fail, or otherwise complete their life cycle, respective events are triggered in the listener interface. No matter which event is triggered, be it on_in_progress or on_failure etc., it has access to the SuiteObject.

Below is an example of how to access the SuiteObject from the on_failure event. You can use the same method in any of the events though.

Remember this when calling any of the available methods to get data: if you assign the return value to a variable and later modify that variable, you may be modifying the actual value in the SuiteObject that Test Junkie is working with while running the tests. It is critical for the proper functionality of TJ that those values are modified only by TJ; therefore, make a copy of the return value, especially if you are planning to manipulate it later.

from test_junkie.listener import Listener

class MyTestListener(Listener):

    def __init__(self, **kwargs):

        Listener.__init__(self, **kwargs)

    def on_failure(self, **kwargs):
        properties = kwargs.get("properties")
        test_junkie_suite_object = properties["jm"]["jso"]
        # ^ this is your TJ Suite Object, you can now use any of the
        # methods listed below to get the desired data from it
        # DO NOT OVERRIDE ANY OF ITS PROPERTIES
        # Always make a copy of the value you are getting.
        # For example, if the return is an integer - this is how it would look
        total_tests = int(test_junkie_suite_object.get_test_count())
        # wrapping the return in int() makes a new value instead of
        # a pointer to the value that would be returned otherwise
        # now you can add/subtract etc from the total_tests variable without
        # accidentally affecting TJ
...

SuiteObject allows you to get a lot of information about the current state of the suite and its related metrics. You may use those metrics, for example:

  1. to decide if you want to terminate the entire run and fail the build
  2. or, if the failure rate of tests for a specific test suite reaches its threshold, to send an alert/email to the appropriate team/person.

These are just two examples; there is no right or wrong way to use this data.

get test count

Return Type: Integer
Function: get_test_count()
Example call: SuiteObject.get_test_count()

This call does not account for parameterized tests. For example, if you have 1 test with 10 different parameters and one test without any parameters in the same suite, this call will return 2.

2

get class name

Return Type: String
Function: get_class_name()
Example call: SuiteObject.get_class_name()

Use to get the name of the test suite.

ExampleSuite

get class instance

Return Type: Class
Function: get_class_instance()
Example call: SuiteObject.get_class_instance()

Use to get the actual instance of the suite that you defined and all of its functions and variables. You can therefore manipulate and even impact the tests in that instance to a degree.

< your.test.framework.suites.ExampleSuite.ExampleSuite object at 0x000002166D868070 >

get class object

Return Type: Class
Function: get_class_object()
Example call: SuiteObject.get_class_object()

While get_class_instance is used to get the instance of the class, this method is used to just get the class object.

< your.test.framework.suites.ExampleSuite.ExampleSuite >

get class module

Return Type: String
Function: get_class_module()
Example call: SuiteObject.get_class_module()

Use to get the module of the test suite class.

your.test.framework.suites.ExampleSuite

get test objects

Return Type: List
Function: get_test_objects()
Example call: SuiteObject.get_test_objects()

Use to get the list of the test functions within a test suite.

[< ExampleSuite.example_1 >, < ExampleSuite.example_2 >]

get test function names

Return Type: List
Function: get_test_function_names()
Example call: SuiteObject.get_test_function_names()

Similar to get_test_objects but instead returns a list of strings, where strings are the names of the test functions.

['example_1', 'example_2']

get test components

Return Type: Set
Function: get_test_components()
Example call: SuiteObject.get_test_components()

Use to get a set of all the components under test in a suite. If at least 1 test does not have a component defined, None will also be returned, in addition to the components that you have defined. None is treated as a component, just an undefined one - this is OK.

{'Example', None}

get test tags

Return Type: Set
Function: get_test_tags()
Example call: SuiteObject.get_test_tags()

Use to get a set of all the tags under test in a suite.

{'smoke', 'login'}

get feature

Return Type: String
Function: get_feature()
Example call: SuiteObject.get_feature()

Use to get the feature under test in the suite. A suite can only have one feature defined, but features can be made up of multiple components.

ExampleFeature

get priority

Return Type: Integer OR NoneType
Function: get_priority()
Example call: SuiteObject.get_priority()

Returns the priority of the suite. If priority is not defined for the suite, returns None.

1
None

get runtime

Return Type: Float
Function: get_runtime()
Example call: SuiteObject.get_runtime()

Returns the time in seconds for which the suite has been running.

8.082839965820312

get retry limit

Return Type: Integer
Function: get_retry_limit()
Example call: SuiteObject.get_retry_limit()

Use to find out the retry limit for the suite.

2

get meta

Return Type: Dictionary
Function: get_meta()
Example call: SuiteObject.get_meta()

Used to retrieve the meta information of the suite if such was defined.

{"some_key": "some value"}

get unsuccessful tests

Return Type: List
Function: get_unsuccessful_tests()
Example call: SuiteObject.get_unsuccessful_tests()

Returns the functions of the unsuccessful tests in the suite at that point in time.

 [<ExampleSuite.example_1>]

has unsuccessful tests

Return Type: Boolean
Function: has_unsuccessful_tests()
Example call: SuiteObject.has_unsuccessful_tests()

Use to find out if a suite has any tests that did not run successfully.

True

get status

Return Type: String
Function: get_status()
Example call: SuiteObject.get_status()

Used to get the status of the suite at the time.

fail

get owner

Return Type: String
Function: get_owner()
Example call: SuiteObject.get_owner()

Owner is defined via meta; therefore, you can also obtain this value with a get_meta() call and then access it via the dictionary. However, this function will return the exact owner as a string.

Artur Spirin

get parallel restrictions

Return Type: List
Function: get_parallel_restrictions()
Example call: SuiteObject.get_parallel_restrictions()

Used to get the list of suites that this suite should not be running with at the same time.

[]

get parameters

Return Type: List
Function: get_parameters()
Example call: SuiteObject.get_parameters()

Used to get suite parameters if such were defined.

["user1@example.com", "user2@example.com"]

is parallelized

Return Type: Boolean
Function: is_parallelized()
Example call: SuiteObject.is_parallelized()

Use to find out if a suite can run in parallel with other suites.

True

get data by tags

Return Type: Dictionary
Function: get_data_by_tags()
Example call: SuiteObject.get_data_by_tags()

Test Junkie tracks a number of metrics for each suite; you can access most of them via this call.

{
   "_totals_":{
      "performance":[
         0.0,
         0.0,
         0.0009999275207519531,
         0.0,
         0.0,
         0.0,
         0.0,
         0.0,
         0.0
      ],
      "exceptions":[
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()"
      ],
      "retries":[
         2,
         2,
         2,
         1,
         1,
         1
      ],
      "total":6,
      "success":2,
      "skip":0,
      "fail":4,
      "cancel":0,
      "ignore":0,
      "error":0
   },
   "login":{
      "performance":[
         0.0,
         0.0,
         0.0009999275207519531,
         0.0,
         0.0,
         0.0,
         0.0
      ],
      "exceptions":[
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()",
         "AssertionError()"
      ],
      "retries":[
         2,
         2,
         2,
         1
      ],
      "total":4,
      "success":0,
      "skip":0,
      "fail":4,
      "cancel":0,
      "ignore":0,
      "error":0
   },
   "smoke":{
      "performance":[
         0.0,
         0.0
      ],
      "exceptions":[

      ],
      "retries":[
         1,
         1
      ],
      "total":2,
      "success":2,
      "skip":0,
      "fail":0,
      "cancel":0,
      "ignore":0,
      "error":0
   }
}

testobject

coming soon
