CLI#

Bowtie is a versatile tool which you can use to investigate any or all of the implementations it supports. Below are a few sample command lines you might be interested in.

Running Commands Against All Implementations#

Many of Bowtie’s subcommands take a -i / --implementation option specifying which implementation(s) to run against; repeating the argument runs the command across multiple implementations. Often you will want to run against every implementation Bowtie supports. For somewhat boring reasons (partially having to do with the GitHub API), making this “run against all implementations” behavior seamless is slightly nontrivial, though doing so is tracked in this issue.

In the interim, it’s often convenient to use a local checkout of Bowtie in order to list this information.

Specifically, all supported implementations live in the implementations/ directory, and therefore you can construct a string of -i arguments using a small bit of shell vomit. If you have cloned Bowtie to /path/to/bowtie you should be able to use $(ls /path/to/bowtie/implementations/ | sed 's/^/-i /') in any command to expand out to all implementations. See below for a full example.
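As a sketch of what that expansion does, using a throwaway directory in place of a real checkout's implementations/ directory (the two names below are stand-ins, not a complete list):

```shell
# Stand-in for /path/to/bowtie/implementations/ -- a real checkout contains
# one directory per supported implementation.
dir=$(mktemp -d)
mkdir "$dir/go-jsonschema" "$dir/js-ajv"

# Prefix each name with "-i ", exactly as the sed expansion in the text does.
args=$(ls "$dir" | sed 's/^/-i /')
echo $args   # -i go-jsonschema -i js-ajv
```

Left unquoted, $args then expands into the repeated -i flags when spliced into a bowtie command line.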

Examples#

Validating a Specific Instance Against One or More Implementations#

The bowtie validate subcommand can be used to test arbitrary schemas and instances against any implementation Bowtie supports.

Given some collection of implementations to check – here perhaps two JavaScript implementations – it takes a single schema and one or more instances to check against it:

$ bowtie validate -i js-ajv -i js-hyperjump <(printf '{"type": "integer"}') <(printf 37) <(printf '"foo"')

Note that the schema and instance arguments are expected to be files, and that therefore the above makes use of normal shell process substitution to pass some examples on the command line.
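If your shell lacks process substitution, ordinary temporary files work just as well. A minimal equivalent sketch (the bowtie invocation itself is left commented out, since it requires the container images to be available):

```shell
# Write the schema and instances to plain files instead of using <(...).
schema=$(mktemp);  printf '{"type": "integer"}' > "$schema"
valid=$(mktemp);   printf 37 > "$valid"
invalid=$(mktemp); printf '"foo"' > "$invalid"

# Equivalent to the process-substitution form above:
# bowtie validate -i js-ajv -i js-hyperjump "$schema" "$valid" "$invalid"
cat "$schema"   # {"type": "integer"}
```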

You will often want to pipe this output to bowtie summary (though not always, as you can also upload it to https://bowtie.report/ to view a local report). Summarized in the terminal, the above command produces:

$ bowtie validate -i js-ajv -i js-hyperjump <(printf '{"type": "integer"}') <(printf 37) <(printf '"foo"') | bowtie summary
2023-11-02 15:43.10 [debug    ] Will speak                     dialect=https://json-schema.org/draft/2020-12/schema
2023-11-02 15:43.10 [info     ] Finished                       count=1
                                        Bowtie
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Schema              ┃                                                              ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│                     │                                                              │
│ {                   │  Instance   ajv (javascript)   hyperjump-jsv (javascript)    │
│   "type": "integer" │ ───────────────────────────────────────────────────────────  │
│ }                   │  37         valid              valid                         │
│                     │  "foo"      invalid            invalid                       │
│                     │                                                              │
└─────────────────────┴──────────────────────────────────────────────────────────────┘
                                    2 tests ran

Running a Single Test Suite File#

To run the draft 7 type-keyword tests on the Lua jsonschema implementation, run:

$ bowtie suite -i lua-jsonschema https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/main/tests/draft7/type.json | bowtie summary --show failures

Running the Official Suite Across All Implementations#

The following will run all Draft 7 tests from the official test suite (which it will automatically retrieve) across all implementations supporting Draft 7, showing a summary of any test failures.

$ bowtie suite $(ls /path/to/bowtie/implementations/ | sed 's/^/-i /') https://github.com/json-schema-org/JSON-Schema-Test-Suite/tree/main/tests/draft7 | bowtie summary --show failures

Running Test Suite Tests From Local Checkouts#

You can also provide a local path to the test suite, which is useful if you have local changes:

$ bowtie suite $(ls /path/to/bowtie/implementations/ | sed 's/^/-i /') ~/path/to/json-schema-org/suite/tests/draft2020-12/ | bowtie summary --show failures

Checking An Implementation Functions On Basic Input#

If you wish to verify that a particular implementation works on your machine (e.g. if you suspect a problem with the container image, or otherwise aren’t seeing results), you can run bowtie smoke. E.g., to verify the Golang jsonschema implementation is functioning, you can run:

$ bowtie smoke -i go-jsonschema

Enabling Shell Tab Completion#

The Bowtie CLI supports tab completion using the Click library's built-in support. Below are brief instructions for Bash using its default configuration paths.

Add this to ~/.bashrc:

$ eval "$(_BOWTIE_COMPLETE=bash_source bowtie)"

Using eval means that the command is invoked and evaluated every time a shell is started, which can delay shell responsiveness. To speed it up, write the generated script to a file, then source that.

Save the script somewhere.

$ _BOWTIE_COMPLETE=bash_source bowtie > ~/.bowtie-complete.bash

Source the file in ~/.bashrc.

$ . ~/.bowtie-complete.bash

After modifying your shell configuration, you may need to start a new shell in order for the changes to be loaded.
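Click uses the same pattern for other shells. Assuming Click's standard completion support (the environment variable follows Click's _<PROG>_COMPLETE convention), the zsh and fish equivalents are:

```shell
# zsh: add to ~/.zshrc
eval "$(_BOWTIE_COMPLETE=zsh_source bowtie)"

# fish: add to ~/.config/fish/completions/bowtie.fish
_BOWTIE_COMPLETE=fish_source bowtie | source
```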

Reference#

bowtie#

A meta-validator for the JSON Schema specifications.

Bowtie gives you access to the JSON Schema ecosystem across every programming language and implementation.

It lets you compare implementations either to each other or to known correct results from the official JSON Schema test suite.

bowtie [OPTIONS] COMMAND [ARGS]...

Options

--version#

Show the version and exit.

-L, --log-level <log_level>#

How verbose should Bowtie be?

Default:

warning

Options:

debug | info | warning | error | critical

If you don’t know where to begin, bowtie validate --help or bowtie suite --help are likely good places to start.

Full documentation can also be found at https://docs.bowtie.report

badges#

Generate Bowtie badges for implementations using a previous Bowtie run.

Will generate badges for any existing dialects, and ignore any for which a report was not generated.

bowtie badges [OPTIONS]

Options

--site <site>#

The path to a previously generated collection of reports, used to generate the badges.

Default:

site

filter-dialects#

Output dialect URIs matching the given criteria.

If any implementations are provided, filter dialects supported by all the given implementations.

bowtie filter-dialects [OPTIONS]

Options

-i, --implementation <IMPLEMENTATION>#

A container image which implements the bowtie IO protocol.

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-d, --dialect <URI_OR_NAME>#

Filter from the given list of dialects only.

-l, --latest#

Show only the latest dialect.

-b, --boolean-schemas, -B, --no-boolean-schemas#

If provided, show only dialects which do (or do not) support boolean schemas. Otherwise show either kind.

filter-implementations#

Output implementations which match the given criteria.

Useful for piping or otherwise using the resulting output for further Bowtie commands.

bowtie filter-implementations [OPTIONS]

Options

-i, --implementation <IMPLEMENTATION>#

A container image which implements the bowtie IO protocol.

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-d, --supports-dialect <URI_OR_NAME>#

Only include implementations supporting the given dialect or dialect short name.

-l, --language <LANGUAGE>#

Only include implementations in the given programming language.

Options:

c++ | clojure | cpp | dotnet | go | java | javascript | js | kotlin | lua | php | python | ruby | rust | scala | ts | typescript

info#

Show information about a supported implementation.

bowtie info [OPTIONS]

Options

-i, --implementation <IMPLEMENTATION>#

A container image which implements the bowtie IO protocol.

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-f, --format <format>#

What format to use for the output

Default:

pretty if stdout is a tty, otherwise JSON

Options:

json | pretty | markdown

run#

Run test cases written directly in Bowtie’s testing format.

This is generally useful if you wish to hand-author which schemas to include in the schema registry, or otherwise exactly control the contents of a test case.

bowtie run [OPTIONS] [INPUT]

Options

-i, --implementation <IMPLEMENTATION>#

Required A container image which implements the bowtie IO protocol.

-D, --dialect <URI_OR_NAME>#

A URI or shortname identifying the dialect of each test. Possible shortnames include: 2019, 2019-09, 201909, 2020, 2020-12, 202012, 3, 4, 6, 7, draft2019-09, draft201909, draft2020-12, draft202012, draft3, draft4, draft6, draft7.

-k, --filter <GLOB>#

Only run cases whose description match the given glob pattern.

-x, --fail-fast#

Stop running immediately after the first failure or error.

--max-fail <COUNT>#

Stop running once COUNT tests fail in total across implementations.

--max-error <COUNT>#

Stop running once COUNT tests error in total across implementations.

-S, --set-schema#

Explicitly set $schema in all (non-boolean) case schemas sent to implementations. Note this of course means what is passed to implementations will differ from what is provided in the input.

Default:

False

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-V, --validate-implementations#

When speaking to implementations (provided via -i), validate the requests and responses sent to them under Bowtie’s JSON Schema specification. Generally, this option protects against broken Bowtie implementations and can be left at its default (of off) unless you are developing a new implementation container.

Arguments

INPUT#

Optional argument

smoke#

Smoke test implementations for basic correctness against Bowtie’s protocol.

bowtie smoke [OPTIONS]

Options

-i, --implementation <IMPLEMENTATION>#

A container image which implements the bowtie IO protocol.

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-q, --quiet#

Don’t print any output, just exit with nonzero status on failure.

-f, --format <format>#

What format to use for the output

Default:

pretty if stdout is a tty, otherwise JSON

Options:

json | pretty | markdown

suite#

Run the official JSON Schema test suite against any implementation.

Supports a number of possible inputs:

  • file paths found on the local file system containing tests, e.g.:

    • {PATH}/tests/draft7 to run the draft 7 version’s tests out of a local checkout of the test suite

    • {PATH}/tests/draft7/foo.json to run just one file from a checkout

  • URLs to the test suite repository hosted on GitHub, e.g.:

    • https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/main/tests/draft7/ to run a version directly from any branch which exists in GitHub

    • https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/main/tests/draft7/foo.json to run a single file directly from a branch which exists in GitHub

  • short name versions of the previous URLs (similar to those providable to bowtie validate --dialect), e.g.:

    • 7, to run the draft 7 tests directly from GitHub (as in the URL example above)
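The shortname and URL forms correspond in a predictable way. As an illustration only (suite_url is a hypothetical helper written for this sketch, not part of Bowtie; bowtie suite resolves shortnames itself), the mapping spelled out:

```shell
# Hypothetical helper showing the shortname -> suite-URL correspondence.
suite_url() {
  printf 'https://github.com/json-schema-org/JSON-Schema-Test-Suite/tree/main/tests/draft%s\n' "$1"
}

suite_url 7         # ...JSON-Schema-Test-Suite/tree/main/tests/draft7
suite_url 2020-12   # ...JSON-Schema-Test-Suite/tree/main/tests/draft2020-12
```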

bowtie suite [OPTIONS] INPUT

Options

-i, --implementation <IMPLEMENTATION>#

Required A container image which implements the bowtie IO protocol.

-k, --filter <GLOB>#

Only run cases whose description match the given glob pattern.

-x, --fail-fast#

Stop running immediately after the first failure or error.

--max-fail <COUNT>#

Stop running once COUNT tests fail in total across implementations.

--max-error <COUNT>#

Stop running once COUNT tests error in total across implementations.

-S, --set-schema#

Explicitly set $schema in all (non-boolean) case schemas sent to implementations. Note this of course means what is passed to implementations will differ from what is provided in the input.

Default:

False

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-V, --validate-implementations#

When speaking to implementations (provided via -i), validate the requests and responses sent to them under Bowtie’s JSON Schema specification. Generally, this option protects against broken Bowtie implementations and can be left at its default (of off) unless you are developing a new implementation container.

Arguments

INPUT#

Required argument

summary#

Generate an (in-terminal) summary of a Bowtie run.

bowtie summary [OPTIONS] [INPUT]

Options

-f, --format <format>#

What format to use for the output

Default:

pretty if stdout is a tty, otherwise JSON

Options:

json | pretty | markdown

-s, --show <show>#

Configure whether to display validation results (whether instances are valid or not) or test failure results (whether the validation results match expected validation results)

Default:

validation

Options:

failures | validation

Arguments

INPUT#

Optional argument

validate#

Validate instances under a schema across any supported implementation.

bowtie validate [OPTIONS] SCHEMA [INSTANCES]...

Options

-i, --implementation <IMPLEMENTATION>#

Required A container image which implements the bowtie IO protocol.

-D, --dialect <URI_OR_NAME>#

A URI or shortname identifying the dialect of each test. Possible shortnames include: 2019, 2019-09, 201909, 2020, 2020-12, 202012, 3, 4, 6, 7, draft2019-09, draft201909, draft2020-12, draft202012, draft3, draft4, draft6, draft7.

-S, --set-schema#

Explicitly set $schema in all (non-boolean) case schemas sent to implementations. Note this of course means what is passed to implementations will differ from what is provided in the input.

Default:

False

-T, --read-timeout <SECONDS>#

An explicit timeout to wait for each implementation to respond to each instance being validated. Set this to 0 if you wish to wait forever, though note that this means you may end up waiting … forever!

Default:

2.0

-V, --validate-implementations#

When speaking to implementations (provided via -i), validate the requests and responses sent to them under Bowtie’s JSON Schema specification. Generally, this option protects against broken Bowtie implementations and can be left at its default (of off) unless you are developing a new implementation container.

-d, --description <description>#

A (human-readable) description for this test case.

--expect <expect>#

Expect the given input to be considered valid or invalid, or else (with ‘any’) to allow either result.

Default:

any

Options:

valid | invalid | any

Arguments

SCHEMA#

Required argument

INSTANCES#

Optional argument(s)