catcher_modules.service package

Submodules

catcher_modules.service.docker module

class catcher_modules.service.docker.Docker(**kwargs)[source]

Allows you to start/stop/connect/disconnect Docker containers, execute commands in them, and get their logs and statuses. Very useful when you need to run something like Mockserver and/or simulate network disconnects.

Input:
Start: run a container. Returns the container’s hash.
  • image: container’s image.
  • name: container’s name. Optional.
  • cmd: command to run in the container. Optional.
  • detached: whether to run the container detached. Optional (default is true).
  • ports: dictionary of ports to bind. Keys are container ports, values are host ports.
  • environment: a dictionary of environment variables.
  • volumes: a dictionary of volumes.
  • network: network name. Optional (default is the current test’s name).
Stop: stop a container.
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
Status: get the container’s status.
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
Disconnect: disconnect a container from a network (network failure simulation).
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
  • network: network name. Optional (default is the current test’s name).
Connect: connect a container to a network. All containers share the same network per test.
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
  • network: network name. Optional (default is the current test’s name).
Exec: execute a command inside a running container.
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
  • cmd: command to execute.
  • dir: directory where the command will be executed. Optional.
  • user: user to execute the command as. Optional (default is root).
  • environment: a dictionary of environment variables.
Logs: get the container’s logs.
  • name: container’s name. Optional.
  • hash: container’s hash. Optional. Either name or hash must be present.
Examples:

Run a blocking command in a new container and check the output.

steps:
    - docker:
        start:
            image: 'alpine'
            cmd: 'echo hello world'
            detached: false
        register: {echo: '{{ OUTPUT.strip() }}'}
    - check:
        equals: {the: '{{ echo }}', is: 'hello world'}

Start a named container detached, with volumes and environment variables.

- docker:
    start:
        image: 'my-backend-service'
        name: 'mock-server'
        ports:
            '1080/tcp': 8000
        environment:
            POOL_SIZE: 20
            OTHER_URL: '{{ service1.url }}'
        volumes:
            '{{ CURRENT_DIR }}/data': '/data'
            '/tmp/logs': '/var/log/service'

Exec a command in a running container.

- docker:
    start:
        image: 'postgres:alpine'
        environment:
            POSTGRES_PASSWORD: test
            POSTGRES_USER: user
            POSTGRES_DB: test
    register: {hash: '{{ OUTPUT }}'}
...
- docker:
    exec:
        hash: '{{ hash }}'
        cmd: >
            psql -U user -d test -c
            "CREATE TABLE test(rno integer, name character varying)"
    register: {create_result: '{{ OUTPUT.strip() }}'}

Get container’s logs.

- docker:
    start:
        image: 'alpine'
        cmd: 'echo hello world'
    register: {id: '{{ OUTPUT }}'}
- docker:
    logs:
        hash: '{{ id }}'
    register: {out: '{{ OUTPUT.strip() }}'}
- check:
    equals: {the: '{{ out }}', is: 'hello world'}

Disconnect a container from a network.

- docker:
    disconnect:
        hash: '{{ hash }}'
- http:
    get:
        url: 'http://localhost:8000/some/path'
        should_fail: true
- docker:
    connect:
        hash: '{{ hash }}'
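
Getting a container’s status is not shown above; a minimal sketch, assuming hash was registered from a previous start step (the exact shape of OUTPUT depends on the module):

```yaml
- docker:
    status:
        hash: '{{ hash }}'
    register: {status: '{{ OUTPUT }}'}
```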

catcher_modules.service.elastic module

class catcher_modules.service.elastic.Elastic(**kwargs)[source]

Allows you to get data from Elasticsearch. Useful when your services push their logs there and you need to check the logs automatically from the test.

Input:
Search: search Elasticsearch.
  • url: RFC-1738 compatible server url (can contain user credentials).
  • index: ES index (database).
  • query: your query to run.
  • <other param>: you can add any other param here (see Search with limiting fields for an example)
Refresh: trigger a refresh of an index.
  • url: RFC-1738 compatible server url (can contain user credentials).
  • index: ES index (database).
Examples:

Search with limiting fields

elastic:
    search:
        url: 'http://127.0.0.1:9200'
        index: test
        query:
            match: {payload : "three"}
        _source: ['name']
    register: {docs: '{{ OUTPUT }}'}

Connect to multiple ES instances: one plain and one secured.

elastic:
    search:
        url:
            - 'http://127.0.0.1:9200'
            - 'https://{{ user }}:{{ secret }}@{{ host2 }}:443'
        index: test
        query: {match_all: {}}

Refresh index

elastic:
    refresh:
        url: 'http://127.0.0.1:9200'
        index: test

In a bool query, must and should are lists

elastic:
    search:
        url: 'http://127.0.0.1:9200'
        index: test
        query:
            bool:
                must:
                    - term: {shape: "round"}
                    - bool:
                        should:
                            - term: {color: "red"}
                            - term: {color": "blue"}
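
Since newly indexed documents only become searchable after a refresh, a common pattern is to refresh the index before searching. A sketch reusing the index from the examples above:

```yaml
- elastic:
    refresh:
        url: 'http://127.0.0.1:9200'
        index: test
- elastic:
    search:
        url: 'http://127.0.0.1:9200'
        index: test
        query: {match_all: {}}
    register: {docs: '{{ OUTPUT }}'}
```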

catcher_modules.service.s3 module

class catcher_modules.service.s3.S3(**kwargs)[source]

Allows you to get/put/list/delete files in Amazon S3.

Useful hint: for local testing you can use Minio running in Docker, as it is S3 API compatible.

Input:
Config: s3 config object, used in other s3 commands.
  • key_id: access key id
  • secret_key: secret for the access key
  • region: region. Optional.
  • url: endpoint url. Can be used to run against Minio. Optional.
Put: put a file to s3
  • config: s3 config object
  • path: path including the filename. The first directory is treated as a bucket,
    e.g. /my_bucket/subdir/file or my_bucket/subdir/file.
  • content: file’s content. Optional.
  • content_resource: path to a file. Optional. Either content or content_resource must be set.
Get: get a file from s3
  • config: s3 config object
  • path: path including the filename
List: list an S3 directory
  • config: s3 config object
  • path: path to the directory being listed
Delete: delete a file or directory from S3
  • config: s3 config object
  • path: path to the file or directory to delete
  • recursive: if path is a directory and recursive is true, the directory will be deleted
    with all its content. Optional, default is false.
Examples:

Put data into s3

s3:
    put:
        config: '{{ s3_config }}'
        path: /foo/bar/file.csv
        content: '{{ my_data }}'
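
Per the parameter description above, content_resource can be used instead of content to upload a file from the resources directory (the filename here is illustrative):

```yaml
s3:
    put:
        config: '{{ s3_config }}'
        path: /foo/bar/file.csv
        content_resource: 'file.csv'
```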

Get data from s3

s3:
    get:
        config: '{{ s3_config }}'
        path: /foo/bar/file.csv
    register: {csv: '{{ OUTPUT }}'}

List files

s3:
    list:
        config: '{{ s3_config }}'
        path: /foo/bar/
    register: {files: '{{ OUTPUT }}'}

Delete file

s3:
    delete:
        config: '{{ s3_config }}'
        path: '/remove/me'
        recursive: true

catcher_modules.service.prepare module

class catcher_modules.service.prepare.Prepare(**kwargs)[source]

Used for bulk actions to prepare test data. It is useful when you need to prepare a lot of data. This step consists of 3 parts:

  1. write an sql ddl schema file (optional) - describe all tables/schemas/privileges that need to be created
  2. prepare data in a csv file (optional)
  3. call Catcher’s prepare step to populate csv content into the database

Both the sql schema and the csv file support templates.

Important:

  • populate step is designed to be supported by all steps (in the future). Currently it is supported only by Postgres/Oracle/MSSql/MySql/SQLite steps.
  • to populate json as the Postgres Json data type you need to use the use_json: true flag
Input:
Populate: populate an existing service with predefined data.
  • <service_name>: See each step’s own documentation for the parameter descriptions and
    information. Note that not all steps are compatible with the prepare step.
  • variables: Variables which will override the state (only for this prepare step).

Please keep in mind that the resources directory is used for all data and schema files.

Populate existing postgres with data from pg_data_file.

steps:
    - prepare:
        populate:
            postgres:
                conf: '{{ pg_conf }}'
                schema: '{{ pg_schema_file }}'
                data: '{{ pg_data_file }}'
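
As a sketch of part 2, the data file behind pg_data_file could be a plain headered csv kept in the resources directory (table and column names are illustrative; see Catcher’s documentation for the exact file-to-table mapping):

```
user_id,email
1,test1@test.com
2,test2@test.com
```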

Multiple populates can be run at the same time. This example populates existing s3 and postgres with data in a single step.

steps:
    - prepare:
        populate:
            s3:
                conf: '{{ s3_url }}'
                path: '{{ s3_path }}'
                data: '{{ s3_data }}'
            postgres:
                conf: '{{ pg_conf }}'
                schema: '{{ pg_schema_file }}'
                data: '{{ pg_data_file }}'

Prepare step with variables override.

- prepare:
    populate:
        postgres:
            conf: '{{ postgres_conf }}'
            schema: create_personal_data_customer.sql
        variables:
            email: '{{ random("email") }}'

catcher_modules.service.expect module

class catcher_modules.service.expect.Expect(**kwargs)[source]

This is the opposite of prepare. It compares expected data from a csv file to what you have in the database. The csv file supports templates.

Important:

  • the compare step is designed to be supported by all steps (in the future). Currently it is supported only by Postgres/Oracle/MSSql/MySql/SQLite steps.
  • Schema comparison is not implemented.
  • You can use strict comparison (only the data from the csv should be in the table, in the same order as in the csv) or the default one (just check that the data is there)
Input:
Compare: compare the existing data with the expected one.
  • <service_name>: See each step’s own documentation for the parameter descriptions and
    information. Note that not all steps are compatible with the expect step.

Check expected schema and data in postgres.

steps:
    - expect:
        compare:
            postgres:
                url: '{{ pg_conf }}'
                schema: '{{ expected_schema_file }}'
                data: '{{ expected_data_file }}'

Check data in s3 and redshift.

steps:
    - expect:
        compare:
            s3:
                url: '{{ s3_url }}'
                path: '{{ expected_path }}'
                csv:
                    header: true
                    headers: '{{ expected_headers }}'
            redshift:
                url: '{{ redshift_url }}'
                schema: '{{ expected_schema }}'
                data: '{{ expected_data }}'

catcher_modules.service.email module

class catcher_modules.service.email.Message(message)[source]
class catcher_modules.service.email.Email(**kwargs)[source]

Allows you to receive emails via the IMAP protocol and send them.

Config:
  • host: mailserver’s host
  • port: mailserver’s port. Optional. Default is 993.
  • user: your username
  • pass: your password
  • ssl: use ssl. Optional. Default is true.
  • starttls: use starttls. Optional. Default is false.
Filter: search filter object. All fields are optional. For more details and filter options please see the README of the https://github.com/martinrusev/imbox library.
  • unread: boolean. If true will get only unread messages. Default is false.
  • sent_from: Get only messages sent from this address.
  • sent_to: Get only messages sent to this address.
  • date__lt: Get messages received before specific date.
  • date__gt: Get messages received after specific date.
  • date__on: Get messages received on a specific date.
  • subject: Get messages whose subjects contain specified string.
  • folder: Get messages from a specific folder.
Input:
Receive: get a list of messages matching the search criteria, from most recent to oldest.
  • config: email’s config object.
  • filter: add a search filter. Optional.
  • ack: mark messages as read. Optional. Default is false.
  • limit: limit the result to N messages. Optional. Default is unlimited.
    Only messages that fit the limit will be marked as read, if ack is true.
Send: send an email
  • config: email’s config object.
  • from: from email
  • to: to email or a list of emails
  • cc: list of cc recipients. Optional.
  • bcc: list of bcc recipients. Optional.
  • subject: subject. Optional. Default is an empty string.
  • plain: message’s text. Optional.
  • html: message’s text in html format. Optional. Either plain or html must be present.
  • attachments: list of attachment filenames from the resources dir. Optional.
Message: for the fields available in a message please see Message
Examples:

Read all messages, take the last one and check its subject

variables:
    email_config:
        host: 'imap.google.com'
        user: 'my_user@google.com'
        pass: 'my_pass'
steps:
    - email:
        receive:
            config: '{{ email_config }}'
        register: {last_mail: '{{ OUTPUT[0] }}'}
    - check: {equals: {the: '{{ last_mail.subject }}', is: 'Test Subject'}}

Read the last 2 unread messages and mark them as read

- email:
      receive:
          config: '{{ email_conf }}'
          filter: {unread: true}
          ack: true
          limit: 2

Find an unread message containing the blog name in the subject and mark it as read

- email:
      receive:
          config: '{{ email_conf }}'
          filter: {unread: true, subject: 'justtech.blog'}
          ack: true
          limit: 1
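
A sketch combining several filter fields from the list above (the address and folder name are illustrative):

```yaml
- email:
      receive:
          config: '{{ email_conf }}'
          filter: {sent_from: 'noreply@example.com', folder: 'INBOX'}
```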

Send a message in html format

- email:
      send:
          config: '{{ email_conf }}'
          to: 'test@test.com'
          from: 'me@test.com'
          subject: 'test_subject'
          html: '
          <html>
              <body>
                <p>Hi,<br>
                   How are you?<br>
                   <a href="http://example.com">Link</a>
                </p>
              </body>
          </html>'
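
The attachments parameter is not shown above; a sketch sending a plain-text message with one attachment taken from the resources dir (the filename is illustrative):

```yaml
- email:
      send:
          config: '{{ email_conf }}'
          to: 'test@test.com'
          from: 'me@test.com'
          subject: 'report'
          plain: 'Report attached.'
          attachments: ['report.pdf']
```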