Last time I wrote about sharing examples in Minitest. This time I want to show an idea I've had for a long time: reusing the same test to verify a system's behavior at different levels.
Let’s say we’re building a simple signup application. We may end up with a test like this:
(Check out full code here: https://github.com/wojtekmach/signups)
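Roughly something like the following, driving the form through Capybara. The CapybaraTestCase base class, form labels and messages here are just illustrative, not necessarily what the repo above uses:

```ruby
require "test_helper"

# Full-stack test: drives the signup form in the browser via Capybara.
class SignupFeatureTest < CapybaraTestCase
  def test_successful_signup
    visit "/"
    fill_in "Name",  with: "John"
    fill_in "Email", with: "john@example.com"
    click_button "Sign up"

    assert page.has_content?("Thank you for signing up!")
  end

  def test_invalid_signup
    visit "/"
    fill_in "Name", with: "John"
    click_button "Sign up"

    refute page.has_content?("Thank you for signing up!")
    assert page.has_content?("Email can't be blank")
  end
end
```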
Now, let’s say we also want to have an API. Oftentimes we end up testing the same two scenarios as above, usually with the same test data:
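A sketch using Rack::Test against a JSON endpoint; the /signups path and the exact status codes are my assumptions:

```ruby
require "test_helper"
require "rack/test"

# Same two scenarios, this time exercised through the JSON API.
class SignupAPITest < Minitest::Test
  include Rack::Test::Methods

  def app
    API # the Sinatra app serving the API
  end

  def test_successful_signup
    post "/signups", name: "John", email: "john@example.com"

    assert_equal 201, last_response.status
  end

  def test_invalid_signup
    post "/signups", name: "John"

    assert_equal 422, last_response.status
    assert_includes last_response.body, "Email can't be blank"
  end
end
```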
Finally, we also have the lower-level test that uses the application logic directly:
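A sketch of that test, exercising the Signup object directly (the constructor arguments are assumptions; #submit, #valid? and #error_messages are the interface used throughout the rest of the post):

```ruby
require "test_helper"

# Unit-level test: uses the application logic directly, no HTTP involved.
class SignupTest < Minitest::Test
  def test_successful_signup
    signup = Signup.new(name: "John", email: "john@example.com")
    signup.submit

    assert signup.valid?
  end

  def test_invalid_signup
    signup = Signup.new(name: "John", email: nil)
    signup.submit

    refute signup.valid?
    assert_includes signup.error_messages, "Email can't be blank"
  end
end
```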
We can extract the common parts of these tests into helper methods like this:
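For example, the UI test could end up looking roughly like this, with the API test getting the analogous treatment; the helper names are illustrative:

```ruby
# The UI test with the shared steps pulled out into helpers.
class SignupFeatureTest < CapybaraTestCase
  def test_successful_signup
    signup name: "John", email: "john@example.com"

    assert_signed_up
  end

  def test_invalid_signup
    signup name: "John", email: nil

    refute_signed_up
    assert_error "Email can't be blank"
  end

  private

  def signup(name:, email:)
    visit "/"
    fill_in "Name",  with: name
    fill_in "Email", with: email if email
    click_button "Sign up"
  end

  def assert_signed_up
    assert page.has_content?("Thank you for signing up!")
  end

  def refute_signed_up
    refute page.has_content?("Thank you for signing up!")
  end

  def assert_error(message)
    assert page.has_content?(message)
  end
end
```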
As I was writing this, I was about to clean up the 3rd test as well, without really thinking about it, but, and this is kind of the point of this post, there isn’t anything to clean up there. There’s no duplication worth extracting and no test/production API quirks worth hiding. Since we fully control the application code, we can design it however we want.
This brings us back to the title of this post: reusing the same test at different levels. What I want to do is design an interface that behaves like the Signup class but under the hood either calls the application logic directly or drives the Web UI or the API. The test must be written in such a way that it’s easy to inject dependencies.
Here’s one approach: I’ll write the test as a module that will later be included into the concrete test cases.
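A minimal sketch of what that module could look like:

```ruby
# The shared test. It only talks to the Signup role; @app is provided by
# whatever test case includes this module.
module SignupTests
  def test_successful_signup
    signup = @app.signup(name: "John", email: "john@example.com")
    signup.submit

    assert signup.valid?
  end

  def test_invalid_signup
    signup = @app.signup(name: "John", email: nil)
    signup.submit

    refute signup.valid?
    assert_includes signup.error_messages, "Email can't be blank"
  end
end
```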
What’s @app? It’s an object that knows how to construct objects that can play the role of a Signup. An object playing the @app role only needs to implement the #signup message. Objects playing the Signup role need #submit, #valid? and #error_messages. Here are possible implementations:
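Sketches of the three adapters; the class names, the /signups endpoint, the .error selector and the success message are my assumptions rather than the real app’s details:

```ruby
require "net/http"
require "uri"
require "json"

# Calls the application logic directly.
class App
  def signup(attributes)
    Signup.new(attributes)
  end
end

# Talks to a running instance over the JSON API. The returned object plays
# the Signup role: #submit, #valid?, #error_messages.
class APIClient
  def initialize(url)
    @url = url
  end

  def signup(attributes)
    APISignup.new(@url, attributes)
  end

  class APISignup
    def initialize(url, attributes)
      @url = url
      @attributes = attributes
    end

    def submit
      @response = Net::HTTP.post_form(URI.join(@url, "/signups"), @attributes)
    end

    def valid?
      @response.code.to_i == 201
    end

    def error_messages
      JSON.parse(@response.body).fetch("errors", [])
    end
  end
end

# Drives the HTML form through a Capybara session.
class WebClient
  def initialize(session)
    @session = session
  end

  def signup(attributes)
    WebSignup.new(@session, attributes)
  end

  class WebSignup
    def initialize(session, attributes)
      @session = session
      @attributes = attributes
    end

    def submit
      @session.visit "/"
      @session.fill_in "Name",  with: @attributes[:name].to_s
      @session.fill_in "Email", with: @attributes[:email].to_s
      @session.click_button "Sign up"
    end

    def valid?
      @session.has_content?("Thank you for signing up!")
    end

    def error_messages
      @session.all(".error").map(&:text)
    end
  end
end
```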
Now we can write the remaining concrete test cases:
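Something along these lines, assuming the API runs locally on port 9393 (the port and the CapybaraTestCase base class are assumptions):

```ruby
# The same SignupTests module, run at each level with a different @app.
class AppSignupTest < Minitest::Test
  include SignupTests

  def setup
    @app = App.new
  end
end

class APISignupTest < Minitest::Test
  include SignupTests

  def setup
    @app = APIClient.new("http://localhost:9393")
  end
end

class WebSignupTest < CapybaraTestCase
  include SignupTests

  def setup
    @app = WebClient.new(page)
  end
end
```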
There are a few nice benefits to this design.
For one thing, this setup is highly configurable. We can easily switch individual levels on and off. What’s more, we can take the configuration further and point the UI & API tests at live servers (e.g. staging.example.com) instead of local servers on the development machine. This has the added benefit of catching more errors: asset pipeline and general deployment issues, DNS problems, etc. Granted, this works extremely well for a simple, basically stateless application like this one, but it should still be doable for more complex cases.
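For example, with nothing more than a couple of environment variables (the variable names here are made up):

```ruby
# Hypothetical knobs for the UI & API test cases: point them at a deployed
# instance instead of the locally booted app.
Capybara.app_host = ENV["APP_HOST"]                      # e.g. https://staging.example.com
API_URL = ENV.fetch("API_URL", "http://localhost:9393")  # used by APIClient.new(API_URL)
```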
This test design also forced us to write mostly production code (albeit code not used by the production app) and only a little bit of simple test code. A nice side effect is that, I think, you generally keep code better organized when it isn’t part of the test suite. More importantly, as a way of testing the app we ended up building client libraries for the API (see http://robots.thoughtbot.com/how-to-test-sinatra-based-web-services) and for the Web UI. If you’re lucky enough to have a dedicated QA team, they may appreciate being able to drive the app through a quite convenient interface while still having access to the raw features of Capybara, etc.
Finally, there’s one more thing maybe worth mentioning. If we have two instances of the app running on app1.example.com and app2.example.com, it’s entirely possible to configure app1’s controllers to use APIClient (instead of simply App) pointed at app2.example.com, without a single change in the application code. Again, probably not that useful, but I think it’s pretty cool :-)