In need of some input regarding unit testing problems.

kepler

New member
I'm doing a unit testing workshop for our developers and since the principles of unit testing are fairly simple, instead of just trotting out the usual mantra I wanted to try something a bit more practical.

So, I'm trying to find all the common problems developers come across when unit testing, like "If I want to test method A, but method A calls method B and method B calls method C, how do I test method A in isolation?" or "method A relies on dependency B, but I can't mock B out because it doesn't implement an interface"

The problem is, I'm having difficulty finding much actual elaboration of any problems beyond "unit testing is teh sux/slow/pointless".

So, any personal experiences or pointers are appreciated.

Cheers.
 
Unit Testing is not just something you just bolt on at the end of the development cycle. Instead it is something that you plan for as part of your design. Good designs test naturally and simply. Focusing on the testability of your design is what will create better designs and solid unit testing. This is because you are forced into creating testable components and compartmentalizing your designs. Building components that stand on their own and are testable is one of the keys to building good object oriented designs. So my advice is not to try and pound the round peg into a square hole. Instead look into why your design process produces code that is so difficult to unit test and fix that.
 
That's exactly what I've been trying to get across. Also, you're completely right that if you get to the point where you're trying to write a UT for an existing method that's already hard-wired, and the test is going to be a headache, you've probably made mistakes in your design (let alone the mistake of retro-fitting unit tests in the first place). But I know the guys will be sat there asking questions like that, and if I turn round and say 'well, that's a design/project problem' they're probably going to switch off, thinking that I'm not helping them.

The big problem I've had is that the software we work on here tends to be established products (all to market before I got here) that require small bits of regular maintenance/extension, without changing the core components. So we've never really had the 'big new project' where you could design for testability from the start. We do have one now, though, and the core components have to be UT'd, so I'm against the clock. :bleh:

Also, because the guys are so completely new to UT I think I'd probably have trouble pushing for real test driven development, as I think it's a distinct skill and takes a fair bit of talent and discipline to do well - but they don't know how to do it and (other than what I can do) they aren't going to receive training on how to do it.

With that in mind, I think allowing them to UT code-first seems to be the most realistic in-road, but doing so with knowledge and forethought of things like needing to be able to test the logic in isolation, allowing for things like inversion of control (so abstracting external entities like the DB, filesystem etc. to interfaces and letting them be injected/set in the production code), and, because of the IoC, knowing that it's going to be unit tested for things like the DB running out of connections, the network connection going down, the disk running out of space, and so on.
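Roughly the shape I have in mind, as a bare-bones sketch (all the names here are invented for illustration, nothing from our real code):

    using System;

    public interface IOrderRepository
    {
        int CountOrdersSince(DateTime cutoff);
    }

    public class ReportGenerator
    {
        private readonly IOrderRepository _orders;

        // The real (database-backed) repository, or a test fake, gets injected here.
        public ReportGenerator(IOrderRepository orders)
        {
            _orders = orders;
        }

        public string BuildSummary(DateTime cutoff)
        {
            return "Orders since cutoff: " + _orders.CountOrdersSince(cutoff);
        }
    }

In production you wire up the real implementation; in a test you pass a fake whose CountOrdersSince throws, which is how you'd simulate the connection pool running dry without needing a real DB in the test.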

Hopefully that way they'll all be able to get their feet wet in UT, and still keep some of the design benefits of TDD but without the mental 180 of thinking about tests first, and how to follow it without violating the design rules they have drummed into them.
 
Can you fake the feedback data from methods 2 and 3, and in that way test method 1?
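Something along these lines, say (a rough sketch, names invented; this uses the subclass-and-override trick rather than a mocking library):

    public class PriceCalculator
    {
        // "Method 1" / method A: the logic we actually want to test.
        public decimal TotalWithTax(decimal net)
        {
            return net + GetTax(net);           // calls "method 2"
        }

        // "Method 2", which in turn calls "method 3"; virtual so a test can replace it.
        protected virtual decimal GetTax(decimal net)
        {
            return net * LookupTaxRate();
        }

        // "Method 3": might hit config, a DB, whatever.
        protected virtual decimal LookupTaxRate()
        {
            return 0.2m;
        }
    }

    // Test-only subclass that fakes the feedback from methods 2 and 3.
    class PriceCalculatorWithFakeTax : PriceCalculator
    {
        protected override decimal GetTax(decimal net)
        {
            return 5m;
        }
    }

    // In the test: Assert.AreEqual(105m, new PriceCalculatorWithFakeTax().TotalWithTax(100m));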

I think that you will find it hard selling it to the team. They'll see it as MORE work rather than less.
 
Oh it is a hard sell for a dept with no current UT experience and it's definitely more work in the short term at least. It does tend to improve the design and lower the support and maintenance costs though. Which, for our core components, should result in a pretty hefty payoff.

I'm really looking for common problems (like the ABC example) rather than answers, though; I should be able to work the answers out for myself. :D
 
Well I'll be following this thread with interest. I've studied Unit Testing at Uni and barely implemented it one time in a group project. Can't offer wisdom I'm afraid!
 
That's okay, I offer precious little myself. :D

So far, I have

To what extent should I be looking to test private methods, if at all?
To what degree should mock objects be used?
If I don't have time to UT all of the code, where do I concentrate my efforts?
Why do my tests keep breaking when I change my code, and isn't it too expensive to fix?
If I make a mistake in my code, aren't I likely to also make the same mistake in my tests?
If method A calls method B aren't I breaking the isolation principle by testing A?
What if the class under test is laborious to setup?
What can I test for, using mocking, that I can't test for without it?
What if my dependency is a sealed class and has no interface?
How do I test the logic in my user interface?

but there have to be more.

I'm actually finding it quite difficult. I'm pointedly avoiding all the 'pro-unit testing' resources and concentrating solely on critiques of unit testing, looking for things that challenge my own assumptions and biases, but some of it is just painful to read. Things like "Unit testing is pointless, we're developers, we don't know how to test", or "Unit testing is pointless, my tests take ages to run", "A test that needs to be updated every time the product changes is not really a test at all", "Unit testing doesn't catch bugs", with absolutely no self-critique or analysis, and then they all go and reference each other as supporting evidence that unit testing sucks. I've been sat combo facepalming all morning.
 
To what extent should I be looking to test private methods, if at all?
Private methods are implicitly tested when testing your public methods. There shouldn't be a need to write specific unit tests for them.
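For instance (hypothetical class, NUnit assumed):

    using System.Linq;
    using NUnit.Framework;

    public class PasswordPolicy
    {
        public bool IsAcceptable(string candidate)
        {
            return IsLongEnough(candidate) && HasDigit(candidate);
        }

        // Private helpers: exercised through IsAcceptable, so no direct tests needed.
        private bool IsLongEnough(string s) { return s != null && s.Length >= 8; }
        private bool HasDigit(string s) { return s.Any(char.IsDigit); }
    }

    [TestFixture]
    public class PasswordPolicyTests
    {
        [Test]
        public void AcceptsLongPasswordWithDigit()
        {
            // Both private helpers execute as part of this one public call.
            Assert.IsTrue(new PasswordPolicy().IsAcceptable("s3curepassword"));
        }
    }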


To what degree should mock objects be used?
Do whatever it takes to test. Mock components can help isolate certain layers of your code. We try to unit test each layer independently, so they can be useful. But if you can, try to test combinations of layers as well. We also have unit tests that validate output through multiple layers. We've also designed specific unit-test logging that can be used as validation points in between components.
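A minimal sketch of that kind of layer isolation, assuming NUnit and a mocking library such as Moq (all the types here are invented):

    using Moq;
    using NUnit.Framework;

    public interface IEmailGateway
    {
        void Send(string to, string body);
    }

    public class OrderService
    {
        private readonly IEmailGateway _email;
        public OrderService(IEmailGateway email) { _email = email; }

        public void Confirm(string customerEmail)
        {
            // ... business logic ...
            _email.Send(customerEmail, "Your order is confirmed.");
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Confirm_SendsConfirmationEmail()
        {
            var email = new Mock<IEmailGateway>();
            var service = new OrderService(email.Object);

            service.Confirm("a@b.com");

            // The service layer is tested in isolation; the real gateway is never touched.
            email.Verify(m => m.Send("a@b.com", It.IsAny<string>()), Times.Once());
        }
    }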


If I don't have time to UT all of the code, where do I concentrate my efforts?
Concentrate on your core components. You should have a framework layer somewhere in your design. These components are critical to all business logic and are also the easiest to unit test.


Why do my tests keep breaking when I change my code, and isn't it too expensive to fix?
If you keep breaking your unit tests you must not have a very good design to begin with. Most modern programming languages have plenty of capability to maintain backward compatibility across new releases. If you are constantly breaking your unit tests you likely don't have a good design process. Creating quality software requires you to be good at all the methodologies; you can't expect unit testing to compensate for a lack of quality designs.


If I make a mistake in my code, aren't I likely to also make the same mistake in my tests?
All unit tests need to have verification points. If you code this validation wrong, your test will likely fail when there is no bug in your software. Whenever a test fails you need to find out why. The alternative is that you accidentally make a test pass that should have failed. The way to avoid this is to create both positive and negative scenarios for each test case. It's highly unlikely that a positive test would return a false positive and a negative test a false negative for the same test case.

The better question to ask is how you can be sure your tests are comprehensive. This is where the real challenge is. You need to look at all of the use cases supported by a specific interface and make sure you cover all of them. Defining the right test cases can be complex, and missing a scenario that could occur in real-world usage is common. The good thing about unit testing is that if you find out you missed something, you can just plug the hole later. Over time you build a comprehensive test suite that you can have confidence in.
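As a rough illustration of the positive/negative pairing (invented parser, NUnit assumed, including Assert.Throws):

    using System;
    using NUnit.Framework;

    public static class AgeParser
    {
        public static int Parse(string text)
        {
            return int.Parse(text);
        }
    }

    [TestFixture]
    public class AgeParserTests
    {
        [Test]
        public void Parse_ValidInput_ReturnsNumber()      // positive scenario
        {
            Assert.AreEqual(42, AgeParser.Parse("42"));
        }

        [Test]
        public void Parse_NonNumericInput_Throws()        // negative scenario
        {
            Assert.Throws<FormatException>(() => AgeParser.Parse("forty-two"));
        }
    }

If a bug in the verification logic made one of these pass when it shouldn't, the other would almost certainly fail, which is what flushes the mistake out.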


If method A calls method B aren't I breaking the isolation principle by testing A?
Yes. But, so what? Is this supposed to be an argument against unit testing? The goal of creating unit tests is to build a test suite that you can use to have confidence in your software. IMO, it doesn't matter how you do it. Taking some sort of purist view is completely pointless in my book.

What if the class under test is laborious to setup?
Unit tests are automated, so the setup should be a one-time effort. The alternative is to manually set up a laborious system over and over again. Build some common utilities to help you automate testing. We've built communications tools, serialization services, and other services utilized by our unit test framework; certain scenarios would be hard or even impossible to test without them. What other option is there? Not to test at all?
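A sketch of the one-time-setup idea using NUnit's [SetUp] hook (the basket types are invented):

    using System.Collections.Generic;
    using NUnit.Framework;

    public class Item
    {
        public readonly decimal Price;
        public Item(decimal price) { Price = price; }
    }

    public class Basket
    {
        private readonly List<Item> _items = new List<Item>();
        public void Add(Item item) { _items.Add(item); }
        public bool QualifiesForBulkDiscount() { return _items.Count > 40; }
    }

    [TestFixture]
    public class DiscountRuleTests
    {
        private Basket _basket;

        // The laborious construction is written once and re-runs automatically
        // before every test, instead of being repeated by hand each time.
        [SetUp]
        public void BuildFullyPopulatedBasket()
        {
            _basket = new Basket();
            for (int i = 0; i < 50; i++)
            {
                _basket.Add(new Item(9.99m));
            }
        }

        [Test]
        public void BulkDiscountAppliesOverFortyItems()
        {
            Assert.IsTrue(_basket.QualifiesForBulkDiscount());
        }
    }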

What can I test for, using mocking, that I can't test for without it?
Whatever makes sense given the functionality of the components being tested. There are no hard and fast rules.
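One concrete example, though: failure paths. Simulating something like the database pool running out of connections (mentioned earlier in the thread) is nearly impossible to arrange reliably against the real thing, but trivial with a mock. A hedged sketch, assuming Moq and NUnit, with invented types:

    using System;
    using Moq;
    using NUnit.Framework;

    public interface IConnectionFactory
    {
        IDisposable OpenConnection();
    }

    public class CustomerLoader
    {
        private readonly IConnectionFactory _factory;
        public CustomerLoader(IConnectionFactory factory) { _factory = factory; }

        // Returns null instead of blowing up if the pool is exhausted.
        public string LoadName(int id)
        {
            try
            {
                using (_factory.OpenConnection())
                {
                    return "placeholder for the real query result";
                }
            }
            catch (InvalidOperationException)
            {
                return null;
            }
        }
    }

    [TestFixture]
    public class CustomerLoaderTests
    {
        [Test]
        public void ReturnsNullWhenTheConnectionPoolIsExhausted()
        {
            var factory = new Mock<IConnectionFactory>();
            // The mock makes the failure happen on demand.
            factory.Setup(f => f.OpenConnection())
                   .Throws(new InvalidOperationException("pool exhausted"));

            Assert.IsNull(new CustomerLoader(factory.Object).LoadName(1));
        }
    }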

What if my dependency is a sealed class and has no interface?
You mean a C# sealed class? A sealed class can't be inherited, but having no interface on the class makes no sense; how could anyone use it? Did you mean an abstract class? You can test an abstract class by having your test harness inherit it.
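If it is an abstract base, the inherit-it-in-the-harness approach looks roughly like this (invented names, NUnit assumed):

    using NUnit.Framework;

    public abstract class MessageFormatter
    {
        // Concrete logic that lives in the abstract base and deserves tests.
        public string Format(string body)
        {
            return Header() + body;
        }

        protected abstract string Header();
    }

    // Test-only subclass: the harness inherits the abstract class to exercise it.
    class TestableMessageFormatter : MessageFormatter
    {
        protected override string Header() { return "HDR:"; }
    }

    [TestFixture]
    public class MessageFormatterTests
    {
        [Test]
        public void Format_PrependsHeader()
        {
            Assert.AreEqual("HDR:hello", new TestableMessageFormatter().Format("hello"));
        }
    }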

How do I test the logic in my user interface?
Don't have logic in your user interface. Separate your UI and your business logic and test your business logic using unit tests. This is the better way to design your code anyway. You can easily change UI technologies and be sure that you have quality business logic that is unaffected.
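A rough sketch of that separation (invented names, NUnit assumed):

    using NUnit.Framework;

    // All the decision-making lives in a plain class with no UI references...
    public class LoginLogic
    {
        public bool CanSubmit(string username, string password)
        {
            return !string.IsNullOrEmpty(username) && !string.IsNullOrEmpty(password);
        }
    }

    // ...so a unit test exercises it without ever creating a form.
    [TestFixture]
    public class LoginLogicTests
    {
        [Test]
        public void SubmitDisabledWhenPasswordMissing()
        {
            Assert.IsFalse(new LoginLogic().CanSubmit("kepler", ""));
        }
    }

    // The form or page then just forwards events and reads results, e.g.
    //   submitButton.Enabled = _logic.CanSubmit(userBox.Text, passBox.Text);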


You haven't mentioned anything about continuous integration yet. Continuous integration is another good reason to have a quality unit test suite. There are many CI systems that can integrate with source control and build systems. The basic concept is that you continually run the unit tests with every build, to immediately ensure that they pass with every change you check in. This is a powerful way to identify issues early in the development cycle. Google "Cruise Control" for more info.
 
Stack Overflow might prove useful for those sort of questions...

For example:

http://stackoverflow.com/questions/301693/why-didnt-unit-testing-work-out-for-your-project
That was a great help. I've distilled the responses down to:

(attached image: the distilled list of responses)


Looks like a good starting point. Cheers bud. :up:
 