Putting the Talent (Point) in TDD
Last week, Talent Point sponsored our first Udacity Friends London event, part of an ongoing series of fortnightly study sessions. Set up and organised by one of Udacity's Content Developers, Jose Nieto, Udacity Friends aims to give people a space to progress their technical learning collaboratively, with current and former students working together to overcome blockers and share their knowledge.
When we first heard that Udacity Friends were looking for a space, it was a no-brainer. With our recent move to fancy new offices, we've been actively looking to host events and meetups, allowing us to get more directly involved with the community. Partnering with a team like Udacity, a company that shares many of our core values around embracing learning and collaboration, was just a great fit. Personally, I got to meet some fascinating, super friendly people, and get a chance to test my own technical know-how at the same time.
I have to say, it was a great learning experience! And you know what the key thing I learned was? We Talent Pointers know SO MUCH without even realising it!
We may not be true "techies" (certainly in a hands-on sense), but we talk about this stuff day-in and day-out with people who are. Our work brings us into contact with just about everyone across the technical spectrum, from junior developers, through QA and DevOps, right up to CTOs. That means we are exposed to pretty much every development process, trendy tool and engineering paradigm out there, and need to understand them all sufficiently to work out whether someone else has genuine knowledge about them or is just blagging it!
TDD: Best Practice or The Bane of a Developer’s Existence?
One area that particularly stood out to me over the course of the evening was how few people at the session had a solid understanding of TDD (Test-Driven Development). Unfortunately, a lot of engineers and developers have never had the opportunity to work in an environment that truly supports TDD, which means its application and importance are still widely misunderstood.
One person even admitted to "hating TDD" because it doesn't make sense to him logically, and that seems completely fair. I mean, when you really think about it, on paper it doesn't make much sense! How can you create the test for something without having the thing you want to test? #paradox, right?!
However, when you actually see how it’s used in practice, it begins to click into place. But perhaps I'm getting ahead of myself here. First of all, let's just take a look at what TDD is and then explain why it isn't a paradox at all.
If you're a non-techie, just stick with me here – I promise it will make sense, and I won't actually mention any tech jargon whatsoever!
Basically, using TDD means that every single new line of code or new piece of functionality first has a test written for it to ensure that it does what you actually want it to do. That means each element you work on has to be broken down into the smallest, testable pieces first. The test is then run, fails because the new feature doesn't exist yet, and you then write your code to change that "fail" to a "pass".
Once you get to the "pass" point, you run all of the previous tests that you've created for all the other features of your software, making sure that these all still pass as well. If they all do, you review the new code (normally with someone else), see if you can improve it without causing any tests to fail, and then move on to the next small piece of functionality. (P.S. if you want a more detailed explanation, check this out.)
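For the more technically minded, that fail-pass-improve cycle can be sketched in a few lines. This is purely illustrative (the function name `add` and the trivial feature are my own invention, and Python is just one language you could use):

```python
# Step 1 (red): write the test first. Running it now fails,
# because the add() function doesn't exist yet.
def test_add():
    assert add(2, 3) == 5

# Step 2 (green): write just enough code to turn that "fail" into a "pass".
def add(a, b):
    return a + b

# Step 3: run the whole suite, then review and improve the code
# without causing any test to fail.
test_add()
print("all tests pass")
```

Running the test before `add` exists gives you the "fail"; writing the function flips it to a "pass"; and from then on the test guards that feature forever.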
How does that work in practice? Well, let's say you wanted to add a newsletter signup to a website, letting a user submit their email address. Approaching this using TDD, the first step is to work out what you actually want to happen; in other words, what is your test case? In this case, the answer is that you want to receive the email address that the user has entered and submitted.
If you've ever done any development, you may already notice the power of TDD. Before writing a single line of code, you already have a strong outline of the functionality that is needed: an input box for the user, a button to "submit" the information, and a mechanism to retrieve/store that data. But rather than instantly building that setup, your first step is to write a test that looks for a received email address and then run it. Why? Because that's what makes this test-driven and that's how we flesh out exactly what we need.
That test will need to check that the input box and button exist, that a user has typed something in, that the button has been pressed, and that the data received matches the input. Your test will fail at first but, as you add each element and get closer to the final functionality, fewer and fewer of its checks will fail until, eventually, it passes. Hooray! Your newsletter signup works!
You now run all of the other tests already built for your web page (your testing suite), and once they all pass, you're done. Your new functionality can be made live, your new test becomes part of that suite, and now any future changes will also ensure that your newsletter form still works. Nice job!
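The heart of that newsletter example can be sketched in code as well. Everything here is hypothetical – `handle_signup` and the form shape are stand-ins, not a real framework API – but it shows the test asserting the end result (the data received matches what the user entered) before the handler is fleshed out:

```python
# The handler, built only after the test below was written and had failed.
# A bare "@" check stands in for real validation; this is not production code.
def handle_signup(form_data):
    email = form_data.get("email", "").strip()
    if "@" not in email:
        raise ValueError("invalid email address")
    return {"stored_email": email}

# The test, written first: the address we receive must match the user's input.
def test_signup_stores_submitted_email():
    result = handle_signup({"email": "reader@example.com"})
    assert result["stored_email"] == "reader@example.com"

test_signup_stores_submitted_email()
print("signup test passes")
```

Once this passes alongside the rest of the suite, the new test stays in place to protect the signup form against future changes.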
So why does that all matter? Why does it make sense to build a test first, rather than write your code and then design the test afterwards (you know, the more traditional model)?
Because a test is only as good as its instructions.
Let me give you a plain-English example – compare the two scenarios (your test cases) below:
Write a function that returns the word "Even" when the input is an even number, or "Odd" for an odd number.
There are 12 months in the year, each month has between 28 and 31 days, and each day is 24 hrs long. Write a function that, when the input is a month which has 28 or 30 days, returns the word Even, and when the month has 29 or 31 days – returns the word Odd.
At first glance, the first scenario is much easier to understand (it is also much simpler), but even so it's clear that both would return the same "result", i.e. the word "Even" or the word "Odd". If you wrote the test at the end, it might look the same for either scenario, but WHAT they are testing and HOW you would go about building the solution are completely different.
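To make that concrete, here is one possible solution to each scenario (the function names and the non-leap-year day counts are my own assumptions, chosen for illustration):

```python
# Scenario 1: the test case is about numbers, so the solution works on numbers.
def parity(n):
    return "Even" if n % 2 == 0 else "Odd"

# Scenario 2: the same two words come back, but the test case is about months,
# so the solution needs a completely different shape (non-leap year assumed).
DAYS_IN_MONTH = {
    "January": 31, "February": 28, "March": 31, "April": 30,
    "May": 31, "June": 30, "July": 31, "August": 31,
    "September": 30, "October": 31, "November": 30, "December": 31,
}

def month_parity(month):
    return "Even" if DAYS_IN_MONTH[month] % 2 == 0 else "Odd"

# Identical outputs, completely different things under test:
assert parity(4) == "Even" and parity(7) == "Odd"
assert month_parity("April") == "Even" and month_parity("January") == "Odd"
print("both scenarios pass")
```

A test written after the fact could look identical for both functions, which is exactly why writing it first – against the actual requirement – matters.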
And that’s why TDD is so important! If you know what the end result is, you can then build your solution in such a way as to focus on satisfying a specific result. It reduces code complexity, ensures that engineers know exactly what is expected of the functionality they're creating, and gives a measurable guideline on what "success" should look like – in this case, returning "Even" or "Odd" correctly.
It's like performing requirements gathering or a discovery phase at the start of a major project. No one would consider committing to a six-plus-month project without first scoping what the project was seeking to achieve and putting in place a roadmap with key deliverables to allow progress to be tracked. TDD allows that same approach to be taken with individual user stories or tickets, ensuring that time isn't wasted barking up the wrong tree, and producing incredibly efficient code. (Plus, it massively reduces bugs.)
Driving Hiring With TDD?
Having spent half an hour discussing all of the above, and even putting some of the principles into practice on my Code Wars account (handily boosting me to the top of the Talent Point leaderboard!), something else clicked for me. Defining what "success" should look like... ensuring processes are deliverable and achievable from the very start... using common sense to pick a task apart and determine its core requirements...
Wait a minute, isn't that what I do for a job?!
It turns out that the fundamental reasoning behind TDD is something anyone at Talent Point would instantly recognise, as are the key benefits to the approach. A TPTDD, perhaps, or even test-driven hiring... it'll probably never catch on!
For instance, just take how we approach planning a hire: the first step is to sit down with the customer and scope out the baseline requirements for the role. Even our language shares common trends with TDD, because that's exactly the first step a TDD practitioner would use as well. Once we have a firm understanding of the problems the role will help our customer solve, the actual core reason for the hire if you will, then we move on to the next step: breaking it down into the smallest parts.
We work with the Hiring Manager to define the exact criteria that an applicant will need to fulfil to not just get the job, but succeed in the role. That means throwing out or refining any requirements that are vague, unnecessary or unrealistic; much like coding using TDD, we're forced to critically evaluate what we're doing and ensure that it is actually reasonable.
Once we've defined our smallest parts, we then work out what success looks like – in other words, how we would test for the perfect applicant! That takes the form of screening questions, detailed information in our Campaign Briefs, and defining the interview process, but the steps are effectively intertwined, so they can be considered parts of the same test. The entire hiring campaign is then built around these tests, with a successful "pass" state meaning a hire has been made.
So that covers the first and second steps of TDD: define and test. But what about the third? What about refactoring, or improving your code once it passes the test? You guessed it, we do that as well!
Through review sessions and retrospectives, we critically evaluate how the campaign ran and look for ways to make it more efficient during future iterations. The idea that every hire should be repeatable is pretty integral to our service, so it's important that we're always tweaking and updating our campaigns to ensure they are as optimised as possible.
We can even take this a step further and look at another plain-English example, except instead of giving you two coding conundrums, we'll take a look at two approaches to planning a tech hire:
Customer X is looking for a Software Engineer who follows best practice.
Customer X is looking for a Software Engineer who can give specific examples of situations where applying SOLID design principles has helped them and their team become better coders.
Now that first example is fine, but very generic; it's not clear what you're actually testing for. The second one looks a lot more like a TDD test case; it's clear about what success will look like. In other words, it’s not enough to know what you want, you have to know how this person is going to demonstrate that they pass your “test case”. That's the first step of every role or campaign that we manage, and it aligns very nicely with the thinking behind TDD!
So when Talent Point help our customers create crib sheets to better interview for a technical hire, that is exactly what we are doing: we are TDDing the interview! We start by figuring out the end result first (what the right answer or coding exercise is) and working our way backwards from there to “build” the solution (i.e. the right interview questions or the right tech test).
Marge has highlighted yet another case where an idea developed within the tech sector can be more broadly applied in other industries. We've already blogged about how Agile methodologies should be adopted universally, and we'll be following up in the future with a look at how another common part of software development is a core concept of every hiring model – who knows what else there is still waiting to be discovered! If that sounds interesting, then be sure to keep an eye on the blog over the coming weeks.
Top illustration sourced and created with: Vecteezy
Test-driven development icon by Richard Slater from the Noun Project