Getting Started with TDD: A Practical Guide to Beginning a Lasting Practice

To TDD, or not to TDD; that is the question.
Test-driven development (TDD) can be a controversial subject. Numerous books and articles praise it, and many mentors and managers encourage it, or even require it. Nevertheless, newcomers to TDD often experience an unfortunate sequence of events:
- Developer begins researching TDD.
- Developer finds a plethora of materials evangelizing TDD as fantastic, a great way to produce quality code.
- Developer sees that the materials describe TDD as a series of strict steps to follow. Generally speaking, these steps are known as the “Red-Green-Refactor” cycle, requiring a TDD’ing developer to write a failing test first, then to add just enough production code so the test passes, and then to refactor the code if needed. This process continues until the production code has enough test coverage.
- Developer tries TDD, attempting to adhere to the steps.
- Developer gets frustrated because this process is difficult and not going as advertised.
- Developer sours on TDD, telling others “it just isn’t for me.”
Having seen many developers experience this and swear off TDD early in their attempts, I’ve come to believe that the problem is not the developer, nor is it TDD per se. Instead, the problem is the ever-present myth that TDD requires adherence to a rigid procedure. It’s all well and good to suggest on paper that a developer should always start by writing a failing test first. But as always, it is necessary to contend with reality.
What if the developer is new to the programming language at hand? What if the developer is unfamiliar with the relevant testing framework? What if the developer is under time pressure from their job? So much effort is already taking place under any of these circumstances. To advise a TDD newcomer to adhere faithfully to the “Red-Green-Refactor” cycle under any of these scenarios seems unfair and bound to end badly.
Suggesting a Change in Approach
So what is the solution? I think it’s twofold. First, rather than letting the myth of strict, procedural TDD dominate, developers should view TDD as a practice specific to an individual. Like playing piano, meditating, weight lifting, or any other practice, a developer should start based on their current skill level, without judgment or angst from unreasonable standards. With this approach, the developer can take a healthy view of strict TDD: it can be a particular standard or ideal to work toward over time, but it is not the be-all and end-all. Adherence may or may not be achieved. What matters is the continued, consistent practice of TDD and the resulting improvements to the codebase and the developer’s skills.
Second, experienced developers should provide practical suggestions for how to start that practice. These suggestions should keep in mind those who are trying simultaneously to learn a language or framework. Toward that end, this article has two parts. Part 1 has tips that I wish I heard when, as a new developer, I was told I had to start TDD’ing. Part 2 walks through an example of applying those tips when thinking about implementing a hypothetical feature. The example uses pseudo-code to focus on the thought process instead of a strict implementation.
Who Is This Article For?
My wish is that both Part 1 and Part 2 below offer some value to any developer, of any skill level, curious about TDD. This wish is tempered by a few realities:
- As just mentioned, the tips here have a retrospective bent: “what I wish I heard…as a new developer.”
- In order to have code examples in Part 1, I had to pick a language. I picked JavaScript.
- In an attempt to avoid over-abstraction and offer something relevant to a developer’s day-to-day work, I wanted Part 2 to provide examples within the context of a popular language framework. I picked React.
As a result, I suspect that readers who will get the most out of this article meet the profile of being new to TDD with a bit of JavaScript and React experience. That being said, I tried to keep Part 1, discussing tips for a TDD practice, general enough so that the use of JavaScript there is incidental. Hopefully any developer interested in starting or continuing a TDD practice will derive some value there, regardless of the developer’s preferred language. For a non-JavaScript, non-React developer, Part 2 is a harder sell, but I’d suggest giving it a quick gloss nonetheless, especially if additional discussion of the tips, regardless of context, would help make them more concrete.
A few other things to keep in mind: the article assumes that the reader has some general familiarity with concepts such as unit tests, integration tests, mocking, TDD, and SOLID principles. The article does not review these concepts in detail in an attempt to remain at a reasonable length. In terms of the terminology used below, “test code” refers to the unit or integration tests a developer plans to run, while “production code” means the code that the tests are testing.
Part 1: Practical Tips for Starting a TDD Practice
To start, here is a list of the tips I wish I had heard when, as a new developer, I started making an attempt at TDD.
- It’s OK to think first! You should think first!
- It’s OK if writing a test first means writing only the test descriptions, without anything else.
- Before writing production code, consider how to organize your code, as much as possible, into pure functions.
- Don’t forget the result of TDD: Test coverage that will never be 100%.
- Before writing production code, consider whether you can put code that you don’t own inside a wrapper.
- Before writing production code, consider whether you can use dependency injection to create a complex piece of functionality out of smaller pieces of functionality.
- With anything responsible for the display of UI, consider whether it can be separate and humble.
Each of these tips is discussed in turn below.
1) It’s OK to think first! You should think first!
When TDDing, you do not have to write a failing test as the very first thing you do. Picture the folks who first came up with the idea of TDD and the folks who really advocated for it during the 1990s. They are amazing programmers. They know programming languages, testing frameworks, and programming patterns like the back of their hands. Their deep skills and experience give them the mental capacity and intuition to simply sit down and write failing tests first. And they love where it takes them.
So that was the first step they strongly advocated for and how they sold TDD to the mainstream. For those attempting TDD for the first time, their skill and intuition may not match those of the original advocates. That’s perfectly fine. Skill and intuition improve with time. So if you’re trying TDD and finding that it’s not intuitive to simply sit down and write a failing test first, it’s totally OK if the first thing you do is think.
2) It’s OK if writing a test first means writing only the test descriptions, without anything else.
Let’s say you have been given the requirements of a feature to implement. You have a rough sense of how to begin, but you’re not 100% sure. You recognize that your implementation could change as you go.
Traditional, strict TDD says the first thing to do is write a failing test first. But developers often dread the thought of writing tests in such a scenario. If their assumptions and production code change drastically in the course of implementing the feature, any previously written tests could become worthless. The developer would then have to trash all their previous efforts spent on writing tests and come up with completely new tests, costing more effort and time.
Rather than struggling to take a “test first” approach based on uncertain, volatile assumptions in the first place, you can simply jot down a series of test descriptions for the tests you want to have. You should be able to do this using the conventions of the testing framework at hand, even if you are just getting to know that framework.
For example, below are some test descriptions in JavaScript, assuming Jest as the testing framework. Some of them employ the “Given/When/Then” approach, or a riff on it.
```js
describe('the product page', () => {
  test(`given a successful fetch of product data, when the user visits the product page, the user sees the product description`, () => {
    // Nothing here intentionally
  });

  test(`given an unsuccessful fetch of product data, when the user visits the product page, the user sees an error popup`, () => {
    // Nothing here intentionally
  });
});

describe('calculateProductCost()', () => {
  test('given a user in New York, the product cost includes the correct sales tax', () => {
    // Nothing here intentionally
  });
});
```
The tests don’t serve as true tests at this point, of course. But writing out the test descriptions in bulk this way accomplishes several important things:
- you thought first
- you did a variation of strict TDD, writing a test first, at least in part
- you thought about how to implement your production code in a testable way
- you now have a checklist of the features you need to implement in a testable way
- you have started some documentation for future developers (including yourself) on the behavior of the components under test
From there, you can start to work on your production code, going back to fill in the tests when confident with how you will implement the production code.
But what if writing the tests turns out to be difficult? That’s a code smell indicating your production code may need a better design. The next few tips should help minimize such predicaments.
3) Before writing production code, consider how to organize your code, as much as possible, into pure functions.
This tip takes a page out of functional programming. Functional programmers love “pure functions.” A pure function is a function that will always return the same output if you give it the same input (i.e., arguments). This is not to say that pure functions return the same thing all the time. It is to say they are unflappably consistent, given the inputs. They are akin to a mathematical calculation in that way. Here are some very simple examples, again in JavaScript.
```js
export function addThree(num) {
  return num + 3;
}

export function append(array, item) {
  // Return a new array rather than mutating the one passed in
  return [...array, item];
}

export function update(object, name, value) {
  // Return a new object rather than mutating the one passed in
  return { ...object, [name]: value };
}
```
Note a few things about these examples:
- If you feed each one the same arguments over and over again, it will return the same result.
- Each function generally deals with data manipulation or calculations. Each takes as inputs everything it needs to do that manipulation or calculation.
- If instead, the body of a function relied on a variable declared outside the function, something else in the system could manipulate that variable. The function’s return value could be inconsistent, depending on what happened elsewhere in the system to that variable. The function would then fail to be a pure function.
- Writing tests for these pure functions should be straightforward, because the functions are so predictable. The tests are simply examples asserting that each pure function produces the expected output.
The last point is why you should strive, as much as possible, to delegate your implementation details to pure functions. It makes testing easier. Of course, the included examples are simple and contrived. You may be working on something that involves complex data manipulation or calculations, using strings, arrays, dictionaries, and all other sorts of data structures, maybe even all at once. Perhaps you are processing data that your app fetched from an external API. Perhaps you are trying to figure out how to represent a move on a chess board.
No matter what the task, the key is to ask, before writing any test or any production code, “Where are the hotspots ripe for doing something with a pure function, one that will always return the same value if you give it the same arguments?” If you can organize a significant portion of your code that way, then you should be able to get a significant portion of your code under test in a fairly straightforward way.
Even better would be to break any large pure functions into smaller pure functions that follow the single responsibility principle as much as possible. This will make them ripe for reusability and for combining them in novel ways in the future.
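As a hypothetical sketch of that idea, the `calculateProductCost()` function from the earlier test descriptions could be built from smaller pure functions. The helper names and signatures below are assumptions made purely for illustration:
```js
// Hypothetical helpers, each pure and single-purpose.
export function applyDiscount(price, discountRate) {
  return price * (1 - discountRate);
}

export function addSalesTax(price, taxRate) {
  return price * (1 + taxRate);
}

// Still pure: the same arguments always produce the same total,
// and each small helper can be tested (and reused) on its own.
export function calculateProductCost(basePrice, discountRate, taxRate) {
  return addSalesTax(applyDiscount(basePrice, discountRate), taxRate);
}
```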
4) Don’t forget the result of TDD: Test coverage that will never be 100%.
The previous tip happily mentioned getting “a significant portion of your code under test.” This reference to a fraction of the whole brings us to a bit of philosophy: Why engage in the practice of TDD in the first place? In my opinion, one answer is that it hones your intuition for how to maximize test coverage of your production code while minimizing pain from writing tests.
“Coverage” here means both surface area and insurance. In terms of surface area, you want tests to protect as much of your production code as possible, just as you’d want a fire insurance policy to cover as many square feet of your house as possible. In terms of insurance, the protection offered by running the tests includes:
- reassurance that your code actually does what you intend
- an alarm system that goes off if someone breaks production code with future changes
- documentation of the intent behind the production code and how to consume it
Nevertheless, 100% test coverage, protecting every nook and cranny of code, is impractical, if not impossible, given the time limitations of a professional developer with real-world stakeholders. As with any practice, perfection should not be the goal, and that is totally okay. A lofty-enough goal for your own TDD practice is striving to maximize test coverage with due deference to the many variables at play under your own unique circumstances. These include the complexity of the production code, external time pressure, and your own familiarity with the language and frameworks at hand.
Jumping back to the previous tip, once you organize your production code, as much as possible, into pure functions and add tests for those functions, pat yourself on the back. You’re striving to maximize test coverage and leaning on pure functions to help minimize testing pain.
But in most systems and for most tasks, there is code that simply can’t be shoehorned into pure functions. What else can you do to increase test coverage? The next few tips detail some options.
5) Before writing production code, consider whether you can put code that you don’t own inside a wrapper.
One category of code that cannot be coerced into pure functions is code you do not own. Take, for example, the following:
- consuming a third-party library
- making network calls
- reading from or writing to a database
- anything else involving inputs and outputs, including reading files, logging, printing to a console or screen, etc.
Let’s lump such things under the phrase “externally owned interfaces.” Implementing the raw code that makes them work was not your responsibility. You do not own the code, and you have no control over it. Is it your responsibility to test externally owned interfaces, to prove that they work? Arguably no. You should trust them to work. But the issue becomes gray since your code depends on them, and you’re trying to maximize test coverage for your code.
At first blush when writing tests, externally owned interfaces seem ripe for mocking. But sometimes mocking things you don’t own requires complex research to figure out how to do it or even requires introducing additional third-party libraries. Figuring out how to mock these things can consume valuable time and lead to frustration whether or not you are attempting TDD.
Worse still, if you are sprinkling an externally owned interface all over your code base, you could be planting a time bomb that will blow up at some point. What if the owners of the interface make a critical update requiring changes everywhere it is used? What if you need to swap out the interface for another one, due to security vulnerabilities or a needed change in behavior? Direct calls to the externally owned interface all over your code base would leave you with the tedious and error-prone task of making changes throughout your repository and likely updating all of your tests.
Putting a wrapper around externally owned interfaces offers a quick and simple path for dealing with these problems. By “wrapper,” I mean a function that simply calls the externally owned interface. When your code needs the externally owned interface, your code calls only to the wrapper, with no idea of what’s inside. The benefits of this approach include:
- Your tests only have to mock the wrapper, which you can make incredibly simple to mock. You avoid the complexity that can come from directly mocking externally owned interfaces.
- If you have to replace your externally owned interface or change something about it, you only have to do so in one place, the body of the wrapper. This is because the rest of your code calls only the wrapper and knows nothing about its internals. If you have mocked the wrapper in your tests, the tests remain valid and require no updates either.
The key to reaping these benefits is to make sure each wrapper consists purely of code that you do not own. This way, your code can continue to trust the wrapper, the wrappers do not need any tests in their own right, and your tests that mock the wrappers remain reliable. This all happens because the only thing inside the wrapper is something you’d trust anyway if called directly.
Any options or other data required by the externally owned interface can be passed as arguments to the wrapper. Below are some examples, with descriptions embedded in the comments.
```js
//////////////////////////////////
// Example 1: Wrapping a logger //
//////////////////////////////////

/*
 * Below, so long as production code talks only to `logInfo`, you can replace
 * `console.log` with anything you like, such as saving logs to a file or
 * using a third-party logging service. The production code and testing mocks
 * remain unchanged.
 */
export function logInfo(data) {
  console.log(data);
}

//////////////////////////////////////
// Example 2: Wrapping a data store //
//////////////////////////////////////

/*
 * A similar concept applies to the `save` function below. It saves `data` to
 * a specific text file, but you could easily swap out the target for
 * something else, like a database, without changing the production code or
 * the testing mocks. When you need to test procedures that save the data,
 * mocking the `save` wrapper would be straightforward, needing to know
 * nothing about node's `fs` library.
 */
import fs from 'node:fs';

export function save(data) {
  fs.writeFile('/opt/dataStore.txt', data, (err) => {
    if (err) {
      throw new Error(err);
    } else {
      return true;
    }
  });
}

//////////////////////////////////////////
// Example 3: Wrapping network requests //
//////////////////////////////////////////

/*
 * Below wraps JavaScript's fetch API and keeps the wrapper "pure" by passing
 * everything it needs, including a results handler, as arguments.
 */
function logApiResults(data) {
  console.log(data);
}

async function fetcher(url, options = {}, handler) {
  return await fetch(url, options)
    .then((response) => response.json())
    .then((json) => {
      if (handler) return handler(json);
      return json;
    });
}

/*
 * Invocation:
 */
await fetcher(
  'https://www.awesome-api.com',
  { method: 'POST', body: 'Gimmie' },
  logApiResults,
);
```
Ultimately, whether to wrap an externally owned interface is a judgment call. Wrapping can offer the benefits described above, but it can also lead to tedium if everything is wrapped or you are dealing with multiple interfaces from a single library. And you will have to prepare to answer questions from other developers as to why the wrapper is there in the first place. One’s intuition for whether to wrap improves with time. To recap, a few questions that tip the scales in favor of a wrapper are:
- Are you about to use this externally owned interface in several different files, so your code base will become entangled with it?
- Is it likely that your repository will change from using one externally owned library or API to using another?
- Does the wrapper hide a complex interface that the rest of your code does not need to know about?
- And, of course, does the wrapper make testing easier, especially when setting up mocks?
But how could wrapping externally owned interfaces make mocking easier? The next tip, on dependency injection, has some examples.
6) Before writing production code, consider whether you can use dependency injection to create a complex piece of functionality out of smaller pieces of functionality.
Imagine you’re working on a feature, and you’ve already done some thinking and coding. You’ve taken the approach of writing production code and tests for smaller pieces that you will combine to bring about the feature, akin to building blocks. Your pieces are organized, as much as possible, into pure functions with unit tests and pure wrappers. Each seems to follow the single responsibility principle.
Now it is time to combine these building blocks into bigger code units to get closer to completing the feature. In other words, you want to integrate building blocks together. But how should TDD be employed? What should these “integration” tests look like?
If the programming language or testing framework at hand is new or new-ish to you, this may warrant thinking about the shape of your production code first. There are different paths you can take. You might prefer one to the other, depending on your experience level with the programming language or testing framework.
Say the feature involves retrieving data from a particular URL, updating the data object with a `source` property, and then saving the update to a file. One option to integrate your building blocks is to import the building blocks and create a function that calls each one directly, passing the results of one onto the next. It might look something like this. (Note: the code below reuses code from examples above.)
```js
import { fetcher } from './fetcher';
import { update } from './update';
import { save } from './save';
export async function saveApiData() {
const source = 'https://www.awesome-api.com';
const data = await fetcher(source);
const updatedData = update(data, 'source', source);
save(updatedData);
}
```
When writing a test, you could come up with something like this, which relies on Jest’s `mock` API.
```js
import { fetcher } from './fetcher';
import { save } from './save';
import { saveApiData } from './saveApiData';
jest.mock('./fetcher', () => ({
__esModule: true,
fetcher: jest.fn(),
}));
jest.mock('./save', () => ({
__esModule: true,
save: jest.fn(),
}));
describe('saveApiData', () => {
test('saveApiData saves data from the source with an updated source property', async () => {
const apiData = { user: 'betty@betty.com' };
const source = 'https://www.awesome-api.com';
fetcher.mockImplementation(async () => apiData);
await saveApiData();
expect(save.mock.calls[0][0]).toEqual({ ...apiData, source });
});
});
```
If you’re just starting to TDD and don’t know Jest very well, coming up with this test might be challenging. You’ve got to know exactly what the testing framework is asking for and exactly how to get the mocks to give you what you want. I’ve been writing Jest tests for over two and a half years. I still have to remind myself about the proper Jest incantations to get such things to work. Writing such a test first, and even thinking about it, is difficult.
So what if you took a different approach to your production code, using dependency injection to make the integration code more abstract? For example, you could pass in the `source`, `fetcher`, and `save` as arguments. While you’re at it, you could rename the function to something more generic, from `saveApiData` to `saveData`.
```js
import { update } from './update';
export async function saveData(source, fetcher, save) {
const data = await fetcher(source);
const updatedData = update(data, 'source', source);
save(updatedData);
}
```
What does this do for your tests? It makes them much easier to write, I’d posit, and therefore easier to do some variation of TDD. Your tests wouldn’t even have to use Jest mocks. You could simply “roll your own mocks”, such as the `fetcher` and `save` functions below.
```js
import { saveData } from './saveData';
describe('saveData', () => {
let argumentPassedToSave;
const apiData = { user: 'betty@betty.com' };
const source = 'https://www.awesome-api.com';
async function fetcher(url) {
return apiData;
}
function save(data) {
argumentPassedToSave = data;
}
test('saveApiData saves data from the source with an updated source property', async () => {
await saveData(source, fetcher, save);
expect(argumentPassedToSave).toEqual({ ...apiData, source });
});
});
```
There are a few important things to note about this test:
- The test requires no special peculiarities or unique function calls from the testing framework, such as the incantations Jest needs to set up mocks. In other words, in this example, the test is no longer coupled to the Jest mocking API. If you decide to move your repository’s testing framework from Jest to Vitest, the more tests you have like this, the easier the job will be.
- By taking advantage of the prior tip above and wrapping `fetch` within `fetcher`, the test avoids all the additional complexity that can come with mocking thorny externally owned interfaces such as `fetch`. The test is cleaner and easier to understand at first glance.
- The test does not require importing other file dependencies or knowing where they are in the repository. The test does not change, for example, if production code dependencies such as the `fetcher` and `save` functions move to different files or directories. The only production code that the test requires is the function being tested.
To me, the dependency injection approach results in a test that is easier to write, read, and understand compared to the first way using Jest mocks. And when you think about your tests before you write them, this test is much easier to think about at the outset. If you agree with all this, then dependency injection is a good tool to keep in your belt during the course of your TDD practice.
One further thing to note: as an added benefit, dependency injection has left the production code more flexible and reusable. You could call `saveData` with any `source` or `fetcher` arguments, such as ones that would cause `saveData` to read from a particular file. You could also call `saveData` with any `save` argument, such as one that would save the data to a custom database. The `saveData` function would be none the wiser and would not need to change. All these reasons are why dependency injection is good to keep in mind.
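For instance, a sketch of that kind of reuse might look like the following. The `readFileAsJson` and `saveToDb` wrappers are hypothetical, invented here just to show that `saveData` itself stays untouched:
```js
import { saveData } from './saveData';
// Hypothetical wrappers: one reads a local JSON file, the other writes
// to some database client. Neither exists in the examples above.
import { readFileAsJson } from './readFileAsJson';
import { saveToDb } from './saveToDb';

export async function archiveLocalProducts() {
  // Same saveData as before: the "source" is now a file path, the
  // "fetcher" reads the file, and the "save" target is a database.
  await saveData('/opt/archive/products.json', readFileAsJson, saveToDb);
}
```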
7) With anything responsible for the display of UI, consider whether it can be separate and humble.
This last tip can be seen as an extension of at least three other tips: using pure functions, wrapping, and dependency injection. Roughly speaking, it’s the humble object pattern, or at least a close cousin of it. It applies when a developer cannot completely delegate the responsibility of displaying data to an externally owned interface.
For example, when your application responds to a browser’s network request with JSON data or when your application `console.log`s some data, your code is delegating the responsibility for displaying that data to something else, an externally owned interface. Something you do not own will ingest the data and display it to a user. The particulars and implementation of that display are outside your control.
Things are not always so clear cut, however, especially when doing frontend work. Take typical React feature work, for example. Frequently it involves fetching data, styling the data to be displayed, handling what will happen when the user clicks a particular button, etc. It is simply not an option to hand off data and delegate its display carefree to something else.
The goal of this tip is to get as close to delegation as possible in a scenario like this, in a way that is as trustworthy as possible. The gist is to strive for a separation between the code that gets or manipulates data and the code that displays data. And the code that is responsible for displaying data should be “humble”: it knows as little as possible about the specifics of the data it displays and knows nothing about where the data came from. For frontend work, this includes knowing nothing about the actions or side effects it triggers when a user clicks on something it displays, such as a button or an image. Everything the humble code does know about is an abstraction.
It’s the humility of the humble code that allows it to reflect three of the testing tips above: pure functions, wrappers, and dependency injection. Let’s take a simple example, using a React component whose responsibility is handling the display of some offer to the users of a webpage. Say the component is called `OfferLayout`:
```js
export function OfferLayout({ headingText, offerText, buttonText, callback }) {
  return (
    <>
      <h1>{headingText}</h1>
      <p>{offerText}</p>
      <button onClick={callback}>{buttonText}</button>
    </>
  );
}
```
Note three things about this example. First, `OfferLayout` behaves much like a pure function: given the same props, it renders the same output, and it knows nothing about where those props come from or what the `callback` actually does. Everything it deals with is an abstraction handed to it.
Because `OfferLayout` is so humble and predictable, writing tests for it is straightforward:
```js
import { describe, expect, test } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { OfferLayout } from './OfferLayout';

describe('<OfferLayout />', () => {
  test('it renders the given text', () => {
    const props = {
      headingText: 'Click the button!',
      offerText: 'By clicking Submit, you agree!',
      buttonText: 'Submit',
    };
    render(<OfferLayout {...props} />);

    const renderedHeading = screen.getByRole('heading');
    expect(renderedHeading).toBeVisible();
    expect(renderedHeading).toHaveTextContent(props.headingText);

    const renderedOffer = screen.getByText(props.offerText);
    expect(renderedOffer).toBeVisible();

    const renderedButton = screen.getByRole('button');
    expect(renderedButton).toBeVisible();
    expect(renderedButton).toHaveTextContent(props.buttonText);
  });

  test('when the user clicks the button, the callback is called', async () => {
    const user = userEvent.setup();
    let callbackCalled = false;
    const props = {
      callback: () => {
        callbackCalled = true;
      },
    };
    render(<OfferLayout {...props} />);

    await user.click(screen.getByRole('button'));
    expect(callbackCalled).toEqual(true);
  });
});
```
As an illustration of how the tests have become easier to write, note that the test file above uses Vitest instead of Jest. The tests would be virtually identical using either testing framework. Only the first import statement, importing from Vitest, would have to change. Leaning on the anticipation of a humble, pure-function-like component such as `OfferLayout` has kept the tests simple and decoupled from any particular framework’s mocking API.
Now let’s look at a component that uses `OfferLayout`:
```js
import { OfferLayout } from './OfferLayout';

export function RaffleOffer() {
  const props = {
    headingText: 'Enter raffle to win!',
    offerText: 'By clicking Submit, you agree to our terms and conditions.',
    buttonText: 'Click to enter!',
    callback: () => console.log('Offer accepted!'),
  };

  return <OfferLayout {...props} />;
}
```
`RaffleOffer` is the second thing to note. It owns the specifics, the offer text and the callback, and it simply injects them into `OfferLayout` as props, a small-scale form of dependency injection. `OfferLayout` displays whatever it is handed and triggers whatever callback it is given, knowing nothing more.
And this brings up the third thing to note about `OfferLayout`: because it is humble, it is also flexible and reusable. Other components could supply entirely different text and callbacks without `OfferLayout` changing at all.
Would anything about the discussion above change if, say, `OfferLayout` accepted `children` instead of an `offerText` prop?
```js
export function OfferLayout({ headingText, children, buttonText, callback }) {
  return (
    <>
      <h1>{headingText}</h1>
      {children}
      <button onClick={callback}>{buttonText}</button>
    </>
  );
}
```
Not really. `OfferLayout` is still humble: it renders whatever children it is given and still knows nothing about where they come from or what the callback does. The tests would change only slightly, passing children in place of the `offerText` prop, and components like `RaffleOffer` would pass their offer text as children instead.
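To make that concrete, here is a sketch of how `RaffleOffer` might look against this children-based version of `OfferLayout`. It is only a variation on the earlier example, with the offer text moved into children:
```js
import { OfferLayout } from './OfferLayout';

export function RaffleOffer() {
  return (
    <OfferLayout
      headingText="Enter raffle to win!"
      buttonText="Click to enter!"
      callback={() => console.log('Offer accepted!')}
    >
      {/* The offer text now arrives as children instead of a prop */}
      <p>By clicking Submit, you agree to our terms and conditions.</p>
    </OfferLayout>
  );
}
```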
Part 2: Applying The Tips To A (Hypothetical) Task, Using React
In an attempt to draw a practical connection between the tips in Part 1 and the day-to-day tasks faced by developers, Part 2 discusses the tips in the context of implementing a hypothetical feature. The discussion represents a potential line of thought that considers all the tips above while still at the earliest stage of a potential TDD session (“It’s OK to think first! You should think first!”). The line of thought evaluates how different coding approaches could help to maximize test coverage for the feature while minimizing testing pain during development. Fair warning: Part 2 leans on React much more than Part 1, so admittedly, Part 2 may not be for everyone.
This discussion uses pseudo-code when illustrating an idea. It does not go into a long, drawn-out example demonstrating “the right way to TDD.” Instead, the discussion is meant to serve as food for thought on how to set yourself up for a successful TDD practice when you get a feature to implement. Once you have thought first and considered how to maximize test coverage while minimizing testing pain, you are in a position to TDD to the degree your experience and skill allow. This could be anywhere from just writing the test descriptions to practicing strict TDD with “Red-Green-Refactor.”
Here’s the assignment: There’s a React web app that displays information about the characters in different shows. In the immediate, the stakeholders want a component that will display character data for the show Avatar: The Last Airbender. When a user visits the page for Avatar, the app should do the following:
- Make a call to a third-party API to get character information in JSON format.
- Transform the data, for example, by combining some of the data from different JSON fields.
- Display the character data in an awesome looking table. It will have a good amount of styling to make it look awesome. The stakeholders want animal characters to be listed first.
- Until the character data is fetched and transformed, display the word “Loading…” to the user.
- Unrealistically, ignore any error cases, since this is a fairly simple example and a high-level discussion.
Here are a few other relevant factors:
- Right now, the app has to display data for only the show Avatar: The Last Airbender. But soon, the app will have to call various third-party APIs to get data for different shows.
- It’s possible that each third-party API will return the character data in a unique format or structure. The code will have to do some transformation of the data to mold it to a standard expected by the app.
Let’s think first: how can the code implement this in a way that maximizes test coverage while minimizing test pain?
The code could lump all of this into one component, something like this:
```js
import { useState, useEffect } from 'react';

export function AvatarPage() {
  const [dataToDisplay, setDataToDisplay] = useState(null);

  useEffect(() => {
    async function getData() {
      const rawData = await fetch(
        'https://api.sampleapis.com/avatar/characters',
      );
      const jsonData = await rawData.json();
      // Transform jsonData with loops, concatenation, etc.
      const transformedData = jsonData; // placeholder for the real transformation
      setDataToDisplay(transformedData);
    }
    getData();
  }, [setDataToDisplay]);

  return (
    <>
      {dataToDisplay ? (
        // Nested classes and tags here to make the table look awesome.
        //
        // Logic here looks to `dataToDisplay` and ensures animals are
        // displayed first.
        <table>{/* ... */}</table>
      ) : (
        <p>Loading...</p>
      )}
    </>
  );
}
```
With this approach, what would the tests look like? At a minimum, they would be mocking `fetch` to return some hard-coded data expected from this particular character API (`https://api.sampleapis.com/avatar/characters`).
Then the test could check the UI for expected Avatar-related strings when the fetch mock returned data. The tests would have to make sure that animals are listed first somehow, and that the Loader was visible if the `fetch` mock didn’t return any data. The test coverage might be decent, covering this component as a whole under its different states. Mocking `fetch` might cause a good amount of testing pain, however, due to the hoops that have to be jumped through.
It is possible this testing pain would spread throughout the code base when adding pages for additional shows, if developers start copying this component and its tests for the new pages, tweaking them as needed to work with the unique data structure coming from each different API. All in all, a summary of the test coverage and test pain, if you lump your implementation into one component like above, seems to be:
```js
// Perhaps decent test coverage as a *whole*, with some testing pain
export function AvatarPage() {
```
Are there ways the situation could be improved? Let’s consider some of the tips in this article.
- Before writing production code, consider how to organize your code, as much as possible, into pure functions.
The data transformation seems like a good candidate for being extracted into one or more pure functions. The structure of the data coming from the character API should be apparent, as well as the structure of the new data to create using the character API’s data.
So it should be fairly straightforward to write a test for a function that takes the expected API data as an argument and returns the transformed data. This pure function might have a name like `transform`. Getting test coverage for the portion of the code that represents `transform` should be doable with minimal testing pain.
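As a rough sketch, and assuming (purely for illustration) that the character API returns `name` and `affiliation` fields, `transform` and a first test for it might look like this:
```js
// A pure function: same API data in, same display-ready data out.
export function transform(characters) {
  return characters.map((character) => ({
    ...character,
    displayName: `${character.name} (${character.affiliation})`,
  }));
}

// The test is just an example of input and expected output.
test('transform combines name and affiliation into a display name', () => {
  const apiData = [{ name: 'Aang', affiliation: 'Air Nomads' }];
  expect(transform(apiData)[0].displayName).toEqual('Aang (Air Nomads)');
});
```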
- Before writing production code, consider whether you can put code that you don’t own inside a wrapper.
The application does not own `fetch` (or whatever is responsible for making network calls). The application also does not own the `.json()` method that streams data from the network response for the application to consume. These could be wrapped together in a single function, say, `fetcher`. Anything `fetcher` needs, such as a URL, could be passed as arguments when calling `fetcher`.
Because `fetcher` consists solely of externally owned interfaces, the application does not need test coverage for it. You can simply trust that `fetcher`’s externally owned interfaces do their jobs.
```js
export async function fetcher(url) {
const rawData = await fetch(url);
return await rawData.json();
}
```
Tests for code that calls `fetcher` would need to mock `fetcher` at some point, likely relying on a testing framework’s mock API. But at least `fetch` itself is not being mocked, with all of its attendant pain points. The wrapper thus gives some testing pain relief. And as an added benefit, `fetcher` is reusable, potentially providing such relief to other areas of the codebase.
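For example, a minimal sketch, assuming Jest: mocking the `fetcher` wrapper is a single call, with no knowledge of `fetch` itself required.
```js
jest.mock('./fetcher', () => ({
  __esModule: true,
  fetcher: jest.fn(async () => [{ name: 'Aang' }]), // hard-coded stand-in data
}));
```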
- With anything responsible for the display of UI, consider whether it can be separate and humble.
The feature at hand definitely requires some styling decisions. This includes what the table will look like and the decision of whether to show “Loading…” or the table.
All of these UI-related decisions could be contained in a separate component, say, `TableWithLoader`:
```js
export function TableWithLoader({ data }) {
  return (
    <>
      {data ? (
        <table>
          <tbody>
            {data.map((el) => {
              // Awesome table styling here
              return <tr key={el.id}>{/* ... */}</tr>;
            })}
          </tbody>
        </table>
      ) : (
        <p>Loading...</p>
      )}
    </>
  );
}
```
The feature requires that animals appear first in the table. The rough outline of `TableWithLoader` above leaves that ordering concern out on purpose: sorting the data so that animals come first is data manipulation, and it can live with the rest of the transformation logic in a pure function, keeping `TableWithLoader` humble. It simply displays the rows in the order it receives them.
As for testing `TableWithLoader`, the tests should look a lot like the `OfferLayout` tests above: render the component with some hard-coded data and assert that the rows appear, then render it with no data and assert that “Loading…” appears. No mocking of `fetch` or anything else is required.
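A sketch of those tests, assuming each row ends up rendering the character’s name, might be:
```js
import { render, screen } from '@testing-library/react';
import { TableWithLoader } from './TableWithLoader';

describe('<TableWithLoader />', () => {
  test('renders a row for each character when data is present', () => {
    const data = [
      { id: 1, name: 'Appa' },
      { id: 2, name: 'Momo' },
    ];
    render(<TableWithLoader data={data} />);
    expect(screen.getByText('Appa')).toBeVisible();
    expect(screen.getByText('Momo')).toBeVisible();
  });

  test('renders the loader while there is no data', () => {
    render(<TableWithLoader data={null} />);
    expect(screen.getByText('Loading...')).toBeVisible();
  });
});
```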
These musings leave `AvatarPage` looking something like this:
```js
import { useState, useEffect } from 'react';

export function AvatarPage() {
  const [dataToDisplay, setDataToDisplay] = useState(null);

  useEffect(() => {
    async function getData() {
      // No test coverage needed for this wrapper
      const jsonData = await fetcher(
        'https://api.sampleapis.com/avatar/characters',
      );
      // Test coverage good with pain minimized; pure function here
      const transformedData = transform(jsonData);
      setDataToDisplay(transformedData);
    }
    getData();
  }, [setDataToDisplay]);

  // Test coverage good with pain minimized; humble component here
  return <TableWithLoader data={dataToDisplay} />;
}
```
Note that not everything is tested, so test coverage is less than 100%. But pound for pound, most of the feature’s requirements reside in `transform` and `TableWithLoader`, and those pieces can be covered well with minimal testing pain.
One way to increase test coverage would be with an integration test, one similar to the original idea for testing `AvatarPage` as a whole: mock the data fetching, render the page, and assert on what the user ultimately sees. Such a test would give some assurance that all the pieces have been sewn together correctly.
Speaking of sewing things together, there’s one last tip to consider:
- Before writing production code, consider whether you can use dependency injection to create a complex piece of functionality out of smaller pieces of functionality.
If the tests for the feature were limited to the original idea, a single integration test for `AvatarPage`, much of the testing pain would come back: the test would have to mock the network layer and would be coupled to the details of this one show’s API.
`AvatarPage` itself is also hard-coded to a single URL and a single transformation, even though more shows, each with its own API and data format, are on the way.
This suggests reimagining `AvatarPage` with dependency injection, passing `fetcher` and `transform` in as props:
```js
import { useState, useEffect } from 'react';
export function AvatarPage({ fetcher, transform }) {
const [dataToDisplay, setDataToDisplay] = useState(null);
useEffect(() => {
async function getData() {
const jsonData = await fetcher();
const transformedData = transform(jsonData);
setDataToDisplay(transformedData);
}
getData();
}, [fetcher, transform, setDataToDisplay]);
// ...more to come
}
```
This approach removes the need to hard-code a specific URL in the component. We’ll get to how that URL is set shortly. But the critical thing to note now is that this whole component is much more generic and decoupled from any specific show. You might as well rename the component to something more generic: `CharacterPage`.
```js
export function CharacterPage({ fetcher, transform }) {
```
Now thanks to this renaming and dependency injection, the component invites reuse when fetching and displaying character information for different shows.
But going back to `fetcher`, how does `CharacterPage` know which URL to call for a given show? One option is a thin parent component that configures the props for a specific show and passes them down, something like this:
```js
import { CharacterPage } from './CharacterPage';
import { transformAvatarData } from './transformers/avatar';

// Returns a zero-argument fetcher bound to a specific URL
function fetcher(url) {
  return async function () {
    const rawData = await fetch(url);
    return await rawData.json();
  };
}

export function CharacterPageParent() {
  const avatarFetcher = fetcher('https://api.sampleapis.com/avatar/characters');

  return (
    <CharacterPage fetcher={avatarFetcher} transform={transformAvatarData} />
  );
}
```
Another benefit of injecting dependencies into `CharacterPage` is that its tests can simply roll their own `fetcher` and `transform` functions, just as in the `saveData` example from Part 1, instead of reaching for a framework-specific mocking API.
With that, a rough outline of the `CharacterPage` implementation, annotated with where test coverage stands, looks like this:
```js
// Getting test coverage around `CharacterPage` should involve
// minimal testing pain, and the tests will give some assurance that
// the components and React-isms have been properly sewn together.
export function CharacterPage({ fetcher, transform }) {
  const [dataToDisplay, setDataToDisplay] = useState(null);

  useEffect(() => {
    async function getData() {
      // `fetcher` has been tweaked slightly, but it's still pretty much
      // a pure wrapper, with no test coverage needed
      const jsonData = await fetcher();
      // Test coverage good with pain minimized; pure function here
      const transformedData = transform(jsonData);
      setDataToDisplay(transformedData);
    }
    getData();
  }, [fetcher, transform, setDataToDisplay]);

  // Test coverage good with pain minimized; humble component here
  return <TableWithLoader data={dataToDisplay} />;
}
```
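To illustrate that benefit, here is a sketch of a test for `CharacterPage` using hand-rolled `fetcher` and `transform` stand-ins rather than any framework-specific mocking. It assumes, as above, that `TableWithLoader` ends up rendering each character’s name:
```js
import { render, screen } from '@testing-library/react';
import { CharacterPage } from './CharacterPage';

describe('<CharacterPage />', () => {
  test('shows the loader, then the transformed character data', async () => {
    // Hand-rolled stand-ins; no framework-specific mocking required
    async function fetcher() {
      return [{ id: 1, name: 'Appa' }];
    }
    function transform(data) {
      return data; // stand-in for the real transformation
    }

    render(<CharacterPage fetcher={fetcher} transform={transform} />);

    // The loader appears first, then the data once the fake fetch resolves
    expect(screen.getByText('Loading...')).toBeVisible();
    expect(await screen.findByText('Appa')).toBeVisible();
  });
});
```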
With all this in mind, you should have a fair degree of confidence that the feature will have good test coverage. It seems the code base will also end up with some good patterns, like the reusable `CharacterPage` component and the `fetcher` wrapper, as more shows get added.
From a TDD perspective, all this thinking, done before writing any production code, is a huge step, arguably the most important one. It keeps testing and test coverage at the forefront when initially considering how to get a feature done. If you take an approach like this, it should leave you in a good spot to practice TDD to the degree you feel comfortable. That might be just writing test descriptions with empty tests, trying “Red-Green-Refactor” with the pure functions, or some other way.
The important thing is that the TDD practice produces good, well-tested code, while meeting you exactly where you are. This way, you are challenged just enough and have increased the odds of continuing your TDD practice going forward.