Transitions are an important tool in our D3 arsenal that we can leverage to help others understand our visualizations. They can be used to draw attention to a particular representation of the data, emphasize change, or highlight the effects of user interaction, among other things. Given these benefits, it’s critical in a test-driven culture like ours that our tests be able to handle transitions.
This blog has touched extensively on our journey with D3, which can be read about here:
The first post mentioned above discusses the challenges involved in testing D3. The D3 API is chained, which makes it difficult to assert that one part of the chain is called correctly. Additionally, D3 often produces hard-to-reason-about output, like the d attribute of a generated path. That post offers tips for unit testing D3, but it does not specifically discuss how to test transitions.
Testing transitions is an added challenge because it’s difficult to consistently simulate time moving forward in tests. D3’s timers behave differently from native timeouts and intervals, so you can’t use the fake timers in a mocking/stubbing library like sinon. In D3 v3.x, we could get around this by stubbing Date.now and using a d3 utility to immediately execute active timers. Here’s what that looks like:
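Since the original snippet didn’t survive, here is a reconstruction of the idea as a self-contained sketch. The names (timer, flush, clockTick) are our own stand-ins: the real helper stubbed Date.now (e.g. with sinon) and drained D3 v3’s timer queue, which we model here with a minimal queue instead of requiring d3.

```javascript
// Fake clock that stands in for Date.now in tests.
let now = 0;

// Pending timer callbacks, mimicking D3 v3's timer queue.
const timers = [];

// Stand-in for d3.timer(callback): register a callback with its start time.
function timer(callback) {
  timers.push({ callback, start: now });
}

// Stand-in for D3 v3's "flush all active timers" utility: run every
// pending callback with the elapsed time, then clear the queue.
function flush() {
  timers.splice(0).forEach(t => t.callback(now - t.start));
}

// The test helper: advance the fake clock, then execute active timers.
function clockTick(ms) {
  now += ms;
  flush();
}
```

With this in place, a test can call `clockTick(duration)` to jump straight to the state a transition would reach after that much time.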
Using the above method, we could write a test like this:
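The original spec is not preserved either, so here is a hypothetical test in the same style, again with a self-contained fake clock instead of real D3 and a plain object instead of a DOM node. The component name (fadeIn) is illustrative, not from the original post.

```javascript
// Minimal fake clock with clockTick, as in the previous sketch.
const clock = (() => {
  let now = 0;
  const timers = [];
  return {
    timer(cb) { timers.push({ cb, start: now }); },
    clockTick(ms) {
      now += ms;
      timers.splice(0).forEach(t => t.cb(now - t.start));
    },
  };
})();

// "Component" under test: fades a node in over 500ms.
function fadeIn(node) {
  clock.timer(elapsed => {
    node.opacity = Math.min(1, elapsed / 500);
  });
}

// The spec: start the transition, force time forward, assert the result.
const node = { opacity: 0 };
fadeIn(node);
clock.clockTick(500); // jump to the end of the 500ms transition
console.assert(node.opacity === 1);
```

The key point is the last two lines: the test controls time explicitly, so it can assert on element attributes after a transition of a specified duration.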
However, D3 v4.0 was a sweeping rewrite that changed the underlying behavior of timers for the better. After the rewrite, time freezes while the page is in the background; when the page returns to the foreground, transitions pick up where they left off. To accomplish this, the rewrite switched from Date.now to performance.now and caches the returned values. This means we cannot simply replace performance.now in our clockTick method above, since flushing the timers does not necessarily invoke performance.now.
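The caching is the crux, so here is a stripped-down paraphrase of the pattern (our simplification, not d3-timer’s actual source): an internal now() memoizes the clock reading until the next animation frame clears it. In a test that never yields to a real frame, the cached value wins.

```javascript
// Simplified model of d3-timer's internal clock caching.
let clockNow = 0;

function now() {
  // Once read, the clock value is cached in clockNow. A real animation
  // frame would reset clockNow to 0; in a test that flushes timers
  // synchronously, that reset may never happen.
  return clockNow || (clockNow = performance.now());
}

const first = now();
// Later calls return the cached value, even though real time has moved on
// and even if performance.now were stubbed in between.
const second = now();
console.assert(second === first);
```

So stubbing performance.now after the cache is primed has no effect, which is exactly the failure mode we hit.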
One strategy we looked into was requesting a new animation frame in the clockTick method. This worked most of the time, but ultimately resulted in flaky tests. We rely heavily on automated testing for our build and deployment processes; introducing flakiness causes build errors and deployment failures that tie up engineering resources and delay getting features, bug fixes, and enhancements to our clients. Flaky tests are simply unacceptable in our culture.
Another strategy we considered was maintaining a fork of D3 for testing that does not cache performance.now. This, however, would have been challenging and expensive to maintain, since D3 v4.0 is composed of several interdependent modules. We would have had to clone several of them and deal with merge conflicts every time we wanted to upgrade to a newer release of D3.
We had hit a wall with our upgrade to D3 v4.0. Rather than continue thinking of ways to replicate the testing approach we already had, we took a step back and revisited our objectives. The purpose of testing is to gain confidence that the code works as expected and to prevent, or at least provide visibility into, regressions. We definitely got that confidence from our fine-grained clockTick-style tests, since they could assert on attributes of elements after a transition of a specified duration. However, using unit tests to confirm that a transition ran for the correct duration provided relatively little value; it was akin to asserting that a hard-coded value in our code was as expected. If there were a way to assert on the state of things after transitions had completed, regardless of timing, that would give us enough confidence that our code works as expected.
With this in mind, we wrote a utility to stub transitions and force them to execute to their end state immediately. We replace transition methods in our tests so they simply return the selection itself. This way, any modifications chained on a transitioning selection happen immediately. For tweening, we had to exert a little extra effort to call the tween function with the end value for each element in the selection. And for start/end callbacks, we invoke them immediately. This may not be comprehensive yet, but we’ve been extending it as we encounter unhandled cases. Here’s the code:
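The utility itself didn’t survive extraction, so below is a self-contained reconstruction of the idea. Sel stands in for a D3 selection over plain objects; the real utility stubs d3.selection.prototype.transition (e.g. with sinon), but the replacement has the same shape described above: return the selection itself, no-op the timing methods, run each tween once at its end value, and fire start/end handlers immediately. This is our sketch, not Wealthfront’s actual code.

```javascript
// Stand-in for a D3 selection whose transition() has been stubbed.
class Sel {
  constructor(nodes) { this.nodes = nodes; }

  // attr applies a value (or value function) to every element immediately.
  attr(name, value) {
    this.nodes.forEach(n => {
      n[name] = typeof value === 'function' ? value.call(n, n) : value;
    });
    return this;
  }

  // Stubbed transition(): no timers are scheduled, the selection itself
  // is returned so chained modifications happen immediately.
  transition() { return this; }

  // Timing controls become no-ops so chained calls keep working.
  duration() { return this; }
  delay() { return this; }
  ease() { return this; }

  // Tweens: call the factory per element and invoke the returned
  // interpolator once with the end value (t = 1).
  tween(name, factory) {
    this.nodes.forEach(n => {
      const interpolate = factory.call(n);
      if (interpolate) interpolate.call(n, 1);
    });
    return this;
  }

  // start/end callbacks are invoked immediately.
  on(type, handler) {
    if (type === 'start' || type === 'end') {
      this.nodes.forEach(n => handler.call(n));
    }
    return this;
  }
}
```

With this shape, `sel.transition().duration(300).attr('opacity', 1)` leaves every element at opacity 1 the moment the chain runs, with no clock involved.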
Here’s how the earlier test would look using our new utility:
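Since the original spec is missing, here is a hypothetical version in the same spirit, using a minimal hand-rolled stub in place of the real utility. With transition() returning the selection itself, the test needs no clockTick and no fake clock at all.

```javascript
// Minimal selection whose transition methods are already stubbed.
const selection = {
  node: { opacity: 0 },
  transition() { return this; },   // stub: the "transition" is the selection
  duration() { return this; },     // stub: timing is a no-op
  attr(name, value) { this.node[name] = value; return this; },
};

// Component under test (illustrative): fades its selection in over 500ms.
function fadeIn(sel) {
  sel.transition().duration(500).attr('opacity', 1);
}

// The spec: assert on the end state, with no time manipulation at all.
fadeIn(selection);
console.assert(selection.node.opacity === 1);
```

The assertion is the same as before, but the test no longer encodes the transition’s duration, which is exactly the detail we decided provided little value to verify.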
In the end, we landed on an approach to testing transitions by following the same path we take when building features: clarifying objectives, establishing requirements, and weighing tradeoffs. Along the way, we evaluated and discarded an established testing pattern while still maintaining confidence in our tests. This let us move forward shipping delightful features on top of the improved D3 library.