Over the years, I’ve tested hundreds of players (if not more). This includes U10s in talent ID settings, pre- and post-pubescent juniors in academies, and even professional players competing on tour. Like most coaches, I tested players at predetermined intervals throughout the year. My process has since changed.

Given the research on the subject, my personal experiences, and conversations with elite coaches in various tennis settings (and in many other sports), traditional testing practices have essentially been thrown out the window.

So what are coaches doing now? They’re monitoring. You may be wondering how that differs from testing; let us explore.

Traditional Testing

As mentioned, testing has traditionally been scheduled at various pre-selected times of the year. In junior settings, the first testing session of the year happens in September, after the summer season and usually after a short rest period. Then some very organized coaches will schedule another testing session in the middle of the fall and one more just prior to the holiday break.

The process looks something like this. Coaches run players through a battery of tests - performing one after the other. Oftentimes, players never even see the results themselves, let alone how their scores compare to those of their peers. In most settings, these testing periods are so far apart that we don’t really know whether our training intervention was successful or which factors contributed to the results. Did they improve because of training? Or a growth spurt? Did their scores diminish because of fatigue from a previous session? Or perhaps long-term overtraining played a role?

Other factors are also at play. How’s their nutrition? Are they in the middle of exam season? Is sleep hygiene an issue? Did they recently play a tournament?

And lastly, does anyone even see the testing results? Ok, good coaches do present the results to the rest of the coaching team, players and even parents. But from what I’ve seen, that’s NOT the norm. A colleague of mine - who was working for a federation with some of the top U14 players in the country - told me he was waiting for testing results for months! Yes, months!

As you can see, the way testing has traditionally been done is anything but effective.

The approach at Mattspoint differs - and I’ll share it below.

Training is Testing and Testing is Training

Everything a player does is a test. Their strength workout on Wednesday is a test. Their flexibility session on Friday morning is a test. Their velocity-specific serve practice is a test. Their ITF tournament is a test - actually, a tournament is more like an exam...but you get the point.

You see, we should think of training as a daily and/or weekly monitoring process - one where the results feed back into our training programs, forming what’s called a ‘feedback loop’.

Do this instead of quarterly testing and you’ll gain WAY more insight into your player - and into what truly requires attention. At the end of the day, it’s impossible for anyone to predict how a player will adapt to a training program - especially one programmed 12 weeks in advance. That’s another reason monitoring fits into planning so nicely - the two feed off one another.

How do we do it?

This monitoring process can take on a number of shapes. But first, consider this: a typical battery of tests might include the following (partially derived from Tennis Canada’s National Training Centre):

  • Speed: 20m Sprint

  • Agility: Change-of-Direction (COD) Test

  • Explosive Strength (lower): Vertical Jump

  • Explosive Strength (upper): Med Ball Side Throw & Overhead Throw

  • Endurance: Shuttle Run

Now, there’s nothing inherently wrong with any of these tests (some are better than others). But why not measure a player’s 20m or 10m acceleration time every week, for a 6-week period? Ask yourself the same question for any other quality or performance measure that you’re attempting to improve. This strategy provides significantly more insight than simply measuring a quality 2-3 times a year.
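To make that concrete, here’s a minimal sketch (in Python, with entirely invented times) of what a simple weekly log for a 10m acceleration could look like - each week’s best time recorded and compared against week 1. This isn’t the Mattspoint spreadsheet; it’s only meant to show how little structure you need to get started.

```python
# Hypothetical weekly best 10m acceleration times (in seconds) over a 6-week block.
# The numbers are invented purely to illustrate the idea.
weekly_10m_times = {
    "Week 1": 1.92,
    "Week 2": 1.90,
    "Week 3": 1.91,
    "Week 4": 1.88,
    "Week 5": 1.87,
    "Week 6": 1.85,
}

baseline = weekly_10m_times["Week 1"]
for week, time in weekly_10m_times.items():
    change = (time - baseline) / baseline * 100  # % change vs. week 1 (negative = faster)
    print(f"{week}: {time:.2f} s ({change:+.1f}% vs. Week 1)")
```

Whether this lives in code, a spreadsheet or a notebook matters far less than the fact that the measure is captured every week under similar conditions.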

Below are a couple of examples of the various performance measures we track and how we track them. I’ve also attached the spreadsheet in the ‘Learn More’ section below.

Just keep this in mind: we don’t necessarily have to track every single exercise, load, rep, etc. We should select the key exercises or movements that are most representative of the qualities we’re after. In the example below, you’ll notice that we’re tracking a few strength exercises, explosive strength movements, a modified COD drill and some on-court specific power indicators (serve, forehand and backhand speeds). These will also change from program to program and athlete to athlete; coaches should adjust accordingly.

Figure 1 - Monitoring Overview

There are different variations here - the one I implement depends on who I’m coaching and how much involvement I have. Some players don’t want to track items every day but they’ll go through a reflective process at the end of each week. I still prefer that over the alternative: not tracking anything at all.

Monitoring Training Load & Player Health

Training load is probably the most important metric I monitor. The research (Coutts et al., 2010) is quite clear on this - simply knowing ‘how hard’ AND ‘how long’ a training session was can provide us with a wealth of information. Not only that, it provides a broad view of the entire training process.

Figure 2 - Daily Monitoring

As you can see from figure 2 above, we track RPE (rating of perceived exertion) on a 1-10 scale, along with the total number of tennis and physical hours a player completes on any given day.
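For readers who want the arithmetic behind this: in the session-RPE approach used throughout the load-monitoring literature (including the work of Coutts and colleagues), a session’s load is commonly calculated as the RPE multiplied by the session’s duration in minutes, and a day’s load is simply the sum across sessions. Here’s a rough sketch of that calculation with hypothetical entries - the session names and numbers are mine, not part of the spreadsheet in figure 2.

```python
# Hypothetical training day: (session, RPE on a 1-10 scale, duration in hours).
sessions = [
    ("Morning tennis practice", 7, 1.5),
    ("Afternoon strength session", 6, 1.0),
]

daily_load = 0
for name, rpe, hours in sessions:
    minutes = hours * 60
    load = rpe * minutes          # session-RPE load, in arbitrary units (AU)
    daily_load += load
    print(f"{name}: RPE {rpe} x {minutes:.0f} min = {load:.0f} AU")

print(f"Daily training load: {daily_load:.0f} AU")
```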

I also ask players a series of questions based on the Hooper-Mackinnon questionnaire - which, according to the literature, is a valid and reliable measure of health and wellness. In other words: is the player fatigued? Stressed? How well have they been sleeping? And so on.
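If you’d like to turn those answers into a single number you can chart next to training load, the original Hooper-Mackinnon approach sums self-ratings of sleep quality, fatigue, stress and muscle soreness - commonly on a 1-7 scale where higher means worse - into one wellness index. A minimal sketch, assuming that scale (the alert threshold below is just an illustrative placeholder, not a published cut-off):

```python
# Hypothetical morning wellness entry based on the Hooper-Mackinnon items.
# Each item is self-rated 1-7, where higher = worse (assumed scale, see above).
wellness = {
    "sleep_quality": 3,
    "fatigue": 4,
    "stress": 2,
    "muscle_soreness": 5,
}

hooper_index = sum(wellness.values())
print(f"Hooper index: {hooper_index} / {7 * len(wellness)} (lower is better)")

# Illustrative flag only: in practice, compare against the player's own typical range.
if hooper_index >= 20:
    print("Wellness notably worse than usual - consider adjusting today's load.")
```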

Combining these two forms allows us to gain further insight into the athlete’s current training status, how they are handling training and whether we are actually making progress.

Not only is it useful to see what the training week looks like right now - which gives us the ability to adapt the program and schedule on the fly - but it also allows us to look back, catch trends and even gain insight into why an athlete got injured, why they had a terrific performance or why they aren’t improving on one of their key indicator movements. Oftentimes, it’s the result of a combination of factors - including training load, lifestyle factors (stress, sleep) and more.

The Coach's Eye

Great coaches have an almost sixth sense when it comes to their observation skills. They can often pick up in real time on things that less experienced coaches have trouble seeing even after slow-motion analysis. How did they develop this? Through practice. Over years, they watch their players intently. They analyze video. They study the movements, behaviours and attitudes of the world’s best.

If you’re a tennis coach, I strongly advise you to learn more about the physical side of things, not only through books and courses (like this one), but through implementation and observation. Either administer the physical prep program yourself or be an active part of the process with the coach in charge.

The same is true for physical prep coaches. Many simply run the off-court portion and forgo the rest. You will gain far more insight into each player’s strengths and weaknesses by watching them move ON the tennis court than you ever will OFF it. So watch and be involved in tennis practices.

In any case, monitoring does not replace this observational quality - it enhances it. You combine what you see with what you don’t see, and now you have a bigger picture at your disposal. That’s how we merge science with experience.

A Final Note

You might be a coach who’s already monitoring training load, key variables, exercises and so on. If so, kudos to you. You might even be implementing other questionnaires (as there are many that could be valuable). For the moment, the aim here is to keep things simple and in perspective - which is why we’ve chosen this particular structure.

The key is to monitor what’s important - consistently, regularly and over an extended period of time (weeks, months, even years). The hard part is setting it up and getting buy-in from players - but once that’s in place, we gain a wealth of useful information.


Learn More

Vern Gambetta post

Tennis Canada Testing Sheet

Monitoring spreadsheet

Hooper-Mackinnon Study (or PDF from CSI)


