Iteration 2 – Testing Evaluation

The final version of the game can be found here.

For testing, I wanted to take a more hands-off approach than in Iteration 1, because observing how a user naturally responded to what was on-screen would give me useful feedback on how intuitive the UI design was.

My testing objectives were:

  • To see if users could figure out my UI without any input on my part.
  • To see how players prefer to play the game – more upgrades or more tentacles?
  • To see if viewing the tutorial made any difference to a player’s success at the game.
  • To find out what players thought the primary objective/theme of the game was.

From these objectives, I compiled a list of questions I wanted to ask my testers, as well as any quantitative data that I wished to track. These questions would give me my qualitative data:

  1. How were the controls to use?
  2. What was the aim of the game?
  3. How did you find the UI? Was it clear what everything did?
  4. Any other feedback?

I wanted to see how different player choices stacked up against how much health they had left at the end of the game. Players tended to have more health if they went for a mixture of upgrades and tentacles, whereas players who forgot to upgrade or summon regularly tended to suffer. However, I do not think the data shows one dominant strategy, which is what I wanted: I didn’t want players to feel forced down a single path, and I believe I have given them good flexibility.

I asked players to rate the game’s fun and difficulty on a scale of 1–10. I did this to see if players who found the game more difficult also found it less fun, or if they thought it was too easy and therefore boring. However, after looking at my graph, I can see no correlation between a player’s fun and difficulty ratings. I was pleased that the average fun rating was high with little variation, as it shows strong enjoyment across the board. The difficulty ratings were more interesting: there was no clear trend, and I do not think the average accurately represents the data, as the ratings varied widely, with a range of 1–7. I think this is a result of personal player preferences, rather than inconsistencies in how the game plays.
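The "no correlation" claim above was judged by eye from the graph, but it can also be checked numerically with a Pearson correlation coefficient. Below is a minimal sketch of how that check could be done; the rating lists are illustrative placeholders, not my actual test data.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical example ratings -- NOT the real playtest results.
fun = [8, 9, 8, 7, 9, 8, 9, 8, 8]
difficulty = [3, 7, 5, 2, 6, 4, 1, 7, 5]

print(f"r = {pearson_r(fun, difficulty):.2f}")
print(f"difficulty range = {max(difficulty) - min(difficulty)}")
```

A value of r near 0 supports "no correlation"; values near +1 or -1 would suggest fun and difficulty ratings move together or in opposite directions.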

Generally speaking, I received positive feedback on the controls, but one suggestion was to explain the drag-and-drop mechanic during the tutorial, as Brad found that he would forget to drag the tentacles the first time. After that point, no players had issues remembering how the mechanic worked.

Interestingly, there was some variation in how players responded when asked about the aim of the game. 5 people talked about protecting yourself and staying alive, whereas 4 responses were about killing all the enemies that come to attack you. While these are very similar, it was cool to see the different interpretations of the game’s aim.

When I asked the testers if the UI was clear, there was a split in the responses. 4 players said that they either needed the tutorial to understand, or that it made the UI significantly clearer, while the rest said that it was clear what the UI did without the prior explanation.

After asking my testers for any additional feedback, 3 people asked for more monsters that they could summon. This is a feature I would like to add in the future, but I had decided it was out of scope for this project. Another suggestion was that the tentacles could change visually in some way whenever you upgrade them, to give the player clearer feedback that the upgrade had made a difference. I really like this idea and would add it in the future: it is a common way of showing progression in other games within this genre, and it would give the game more visual variety.