Our first sprint is both easy and hard to judge: easy because our inexperience offered up many failures to reflect on; hard because, despite this, we still have plenty of blind spots, like many teams before us. Given our inexperience, and some misunderstandings about what we were expected to produce, the beginning of our sprint was quite chaotic. The day before our first sprint planning meeting, we had only a few issues on our board and none with the proper template, blissfully unaware of the severity of their absence. After a clarifying email from the Professor, we managed to convene a last-minute meeting on a Sunday evening and piece together a healthier issue board. I believe this anecdote illustrates both the trajectory of our shortcomings and our reactive, rather than proactive, approach. It is no wonder that our “finished” issue board contained many disparate goals, all of which were either pure minutiae or could almost be epics in their own right, along with a smattering of spike projects that I now realize we may be best off skipping.
For example, some of our issues included setting up repositories for each part of our project (i.e., the front and back ends) and finding the school’s style guide. These were weighted one, but had zero been an option it would certainly have been more appropriate: given our two-week time frame, a weight of one indicates at least a day’s worth of work, yet each of these tasks took mere minutes. This is not a problem in itself, but in the future small tasks like the repository setup would be best bundled together. In contrast, our issue for creating the REST API is far too broad and lacks detail, sitting much closer to an epic (which we did appropriately create) than to a single issue. The issues for the isRegistered and isApproved stubs are much closer to an appropriate size, since each is a discrete entity that can be given an effective definition of done. Unfortunately, we did not give one, and as such we may have a literal Java class with an isRegistered method in it but no idea whether that is effective enough on its own. We ended up adding tests to each stub but did not outline that in the user story or in the comments on the issue.
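To make the point concrete, here is a minimal sketch of what a stub with a test-backed definition of done might look like. The class and method names echo the stubs mentioned above, but everything else (the service name, the string-based user ID, the expected behavior) is an assumption for illustration, not our actual code.

```java
// Hypothetical sketch: a registration stub whose definition of done is
// "the accompanying check passes," rather than "the class merely exists."
public class RegistrationService {
    // Stub behavior (assumed): report every user as unregistered until
    // real lookup logic is implemented.
    public boolean isRegistered(String userId) {
        return false;
    }

    // A minimal stand-in for a real test framework, encoding the
    // definition of done: unknown users must be reported as unregistered.
    public static void main(String[] args) {
        RegistrationService service = new RegistrationService();
        if (service.isRegistered("unknown-user")) {
            throw new AssertionError("stub should report unregistered");
        }
        System.out.println("stub behaves as specified");
    }
}
```

Had a check like this been written into the issue itself, anyone picking it up would know exactly when the stub counted as finished.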
In the future, we are going to add a definition of done to our issue templates, add finer detail to each issue (such as our proposed tests), and be more active in the comments of each issue by outlining our process more thoroughly. For fear of sounding too negative, I want to highlight the causes of some of these shortcomings and the things we did well; in fact, the first is a combination of both. We managed to meet consistently in person, which is why we did not comment consistently on our issues: we did not need to, since we could just turn to each other and speak our minds. We realize now, however, that any important conclusions we reached about what to do, or not to do, would be helpful to future collaborators. I think that dovetails nicely with what we did well more broadly: being very active and willing to set aside plenty of time to work hard on what we had outlined. I feel incredibly confident in our group’s ability to get work done, and done well, and I am looking forward to the next sprint with these lessons in mind.