The Kaiten team's case / Habr

For a company to run smoothly, for tasks to move through their stages predictably, and for results to land exactly on time, it is important to estimate task completion times correctly. Mistakes at this planning stage can be costly, which is why so many companies struggle with task estimation.

Hello, Habr! My name is Arthur Neck. I am a Kanban consultant, founder of Neogenda, and managing partner of Kaiten. In this article, I will share Kaiten's experience of searching for a workable way to estimate task completion times: where we started, what we ran into, and what we settled on.

First steps: mastering the first academic method

In our team's early days, we often struggled to predict when tasks would be finished. Lacking a clear strategy for setting deadlines, we either committed to deadlines that were too tight, or finished work far faster than planned, only to see it sit idle because other departments were not expecting it until, say, a week later. Neither scenario suited us, so we decided to find a clear algorithm for estimating task times.

One of the classic ways to set deadlines is predictive estimation. In essence, it is an educated guess: an abstract estimate is assigned and then compared against the actual time tasks take. This is the method we started our search with.

The approach requires taking a complex task, breaking it into subtasks, splitting the subtasks into individual processes, and estimating each one.

For example, when building a program, you can single out groups of development, testing, and design tasks, and individual processes within each group.

In our case, testing this method "in battle" was easy: we use Kaiten for project management, so on the board we first split every project into the parts that different specialists and departments are responsible for. That is, it was enough to look at what tasks and subtasks a project typically consists of and estimate how long each individual stage takes.

The logic is as follows:

  • we take a task (for example, developing a new feature of the product);

  • we look at what subtasks it consisted of;

  • we estimate how much time each stage took (for example, testing and rolling out to production).

The next time a similar task comes to the team, you can base the estimate on these results and, for example, immediately set aside ten hours for testing and release.

This approach lets you not only predict the completion time of a whole task, but also calculate the workload for sprints: whether, say, we will manage to implement a new backend feature within one sprint.
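The additive logic described above can be sketched in a few lines. The stage names and durations here are invented for illustration, not real Kaiten data:

```python
# Illustrative sketch of the additive (predictive) estimation method.
# Stage names and average durations are made-up examples.
historical_stage_hours = {
    "design": 8,
    "development": 24,
    "testing": 6,
    "release": 4,
}

def estimate_task(stages):
    """Predictive estimate: sum the known average duration of each stage."""
    return sum(historical_stage_hours[s] for s in stages)

# A new feature that goes through all four stages:
print(estimate_task(["design", "development", "testing", "release"]))  # 42
```

Note that this sketch exhibits exactly the flaw discussed below: it simply adds durations, as if every stage ran strictly one after another.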

In theory everything is fine, but in practice we quickly saw that the approach produces a significant error: to estimate a task with ten subtasks, we add up ten known values (the duration of each subtask) and get the total time. This kind of assembly ignores the fact that some processes can run in parallel, while others require lengthy preparation that nobody accounts for.

As a result, we quickly abandoned this method.

Attempt number two: leveling up

We decided to test a method that drops the tie to sprints and notional man-hours and considers only the actual time it takes to complete an entire task.

Here the algorithm is as follows:

  • We open the project management tool (in our case, Kaiten), records, notes, or other sources and find all similar tasks.

  • From the sorted tasks, we select projects with roughly the same number of subtasks. The main thing is not to mix tasks of different scales in one sample. For example, we did not mix developing a whole app with developing a single new feature.

  • We look at how long the path from Backlog to Released usually takes.

Such an analysis should be enough to determine how long a given kind of task most often takes.

The more completed tasks of similar complexity and scope you have, the better. The reason is simple: if the sample contains five tasks, two of which took 3 days and the others 1, 2, and 5 days, forecasting turns into reading tea leaves. Analytics over a sample of 100 tasks will clearly be more accurate and objective.

We followed this path: we studied Kaiten's detailed statistics on time and completed tasks using a distribution chart, and then put the most frequent durations into the forecast.
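The core of this method is reading a forecast off the empirical distribution of past cycle times. A minimal sketch, with an invented sample and a simple nearest-rank percentile:

```python
import math

# Historical cycle times from Backlog to Released, in days (invented sample).
cycle_times = [2, 3, 3, 4, 4, 4, 5, 6, 8, 13]

def percentile(data, p):
    """Nearest-rank percentile: the duration within which p% of past tasks finished."""
    ordered = sorted(data)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

print(percentile(cycle_times, 50))  # typical task: 4 days
print(percentile(cycle_times, 85))  # safer commitment: 8 days
```

The gap between the 50th and 85th percentiles is itself informative: the wider it is, the less a single "most frequent" number can be trusted as a forecast.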

But even this approach turned out to be imperfect.

First, the approach assumes that tasks of the same volume and complexity take roughly the same time. But in real conditions, the principle "if we managed it then, we can do it now" does not hold.

Second, in his book "Agile Estimating and Planning", Mike Cohn put forward the theory that the likelihood of completing a task and its duration can be mapped to the number of Story Points assigned to it. For example, if a one-point story can be done in 12 hours, a three-point story takes 36 hours. The appeal of this theory is that it allows you to build linear correlation graphs, that is, to predict task times trivially.

Our experience showed that in real conditions the distribution of task completion times is not normal but heavily right-skewed, closer to log-normal.

Moreover, we ran into cases where a seemingly simple task and a deliberately complex one took the same amount of time. Not a rule, but a fairly common situation. So there is neither a predictable dependence on complexity nor a way to evaluate a task "in a vacuum".
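One way to see why a skewed distribution breaks linear Story-Point arithmetic: a few long-running tasks drag the mean well above the typical value. A minimal sketch with synthetic log-normal data (the parameters are illustrative, not fitted to real Kaiten statistics):

```python
import random
import statistics

random.seed(42)
# Synthetic cycle times in days, drawn from a log-normal distribution.
# mu and sigma are illustrative parameters, not fitted to real data.
samples = [random.lognormvariate(1.5, 0.8) for _ in range(1000)]

median = statistics.median(samples)
mean = statistics.mean(samples)
print(f"median: {median:.1f} days, mean: {mean:.1f} days")
# In a right-skewed distribution the mean noticeably exceeds the median,
# so an "average time per story point" overestimates the typical task
# while still underestimating the rare long tail.
```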

Analyzing the problems and developing our own approach

After testing the second classic method and getting unsatisfactory results, we realized that the cause of forecasting errors in any of the basic, textbook methods is the failure to account for all the factors that affect completion time. Under such conditions, any environment becomes poorly predictable.

As a result, we decided to move away from "universal" methods, which barely work and never took root with us, and developed our own approach. Under it, when a task comes in for estimation, we account for many variables, including:

  • Vacations, holidays, mood. Some specialists may simply be on vacation, and without them the team will not be able to handle a difficult task at its usual pace.

  • How well-formed the customer's request is. You can hold dozens of meetings and receive hundreds of files of requirements and still not understand what the customer wants in the end. The problem is not necessarily a customer who cannot articulate an opinion: perhaps the developer never explained how the requirements should be filled out, or a manager took on a half-baked task with the reasoning "they're the technical people, they'll figure it out as they go." They often "don't figure it out" or figure it out wrong, so the task takes more time than the statistics promised.

  • Task difficulty. It is important to dig into the task and surface at least some of the pitfalls while it is being agreed on. The task itself may look simple and doable in two or three days, yet carry an inexplicable "weight" that becomes a heavy burden and demands a few extra weeks.

  • Availability of sufficient resources, specialists, and competencies. Competence is an especially acute question: the team may be fully staffed, but because of staff turnover, for example, it will not be the same people who did a similar task last time. As a result, the team will lack experience with that specific case.

  • Unforeseen circumstances, which can fall outside the planning horizon or strike unexpectedly. You cannot be sure that a frontend developer won't break a leg tomorrow, or that a tester won't hand in a resignation letter. It is simply impossible to guarantee the absence of such "surprises", so a certain buffer must be built into the deadline.

  • Team load. The team may be overloaded, in which case even a simple task may not get enough time, or focus will drift away from it.

Moreover, after several "combat tests" of this approach, we realized it is better to solve not the direct problem but the inverse one: not "when will it be done" but "will it be done by August 20th".

With this framing, it is easier to account for all the factors, gauge the team's capacity, and even refine the task with questions in the spirit of:

  • Do we understand the complexity of the task?

  • Will this be the only task we are working on?

  • Is it clear how to solve it?

  • Are there dependencies on external contractors?

Clearly, if the team is working on a single task and depends on no one, it is easier to name an accurate deadline: predictability is ensured.
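The inverse question "will it be done by August 20th?" lends itself to a simple Monte Carlo sketch: resample historical cycle times and count how often the remaining work fits into the days left. All numbers below are illustrative, and the sketch assumes tasks are done one after another:

```python
import random

random.seed(7)
# Historical cycle times per task, in working days (invented sample).
history = [2, 3, 3, 4, 4, 5, 6, 8, 13]

def probability_done_by(tasks_left, days_left, trials=10_000):
    """Estimate P(all remaining tasks finish within days_left), assuming
    tasks run sequentially and resemble past tasks drawn from `history`."""
    hits = 0
    for _ in range(trials):
        total = sum(random.choice(history) for _ in range(tasks_left))
        if total <= days_left:
            hits += 1
    return hits / trials

# Three tasks left, 20 working days until the deadline:
print(f"{probability_done_by(3, 20):.0%}")
```

Instead of a single date, this yields a probability, which maps naturally onto the yes/no question the team is actually being asked.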

By following this approach, focused on accounting for many variables rather than leaning on past experience, we achieved high forecasting accuracy. The error, when there is one, is minimal and within tolerance. We currently use this scheme as our main one and do not plan to abandon it.

Recommendations based on our experience

To correctly predict the time of tasks, you need to:

  • Categorize all incoming tasks: unclear, simple, complex. Better yet, reject the unclear ones right away.

  • Collect as much information about each task as possible: clarify the customer's wishes and form requirements. The more data at the input, the more accurate the forecast.

  • Account for available resources and contingencies. Ideally, build in time with a margin.

  • Limit work in progress. The complexity of tasks must be acceptable at the team's current load, and the number of tasks executed simultaneously must be capped.
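The last recommendation, limiting work in progress, can be enforced with a trivial guard before pulling a new task onto the board. The class name and the limit of 3 are illustrative, not part of any real Kaiten API:

```python
# Minimal WIP-limit guard: refuse to start a new task when the team
# already has `wip_limit` tasks in progress. The limit of 3 is an example.
class Board:
    def __init__(self, wip_limit=3):
        self.wip_limit = wip_limit
        self.in_progress = []

    def start(self, task):
        if len(self.in_progress) >= self.wip_limit:
            return False  # finish something before starting more
        self.in_progress.append(task)
        return True

board = Board()
for task in ["feature A", "bugfix B", "feature C", "feature D"]:
    ok = board.start(task)
    print(task, "->", "started" if ok else "blocked by WIP limit")
```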

Following these recommendations will make the team's workflow stable and its time estimates valid.
