How to stop wiping up after robots and halt their degradation

Point A: Automation

We are an IT company that serves small businesses as an outsourced back office: we handle accounting, file reports, prepare HR documents, and calculate payroll.

Today we serve 1,500 active sole proprietorships and LLCs across Russia. To avoid drowning in reports without inflating headcount, as so often happens, we have invested heavily in automation.

A small share of tasks is handled entirely by robots. Another small share is handled entirely by people. But solving 95% of all tasks involves both.

In other words, until recently we lived by this paradigm:

  • Describe the entire business process

  • Find the part that annoys people the most and eats the most time

  • Automate that section

  • Leave to people only what cannot be automated easily and quickly, or what takes minimal human time. For example: check the result and press three buttons.

This approach is perfect when you are a startup or launching a new product, that is, when speed and a quick start matter most.

You don't try to cover 100% of the exceptions and deviations from the mainstream scenario right away. If you try to write an ideal robot from day one, one that handles everything, never degrades, and supports 100% of scenarios, you will simply be too late: someone else will take the market, and the development budget will run out.

But in any IT project, like ours, there comes a moment when it is important to stop considering yourself a startup and start living by the rules of “stable production”.

Point O: Awareness

How did we realize it was "time" to rebuild the kitchen?

Unfortunately, we didn't. We only recently realized that we were already late 😐

In recent years we managed to automate a lot of processes using the model described above. It began to seem that, with this level of automation, we could free some employees from the daily routine without losing quality or increasing turnaround times. Moreover, staff reshuffles were expected not in just one department but across the whole company at once.

When we started talking to department heads (in our company, these are the technologists), it turned out that no one could be removed from any process: the workload wouldn't allow it.

How so? So much has been simplified and accelerated for you, half the work is done by robots. Why can't we take half the people and let them go?

Well, somehow no positions are being freed up. And the robots certainly aren't doing half the work.

We started digging into the robots. It turned out that many of them had managed to degrade, and at varying, sometimes astonishing, speed.

One algorithm promised to close 90% of situations without human intervention. A year later, it sends at best 20% of reports; accountants do the rest manually.

And all because it was programmed like this: success occurs only if conditions A, B, C, and D all hold. If even one condition fails, the robot throws up its paws and the task falls to a human (an accountant).
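The all-or-nothing pattern described above can be sketched in a few lines. This is a hypothetical illustration (the condition and function names are invented, not from our codebase): any single failed condition silently hands the whole task to a human, with no signal that the robot's success rate is drifting.

```python
# A minimal sketch of the brittle "all conditions must hold" pattern:
# the robot succeeds only when A, B, C, and D are all true, and any
# single failure quietly dumps the task on an accountant.

def try_send_report(task: dict) -> str:
    conditions = [
        task.get("client_data_complete", False),   # condition A
        task.get("report_form_current", False),    # condition B
        task.get("signature_valid", False),        # condition C
        task.get("portal_reachable", False),       # condition D
    ]
    if all(conditions):
        return "sent by robot"
    # No logging, no alert: the task just lands in a human queue,
    # so nobody notices when 90% automation drifts down to 20%.
    return "escalated to accountant"

task = {"client_data_complete": True, "report_form_current": True,
        "signature_valid": True, "portal_reachable": False}
print(try_send_report(task))  # escalated to accountant
```

The dangerous part is not the `all()` check itself but the silent fallback: every external change (a new report form, a portal update) flips one condition and shifts work to people without anyone being told.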

It was at this stage that we understood what was going on. For months, people had just been sitting there wiping the asses of degrading robots.

We didn't even bother asking why the accountants hadn't reported the degradation. The answer is obvious: they don't monitor it, and they don't always even know which scenarios a robot covers on its own and which it leaves behind.

The bottom line:

  • Our mistake was living in "startup" mode for too long without revisiting our processes. We always prioritized automating 90% of the work as quickly as possible; if something broke somewhere, an executor (accountant, payment clerk, lawyer, whoever) would come and fix it manually.

  • Now we have to deal with the fact that we have hundreds of robots and tens of thousands of lines of code written in startup mode. They degrade every day, and nothing signals it.

  • We left people in the processes for far too long. We should have removed them immediately after introducing each robot.

  • No one in the company monitored the degradation of the robots.
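The missing monitoring from the last point could be quite simple. Here is a hypothetical sketch (robot names and the promised rate are invented for illustration): record whether each task was completed by the robot or a human, and flag the robot once its actual automation rate falls below what was promised at launch.

```python
# A sketch of basic degradation monitoring: compare a robot's actual
# automation rate against the rate promised when it was launched.
from collections import defaultdict

promised = {"report_sender": 0.9}     # assumed launch-time promise: 90%
outcomes = defaultdict(list)          # robot name -> list of True/False

def record(robot: str, done_by_robot: bool) -> None:
    outcomes[robot].append(done_by_robot)

def degraded(robot: str, tolerance: float = 0.1) -> bool:
    results = outcomes[robot]
    rate = sum(results) / len(results)
    return rate < promised[robot] - tolerance

# Simulate a year of drift: the robot now handles only 2 of 10 tasks.
for ok in [True, True] + [False] * 8:
    record("report_sender", ok)

print(degraded("report_sender"))  # True: 20% actual vs 90% promised
```

Even a crude counter like this would have surfaced the 90%-to-20% collapse described above long before a headcount discussion did.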

From these mistakes, a new concept with a daring name was born.

Point P: Stop rubbing robots’ ass

We decided to fundamentally change our approach to automation in three key ways.

We no longer automate whatever can be done fastest, or even whatever annoys people the most. The key criterion for choosing what to automate is now the 100% exclusion of humans from the process. That is, we automate only those parts of a process from which the executor (accountant, lawyer) can be removed entirely.

Yes, with this approach the overall percentage of automation will be lower, because many places involve subtleties and nuances that cannot be algorithmized. But in the long run it will save us from robot degradation and make it transparent whether a mechanism works or not.

The second point: we will no longer allow a robot to pretend that it handles tasks for all clients or covers all scenarios. From now on, before launching a robot, we will know precisely for which clients and scenarios it will work 100%, and for which it 100% will not. This allows more transparent planning of the workload on executors (accountants, payment clerks, etc.).
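This "declared coverage" idea can be sketched as a routing step. The scenario names here are invented for illustration: each robot explicitly lists the scenarios it fully supports, so anything outside that list goes to a human at planning time rather than after a silent failure.

```python
# A hypothetical sketch of declared coverage: out-of-scope tasks are
# routed to a human up front, and the human share is known in advance.

SUPPORTED_SCENARIOS = {"usn_income_report", "employee_payroll_basic"}

def route(task_scenario: str) -> str:
    if task_scenario in SUPPORTED_SCENARIOS:
        return "robot"   # guaranteed, fully automated path
    return "human"       # known in advance, counted into workload plans

print(route("usn_income_report"))  # robot
print(route("patent_tax_report"))  # human
```

The point of the explicit set is planning: if 40% of incoming scenarios fall outside it, you know you need humans for 40% of the volume before the robot ships, not a year later.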

In other words, today we are aiming for a situation where we can trust the robots 100% and control their coverage area (20-40-70% of tasks).

The third important difference: if a robot breaks down, the task of "come and finish it by hand" falls not on the executor but on the developer.

Yes, yes: if for some reason the robot did not send a report to the tax office and tomorrow is the deadline, the developer deals with it, with no option of handing the task off to an accountant 😱

First, this motivates developers to write code that doesn't break for as long as possible. Second, only developers can understand why a robot degrades and fix it quickly. In other words, we allow a robot to fail only in "unknown cases" that were never properly programmed, so the developer has to handle the failure.
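Routing failures to developers instead of accountants can be sketched as a thin wrapper around each robot step. The function names here are hypothetical; in a real system the alert would go to a pager or chat integration rather than stdout.

```python
# A minimal sketch of failure escalation: any unhandled error pages the
# developer on duty instead of quietly queueing the task for a human.

def notify_developer_on_duty(task_id: str, error: Exception) -> None:
    # Assumption: stands in for a real pager/chat notification.
    print(f"DEV ALERT: task {task_id} failed: {error}")

def run_robot(task_id: str, robot_step):
    try:
        return robot_step()
    except Exception as exc:  # an "unknown case" the robot was never taught
        notify_developer_on_duty(task_id, exc)
        return None  # the task stays with developers, not with accountants

def send_report():
    raise RuntimeError("tax portal changed its response format")

run_robot("report-42", send_report)
```

Because degradation now surfaces as an alert to the person who can actually fix the code, the feedback loop closes in hours instead of the months it took us to notice by accident.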

Development teams become responsible for controlling degradation, and everyone in the company knows it and agrees with it.

Point Zh: Life

As soon as we settled on the new paradigm (this happened a few months ago), we immediately began changing our internal processes. We have three separate development teams with different tasks and goals, and none of them resisted: everyone embraced the idea. Although it did take 2-3 meetings to reach a shared understanding of the concept and discuss all the concerns.

Either way, we now have a clear plan going forward:

  • all new robots, without exception, are written the new way from the start (we have already finished two)

  • we design and implement new processes the new way from day one

  • we upgrade old robots as much as possible, in order of importance

The developers themselves note:

The new approach has the potential to save development time, because right now most of a developer's time, especially the developer on duty, goes into investigating situations created by the old paradigm ("there was a task, but it didn't get done").

The developers expect this to improve the overall health of the system.

And how is process design organized in your company?
