The tail wags the dog. The software package insists on, or is configured to insist on, meaningless statuses and priorities, so they're filled in arbitrarily.
The service desk suffers particularly from having tools that make it easy to measure something. Whatever that something might be, when it is measured and rewarded, behaviour will follow.
The behaviour will follow the something - most certainly not the something else that the business, the management or the customers actually value.
One good exercise to drive this problem home is to run a simulation (or a thought experiment) in which all the software tools have broken. The aim is to develop tools on paper, laminated cards, or similar mechanical devices that allow proper handling of incidents. [The HP/AXELOS/G2G3 Race to Results simulation does just this.]
Quite a lot of the ITIL advice turns out to be essential. Quite a lot of the stuff that software tools do turns out to be of no value at all.
It is a good way of demonstrating the wisdom inherent in ITIL: the value it gives to service management and service governance.
The solution lies in understanding exactly what a particular incident costs the business and how best to react to mitigate that damage, and then - only then - deciding how to design the process and software to support it.
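As a sketch of what "deciding from business value" can look like (the categories and matrix values below are illustrative assumptions, not taken from the text), the familiar ITIL-style approach derives priority from business impact and urgency instead of letting it be filled in arbitrarily:

```python
# Hypothetical sketch: derive incident priority from business impact and
# urgency, rather than from an arbitrary dropdown. Categories and matrix
# values are illustrative assumptions only.

IMPACT = ("high", "medium", "low")    # breadth of business damage
URGENCY = ("high", "medium", "low")   # how fast the damage accumulates

# Priority 1 is most severe; keys are (impact, urgency) pairs.
PRIORITY_MATRIX = {
    ("high", "high"): 1,   ("high", "medium"): 2,   ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3,    ("low", "medium"): 4,    ("low", "low"): 5,
}

def priority(impact: str, urgency: str) -> int:
    """Look up priority; reject values outside the agreed categories."""
    if impact not in IMPACT or urgency not in URGENCY:
        raise ValueError(f"unknown impact/urgency: {impact!r}, {urgency!r}")
    return PRIORITY_MATRIX[(impact, urgency)]

print(priority("high", "low"))     # 3
print(priority("medium", "high"))  # 2
```

The point of the matrix is that the numbers encode an agreed business judgement made once, up front - exactly the requirements work the surrounding text argues for - rather than a per-ticket guess.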
As always, if you don't spend time, effort and money on working out exactly what the requirements are, you'll end up spending much more trying to fix the mess your assumptions have made.
First understand requirements. Then, iteratively, work to produce the best design you can. Then measure how it meets the requirements - don't measure things just because they're there or easy to measure - and improve what you're doing so it meets those requirements better.
Plan - Requirements - Design - Deployment - Operation - Improvement - Plan is not as snappy as Plan - Do - Check - Act, but it's more accurate: PRDDOI rather than PDCA.
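The PRDDOI cycle above can be sketched as an ordered sequence that feeds back into planning (the stage names come from the text; the looping mechanics are an illustrative assumption):

```python
# Sketch of the PRDDOI cycle; the wrap-around from Improvement back to
# Plan reflects the text's "... - Improvement - Plan" loop.
PRDDOI = ["Plan", "Requirements", "Design",
          "Deployment", "Operation", "Improvement"]

def next_stage(current: str) -> str:
    """Return the stage that follows `current`, wrapping back to Plan."""
    i = PRDDOI.index(current)
    return PRDDOI[(i + 1) % len(PRDDOI)]

print(next_stage("Requirements"))  # Design
print(next_stage("Improvement"))   # Plan
```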