Josh-D. S. Davis

Network engineering
A call center is functionally similar to a network (as are highways, streets, and many other things).

If my work were a network, it would be designed as follows.

A packet comes in
The header is processed (2-10 mins, avg 5)
Data is requested and processed (2m-8h)
When large amounts of processing are required, the connection may be closed.

If a new connection comes in during offline processing,
that new connection preempts the processor.
Any information not saved is generally lost due to buffer purge.

If a connection is blocked, it will pick a different processor.
If the connection is to query a pending workload,
the new processor must review the stored information.
Any missing or lost information must be re-requested and reprocessed.

Workload packets vary in size from 2 minutes to 8 hours.
Each complete transmission consists of a varying number of packets.
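
Purely as an illustration, here's roughly how I'd sketch that model in code. The names, numbers, and save-interval behavior below are my own assumptions, not how anything actually works:

import random

random.seed(1)

class Processor:
    """One processor (engineer) that can hold a single live connection."""
    def __init__(self):
        self.offline_work = []       # queued offline workloads (minutes remaining)
        self.unsaved_progress = 0.0  # offline work done since the last save point

    def handle_connection(self):
        """A new inbound connection preempts offline work; unsaved progress is lost."""
        lost = self.unsaved_progress          # buffer purge
        self.unsaved_progress = 0.0
        header = random.uniform(2, 10)        # header processing: 2-10 min
        data = random.uniform(2, 8 * 60)      # data processing: 2 min - 8 h
        if data > 60:                         # large jobs get pushed offline
            self.offline_work.append(data)
            data = 0
        return header + data, lost

    def work_offline(self, minutes, save_interval=15):
        """Chip away at offline work, saving roughly every save_interval minutes."""
        while minutes > 0 and self.offline_work:
            step = min(minutes, self.offline_work[0])
            self.offline_work[0] -= step
            minutes -= step
            self.unsaved_progress = step % save_interval  # crude "since last save"
            if self.offline_work[0] <= 0:
                self.offline_work.pop(0)

p = Processor()
p.handle_connection()            # first call comes in; the big job goes offline
p.work_offline(40)               # 40 minutes of offline processing
_, lost = p.handle_connection()  # second call preempts the offline work
print(f"{lost:.1f} minutes of unsaved offline work were lost to the buffer purge")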

The network will be designed with enough processing/bandwidth to handle average load, not peak load.

Load will generally show up as a high number of incoming connections, though sometimes there are gaps of varying duration.
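
For a sense of why that matters, here's a back-of-the-envelope illustration (the 30-minute average service time is a number I made up): in a simple single-server queue, the average wait blows up as utilization approaches 100%.

# Mean queueing delay for an M/M/1 queue: Wq = rho / (1 - rho) * service_time
service_time = 30.0  # assumed average workload size, in minutes

for utilization in (0.50, 0.70, 0.85, 0.95, 0.99):
    wait = utilization / (1.0 - utilization) * service_time
    print(f"utilization {utilization:.0%}: average wait ~ {wait:6.1f} min")

Going from 50% busy to nearly 100% busy doesn't double the wait; it multiplies it by two orders of magnitude, which is exactly what sizing for average load invites.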

Some processors with superior cache may be able to process an offline workload while handling an inbound connection for a different workload.

Some workloads will go away if the connection is delayed.
Some workloads will constantly retry connection rapidly if delayed.
Some workloads will attempt preemption through nonmaskable interrupts if delayed.
Some workloads will wait indefinitely once offline workload has been submitted.
Some workloads will actually wait out any connection response delay.
Some workloads can only be processed during certain times.
Some connections can only be made during certain times.
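
If I had to write that taxonomy down, it would look something like this (the labels are mine, purely for illustration):

from enum import Enum, auto

class RetryBehavior(Enum):
    ABANDONS = auto()              # goes away if the connection is delayed
    RETRIES_RAPIDLY = auto()       # constantly retries if delayed
    ESCALATES = auto()             # "non-maskable interrupt" preemption attempts
    WAITS_ONCE_SUBMITTED = auto()  # patient once offline workload is submitted
    WAITS_INDEFINITELY = auto()    # waits out any connection response delay
    TIME_WINDOWED = auto()         # can only be processed/contacted at certain times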

There can only be one connection per processor.

There are response criteria requirements.
Average delay to answer = 2 mins, max delay = 10 mins
Workload must be handled in priority order (1+, 1, 2, 3, 4)
But remember, inbound connections preempt offline workloads.
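
As a sketch, the dispatch rule amounts to something like this (the severity ranks and names are assumptions for illustration):

import heapq
from itertools import count

PRIORITY_RANK = {"1+": 0, "1": 1, "2": 2, "3": 3, "4": 4}
_arrival = count()
queue = []  # min-heap of (is_offline, severity_rank, arrival_order, name)

def submit(name, severity, offline):
    heapq.heappush(queue, (offline, PRIORITY_RANK[severity], next(_arrival), name))

def next_workload():
    # Live inbound connections (offline=False) sort ahead of all offline work,
    # then by severity (1+ first), then by arrival order.
    return heapq.heappop(queue)[-1] if queue else None

submit("offline sev-1 ticket", "1", offline=True)
submit("inbound sev-3 call", "3", offline=False)
print(next_workload())  # -> "inbound sev-3 call": inbound preempts offline work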

Processors must work on billable customer time 53.5% of the time.
Processors are allowed roughly 10% power-off time (vacation).
Processors are allowed 12% upgrade time (education).
Processors are allowed 8% cooldown time (two 15-minute breaks/day).
Processors are required to spend roughly 1-2% of their time on reprogramming (meetings).

Three of the 11 processors are required to maintain administrative functions and to assist the other processors. This was previously deemed to be 25% and 50% of their workload, which does not count as billable time.

We are allowed up to 50% repair time (sickness and accident), which is not factored into utilization.
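
Just adding up the stated figures (treating them all as shares of the same total paid time, which is my assumption):

billable_target = 53.5
allowances = {
    "vacation (power-off)": 10.0,
    "education (upgrades)": 12.0,
    "breaks (cooldown)": 8.0,
    "meetings (reprogramming)": 1.5,  # midpoint of the 1-2% figure
}
committed = billable_target + sum(allowances.values())
print(f"committed: {committed:.1f}%")                       # -> 85.0%
print(f"left for everything else: {100 - committed:.1f}%")  # -> 15.0%
# Offline processing, admin work, documentation, and being preempted all have
# to fit in that remaining ~15% -- before any repair time is even counted.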

So when we have 20+ minute delays weekly and 40+ minute delays monthly while carrying the average workload, yet we're still only at "53%" billable time, that means we don't warrant additional processors.

Customer satisfaction has dropped steadily this year.

"Why has customer sat dropped?"
"No, you don't warrant additional staff."
"No, you may not transfer workload without approval from the admin processors who are currently busy with their own workloads."
"No, even though we told you that all was well prior to assessment, during assessment, we will rate you lower, based on different standards, because we don't have enough money to rate you properly."
"No, we don't have anywhere to send the more expensive processors, and they haven't depreciated, so we can't replace them with cheaper processors.

Critical points that seem to be unnoticed here:
A) If you run a processor at 85%, you've exceeded an efficiency threshold and will actually get LESS work out of it.
B) If you exceed the operating characteristics of a processor, even for short periods of time, it may not recover immediately, or ever, and its efficiency may be permanently reduced.
C) If you only have enough processing power for your average workload, then your peak workload will suffer a bottleneck.
D) Bottlenecked workloads tend to spill over and affect more than just peak times. (Think of a highway where the accident was cleared 2 hours ago: there never was a big enough gap in traffic, so people are still slowing down because the person in front of them had to slow because everyone was stopped 2 hours ago. The bubble persists until traffic lets up to well below average.)
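
A toy illustration of C and D, with numbers I made up: if capacity only matches the average arrival rate, one short burst leaves a backlog that never drains until arrivals fall well below average.

capacity = 10                               # workloads handled per hour
arrivals = [10] * 4 + [18] * 2 + [10] * 8   # a 2-hour burst, then back to average
backlog = 0
for hour, incoming in enumerate(arrivals, start=1):
    backlog = max(0, backlog + incoming - capacity)
    print(f"hour {hour:2d}: arrivals {incoming:2d}, backlog {backlog:2d}")
# The backlog climbs to 16 during the burst and then just sits there for the
# rest of the day, because average-sized capacity leaves no slack to work it off.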

It's all about resource management. Resource management and the performance characteristics of constrained resources should be required training for all managers.


(Deleted comment)
I too agree that internal costs are less expensive than external costs. :) We do struggle greatly.

The problem is that the Kernel/LVM team got three new contractors and picked up some people from Beaverton. TSM got 2 new people from internal transfers. We are not slated for additional staffing, even from internal hires.

Cost analytic functions should always factor in customer satisfaction/loyalty. It takes a long time to repair/rebuild and such a short time to destroy.
