Ethical IIoT

First, I guess, here's the permanently immortalized (at least until I get bored of this site and redo it, removing all this content in the process) series of Mastodon toots that inspired this thonking.

i'm unironically worried about the ethics of what i'm doing at work. basically we're creating dashboards for production data, and putting running speeds up on the screen that people other than operators would be looking at. and i worry that people are going to see these speed gauges not-maxed-out and then pressure the operators to run things faster, even though there's real reasons why they can't and the speed has to be set a certain way

i'm not so much worried about human safety or anything from a physical standpoint, since that's not really a concern. if the machine runs too fast and there's a web break, it's tissue, it just makes a mess. but i just don't want to feel like i was part of the reason that someone got unduly reprimanded because a bigwig shithead thinks they know more about running the machine than the operators

it's just that nothing boils my piss more than when dumbshit bigwigs think they know more than the experienced people who are actually getting things done, and then make unreasonable demands based on incorrect assumptions and not understanding how anything works

- source: me, on October 10, 2019

For some context, IIoT stands for Industrial Internet of Things. In my life, that involves connecting industrial machines up to the internet so that the people who buy that equipment can see how it's performing. This equipment is the sort of thing that needs an operator or team of operators to run successfully, and many of these operators (at least the more experienced ones) have extremely intimate knowledge of how to run these machines at their utmost performance levels. Their managers probably don't.

I know full well how absolutely shitty some managers can be when presented with an easy way to see that something isn't being used to its utmost maximum capacity, even if that capacity isn't realistically achievable or usable given the circumstances. I really don't want anything I make to be used in a way that makes someone's life worse just to satisfy some manager who probably gets paid way too much to do way too little. I especially don't want any of this to be the cause of someone losing their career just because some manager didn't see a number on a gauge getting bigger.

Again, like I said, there's no real human safety issue here. If these machines run too fast, you probably just get a broken web of paper material and it's not going to cause any other problems; it'll just be a pain in the face to clean up.

There are some specific things I'm doing to mitigate this, some things I'm taking upon myself, and some things that are agreed upon in our organization that have a side effect of improving the ethical impact of these IIoT systems.

Do not make it look like an unachievable standard is expected.

For instance, some managerial people have said they want to see red/yellow/green regions on the gauges so they can see, at a glance, how things are performing. I will not put something into the "green" region if it is not realistically achievable given the circumstances. The people I work with on this agree: such information would be useless, not to mention potentially harmful.
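To make that concrete, here's a minimal sketch of how I'd derive gauge zones. Everything here is illustrative (the function, the 60%/90% cutoffs, and the example numbers are my own assumptions, not from a real system); the one point that matters is that the "green" band is anchored to the realistically achievable speed for the current conditions, not to the machine's nameplate maximum.

```python
# Hypothetical gauge-zone calculation. The key design decision: "green"
# begins at a fraction of the *achievable* speed for current conditions,
# never at a fraction of the theoretical nameplate maximum.

def gauge_zones(achievable_speed, nameplate_max):
    """Return (color, lower, upper) bands for a speed gauge.

    achievable_speed: realistic target given current product/conditions
    nameplate_max:    the machine's theoretical top speed (gauge endpoint)
    """
    return [
        ("red", 0.0, 0.6 * achievable_speed),               # well below target
        ("yellow", 0.6 * achievable_speed, 0.9 * achievable_speed),
        ("green", 0.9 * achievable_speed, nameplate_max),   # at/near target
    ]

# Example: a machine rated for 1200 m/min that can realistically run 800
# on the current product. Green starts at 90% of 800, not 90% of 1200.
zones = gauge_zones(achievable_speed=800.0, nameplate_max=1200.0)
```

Anchoring the bands this way means a gauge sitting in the green genuinely reads as "running as well as can be expected," instead of implying the operators are leaving speed on the table.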

Do not show more information than someone needs or would be expected to understand.

That is, for instance, don't show the owner of the company the list of faults currently affecting the machine, and don't show a salesperson how fast the machine is running. They won't interpret this information properly: they could see a fault appearing many times and think it needs fixing immediately, when the operator responsible for that machine could confirm it's something that happens in the normal operation of the machine and either can't be mitigated, shouldn't be mitigated, or wouldn't improve anything by being mitigated.
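One way to enforce that principle is to filter dashboard data by role before it ever reaches the screen. This is only a sketch under my own assumptions; the role names and field names are made up for illustration.

```python
# Hypothetical role-based view filter: each role sees only the fields it
# can reasonably be expected to interpret. Roles and fields are illustrative.

ROLE_FIELDS = {
    "operator": {"speed", "target_speed", "faults", "output"},  # full picture
    "owner": {"output", "target_speed"},                        # no fault list
    "sales": {"output"},                                        # no speeds at all
}

def view_for(role, machine_data):
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in machine_data.items() if k in allowed}

data = {
    "speed": 780,
    "target_speed": 800,
    "faults": ["web tension low"],
    "output": 41.2,
}
sales_view = view_for("sales", data)  # only production output survives
```

Doing the filtering server-side (rather than just hiding widgets in the UI) also means a curious manager can't dig the raw numbers out of the page anyway.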

Do not blame.

Do not implement any feature that would associate a specific operator or worker with a fault or problem. There is no way for an automatic system to determine if a person who happened to be logged into the machine's interface was actually responsible for a fault, breakage, or other issue. Placing blame via this system would be wholly unproductive and damaging.
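This principle can be baked into the data model itself. Below is a sketch of a fault record (all names are hypothetical) that simply has no field for who was logged in, so downstream dashboards and reports can't attribute blame even if someone asks for it.

```python
# Hypothetical fault record that deliberately omits operator identity.
# Whoever happened to be logged in cannot be assumed responsible for a
# fault, so the schema gives that question no place to live.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class FaultRecord:
    machine_id: str       # which machine faulted
    fault_code: str       # what happened
    occurred_at: datetime # when it happened
    # Intentionally no operator_id / logged_in_user field.

rec = FaultRecord("PM-3", "WEB_BREAK", datetime.now(timezone.utc))
```

Leaving the field out entirely is stronger than collecting it and promising not to show it: data that was never recorded can't be subpoenaed by a bigwig later.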

I'm aware that there are unavoidable problems with these systems, and they will be misused by people who see them as a quick way to make themselves look better at the expense of others. These systems could also be used to pit people against each other, for instance, by showing two separate production lines and making their production into a competition. Turning someone's career into a competition against their will is absolutely deplorable.

That being said, I hope that some of the work I'm doing can be a net positive for those who are impacted by these systems.

Originally published November 7, 2019. Last updated November 7, 2019.