AI could teach us to be better humans
Until recently I worked with a particularly difficult TL. He fit the stereotype of the Silicon Valley psychopath who sees the people around him as disposable tools for achieving his goals. This person was (and is) very smart, and was one of the early adopters of AI in my environment. Eventually that earned him an internal transfer to a more AI-centric position, so he moved on… and came back changed.
A few months later, he came back to my division to give a talk about one of the tools he was building. The transformation was visible from the first minute… the same person who, less than a year earlier, was throwing his (perfectly capable) juniors and collaborators under the bus for arbitrary things that were clearly above their pay grade was now advocating for empathy, patience and understanding towards the agents.
That talk awoke something in me, and it led to the realization that many of the best practices for using AI agents translate very well to working with humans, in many cases just by replacing “agent” with “person”. Here are some examples from my own experience so far:
- Agents have a limited context that needs to be managed with care… just like us: there is only so much information a person can hold in their head at any given time. With agents, we reason about the value of keeping the context tailored to the task at hand, to avoid attention dilution and to prevent mixing two concepts that should not be mixed. Oh my, I wonder where I have seen that before!
- We provide agents with skills: documented runbooks describing how to perform specific tasks so that they can perform them better. How many projects have I been through where all the original developers were gone and I wished there was any documentation at all?
- Agents perform better when they receive clear instructions and explanations of what they need to do. Just like my junior colleagues! The same goes for checking in on how they are doing and interrupting the process to course-correct when they go off track.
- Complex tasks are better split into smaller sub-tasks that are shipped off to specialized sub-agents with their own context. This gives the task a bigger overall context budget and better performance, since each instance has a tailored context (see the first point; there is a small sketch of this pattern after the list). Is this not analogous to the division of labour? To serializing work instead of multitasking? To teamwork?
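
To make that last point concrete, here is a toy sketch of the sub-agent pattern in Python. Nothing in it is a real agent API: `call_agent` is a hypothetical stand-in for an LLM call, and the contexts are plain strings. The point is the shape of the delegation, just as you would brief each teammate with only the documents relevant to their piece of the work.

```python
def call_agent(context: str, task: str) -> str:
    # Hypothetical stand-in: in a real system this would be an LLM
    # request whose prompt is built from `context` plus `task`.
    return f"[done: {task!r} using {len(context)} chars of context]"

def run_with_subagents(task: str, subtasks: dict[str, str]) -> str:
    """Run each sub-task in its own tailored context, then merge.

    `subtasks` maps a sub-task description to the *only* context that
    sub-task should see.
    """
    # Each sub-agent gets a fresh, focused context for its sub-task.
    results = [call_agent(ctx, sub) for sub, ctx in subtasks.items()]
    # The parent merges results without ever holding every
    # sub-context at once, keeping its own context small.
    return call_agent(context="\n".join(results), task=task)

if __name__ == "__main__":
    print(run_with_subagents(
        task="write the release notes",
        subtasks={
            "summarize the bug fixes": "git log for fix commits…",
            "summarize the new features": "git log for feat commits…",
        },
    ))
```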
To some degree, I am even surprised that anyone would consider any of this “novel” thinking. And if I allow myself to indulge in cynicism for a second, maybe we are all a bit like my colleague, in that we are willing to adapt to the needs of the machine but not to the needs of each other.
Who knows, perhaps a positive side effect of this AI revolution will be that we improve our understanding of ourselves and learn how to interact with each other better. It is definitely helping me be a better tech lead, and it is already shaping how I run my team.