Not long ago, I was speaking to a leadership expert about what makes a good leader and what makes a bad one. Inherently we all know there is a difference, yet it is hard to define or reduce to a set of rules; the line is often fuzzy and gray at the borders of good and bad, like anything, I suppose. We can all look the other way when a less-than-optimal leader leads a team to success, and sometimes we even call someone a good leader when the team fails.
So maybe we need to talk about definitions, for if we can define it, we can replace human weakness and program an AI machine to do the job! Wouldn't that, in the end, be best for all concerned, rather than deals and arbitrary rules shaped by avoidance and politics? Think about that for a moment.
Now then, regarding humans being flawed and making mistakes: true enough, and we could eliminate those perceived mistakes using artificial intelligence. But is that a good reason to remove the leadership hierarchy and replace it entirely with artificially intelligent computers and robots? It would be interesting to see the US Congress with no people in it, or the EU or UN chambers filled with robots, or just computers calculating the needs of humans via a networked central nervous system fed by input from across the civilization.
Still, humans have other viable traits when it comes to leadership. A good leader can recover from mistakes and turn a bad situation into an opportunity; I used to do that all the time. Those who don't try, don't make waves, and play it safe never find the innovations in the market, so their companies run redline like all the other NASCAR race cars going around the same track, under the same rules, in the same direction. That is definitely thinking inside the stadium, and although supposedly "on track," it's those who chart a new route who create change. They are the ones who own the game. The question is: can they attract enough followership to make things happen, and will that group follow a fearless leader through hell without stopping, to do something great? Hmm? Another interesting topic, one even Tom Peters would stop to think about. So, great point: people are flawed, and maybe that's a good thing; after all, "junk DNA" isn't really junk.
An AI leader wouldn't make mistakes the way humans do; it would do everything the same way, improving incrementally but only to a point, without any real creative input or the ability to recover from a future mistake. Wow, this topic is getting more philosophical than scientific, so I will leave you with these thoughts to ponder. Be Great!