Sunday, February 01, 2026

What is needed to work in the age of Generative AI

Last week I was at CMU-Heinz for a fireside-chat-style event with students in the various MS in Analytics programs there. One question I got was what skills are needed to succeed in an environment with AI, now and into the future. I spoke about being able to program, because you need to learn how to think deliberately; being able to connect technical capabilities with end business needs (because this has always been how analytics fails); and, as I think about it more, having a deeper understanding of knowledge itself. Because if you believe that your education and training is about learning sets of facts and recipes, AI will eat you alive. Your understanding of your field has to be greater than facts and procedures.


First, why learn computer programming when AI can write code faster than you? Microsoft has a set of studies showing a high (40%) rate of errors in AI-generated code, yet their programmers also say they are more productive. This matches my experience: as I use AI at work, I find it helpful in creating good structure and framework scaffolding, especially when I have to context switch (I regularly move between three data stacks at work, each of which has many people who spend all their time in just one) or when I am applying methodologies that are new to me or my organization. Because I am competent, I can correct it as I go, and the fact that there were errors in the original output is not a big concern, because I was going to revise everything anyway.
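A hypothetical illustration of that workflow (the dataset, the column names, and the specific bug are all invented for this sketch): the AI assistant drafts the scaffolding, and the competent reviewer catches a subtle error before it ships.

    import pandas as pd

    # AI-drafted scaffolding, reviewed by hand. The columns (order_date,
    # revenue) are invented for illustration.
    def monthly_revenue(df: pd.DataFrame) -> pd.DataFrame:
        """Total revenue by calendar month."""
        df = df.copy()
        df["order_date"] = pd.to_datetime(df["order_date"])
        # The first draft grouped on df["order_date"].dt.month, which silently
        # merges January 2025 with January 2026. Corrected to group by period,
        # which keeps year and month together.
        return (
            df.groupby(df["order_date"].dt.to_period("M"))["revenue"]
            .sum()
            .reset_index()
        )

The scaffolding saves the setup time; catching the grouping error is the part that still requires competence.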


Another reason to learn programming is that you learn to think in a different way. The ancient Greek philosophers had students learn geometry before philosophy. Not because geometry and mathematics are beautiful (even though they are), but because with geometry comes proofs. Geometric proof is about how much you can establish starting from a minimum set of assumptions (Euclid's five postulates). You gain experience in determining an objective truth: no appeals to authority, no claims of a different point of view. Your logic is out in the open, to be critiqued on its own merits. Far different from my friend in grad school who claimed that perception is reality. Only then were you fit to move into the realm of ideas, where even facts have to be evaluated.
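That proof discipline still exists today. Here is a minimal sketch in Lean 4, a proof assistant; the subject is logic rather than geometry, but the principle is the same: the conclusion follows from nothing but the stated assumptions, and every step is checked by the machine, in the open.

    -- From the assumptions that p implies q and q implies r,
    -- conclude that p implies r. No appeal to authority, no
    -- point of view: the proof checks or it does not.
    theorem from_assumptions (p q r : Prop)
        (hpq : p → q) (hqr : q → r) : p → r :=
      fun hp => hqr (hpq hp)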


Programming languages differ from natural languages in their precision: every statement has a single, clear meaning. Natural languages are different, because the ambiguity of human life plays a role in them. So to work with anything involving computers, it is helpful to recognize that computers work with language in different ways than we do, that they handle ambiguity differently than people do, and that they use randomness to bridge the difference (which is key to how Generative AI works).
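To make that concrete, here is a minimal sketch (not any vendor's actual code) of how a language model picks its next word: candidate tokens get scores, the scores become probabilities, and the model samples from them rather than always taking the top choice. The vocabulary and scores are invented for illustration.

    import math
    import random

    def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
        # Softmax: turn raw scores into a probability distribution.
        # Lower temperature sharpens it; higher temperature flattens it.
        exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        # Sample in proportion to probability: the same prompt can
        # yield different answers on different runs.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # "The capital of France is ..." with made-up scores for the next word.
    scores = {"Paris": 9.0, "Lyon": 4.0, "beautiful": 3.5}
    print(sample_next_token(scores, temperature=0.7))  # almost always "Paris"
    print(sample_next_token(scores, temperature=2.0))  # other words show up more often

This controlled randomness is why the same question can get different answers, and why a fluent answer is not the same as a correct one.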


The next is the link between the capabilities of technology and the needs of the business. According to everyone who has studied project failures in depth, failed communication between the business partner and the analysts is the biggest cause of project failure. And data projects have a failure rate between 80% and 90%, a range that has been persistent in studies over decades in the data world and consistent across definitions of failure and across segments: data analytics, data engineering, and data reporting (dashboards). Being able to understand the business needs of end customers, as well as the potential classes of technology solutions, leads to asking better questions and getting value out of applications of technology.

The main reason for breakdowns in communication is ego and arrogance. From the technology side, there is often a belief that the customers are idiots who do not know what they want, so the technology people should just build something and pitch it back to the customers. This is mirrored by business people who think that technology is a turnkey product, so they should not have to interact with the people who are creating the solution. A third variation is when upper leadership decides to act as an intermediary between the analysts and the end customer; the logic here is generally that the leader believes both the technologists/analysts and the end customer have no communication skills, so the leader will handle all of the communication and hand the requirements to the analysts.

All of these are wrong. Especially in anything involving data, details matter, and the entire project involves discovering details that no one realized were important at the beginning. So the analyst and the end customer need to be regularly reviewing these discoveries and adapting along the way. And only the end customer (because they are closest to the problem on the ground and know what kinds of actions can be taken) and the analyst (because they will be representing the details in models and know what the range of alternative models can do) can make those decisions together. Without that direct communication (potentially facilitated by someone who knows both sides), a project falls into the trap of solving the wrong problem. And this requires people who understand purpose and can judge the impact of nuance, both of which Generative AI handles badly.


The third category of future work is understanding your field. Computers are very good at retrieving facts, if those facts are in their knowledge base. Gen AI is better than prior technologies because it is not as sensitive to getting the wording precise. Computers are also very good at following instructions, if those instructions are given. (People also tend to be better at things when they are given good instructions.) So if this is the extent of your subject expertise, you are in trouble. In software development, there is actually a very large workforce like this, whose careers are built on the ability to fill out a given framework or set of instructions. But if your place in the world is built on more than knowing facts or following recipes, if there is actual understanding that has to be applied on a situation-specific basis, there is still room for you. Without that understanding, an organization can perfectly execute a solution to the wrong problem. Which is worthless. So you need the level of understanding that allows for good judgement, and you need to be working in an organization that allows its employees to use that judgement.
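The point about wording sensitivity is easy to see in miniature. Here is a toy sketch that uses simple string similarity as a crude stand-in for the fuzzier matching Gen AI does; the stored facts and the question are invented for illustration.

    import difflib

    # A toy knowledge base. The entries are invented for illustration.
    facts = {
        "what is the failure rate of data projects": "Studies put it at 80-90%.",
        "who should own the requirements": "The end customer and the analyst, together.",
    }

    def exact_lookup(question: str) -> str:
        # Older retrieval: the wording must match the stored key precisely.
        return facts.get(question, "no answer found")

    def fuzzy_lookup(question: str) -> str:
        # Looser retrieval: take the closest key above a similarity threshold,
        # so a paraphrase still finds the stored fact. (Real systems use
        # embeddings; string similarity stands in for them here.)
        best = max(facts, key=lambda k: difflib.SequenceMatcher(None, question, k).ratio())
        score = difflib.SequenceMatcher(None, question, best).ratio()
        return facts[best] if score > 0.5 else "no answer found"

    q = "what's the rate at which data projects fail"
    print(exact_lookup(q))  # no answer found
    print(fuzzy_lookup(q))  # finds the stored fact

Notice that neither lookup understands the question; the fuzzy one is just more forgiving about wording. The understanding still has to come from you.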


A last criterion is based on a number of conversations I've had. Many people say they believe the answers Generative AI gives because of the massive investment these companies have made, the smart people they have hired, and the belief that these companies would ensure correctness. I had to explain that these are industries and communities that have historically claimed no interest in accuracy or correctness. Until recently, they viewed the paying customer as king and only sought to fill the demand; ethics was not part of the conversation. And what they delivered did not come with guarantees beyond that it does what it does. This attitude of not questioning the authority that comes with wealth and success is the first thing that has to be broken before people can use Gen AI productively. Both of my kids do this. I design the rollout and presentation of projects at work to make sure my business partners who are using Gen AI view its output skeptically and look for specific types of flaws. As an avid reader of science fiction over the years, much of which addresses AI as part of society, I worry much less about the power of AI than I do about people who use the output of AI without thinking critically. That is the kind of following that leads people to enact policies without analysis, and that punishes people. And the outcomes are the fault of the people who followed the AI. Because AI has no goals, purpose, or conscience beyond those of its user.
