Monday, February 09, 2026
Claude's Constitution on the role of the analyst
Tuesday, February 03, 2026
Book review: Little Brother by Cory Doctorow
Sunday, February 01, 2026
What is needed to work in the age of Generative AI
Last week I was at CMU-Heinz for a fireside-chat-style event with students in the various MS in Analytics programs there. One question I got was: what are the skills needed to succeed in an environment with AI, now and into the future? I spoke about being able to program, because you need to learn how to think deliberately; being able to connect technical capabilities with end business needs, because this has always been how analytics fails; and, as I think about it more, having a deeper understanding of knowledge itself. If you believe that your education and training are about learning sets of facts and recipes, AI will eat you alive. So your understanding of your field has to be greater than facts and procedures.
First, why learn computer programming when AI can write code faster than you? Microsoft has a set of studies showing a high (40%) error rate in AI-generated code, yet their programmers also say they are more productive. As I use AI at work, I find that it is helpful in creating good structure and framework scaffolding, especially when I have to context switch (I regularly switch between three data stacks at work, each of which has many people who spend all their time in just one) or when I am applying methodologies new to me or my organization. But because I am competent, I can correct it as I go, and the fact that there were originally errors is not a big concern, because I was going to revise everything anyway.
Another reason to learn programming is that you learn to think in a different way. The ancient Greek philosophers had students learn geometry before philosophy. Not because geometry and mathematics are beautiful (even though they are), but because with geometry comes proofs. A geometric proof is about how much you can establish starting from a minimum number of assumptions (Euclid's five axioms). You gain experience in determining an objective truth: no appeals to authority, no claims of a different point of view. And your logic is in the open, to be critiqued on its own merits. Far different from my friend in grad school who claimed that perception is reality. Only then were you fit to move into the realm of ideas, where even facts have to be evaluated.
Programming languages differ from natural languages in their precision. Every statement has a single clear meaning. This is different from natural languages, where the ambiguity of human life plays a role. So to work with anything involving computers, it is helpful to recognize that computers work with language differently than we do: they handle ambiguity differently than people, and they use randomness to bridge that difference (which is key to how Generative AI works).
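To make the randomness point concrete, here is a minimal sketch of temperature sampling, the common mechanism generative language models use to turn scores over candidate next words into a random choice. The word list and scores are invented for illustration, not taken from any real model.

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and draw one index.

    Lower temperature sharpens the distribution (nearly deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    # random.choices draws one index according to the weights
    return random.choices(range(len(scores)), weights=weights, k=1)[0]

# Toy example: three candidate next words with made-up model scores.
words = ["cat", "dog", "fish"]
scores = [2.0, 1.0, 0.5]
idx = sample_with_temperature(scores, temperature=0.7)
next_word = words[idx]
```

The key design point is that even the highest-scoring word is not always chosen; the same prompt can produce different outputs, which is why Generative AI feels more like a conversation partner than a calculator.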
The next skill is linking the capabilities of technology to the needs of the business. According to everyone who has studied project failures in depth, failed communication between the business partner and the analysts is the biggest cause of project failure. And data projects have a failure rate of 80 to 90% (a range that has been persistent across decades of studies in the data world, and that holds across definitions of failure and across segments: data analytics, data engineering, and data reporting (dashboards)). Being able to understand the business needs of end customers, as well as the potential classes of technology solutions, leads to asking better questions and getting value out of applications of technology.

The main reason for breakdowns in communication is ego and arrogance. From the technology side, there is often a belief that the customers are idiots who do not know what they want, so the technology people should just build something and pitch it back to the customers. This is mirrored by business people who think that technology is a turnkey product, so they should not interact with the people who are creating the solution. A third variation is when upper leadership decides to act as an intermediary between the analysts and the end customer. The logic here is generally that the leader believes both the technologists/analysts and the end customer lack communication skills, so the leader will handle all of the communication and hand the requirements to the analysts. All of these are wrong. Especially in anything involving data, details matter, and the entire project involves discovery of details that no one realized were important at the beginning. So the analyst and the end customer need to be regularly reviewing these discoveries and adapting along the way.
And only the end customer (because they are closest to the problem on the ground and know what kinds of actions can be taken) and the analyst (because they will be representing the details in models and know what the range of alternative models can do) can make those decisions together. Without that direct communication (potentially facilitated by someone who knows both sides), a project falls into the trap of solving the wrong problem. And this requires people who understand purpose and can determine the impact of nuance, both of which Generative AI does badly.
The third category of future work is understanding your field. Computers are very good at retrieving facts, if those facts are in their knowledge base. Gen AI is better than prior technologies because it is not as sensitive to the precise wording of a request. Computers are also very good at following instructions, if those instructions are given. (People also tend to be better at things when they are given good instructions.) So, if this is the extent of your subject expertise, you are in trouble. In software development, there is actually a very large workforce like this, whose careers are built on the ability to fill out a given framework or set of instructions. But if your place in the world is built on more than knowing facts or following recipes, if there is actual understanding that has to be applied on a situation-specific basis, there is still room for you. Without that understanding, an organization can perfectly execute a solution to the wrong problem. Which is worthless. So you need the level of understanding that allows for good judgement, and you need to be working in an organization that allows its employees to use that judgement.
A last criterion is based on a number of conversations I've had. Many people express that they believe the answers Generative AI gives because of the massive investment these companies have made, the smart people they have hired, and the belief that these companies would ensure correctness. I had to explain that these are industries and communities that have historically claimed they had no interest in accuracy or correctness. Until recently, they viewed the paying customer as king and only sought to fill the demand. Ethics was not part of the conversation. And what they delivered did not come with guarantees other than that it does what it does. This attitude of not questioning the authority that comes with wealth and success is the first thing that has to be broken before people can use Gen AI productively. Both of my kids do this. I design the rollout and presentation of projects at work to make sure my business partners who are using Gen AI view its output skeptically and look for specific types of flaws. As an avid reader of science fiction over the years, much of which addresses AI as part of society, I worry much less about the power of AI than I do about people who use the output of AI without thinking critically. It is the kind of following that leads people to enact policies without analysis, and that punishes people. And the outcomes are the fault of the people who followed the AI, because AI has no goals, purpose, or conscience beyond that of its user.
Wednesday, January 21, 2026
Book review: Tools and Weapons: The Promise and the Peril of the Digital Age by Brad Smith
Tools and Weapons: The Promise and the Peril of the Digital Age by Brad Smith
My rating: 4 of 5 stars
The author, Brad Smith, was General Counsel of Microsoft (he was also President of Microsoft, but his role as counsel is more relevant to this book). The book is a discussion of privacy in a context where governments and large corporations hold immense amounts of personal and business data, and where there is a large temptation for corporations to take advantage of that knowledge, or for governments to access that information in pursuit of legal action or suppression. So much of the book is about cases that involved Microsoft and how it developed a stance on corporate responsibilities to customers in privacy matters, specifically in cases where a government demanded customer data.
Each chapter revolves around a policy argument that played out in public forums: regulatory, legislative, and in the courts, in the U.S. and Europe. This is against a backdrop where technology companies used to believe that, as technology companies, they had no interest in policy. But as companies became less sellers of goods and more providers of services, in particular data storage and cloud-based communications services, they became targets of government and criminal action to access customer data without consent. In each chapter Smith introduces the context, then introduces a historical principle that predates cloud computing, and makes the argument that the choices and policies used to govern cloud-based computing services should be the same ones that governed the same type of services in the pre-digital age.
The overall philosophy he gives is that Microsoft is a custodian of customers' data, not the owner, and as custodian it will protect the customers' property (their data). Throughout the book he identifies allies (who have similar philosophies of protecting customers'/citizens' property and privacy) who differ only in details, and those he has to be contentious with, because they seek to use and profit from individuals' data or desire access for investigations. (In these cases, Microsoft wants a transparent process that protects its customers, who have the rights of citizens/residents.)
Clearly, Smith is proud of his work, and believes that protecting the privacy of Microsoft customers, even in the face of government pressure, is the right thing to do (with a procedure for governments to prevent harm to other citizens' rights, life, or property). He does acknowledge allies; Google and several European governments come across very well here. But a reader has to be mindful that he views Microsoft's journey on this topic through rose-colored glasses.
I appreciate the view of a non-technology person on these topics. As a lawyer, his perspective is to look at issues that seem very new because of the pace of technology change and recognize that those issues existed and were debated before the digital age. Because the infrastructure is owned by multinational corporations, the relative power of industry and government is different now. But the idea that industry's desire to protect the interests of its customers and government's desire for the safety of its citizens should align is one worth engaging with.
View all my reviews