Sunday, February 01, 2026

What is needed to work in the age of Generative AI

Last week I was at CMU-Heinz for a fireside-chat style event with students in the various MS in Analytics programs there. One question I got was what skills are needed to succeed in an environment with AI, now and into the future. I spoke about being able to program, because you need to learn how to think deliberately; being able to connect technical capabilities with end business needs (because this has always been how analytics fails); and, as I think about it more, having a deeper understanding of what knowledge is. Because if you believe that your education and training is about learning sets of facts and recipes, AI will eat you alive. So your understanding of your field has to be greater than facts and procedures.


First, why learn computer programming when AI can write code faster than you? Microsoft has a set of studies showing a high (40%) rate of errors in AI-generated code, yet their programmers also say they are more productive. As I use AI at work, I find that it is helpful in creating good structure and framework scaffolding, especially when I have to context switch (I regularly switch between three data stacks at work, each of which has many people who spend all their time in just one) or when I am applying methodologies new to me or my organization. But because I am competent, I can correct it as I go, and the fact that there were originally errors is not a big concern, because I was going to revise everything anyway.


Another reason to learn programming is that you learn to think in a different way. The ancient Greek philosophers had students learn geometry before philosophy. Not because geometry and mathematics are beautiful (even though they are), but because with geometry comes proofs. And geometric proof is about how much you can understand starting from a minimal set of assumptions (Euclid's five axioms). You gain experience in determining an objective truth, with no appeals to authority and no claims of a different point of view. And your logic is in the open, to be critiqued on its own merits. Far different from my friend in grad school who claimed that perception is reality. Only then were you fit to move into the realm of ideas, where even facts have to be evaluated.


Programming languages differ from natural languages in their precision: every statement has a single, clear meaning. This is different from natural languages, where the ambiguity of human life plays a role. So to work with anything involving computers, it is helpful to recognize that computers work with language differently than we do, that they handle ambiguity differently than people do, and that they use randomness to bridge the difference (which is key to how Generative AI works).
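To make that last point concrete, here is a toy sketch of how a language model picks its next token: raw scores are turned into weights and one option is drawn at random, with a "temperature" knob controlling how much randomness is allowed. The function name and numbers are my own invention for illustration, not anything from a real model.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from unnormalized scores, softened by temperature.

    Low temperature approaches greedy choice (the top score nearly always
    wins); high temperature flattens the distribution, adding variety.
    """
    rng = rng or random.Random()
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# With a very low temperature, the highest-scoring option wins almost surely;
# at higher temperatures, near-ties are broken at random.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.01))
```

The same inputs can yield different outputs at normal temperatures, which is exactly the controlled ambiguity that makes generative output feel fluent rather than mechanical.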


The next is the link between the capabilities of technology and the needs of the business. According to everyone who has studied project failures in depth, failed communication between the business partner and the analysts is the biggest cause of project failure. And data projects have a failure rate of 80 to 90% (this range has been persistent in studies over decades in the data world, and it is consistent across definitions of failure and across segments: data analytics, data engineering, and data reporting/dashboards). Being able to understand the business needs of end customers, as well as the potential classes of technology solutions, leads to asking better questions and getting value out of applications of technology.

The main reason for breakdowns in communication is ego and arrogance. From the technology side, there is often a belief that the customers are idiots who do not know what they want, so the technology people should just build something and pitch it back to the customers. This is mirrored by business people who think that technology is a turnkey product, so they should not have to interact with the people who are creating the solution. A third variation is when upper leadership decides to act as an intermediary between the analysts and the end customer; the logic here is generally that the leader believes both the technologists/analysts and the end customer have no communication skills, so the leader will handle all of the communication and hand the requirements to the analysts.

All of these are wrong. Especially in anything involving data, details matter, and the entire project involves discovery of details that no one realized were important at the beginning. So the analyst and the end customer need to be regularly reviewing these discoveries and adapting along the way.
And only the end customer (because they are closest to the problem on the ground and know what kinds of actions can be taken) and the analyst (because they will be representing the details in models and know what the range of alternative models can do) can make those decisions together. Without that direct communication (potentially facilitated by someone who knows both sides), a project falls into the trap of solving the wrong problem. And this requires people who understand purpose and can determine the impact of nuance, both of which Generative AI handles badly.


The third category of future work is understanding your field. Computers are very good at retrieving facts, if those facts are in their knowledge base. Gen AI is better than prior technologies because it is not as sensitive to getting the wording precise. Computers are also very good at following instructions, if those instructions are given. (People also tend to be better at things when they are given good instructions.) So, if this is the extent of your subject expertise, you are in trouble. In software development, there is actually a very large workforce like this, whose careers are built on the ability to fill out a given framework or set of instructions. But if your place in the world is built on more than knowing facts or following recipes, if there is actual understanding that has to be applied on a situation-specific basis, there is still room for you. Without that understanding, an organization can execute, perfectly, a solution to the wrong problem. Which is worthless. So you need the level of understanding that allows for good judgement, and you need to be working in an organization that allows its employees to use that judgement.


A last criterion is based on a number of conversations I've had. Many people express that they believe the answers Generative AI gives because of the massive investment these companies have made, the smart people they have hired, and the belief that these companies would ensure correctness. I had to explain that these are industries and communities that have historically claimed no interest in accuracy or correctness. Until recently, they viewed the paying customer as king and only sought to fill the demand; ethics was not part of the conversation. And what they delivered did not come with guarantees other than that it does what it does. This attitude that you do not question the authority that comes with wealth and success is the first thing that has to be broken before people can use Gen AI productively. Both of my kids have learned to question it. I design the rollout and presentation of projects at work to make sure my business partners who are using Gen AI view its output skeptically and look for specific types of flaws. As an avid reader of science fiction over the years, much of which addresses AI as part of society, I worry much less about the power of AI than I do about people who use the output of AI without thinking critically. It is the kind of following that leads people to enact policies without analysis, and that punishes people. And the outcomes are the fault of the people who followed the AI, because AI has no goals, purpose, or conscience beyond that of its user.

Wednesday, January 21, 2026

Book review: Tools and Weapons: The Promise and the Peril of the Digital Age by Brad Smith

 

Tools and Weapons: The Promise and the Peril of the Digital Age by Brad Smith
My rating: 4 of 5 stars

The author, Brad Smith, was General Counsel for Microsoft (he was also President of Microsoft, but his role as counsel is more relevant for this book). The book is a discussion of privacy in a context where governments and large corporations hold immense amounts of personal and business data, and there is a large temptation for corporations to take advantage of that knowledge, or for governments to access that information in pursuit of legal action or suppression. So much of the book is about cases that involved Microsoft and how the company developed a stance on corporate responsibilities to customers on privacy matters, specifically in cases where governments demanded customer data.

Each chapter revolves around a policy argument that played out in public forums: regulatory, legislative, and in the courts in the U.S. and Europe. And it is against a backdrop where technology companies used to believe that, as technology companies, they had no interest in policy. But as companies became less sellers of goods and more providers of services, in particular data storage and cloud-based communications services, they became targets of government and criminal action to access customer data without consent. In each chapter Smith introduces the context, then introduces a historical principle that predated cloud computing, and makes the argument that the choices and policies used to govern cloud-based computing services should be the same ones that governed the equivalent services in the pre-digital age.

The overall philosophy he gives is that Microsoft is a custodian of customers' data, not the owner, and as custodian it will protect the customers' property (their data). Throughout the book he identifies allies (who have similar philosophies of protecting customers'/citizens' property and privacy) who differ only in details, and those he has to be contentious with, because they seek to use and profit from individuals' data or desire access for investigations. (In these cases Microsoft wants a transparent process that protects their customers, who have the rights of citizens/residents.)

Clearly, Smith is proud of his work, and believes that protecting the privacy of Microsoft customers, even in the face of government pressure, is the right thing to do (with a procedure for governments to prevent harm to other citizens' rights, lives, or property). He does acknowledge allies; Google and several European governments come across very well here. But a reader has to be mindful that he views Microsoft's journey on this topic through rose-colored glasses.

I appreciate the view of a non-technology person on these topics. As he is a lawyer, his perspective is to look at issues that seem very new because of the pace of technology change, and recognize that the issues existed and were debated before the digital age. Because the infrastructure is now owned by multi-national corporations, the relative power of industry and government is different. But the idea that industry's desire to protect the interests of its customers and government's desire for the safety of its citizens should align is one worth engaging with.

View all my reviews

Monday, December 29, 2025

Reflections on the Advent of OR: Using Generative AI in Analytics and Agile Operations Research

In December 2025 I participated in the Advent of OR (https://adventofor.com), a 24-day exercise that guided participants through an optimization project. Instead of just solving problems and creating models, the Advent of OR walked through an entire project life cycle, using the INFORMS Analytics Framework.


While I am not part of the target audience of students and early-career professionals, I took part, and I had three goals.


1. Use a new programming toolkit. I used VS Code with R and Quarto. I usually use RStudio, and I wanted to try R in VS Code. And I think Quarto is the future, replacing Jupyter Notebooks for Python and serving as a natural evolution of R Markdown.

2. Practice optimization. In the Operations Research world, I am NOT an optimization person. My thesis was applied probability (queueing), and my methods research has been in simulation (one stream in ranking & selection and another in Bayesian methods for input modeling).

3. Use Generative AI. I wanted to see how generative AI does in an operations research project, and I wanted to do it right, in a setting where I can give it references to guide it. Note: I have found that Generative AI favors descriptive statistics, machine learning, and hypothesis-based statistics over other forms of analytics, so it needs some guidance.


Toolkit


I had to set up VS Code with the R extensions, Quarto (and its extension), ompr, and glpk with the associated R ROI packages. To make sure everything worked, I downloaded the repository for OR_using_R by Tim Anderson. Then, to render the book (meaning I made sure all the code ran), I had to install texlive with xetex and extra fonts. Generative AI (I had Gemini CLI installed) was very helpful in all of the system administration tasks, since it could figure out what was needed every time there was an error message.


Data analysis and Optimization


Working with the data sets, the Gen AI read in the data (I had to give it some corrections along the way to help it recognize the data types). When the data files were read in, it recognized that the data sets did not correspond in granularity. In the R Markdown file it created, in addition to the code that read in the data and created summaries, it also identified a number of questions and concerns about the data and created questions for the stakeholder. This was a good set of questions, corresponding to what others put forward.


It also did well with the optimization. Given an optimization textbook, I first asked the generative AI for a mathematical formulation based on the project description. It also created a process for determining what kind of problem this was, and worked through that process to determine that this was a linear programming problem.
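For readers who have not seen one, the generic textbook form of a linear program (this is standard notation, not the Advent problem's actual data) is a linear objective optimized over linear constraints:

```latex
\begin{aligned}
\max_{x \in \mathbb{R}^n} \quad & c^{\top} x \\
\text{subject to} \quad & A x \le b, \\
& x \ge 0 .
\end{aligned}
```

Recognizing that both the objective and every constraint are linear in the decision variables is exactly the check that classifies a problem as an LP.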


Next was an LP formulation using OMPR. The first formulation was straightforward. I then had the Gen AI break the formulation out into its own R script to enforce a separation of concerns between the data handling, the optimization model, and the output processing.


I generally ask for docstrings as I go, and the Gen AI did this for both the model and the various handling functions. I read the docstrings to ensure they say what I expected them to say. When one did not, since the docstrings were written based on the code, I took it to mean that the code was not right (this exposed a mistake in the initial formulation of the LP in OMPR). Similarly, I had the Gen AI write unit tests for the constraints and a mock problem to test the optimization.
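As an illustration of what a constraint unit test can look like (in Python rather than the R the project used, and with product names and numbers invented for the example, not taken from the Advent data):

```python
# Hypothetical data for a toy capacity constraint, invented for illustration.
HOURS_PER_UNIT = {"doors": 1, "windows": 2}  # machine-hours per unit built
CAPACITY = 12                                # machine-hours available

def hours_used(plan):
    """Total machine-hours consumed by a production plan {product: units}."""
    return sum(HOURS_PER_UNIT[p] * units for p, units in plan.items())

def test_feasible_plan_fits_capacity():
    # 2 doors + 5 windows = 2*1 + 5*2 = 12 hours, exactly at capacity
    assert hours_used({"doors": 2, "windows": 5}) <= CAPACITY

def test_infeasible_plan_is_caught():
    # 4 doors + 6 windows = 4*1 + 6*2 = 16 hours, over capacity
    assert hours_used({"doors": 4, "windows": 6}) > CAPACITY

test_feasible_plan_fits_capacity()
test_infeasible_plan_is_caught()
```

Tests like these check each constraint against hand-computed feasible and infeasible cases, so a mis-stated coefficient in the model shows up immediately instead of silently producing a wrong "optimum."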


Agile Operations Research


One of the benefits of running the Advent of OR over 24 days is that it rotates topics between the art of modeling, implementing and managing models, and interactions with stakeholders. There are a couple of very important points. First, interacting with stakeholders is not something that happens at the beginning and end of a project and is ignored in the middle; there needs to be stakeholder engagement throughout the modeling process. A second point that has come out in the conversations on LinkedIn is that the most common cause of project failure across data analytics is communication failure, in particular between the analyst and the end customer. While this can have many causes (including management inserting itself between the analyst and the end customer), as analysts we must have that direct interaction from the beginning of the project (business problem formulation in the INFORMS Analytics Framework).


In the early stages of the project, one factor we need to face is failure of imagination. The first level for analytics is that our stakeholders often do not know what is possible across the full range of analytics.  Often a problem is presented as a request for a tool, but for the results of the project to have any value, it has to address the end problem, so business problem formulation has to start with the end problem, determine what kind of information from data can help the decision makers address the problem, and then we can start discussing what methods can provide results in the form that will be useful.  Currently, because of media hype, the initial request can be for a dashboard, or a predictive model, or a generative AI tool.  As operations research analysts we can also bring to bear statistics, forecasting, optimization, simulation, and queueing; and different ways of applying those methods to give different kinds of results that can be delivered to decision makers to make better decisions.


After business problem formulation, the next big change in the project occurs when presenting the first minimum viable model to the end user: the first model, built on a minimally acceptable subset of the data, that covers the most essential aspects of the smallest version of the problem. This is important because, before this point, all conversations are abstract and theoretical. The first time a model with outputs is presented to an end user, the end user will start to imagine how they would use these results in real situations that have happened in the past. And they will start telling you about all of the considerations they account for, the information they need to gather to make decisions, and the people they need to consult and coordinate with. This can change the entire project. From experience, I do not think it matters how much work is done at higher levels to define the project: the first time a model is presented to an end user, the project will change so that the outcomes can be usable to the business. So it is best to make that happen as early as possible, so that the change causes the least disruption to the work in progress.


The idea of rapid cycles of iteration and feedback from the customer, and the willingness to accept changes to the project because of that interaction, are the hallmarks of agile development methodologies in the software world: regular rounds of model iteration where additional elements are added to the model, with stakeholder feedback confirming that the project is on the right track to produce something useful. And just as the software development world has experienced, this is more likely to lead to a useful product, and is actually faster than following a rigid path that leads to something irrelevant.


Conclusion


The Advent of OR proved to be a valuable exercise, offering a full-cycle project experience that highlighted two critical modern aspects of Operations Research: the integration of Generative AI and the necessity of an Agile approach. Generative AI demonstrated significant utility in accelerating system setup and basic modeling tasks, freeing up the analyst for higher-level problem-solving. More importantly, the experience reinforced that project success hinges on continuous, direct stakeholder engagement, mirroring the principles of Agile development. By prioritizing early delivery of a Minimum Viable Model, analysts can gain crucial feedback that aligns the project with real business needs, ultimately reducing the risk of communication-based failure and ensuring the final product is relevant and utilized.


Monday, November 17, 2025

Thoughts on mentoring within the analytics profession

Our careers and lives follow unique trajectories and structures. While we all have our own paths, it is helpful to have people who have gone ahead on similar paths to share experiences and thoughts on the future. Part of our professional development is mentorship, which can take place in a wide range of settings, relationship types, and time frames.

I am going to define mentoring as a longer-term, unstructured professional relationship, with the focus of the relationship being the personal growth of the mentee. Typically, the basis of the relationship is that the mentor has gone down a path the mentee is now on, and the insights of time may be helpful for the mentee's development.

One thing that distinguishes mentorship relationships from other professional relationships is that they are holistic. They look at more than just the task at hand, or even a job position. The mentorship relationship may be career focused, but it will look at the whole person, and will recognize that overarching goals can change with life events, even life events outside one's occupation. So, while a supervisor/manager can be a mentor, this is often not apparent until after the manager relationship has ended and the relationship has become larger than the roles both individuals had when it started.

As we all have unique life paths, we cannot expect that any one person has gone on the same path that we are on, but mentors bring not only their own life experience, but also the experiences of those whom they have lived life alongside. They have seen the decisions and choices of others, and how those decisions have advanced the goals, or not. They have seen people whose lives have taken them on different paths, and so have a broader view on what the future can hold than those whose view of the world is from the relatively structured life of home and school.

What topics come up? The focus of a mentorship relationship is on the growth of the mentee. In the context of technical professionals, this means professional growth, but as part of a full life. So, with an understanding of the long-term goals of the mentee, it can be working through broader issues on a project, such as other points of view. It can be soft skills or relational skills for working with co-workers, superiors, juniors, or outside colleagues (customers, business partners, etc.). It can be suggestions on how to stretch as a person, to see and work through things from a broader perspective, and the skills needed to do this. A mentor can be a sounding board, providing different points of view (especially on behalf of people who may not be good at communicating their point of view). It can be how to handle work/life balance, looking at the whole person. It could also include looking at alternative paths, since different positions or even career paths may be better suited to the goals of the mentee.

How does a mentorship relationship start? Like all relationships, you can never tell at the beginning whether it is going to be long term. But you have to begin somewhere. A first conversation is often about a particular topic, one that is of mutual interest (this initial meeting is sometimes arranged by organizations, such as a company or professional society trying to promote mentorship among employees or members). After the first few conversations about that first topic, you will have observed whether the relationship is broader than that topic, and you can discuss whether you want to continue meeting about topics as they come up.

What does the mentor get out of this relationship? Typically, people who are in mentoring relationships also have other rich relationships, which is how they get the background that makes them valuable as a mentor.  Over time, the relationship becomes driven by both concern and curiosity about the other's experiences in life. Often that includes issues that are more apparent to someone at an earlier stage of life or career. A mentorship relationship can then become one of an ongoing set of relationships that makes up a life well lived, and the ultimate hope, even when it is not an expectation, is that a relationship be one that lasts.

Do mentorship relationships last? Sometimes. Organizations such as workplaces and professional societies will often organize mentorship relationships, but these are always based on a topic of interest in the moment, and these relationships typically start out with short-term boundaries. But, like all relationships, a short-term relationship is what has the potential to broaden into something longer. Does the relationship broaden beyond the topic where it started? Do conversations evolve organically and feel natural when they branch into new topics? Over time, does the relationship feel like something that lasts as both sides grow and change (as all growing people do)? If so, a formal, temporary relationship with a defined schedule and defined boundaries changes into something more long-term and fluid, and the mentor/mentee relationship begins to feel more like that of professional colleagues, each moving through life and career on adjacent paths.

Can mentorship relationships be informal? Yes, in the sense that friendships are informal. At professional society meetings, it is common to see someone and immediately follow up on a conversation from a year ago, just like old friends. So you can have a relationship where you only see each other on occasion but immediately pick up where you left off. The key is the long-term relationship: the conversations are about growing people, not only about the topic at hand.

Are there aspects of Analytics where mentorship relationships are especially helpful? One area is soft skills: the skills of working with colleagues, managers, and customers that are not part of the standard training of a technical professional. A mentor can relate to what the other person may be thinking and help the mentee develop the sense of empathy for others that makes them more effective professionally. A second aspect is dealing with the hype that often accompanies the profession. The most recent example is the rise of Generative AI, but similar waves of publicity occurred around deep learning, big data, and machine learning in general. A mentor can place new ideas and concepts in the context of everything else a mentee knows, in contrast to teachers or thought leaders whose responsibility at any given time is focused on a single topic. A third aspect is a sense of what a mentee may need to be a well-rounded professional. Training programs and classes tend to be singularly focused on a specific goal, but professional growth needs to be holistic, and designing such a path needs the attention of someone who is looking at the whole person.

Mentorship presents the potential of a valuable relationship, fostering personal and professional growth through a holistic and potentially long-term connection. It goes beyond task-oriented guidance, embracing the mentee's whole person, from developing crucial soft skills and navigating career paths to contextualizing industry trends. While it can begin focused on specific topics or within formal programs, at their best mentorships evolve into enduring relationships, offering mutual benefits and enriching the lives of both mentor and mentee. In dynamic fields like Analytics, such relationships are particularly vital, providing the comprehensive support needed to cultivate well-rounded, effective professionals in changing times.

If you are interested in mentoring relationships, I would look to your professional society. If you are in analytics, I recommend you look at INFORMS and its mentoring programs (Video on the value of mentoring in analytics). It is a professional society for advanced analytics (broadly defined) and is vendor, tool, and methodology neutral, which is important for a field that sees major changes over the course of decades.

Wednesday, October 22, 2025

A tale of two Corne: one month with split keyboards

My split keyboard journey started with two Corne keyboards and a Sofle, all purchased over a period of two months. The Sofle is from Ergomech and is Bluetooth enabled; I use it as a wired work keyboard. The two Corne keyboards are from YMDK, purchased through Amazon. One is MX, the other low profile. Here, I talk about the two Cornes. I will look at (1) buying from YMDK on Amazon, (2) use of the two keyboards, and (3) the keymap journey.






I bought the keyboards from YMDK on Amazon.com. YMDK markets pre-soldered, hot-swappable, wired and wireless (2.4 GHz dongle) Corne 4.1 keyboards with a 3D-printed enclosed case and 46 keys (3x6+5 per half), as well as a wireless Sofle. I wanted wired only, to make my first steps into split keyboards with fewer complications; in particular, the wireless versions are powered by replaceable button-cell batteries, which I did not want to deal with. I immediately flashed both keyboards with the 4.1 Vial versions of the Corne firmware, with no problems.

I have one keyboard with MX Akko Dracula linear switches (35 g weight) and XDA-profile PBT keycaps, and one low profile with Kailh Deep Sea Whale low-profile Choc v2 switches (silent tactile). The first keyboard I ordered was a refurbished unit with MX switches. The right side did not work; I suspect this is because Amazon refurbished items are usually returns, and the prior owner probably shorted the keyboard. After a couple of back-and-forths with YMDK customer service (there is a link on Amazon to them, and it is not too hard to get the YMDK customer service email address), we decided to return it through Amazon, and I ordered a new one. The low-profile keyboard was not a problem. The website makes it clear that it can take either Kailh 1350 (v1) or 1353 (v2) switches, so I got 1353 switches and used Wormier low-profile (skyline) keycaps. So, with Amazon return policies, I found buying from YMDK and working with their customer service reasonably good, although I am leery of any complications like wireless.

As for using the keyboards: I use them with my own laptop, and I have one at a standing desk that I use for both my work and personal laptops (I take a standing session and connect the laptop to a docking adapter). I also bring the low-profile Corne with me when I go into the office (my acrylic-sandwich Sofle looks a little fragile, with its openings and the suspended acrylic OLED screen cover). I like the low-profile version. With the 3D-printed case and low-profile keys and switches, it feels relatively durable, with no big failure points like catching on something. I wrap the halves in a bandana and put them with the cables into a lined bag, and that seems to work well. I'm not sure I would like the low-profile keyboard as my only keyboard; it feels harsh because I'm always bottoming out, compared to my MX Sofle. But as a secondary keyboard to provide variety, I think it does well. And it does get attention when I take it around :-)

With the Akko Dracula switches, I think the light springs don't work for me. I am constantly having accidental key presses with mod-tap keys that I don't get with the 45 g silent tactile switches. I think I'm going to put that keyboard aside until I feel like getting new switches for it.

The keymap journey is an ongoing one. I think I'm at a point where I only make small changes, a few days apart. Some big choices along the way, in roughly the order I settled on them:

  • QWERTY. I'm staying with the QWERTY layout. I know it well, and I am not so fast a typist that any keymap optimization would make a meaningful difference.


  • Numbers. I started with numbers as the top row of a layer. Eventually I realized that when I need numbers, I need several, so I switched to a numeric keypad with arithmetic symbols on one side of the layer, and the other symbols on the other side.
  • Symbols. There are seven-ish pairs that need to be taken care of. I touch type, so I wanted the pairs that usually share a key to stay together. These are: `~, -_, =+, [], {}, '", \|. The quotes are taken care of by moving Enter to the thumb cluster, so the quotes stay in place. =+ and -_ I put with the numeric keypad. So two rows of the symbol layer were []\'` on one row and the shifted versions of those keys, {}|"~, on the row below. I put the brackets on the outside edge because I ended up putting all the brackets on combos; I still have them on this layer, but on the edge. And since I program in R, I put ` and ~ closer to the index finger. On the top row of the symbol layer I put the logical operator symbols <>&|! (less than, greater than, and, or, not). That makes two columns of all the bracket keys (except parens), kept in an order I can remember.


  • Navigation. The top row of the navigation layer holds the symbols from the shifted number row. For the remaining two rows, the right side is navigation and the left side is mouse control. The right side is centered on hjkl, because I use VIM and my fingers already know those keys. Below those are horizontal-movement keys: beginning of line, previous word, next word, end of line. Page up and page down sit on the two keys to the right of those. On the far right I have beginning and end of document, but I don't think I use those much. The mouse cluster is on xdcv. Then f and s are the click buttons, g and b are scroll up and down, and z and a are scroll left and right. I use these a surprising amount of the time (especially to help recover after accidental mod presses).


  • Adjust layer. The left half is for controlling the keyboard (lighting); the right half is media controls. I have volume mute, down, and up on h, j, k; media back, play/pause, and next on n, m, and comma; l and ; are screen brightness; . and / are zoom out and in. I never really learned to use these keys before, because they were always on the function row and every keyboard had them in a different place. Now that I got to put them where I wanted, I use them a surprising amount.


  • Screen navigation. I put window management (switch windows, move windows) on the thumb cluster. I figured if I was in this layer I did not need space, enter, or backspace/delete. I knew these shortcuts, but they were awkward on a normal keyboard.
  • Programming key combos. I made combos for the bracket symbols ()<>[]{}, with the left bracket on the left side and the right bracket on the right. I tend to use these instead of the normal typewriter positions. I have additional combos for : <- |> # for R programming, and I also have combos for open file and the command palette in Visual Studio Code.
  • Mod-tap and layer-tap. I have extra layer-tap keys on g and h, which mirror my layer keys so each layer is accessible from either side of the keyboard. For the number pad, for example, I can either use it one-handed with the thumb holding the layer key, or use the index finger on the other side. I usually go one-handed if I only need one key on the number pad, opposite-hand if I need more; I let my fingers decide. I also have (from the outside in) home row mods of layer, shift, control, alt, and gui. But I took out the gui mod-tap because I was making too many accidental mod presses, especially with the Akko Dracula switches. (I think the other mods are not as noticeable because they have momentary effects, while GUI and Menu make something happen.)
  • Thumb clusters. I ended up with the thumb cluster being Insert (Ctrl when held), Enter (GUI when held), raise layer (navigation), lower layer (numpad/symbols), space, and backspace (Alt when held). I also realized that the Menu key doubles as right click for the mouse (and the key on the mouse layer works).
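The mod-tap, layer-tap, and home-row-mod choices above can be sketched in firmware terms. The post never names the firmware, but Cornes commonly run QMK; assuming QMK, each of these is a one-line keycode macro. Layer names and exact key assignments below are illustrative, not the author's actual keymap:

```c
#include QMK_KEYBOARD_H  // QMK build environment assumed, not stated in the post

enum layers { BASE, NAV, NUM };  // hypothetical layer names

// Thumb cluster: tap for the character, hold for the modifier or layer.
#define TH_INS  LCTL_T(KC_INS)   // tap Insert, hold Ctrl
#define TH_ENT  LGUI_T(KC_ENT)   // tap Enter, hold GUI
#define TH_BSPC LALT_T(KC_BSPC)  // tap Backspace, hold Alt
#define TH_NAV  MO(NAV)          // raise layer (navigation)
#define TH_NUM  MO(NUM)          // lower layer (numpad/symbols)

// Home row mods, outside in: layer, Shift, Ctrl, Alt.
// (GUI removed after too many accidental presses.)
#define HM_A LT(NAV, KC_A)       // layer-tap: tap a, hold for NAV layer
#define HM_S LSFT_T(KC_S)
#define HM_D LCTL_T(KC_D)
#define HM_F LALT_T(KC_F)

// Mirrored layer-taps on g and h, so each layer is reachable from either hand.
#define MIR_G LT(NUM, KC_G)
#define MIR_H LT(NAV, KC_H)
```

These macros then replace the plain keycodes (KC_A, KC_S, ...) in the keymap array. Tap/hold feel is tuned with settings like TAPPING_TERM in config.h, which is usually where accidental mod presses with light switches get fought.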
Some other decisions along the way:
  • Delete/Backspace. I first left Backspace next to P and the Delete key in the thumb cluster, but I repeatedly hit Delete when I meant Backspace, so I moved Backspace to the thumb cluster and put Delete in the corner. It makes for the same Ctrl-Alt-Delete chord I'm used to.
  • Escape and Tab. I started with Escape in the corner, but my fingers always wanted Tab next to Q. So Tab went into the corner and Escape went where CAPS LOCK usually is. I don't use CAPS LOCK much, so I made it a combo of both space keys, something easy to remember if I ever want it.
  • Numpad. I started with the numbers on the top row of a layer, but they were always awkward, just like they are normally. Then I realized they could be a numpad, with room for arithmetic keys around them. I tried both the left and right sides, and ended up with the right. Because - and _ are used so much, I put those under the index finger, and =+ went to the other side of the numpad.
  • Shift and Ctrl/Alt keys. I tried out putting Shift in the thumb cluster and Ctrl, Alt in the corners where shift usually was, but changing that muscle memory was too hard.
  • Space and Enter. I tried space on the left first, then saw I was making too many errors so I switched them.
Observations from use.
  • I don't miss the number row. The only time I notice it is when typing passwords or phone numbers, where muscle memory knew where the numbers were; I'm creating a new set of muscle memory. And the symbol keys always needed a layer key (Shift) anyway.
  • I use the media, zoom, and window management keys all the time. I never used them on a regular keyboard because I could not remember where each keyboard keeps them, and they were odd key combinations (odd to me). These mean I am using the mouse a lot less.
  • After one month, using a regular keyboard feels uncomfortable: cramped and flat (I have a variety of tenting solutions). I don't remember the split keyboard being much more comfortable when I started, but that is probably because I was dealing with all the changes in geometry.
  • I noticed that I only use the sixth column on the base layer for Tab-Escape-Shift and Delete-<'>-Shift. So I am only six combos away from switching to a 5x3 Corne (must resist . . .)



Friday, October 03, 2025

Setting up a keymap for a Corne split keyboard to be used for data analytics

Continuing my dive into the rabbit hole known as split keyboards, I got a Corne keyboard from YMDK on Amazon.com. A Corne is an open source keyboard (the circuit board and source code are freely available; the original creator is not in the keyboard building and selling business, so he lets others improve his design and sell them). It is a column-staggered board with 3 rows x 6 columns plus 3 thumb keys per side. The goal here is comfort: being able to independently place the halves of the keyboard under my fingers reduces the need to twist my wrists (i.e., reduces repetitive strain injury).

The trick with this keyboard is what to do with the symbols and control modifiers, also known as the keymap. The answer is creating layers, like the shift layer used for capital letters and the symbols on the number keys. So I have a symbol/navigation layer and a number/mouse-control layer.
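The layer idea can be made concrete in firmware terms. The post doesn't name the firmware, but Cornes commonly run QMK; assuming QMK, a layer is just another full keymap array, activated by an MO() ("momentary") key that works exactly like holding Shift. Everything below (layer names, key placements) is an illustrative sketch, not the author's exact keymap:

```c
#include QMK_KEYBOARD_H  // QMK build environment assumed

enum layers { BASE, LOWER };  // a real map adds RAISE/ADJUST the same way

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    // Base layer: QWERTY. MO(LOWER) activates the layer only while held.
    [BASE] = LAYOUT_split_3x6_3(
        KC_TAB,  KC_Q, KC_W, KC_E, KC_R, KC_T,    KC_Y, KC_U, KC_I,    KC_O,   KC_P,    KC_BSPC,
        KC_ESC,  KC_A, KC_S, KC_D, KC_F, KC_G,    KC_H, KC_J, KC_K,    KC_L,   KC_SCLN, KC_QUOT,
        KC_LSFT, KC_Z, KC_X, KC_C, KC_V, KC_B,    KC_N, KC_M, KC_COMM, KC_DOT, KC_SLSH, KC_RSFT,
                   KC_INS, KC_ENT, MO(LOWER),     MO(LOWER), KC_SPC, KC_BSPC
    ),
    // Lower layer: shifted-number symbols on top, VI arrows on hjkl.
    // KC_TRNS ("transparent") falls through to the base layer.
    [LOWER] = LAYOUT_split_3x6_3(
        KC_TRNS, KC_EXLM, KC_AT,   KC_HASH, KC_DLR,  KC_PERC,   KC_CIRC, KC_AMPR, KC_ASTR, KC_LPRN, KC_RPRN, KC_TRNS,
        KC_TRNS, KC_GRV,  KC_LBRC, KC_RBRC, KC_BSLS, KC_TRNS,   KC_LEFT, KC_DOWN, KC_UP,   KC_RGHT, KC_TRNS, KC_TRNS,
        KC_TRNS, KC_TILD, KC_LCBR, KC_RCBR, KC_PIPE, KC_TRNS,   KC_HOME, KC_PGDN, KC_PGUP, KC_END,  KC_TRNS, KC_TRNS,
                          KC_TRNS, KC_TRNS, KC_TRNS,            KC_TRNS, KC_TRNS, KC_TRNS
    ),
};
```

Release the MO() key and you are back on the base layer, which is why the Shift analogy holds: a layer key is just Shift generalized to an arbitrary set of keys.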

There are a few general principles I had in making the keyboard.

  1. The baseline is the standard QWERTY keyboard. I spent a lifetime building up my muscle memory and I'm not going to throw it away.
  2. I wanted to try a numpad on the right side that should also be associated with common math symbols.
  3. I wanted VI type navigation keys  (i.e. h, j, k, l correspond to left, down, up, right)
  4. Keep the symbols associated with shifted number keys on the top row, in order
  5. For symbols that did not make the base layer or the math layer, keep the symbol and the shifted symbol together (shifted symbol below the main one)
  6. As I use it and make mistakes, move characters to the key my fingers wanted them to be.
So, the base layer is as much of the QWERTY layout as could fit. The left column had Tab above Escape above Shift. I started out with Escape in the corner and Tab under it, but clearly my pinky wanted Tab next to Q. For the right column, I started out with Backspace next to P and Delete under my thumb, but a bit of use led me to switch them. The left thumb keys had Insert, Enter, and the lower layer (symbols and navigation), with the Insert key doubling as Control when held down (also known as mod-tap) and Enter doubling as Alt when held. The right thumb keys were the raise layer (numpad, math symbols, and mouse control), Space (Control when held), and Backspace (Alt when held). I also set up home row mods, where on both hands the home row keys doubled as Shift, Control, Alt, and Command when held.


The lower layer was for symbols and navigation. The top row had the symbols that would normally be on the shifted number keys. The left side had the keys that were displaced from the base layer: []{}\|`~. The right side had the VI arrow keys on h, j, k, l. The row below those was navigation within the row: beginning of row, previous word, next word, end of row. The column to the right had page up and page down. The last column on the right had top and bottom of document (or cell, for Jupyter notebooks).



The raise layer was mouse controls, numpad, and math symbols. Left side had mouse controls (left, right, up, down, clicks, scroll) and math logic symbols &|!<>.  Right side was a numpad, with -+/*_= around the numbers.  



The third or adjust layer were to control the keyboard or computer.  Left side was the keyboard, specifically lighting.  Right side were media controls, screen brightness, and screen zoom.



What really made this work was the use of combos. I made combos (two-key combinations) for the bracket-type symbols, mirrored on the left and right sides. This covered <>, (), [], {}. On the inside columns, I made combos for :, <-, |>, and #, which are used in R.
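Assuming QMK firmware (the post doesn't say which firmware the board runs), combos like these are declared as key lists plus an output keycode; multi-character outputs like R's <- and |> need a small macro. The key positions below are hypothetical examples, not the author's actual combo placements:

```c
#include QMK_KEYBOARD_H  // QMK assumed; requires COMBO_ENABLE = yes in rules.mk
                         // (and, on older QMK versions, COMBO_COUNT in config.h)

// Custom keycodes for the multi-character R operators.
enum custom_keycodes { R_ASSIGN = SAFE_RANGE, R_PIPE };

// Each combo is a COMBO_END-terminated list of keys pressed together.
const uint16_t PROGMEM lprn_combo[]  = {KC_D, KC_F, COMBO_END};
const uint16_t PROGMEM rprn_combo[]  = {KC_J, KC_K, COMBO_END};
const uint16_t PROGMEM rasn_combo[]  = {KC_F, KC_G, COMBO_END};
const uint16_t PROGMEM rpipe_combo[] = {KC_H, KC_J, COMBO_END};

combo_t key_combos[] = {
    COMBO(lprn_combo,  KC_LPRN),   // d+f -> (
    COMBO(rprn_combo,  KC_RPRN),   // j+k -> ), mirrored on the right hand
    COMBO(rasn_combo,  R_ASSIGN),  // f+g -> <-
    COMBO(rpipe_combo, R_PIPE),    // h+j -> |>
};

// Single keycodes can't emit two characters, so the R operators
// are sent as strings when their custom keycode fires.
bool process_record_user(uint16_t keycode, keyrecord_t *record) {
    if (record->event.pressed) {
        switch (keycode) {
            case R_ASSIGN: SEND_STRING("<- "); return false;
            case R_PIPE:   SEND_STRING("|> "); return false;
        }
    }
    return true;
}
```

Mirroring is just a second combo on the opposite hand's columns, which is what keeps left brackets under the left hand and right brackets under the right.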

Actually, this was a major modification. My initial layout had the numbers along the top row on the raise layer. But while I don't use numbers that much, when I need them I need many of them, and a numpad is less awkward.  I think.  I will know after much more use.

And here are my keyboards. I have two Cornes: one MX with Akko Dracula (low weight linear) switches and XDA profile keycaps, and a low profile with Kailh low profile silent tactile switches and low profile MX keycaps. And a Sofle with Outemu silent tactile switches and XDA keycaps. The Keychron K12 with Cherry Reds and OEM profile keys is what I was using before, as a reference. The Sofle is used with my work computer (having a number row is useful for making passwords smoother). The low profile Corne is packed in a bandana and a bag for travel. The MX Corne is used for my personal/non-work laptop. Since I got my first Corne in late August, this represents a big dive into the rabbit hole of split keyboards.

For tenting, the Sofle has M5 bolts that came with it from Ergomech, the low profile Corne uses Steepy laptop risers, which are what I take when out and about, and the MX Corne sits on Cooper Cases MagSafe stands.






Tuesday, September 23, 2025

Book review: AI Snake Oil by Arvind Narayanan and Sayash Kapoor

AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan
My rating: 4 of 5 stars

I read AI Snake Oil as part of the INFORMS Book Club. I work with predictive AI and generative AI at work, and I describe what I do as figuring out how AI fails, then work with my business partners to develop a process and application to make AI useful and productive. This book falls into the category of demonstrating how AI fails.

There are several chapters, each discussing a way that AI fails and how the authors figured it out. They follow a pattern. First, the failures of AI are in part due to how a particular model is trained: the training data does not match the intended use, such as when the data actually represents one characteristic but the model is being used to predict something else. Next, they discuss how the people who made the model do not always have an incentive to get it right. In particular, the large AI companies have little incentive to either evaluate the quality of their models or improve them.

Some things I think they do well.
1. Differentiate between various generations of AI. They specifically break out predictive AI, generative AI, and symbolic AI, each of which works differently from the others.
2. Focus on the training data. This is where AI models need to be examined (by definition, AI does not include a description of the system it models, so predictive and generative AI have to learn about the world through large amounts of diverse data), and failures come from that data not matching the setting where the model is applied.
3. Be skeptical of claims that come from computer companies. I always say don't let people selling you things define terms. They also say don't let industry set the rules, the standards, or the barriers to entry, because their goal is to defend their market share, not to benefit society.

This is a good book to read, especially as part of a discussion. Highly recommended.

