Sunday, July 13, 2025

Adventures in core.logic: learning clojure and logic programming with help from Gen AI

 


This past month my project has been to learn logic programming, and, as a vehicle to do this, to learn Clojure (again).  For those who are not computer scientists, logic programming is one of the four main programming paradigms: procedural (what most people learn in an introductory programming class), object oriented (what most computer science programs and professional programmers aim for; Java, C++, C#, and Ruby are all examples of OO languages), functional programming (Lisp and its relatives), and logic programming.  The closest most people get to logic programming is SQL, which is declarative: it works by expressing the desired outcome, not the steps to get there.  The best known logic programming language is Prolog.  A more recent expression of logic programming is miniKanren, a domain-specific language originally implemented in Scheme; there are implementations in other languages, whose quality seems to track how well those languages support functional programming.  This essay looks at (1) learning Clojure (a Lisp that runs on the Java Virtual Machine), (2) learning logic programming, (3) learning core.logic, the implementation of miniKanren in Clojure, and (4) using Generative AI to help with all of these things.
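
To make "expressing the outcome, not the steps" concrete, here is a toy core.logic query (my own example, not from any of the materials mentioned below): we state the constraints a value must satisfy and ask for everything that satisfies them, leaving the search to the library.

    (require '[clojure.core.logic :as l])

    ;; Ask for every q that is a member of both lists.
    ;; We describe what q must be; how to find it is the library's job.
    (l/run* [q]
      (l/membero q [1 2 3])
      (l/membero q [2 3 4]))
    ;; => (2 3)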

This is my second exposure to Clojure, which is a Lisp (a functional programming language) that runs on the Java Virtual Machine. The big draw is that it provides a functional way of working while allowing use of all Java libraries.  As a data scientist, the advantage of functional programming is that it is a much better style for data manipulation. For example, using R with the tidyverse is functional-style programming in that you perform operations on data frames that return data frames, which allows piping/sequencing of any functions that conform to this pattern. (Pandas in Python is a flawed version of this, as not all functions in Pandas follow this rule.)
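
The same pattern is idiomatic in Clojure with the threading macros. A small sketch (the data here is made up purely for illustration):

    ;; Each step takes a collection and returns a collection,
    ;; so the steps compose into a pipeline, much like a dplyr chain.
    (->> [{:city "PIT" :temp 71} {:city "CLE" :temp 65} {:city "PIT" :temp 80}]
         (filter #(= "PIT" (:city %)))
         (map :temp)
         (reduce + 0))
    ;; => 151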

My first run with Clojure was around 2014 (so says my GitHub timeline). At the time the Incanter project was trying to establish Clojure as a data analysis environment on the JVM, with the goal of being used in corporate IT departments that had standardized on the JVM (which places obstacles in the way of using Python or R).  And it was good enough that I wrote a model and associated analysis in Clojure for an attempted startup (a clean implementation that was not done at any of our home organizations). But the Incanter project stalled. More recently, a broader effort to bring data analysis/scientific computing capabilities to Clojure, Scicloj, shows promise.  One standard mantra I can confirm: Lisp claims that because it has very little syntax, it is easy to learn.  I would agree. After almost 10 years away, a short online course and a review of some books I had from a decade ago got me pretty much up to speed.  Because when everything is a list, the question becomes what the form of that list is for the task/function/library at hand.  That is easier than any other language I work with, where I have to learn the philosophy of every package (or collection of packages, in the case of the tidyverse in R) that I use.  In addition, the tooling was easier. Visual Studio Code has the Calva extension, which makes working with Clojure projects automatic (pretty much anything on the Java Virtual Machine needs an IDE to handle project setup, so a good IDE is essential).
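
A quick illustration of what "very little syntax" means in practice (my own toy snippets): every construct is a parenthesized list whose first element says what the rest of the list means.

    (+ 1 2 3)                        ; a function call is a list
    (defn square [x] (* x x))        ; a definition is a list too
    (map square (range 5))           ; => (0 1 4 9 16), and the result is also a list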

For learning logic programming, I started with some Prolog materials, because that would allow me to focus on the logic and thinking part (Prolog is also fairly sparse in syntax).  I got Adventure in Prolog by Dennis Merritt and followed along with implementing the Nani adventure game as well as the genealogy exercise that is developed over the entire book.  But I was always going to move to miniKanren, because in any conceivable use, I would be integrating logic programming into something else.

My first two attempts at moving from Prolog to a general-purpose programming language were with Julia and Clojure.  With Julia, there was Julog (an attempt to follow Prolog patterns in the Julia language). This seemed serviceable, although all I did was the adventure game. Then I looked at the Julia miniKanren projects.  All of them were the beginnings of an implementation, but not complete enough to do anything.  (Scheme and miniKanren both have a reputation for being a budding language creator's first target because they are so simple to write, but then said creator's attention goes somewhere else.)  And even though I have used Julia in the past, I basically had to learn it over again because it changes every version (I review books for computer publishers, so I have had a chance to look at Julia every now and then, and it does feel like I am starting over every time).

Clojure has the advantage that the main language is very stable (and since it is a Lisp it has the advantage of having seen the history of language design decisions, good and bad).  They have a fun graphic showing the history of the source code changing, which looks like layers, instead of comparable graphics for other language projects that look like landslides.  But the same cannot be said about core.logic.  When core.logic first came out it was unique in the sense that it was an implementation of logic programming in a relatively mainstream computing environment (logic programming makes a lot more sense in a Lisp-type programming environment than in an Algol-type object oriented/procedural environment).  So there are a lot of early tutorials. But around version 0.8.5 or so there was a major change in the core.logic library organization, and a sub-library was created to hold all of the non-logic things, which includes things like facts and data.  This broke all of the tutorials. And, like faddish things, no one updated their tutorials, so all of the tutorials that everyone points to are from 0.7.6 or so. So as I repeated the Adventure in Prolog exercises, the getting-started introduction was easy, but I had to discover that there was a new way of doing things that involve actual data (as opposed to pure logic exercises), and I redid the Nani adventure and the bird expert system using the new core.logic and core.logic.pldb structure.
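
For anyone else landing here from the older tutorials, here is a minimal sketch of the newer pldb style (the relation and facts are my own made-up genealogy, not the book's): relations are declared with db-rel, and facts live in an explicit database value rather than the global state the old defrel/fact forms used.

    (require '[clojure.core.logic :as l]
             '[clojure.core.logic.pldb :as pldb])

    ;; Declare a relation and build a database of facts.
    (pldb/db-rel parent p c)

    (def facts
      (pldb/db
       [parent 'tom 'bob]
       [parent 'tom 'liz]))

    ;; Queries run against a database supplied with with-db.
    (pldb/with-db facts
      (l/run* [child]
        (parent 'tom child)))
    ;; => (bob liz)   (order may vary)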

The bird expert system exercise was particularly difficult. I had not actually done this set of exercises when I went through the Adventure in Prolog book (it does not start until about halfway through), so I tried to start from someone else's Prolog solution.  That completely failed.  So I used OpenAI's ChatGPT and Google Gemini to help me. Neither of them got it completely right, but they got me on the right track, and my solution does not look anything like the Prolog solution. The types of mistakes the Gen AIs made were interesting.

Generative AI works by going through the training data (essentially the internet) and, using the tokens (roughly a word, sometimes part of a word, and sometimes a phrase) in the query, identifying other uses of that set of tokens and coming up with a probability distribution over options for the next token.  It then chooses the next token randomly based on the calculated probabilities. Then, including the token it just added, it repeats the process to get the next token, and so on.  The randomness is what gives Gen AI its creativity instead of just being a search engine. But it also leads to mistakes, as the Gen AI does not actually understand any of its source texts, so it does not recognize the context of its sources or the fact that some sources may not actually go with others.
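
A toy sketch of just the sampling step described above (the candidate tokens and probabilities are invented; producing that distribution from the preceding tokens is what the trained model actually does):

    ;; Pick one candidate token at random, in proportion to its probability.
    (defn sample-next-token [token->prob]
      (let [r (rand (reduce + (vals token->prob)))]
        (loop [[[tok p] & more] (seq token->prob)
               acc 0.0]
          (let [acc (+ acc p)]
            (if (or (nil? more) (< r acc))
              tok
              (recur more acc))))))

    (sample-next-token {"cat" 0.6 "dog" 0.3 "fish" 0.1})
    ;; => usually "cat", sometimes "dog" or "fish"; run it a few times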

This gets more problematic in a subject like core.logic, where the majority of the texts on the internet are out of date in a breaking way. Normally I say that Gen AI is particularly good at computing-related topics, but that is because of the vast quantity of material available on the various message boards where programmers and computing professionals ask questions and get them answered.  Clojure's core.logic is very different: there is not much material (Clojure is not one of the more common languages, and logic programming is also a small niche), and there are at least three different eras, which are not mutually compatible.  And since modern examples do not overwhelm historical ones in quantity, things get mixed together.

Now, how big of a problem is this?  In my experience using Generative AI to aid in programming (again, I am a data scientist, so I am interested in data-related issues), Generative AI is good at providing programming structure and style (which is very useful; (re-)learning new APIs is time consuming), but it regularly gets the logic and the model wrong. As a scientist, logic and the model are things I am good at, so I don't mind examining code to correct them; I wanted the help getting the thing into a running state!  This is why, despite Microsoft reporting 40% error rates in Copilot-generated code and OpenAI reporting 70% failure rates in software engineering projects when using Gen AI, professional programmers still find Generative AI to be very useful.  It does get things like how to work with an API right, and it has pretty good programming style (with appropriate commenting!).  But the logic, which the Gen AI gets wrong, is something that any competent programmer does not mind doing themselves.

The key to using Generative AI is the same as for other things. It is good at style and structure, not so good at facts and logic. But facts and logic are what subject matter experts are good at (and most subject matter experts are not so good at style and structure).  So a trained SME can play to a Gen AI's strengths and deal with its weaknesses, but only if the human is paying attention.

Next steps: repeating the Adventure in Prolog exercises, but using the Kanren library in Python.



Tuesday, June 17, 2025

Why take opportunities for public speaking as an analytics professional

For many of us in technical fields, public speaking often feels like a skill we left behind in school or perhaps dusted off for job interviews, especially if our roles involved training or teaching. Once we're in the professional world, the focus tends to shift solely to our day-to-day tasks, and public speaking opportunities seem to dwindle. However, effective communication is crucial for professional growth, and unfortunately, workplaces don't always provide sufficient feedback on technical presentations.

This is where engaging with local professional communities can be incredibly valuable. While I've had the privilege of speaking at professional society conferences, I've also found immense benefit in giving talks within local technical organizations. Many metropolitan areas are familiar with these as "Meetups," named after the platform that serves as their online home. These local speaking engagements offer distinct advantages compared to academic talks or large industry conferences.

Low-Stakes Practice Environment


One significant benefit of giving technical talks locally is the opportunity for low-stakes public speaking practice. These communities are typically smaller, comprising individuals genuinely interested in professional development. Because many members also use these meetings as a platform to share their own insights, the environment is inherently supportive and sympathetic. It's a space free from the competitiveness that can sometimes arise when individuals are trying to build a reputation or feel they're in direct competition. This fosters a very friendly atmosphere for honing your presentation skills, where attendees genuinely want to see you succeed.

Sharpening Your Communication


Secondly, preparing a talk for a public audience compels you to think critically about what truly matters. In a work setting, it's easy to gloss over foundational concepts because everyone involved in a project is assumed to have that background. In a public forum, you're required to identify the essential information and ensure you cover it as necessary background. This is particularly true for work-related topics when you might need to use public datasets (as most companies don't permit the use of proprietary data for more informal talks). This process forces you to consider what's important for your audience and what's technically crucial. It's an excellent exercise in organizing your thoughts and effectively communicating them, a skill that translates seamlessly back to your work when you realize not everyone on your team has the same background knowledge.

Building Professional Community


Finally, these local groups are instrumental in fostering community. Recent articles in local Pittsburgh publications have highlighted the increasing difficulty of forming social connections after school, and professional colleagues, while valuable, often don't entirely fill this gap due to shorter average tenures at companies and the inherent limitations of work-only relationships. Professional organizations offer the unique advantage of being specific enough to align with shared interests, yet broad enough to expose you to ideas beyond your immediate work. Giving a talk provides a natural reason for others to engage with you, sparking discussions and building relationships that can extend far beyond any single job.

Monday, April 14, 2025

(DRAFT) What do university departments provide to the employers of their students (data science)

 I gave a talk at the 2025 INFORMS (Institute for Operations Research and the Management Sciences) Analytics+ conference (i.e., industry-practice focused as opposed to research focused) on Where Should the Analysts Live: Organizing Analytics within the Enterprise. The talk was a result of many organizations asking whether analytics should be managed within companies in a centralized or decentralized way.  One of the topics that came up is the fact that much of the practice of data science is learned on the job.  Some people may ask whether teaching this is the job of universities. I would argue that the practice of data science is so large that this is an impossible ask, and I do so from the perspective of someone who for a while was an industry-focused professor within an R1 engineering department.

First, what is data science?  Drew Conway still gives the best definition that I have seen, in his data science Venn diagram:
[Image: Drew Conway's Data Science Venn Diagram]

Math/stats are the full range of analytical methods as well as the scientific method (the 'science' of data science).  Hacking skills are the computer programming, software engineering, and data engineering specific to working with data (as opposed to what is generally emphasized by academic computer science). Substantive expertise is the subject domain of the work, but it also includes the specifics of the company such as understanding its markets, its customers, and its strategy.

Math/stats is in principle the domain of our university departments.  But university departments are specialists (and research faculty are hyper-specialists).  There are two problems with expecting university departments to cover the full range of math/stats that may be needed at a particular company.  First, university departments focus on a particular domain, so it is not expected that they cover other areas of data analysis that a company may need based on its particular interests. Second, they have limited faculty time, and unless you are at a very large state university with a particular mission to cover the full range of a subject area, the faculty of a small or medium-sized department cannot cover the full range of topics associated with a given field of knowledge.  So departments create undergraduate or graduate programs to cover a foundation, then allow students to specialize (in areas that the department can cover with the faculty they have).  As a non-tenure-stream professor, I would explain to students that departments hire to cover a wide range of their field, so they generally do not have much duplication; but each department has to make a conscious choice about what they will and will not cover every time they make a hiring decision.

So what is a university promising with its graduates?  A base set of knowledge and methods (and methods are more important than knowledge, because it is easy to refresh knowledge, while you actually need practice with methods); for STEM (and the social sciences), the scientific method, which creates understanding through iterative experimentation and statistical analysis of experimental results; and, most crucially, the capability of learning a technical area. This ability to learn is arguably the most important part of the whole exercise.  The world is a big place, and a 17-year-old high school student will not be able to predict what the next 40 years will be like, so what a 22-year-old college graduate is capable of today will be nothing like what she will do over the course of a career. It is hard to develop this ability without college. High school tends to be focused on what you know, and it is too easy in most jobs to just keep doing what you are doing now, unless you already have the experience of having to learn new or different domains.  For example, in most STEM and social science fields, statistics is a side knowledge domain; but for those who go into data science, the fact that they learned statistics makes learning applied machine learning easy.  And the scientific method, while it may not be the thing you think about when you think about engineering or economics, is ingrained into the methods by which those fields see the world.  It is relatively easy to teach skills; it is much harder to teach mindset or the ability to learn new ways to think.

Is there anything different about artificial intelligence? Actually, yes, which makes it easy for STEM- and social-science-trained people to learn, but also dangerous.  By definition (see Section 238(g) of the National Defense Authorization Act of 2019), artificial intelligence systems are those that perform tasks without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. In particular, this means that the creators of an artificial intelligence system or model do not have to know how the system the AI is being added to works. For those in the mathematical sciences (e.g., mathematics, statistics, applied math, operations research, computer science), this is incomprehensible: even the most theoretical researcher has a core belief that any application of mathematical models involves representing important aspects of the system in mathematical form.  But this is what makes AI (such as machine learning) relatively easy to use in practice and gives it a low barrier to entry.  However, if someone, like a company, actually has subject matter expertise relevant to the problem at hand, not incorporating that expertise into the model is lost value.

Is it enough to be able to learn new skills as needed?  No, we also have to be able to learn to think differently.  The most prominent example is Generative AI. For those who only have knowledge and skills, Generative AI is a completely new thing.  For those who are able to come up with new ways of thinking, Generative AI is a combination and extension of deep neural nets, natural language processing, and reinforcement learning, trained on the published internet.  Its strengths and weaknesses are not random facts akin to gotchas, but are based on characteristics related to its origins. Knowing that makes a world with Generative AI different, but something that we can work with.  This past week I went to a seminar on quantum computing. The mathematics is completely beyond me, but I could understand enough to recognize the reason for its promise, what is lacking, and some of the key intermediate steps that have to happen if it is ever to reach the promise that many talk about.  This practice of being faced with completely new subject domains is something I do frequently.

So what can companies expect from the graduates who come from their university partners (whether through formal relationships or merely through hiring in the community)?  Sometimes it is a collection of specific skills. But more important, a college graduate comes with a testament that the person is able to learn a range of skills and knowledge that are part of a cohesive whole and put them to use. And having done so once, that person will be able to do it again over a 40-year career.