Sunday, July 13, 2025

Adventures in core.logic: learning Clojure and logic programming with help from Gen AI

This past month my project has been to learn logic programming and, as a vehicle to do this, to learn Clojure (again).  For those who are not computer scientists, logic programming is one of the four main programming paradigms: procedural (what most people learn in an introductory programming class), object-oriented (what most computer science programs and professional programmers aim for; Java, C++, C#, and Ruby are all examples of OO languages), functional programming (Lisp and its relatives), and logic programming.  The closest most people get to logic programming is SQL, which is declarative: it works by expressing the desired outcome, not the steps to get there.  The best-known logic programming language is Prolog.  A more recent expression of logic programming is miniKanren, a domain-specific language originally implemented in Scheme, though there are implementations in other languages, whose quality seems to track how well those languages support functional programming.  This essay looks at (1) learning Clojure (a Lisp that runs on the Java Virtual Machine), (2) learning logic programming, (3) learning core.logic, the implementation of miniKanren in Clojure, and (4) using Generative AI to help with all of these.

This is my second exposure to Clojure, a Lisp (a functional programming language) that runs on the Java Virtual Machine. The big draw is that it provides a functional way of working while allowing use of all Java libraries.  As a data scientist, the advantage of functional programming is that it is a much better style for data manipulation. For example, using R with the tidyverse is functional-style programming: you perform operations on data frames that return data frames, which allows piping/sequencing of functions that conform to this pattern. (Pandas in Python is a flawed version of this, as not all functions in Pandas follow this rule.)
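
To make that pattern concrete, here is a minimal Clojure sketch of the data-in, data-out style (the data and field names are invented for illustration):

    ;; Toy data: each map plays the role of a row in a data frame.
    (def orders
      [{:region "east" :amount 120}
       {:region "west" :amount 80}
       {:region "east" :amount 40}])

    ;; The ->> threading macro pipes a collection through a chain of
    ;; functions, much like the tidyverse pipe in R.
    (->> orders
         (filter #(> (:amount %) 50)) ; keep rows with amount > 50
         (map :amount)                ; pull out the amount column
         (reduce +))                  ; sum what is left
    ;; => 200

Because filter, map, and reduce all take a collection and return something usable by the next step, they chain the same way tidyverse verbs do.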

My first run with Clojure was around 2014 (so says my GitHub timeline). At the time the Incanter project was trying to establish Clojure as a data analysis environment on the JVM, with the goal of being used in corporate IT departments that had standardized on the JVM (which places obstacles in the way of using Python or R).  And it was good enough that I had written a model and associated analysis in Clojure for an attempted startup (a clean implementation which was not done at any of our home organizations). But the Incanter project stalled. More recently, a broader effort, Scicloj, to bring data analysis/scientific computing capabilities to Clojure shows promise.  One standard mantra I can confirm: Lisp claims that because it has very little syntax, it is easy to learn.  I would agree. After almost 10 years, a short online course and a review of some books I had from a decade ago got me pretty much back up to speed.  Because when everything is a list, the question becomes what the form of that list is for the task/function/library at hand.  This is easier than any other language I work with, where I have to learn the philosophy of every package I use (or collection, in the case of the tidyverse on R).  In addition, the tooling was easier. Visual Studio Code has the Calva extension, which makes working with Clojure projects automatic. (Pretty much anything on the Java Virtual Machine needs an IDE to handle project setup, so a good IDE is essential.)
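
To illustrate the "everything is a list" point, each of these Clojure forms is just a list whose first element says what kind of form it is (a toy example of my own, not from any tutorial):

    (defn square [x] (* x x)) ; a list of the form (defn name args body)
    (let [y (square 3)]       ; a list of the form (let bindings body)
      (+ y 1))                ; => 10

Learning a new library is then mostly a matter of learning what shapes of lists it expects.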

For learning logic programming, I started with some Prolog materials, because that would allow me to focus on the logic and thinking part (Prolog is also fairly sparse in syntax).   I got Adventure in Prolog by Dennis Merritt and followed along, implementing the Nani adventure game as well as the genealogy exercise that is developed over the course of the book.  But I was always going to move to miniKanren, because in any conceivable use I would be integrating logic programming into something else.

My first two attempts at moving from Prolog to a general programming language were with Julia and Clojure.  With Julia, there was Julog (an attempt to follow Prolog patterns within the Julia language). This seemed serviceable, although all I did was the adventure game. Then I looked at the Julia miniKanren projects.  All of them were the beginnings of an implementation, but not complete enough to do anything.  (Scheme and miniKanren both have a reputation for being a budding language creator's first target because they are so simple to write, but then said creator's attention goes somewhere else.)  And even though I have used Julia in the past, I basically had to learn it over again, as it changes every version (I review books from computer publishers, so I have had a chance to look at Julia every now and then, and it does feel like I'm starting over every time).

Clojure has the advantage that the main language is very stable (and since it is a Lisp, it has the advantage of having seen the history of language decisions, good and bad).  They have a fun graphic showing the history of the source code changing, which looks like layers, in contrast to comparable graphics for other language projects that look like landslides.  But the same cannot be said about core.logic.  When core.logic first came out it was unique in the sense that it was an implementation of logic programming in a relatively mainstream computing environment (logic programming makes a lot more sense in a Lisp-type programming environment than in an Algol-type object-oriented/procedural one).  So there are a lot of early tutorials. But around version 0.8.5 or so there was a major change in the core.logic library organization, and a sub-library, core.logic.pldb, was created to hold all of the non-logic things, which includes facts and data.  This broke all of the tutorials. And as with faddish things, no one updated their tutorials, so all of the tutorials that everyone points to are from 0.7.6 or so. As I repeated the Adventure in Prolog exercises, the getting-started introduction was easy, but I had to discover for myself that there is a new way of doing things that involves actual data (as opposed to pure logic exercises), and I redid the Nani adventure and the bird expert system using the new core.logic and core.logic.pldb structure.
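
For anyone who hits the same wall, here is a minimal sketch of the post-0.8.x way of doing things, with facts living in an immutable core.logic.pldb database value rather than in global assertions (the family facts are invented for illustration):

    (require '[clojure.core.logic :as l]
             '[clojure.core.logic.pldb :as pldb])

    ;; Declare a relation, then build a database of facts.
    (pldb/db-rel parent p c)

    (def facts
      (pldb/db
       [parent 'dennis 'diana]
       [parent 'diana 'david]))

    ;; A derived goal, in the spirit of a Prolog rule.
    (defn grandparento [g c]
      (l/fresh [p]
        (parent g p)
        (parent p c)))

    ;; Queries run against an explicitly supplied database.
    (pldb/with-db facts
      (l/run* [q] (grandparento 'dennis q)))
    ;; => (david)

The older tutorials instead use defrel and fact to mutate a global fact store, which is exactly the part that no longer works.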

The bird expert system exercise was particularly difficult. I had not actually done this set of exercises when I went through the Adventure in Prolog book (it does not start until about halfway through).  So I tried to start from someone else's Prolog solution, and that completely failed.  So I used OpenAI's ChatGPT and Google Gemini to help me. Neither of them got it completely right, but they got me on the right track, and my solution does not look anything like the Prolog solution. The types of mistakes the Gen AIs made were interesting.
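
To give a flavor of where I ended up (a sketch in the spirit of my solution, not the actual code; relation and attribute names are invented), a Prolog rule along the lines of "it is a laysan albatross if its family is albatross and its color is white" becomes a goal over pldb facts describing the observed bird:

    ;; Facts about the bird under observation.
    (pldb/db-rel attr key value)

    (def observed
      (pldb/db
       [attr :family :albatross]
       [attr :color :white]))

    ;; An identification rule as a core.logic goal.
    (defn laysan-albatrosso [b]
      (l/all
       (attr :family :albatross)
       (attr :color :white)
       (l/== b :laysan-albatross)))

    (pldb/with-db observed
      (l/run* [q] (laysan-albatrosso q)))
    ;; => (:laysan-albatross)

The Gen AI versions tended to mix this pldb style with the older global-fact style in a single answer, which is what made their output a starting point rather than a solution.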

Generative AI works by going through its training data (essentially the internet) and, using the tokens (roughly a word, sometimes part of a word, and sometimes a phrase) in the query, identifying other uses of that set of tokens and coming up with a probability distribution over options for the next token.  It then chooses the next token randomly based on the calculated probabilities. Then, including the token it just added, it repeats the process to get the next token, and so on.  The randomness is what gives Gen AI its creativity, instead of it just being a search engine. But it also leads to mistakes, as the Gen AI does not actually understand any of its source texts, so it does not recognize the context of its sources or the fact that some sources may not actually go with others.
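
A toy sketch of that sampling step (tokens and probabilities invented; a real model scores a vocabulary of tens of thousands of tokens at every step):

    ;; Pick a token at random, weighted by its probability.
    (defn sample-token [probs] ; probs: map of token -> probability
      (let [r (rand)]          ; uniform draw in [0, 1)
        (loop [[[tok p] & more] (seq probs)
               acc 0.0]
          (if (or (< r (+ acc p)) (empty? more))
            tok
            (recur more (+ acc p))))))

    (sample-token {"cat" 0.7, "dog" 0.2, "fish" 0.1})
    ;; => usually "cat", sometimes "dog" or "fish"

Generation is this draw in a loop: append the sampled token to the context, rescore, and draw again.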

This gets more problematic in a subject like core.logic, where the majority of the texts on the internet are out of date in a breaking way. Normally I say that Gen AI is particularly good at computing-related topics, but that is because of the vast quantity of material available on the various message boards where programmers and computing professionals ask questions and get them answered.  Clojure core.logic is very different: there is not much material (Clojure is not one of the more common languages, and logic programming is also a small niche), and there are at least three different eras, which are not mutually compatible.  And since modern examples do not overwhelm historical ones in quantity, things get mixed together.

Now, how big of a problem is this?  In my experience using Generative AI to aid in programming (again, I am a data scientist, so I am interested in data-type issues), Generative AI is good for providing programming structure and style (which is very useful; (re-)learning APIs is time consuming), but it regularly gets the logic and the model wrong. As a scientist, logic and the model are things I am good at, so I don't mind examining code to correct them; I wanted the help in getting the thing into a running state!  This is why, despite Microsoft reporting 40% error rates in Copilot-generated code and OpenAI reporting 70% failure rates in software engineering projects when using Gen AI, professional programmers still find Generative AI to be very useful.  It does get things like how to work with an API right, and it has pretty good programming style (with appropriate commenting!).  But logic, which the Gen AI gets wrong, is something that any competent programmer does not mind doing themselves.

The key to using Generative AI is the same as for other things. It is good for style and structure, not so good for facts and logic. But facts and logic are what subject matter experts are good at (and most subject matter experts are not so good at style and structure). So a trained SME can play to a Gen AI's strengths and deal with its weaknesses, but only if the human is paying attention.

Next step: repeating the Adventure in Prolog exercises, but using the Kanren library in Python.



Tuesday, June 17, 2025

Why take opportunities for public speaking as an analytics professional

For many of us in technical fields, public speaking often feels like a skill we left behind in school or perhaps dusted off for job interviews, especially if our roles involved training or teaching. Once we're in the professional world, the focus tends to shift solely to our day-to-day tasks, and public speaking opportunities seem to dwindle. However, effective communication is crucial for professional growth, and unfortunately, workplaces don't always provide sufficient feedback on technical presentations.

This is where engaging with local professional communities can be incredibly valuable. While I've had the privilege of speaking at professional society conferences, I've also found immense benefit in giving talks within local technical organizations. Many metropolitan areas are familiar with these as "Meetups," named after the platform that serves as their online home. These local speaking engagements offer distinct advantages compared to academic talks or large industry conferences.

Low-Stakes Practice Environment


One significant benefit of giving technical talks locally is the opportunity for low-stakes public speaking practice. These communities are typically smaller, comprising individuals genuinely interested in professional development. Because many members also use these meetings as a platform to share their own insights, the environment is inherently supportive and sympathetic. It's a space free from the competitiveness that can sometimes arise when individuals are trying to build a reputation or feel they're in direct competition. This fosters a very friendly atmosphere for honing your presentation skills, where attendees genuinely want to see you succeed.

Sharpening Your Communication


Secondly, preparing a talk for a public audience compels you to think critically about what truly matters. In a work setting, it's easy to gloss over foundational concepts because everyone involved in a project is assumed to have that background. In a public forum, you're required to identify the essential information and ensure you cover it as necessary background. This is particularly true for work-related topics when you might need to use public datasets (as most companies don't permit the use of proprietary data for more informal talks). This process forces you to consider what's important for your audience and what's technically crucial. It's an excellent exercise in organizing your thoughts and effectively communicating them, a skill that translates seamlessly back to your work when you realize not everyone on your team has the same background knowledge.

Building Professional Community


Finally, these local groups are instrumental in fostering community. Recent articles in local Pittsburgh publications have highlighted the increasing difficulty of forming social connections after school, and professional colleagues, while valuable, often don't entirely fill this gap due to shorter average tenures at companies and the inherent limitations of work-only relationships. Professional organizations offer the unique advantage of being specific enough to align with shared interests, yet broad enough to expose you to ideas beyond your immediate work. Giving a talk provides a natural reason for others to engage with you, sparking discussions and building relationships that can extend far beyond any single job.

Monday, April 14, 2025

(DRAFT) What do university departments provide to the employers of their students (data science)

I gave a talk at the 2025 INFORMS (Institute for Operations Research and the Management Sciences) Analytics+ conference (i.e., industry-practice focused as opposed to research focused) on Where Should the Analysts Live: Organizing Analytics within the Enterprise. The talk was a result of many organizations asking whether analytics within companies should be managed in a centralized or decentralized way.  One of the topics that came up is the fact that much of the practice of data science is learned on the job.  Some people may ask if this is the job of universities. I would argue that the practice of data science is so large that this is an impossible ask. I do so from the perspective of someone who for a while was an industry-focused professor within an R1 engineering department.

First, what is data science?  Drew Conway still gives the best definition that I have seen, in his Data Science Venn Diagram:

[Image: Drew Conway's Data Science Venn Diagram, the intersection of hacking skills, math & statistics knowledge, and substantive expertise]

Math/stats are the full range of analytical methods as well as the scientific method (the 'science' of data science).  Hacking skills are the computer programming, software engineering, and data engineering specific to working with data (as opposed to what is generally emphasized by academic computer science). Substantive expertise is the subject domain of the work, but it also includes the specifics of the company such as understanding its markets, its customers, and its strategy.

Math/stats is in principle the domain of our university departments.  But university departments are specialists (and research faculty are hyper-specialists).  There are two problems with expecting university departments to cover the full range of math/stats that may be needed at a particular company.  First, university departments focus on a particular domain, so it is not expected that they cover other areas of data analysis that a company may need based on its particular interests. Second, they have limited time, and unless you are at a very large state university with a particular mission to cover the full range of a subject area, the faculty of a small or medium-sized department cannot cover the full range of topics associated with a given field of knowledge.  So departments create undergraduate or graduate programs to cover a foundation, then allow students to specialize (in areas that the department can cover with the faculty they have).  As a non-tenure-stream professor, I would explain to students that departments hire to cover a wide range of their field, so they generally do not have much duplication. But each department has to make a conscious choice about what it will and will not cover every time it makes a hiring decision.

So what is a university promising with their graduates?  The base set of knowledge and methods (and methods are more important than knowledge, because it is easy to refresh knowledge, while you actually need practice with methods); for STEM (and the social sciences), the scientific method that creates understanding through iterative experimentation and statistical analysis of experimental results; and most crucially, the capability of learning a technical area. This ability to learn is arguably the most important part of the whole exercise.  Because the world is a big place, and a 17-year-old high school student will not be able to predict what the next 40 years will be like. So what a 22-year-old college graduate is capable of will be nothing like what she will do over the course of a career. It is hard to develop this ability without college. High school tends to be focused on what you know.  And it is too easy in most jobs to just keep doing what you are doing now, unless you already have the experience of having to learn new/different domains.  For example, in most STEM fields and the social sciences, statistics is a side knowledge domain. But for those who go into data science, the fact that they learned statistics makes learning applied machine learning easy.  And the scientific method, while it may not be the thing you think about when you think about engineering or economics, is ingrained into the methods by which they see the world.  It is relatively easy to teach skills; it is much harder to teach mindset or the ability to learn new ways to think.

Is there anything different about artificial intelligence? Actually, yes, which makes it easy to learn for STEM and social science trained people, but also dangerous.  By definition (see Section 238(g) of the National Defense Authorization Act of 2019), artificial intelligence systems are those which perform tasks without significant human oversight, or which can learn from experience and improve performance when exposed to data sets. In particular, this means that the creators of an artificial intelligence system or model do not have to know how the system that the AI is being added to works. For those in the mathematical sciences (e.g., mathematics, statistics, applied math, operations research, computer science), this is incomprehensible. Even the most theoretical researcher has a core belief that any application of mathematical models involves representing important aspects of the system in mathematical form.  But this makes AI (such as machine learning) relatively easy to use in practice, and it has a low barrier to entry.  But if someone, like a company, actually has subject matter expertise relevant to the problem at hand, not incorporating that expertise into the model is lost value.

Is it enough to be able to learn new skills as needed?  No, we also have to be able to learn to think differently.  The most prominent example is Generative AI. For those who only have knowledge and skills, Generative AI is a completely new thing.  For those who are able to come up with new ways of thinking, Generative AI is a combination and extension of deep neural nets, natural language processing, and reinforcement learning, trained on the published internet.  And its strengths and weaknesses are not random facts akin to gotchas, but are based on characteristics related to its origins. Knowing that makes a world with Generative AI different, but something we can use.   This past week I went to a seminar on quantum computing. The mathematics is completely beyond me, but I could understand enough to recognize the reason for its promise, what is lacking, and some sense of the key intermediate steps that have to happen if it is ever to reach the promise that many talk about.  And this practice of being faced with completely new subject domains is something I do frequently.

So what can companies expect from the graduates that come from their university partners (whether through formal relationships or merely through hiring in the community)?  Sometimes it is a collection of specific skills. But more importantly, a college graduate comes with a testament that this person is able to learn a range of skills and knowledge that are part of a cohesive whole and put them to use. And having done so once, they will be able to do it again over a 40-year career.

Wednesday, April 17, 2024

On simplicity in data science communications

"Everything should be made as simple as possible, but no simpler." (a paraphrase of A. Einstein)

I spent this past week at the 2024 INFORMS Analytics conference.  One of the major themes across the speakers, the panels (on one of which I was a panelist), and conversations in the halls was soft skills for data analysts/data scientists.  Data scientists are subject to the same stereotype applied to all technical specialists: that they lack soft skills, and that this prevents our end stakeholders from understanding and being able to take advantage of the knowledge and capability we bring to our organizations.  The most commonly offered solution is to ask the specialists to simplify the delivery.  But, other than an excuse for those with business backgrounds to beat up on the geeks, I don't know if this is the right solution. A better direction would be, as John-Eric Bonilla described it, for the data scientist to act as a translator: the person who takes the aggregated insight of the data and of the subject matter experts throughout the organization and translates it into the framework of the decision maker.  This is a tall order, but it is the reason that Drew Conway, in his Data Science Venn Diagram, gives subject matter expertise equal weight with the math/stats and the computer skills that get so much prominence in these discussions.

When I was deployed in Afghanistan, a brief that I prepared was being pushed up to the Commanding General, ISAF-Afghanistan. Members of the General's staff were present at the last brief, and their comment was that it was a good brief, but I needed to redo it in their format. And they gave me a highly specified template. Now, I could be judgmental and say that it met no conceivable definition of "simple", but I won't. Because I realized immediately that the template had two functions.  First, to shortstop any PowerPoint Rangers and save the General from Death by PowerPoint, because a commanding general in a combat zone is a busy person and does not have time for that.  But also because the purpose of the template was to present the information inside the framework in which that particular General processes information for the purpose of making a decision.  And that specificity of presentation, so that the recipient can process the information using the framework they have as an expert in their area and make a decision, is the goal of technical communications, such as data science.

The common recommendation to data scientists is that we need to simplify our work for presentation to our decision-maker audience.  And the reason we are given this message is that our decision-maker audiences supposedly do not need or want our technical explanations and cannot understand complex topics.  But this view of our decision-maker stakeholders is demeaning.  So far in my career, I have found the decision makers who motivate my work to be intelligent, subject experts in their own right, and fully capable of understanding detail and nuance.  The key is not to remove subtlety and detail (the capacity for which is the reason this person is in the position of decision maker), but to present the subtlety and detail that is important.  Certainly, the tendency of technical experts to want to focus on the story of their work does not help either. The answer (IMHO) lies in the use of frameworks.  Every specialty community that I know of has frameworks that are used to organize and communicate information. Examples are the range of SITREP formats used by specific emergency response and military communities, the 9-line medevac report, and the frameworks used for reporting patient condition in specific circumstances in the medical community.  And individual leaders have developed frameworks for making decisions, even if those frameworks are masked in intuition. In the ideal case, a record of good decisions is the reason they are in their position.

That makes the key to data communication understanding the decision-making framework used by the expert in the situation. In the case of the then Commanding General, ISAF, this framework was formalized by the General's staff, so that all briefs going to him were presented in that framework, and the General could fit all of the provided information into his internal decision-making framework.

When this framework has not been formalized, the key is direct communication between the data analyst and the decision maker (or a surrogate). The data scientist needs to communicate with the decision maker, or with someone who knows how the decision maker thinks (either intuitively, or because they are members of the same professional community and analyze information in a standard way).  This identifies both the type of information and the criteria that will be used to make the decision.  The data scientist's task then becomes either identifying the data needed to provide this information, or using the data that is available to come as close as possible to the information required.  And this unlocks the value of the data scientist without diminishing either the role or the capability of the decision maker.

There are a few ways for this to fail.  The first is from the data scientist's side: many technical experts have no desire to learn the decision maker's process. This is often accompanied by the belief that the technical facts make the needed action self-evident. Then, from the other side, there are those who think they can give commands to people, and that the people doing the work should be able to get it done without the needed resources. Both fall under the heading of lack of communication between the analyst and the customer, which is universally cited as the most common cause of data project failure.  The role of the data analytics manager is to ensure that constant communication is maintained and to intervene if it is not. (There are managers who think their role is to be a broker. But this also breaks communication and does not help change the most common cause of failure in data analytics projects.)

Is being a translator easy? No.  But what I have found on my projects is that the data scientist is often the first person who realizes who is actually involved in an activity, because the data scientist is tracing all of the data elements. So the data scientist needs to learn everyone's language to get a good picture of what is actually happening, and then communicate to the decision maker in a format the decision maker can understand. Yes, this is hard (80% of data projects fail, and while vendors use this number to market products, those who investigate it say the cause is mostly communications). But we are not the only people who have to take complex information and transmit it to a decision maker in a form they can understand and use to make decisions.  The UX community does this too.  And often they do it well.  So can we.

Wednesday, September 07, 2022

Book Review: Maiden of the Lux by Jada Fisher, Book 2 of the Dragon Guard

Maiden of the Lux (The Dragon Guard #2) by Jada Fisher
My rating: 5 of 5 stars

This is the second book of the series. It is set in an embattled city-state, surrounded by a bleak and hostile world, protected only by the shield of The Lux. Ten, the protagonist, is a servant girl living in a society which is highly caste-driven and dominated by the Great Houses, who are above both the servant class and the freemen class. From the Great Houses come the elites of society: religious, political, and the dragon riders who protect the city from the evils that lie outside the shield of The Lux.

In the first book, Ten tricks the Great House she works in and enters the qualification process to become a dragon rider. She eventually makes it, with the blessing of her House lords and her family (also servants), despite the open hostility of those who wish to enforce the caste/social-class distinctions of the state. This is not a surprise, as this type of story is the American conceit, that society is a meritocracy, and the dragon riders as a whole support her ascending on her merit.

This book is about what happens now that Ten is a probationary dragon rider, with its promise of entering the top rung of society based on merit (passing the tests and being accepted by a dragon). But here is where traditional society hits back. Since becoming a dragon rider is meritocratic, the conservative elements of society have to fight back from the shadows. The naive Ten has allies in her fellow new dragon riders and, through demonstrations of bravery in battle, some current dragon riders. But the opposition comes in the form of backstabbing, obstacles placed in the way of progress, gaslighting, and overt hostility.

Fantasy and science fiction have their power in that they can create a world and society that is not our own but, because people are people, has parallels. Here is a story of one who is rising above where she started, based on effort, determination, and hard-won skill. It is recognized by the defenders of the realm, which maintains the meritocracy and offers her the chance to demonstrate her worth, and she is not found wanting. But when she gets there, like many in our real society with similar stories, she finds that for every step forward there are those who will drag her down and put obstacles in her way, because they don't want a living story of someone rising to take their place and serve the society they are the elites of. Even those who would rather no one rises, because they did not.

This book really needs to be read following the first. The first book invests you in the characters (others besides Ten develop naturally), builds up hope, and has you rooting for Ten and her drive and determination. This book continues the stories of drive and determination but, like many such stories in real life, includes a gut punch that you would not expect in a YA novel.


Sunday, August 28, 2022

Courage in Thirteen Lives

I paid attention to the Tham Luang cave rescue in 2018, not only because it was in Thailand, but also because of my background in both emergency response and logistics (which was a major consideration in the rescue operation).  As a professor, I used it as a whiteboard exercise in my supply chain and logistics classes.  And as a regular reminder, I do the Saman WOD (created by CrossFit Chiangmai), which memorializes Saman Kunan, a Thai Navy SEAL who died in the line of duty during the rescue, annually as my birthday WOD.

Thirteen Lives is a documentary-style movie about the rescue, told from the perspective of the cave rescue team.  With the access and active participation of that team, it goes deep into the decision making and ethical dilemmas that those who were responsible had to address.

The choices in the cave rescue are well documented.  There was the local team, the Thai Navy SEALs: well motivated and trained, with the logistics planning expertise of a U.S. special forces unit to back them up.  But their training and expertise were not in the hyperspecialized setting of cave rescue in which they found themselves.

The choices of the cave rescue divers were also well documented: the need to work within the local system, which meant both the Thai civil and Thai military authorities (who were competing with each other, in the way these things go anywhere in the world); the difficulties in just finding the team in the dark and flooded caves; and then planning out how the boys would be brought out of the cave, with the host of ethical dilemmas that various aspects of that operation entailed.

But Thirteen Lives presents a profile in courage in the person of the governor.  It presents him as a governor in his last days in the position, and implies that it was not a glorious send-off, and that he was the apparent designated fall guy if things went wrong (with the foreign cave rescue divers not too far behind, though that is a story everyone is happy not to have to tell).

The first choice he was presented with was whether to work with the foreign cave rescue divers in the presence of the Thai Navy SEALs, who were eager to do the job.  On one hand, a provincial governor recognized that high-profile foreigners dying looks bad for the Thais (who would be suspected of using amateurs and admitting a lack of confidence). On the other hand, the loss of one of the Thai divers during the rescue was an indicator of just how dangerous and difficult this was.  Further tough choices came when the boys were found, and the cave rescue team provided very honest assessments of the options they had and their chances of success (no one thought the odds were good, and fatalities seemed almost guaranteed).  And as the senior authority, he had to give the yes or no every step of the way.

But the most telling illustration of leadership in action was when the hydrology people explained what was happening with the rain on top of the mountain: water was seeping into the mountain and into the cave system (which is what happens in all mountains; this seeping happens in the hills leading into my backyard).  It could be possible to divert the water falling on the mountain so that it did not go into the cave system.  But "there was a price": the water had to be diverted somewhere.  So the governor had to go to the villagers and ask to flood their fields in the name of a rescue effort that no one knew would succeed. The villagers are private citizens, and there is no law that says the governor can order them to take the damage, certainly not for a mere chance at 13 lives (compared to the guaranteed ruination of the villages' fields).  So, with no time, he had to persuade the villagers to do this, with only promises that he would try to make things right in the end, succeed or fail (and remember, he is leaving his position when all this is over).  This is the result of trust and leadership gained over time.

There are many images of leadership.  An alternative is to cut your losses and avoid the possibility of things that don't look good, which often leads to losing everything for no gain.  And that was the alternative available to this governor. But leadership is shown when you are willing to make difficult decisions, and to convince people that it is in everyone's best interest to sacrifice, on nothing more than a promise that they will all gain in the end, if not materially, then spiritually.  And Thirteen Lives tries to show this in a very real way.



Sunday, July 24, 2022

Basement home gym - Summer 2022 version

The end of 2021 was marked by a burst pipe in the basement, which ruined all of the flooring but spared all of the equipment (which was on top of mats and so did not get soaked).  So, when insurance paid for new mats, this provided an opportunity to rearrange the basement gym.  The big upgrade was the addition of the Whipr, a connected (i.e., Bluetooth) multi-sport ergometer, which sees regular use as a rower and ski erg, and occasional use as a paddle (kayak/canoe) erg.

First, the equipment nook, which has the big changes from last year.

This side used to have the rack, a rower, and the AirDyne.  The flooring was replaced with rubber flooring to better withstand the heavier weights and equipment.  The treadmill was moved here, where the tougher rubber is a better floor than mats intended for martial arts.
Equipment nook - Basement home gym Summer 2022

The Whipr is the big addition here.  The Whipr (https://whipr.com/) is a multi-sport ergometer; the most common version of an ergometer is a rower.  It measures power, and Bluetooth-enabled (FTMS) apps can use it to track your force production.  My son and I both use it as a rower (not as good as a Concept2, but better than any other non-fan/water-based rower).  In addition, I regularly use it as a ski erg, and occasionally for canoe, kayak, or stand-up paddle (the ski and SUP attachments are mounted to the rack). The big benefit is that it measures distance and calories directly from force production, which rewards good form (less capable machines measure stroke rate and estimate everything else from there).  Both the Whipr and the treadmill (a Horizon T101 with a Runn sensor) are Bluetooth enabled, so I use Kinomap to give myself a virtual course to row/run/ski/paddle on and to keep track of energy and work.

The other major component of this corner is the CAP power rack.  It is designed for the barbell, in particular squats and bench press (I do deadlifts on the floor after folding up the treadmill and moving the rower).  I have a bar with standard plates (1" hole, in contrast to Olympic plates with a 2" hole).  It is adequate for the weights I use, because I don't plan on going that heavy (i.e., the low 200s in pounds is going to be enough). The biggest thing lacking with a standard bar vs. an Olympic bar is that standard bars do not spin, so I really cannot do the Olympic lifts with anything serious. So my Olympic movements are done with sandbags or a kettlebell.


Basement home gym Summer 2022

Attached to the rack are rings and a cable pulley.  The rings replace a cheap suspension trainer, which was starting to fray.  I wanted better quality but was not willing to spend on a name-brand trainer, so I got rings instead. The wider webbing should make them last longer.  I do dips, rows, and presses on these; the instability from not being anchored to the ground makes for a good challenge.  The cable pulley was an experiment.  Basically, it allows using weights where the direction of resistance is more horizontal than is normal with free weights (where resistance is toward the ground).  But I get much of that from the rings.

Also attached to the rig are the ski and SUP brackets for the Whipr.

The ski attachment is mounted on top of the rack using 3/4" webbing.  I mounted it initially using a buckle, then wrapped the webbing like a big, repeated clove hitch.  Done twice, it is a very stable setup.  As the ski-erg movement is complementary to the rower movement (anterior chain, as opposed to the rower's posterior chain), it is a good complement to the rower. It is also good if my legs are torched and I need something reasonably easy that is upper-body cardio.  The paddle and SUP are also good for that, in addition to the rotational stimulus they provide.


Whipr brackets on power rack

The SUP bracket is mounted to the lower rail of the power rack.  It took me a while to realize that I needed to have the hook and brace on the webbing in this order for it to work, or the brackets would slip off if any force was applied for any reasonable length of time.  I also have to put a couple of spare mats under the bracket to support the Whipr base unit so it does not flop around.


Whipr brackets on power rack

The other corner has the kettlebells, dumbbells, AirDyne, and punching bags.  The plate-loaded kettlebells (variously Fitness Gear from Dick's Sporting Goods, or Apex, but made by the same company as Marcy) have a base weight of 20 pounds but can be loaded with up to four 5-pound standard plates. So this is my usual weight for Street Parking CrossFit-style workouts, with the weight depending on my abilities for the movement in question. The pair of 30-pound dumbbells is an aspirational weight.  For static moves, I am able to use them. But for Olympic moves overhead (e.g., clean and press or snatches) I'm not there yet.  Still, I've been using them for dumbbell strength-focused workouts.




Basement home gym Summer 2022

While the rower and treadmill are our cardio implements of choice, I get a lot of use out of the AirDyne.  The rower, treadmill, and ski erg are all more technique driven, while the AirDyne is more mindless (since the machine guides the motion along a fixed path). So for pure mindless work capacity, it is the go-to. It is also good for light mindless work (like taking a work break) when I want to move while listening to or watching something.


Basement home gym Summer 2022

The next corner has a homemade PVC dip bar, parallettes, a stepper, a wobble board, and a set of sand kettlebells and medicine balls.  The dip bar is for dips and inverted rows, especially if the dip bars we have for the power rack are claimed. The parallettes are for pushups (easier on the wrists) or as an obstacle for hop-overs.  The sand kettlebells are mostly for my son (11); they are 10, 15, and 20 pounds. We have another set, but they are scattered in rooms around the house for quick study/work breaks.  The medicine balls are used as wall balls. My daughter (8) uses the sand kettlebells and medicine balls for carries around the basement.  Last in this corner is the AV cart: a monitor with an Amazon Fire attached, which uses two Bluetooth speakers for music when working out.  Also here are foam rollers and lacrosse balls for mobility, the fan for circulation, and a whiteboard and cleaning supplies.




Basement home gym Summer 2022

The last corner has the jump ropes and sandbags.  We have jump ropes for both of the kids and myself, all sized appropriately (with a little bit of growing room for the kids).  I have two sandbags, one at 40# and one at 65#.  I generally use the 40# bag for anything with serious volume.  Then we have shelves with the TKD gear.  I don't do TKD much anymore (a casualty of the COVID-19 pandemic), but my son still does, so he uses the space for practice.

Thanks for reading about my home gym.  It has come a long way from when we moved in, when all we had were mats and a set of spin-lock dumbbells.  But like all good home gyms, we built this up a little at a time, and we now have most of what I can think of.  Some things that come to mind are more weight for the kettlebells (which will need an additional bell) and a spin-type bike.  (We can dream, right?)