How to be the best Economic Data Scientist: The Seven Tools of Causal Inference and Ethics

Originally published on November 21, 2019, on LinkedIn, updated lightly October 29, 2022

My blog tagline is that economists put the science into data science. Part of the reason I make this claim is that many applied econometricians (sadly, not all) place a high value on causality and causal inference. Further, those same economists follow an ethic of working with data that is close to Peter Kennedy's 2002 guidance and my own.

Judea Pearl discusses “The Seven Tools of Causal Inference with Reflections on Machine Learning” (cacm.acm.org/magazines/2019/3/234929), a Contributed Article in the March 2019 CACM.

This is a great article with three messages.

The first message is to point out the ladder of causation.

  1. As shown in the figure, the lowest rung is association, a correlation. He writes it as: given that I observe X, what is the probability that I also observe Y?
  2. The second rung is intervention: if I do X, will Y appear?
  3. The third rung is the counterfactual: had X not occurred, would Y not have occurred?
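The gap between the first two rungs can be made concrete with a small simulation. Below is a minimal sketch (my own illustration, not from Pearl's article) of a structural model in which a hidden confounder Z drives both X and Y while X has no causal effect on Y at all. Conditioning on X (rung one) shows a strong association; simulating do(X) (rung two), which cuts the Z-to-X arrow, shows none. The probabilities 0.5 and 0.8/0.2 are arbitrary choices for the example.

```python
import random

random.seed(0)

def observational_sample(n=100_000):
    """Rung 1 (association): Z confounds X and Y; X has NO causal effect on Y."""
    rows = []
    for _ in range(n):
        z = random.random() < 0.5           # hidden confounder
        p = 0.8 if z else 0.2
        x = random.random() < p             # Z -> X
        y = random.random() < p             # Z -> Y (X does not enter)
        rows.append((x, y))
    return rows

def p_y_given_x(rows, x):
    """Conditional probability P(Y=1 | X=x) from observational data."""
    ys = [y for xi, y in rows if xi == x]
    return sum(ys) / len(ys)

def p_y_do_x(n=100_000):
    """Rung 2 (intervention): do(X) cuts the Z -> X arrow, so Y depends only on Z."""
    hits = 0
    for _ in range(n):
        z = random.random() < 0.5
        hits += random.random() < (0.8 if z else 0.2)
    return hits / n

rows = observational_sample()
print(p_y_given_x(rows, True))    # roughly 0.68: strong association
print(p_y_given_x(rows, False))   # roughly 0.32
print(p_y_do_x())                 # roughly 0.50: no causal effect of X on Y
```

Seeing P(Y | X=1) far above P(Y | do(X=1)) in the same model is exactly why outside (structural) information is needed to climb past rung one.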

In his second message, he discusses an inference engine that he says AI people should know well, and I think economists should too. After all, economists are all about causation, about being able to explain why something occurs, though admittedly not always at the best intellectual level. Nevertheless, the need to seek causality is definitely in the economist's DNA. I always say the question "Why?" is an occupational hazard, or obsession, for economists.

People who know me understand that I am a huge admirer, indeed a disciple, of the late Peter Kennedy (Guide to Econometrics, chapter on applied econometrics, 2008). Kennedy set out the ten rules of applied econometrics in his 2002 article "Sinning in the Basement: What Are the Rules?" I think they imply practices of ethical data use and apply more widely than Kennedy's intended audience. I wrote about Ethical Rules in Applied Econometrics and Data Science here.

Kennedy’s first rule is to use economic theory and common sense when articulating a problem and reasoning toward a solution. Pearl, in his Book of Why, explains that one cannot advance beyond rung one without outside information. I think Kennedy would wholeheartedly agree. I want to acknowledge Marc Bellemare for his insightful conversation on combining Kennedy and Pearl in the same discussion of rules in applied econometrics. Perhaps I will write about that later.

Pearl’s third message is to give his seven rules, or tools, for causal inference. They are:

  1. Encoding causal assumptions: Transparency and testability.
  2. Do-calculus and the control of confounding.
  3. The algorithmization of counterfactuals.
  4. Mediation analysis and the assessment of direct and indirect effects.
  5. Adaptability, external validity, and sample selection bias.
  6. Recovering from missing data.
  7. Causal discovery.

I highly recommend this article, followed by The Book of Why and Causal Inference in Statistics: A Primer, both with Pearl as lead coauthor. Finally, I include a plug for a book to which I contributed a chapter on ethics in econometrics: 97 Things About Ethics Everyone in Data Science Should Know: Collective Wisdom from the Experts, edited by Bill Franks.

Avoiding Pitfalls in Regression Analysis

(Updated with links and more Dec 1, 2020. Updated with SAS Global Forum announcement on Jan. 22, 2021.)

Professors reluctant to venture into these areas do no service to their students when they enter the real world of work.

Today (November 30, 2020) I presented “Avoiding Pitfalls in Regression Analysis” during the Causal Inference Webinar at the Urban Analytics Institute in the Ted Rogers School of Management, Ryerson University. I was honored to do this at the kind invitation of Murtaza Haider, author of Getting Started with Data Science. The primary participants were his students in Advanced Business Data Analytics in Business. This is an impressive, well-crafted course (taught in R) that at the syllabus level covers many of the topics in this presentation. I met Murtaza some time ago online and have come to regard him as a first-rate Applied Econometrician.

Ethics and moral obligation to our students

Just as Peter Kennedy developed rules for the ethical use of applied econometrics, this presentation is the first step toward developing a set of rules for avoiding pain in one’s analysis. A warning against Hasty Regression (as I define it) is prominent.

(Update 1/22/2021: My paper, “Haste Makes Waste: Don’t Ruin Your Reputation with Hasty Regression,” has been accepted as a prerecorded 20-minute breakout session at SAS Global Forum 2021, May 18-20, 2021. More on this in a separate post later.)

Kennedy said in the original 2002 paper, “Sinning in the Basement”: “… my opinion is that regardless of teachability, we have a moral obligation to inform students of these rules, and, through suitable assignments, socialize them to incorporate them into the standard operating procedures they follow when doing empirical work. … (I) believe that these rules are far more important than instructors believe and that students at all levels do not accord them the respect they deserve.” (Kennedy, 2002, pp. 571-72) See my contribution to the cause, an essay on Peter Kennedy’s vision, in Bill Franks’s book cited below.

While the key phrase in Peter’s quote seems to be “moral obligation,” the stronger phrase is “regardless of teachability.” Professors reluctant to venture into these areas do no service to their students when they enter the real world of work. As with Kennedy’s rules, some of these pitfall-avoidance rules are equally difficult to teach, which leads faculty away from in-depth coverage.

The Presentation

A previous version of this presentation had the subtitle “Don’t let common mistakes ruin your regression and your career.” I dropped that subtitle here only to save space, not to disavow the importance of these rules to a good career trajectory.


This presentation highlights seven of ten pitfalls that can befall even the technically competent and fully experienced. Many regression users learned regression in courses devoting anywhere from a couple of weeks to much of a semester to the topic; others are self-taught or learned on the job. The focus of many curricula is to perfect estimation techniques and to studiously learn about violations of the classical assumptions. Applied work is so much more, and one size does not fit all. The pitfalls remind all users to think fully through their data and their analysis. Used properly, regression is one of the most powerful tools in the analyst’s arsenal, and avoiding the pitfalls will help the analyst avoid fatal results.

The Pitfalls in Regression Practice

  1. Failure to understand why you are running the regression.
  2. Failure to be a data skeptic and ignoring the data generating process.
  3. Failure to examine your data before you regress.
  4. Failure to examine your data after you regress.
  5. Failure to understand how to interpret regression results.
  6. Failure to model both theory and data anomalies, and to know the difference.
  7. Failure to be ethical.
  8. Failure to provide proper statistical testing.
  9. Failure to properly consider causal calculus.
  10. Failure to meet the assumptions of the classical linear model.
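Pitfalls 3 and 4, examining your data before and after you regress, are easy to make concrete. Here is a minimal pure-Python sketch of my own (not from the presentation slides): ten well-behaved points plus one miskeyed observation are enough to flip the sign of an ordinary least squares slope, something a simple plot or residual check would catch immediately.

```python
def ols(x, y):
    """Ordinary least squares for y = a + b*x, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

# Ten clean points lying exactly on the line y = 2 + 0.5x.
x = list(range(1, 11))
y = [2 + 0.5 * xi for xi in x]
_, slope = ols(x, y)              # slope recovered as 0.5

# One gross data error (say, a miskeyed y value) flips the sign of the fit.
x_bad = x + [12]
y_bad = y + [-40.0]
_, slope_bad = ols(x_bad, y_bad)  # slope is now negative
```

No estimation technique, however well taught, protects against an unexamined data generating process; only looking at the data does.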

How to get this presentation

Faculty, if you would like this presentation delivered to your students or colleagues via webinar, please contact me. Participants in the webinar can request a copy of the presentation by emailing me at myers@uakron.edu. Specify the title of the presentation and please give your name and contact information. Let me know what you thought of the presentation as well.

You can join me on LinkedIn at https://www.linkedin.com/in/stevencmyers/. Be sure to tell me why you are contacting me so I will be sure to add you.

I extend this offer to those who heard the presentation when it was first given in the Ohio SAS Users Group 2020 webinar series on August 26, 2020.

Readings, my papers:

Recommended Books:

Other Readings and references: