Air Pollution is getting worse. No, it really isn't!

Originally posted to LinkedIn on October 26, 2022

A student today told me that air pollution was up. The fact is, that is not even close to the truth. Typically, students (and many people) lean toward the pessimistic. It is little wonder, with the constant blaring of bad news and fear-mongering from those whose agenda is attracting eyeballs or votes (or both). The truth, however, is a stubborn thing; it is not always front and center.

Check out this graph from the US EPA from their report on Air Quality (https://www.epa.gov/air-trends/air-quality-national-summary, accessed October 26, 2022).


Source: EPA.gov, https://www.epa.gov/system/files/images/2022-06/1970-2021%20Baby%20Graph_1.png

Since 1970, aggregate emissions of the six common pollutants are down 78%. CO2 alone is down 9%.

Over the same period, we consumed 43% more energy, grew in population by 62%, increased miles traveled by gas-powered vehicles by almost 200%, and grew GDP by nearly 300%. Meanwhile, our standard of living (measured by real GDP per capita) rose 244% (source: US BEA).

Most people, I would observe, conclude that with population, vehicle miles, and GDP all rising, air quality must of course suffer. But that is not the case. Why are people so pessimistic? The evidence everywhere is that the world is improving.

I am not trying to oversimplify or dismiss real problems, but I am pointing out that the US is one of the world's best examples of clean air. As countries get rich, they can spend more on cleaning their environment.

Our World in Data says, “Death rates from air pollution are highest in low-to-middle income countries, with more than 100-fold differences in rates across the world” (https://ourworldindata.org/air-pollution). Air quality is a normal good: as incomes rise and residents can move beyond mere survival demands, clean air becomes something they will demand.

The following graphs show worldwide death rates due to air pollution on the vertical axis. As countries become rich, they can afford to demand clean air. The first graph below shows countries defined as low-income. The trend is downward for indoor air pollution, but the death rate due to all air pollution stands at 189 per 100,000 residents.

The second graph shows a similar downward trend for countries the World Bank classifies as high-income: death rates from air pollution are falling there too. In 2019 the death rate from all air pollution in these high-income countries was 15 per 100,000 residents, less than 8 percent of the low-income rate. In other words, low-income countries have greatly reduced air pollution deaths over time, but as of 2019 their death rate was still almost 13 times that of the high-income countries.

When a country becomes richer, air quality gets better.

Economic Freedom: Solve Problems, Tell Stories

Time and time again we hear that employers want two qualities in their data scientists: the ability to solve problems and the ability to tell stories. How important is economic freedom? Does it lead to greater standards of living? The answer can be shown in well-laid-out tables of results, but visualizing those results has an even greater impact and better tells the story.

If a “picture is worth a thousand words,” then a SAS SGPLOT is worth many pages of tables of results. Can you see the story here?

Economic Freedom is shown to be associated with ever higher standards of living across countries.

The question is whether countries with higher levels of economic freedom also have higher standards of living. It appears that is true; the association seems undeniable. Is it causal? That is another question the visual begs. Chicken-and-egg reasoning doesn't seem likely here; it does appear that the association runs one way. For that to be established, though, we have to answer whether economic freedom is necessary for higher standards of living, and we have to determine whether, had economic freedom not been achieved, the standard of living would not have been as high.

More on that in a future post on the importance of “why.” For now, enjoy the fact that there seems to be a key to making the world better off, not just from this graph but from countless past successes across countries. My undergraduate analytics students are expanding on this finding to see if their choices from the 1,600 World Development Indicators of the World Bank hold up the same way GDP per capita does in this graph. We modify the question to “Do countries that have higher economic freedom also have greater human progress?” I am eager to see what they find.

The Economic Freedom data comes to us from The Heritage Foundation. Let me know what you think about the visual.

This is a follow-up to my post on my blog at econdatascience.com, “Bubble Chart in SAS SGPLOT like Hans Rosling.”

The SAS PROC SGPLOT code to create the graph is in my GitHub repository. It makes use of the BLOCK statement for the banding and of selective labeling based on large residuals from a quadratic regression. The quadratic parametric regression and the loess nonparametric regression are there to suggest the trend relationship.
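For readers who just want the flavor, here is a minimal sketch of that approach, not the repository code itself; the dataset and variable names (freedom, efi, gdp_pc, band, biglabel) are hypothetical stand-ins:

```sas
proc sgplot data=freedom noautolegend;
  /* BLOCK shades vertical bands over ranges of the x variable */
  block x=efi block=band;
  /* biglabel holds the country name only where the quadratic-fit
     residual is large, so only the outliers get labeled */
  scatter x=efi y=gdp_pc / datalabel=biglabel;
  reg x=efi y=gdp_pc / degree=2 nomarkers;   /* quadratic parametric trend */
  loess x=efi y=gdp_pc / nomarkers;          /* nonparametric check on the trend */
  xaxis label="Index of Economic Freedom";
  yaxis label="GDP per capita";
run;
```

The selective labeling is prepared in an earlier data step: compute the residuals from the quadratic fit and set biglabel to the country name only when the residual is large.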

Sorry, data not included.

Bubble Chart in SAS SGPLOT like Hans Rosling

Robert Allison blogs as the SAS Graph Guy. Using SAS PROC SGPLOT, he recreates the famous bubble chart from Hans Rosling of the Gapminder Institute. Hans shows that life expectancy and income per person have changed dramatically over the years. Because Hans Rosling is something of the father of modern data visualization, Robert produces this graph (shown here) and this very cool animation.
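The heart of such a chart is the BUBBLE statement in PROC SGPLOT. Here is a minimal sketch, assuming a Gapminder-style dataset with hypothetical variable names (income, lifeexp, population, region); see Robert's post for his actual code:

```sas
proc sgplot data=gapminder;
  /* one bubble per country: position by income and life expectancy,
     size by population, color by region */
  bubble x=income y=lifeexp size=population / group=region transparency=0.3;
  xaxis type=log label="Income per person (GDP per capita)";
  yaxis label="Life expectancy (years)";
run;
```

The log-scaled x-axis is what gives Rosling's chart its familiar shape; animating it over the years is the extra step Robert's post demonstrates.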

I can't wait to see economic freedom and income per person in one of these graphs soon. My students are trying to do this right now. At this point in the term they are acquiring two datasets from Heritage on 168 countries, containing the index of economic freedom for 2013 and 2018. Then they are cleaning and joining them so they can reproduce the following figure and table in SAS PROC SGPLOT for each year.
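The join step itself is a one-to-one merge by country. A minimal sketch of that piece, assuming the two Heritage files have already been read in as efi2013 and efi2018 with a shared country key and an index variable (hypothetical names):

```sas
proc sort data=efi2013; by country; run;
proc sort data=efi2018; by country; run;

data efi_joined;
  merge efi2013(in=a rename=(index=index2013))
        efi2018(in=b rename=(index=index2018));
  by country;
  if a and b;   /* keep only countries present in both years */
run;
```

Most of the students' effort goes into the cleaning before this step, because country names rarely match exactly across files.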

I have written about this project in prior terms here. Once they have the data joined and the figures above reproduced, they will move on to the final project for the semester: looking through the 1,600 World Development Indicators of the World Bank. Each team of students will choose five indicators and join them to their data to answer the question:

Does Economic Freedom lead to greater Human Progress?

I may share their results; for now, these are some pretty cool graphics from the SAS Graph Guy.


A Data Science Book Adoption: Getting Started with Data Science

In my undergraduate business and economic analytics course, I have adopted Murtaza Haider's excellent text Getting Started with Data Science. I chose it for a lot of reasons. He is an applied econometrician, so he relates to the students and me more than many authors. I truly have a very positive first impression.

Updated: November 7, 2020

On my campus you can hear that economics is not part of data science, that economists don't do data science; that is, data science belongs to the department of statistics (no, to the engineers; no, to the computer science department; and on and on like that). We have come a long way, but years ago, for example, the university launched a major STEM initiative and the organizers kept the economics department out of it even though we asked to be part of it. Of course, when they did their big rollout, without our department, they brought in a famous keynote speaker who was … wait for it … an economist.

My department just launched a Business Data Analytics economics degree in the College of Business Administration at the University of Akron. We see tech companies filling their data science teams with economists, many with PhDs. Our department's placements in the analytics world of work have been very robust. My concern is seeing undergraduates in economics get a start in this field, and Murtaza Haider offers a nice path.

Dr. Haider has a Ph.D. in civil engineering, but his record is in economics, specifically in regional and urban economics, transportation, and real estate, and he is a columnist for the Financial Post. I can attest to his applied econometrics knowledge based on his fine book, which I explore below.

WHAT IS DATA SCIENCE

Haider has a broad idea of what data science is and follows a well-reasoned path on how to do it. Like my approach to this class, he is heavy into visualization through tables and graphics, and while I would appreciate more attention to design, he makes an effort to teach the communicative power of those visualizations. Also like me, he is highly skeptical of the value of learning to appease the academic community at the expense of serving the business (non-academic) community where the jobs are. I really appreciate that part of it.

PROBLEM SOLVING AND STORYTELLING

He starts with storytelling. Our department recognizes that what our economists do to bring value is solve problems and tell stories, so this is a great first fit. He then moves to data in a 24/7 connected world and spends considerable time on data cleaning and data manipulation. Again, I like how he wants students to use real data, with all of its uncleanliness, to solve problems. Chapter 3 focuses on the deliverables part of the job, and again I think he is spot on.

Then through the remaining chapters he first builds up tables, then graphs, and onto advanced tools and techniques. My course will stop somewhere in the neighborhood of chapter 8.

(Update: Chapter 8 begins with binary and limited dependent variables; full disclosure, my last course did not reach this chapter, and we ended in Chapter 7 on regression.) Perhaps the professor in the next course will consider Getting Started with Data Science for Applied Econometrics II. (Update: The breakdown in our Business Data Analytics economics degree is that Econometrics I is a heavily coding- and application-based course, while Econometrics II is a more mathematical/theoretical course with intensive data applications. It is a walk-before-you-run approach, building up an understanding of analysis and data manipulation first.)

I use a lot of team-based, problem-based learning in my instruction, and Haider's guidance through the text instructs teams how to think through problems to reach one of many possible solutions rather than highlighting only one. In this way he reinforces creativity in problem-solving. I like what I read; I wonder what I will think after students and I go through it this term. (Update: We liked the text but did not follow it page by page. The time constraint of the large data problem began to dominate and crowd out other things, which is why I did not get to Chapter 8, my proposed end. However, because course 1 emphasizes data results over theoretical knowledge, I was well pleased.)

PROBLEM ARTICULATION, DATA CLEANING, AND MODEL SPECIFICATION

Another reason I like the book so much is that he cites Peter Kennedy, the late research editor for the Journal of Economic Education. Peter was very influential on me and on applied econometricians who really want to dig into the data. Most of my course is built around his work, especially around the three pillars of applied econometrics: (1) the ability to articulate a problem, (2) the need to clean data, and (3) a deep focus on model specification. He argues that most Ph.D. programs fail to teach the applied side, spending their time instead on theoretical statistics and the properties of inferential statistics. Empirical work is often extra, conducted (even learned) outside of class. I have never taught like that (OK, maybe my first year out of my Ph.D.), but my last 40 years have been a constant striving to make sure my students are prepared for the real, as opposed to the academic, world. Peter made all the difference in bringing my ideas into sharp focus. I like Haider's Getting Started with Data Science because it reads like it was written by someone who also holds the principles put forth by Peter Kennedy in high regard.

SOFTWARE AGNOSTIC, BUT TOO MUCH STATA AND NOT ENOUGH SAS

On page 12 he gets much credit for saying he does not choose only one software package, but includes “R, SPSS, Stata and SAS.” I get the inclusion of SPSS given that the publisher is IBM Press, but there is virtually no market for Stata (or SPSS) in the state of Ohio or within 100 miles of my university's town of Akron, OH. Also absent is Python, which is in heavy use in the job market. You can see the number of job listings mentioning each program in the chart below.

I am highly impressed with Haider's book for my course, but that does not extend to everything in it. My biggest peeve is his heavy use of Stata. I would prefer a text that highlights the class language (SAS) more and is more sensitive to the market my students will enter.

Stata is a language adopted by nearly all professional economists in the academic and journal-publication space; however, I think that choice is misguided when the book is meant to face the job market rather than academia. While he shows plenty of R, there are no Python and no SAS examples. All data sets are available on his useful website, and since SAS can read Stata data sets, that isn't much of a problem.
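For example, pulling one of his Stata files into SAS is a one-step import. A minimal sketch, with a hypothetical file name; DBMS=DTA requires the SAS/ACCESS Interface to PC Files:

```sas
proc import datafile="example_from_book.dta"  /* hypothetical Stata file from his site */
    out=work.example
    dbms=dta      /* DTA tells PROC IMPORT the file is a Stata data set */
    replace;
run;
```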

Numbers for all indeed.com listings in August 2019: Python, 70K; R, 52K; SAS, 26K; SPSS, 3,789; Stata, 1,868.

SAS Academic Specialization

Full disclosure: we are a SAS school, part of the SAS Global Academic Program, and we offer our students both a joint SAS certificate and a path to full certification.

(Update: The SAS joint certificate program has been rebranded and upgraded to the SAS Academic Specialization. It is still a joint partnership between the college or university and SAS, but now with three tiers of responsibilities and benefits. We are at tier 3, the highest level. Hit the link for more details.)

We also teach R in our forecasting course, and students are exposed to multiple other programs over their careers, including SQL, Tableau, Excel (for small data handling, optimization, and charting/graphics), and more.

Buy This Book

Most typical econometric textbooks cost multiple hundreds of dollars (not kidding), and almost none are suitable to really prepare for a job in data science. This book is under $30 on Amazon and is a great practical guide. Is it everything one needs? Of course not, but with the savings you can afford many more resources.

More SAS Examples

So it is natural, given our thrust as a SAS school, that I would have preferred examples in SAS to assist the students. Nevertheless, I accepted the challenge to have students develop the SAS code to replicate examples in the book. This is also a great way to avoid too much grading of assignments. Let them read Haider's examples: a problem that he states and then solves with Stata. He presents both question and answer in Stata, and my students' task is to answer the problem in SAS. They can self-check and rework until they come to the right numerical answer, and I am left helping only the truly lost.
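As a hypothetical instance of the exercise (not one of Haider's actual problems): if the book summarizes a price variable with Stata's summarize command, a student's SAS replication might be the PROC MEANS below, and the self-check is simply whether the numbers match the book's output:

```sas
proc means data=work.example n mean std min p25 median p75 max;
  var price;   /* should reproduce the summary statistics shown in Stata */
run;
```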

Overall, I love the outline of the book. I think it fits a student's first exposure to data science, and I will know more at the end of this term. I expect to be pleased. (Update: I was.)

If you are at all in data science, and especially if you have the narrow idea that data science is only machine learning or big data, you need to spend time with this book. Read the first three chapters in particular, and I think you will have your eyes opened and come away with a better appreciation of the field of data science.

Testing for a structural break

Ever throw a dummy variable into a regression to see whether the effect it measures has an impact on the dependent variable? Ever find that the dummy variable had the wrong sign, a small magnitude, or a vastly large variance, so that you concluded from your data that the effect it measured did not exist? Of course you have; we all have.

But did you know that your data may be lying to you?

This presentation explores whether a time series changes in response to an intervention that occurs halfway through the data. Perhaps the intervention is a new law, or a treatment of some kind: did it have an effect? In our example, the dummy variable is insignificant in the first instance, model specification is in doubt, and a full-on testing strategy is developed. That is, testing whether D, the dummy variable, affects Y, the outcome measure, takes much more than reading one p-value in a simple single regression. Check out the classroom presentation; I will eventually load here all the SAS code to run the 8 required regressions and the multiple tests of each regression needed to answer the original hypothesis that D matters.
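Until that code is posted, here is a minimal sketch of the core move, a Chow-style test run as a single fully interacted regression; the dataset and variable names (series, y, x, d) are hypothetical stand-ins, and this is only one of the eight regressions the full strategy requires:

```sas
data series2;
  set series;
  dx = d * x;   /* interaction: lets the slope, not just the intercept, shift */
run;

proc reg data=series2;
  /* fully interacted model: intercept and slope both free to change at the break */
  model y = x d dx;
  /* joint F test: did ANYTHING change at the intervention?
     Far more informative than the lone t-test on D. */
  Break: test d = 0, dx = 0;
run;
quit;
```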

Susan Athey on the Impact of Machine Learning on Econometrics and Economics (part 2)

I posted Susan Athey's 2019 luncheon address to the AEA and AFA in part 1 of this post. (See here for her address.)

Now, in part 2, I post details on the 2018 continuing education course on Machine Learning and Econometrics, featuring the joint efforts of Susan Athey and Guido Imbens.

It relates to the basis for this blog, where I advocate for economists in data science roles. As a crude overview, ML uses many techniques known to all econometricians and has grown out of data mining and exploratory data analysis, long avoided by economists precisely because such methods lead to, in the words of Jan Kmenta, “beating the data until it confesses.” ML also makes use of AI-type methods that started with brute force: the computer can be set loose on a data set to try, in a brutish way, all possible models and predictions, seeking the best statistical fit, but not necessarily the best economic fit, since the methods ignore both causality and the ability to interpret the results with a good explanation.

Economists historically have focused on models that are causal and that provide the ability to explain, the ability to say why. Their modeling techniques are designed to test economic hypotheses about the problem, not just to get a good fit.

To say the two fields have historically occupied opposite ends of a fair coin in the setup X –> y is not too far off. ML focuses on y, and econometrics focuses on X. The future is focusing on both: good algorithms for predicting y, and the critical understanding of “why,” which is the understanding of the importance of X.

This course, offered by the American Economic Association just about one year ago, represents the state of the art in the merger of ML and econometrics. I offer it here (although you can go directly to the AEA website) so that more people can explore how economists can incorporate the lessons of both ML and econometrics and help produce even stronger data science professionals.

AEA Continuing Education Short Course: Machine Learning and Econometrics, Jan 2018

Course Presenters

Susan Athey is the Economics of Technology Professor at the Stanford Graduate School of Business. She received her bachelor's degree from Duke University and her PhD from Stanford. She previously taught in the economics departments at MIT, Stanford, and Harvard. Her current research focuses on the intersection of econometrics and machine learning. As one of the first “tech economists,” she served as consulting chief economist for Microsoft Corporation for six years.

Guido Imbens is Professor of Economics at the Stanford Graduate School of Business. After graduating from Brown University, he taught at Harvard University, UCLA, and UC Berkeley before joining the GSB in 2012. Imbens specializes in econometrics, in particular methods for drawing causal inferences. He is a fellow of the Econometric Society and the American Academy of Arts and Sciences, and he previously taught in the continuing education program in 2009 and 2012.

Two-day course in nine parts: Machine Learning and Econometrics, Jan 2018

 Materials:

Course Materials (will attach to your Google Drive)

The syllabus is included in the course materials and carries links to four pages of readings, which are copied below and linked to the source articles.

Webcasts:

View Part 1 – Sunday 4.00-6.00pm: Introduction to Machine Learning Concepts 

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Sections 1-2. 

(b) H. R. Varian (2014) “Big data: New tricks for econometrics.” The Journal of Economic Perspectives, 28 (2):3-27.

(c) S. Mullainathan and J. Spiess (2017) “Machine learning: an applied econometric approach” Journal of Economic Perspectives, 31(2):87-106  

View Part 2 – Monday 8.15-9.45am: Prediction Policy Problems

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Section 3. 

(b) S. Mullainathan and J. Spiess (2017) “Machine learning: an applied econometric approach.” Journal of Economic Perspectives, 31(2):87-106. 


View Part 3 – Monday 10.00-11.45am: Causal Inference: Average Treatment Effects

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Section 4.0, 4.1. 

(b) A. Belloni, V. Chernozhukov, and C. Hansen (2014) “High-dimensional methods and inference on structural and treatment effects.” The Journal of Economic Perspectives, 28(2):29-50. 

(c) V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins (2017, December) “Double/Debiased Machine Learning for Treatment and Causal Parameters.” 

(d) S. Athey, G. Imbens, and S. Wager (2016) “Estimating Average Treatment Effects: Supplementary Analyses and Remaining Challenges.” Forthcoming, Journal of the Royal Statistical Society, Series B.

View Part 4 – Monday 12.45-2.15pm: Causal Inference: Heterogeneous Treatment Effects

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Section 4.2. 

(b) S. Athey, G. Imbens (2016) “Recursive partitioning for heterogeneous causal effects.” Proceedings of the National Academy of Sciences, 113(27), 7353-7360.

View Part 5 – Monday 2.30-4.00pm: Causal Inference: Heterogeneous Treatment Effects, Supplementary Analysis

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Section 4.2, 4.4. 

(b) S. Athey, and G. Imbens (2017) “The State of Applied Econometrics: Causality and Policy Evaluation,” Journal of Economic Perspectives, vol 31(2):3-32.

(c) S. Wager and S. Athey (2017) “Estimation and inference of heterogeneous treatment effects using random forests.” Journal of the American Statistical Association 

(d) S. Athey, J. Tibshirani, and S. Wager (2017, July) “Generalized Random Forests.”

(e) S. Athey and G. Imbens (2015) “A measure of robustness to misspecification.” The American Economic Review, 105(5), 476-480.

View Part 6 – Monday 4.15-5.15pm: Causal Inference: Optimal Policies and Bandits

(a) S. Athey (2018, January) “The Impact of Machine Learning on Economics,” Section 4.3. 

(b) S. Athey and S. Wager (2017) “Efficient Policy Learning.”

(c) M. Dudik, D. Erhan, J. Langford, and L. Li (2014) “Doubly Robust Policy Evaluation and Optimization.” Statistical Science, Vol 29(4):485-511.

(d) S. Scott (2010), “A modern Bayesian look at the multi-armed bandit,” Applied Stochastic Models in Business and Industry, vol 26(6):639-658.

(e) M. Dimakopoulou, S. Athey, and G. Imbens (2017). “Estimation Considerations in Contextual Bandits.” 

View Part 7 – Tuesday 8.00-9.15am: Deep Learning Methods

(a) Y. LeCun, Y. Bengio and G. Hinton, (2015) “Deep learning” Nature, Vol. 521(7553): 436-444.

(b) I. Goodfellow, Y. Bengio, and A. Courville (2016) “Deep Learning.” MIT Press.

(c) J. Hartford, G. Lewis, K. Leyton-Brown, and M. Taddy (2016) “Counterfactual Prediction with Deep Instrumental Variables Networks.” 

View Part 8 – Tuesday 9.30-10.45am: Classification

(a) L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen (1984) “Classification and Regression Trees,” CRC Press.

(b) I. Goodfellow, Y. Bengio, and A. Courville (2016) “Deep Learning.” MIT Press.

View Part 9 – Tuesday 11.00am-12.00pm: Matrix Completion Methods for Causal Panel Data Models

(a) S. Athey, M. Bayati, N. Doudchenko, G. Imbens, and K. Khosravi (2017) “Matrix Completion Methods for Causal Panel Data Models.” 

(b) J. Bai (2009) “Panel data models with interactive fixed effects.” Econometrica, 77(4): 1229-1279.

(c) E. Candes and B. Recht (2009) “Exact matrix completion via convex optimization.” Foundations of Computational Mathematics, 9(6):717-730.


David Autor on Changing Patterns of Work

Are earnings differences between males and females due to discrimination? A typical approach is to compare the earnings of women to those of men and try to control for understandable differences such as education level, location, and other factors. Perhaps education level and location interact, and an interaction term is introduced into our model. However, the largest assumption is that once we define our variables, homogeneity rules: that education is homogeneous, e.g., that HS graduation means the same thing for all groups, over all times and locations. I see Autor's lecture pointing out this heterogeneity and disputing the assumption that all persons are products of the same data generating process. He takes this on and, at least for me, smashes my initial biases. To be fair, this is my reading of his efforts; he does not utter the word heterogeneity at all, but I don't think he needed to. Not everyone in the audience is an econometrician, and the implicit heterogeneity problem is taken on directly. I will be sharing this lecture with my data analytics students as a great example of exploratory data analysis that allows a masterfully told story through complex preparation and easy-to-understand visuals.

The Richard T. Ely lecture at the 2019 American Economic Association meetings was presented by David H. Autor (MIT, NBER), comparing the work of the past with the work of the future. Motivated in part by the “remarkable rise of wage inequality by education,” the “vast increase in supply of educated workers,” and a “key observation (of the) polarization of work” that while “most occupational reallocation is upward,” “the occupational mobility is almost exclusively downward” for non-college workers, Autor proceeds to answer questions surrounding

  1. Diverging earnings and diverging job tasks
  2. The changing geography of work and wages
  3. The changing geography of workers, and
  4. What and where is the work of the future.

The visual presentations make his data exploration very understandable and are masterfully done. He truly paints a picture, emerging from a vast amount of data, that is entertaining and informative. It is well worth the 47 minutes and may actually challenge your preconceived thinking as to the nature of inequality in earnings. It is not as simple as one may think, and he perfectly illustrates, without ever uttering the word, that data heterogeneity, when ignored, leads to false yet seemingly inescapable conclusions.

Work of the past, work of the future. Richard T. Ely Lecture, AEA meetings, Atlanta, January 4, 2019.

Click on the above image and you will be well rewarded if you want to see a story told with strong graphics, proving, to me anyway, that deep diving into data and presenting simple graphics (although complex in their creation) is a most effective way to communicate. A couple of examples of the graphics:

What if we do an econometric analysis of earnings between men and women using current data and a similar analysis using data from the 1980s? Can you see how this graph, as one of many in Autor's presentation, might create havoc in comparing the 1980s result to a current one? Watch the presentation; there are plenty more visuals like this one.