1/3 The #Bookofwhy is not about "what causal calculus cannot do" (e.g., play chess, translate languages) but about the many miracles it CAN do. Among them: resolving the simple version of Lord's Paradox, with two dining halls, each serving one diet, and a very large sample. So,

2/ believe it or not, this simple version is still paradoxical to most mortals, and has been paradoxical for half a century. It is now resolved by causal calculus. Your multi-hall version may be of interest in a certain context, but I can't understand why you are insisting

3/3 that this idiosyncratic version is essential for resolving the simple version, and that he who does not attend to your version is guilty of neglecting the foundations of statistics, or worse. I do not buy it. Let's focus on the simple version -- are you happy with the solution?

1/4 I believe this set of slides reinforces what I tweeted earlier: it's hard to cut the umbilical cord to Mother-Stat. It occurred to me that this urgency to stay in the Stat-womb was also the motivation behind the potential-outcome framework. The benefits

2/4 were obvious: nothing is new, Y_1 and Y_0 are ordinary variables with some missing values, so what? Everything else is ordinary statistics. The price, of course, was (1) everything was tied to experimental "treatments", not to "events" or the absence of events, and (2) we need

3/4 to express knowledge in the language of {Y_1, Y_0}, namely, in the formidable language of "conditional ignorability". Some would argue: What's wrong with letting statisticians broaden the scope of statistics and then believe that "it is all statistics"? I believe PO is a good

@jd_wilko 1/n It is not really "provocative", just a gentle way of luring statisticians to cut the umbilical cord from Mother Stat. Instead of the surgical do-operator, you condition on a variable F (force) which does the surgery for you. I used it in 1993 ucla.in/2pJqtW3 when

@jd_wilko 2/n I thought that statisticians were not prepared for surgery. Other researchers, too, labor to create the illusion of remaining in the stat-womb. E.g., Heckman et al. created a fix-operator to enforce this illusion ucla.in/2L8OCyl. The folks at @HarvardEpi still teach CI

@jd_wilko @HarvardEpi 3/3 by "imitating RCTs". It is a tough umbilical cord to cut. Denis Lindley was the only statistician I met who said: CUT! In all these schemes we still need to import information from outside the data, which is the key to realizing that we are out of the stat-womb. #Bookofwhy

1/3 Thanks for posting, Bruce, and thanks for offering an honest assessment of #Bookofwhy from the viewpoint of an enlightened economist. As you can probably guess, I am particularly interested in your comment: "His [Pearl] grasp of what economists, for example, understand ..

2/3 and don't understand about causal relationships is incomplete." What is it that economists DO understand and that I assumed they DON'T? Has this part of "econ. understanding" been expressed formally in the econ. literature since Haavelmo and the Cowles Commission? Can economists

3/3 solve the toy problems posed to them here: ucla.in/2mhxKdO? I am genuinely trying to understand what they know that they are laboring to hide from us. E.g., do they know which parameters can be identified by OLS? Which models have testable implications? Etc., etc.

It’s finally summer, which means it’s time for the next #epibookclub!

Our book will be Epidemiology & the People’s Health by Nancy Krieger

Get your copy now & we’ll kick off reading once #SER2019 is done!

#EpiPeoplesHealth

For those of u new to #epibookclub, here’s how it works:

•We read 1 chapter / week

•each week, I’ll post a recap + thoughts, qs, etc

•everyone joins in w/ thoughts, qs etc

•tag ur posts with #EpiPeoplesHealth

•I’m enlisting @nabuelezam to help moderate, so follow her too!

1/3 Our poll shows a slight preference -- 58/42 -- for allowing publication of juicy quotes from anonymous reviews. I had hoped for a more decisive preference. Aside from the entertainment value of such a collection, and its encouraging effects on young researchers,

2/3 I am concerned with its historical value. Written under the shield of power and impunity, reviewers' comments are the most honest and faithful reflections of the state of mind of a scientific community at any given period. Such a historical treasure should not be allowed to rot

3/3 in the archives of outdated journals. There should be at least some statute of limitations before unveiling this information to the public. Does anyone know what happens to these archives? Can a historian request access to the reviews of Turing's 1937 paper? #Bookofwhy

1/ Communication between CI and ML folks will improve drastically if we can translate sentences such as: "Bottou trains his NN under conditions ABC" into sentences of the form: "Given the conditional probabilities P(y|x,do(z)...)". After all, what do we get from "training" if ..

2/ if not conditional probabilities, both observational and interventional. Another benefit for the translation: theorems of impossibility. CI has developed a theory that tells us if certain tasks can be accomplished given information in the form of probabilities P(y|x, do(z)...

3/3

We can use this theory to prevent disappointments from "training" schemes that lead to impossibilities. As far as I know, theories of what is and is not possible have not (yet) been developed for training schemes. Why not use what we have? E.g., ucla.in/2Jc1kdD #Bookofwhy

1/

In view of persistent ambiguities regarding the definition of "causal inference" (CI) I am sharing here the definition that has guided me successfully throughout my journeys. CI is a method that takes data from various sources, as well as extra-data information, and produces

2/

answers to questions of two types (1) the effects of pending interventions and (2) the effects of hypothetical undoing of past events. See Causality (2000) Chapter 1. A vivid and recurrent example of a non-causal question is any question that can be answered from the joint

3/

probability distribution of observed variables, e.g., correlation, partial regression, Granger causality, weak and strong endogeneity (EHR 1983), etc. See ucla.in/2N9f28c.

This definition excludes Pearson's (1911) and Fisher's (1925) descriptions of statistical tasks

1/3

I have read this paper with great interest, trying to understand what makes regression analysts seek the wisdom of causal diagrams when they are not asking causal questions and labor merely to assess the magnitude of measurement errors.

The answer seems to be twofold.

2/3

(1) The diagram allows them to use Wright's rules ucla.in/2LcpmHz to compute correlations among latent variables (X,Y) in terms of correlations among observed proxies (X',Y'). This could be done, of course, w/o the diagram, but only at the cost of painful algebraic

3/3

derivations, as in econ. (2) The problem is in fact causal in disguise. Why else would anyone be interested in cov(X,Y), as opposed to cov(X',Y'), which is estimable from the data and sufficient for all predictive tasks?

Curious if other readers agree. #Bookofwhy
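Point (1) above can be sketched numerically. A toy linear model (the loadings and the latent correlation are illustrative assumptions, not from the thread) in which Wright's path rule recovers the latent correlation corr(X,Y) from the attenuated proxy correlation corr(X',Y'):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Latent variables X, Y with true correlation 0.6, observed only via noisy proxies
X = rng.normal(size=n)
Y = 0.6 * X + 0.8 * rng.normal(size=n)                  # corr(X, Y) = 0.6
Xp = 0.9 * X + np.sqrt(1 - 0.81) * rng.normal(size=n)   # proxy X', loading 0.9
Yp = 0.7 * Y + np.sqrt(1 - 0.49) * rng.normal(size=n)   # proxy Y', loading 0.7

r_proxy = np.corrcoef(Xp, Yp)[0, 1]
# Wright's rule along the path X' <- X - Y -> Y':
#   corr(X', Y') = 0.9 * corr(X, Y) * 0.7,  so divide out the loadings
r_latent = r_proxy / (0.9 * 0.7)

print(f"corr(X', Y') = {r_proxy:.3f}")   # attenuated, ~0.378
print(f"corr(X, Y)   = {r_latent:.3f}")  # ~0.6 recovered
```

With known loadings the correction is pure arithmetic; the diagram's role is to make the path products visible.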

1/3

Commending you on so skillfully navigating the waters of DAGs and PO. That the two are compatible comes from the fact that both are derived from structural causal models (SCM). DAGs are used to encode what we know and PO what we wish to know. However, I find it hard to

2/3

understand why you say that "PO are most useful for estimation". Assuming that we have obtained an estimand using DAG-based identification, isn't the estimand itself sufficient for estimation? Do we really need to dress it in PO clothes before proceeding with the estimation

3/3

phase? This dressing habit, I believe, is a remnant of a bygone age when, lacking DAGs, people attempted to identify queries of interest in the PO language. But why go through that tormented experience today, when we do have DAGs? #Bookofwhy

1/3

The simplicity of IV validity quickly disappears with nuances. But a more important aspect of the "repackaging" is CREDIBILITY, namely, that judgments are recruited from where they reside, not from where they are distorted to appease the identifier.

2/3

Consider IV validity again, and ask yourself "what judgments were necessary to execute this exercise?" Mark them. Now compare to the judgments required in the PO framework, which are cast in ignorability language (see Angrist et al.). Finally, ask "What type of judgments

3/3

would be more CREDIBLE if I were the one to make them?" I am sure your assessment of the value of "repackaging" will become one of greater appreciation, perhaps even one of necessity. #Bookofwhy
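The IV setting discussed above can be sketched numerically. A toy linear model (all coefficients are hypothetical) in which Z is a valid instrument: OLS of Y on X is biased by the unobserved confounder U, while the instrumental-variable (Wald) ratio recovers the effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Z affects X, is independent of the unobserved confounder U,
# and affects Y only through X -- the classic IV validity judgments
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = 1.0 * Z + 1.0 * U + rng.normal(size=n)
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)   # true effect of X on Y: 2.0

# OLS of Y on X is biased by the open backdoor through U ...
ols = np.cov(X, Y)[0, 1] / np.var(X)

# ... while the Wald ratio cov(Z,Y)/cov(Z,X) recovers the effect
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"OLS = {ols:.2f}")  # biased, ~3.0 here
print(f"IV  = {iv:.2f}")   # ~2.0
```

The three comments in the code are exactly the judgments one must defend; the arithmetic is trivial once they are granted.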

1/n

Thank you @PHuenermund for summarizing so vividly the Why-19 symposium. I agree with most of your observations and recommendations, especially those pertaining to causal inference in economics. Last week saw a huge interest on Twitter coming from economists, triggered

2/n

possibly by the challenge to analyze a causal chain using PO. While it unveiled the obvious advantages of DAGs in compactness, transparency and inference complexity, some bystanders might still have gotten the impression that one can do

3/

without them through a heavy investment in PO training. Only passive onlookers could come to such a conclusion, not one who actually tries to analyze the chain using the two languages side by side. I therefore continue to advise readers: Do not rely on onlookers, try to

1/3

In the interest of keeping this Twitter conversation as a platform for genuine learning, and saluting our Golden Rule: "One example outweighs ten debates", I strongly recommend that readers try to work out this toy example: It calls for analyzing a causal chain X-->Y-->Z

2/n

in two frameworks: 1. DAGs, 2. Potential outcomes. It has two stages: (a) specify the model assumptions in both languages, and (b) decide if those assumptions have testable implications. The example is extremely important for understanding the often-heard claim:

3/3

"The two frameworks are 'provenly equivalent'" and its counter-claim: "logical equivalence ain't computational equivalence." It is a great opportunity to engage in a fun example that most debaters have tried to avoid. Good luck.

#Bookofwhy

1/n

Tired of caricatures? Note that we never construct a DAG by listing 150,000 variables. We start by asking: can you think of a variable affecting both X and Y? Is it measured? If not, is it significant? If yes, lump it together with all other such variables and mark it U,

2/n

"unobserved confounders", ONE node. Next you ask: Can you think of a variable that is either (1) on the X-Y path and shielded from U, or (2) affects X and is shielded from U and does not affect Y (except..)? The former is front-door, the latter is IV. And so on and on. At each

3/n

stage the question arises: What is "shielded"? And the answer is given, again, in terms of: "Can you think of a variable that resides here or there... and has a property that can easily be verified in the "mind's DAG", which is expert in answering only one primitive question:

1/2

Not really. Consider the causal chain X--->Y--->Z. Students of pictures can immediately conclude that X and Z are independent given Y. I do not know ANY student of Greek symbols who can easily come to the same conclusion from a symbolic representation of the chain, say using PO

2/2

(potential outcomes). It is doable, of course, but it would take you a good 5-30 minutes of derivations. You must try it yourself to appreciate the difference and, if you fail, you might wish to take a look at the solution: ucla.in/2QpcGzS

3/3 or give it to a PO expert, for fun. #Bookofwhy
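The chain claim above can also be checked numerically. A minimal simulation (the linear-Gaussian model and its coefficients are illustrative assumptions) showing that in X --> Y --> Z, X and Z are strongly correlated marginally but not once Y is partialled out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear-Gaussian chain X -> Y -> Z
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
Z = -1.5 * Y + rng.normal(size=n)

# Marginally, X and Z are strongly correlated ...
r_xz = np.corrcoef(X, Z)[0, 1]

# ... but conditioning on Y (here: partialling Y out of both) removes it
def residualize(v, y):
    """Residual of v after least-squares regression on y (plus intercept)."""
    A = np.column_stack([np.ones_like(y), y])
    beta, *_ = np.linalg.lstsq(A, v, rcond=None)
    return v - A @ beta

r_xz_given_y = np.corrcoef(residualize(X, Y), residualize(Z, Y))[0, 1]

print(f"corr(X, Z)     = {r_xz:.3f}")          # large in magnitude
print(f"corr(X, Z | Y) = {r_xz_given_y:.3f}")  # ~0
```

The picture gives the answer in one glance; the simulation merely confirms it.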

1/4

I think the Garbage Theory is fundamentally flawed. Since "credible inference" is subsumed by "structural economics", garbage generation is a logical impossibility. This is expressed clearly in ucla.in/2mhxKdO, Section on "Experimentalists". Quoting:

2/4

Quoting: "to the extent that the “experimental” approach is valid, it is a routine exercise in structural economics. However, the philosophical basis of the “experimentalist” approach, as it is currently marketed, is both flawed and error prone." (The Refs are illuminating)

3/4

Thus, good news to all sailors and passengers on this unassailable ship: "The garbage attack is over." Moreover, the more you hear about things "you don't even know" (e.g., "what data you need"), the closer we get to an automated Angrist, because, if this is what we need to know,

1/4

Just woke up, to the sound of garbage flying. Wow! Must pacify some deadlines, but not before stating: The aim of causal inference is to automate the process of generating id-strategies, starting with mental models of the domain. I do not see any theoretical impediment to

2/4

automating the process by which Angrist & Comp. generate their identification templates from their conceptual knowledge of the world, since the former is derivable from the latter. This is what the Inference Engine is all about in #Bookofwhy p.11

3/

An "automated Angrist" is not a far-fetched dream; it is partially implemented already in Elias' software, which searches a model for nuggets such as front-door, back-door, IV, napkins and, if none is found, turns to do-calculus. Thus freeing economists to engage in things

1/n

I see a spark of agreement looming from this conversation. It is based on (I hope) everyone's agreeing that "we need a DAG for inference, bc it carries the info we need for id." Another spark is the fact that everyone (I hope) is talking about at least TWO DAGs, one residing

2/n

in the mind, tacitly storing your understanding of the relevant domain, and one (called the full DAG) that you eventually explicate when you decide to draw it on paper for full analysis. Call the former the "mental DAG" (or m-DAG) and the latter the ex-DAG (for explicit).

3/n

Scott also introduced a project-specific DAG, or a premade DAG defined by the id-strategy one wishes to use. Call it t-DAG (for template). Barring two repairable exaggerations, I generally agree with Scott's depiction of "practical economists" and CI-theorists.

Over the last few days, a fascinating discussion has been happening on #econtwitter & #epitwitter about #causalinference, potential outcomes, & directed acyclic graphs.

Since #AcademicTwitter is great for open discourse, & bad at keeping all in 1 place, I thought I'd provide this public good..

1/19

1st off, no idea where it all began; so this isn't chronological as much as topical.

From an econ perspective, a good start to get acquainted w/ the idea behind DAGs may be @yudapearl's #BookofWhy or this book amzn.to/2WuWf7d.

(Disclaimer: Haven't read them yet.)

2/19

Also, see this earlier paper for a more technical intro: bit.ly/2uwoRBa

As a 1st approximation, I found @PHuenermund's and @juli_schuess's slides helpful; to be found here: bit.ly/2uxW3Ze & here: bit.ly/2uwnGBK.

3/19

1/5

Reading this justification of X||Y_x|Z, I was ready to plead ignorance of "cost sharing", "copay", "actuaries" and "utilization trends" and quit before it got too domain-specific. But out of respect for your genuine attempt to capture the meaning of this statement,

2/5

I offer my version, in generic terms. (1) The cryptic statement X||Y_x|Z, also named "conditional ignorability" (CI) by PO folks, is a feature of the population under study and, when valid, provides a license to estimate the ATE using regression, simply "controlling for Z".

3/5

CI is the key assumption behind all work in PO. (2) Being a feature of the population, it can be validated from our model of the world, without thinking about what we do or wish to do. It depends only on how Z is related to X and Y in the presence of other variables, if any.
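The "license" in point (1) above can be sketched numerically. A minimal simulation (hypothetical coefficients) in which Z confounds X and Y, so the naive contrast is biased, while regression "controlling for Z" recovers the ATE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Z confounds X and Y; given Z, treatment is "ignorable": X _||_ Y_x | Z
Z = rng.normal(size=n)
X = (rng.normal(size=n) + 1.2 * Z > 0).astype(float)   # confounded treatment
Y = 3.0 * X + 2.0 * Z + rng.normal(size=n)             # true ATE = 3.0

# Naive difference in means is biased by the open backdoor X <- Z -> Y
naive = Y[X == 1].mean() - Y[X == 0].mean()

# "Controlling for Z": regress Y on X and Z; X's coefficient estimates the ATE
A = np.column_stack([np.ones(n), X, Z])
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
ate_hat = beta[1]

print(f"naive contrast = {naive:.2f}")    # biased upward
print(f"adjusted ATE   = {ate_hat:.2f}")  # ~3.0
```

Whether Z actually grants this license is, of course, the population-level judgment that point (2) says must come from the model of the world, not from the data.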

1/5

No offense, and I appreciate your sharing impressions with other readers. I am even more grateful for mentioning @StatModeling, which should give readers a glimpse at how some 2019 statisticians think. I quote: "I find it baffling that Pearl and his colleagues keep taking

2/5

statistical problems and, to my mind, complicating them by wrapping them in a causal structure." This quote from Gelman's blog should enter the archives of scientific revolutions as proof that my depiction of the inertial forces paralyzing statistics is not made up; and my

3/5

description of causal inference as a "revolution" is not a fantasy. The resistance to accepting needed assumptions as "extra statistical" is alive even in 2019. Moreover, readers of this quote take it at face value that problems solved in #Bookofwhy can also be solved by

1/3

There isn't really a great need to differentiate external validity vs generalisability vs transportability, since we now have a unified framework to handle them all, as in ucla.in/2Jc1kdD. The most important distinction one needs to make is about the disparities

2/3

between the study and target populations, i.e., whether such disparities are "man-made" (as in recruiting subjects) or "nature-made" (e.g., age differences). The interplay between the two is described in ucla.in/2L6yTzE. Still, however we taxonomize these subproblems,

3/3

I would be very wary of any theory that does not provide you with playful solutions to at least some toy problems, for example, the three toy problems in Fig. 3 of ucla.in/2N7S0K9. #Bookofwhy @BrownUniversity #kolokotrones @HarvardEpi @harvard_data

1/3

You ask if there is a "shorter" #Bookofwhy, and I assume you want to get the technical meat w/o reading the stories. Yes, there is. If you take a look at Section 2 of ucla.in/2mhxKdO, you will find the whole book summarized in 3 pages. But it must be supplemented

2/3

with the toy problems of Section 3. No matter how many books one reads ABOUT economics, shying away from solving toy problems would leave one where econometrics is today -- two decades behind the times. Plus, it is fun to see important methodological problems escaping their

3/3

textbook handcuffs and rejoicing in game-like solutions. I therefore recommend: do not skip the toy problems in ucla.in/2mhxKdO and their solutions. Try one -- it's better than reading a whole book. Among the easy ones: Can your research question be answered using OLS?

1/3

Your question: "why I wasn't taught the graphical approach" was raised by many economists on this Twitter, and I have partially answered it in ucla.in/2mhxKdO and ucla.in/2L8OCyl. Without going too deep into Psychology, the answer is "Not home grown!". (cont.

2/3

Your second question: "Would I be [taught it] today" is tricky. @Susan_Athey says there is no need, because economists already have the answers. (see

According to others (e.g., @marcfbellemare, @PHuenermund, @causalinf), economists are beginning to rebel

3/3

against the tyranny of outdatedness. This workshop: why19.causalai.net will provide an opportunity for both rebels and conformists to present their cases before Clio, the Muse of history. #Bookofwhy