NickBanana:

Hi Anton, thanks for sharing. In the example, when you computed the base scenario probabilities (Vol, Rate), you marginalized over the CB root node: by not conditioning on any specific state of the root node, you calculated the overall distribution of the Vol and Rate leaf nodes, essentially by considering all possible root node states. Is this what you meant when you said that the discrete Bayesian tree is a view per se, without conditioning on any particular root node state? The example is a bit confusing, as I was expecting to see 5 PDFs displayed: the prior, the three states from the root node (dovish, expected, hawkish), plus the base one from marginalization, yet in the chart you only plotted four. It could be that I did not understand the logic behind it. I would have to check the figures, but I presume the differences would be small between computing the base probabilities the way you did and conditioning on the CB node under the "expected" case.

Thanks!

Anton Vorobets:

Thanks for your question, Nick. I am happy to see that you put careful thought into the example.

Yes, when we do not condition on any outcome for the variables/nodes in the Bayesian network, that is what I call the unconditional distribution. However, just by imposing the Bayesian net structure and specifying Entropy Pooling views for the leaf node states, we see that the unconditional distribution deviates from the prior. That is what I mean by the Bayesian net being a view.
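To make the marginalization you describe concrete, here is a minimal numpy sketch. The CB prior and the conditional probability tables are purely hypothetical numbers, not the ones from the article:

```python
import numpy as np

# Hypothetical distribution over the central bank (CB) root-node states.
p_cb = np.array([0.25, 0.50, 0.25])  # dovish, expected, hawkish

# Hypothetical conditional probability tables for the leaf nodes given CB.
# Rows follow the CB states above; columns are the leaf-node states.
p_vol_given_cb = np.array([
    [0.6, 0.4],  # P(Vol = low, high | dovish)
    [0.5, 0.5],  # P(Vol = low, high | expected)
    [0.3, 0.7],  # P(Vol = low, high | hawkish)
])
p_rate_given_cb = np.array([
    [0.7, 0.3],  # P(Rate = down, up | dovish)
    [0.4, 0.6],  # P(Rate = down, up | expected)
    [0.1, 0.9],  # P(Rate = down, up | hawkish)
])

# Base (unconditional) leaf distributions: marginalize over the CB states,
# i.e. P(Vol) = sum_cb P(cb) * P(Vol | cb), and similarly for Rate.
p_vol = p_cb @ p_vol_given_cb
p_rate = p_cb @ p_rate_given_cb

# Joint leaf distribution (Vol and Rate are conditionally independent given CB
# in this structure): P(Vol, Rate) = sum_cb P(cb) * P(Vol | cb) * P(Rate | cb).
p_joint = np.einsum('c,cv,cr->vr', p_cb, p_vol_given_cb, p_rate_given_cb)

print("P(Vol):       ", p_vol)
print("P(Rate):      ", p_rate)
print("P(Vol, Rate):\n", p_joint)
```

In the full method, it is the Entropy Pooling views on the leaf node states that make the resulting unconditional distribution deviate from the prior, as described above.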

With the unconditional posterior, we can decide whether to use it or the prior as the baseline for comparing the different central bank scenarios. If we use the prior, we compare the combined effect of the Bayesian net view and the variable conditioning. If we use the unconditional posterior, we compare just the effect of the conditioning.

We can also mix the prior with the unconditional posterior to assign some confidence to how much we believe in the Bayesian network view.
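As a minimal sketch, the mixing is just a convex combination of the two probability vectors, with the weight acting as the confidence in the Bayesian network view. The helper function and the example probabilities below are purely illustrative:

```python
import numpy as np

def mix_with_confidence(p_prior: np.ndarray, q_posterior: np.ndarray, c: float) -> np.ndarray:
    """Convex combination of prior and unconditional posterior probabilities.

    c = 0 keeps the prior, c = 1 fully trusts the Bayesian network view.
    """
    if not 0.0 <= c <= 1.0:
        raise ValueError("confidence c must be in [0, 1]")
    return (1.0 - c) * p_prior + c * q_posterior

# Example with a uniform prior over 5 scenarios and an illustrative posterior.
p_prior = np.full(5, 0.2)
q_posterior = np.array([0.10, 0.15, 0.25, 0.30, 0.20])
print(mix_with_confidence(p_prior, q_posterior, c=0.5))
```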

In relation to the conditioning, some cases might just be more interesting than others. For this example, I thought the most interesting comparisons were between the prior and the unconditional posterior, and between the hawkish and dovish scenarios.

For larger Bayesian networks, this becomes especially true: you probably just want to condition on the outcomes of a couple of variables and then see the effect on the joint P&L.
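As a rough sketch of that last step: the simulated P&L scenarios stay fixed, and only the probability weights change when computing portfolio statistics. Everything below (the simulated P&L, the conditional probabilities, and the expected_pnl helper) is a placeholder rather than the actual workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the simulated joint P&L of a portfolio over S scenarios.
S = 10_000
pnl = rng.normal(loc=0.0, scale=0.02, size=S)

# Uniform prior probabilities and a stand-in for the conditional posterior
# probabilities, e.g. those obtained by conditioning on the hawkish CB state.
p_prior = np.full(S, 1.0 / S)
q_conditional = np.exp(-50.0 * pnl)   # stand-in reweighting only
q_conditional /= q_conditional.sum()  # normalize to a probability vector

def expected_pnl(pnl: np.ndarray, prob: np.ndarray) -> float:
    """Probability-weighted expected P&L over the same scenario set."""
    return float(pnl @ prob)

print("prior E[P&L]:      ", expected_pnl(pnl, p_prior))
print("conditional E[P&L]:", expected_pnl(pnl, q_conditional))
```

The same pattern applies to any probability-weighted statistic, not just the mean.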

I go into much more detail about all of the above in this video that I just posted: https://antonvorobets.substack.com/p/7-bayesian-nets-and-causal-stress-test

Let me know if you have further questions and once again thanks for studying the methods carefully :-)

NickBanana:

Thank you for your time and the detailed reply. Indeed, it also matters which views (and how severe they are) one chooses to apply to the leaf nodes, except that they are now probability-weighted by the network/graph and streamlined to follow a certain logic. I guess this helps users not to "wander off" and instead focus on the key portfolio risk factors, drivers, and related scenarios!

Anton Vorobets:

Investment managers are indeed usually focused on analyzing the key risk factors and building hypotheses for these. They then want something that helps them predict what will happen to the rest of the portfolio, which is what (Sequential) Entropy Pooling does in a very elegant way :-)
