Maximum depth of a decision tree

I am currently studying decision tree classification, but I have some doubts regarding the maximum depth of a decision tree.

For example, consider a case where I have 3 features (X1, X2, X3) and one binary target (Y1).
X1 and X2 are categorical, both with cardinality 3, and X3 is continuous.

From the definition of the depth of a tree, I understand that the maximum depth it can have is 3, since the most it can do is split on one variable, then on another, and then on the last one.

But I have read on Stack Exchange that:
"The absolute maximum depth would be N−1, where N is the number of training samples."

Link to the answer:

So can someone please explain to me how this is possible?
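For context, here is a small experiment I tried (using scikit-learn, which I am assuming here; the data is made up). With a single continuous feature and alternating labels, the tree keeps re-splitting that same feature at different thresholds, so its depth ends up larger than the number of features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# One continuous feature, alternating class labels: no single
# threshold separates the classes, so the tree must re-split
# the same feature repeatedly to reach pure leaves.
X = np.arange(8, dtype=float).reshape(-1, 1)   # 8 samples, 1 feature
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.get_depth())  # greater than 1, the number of features
```

Here the depth clearly exceeds the feature count, which seems to contradict my reasoning above, so I would like to understand where my reasoning goes wrong.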