Testing Parallelism for Several Lines
(See Section 9.5 in text; attributed to Sen and Adichie.)

The general model is one of `parallel regressions`

    Y_ij  =  beta_i X_ij + mu_i + error_ij                      (1)

where 1 le i le kk indexes the different TREATMENT GROUPS or BLOCKS
and 1 le j le nn_i indexes the OBSERVATIONS WITHIN THE i-th GROUP.
(A variant of (1) assumes mu_i = mu across groups.)  The hypothesis
to test is

    H_0:  beta_i = beta_1, the same for all blocks               (2)

This tests equality of incremental responses to a covariate across the
different groups or treatment groups.

In traditional linear model theory (with normal errors), the model

    Y_ij  =  beta X_ij + mu_i + error_ij                         (3)

(with constant beta) is called an ANALYSIS OF COVARIANCE (or ANOCOVA
or ANCOVA) model. The name comes from the fact that the original
methods for estimating beta and the error variance in (3) depended on
sample covariances of Y_ij vs X_ij. Nowadays, linear models with
normal errors are analyzed by setting up and inverting large ``design``
matrices, with the same methods being used for all linear models, so
the term `analysis of covariance` is somewhat obsolete. In linear
models, testing (2) for the more general ANOCOVA model (1) with
treatment-group-dependent beta_i is called testing for an
``interaction`` between the slope beta and the treatment groups.

EXAMPLE: Given kk=4 core samples from Apalachicola Bay, Florida,
measure the ammonia released (Y_ij) at times X_ij. See Table 9.4,
page 431, in text: kk=4 groups and nn=5 (X,Y) pairs per group.

Data:
  Core#1:  Y:  0.000  33.019 111.314 196.205 230.659
           X:  0.0     1.5     3.0     4.5     6.0
  Core#2:  Y:  0.000 131.831 181.603 230.070 258.119
           X:  0.0     1.5     3.0     4.5     6.0
  Core#3:  Y:  0.000  33.351  97.463 196.615 217.308
           X:  0.0     1.5     3.0     4.5     6.0
  Core#4:  Y:  0.000   8.959 105.384 211.392 255.105
           X:  0.0     1.5     3.0     4.5     6.0

APPROACH I: Use the model described above. The H_0 model is
Y_ij = beta X_ij + mu_i + error_ij.
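One way to compute the least-squares slope estimates for this example
is to center each core's data at its group means. This is a minimal
sketch (numpy is an assumption here, not the text's own code):

```python
import numpy as np

# Table 9.4 data: kk = 4 cores, nn = 5 (X, Y) pairs per core.
X = np.tile([0.0, 1.5, 3.0, 4.5, 6.0], (4, 1))
Y = np.array([
    [0.000,  33.019, 111.314, 196.205, 230.659],   # Core#1
    [0.000, 131.831, 181.603, 230.070, 258.119],   # Core#2
    [0.000,  33.351,  97.463, 196.615, 217.308],   # Core#3
    [0.000,   8.959, 105.384, 211.392, 255.105],   # Core#4
])

# Center each core at its own means; under H_0 the pooled least-squares
# slope is the no-intercept slope of the pooled centered data.
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
beta_pool = (Xc * Yc).sum() / (Xc ** 2).sum()                 # pooled slope
beta_within = (Xc * Yc).sum(axis=1) / (Xc ** 2).sum(axis=1)   # per-core slopes
print(round(beta_pool, 3))          # 42.492
print(np.round(beta_within, 3))
```

These reproduce the pooled beta = 42.492 and the four within-group
slopes shown in the output below.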
Minimizing over the mu_i before the beta shows that the least-squares
estimate of beta is the same as for the model

    Y_ij - Ybar_i  =  beta (X_ij - Xbar_i) + error_ij            (4)

Thus, given H_0, the least-squares estimate of beta is the same as for
a single linear regression over all (i,j) pairs with covariate and
response variables centered at group means.

The Sen-Adichie approach is to compare ``aligned`` values
Ystar_ij = Y_ij - beta X_ij for the value of beta estimated from (4).
If H_0 is false, then Ystar_ij should tend to decrease in j for fixed
i if beta_i < beta. The values Ystar_ij are not corrected for the
intercepts mu_i, but within-group Friedman-like ranks
R_ij = Rank(Ystar_ij) (so that 1 le R_ij le nn) should show the same
correlation. Rank-regression-like statistics

    T_i  =  Sum(j=1,nn) (X_ij - Xbar_i) * R_ij(Ystar) / (nn+1)

should pick up this effect more strongly: that is, T_i should be large
positive if beta_i > beta. The Sen or Sen-Adichie test is based
essentially on the sum of the squares of the T_i.

  Overall beta (beta_bar) = 42.492
  Beta values within groups:  41.634  40.965  39.859  47.510

  Aligned observations Y_ij - betabar X_ij:
  Core#1:   0.000  -30.719  -16.161    4.992  -24.291
  Core#2:   0.000   68.093   54.128   38.857    3.169
  Core#3:   0.000  -30.387  -30.012    5.402  -37.642
  Core#4:   0.000  -54.779  -22.091   20.179    0.155

  Within-group ranks, Ti=Sum_j Xc(i,j)*Rank(i,j)/(nn+1), and
  Ci2=Sum(j=1,nn_i) (X_ij - Xbar_i)^2:
  Core#1:  4.0 1.0 3.0 5.0 2.0   Ti= 0.00  Ci2=22.50
  Core#2:  1.0 5.0 4.0 3.0 2.0   Ti= 0.00  Ci2=22.50
  Core#3:  4.0 2.0 3.0 5.0 1.0   Ti=-0.75  Ci2=22.50
  Core#4:  3.0 1.0 2.0 5.0 4.0   Ti= 1.50  Ci2=22.50

  Sen-Adichie test: X=1.5000   P=0.6823   (df=3)

Compare with a permutation test of within-group ranks
(Friedman-like permutations):
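Both the aligned-rank chi-square statistic and the Friedman-like
permutation test can be sketched as follows. This is an illustrative
reconstruction, not the text's own code: numpy/scipy are assumptions,
the factor 12 comes from approximating Var(R_ij/(nn+1)) by the
Uniform(0,1) variance 1/12 (which reproduces the printed X=1.5), the
permutation score Sum_i T_i^2/C_i^2 with unnormalized T_i reproduces
the printed Tscore=4.5, and the seed differs from the text's generator,
so the simulated P-value only approximates the one shown below.

```python
import numpy as np
from scipy.stats import rankdata, chi2

rng = np.random.default_rng(12345678)

X = np.tile([0.0, 1.5, 3.0, 4.5, 6.0], (4, 1))
Y = np.array([
    [0.000,  33.019, 111.314, 196.205, 230.659],
    [0.000, 131.831, 181.603, 230.070, 258.119],
    [0.000,  33.351,  97.463, 196.615, 217.308],
    [0.000,   8.959, 105.384, 211.392, 255.105],
])
kk, nn = Y.shape
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
beta = (Xc * Yc).sum() / (Xc ** 2).sum()       # pooled slope under H_0

# Align, then rank within each group (Friedman-like ranks 1..nn).
Ystar = Y - beta * X
R = np.apply_along_axis(rankdata, 1, Ystar)
T = (Xc * R).sum(axis=1) / (nn + 1)
C2 = (Xc ** 2).sum(axis=1)

# Sen-Adichie chi-square statistic and large-sample P-value.
V = 12.0 * (T ** 2 / C2).sum()
pval = chi2.sf(V, df=kk - 1)
print(V, pval)                                 # X = 1.5, P ~ 0.6823

# Friedman-like permutation test of the unnormalized score.
def tscore(ranks):
    t = (Xc * ranks).sum(axis=1)
    return (t ** 2 / C2).sum()

obs = tscore(R)                                # Tscore = 4.5
ns = 10000
perms = np.empty(ns)
for s in range(ns):
    Rperm = np.array([rng.permutation(row) for row in R])
    perms[s] = tscore(Rperm)
print(obs, (perms >= obs).mean())              # simulated P near 0.86
```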
Initializing the random-number generator at 12345678
Carrying out ns=10000 sets of within-group permutations:
  Observed Tscore = 4.5
  Number of simulations with values >= Tscore, and total number:
      8578  10000
  95% CI for true P-value, bracketing the estimate of the true P-value
  (since the score is >= 0, P-values are inherently two-sided):
      0.8510  0.8578  0.8646

APPROACH II: Since the within-group regressions appear to have no
intercepts, it makes more sense to apply the same arguments with
no-intercept regressions, ignoring the observations with j=1 and
X_ij = Y_ij = 0.

IGNORING the first columns of X and Y: using 4x4 matrices X1, Y1
and no-intercept regressions:

  Overall beta (beta_bar) = 41.924
  Beta values within groups:  39.264  49.283  37.497  41.652

  Aligned observations Y1_ij - betabar X1_ij:
  Core#1:  -29.867  -14.458    7.548  -20.884
  Core#2:   68.945   55.831   41.413    6.576
  Core#3:  -29.535  -28.309    7.958  -34.235
  Core#4:  -53.927  -20.388   22.735    3.562

  Within-group ranks, Ti=Sum_j Xc1(i,j)*Rank(i,j)/(nn1+1), and
  Ci2=Sum(j=1,nn_i) (X1_ij - X1bar_i)^2:
  Core#1:  1.0 3.0 4.0 2.0   Ti= 0.60  Ci2=11.25
  Core#2:  4.0 3.0 2.0 1.0   Ti=-1.50  Ci2=11.25
  Core#3:  2.0 3.0 4.0 1.0   Ti=-0.30  Ci2=11.25
  Core#4:  1.0 2.0 4.0 3.0   Ti= 1.20  Ci2=11.25

  Sen-Adichie test: X=4.4160   P=0.2199   (df=3)

Compare with a permutation test of within-group ranks
(Friedman-like permutations):

Initializing the random-number generator at 12345678
Carrying out ns=10000 sets of within-group permutations:
  Observed Tscore = 9.2
  Number of simulations with values >= Tscore, and total number:
      2160  10000
  95% CI for true P-value, bracketing the estimate of the true P-value
  (since the score is >= 0, P-values are inherently two-sided):
      0.2079  0.2160  0.2241

APPROACH III: Since the within-group regressions appear to have no
intercepts, and X_ij > 0 except for j=1 with X_ij = Y_ij = 0, one
could also apply the Kruskal-Wallis test to the slopes
Z_ij = Y_ij / X_ij for 2 le j le nn.

Data for Y, X, and Z for X_ij > 0:
  Core#1:  Y:  33.019 111.314 196.205 230.659
           X:   1.5     3.0     4.5     6.0
           Z:  22.013  37.105  43.601  38.443    Average: 35.290
  Core#2:  Y: 131.831 181.603 230.070 258.119
           X:   1.5     3.0     4.5     6.0
           Z:  87.887  60.534  51.127  43.020    Average: 60.642
  Core#3:  Y:  33.351  97.463 196.615 217.308
           X:   1.5     3.0     4.5     6.0
           Z:  22.234  32.488  43.692  36.218    Average: 33.658
  Core#4:  Y:   8.959 105.384 211.392 255.105
           X:   1.5     3.0     4.5     6.0
           Z:   5.973  35.128  46.976  42.517    Average: 32.649

Kruskal-Wallis ranks (of the pooled Z values), rank sums, and rank means:
  Core#1:  R:  2.0  7.0 11.0  8.0   Rsum=28.00  Rmean= 7.000
  Core#2:  R: 16.0 15.0 14.0 10.0   Rsum=55.00  Rmean=13.750
  Core#3:  R:  3.0  4.0 12.0  6.0   Rsum=25.00  Rmean= 6.250
  Core#4:  R:  1.0  5.0 13.0  9.0   Rsum=28.00  Rmean= 7.000

No ties. Large-sample Kruskal-Wallis approximation (ASSUMING no ties):
  H=6.5515   P=0.0877   (df=3)
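The Kruskal-Wallis computation above is easy to reproduce; the
following sketch uses scipy.stats.kruskal (an assumption, not the
text's own code), which applies the same large-sample chi-square
approximation when there are no ties:

```python
import numpy as np
from scipy.stats import kruskal

# Slopes Z_ij = Y_ij / X_ij for the j >= 2 observations (X_ij > 0).
X = np.tile([1.5, 3.0, 4.5, 6.0], (4, 1))
Y = np.array([
    [ 33.019, 111.314, 196.205, 230.659],   # Core#1
    [131.831, 181.603, 230.070, 258.119],   # Core#2
    [ 33.351,  97.463, 196.615, 217.308],   # Core#3
    [  8.959, 105.384, 211.392, 255.105],   # Core#4
])
Z = Y / X

# Kruskal-Wallis test of equal Z-distributions across the kk = 4 cores;
# the rows of Z are passed as the four samples.
H, P = kruskal(*Z)
print(round(H, 4), round(P, 4))   # 6.5515 0.0877
```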