The path coefficients per se cannot be "significant", but I guess you mean significantly different from zero, which of course makes a lot of sense. Robin's way is certainly quick: you just check whether the z-value is above 1.96 or below -1.96, and if it is, the path coefficient is significantly different from zero. An advantage of displaying the result this way (as opposed to, say, giving the p-value for this test, as some other programs do) is that the z-value together with the estimate lets you recover the standard error (SE = estimate / z), so you can also test against values other than zero: (estimate - 1) / SE is the z-value for the test against 1, and if its absolute value exceeds 1.96, the coefficient is significantly different from 1 (which can sometimes be more interesting than the test against zero, although zero is of course the more frequently useful reference value).
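To make the arithmetic concrete, here is a minimal sketch of the Wald z-test described above; the estimate and standard error are hypothetical numbers, not output from any particular program:

```python
from scipy.stats import norm

def wald_z(estimate, se, null_value=0.0):
    """Wald z statistic for H0: parameter == null_value."""
    return (estimate - null_value) / se

# Hypothetical path coefficient and standard error
est, se = 0.85, 0.10

z0 = wald_z(est, se)         # test against 0 -> 8.5, clearly significant
z1 = wald_z(est, se, 1.0)    # test against 1 -> -1.5, not significant
p0 = 2 * norm.sf(abs(z0))    # two-sided p-value for the test against 0
```

Note that the same estimate can be significantly different from 0 yet not significantly different from 1, as in this example.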
If you want the "best possible p-value", which in this case would mean one that also takes into account the cross-information from other parameters, I suggest setting up a likelihood ratio test: clone your model, and in the clone, fix the path you are interested in to zero (or one, or any other value). You can then connect the two models (by dragging a path from one to the other) and check the little ball on the edge between the models (by hovering over it with the mouse) to see the p-value for this comparison. This is a likelihood ratio test, which is provably the best test asymptotically for normally distributed data.
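For reference, the computation behind that p-value looks like the sketch below: the LR statistic is twice the difference in log-likelihoods, compared against a chi-square distribution with as many degrees of freedom as parameters you fixed. The log-likelihood values here are made up for illustration:

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_restricted, df):
    """Likelihood ratio test of a restricted model against the full model.

    LR = 2 * (log-likelihood of full model - log-likelihood of restricted
    model), asymptotically chi-square distributed with `df` degrees of
    freedom under the null hypothesis (the restriction holds).
    """
    lr = 2.0 * (loglik_full - loglik_restricted)
    return lr, chi2.sf(lr, df)

# Hypothetical log-likelihoods; fixing one path gives df = 1
lr, p = lr_test(loglik_full=-1234.5, loglik_restricted=-1237.8, df=1)
# lr = 6.6, p ≈ 0.010 -> the restriction (e.g. path fixed to zero) is rejected
```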
BTW (I see we cross-posted, and you asked about modification indices): you can also use the LR statistic between these two models as a modification index if the path you restricted is a factor loading.