I objected to the phrasing, the word choice, "random key". Just where the author got that, I don't know; I doubt the phrase appears anywhere in the writings of any of the people I listed.
With your rewriting and clarification of his phrase "random key", you show that the author was at least trying to be right and was thinking of the right things in spite of his, say, unique wording.
In an attempt to be clearer, as you were, about the roles of what we have called w and t, I continued on and defined the notion of a sample path.
In some applied work there is a tendency to jump too soon to averages, e.g., expectations. But to do much with stochastic processes, we should also consider the sample paths. E.g., in stochastic optimal control the controller controls each sample path individually; we do not (A) average the sample paths, (B) design a controller for that average, and (C) apply that controller to each of the sample paths.
So, net, and in line with some of your post, we should at least have the concept and some notation for sample paths.
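To make the notation concrete, here is a minimal sketch, entirely my own and not from any of the texts discussed: each seed plays the role of w (the author's "random key") selecting one sample path t -> X(t, w) of a simple random walk, and averaging across paths gives an expectation estimate while throwing away the path-level detail a controller would need. The names `sample_path` and `mean_path` are hypothetical, just for illustration.

```python
import random

def sample_path(omega, n_steps=100):
    """One sample path t -> X(t, omega): a simple Gaussian random walk.
    The seed omega plays the role of the 'random key' selecting the path."""
    rng = random.Random(omega)
    path, x = [], 0.0
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Three individual sample paths, indexed by omega = 0, 1, 2.
paths = [sample_path(omega) for omega in range(3)]

# Averaging across paths at each t estimates the expectation E[X(t)],
# but it discards the individual paths a controller must act on.
mean_path = [sum(col) / len(col) for col in zip(*paths)]
```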
I agree that the term "random key" seems to be his own invention. It worked for me. I wonder whether he also used it in the earlier material on plain random variables.
It appears that there are three camps of stochastic processes:
(1) Old. Analyze bumps on railroad tracks, sound noise, the weather over time, ocean waves, .... So the expectation is fixed, that is, does not vary over time. The sample paths are bounded. One might also assume that the variance is fixed. If the increments are independent and identically distributed, then one can assume the distributions are Gaussian, that is, that we have a Gaussian process.
In that case one can do interpolation, extrapolation, and smoothing, find power spectra, and see what happens to sample paths as they run through a time-invariant linear filter.
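A toy sketch, again my own rather than from any of these camps' texts, of running one sample path through a time-invariant linear filter: a k-point moving average applied to Gaussian white noise. For i.i.d. input of variance s^2, the averaged output has variance roughly s^2 / k, so the filter visibly smooths the path. The function names here are hypothetical.

```python
import random

rng = random.Random(1)
# One white-noise sample path: i.i.d. standard Gaussian samples.
noise = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

def moving_average(x, k):
    """Time-invariant linear filter: y[t] = (x[t-k+1] + ... + x[t]) / k."""
    return [sum(x[t - k + 1:t + 1]) / k for t in range(k - 1, len(x))]

smoothed = moving_average(noise, k=8)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

# The filtered path should have much smaller variance than the input.
print(variance(noise), variance(smoothed))
```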
(2) Newer. One can get into Markov processes and maybe also martingales. There is a good text by Cinlar, long at Princeton; his chapter on the Poisson process is especially good. Cinlar does not mention measure theory, but other texts do, or really are all about measure theory, that is, probability based on measure theory. The measure theory itself can be gotten from Royden, Real Analysis, and Rudin, Real and Complex Analysis. One should have a measure-theoretic background in probability, and for that try any or all of Breiman, Neveu, Loeve, and Chung. From there one can get into potential theory and then stochastic optimal control, and may end up with some expensive books from Springer written by Russians. Otherwise I gave some names above.
(3) Queuing theory. One gets to do a lot of work with Poisson processes, with the hope of making applications, e.g., in the sense of operations research.
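As a small illustration of the workhorse of that third camp, here is a sketch, my own and not from Cinlar or any queueing text, of simulating a Poisson process by summing i.i.d. exponential interarrival times; over a long horizon the arrival count should be close to rate times horizon. The function name `poisson_arrivals` is hypothetical.

```python
import random

def poisson_arrivals(rate, horizon, rng):
    """Arrival times of a Poisson process on [0, horizon], built by
    summing i.i.d. exponential(rate) interarrival times."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return arrivals
        arrivals.append(t)

rng = random.Random(42)
rate, horizon = 2.0, 1_000.0
arrivals = poisson_arrivals(rate, horizon, rng)

# Expected count is rate * horizon = 2000; the standard deviation is
# about sqrt(2000) ~ 45, so the observed count should land nearby.
print(len(arrivals))
```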