This comes up all the time. It seems like it would be possible, but imagine the case where you want to verify that a menu shows on hover. Was the hover on the menu intentional?
Another example, imagine an error box shows up. Was that correct or incorrect?
So you need to build a "meta" layer, which includes UI, to start marking up the video and end up in the same state.
Our approach has been to let the AI explore the app and come up with ideas. Less interaction from the user.
Thinking back to my work on a B2B enterprise app: users sometimes hit weird scenarios, e.g. feature X turned on or off in combination with a specific edition (country).
Maybe the AI could surf the user activity logs or crash logs and reproduce those scenarios as test cases.
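A minimal sketch of that idea, assuming a hypothetical log format where each crash entry records the edition, feature flags, and the failing action (all names here are illustrative, not from any real product):

```python
import json

# Hypothetical crash-log entries: edition, feature flags, and the action that failed.
CRASH_LOG = [
    {"edition": "DE", "flags": {"feature_x": True}, "action": "open_checkout"},
    {"edition": "US", "flags": {"feature_x": False}, "action": "open_checkout"},
]

def log_entry_to_test_case(entry):
    """Turn one crash-log entry into a reproducible test-case description."""
    flag_desc = ", ".join(f"{k}={v}" for k, v in sorted(entry["flags"].items()))
    return {
        "name": f"repro_{entry['action']}_{entry['edition']}",
        "setup": f"edition={entry['edition']}; {flag_desc}",
        "steps": [entry["action"]],
    }

cases = [log_entry_to_test_case(e) for e in CRASH_LOG]
for case in cases:
    print(json.dumps(case))
```

The point is only that the environment (edition plus flag combination) is captured alongside the failing step, so an AI agent could replay exactly the configuration that produced the crash rather than exploring blindly.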