The cloud only gets audio after local processing identifies the wake word. Only once the device itself decides it has heard the wake word does audio start streaming to the cloud, which may run additional analysis on the wake word and cancel the stream.
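A minimal sketch of that gating flow, in Python. Every name here (local_wake_word_score, cloud_verify_wake_word, the threshold, the buffer size) is hypothetical, chosen only to illustrate the two-stage check described above; none of it is a real vendor API.

```python
import random

def local_wake_word_score(frame: bytes) -> float:
    """Small on-device model, cheap enough to run on every audio frame."""
    return random.random()  # placeholder score for illustration

def cloud_verify_wake_word(buffered_audio: list) -> bool:
    """Larger cloud-side model that re-checks the suspected wake word."""
    return random.random() > 0.2  # placeholder result for illustration

LOCAL_THRESHOLD = 0.9  # hypothetical on-device trigger threshold
VERIFY_AFTER_FRAMES = 10  # hypothetical point at which the cloud re-checks

def handle_frames(frames):
    streaming = False
    sent = []
    for frame in frames:
        if not streaming:
            # Nothing leaves the device unless the local model fires.
            if local_wake_word_score(frame) < LOCAL_THRESHOLD:
                continue
            streaming = True
            sent = [frame]
        else:
            sent.append(frame)  # audio is now being streamed to the cloud
            # The cloud re-checks the wake word and can cancel the stream
            # if it decides the local trigger was a false accept.
            if len(sent) == VERIFY_AFTER_FRAMES and not cloud_verify_wake_word(sent):
                streaming = False
                sent = []
```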
I'd bet that no audio is sent at all for exclusions handled on device, and that the article is just not ideally worded. It's plausible that some devices can't do on-device exclusion processing (perhaps some of the oldest ones). Cloud processing would then use information not available locally, like sound signatures from commercials that are no longer running, or the unknown-media processing described.
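To make the exclusion idea concrete, here is a hedged sketch of what an on-device check against known media signatures might look like. The fingerprinting is a plain hash purely for illustration; a real system would use robust acoustic fingerprinting, and the table of known media is an assumption, not anything documented.

```python
import hashlib

# Hypothetical table of sound signatures for media known to contain the
# wake word (e.g. commercials), shipped to the device while relevant and
# removable once a campaign stops airing.
KNOWN_MEDIA_FINGERPRINTS = {
    "d2f1e0...": "example commercial spot",  # illustrative entry only
}

def fingerprint(window: bytes) -> str:
    # Stand-in for a real acoustic fingerprint of a short audio window.
    return hashlib.sha256(window).hexdigest()

def should_stream(window: bytes) -> bool:
    """Return False when the wake word appears to come from known media,
    so no audio is sent to the cloud at all. Devices without this table
    (perhaps the oldest ones) would rely on cloud-side exclusion instead."""
    return fingerprint(window) not in KNOWN_MEDIA_FINGERPRINTS
```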