No, Amazon isn't changing how all Echos process your voice requests to satisfy Alexa+'s more powerful models
It's not what you think

Amazon is turning off the ability to process voice requests locally. It's a seemingly major privacy pivot and one that some Alexa users might not appreciate. However, this change affects exactly three Echo devices and only if you actively enabled "Do Not Send Voice Recordings" in the Alexa app settings.
Right. It's potentially not that big of a deal and, to be fair, the level of artificial intelligence Alexa+ is promising, let alone the models it'll be using, all but precludes local processing. It's pretty much what Daniel Rausch, Amazon's VP of Alexa and Echo, told us when he explained that these queries would be encrypted, sent to the cloud, and then processed by Amazon's and partner Anthropic's AI models at servers far, far away.
That's what's happening, but let's unpack the general freakout.
After Amazon sent an email to customers, apparently only those who own an Echo Dot 4, Echo Show 10 (3rd Gen), or Echo Show 15, saying that the option to have Alexa voice queries processed on device would end on March 28, some in the media cried foul.
They had a point: Amazon hasn't had the best track record when it comes to protecting your privacy. In 2019, there were reports of Amazon employees listening to customer recordings. Later, there were concerns that Amazon might hold onto recordings of, say, you yelling at Alexa because it didn't play the right song.
Amazon has since cleaned up its data act with encryption and, with this latest update, promises to delete your recordings from its servers.
A change for the few
This latest change, though, sounded like a step back because it takes away a consumer control, one that some might've been using to keep their voice data off Amazon's servers.
However, the vast majority of Echo devices out there aren't even capable of on-device voice processing, which is why most of them didn't even have this control.
A few years ago, Amazon published a technical paper on its efforts to bring "on-device speech processing" to Echo devices. The idea was to put "processing on the edge" and reduce latency and bandwidth consumption.
Turns out it wasn't easy – Amazon described it as a massive undertaking. The goal was to put automatic speech recognition, whisper detection, and speech identification locally on a tiny, relatively low-powered smart speaker system. Quite a trick, considering that in the cloud, each process ran "on separate server nodes with their own powerful processors."
The paper goes into significant detail, but suffice it to say that Amazon developers used a lot of compression to get Alexa's relatively small AI models to work on local hardware.
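To give a rough sense of what that kind of model compression can look like in practice, here is a minimal sketch using off-the-shelf post-training quantization in PyTorch. The toy model and the technique shown are purely illustrative assumptions on my part; Amazon hasn't published the exact methods or code behind its on-device models, so don't read this as its actual pipeline.

```python
import torch
import torch.nn as nn

# A toy stand-in for a speech model. Alexa's real on-device models and
# Amazon's compression approach are not public; this only illustrates the
# general idea of shrinking a model so it fits on modest edge hardware.
model = nn.Sequential(
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)

# Post-training dynamic quantization: weights are stored as 8-bit integers
# instead of 32-bit floats, cutting memory use and speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized)  # the Linear layers are now dynamically quantized versions
```

Quantization is just one of several compression tricks (pruning and distillation are others); the point is that squeezing cloud-scale speech models onto a small smart speaker takes real engineering effort, which is exactly what Amazon's paper describes.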
It was always the cloud
In the end, the on-device audio processing was only available on those three Echo models, but there is a wrinkle here.
The specific feature Amazon is disabling, "Do Not Send Voice Recordings," never precluded your prompts from being handled in the Amazon cloud.
The processing power in these few Echos was never meant to handle the full Alexa query locally. Instead, the silicon was used to recognize the wake word ("Alexa"), record the voice prompt, transcribe that prompt to text on the device, and send the text to Amazon's cloud, where the AI acts on it and sends a response.
The local audio is then deleted.
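In code, that flow might look something like the sketch below. Every name here is hypothetical: the endpoint URL, the request format, and the helper functions are placeholders I've invented to illustrate the described behavior, not Amazon's actual API or firmware.

```python
import requests  # any HTTP client would do; this is just an illustration

# Placeholder endpoint, not a real Amazon service.
ALEXA_ENDPOINT = "https://example.invalid/alexa/query"

def detect_wake_word(audio_frame: bytes) -> bool:
    """Stub: a tiny always-on model listens for 'Alexa' in the audio stream."""
    ...

def transcribe_locally(audio: bytes) -> str:
    """Stub: the on-device speech-recognition model turns audio into text."""
    ...

def handle_request(audio: bytes) -> str:
    # 1. Transcribe the spoken prompt on the device itself.
    text = transcribe_locally(audio)

    # 2. Send only the text transcription to the cloud, where the larger
    #    models interpret the request and produce a response.
    response = requests.post(ALEXA_ENDPOINT, json={"query": text}, timeout=10)

    # 3. Discard the local audio once the text has been sent.
    del audio

    return response.json()["answer"]
```

The key point of the "Do Not Send Voice Recordings" option was step 2: text left the device, but the audio recording never did.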
Big models need cloud-based power
Granted, this is likely how everyone would want their Echo and Alexa experience to work. Amazon gets the text it needs but not the audio.
But that's not how the Alexa experience works for most Echo owners. I don't know how many people own those particular Echo models, but there are almost two dozen different Echo devices, and this affects just three of them.
Even if those are the most popular Echos, the change only affects people who dug into Alexa settings to enable "Do Not Send Voice Recordings." Most consumers are not making those kinds of adjustments.
This brings us back to why Amazon is doing this. Alexa+ is a far smarter and more powerful AI with generative, conversational capabilities. Its ability to understand your intentions may hinge not only on what you say, but also on your tone of voice.
It's true that even though your voice data will be encrypted in transit, it surely has to be decrypted in the cloud for Alexa's various models to interpret and act on it. Amazon is promising safety and security, and to be fair, when you talk to ChatGPT Voice and Gemini Live, their cloud systems are listening to your voice, too.
When we asked Amazon about the change, here's what they told us:
“The Alexa experience is designed to protect our customers’ privacy and keep their data secure, and that’s not changing. We’re focusing on the privacy tools and controls that our customers use most and work well with generative AI experiences that rely on the processing power of Amazon’s secure cloud. Customers can continue to choose from a robust set of tools and controls, including the option to not save their voice recordings at all. We’ll continue learning from customer feedback, and building privacy features on their behalf.”
For as long as the most impactful models remain too big for local hardware, this will be the reality of our generative AI experience. Amazon is simply falling into line in preparation for Alexa+.
It's not great news, but it's also not the privacy and data safety nightmare it's being made out to be.