As with everything related to the Analytical Engine, you have to consider it as a function of time. The Engine was constantly evolving.
In the early versions the store was limited in size, built around the same large circular wheels as the mill. Any transfer from one column to another left the source column at zero, and retaining a value required an explicit copying-and-restoration action.
Later, Babbage extended the store by introducing linear racks that could extend off to the side an arbitrary distance. This introduced a new problem: those racks, while unlimited in length (in principle), had a very limited amount of lateral travel. So, after one or more transfers had been made, they had to be restored to the starting position.
Babbage realized this was an opportunity: by leaving a store column engaged with the rack while it was being restored, the original value could be restored to the store column as well. This is the origin of the choice Ada had between destructive and non-destructive readout. A non-destructive read does require a long cycle, though, whereas a destructive read could be completed in a short cycle.
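The two readout modes described above can be sketched in modern terms. This is purely illustrative, not Babbage's notation; the names `StoreColumn`, `destructive_read`, and `nondestructive_read` are my own inventions. Note how the non-destructive read is literally a destructive read followed by a restoration, which is why it costs the longer cycle.

```python
# Illustrative model (my own naming, not Babbage's) of the two readout modes.

class StoreColumn:
    """A store column holding a single non-negative decimal value."""
    def __init__(self, value=0):
        self.value = value

def destructive_read(column):
    """Short cycle: the value runs off the column, leaving it at zero."""
    v = column.value
    column.value = 0
    return v

def nondestructive_read(column):
    """Long cycle: the column stays engaged with the rack while the rack
    is restored to its starting position, so the value is written back."""
    v = destructive_read(column)  # value moves out onto the rack...
    column.value = v              # ...and returns as the rack is restored
    return v

col = StoreColumn(42)
assert nondestructive_read(col) == 42 and col.value == 42  # value retained
assert destructive_read(col) == 42 and col.value == 0      # column zeroed
```

The extra write-back step is the model's analogue of the rack's return stroke: the restoration comes "for free" mechanically, but only if the column remains engaged for the whole long cycle.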
After this point in the development, it would simply have been a convention, not a requirement, that store registers be left at zero after the last use.
One horrendous practical programming problem still remains: a column must be at zero before a new value can be transferred to it. As far as I know, Babbage never hit on the modern semantics of store operations, in which the previous value is automatically erased. This may have been another incentive to leave things at zero when no longer needed: to reduce the probability of error if a column is reused later.
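A small sketch of the hazard, under the assumption (mine, not stated in the text) that a transfer advances the receiving digit wheels, i.e. adds onto whatever is already there rather than replacing it. Under that assumption, a destination that was not cleared first ends up corrupted, which is exactly why the leave-it-at-zero convention reduces errors.

```python
# Hypothetical sketch: transfers with no automatic erase of the destination.
# The additive behavior is an assumption used for illustration.

class Column:
    def __init__(self, value=0):
        self.value = value

def transfer(value, dest):
    """Transfer onto a column: the receiving wheels advance, so the value
    is added to the existing contents -- nothing is erased automatically."""
    dest.value += value

dirty = Column(7)       # a column the programmer forgot to zero
transfer(100, dirty)
print(dirty.value)      # 107 -- corrupted, not the intended 100

clean = Column(0)       # convention honored: column left at zero after last use
transfer(100, clean)
print(clean.value)      # 100 as intended
```

Modern register semantics would make `transfer` an overwrite (`dest.value = value`), which removes the precondition entirely; in its absence, zeroing-by-convention is the programmer's only defense.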