As long as no nodes fail during the backup, it should back up all records, even
if a node happens to be down at the time. If a node fails during the backup, the
backup is aborted, and this is reported to the management client.
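(For reference, the backup is started from the management client, and each data
node writes its own part of the backup under its BackupDataDir in a
BACKUP/BACKUP-<id> directory. A minimal example, run from any host that can reach
the management server:

  # start a cluster-wide online backup
  ndb_mgm -e "START BACKUP"

The backup id it reports is the one you later pass to ndb_restore.)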
So your problem sounds a bit strange. If you can recap exactly what you did,
you can report it in a bug report or to this list and we can check whether you
used the right procedure.
When restoring you need to use the files from all nodes that were up at the time
of the backup.
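Roughly, that means running ndb_restore once per data node that took part in the
backup, pointing it at that node's own backup files. The node ids, backup id and
path below are only examples for a two-data-node setup; -m restores the table
metadata and should only be given for the first node:

  # restore metadata (-m) and data (-r) from data node 2's files
  ndb_restore -n 2 -b 1 -m -r --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
  # restore data only from data node 3's files
  ndb_restore -n 3 -b 1 -r --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1

If you only restore from one node's files, you only get the table fragments that
node backed up, which would explain ending up with roughly half the records.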
The other comments are also relevant: a backup is only a last resort. First try
node recovery (which means no downtime), or a system restart if the whole cluster
failed. Only after that is it necessary to restore from a backup.
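For the case you describe (one data node crashed while the other stayed up), node
recovery normally just means restarting the failed ndbd without --initial; it
rejoins and copies its data back from the live node in its node group, so no
backup or restore is needed. For example (the management server host is just an
example):

  # on the host of the crashed data node
  ndbd --ndb-connectstring=mgm_host:1186

Starting a node with --initial wipes its local data files first, and doing that
on both data nodes at once, as in the quoted mail below, leaves an empty cluster
that then has to be refilled entirely from the backup.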
Rgrds Mikael
jhay viray wrote:
> Just wondering ...
>
> We have 6 nodes in our setup, 2 MGM, 2 Data Nodes
> and 2 MySQL, then one data node crashed, so
> we quickly backed up our cluster to save our data.
> We restarted all nodes in the usual manner: first
> the MGM, then the Data Nodes, and afterwards the SQL nodes,
> but we issued the --initial option to both Data
> Nodes because we just wanted to restore our backup.
> But when we restored it and checked the record
> count, some of the records were not restored; we
> lost about half the original count. Why is that
> happening? Is there something we missed? When we
> take the backup with all the nodes running and restore
> the backup on both Data nodes, it seems we get the
> right count, but when one node went down some
> records were not restored. Do you have any
> procedure for backup/restore when one data node
> has gone down? Please help.
>
>
> -jhay
Mikael Ronstrom
Senior Software Architect, MySQL AB
My blog:
http://mikaelronstrom.blogspot.com