This is correct. There is a --debug option which may well give you the information you seek; however, some distributions ship with the debug option disabled. You can check this by running
$ mysqldump --help | grep debug
If it's disabled in your distribution, you will have to download and build from source.
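If your build does have it enabled, --debug takes a DBUG control string. A minimal sketch follows; the user and database names are placeholders, and the command is guarded so it is a no-op on machines without mysqldump installed:

```shell
# DBUG control string: d = enable debug output, t = trace function
# calls, o,<file> = write the trace to that file.
TRACE_OPTS="d:t:o,/tmp/mysqldump.trace"

# backup_user / mydb are placeholder names.
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --debug="$TRACE_OPTS" -u backup_user -p mydb > /dev/null
fi
```

Afterwards, inspect /tmp/mysqldump.trace to see what the client was doing.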
By default, mysqldump uses --lock-tables, which locks each table only whilst that table is being dumped. Alternatively one can use --lock-all-tables, which gives the same behaviour as the mysqlhotcopy utility: every table is locked from the outset. If using this option, then of course the total duration of the lock can be measured by prefixing the command with time.
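A sketch of the two invocations (credentials and database name are placeholders, and execution is guarded so the sketch is a no-op where mysqldump isn't installed):

```shell
# Default behaviour: lock each table only while it is being dumped.
PER_TABLE_DUMP="mysqldump --lock-tables -u backup_user mydb"

# Lock every table in the dumped database for the whole run,
# as mysqlhotcopy would.
ALL_TABLES_DUMP="mysqldump --lock-all-tables -u backup_user mydb"

if command -v mysqldump >/dev/null 2>&1; then
    $ALL_TABLES_DUMP > consistent_dump.sql
fi
```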
If you're using mysqldump for backup purposes and you're finding that the table locking is causing problems, then you need to find another method altogether.
Take a look at Percona XtraBackup: it's a free product that can do live non-blocking
backups, allows you to do point-in-time recovery, etc. All in all a much better solution than
plain mysqldump for this.
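A sketch of a basic XtraBackup run (the target directory is a placeholder, and newer releases use the xtrabackup binary; older ones shipped innobackupex instead):

```shell
# Where to put the physical backup (placeholder path).
TARGET_DIR="/data/backups/$(date +%F)"

if command -v xtrabackup >/dev/null 2>&1; then
    # Take a live, non-blocking copy of the data files...
    xtrabackup --backup --target-dir="$TARGET_DIR"
    # ...then make the copy consistent before any restore.
    xtrabackup --prepare --target-dir="$TARGET_DIR"
fi
```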
Alternatively, use a file system which supports snapshots and back up a snapshot of your DB.
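For example, with LVM the idea looks roughly like this. The volume group and logical volume names are placeholders, the snapshot must be taken while writes are quiesced (e.g. FLUSH TABLES WITH READ LOCK held in an open client session), and the commands are guarded so this is a no-op on machines without LVM:

```shell
# Assumed layout (placeholders): volume group vg0 holds the MySQL
# datadir on logical volume "mysql".
SNAP_DEV="/dev/vg0/mysql_snap"

if command -v lvcreate >/dev/null 2>&1; then
    # Snapshot the data volume (writes should already be quiesced).
    lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql
    # Mount the snapshot and copy the files off, e.g.:
    # mount "$SNAP_DEV" /mnt/snap && rsync -a /mnt/snap/ /backup/
    # Then release it:
    lvremove -f "$SNAP_DEV"
fi
```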
As mysqldump does its thing, it locks each table in turn for reading and then unlocks it.
One way to find out how long the run takes is to prefix it: "time mysqldump ...". Make sure you
supply the password on the command line, otherwise the time spent typing it at the prompt
will be counted too.
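A sketch of the timed run (credentials and database are placeholders; putting the password directly after -p, with no space, skips the interactive prompt):

```shell
# Placeholder credentials and database name.
TIMED_DUMP="mysqldump -u backup_user -psecret mydb"

if command -v mysqldump >/dev/null 2>&1; then
    time $TIMED_DUMP > /dev/null
fi
```

Note that a password on the command line is visible to other users via ps; a [client] section in ~/.my.cnf avoids both the prompt and that exposure.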
There is the --quick/-q flag for mysqldump; I believe whether you see a performance
increase will depend on the size of the tables.
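A sketch of the invocation (placeholder credentials and database):

```shell
# --quick streams rows straight to the output instead of buffering each
# whole table in client memory; it helps most on very large tables.
QUICK_DUMP="mysqldump --quick -u backup_user mydb"

if command -v mysqldump >/dev/null 2>&1; then
    $QUICK_DUMP > big_tables.sql
fi
```

Worth knowing: in current versions --quick is already enabled by default as part of --opt, so spelling it out mainly matters if the defaults have been switched off (e.g. with --skip-opt).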
If you need to do a mysqldump without the locks affecting clients, you would really need to set
replication up and run mysqldump against the slave server. Inserts/updates from the master
would then queue up while the slave's tables are locked, and be applied once the lock is removed.
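A sketch of that approach, assuming a replica at a placeholder host "replica-host": stopping only the SQL thread means incoming events from the master still accumulate in the relay log and apply when the thread restarts.

```shell
# Placeholder replica hostname.
REPLICA="replica-host"

if command -v mysql >/dev/null 2>&1; then
    # Freeze the replica's view of the data; relay log keeps filling.
    mysql -h "$REPLICA" -e "STOP SLAVE SQL_THREAD;"
    mysqldump -h "$REPLICA" -u backup_user --all-databases > full_backup.sql
    # Resume applying the queued events.
    mysql -h "$REPLICA" -e "START SLAVE SQL_THREAD;"
fi
```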