SQL Error - and what I did seems to have fixed it?

First time setting up piwik. We have 16 web servers in a cluster we need to track stats on.

First task: get 2014 data into the system. Because ‘mine not to reason why’.

Extracted logs from backup to /var/log/2014/$servername

Ran this command


/path/to/stats/log-analytics/import_logs.py --token-auth=<API from server> /var/log/2014/$servername --url=myurl.com --idsite=1 --recorders=16 --enable-http-errors --enable-http-redirects --enable-static --enable-bots

It chugged along for a while, then halted:


Fatal error: HTTP Error 500 Internal Server Error, response: {"status":"error","tracked":0}
You can restart the import of "/var/log/2014/$servername//blah.gz" from the point it failed by specifying --skip=18901 on the command line.

At the same time this was written to httpd’s error log


[error] [client 10.210.2.26] Error in Piwik (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
In query: UPDATE piwik_log_visit SET idvisitor = ?, visit_total_time = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?, visit_total_actions = visit_total_actions + 1, custom_var_k1 = ?, custom_var_v1 = ? WHERE idsite = ? AND idvisit = ?
Parameters: array (
  0 => 'J\x96F\x86\xe2\xa8pa',
  1 => 8554,
  2 => '2014-01-11 11:59:09',
  3 => 230633,
  4 => 'Bot',
  5 => 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
  6 => 1,
  7 => 22530,
)

This happened pretty frequently.

Following advice from a Stack Overflow answer, I set these values in MySQL:


mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)
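
If this turns out to be the real fix, I assume I would also need to make it permanent in the MySQL config so it survives a server restart (not tested yet; on newer MySQL versions the variable is transaction_isolation rather than tx_isolation):


# /etc/my.cnf -- set the default isolation level at server startup
[mysqld]
transaction-isolation = READ-COMMITTED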

And restarted the import. That was an hour and twenty minutes ago; it has not failed yet, although it has still written that error to the log.

Comments? Am I missing a 'how to make MySQL behave better with large sets of data' guide?

Hi there,

This bug/issue is discussed in: Tracker: Serialization failure: 1213 Deadlock found when trying to get lock · Issue #6398 · matomo-org/piwik · GitHub - feel free to leave a comment there

I think even if you are using the newest version (2.14.3), this problem will still occur.
In particular, when you add "--replay-tracking", import_logs.py does not work at all because of a problem with MySQL's bulk inserts. If you add the option "--recorder-max-payload-size=1" together with "--replay-tracking" to work around the bulk-insert trouble, the failures may become less frequent, but unfortunately the problem will still appear.

But please use the BulkTracking plugin; that will solve this problem.
I activated the BulkTracking plugin and used import_logs.py with the options "--replay-tracking" and "--recorder-max-payload-size=200", and it worked well.
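
If you prefer the command line, I believe the plugin can be activated with Piwik's console tool, assuming your version ships it (otherwise activate it under Administration > Plugins in the UI):


# run from the Piwik installation directory
./console plugin:activate BulkTracking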

I had imported 30,000 lines with import_logs.py (2.14.3) and the --replay-tracking option,
and I found a pattern in import_logs.py's behaviour.
If you use the "--recorders" option with any value other than 1 (for example 2, 3, 4, 5, 6, ...), you still get deadlock errors.
Please use "--recorders=1" and give "--recorder-max-payload-size" a somewhat larger value so the import still finishes quickly, e.g. --recorder-max-payload-size=200, --recorder-max-payload-size=500, or --recorder-max-payload-size=1000.

Of course, do not forget to use the BulkTracking plugin.
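
For example, applied to the command from the first post, the adjusted invocation might look like this; the path, URL and token placeholders are carried over from above, and you would add --replay-tracking only if your logs are recorded Piwik tracker requests rather than plain access logs:


# single recorder, larger payload per request, with the BulkTracking plugin enabled on the server
/path/to/stats/log-analytics/import_logs.py --token-auth=<API from server> --url=myurl.com --idsite=1 --recorders=1 --recorder-max-payload-size=200 --enable-http-errors --enable-http-redirects --enable-static --enable-bots /var/log/2014/$servername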