I want to see that the.

Potestio: It might take longer to show the rows than the query :P

Decoux: Scott0_: you must have a fast disk

Drong: I hope that goes as well on production

Atamanczyk: Decoux: could it have been cached by me or something?

Boggus: Can I clear out the cache?

Decoux: Scott0_: SELECT SQL_NO_CACHE

Decoux: Scott0_: no, with the rest of the SQL.
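For reference, SQL_NO_CACHE goes right after the SELECT keyword in the full statement. A minimal sketch, assuming a placeholder table and columns:

    SELECT SQL_NO_CACHE col_a, col_b
    FROM some_table              -- hypothetical table name
    WHERE col_a = 'value';       -- tells MySQL (pre-8.0) to skip the query cache for this run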

Accardo: I don’t understand this now

Coskey: Decoux: thanks for the help

Nyhus: But I think my query is bad

Sturz: The final number should not be 5 million rows

Hussain: https://dev.mysql.com/doc/refman/5.1/en/optimizing-innodb-bulk-data-loading.html

Gidwani: Did you just commit a url?

Hochadel: Do I have to change the autocommit value at the end of my import?

Decoux: I think he committed a crime.

Culler: No, I am fighting with 500,000 INSERTs

Mizrahi: At the moment MySQL inserts one record at a time.

Decoux: Phpcoder: your enter key seems to be blazing fast, however.

Linder: I would like to find a faster way

Gidwani: Set autocommit actually does an implicit commit

Broxterman: Gidwani, if I set autocommit to zero I must write COMMIT; at the end of my file

Gidwani: You can also do an explicit begin;
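A minimal sketch of the two approaches being discussed, assuming a placeholder table t(id, name):

    -- Option 1: disable autocommit for the session and commit once at the end
    SET autocommit = 0;
    INSERT INTO t (id, name) VALUES (1, 'a');
    INSERT INTO t (id, name) VALUES (2, 'b');
    COMMIT;

    -- Option 2: leave autocommit on and wrap the inserts in an explicit transaction
    BEGIN;
    INSERT INTO t (id, name) VALUES (3, 'c');
    COMMIT;

Note that, as mentioned above, issuing SET autocommit while a transaction is open implicitly commits it.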

Sweet: My problem is: if I set autocommit = 0 and COMMIT; at the end, can MySQL deal with 500k records?

Gidwani: Begin; insert; insert; insert; commit

Schneeman: Or do I have to write COMMIT; every N records, 10,000 for example?

Gidwani: You probably don’t want to do 500,000 records in a single transaction

Richins: Gidwani, OK, so I can write BEGIN; 10,000 inserts; COMMIT; then BEGIN; etc., right?

Gidwani: You should also use the multi-value insert syntax
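Combining the two suggestions, batching the load into moderate transactions and using the multi-row INSERT syntax, might look roughly like this (the table, columns, and 10,000-row batch size are illustrative):

    BEGIN;
    INSERT INTO t (id, name) VALUES
      (1, 'a'),
      (2, 'b'),
      (3, 'c');                  -- one statement carrying many rows
    -- ... more multi-row INSERTs until roughly 10,000 rows ...
    COMMIT;

    BEGIN;
    -- next batch of roughly 10,000 rows
    COMMIT;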

Muehl: Gidwani, I do not understand one thing: if you look at SET foreign_key_checks=0; why is it set back to 1 at the end?

Davisson: If I change something during the import, do I have to reset it to the original values?

Gidwani: Only if you want to do other things in the session

Gidwani: Mysqldump produces a dump file that will reset variables that it changes

Dorff: Ah OK, at the end of the session all the settings will be restored, OK

Nevil: And an index will work if you search by partial columns in the right order, right?

Wasowski: Index on a,b,c and the query is for a,b?
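A small sketch of the leftmost-prefix behaviour being asked about, with made-up table and index names: an index on (a, b, c) can serve a query that filters on a and b, but not one that filters only on b and c.

    CREATE TABLE t (a INT, b INT, c INT, INDEX idx_abc (a, b, c));

    -- can use idx_abc: the filter covers the leftmost columns a, b
    SELECT * FROM t WHERE a = 1 AND b = 2;

    -- cannot use idx_abc efficiently: the leading column a is skipped
    SELECT * FROM t WHERE b = 2 AND c = 3;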

Cypher: Gidwani, OK so one thing is to write BEGIN/COMMIT. Then? SET unique_checks=0;?

Bandyk: SET foreign_key_checks=0;

Glauberman: Can I set those things at the beginning?

Mcpeck: Decoux: was able to get the query down to 1M rows on the EXPLAIN

Degenfelder: Gidwani, can I write the settings like SET foreign_key_checks=0; SET unique_checks=0; on a single line?

Jemison: At the beginning? Or must every option be set on its own line?

Ruszala: OK, so I think there is no problem
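Putting the whole thread together, the top and bottom of the import file might look something like this; the table, values, and the 10,000-row batch boundary are placeholders, and the InnoDB bulk-loading page linked above describes the same pattern:

    -- at the top of the file
    SET autocommit = 0;
    SET unique_checks = 0;
    SET foreign_key_checks = 0;

    -- batches of multi-row INSERTs, committing every ~10,000 rows
    INSERT INTO t (id, name) VALUES (1, 'a'), (2, 'b');
    COMMIT;
    -- ... further batches ...

    -- at the bottom of the file, restore the settings for the rest of the session
    SET foreign_key_checks = 1;
    SET unique_checks = 1;
    SET autocommit = 1;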

Amison: Decoux: 6 min 26.87 sec for 15 rows

Rayne: That might be tolerable

Felks: Yeah I should have about 6x that

Chander: Gidwani, 😮 😮 😮 😮 😮 5.9 seconds to insert 358k records!

Fausto: I should have just said “hi, Decoux”

Fausto: Sorry, I switched nicks and I couldn’t tell if the client had . never mind

Fausto: I should have just poked chanserv or something

Fausto: This was the only channel with a bot I was connected to that I could think of offhand

Bagg: Decoux, to update the timestamp field

Decoux: But the value is already the same.

Uyetake: I want to see that the record was processed, even if the same