Keiter: yes. good point.

Dalby: And when I enter my root password, I STILL get an error saying “Too many connections”?

Puglisi: Because some other super user is using your extra connection.

Kuklenski: Danblack: thanks, i’ll see which one it is
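
A quick sketch of how to look for the culprit, using standard MySQL statements. (The “extra connection” mentioned above is real: MySQL permits max_connections + 1 connections, with the last slot reserved for an account that has the SUPER privilege.)

    SHOW VARIABLES LIKE 'max_connections';   -- the configured ceiling
    SHOW STATUS LIKE 'Threads_connected';    -- connections currently open
    SHOW PROCESSLIST;                        -- who holds them and what they are running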

Jamie: Ok, I’m a bit of a database noob. What I have is a bunch of .csv files that have two columns, EAN/barcode and title

Athey: I’ve made a table and set the EAN column to be the primary key
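
For reference, a minimal sketch of such a table, reusing the ean_tbl name that appears later in the log; the column types are assumptions, not from the chat:

    CREATE TABLE ean_tbl (
        ean   CHAR(13)     NOT NULL,  -- EAN-13 barcode
        title VARCHAR(255) NOT NULL,
        PRIMARY KEY (ean)
    );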

Accola: What I want to do is import these CSV files into the database, which I can do but I’ve run into a small problem

Moine: Bantalon: that’s aside from the problem that you’ve run out of max_connections and need to fix it so you don’t, which sometimes means fixing queries rather than just upping max_connections.

Keiter: It’s terrible when you can’t even remain stable enough for a few minutes to see the answers to your questions

Sarnacki: Because in some of the files I have duplicated EANs, which is bad as it’s set as the primary key. So I guess what happens is that when I import a CSV that already has an EAN in the database it’ll fail

Keiter: Sonny_Jim: don’t set the PK first then. You can delete duplicates after

Ducci: Sure, so create a table with ID, EAN, Title and use ID as the key and autoincrementing?
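
A sketch of that variant, with a surrogate key so duplicate EANs can coexist during the import (the ean_import name and the column types are illustrative):

    CREATE TABLE ean_import (
        id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
        ean   CHAR(13)     NOT NULL,  -- deliberately not unique here
        title VARCHAR(255) NOT NULL,
        PRIMARY KEY (id)
    );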

Goetzke: Sonny_Jim: how are you importing? And do you want the new files to overwrite the previous EAN or be ignored?

Diepenbrock: Well, we have a problem where I work where some bright fool decided the best way to handle EANs was to make a series of Excel files

Mcpartland: Consequently, I have a list of EANs that we’ve been issued and a list of EANs+Titles that they were assigned to

Keiter: Sonny_Jim: you don’t need another column – just don’t add the constraint until you’ve inserted all rows and deleted/renamed dupes

Fraioli: Or INSERT IGNORE instead of INSERT, or REPLACE INTO.
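
The difference between the two matters for this case; a sketch with illustrative values:

    -- INSERT IGNORE keeps the row already in the table and silently skips the new one:
    INSERT IGNORE INTO ean_tbl (ean, title) VALUES ('4006381333931', 'Some title');

    -- REPLACE deletes the existing row with that key and inserts the new one:
    REPLACE INTO ean_tbl (ean, title) VALUES ('4006381333931', 'Some title');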

Mccarney: Sure, I’m using a command line import at the moment

Ferrando: LOAD DATA LOCAL INFILE '/mnt/data/General/barcodes2.csv' INTO TABLE ean_tbl FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' (ean, sku);

Keiter: Sonny_Jim: I would use my approach

Hardester: So make a table with two columns, EAN and SKU but not key?

Ahle: Import all the files, strip out the duplicates, then import the ‘clean’ data into another table?

Keiter: You may have to add an auto-increment column to delete the dupes, but you can remove it after.
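
A sketch of that sequence, assuming the rows were loaded into ean_tbl with no key, as suggested above:

    -- Temporary surrogate key so duplicate rows can be told apart:
    ALTER TABLE ean_tbl ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

    -- Keep the lowest id for each EAN, delete the rest:
    DELETE a FROM ean_tbl a
    JOIN ean_tbl b ON a.ean = b.ean AND a.id > b.id;

    -- Drop the helper column (its index goes with it), then add the real key:
    ALTER TABLE ean_tbl DROP COLUMN id;
    ALTER TABLE ean_tbl ADD PRIMARY KEY (ean);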

Blott: Is it a bad idea to use the EAN as the key?

Lolagne: Load data also has replace/ignore options
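
The keyword goes between the file name and INTO TABLE, e.g. with the same file and columns as above:

    LOAD DATA LOCAL INFILE '/mnt/data/General/barcodes2.csv'
    IGNORE INTO TABLE ean_tbl
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (ean, sku);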

Keiter: Sonny_Jim: just delete the dupes from the table

Ziter: My feeling was that we don’t want to get into the situation we are in where we are listing two products with the same EAN

Keiter: Danblack: well, the question is, do you trust the replace / ignore behaviour to pick the right row?

Keiter: Danblack: I wouldn’t, in this case.

Dabdoub: Danblack: yeah that’s what I’m trying to debug

Stanley: Cool, thanks guys I’ll give it a whirl

Hazelrigg: Want to see what kind of query is stuck in the list causing the excessive connections, or if there are none at all and there’s just some error in my code where I forgot to close the connection somewhere, etc.

Swed: Sonny_Jim, you will need to check whether the duplicates are the same product or whether one EAN was really used for multiple products
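
One way to make that check, assuming the rows sit in a keyless staging table with ean and title columns as discussed earlier:

    -- distinct_titles = 1: the same product was simply imported twice (safe to dedupe);
    -- distinct_titles > 1: one EAN was genuinely used for several products.
    SELECT ean,
           COUNT(*)              AS occurrences,
           COUNT(DISTINCT title) AS distinct_titles
    FROM   ean_tbl
    GROUP  BY ean
    HAVING COUNT(*) > 1;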

Uran: I think I found the issue though

Saldibar: Will have to wait and see if there is an effect or if the issue occurs again.

Keiter: Swed: replace/ignore wouldn’t do that well, or at least not as well as a human eye

Kurpiel: Bantalon: enable slow query log.
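
Enabling it at runtime, with an assumed one-second threshold:

    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;             -- log anything slower than 1 second
    SHOW VARIABLES LIKE 'slow_query_log_file';  -- where the log is written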

Unnewehr: Keiter: yes. good point.