
Necessary: Lock time should be visible in the slow query log

Correl: Bythebeach: You mentioned “during times of high iowait”

Lafrancois: Yeah. iowait caused by shared disk

Correl: Bythebeach: So you already know the problem is outside of MySQL

Manca: Yeah. I’m hoping to maybe reduce the impact of high iowait by allocating more RAM

Caro: I should use a real discussion client, too.

Neiling: If I have a whole bunch of IDs. say like 1000 or so, and I want to get the data for all the rows that match them- what’s the best way to do that?

Policicchio: Dmko: where do you have those IDs? printed on paper?

Danfield: Dmko: create a table of IDs and join to the table with the data

Leer: No_gravity: programmatically, hehe

Curney: No_gravity: in this case, Go

Frayre: Danfield: is that better than a massive “WHERE ID=a OR ID=b OR …”?

Sibounma: Also better than “IN”?

Lollar: Isn’t creating a new table for every request kinda wasteful? I’ve never done that before. how do you set it up so it’s also automatically destroyed afterwards?

Rickie: Dmko: i dont know GO but in PHP you could do something like this: "SELECT * FROM T WHERE id IN (" . implode(',', $myIds) . ")"

Danfield: Temp tables are connection specific

Dempewolf: No_gravity: Danfield just said to avoid “IN”

Danfield: Dmko: benchmark your possible solutions

Younglove: Do you really need to fetch 1000 rows for every request?

Danfield: Dmko: no need to guess

Wehmeyer: Dmko: i do the IN thing and i am still alive.
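The two approaches being compared can be sketched roughly like this (a sketch only; `all_tbl` and its `id` column are assumed names, and actual performance should be benchmarked as suggested above):

```sql
-- 1) The IN-list approach (fine for small sets; benchmark for ~1000 ids):
SELECT id, ean, sku FROM all_tbl WHERE id IN (1, 2, 3);

-- 2) The temporary-table-plus-join approach. Temporary tables are
--    connection-specific and dropped automatically when the connection
--    closes, so no manual cleanup is needed:
CREATE TEMPORARY TABLE wanted_ids (id INT PRIMARY KEY);
INSERT INTO wanted_ids (id) VALUES (1), (2), (3);
SELECT t.id, t.ean, t.sku
FROM all_tbl t
JOIN wanted_ids w ON w.id = t.id;
```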

Camfield: Where do you get these ids from?

Brocklesby: Naktibalda: no. in most cases it would actually probably be very few, ~10. but I need it to also handle a case of many, even if rare

Freier: No_gravity: good to know, I can build like this for now and benchmark/optimize later when there’s less pressure : thanks all!

Pesiri: Usually “when there’s less pressure” never comes.

Mauson: Usually refactoring happens when the pressure gets to new highs, but the codebase is such a mess that there is just no way forward anymore.

Safier: I’m thinking for my problem– bump VM RAM to 4GB, set innodb_buffer_pool_size to 1.6GB, and turn off flush on txn commit.

Balduf: Per what i’ve calculated based on formulas from stackoverflow, that is sufficient to cache the entire database so reads don’t hit disk at all once things warm up, and not flushing on commit should prevent updates/inserts from being blocked.

Fam: Does that logic seem sound?
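The plan above could be sketched as a my.cnf fragment like the following (a sketch, not a vetted config; the values come from the discussion, and the trade-off noted in the comments applies):

```ini
[mysqld]
# ~1.6GB, sized to cache the whole dataset in the buffer pool
innodb_buffer_pool_size = 1600M
# 2 = write the redo log on commit but fsync only ~once per second,
# so an OS/VM crash can lose up to ~1s of committed transactions
innodb_flush_log_at_trx_commit = 2
```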

Deguzman: Am I right in thinking that this will look for an EAN that is 13 digits long:

Allsbrook: Select ean from all_tbl where CHAR_LENGTH(ean) = 13;
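Yes, that query matches 13-character EANs. One caveat worth knowing: CHAR_LENGTH() counts characters while LENGTH() counts bytes, and the two differ for multi-byte encodings such as utf8. For digit-only EANs they give the same answer, but CHAR_LENGTH() is the safer habit:

```sql
-- Counts characters, not bytes; correct even for multi-byte charsets:
SELECT ean FROM all_tbl WHERE CHAR_LENGTH(ean) = 13;
```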

Busico: Bythebeach: its not even clear that the problem is inside of the db

Muth: Bythebeach: unless you figure out what is slow and why, you will have a hard time optimizing.

Heddleson: The problem isn’t inside the DB at all. i’m trying to compensate for poor disk by tuning it to use more RAM

Knezevich: Unfortunately there is no way for me to fix the slow HW; at best, i can play the cheap VPS lottery and set up another VM and hope it doesn’t have unpredictable disk i/o latency

Hazel: Hmm. to minimize downtime I can spin up another VM with lots of RAM and run the database from it. but the connection would need to be secured. using TLS would be dependent on the application supporting it for the mysql client.

Lauters: I have a table with three columns, id, ean and sku. I’m trying to find all the rows that don’t have anything in the sku col. Could someone explain to me why this doesn’t find any:

Willette: Bythebeach: you could create an ssh tunnel between the two hosts and connect from your application to its local endpoint
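A hypothetical tunnel invocation for the setup Willette describes (`user`, `db-host`, and the local port 3307 are placeholders):

```shell
# Forward local port 3307 to MySQL (3306) on the remote DB host over ssh;
# the application then connects to 127.0.0.1:3307 instead of the remote host.
ssh -f -N -L 3307:127.0.0.1:3306 user@db-host
```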

Chappell: Select id,ean,sku from all_tbl where LENGTH(sku) < 1;

Wolery: Select id,ean,sku from all_tbl where LENGTH(sku) < 2;
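The likely reason neither query finds anything: if the empty sku values are NULL rather than empty strings, LENGTH(NULL) is NULL, and comparing NULL with anything yields NULL, which the WHERE clause treats as not-true, so those rows never match. A query that covers both cases (assuming the same `all_tbl` table):

```sql
-- Matches rows where sku is NULL or an empty string:
SELECT id, ean, sku
FROM all_tbl
WHERE sku IS NULL OR sku = '';
```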