Clingingsmith: I have a table with about a million rows that I’m working on; a single indexing mistake alone would cost a lot of time

Varieur: Curmet: then test on your dev environment

Mandaloniz: Varieur: sure, do you know of any index suggestion tool other than the EXPLAIN command? It doesn’t matter if it’s proprietary
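
For context: third-party analyzers exist (pt-query-digest from Percona Toolkit, for example, summarizes the slow log and ranks the worst queries, though it does not propose indexes directly). Staying in plain SQL, EXPLAIN FORMAT=JSON (MySQL 5.6+) already exposes the optimizer’s cost estimates and index choice. A minimal sketch, with the table and column names assumed:

    EXPLAIN FORMAT=JSON
    SELECT * FROM bookinfo WHERE isbn = '0906048826';
    -- the JSON output includes cost_info and which key (index), if any, the optimizer picked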

Zumbach: Have any of you built a schema with Contacts or “Customers”, Persons, Companies, ContactInfo and such? Care to share the model for a peek?

Milling: Regedit: look up any open source CRM – sugarcrm perhaps

Hevrin: Dw1: please hear me out. I am not the one who sent all that stuff in the logs you sent me in PM

Kosky: Why did you assume that, and how do I clear up this confusion?

Paongo: Dw1: I tried PM and MemoServ, but when you assumed I was that troll, you set +g and blocked memos, so I had to find another channel you are on

Carvajal: Dw1 clearly put me on /ignore

Fitzmaurice: Can someone tell dw1 to hear me out, and that I am not who he said I am

Pasket: What causes slow queries? I’m looking at the slow log, and it’s not giving me anything good
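
For context, a minimal sketch of turning on and tuning the slow query log at runtime; the one-second threshold is only an illustrative value:

    SET GLOBAL slow_query_log = 1;            -- enable the slow query log
    SET GLOBAL long_query_time = 1;           -- log statements that take longer than 1 second
    SHOW VARIABLES LIKE 'slow_query_log%';    -- confirm it is on and where the file is written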

Martinov: Your mom causes slow queries

Filipek: Jeeves_Moss, lack of a good index, bad sql

Verzi: Nothing else can cause slow queries besides a flood of internet traffic when your mom is e-shopping for *****s

Coressel: Filipek, I think *****y hardware

Flygare: Jeeves_Moss, is the query slow if you run it now? Only SELECTs please, do not test this way with UPDATE/DELETE

Mcmellen: This is what I get. http://pastebin.com/ayAYtNE3

Meyerhofer: And I have the cache set to twice the db size

Holsman: Jeeves_Moss, “bad sql” – “. limit 21961390,1”

Mascarena: Query cache? Then that’s one more reason for the slowness
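
A quick way to see whether the query cache is helping or hurting, on MySQL versions that still have it (it was removed in 8.0); the variable and counter names are the standard ones:

    SHOW VARIABLES LIKE 'query_cache%';   -- size and type of the query cache
    SHOW STATUS LIKE 'Qcache%';           -- compare Qcache_hits with Qcache_inserts and Qcache_lowmem_prunes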

Filipek: Jeeves_Moss, blaming hardware is a noob response

Filipek: Rows_examined: 11748580 is an obvious error in the sql

Lotspeich: Ok, so what should be put in there? There are 13 MILLION records.

Milbradt: My idea, being the non-programmer/systems guy, was to break the dataset into smaller chunks.

Barthe: Jkavalik, sorry, should have tagged you

Filipek: Learn how to look up using an index

Mcelhiney: Filipek, as I said, I’m a systems guy. what is that in “normal” speak? do you have a URL I can reference? the index is the ISBN #

Filipek: The major problem with OFFSET for pagination is that LIMIT has to read all the rows just to get to the offset and then select up to the limit. With WHERE id > $last_max_id LIMIT x used within the app, it only has to scan/order the rows beyond the ids already passed.
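
A sketch of the two styles being compared; bookinfo and id are assumed names, and the offset value is the one from the pasted query:

    -- offset pagination: the server reads and discards 21,961,390 rows before returning one
    SELECT * FROM bookinfo ORDER BY id LIMIT 21961390, 1;

    -- keyset pagination: the app keeps the last id it returned ($last_max_id) and seeks past it via the index
    SELECT * FROM bookinfo WHERE id > 12345 ORDER BY id LIMIT 1;   -- 12345 stands in for $last_max_id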

Risbeck: Hmmm, so what is the correct way to do it then, if the ISBN #s are the index?

Youssefi: I thought if we tagged the ISBN as the index, that was the same thing

Kutt: The ISBN column is the index

Filipek: What are you really trying to do?

Filipek: Select data from bookinfo where isbn=0906048826; can use an index on isbn

Filipek: All depends how you store your isbn though

Filipek: As it is really a string

Filipek: And worse, not unique
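
A sketch of what that looks like, given the points above (string column, not unique); the index name and column type are assumed:

    ALTER TABLE bookinfo ADD INDEX idx_isbn (isbn);   -- a plain, non-unique index, since ISBNs can repeat
    -- quote the value: comparing a string column against a bare number forces a conversion
    -- on every row, and the index cannot be used
    SELECT * FROM bookinfo WHERE isbn = '0906048826';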

Dambrozio: Jeeves_Moss: an index in this context is a storage structure built for quick searching

Dambrozio: Think about a phone directory, where the phone numbers are listed next to persons’ names in alphabetical order for faster searching

Islas: Sorry guys, Windows 10 won’t pop up a notification that I have messages in this channel

Krummel: The ISBNs are chars I think, since they contain ‘-’ and ‘x’

Dambrozio: If you create an index on that column then the SQL server can make a fast equality comparison

Dambrozio: But to continue the phone directory analogy, if it has none then it must read through the whole dataset. Like if I ask you to find the phone numbers of all the persons whose first name is ‘Jack’
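
The difference shows up directly in EXPLAIN, assuming the bookinfo table and an idx_isbn index as sketched above:

    EXPLAIN SELECT * FROM bookinfo WHERE isbn = '0906048826';
    -- without an index on isbn: type: ALL, key: NULL   (every row is read, like scanning the whole directory)
    -- with idx_isbn in place:   type: ref, key: idx_isbn   (only the matching entries are touched)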