
Helmsing: 2010-01-01 11:00:00 would be the UTC value in that case.

Helmsing: If the offset varies from value to value, you have two choices:

Helmsing: 1) convert it to UTC before storing it (add the offset to the date/time value), or 2) store the local time and apply the UTC offset from the other column with INTERVAL when selecting the data.

Helmsing: Option 2 would make it a PITA if you want to display multiple rows with different offsets
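A minimal sketch of both options, assuming a hypothetical `events` table with a DATETIME column and an integer offset column (none of these names are from the discussion):

```sql
-- Hypothetical table: events (dt DATETIME, utc_offset_hours INT)

-- Option 1: convert to UTC before storing (here a local time at UTC+01:00)
INSERT INTO events (dt, utc_offset_hours)
VALUES (CONVERT_TZ('2010-01-01 12:00:00', '+01:00', '+00:00'), 1);

-- Option 2: store the local time, apply the offset with INTERVAL on SELECT
SELECT dt - INTERVAL utc_offset_hours HOUR AS utc_dt
FROM events;
```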

Baird: Alright, I’ll keep that in mind if it becomes an issue. Thanks

Bednorz: Hi, how do I qualify a table name that has a dash in it?

Schepens: Nm, I got it: backticks.

Kanno: Or don’t do it, ever :-)
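For the record, a quick sketch of the backtick quoting (the table name here is made up):

```sql
-- Backticks let an identifier contain characters like a dash
SELECT * FROM `my-table`;
```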

Cassara: I’ve installed the auth_pam_compat plugin but it’s not working with a proxy user unless I restart mysql. Is there any way to flush/reload the plugins without having to restart mysql?
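One possible approach, sketched with the Percona-style plugin name and shared-object file as assumptions: plugins can generally be unloaded and reloaded at runtime, though an auth plugin that is currently in use may refuse to unload.

```sql
UNINSTALL PLUGIN auth_pam_compat;
INSTALL PLUGIN auth_pam_compat SONAME 'auth_pam_compat.so';
```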

Ngyun: I have a database with about 20 million rows. Groups of these rows (about 500 per group) share an identical string. I want to query this table against that string, to get all the rows that have it. My problem is that doing it this way is insanely slow (7-second queries). If, alternatively, I was to serialize all the data for each group and have only 1 row per group, then I could make that string a unique

Magyar: key, and then the queries would be super fast (0.004 seconds or whatever). But if the data is serialized, I lose my ability to do relational things (JOINs, etc.). Is there some way to get around this?

Yost: Sorry it probably sounds like a dumb question.

Lobello: Deweydb: Please paste your query, the EXPLAIN SELECT output, and the relevant SHOW CREATE TABLEs in one pastebin, with the SQL formatted so it is readable

Whitton: Deweydb: http://dev.mysql.com/doc/refman/5.6/en/execution-plan-information.html and http://dev.mysql.com/doc/refman/5.6/en/explain.html

Daltorio: Deweydb: http://www.percona.com/files/presentations/percona-live/london-2011/PLUK2011-practical-mysql-indexing-guidelines.pdf

Thomasson: Ok, I guess I should put my problem more simply: is there a way to have a performant index that isn’t unique?

Serrett: Doing an indexed equality lookup on a table with 20 million rows shouldn’t take 7 seconds unless your hardware is crap. How about you follow the stuff danblack linked for you first?
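An ordinary secondary index does not have to be unique. A sketch, with the table name assumed and the column name taken from the discussion below:

```sql
-- Non-unique index on the shared string column
ALTER TABLE mytable ADD INDEX idx_zip (zip);

-- Equality lookups can then use the index
SELECT * FROM mytable WHERE zip = '901';
```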

Fasy: Ok, one sec. I gotta rebuild the database; I trashed it because I was going to restructure it

Schmied: But I can just type out a sample of the data.

Schnitzler: http://pastebin.com/VCeY2wpA

Catts: So there are about 500 rows where zip is the same. i.e. 500 rows where zip is 901, another 500 rows where zip is 902, etc.

Buzzeo: The only way I have been able to get the performance not horrible is by putting all the data for those 500 rows into one giant serialized string, instead of having 500 rows.

Legendre: Hardware is good. 16 cores, 8GB ram.

Peltz: Hardware: a lot depends on IO latency. 500 rows isn’t really indicative. See above instructions.

Deary: Don’t bother rushing to put up info fast; correct information is better

Fassett: Explain of the table: http://pastebin.com/K6wifgNY

Helmsing: Deweydb: don’t run EXPLAIN on the table.

Helmsing: Deweydb: you run EXPLAIN on the SELECT queries!
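That is, something like this (table and column names assumed):

```sql
EXPLAIN SELECT * FROM mytable WHERE zip = '901';
```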

Isabell: So, I’m a noob, and clearly a complete idiot, because I cannot figure out why this insert statement fails silently. http://pastebin.com/dC2QHg1P

Cardinal: What am I missing? feel free to point and laugh, too

Grieff: Jx, no idea why silently, but you probably missed one quote: in “'tester5',testpw',NOW” there should be a ' before testpw
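A sketch of the corrected statement; the table name, column names, and the NOW() call are assumptions here, since the actual paste isn't reproduced in this log:

```sql
INSERT INTO users (username, password, created_at)
VALUES ('tester5', 'testpw', NOW());
```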

Blea: Ok, so that helped a little, but that error doesn’t carry over into my php statement, so let me poke a moment. Thanks, jkavalik. It’s 3am and extra eyes help.

Dollins: Jx, do you check for MySQL errors properly in your PHP code? Try the mysql client too, to see if it is a syntax or other SQL error, or just a PHP thing

Donalson: With the missing ‘ it failed silently at the command line, but with it, it gave me a “no db selected” error, which I’m trying to see if that is why it fails in my PHP