Cupples: I came in for help because it was slow, but was then informed it didn’t work and should be rewritten. So I’ve done that, but now I’d like someone to look and tell me if

Ragusa: (1) it gives the expected results, and (2) whether there are any obvious and significant performance enhancements: http://sqlfiddle.com/#!9/f78864/10

Cerveny: I know those dependent subqueries are bad but I need the top 2 results from grouped data

Bebber: All of the alternatives seem more evil

Cahalane: At least the ones I am aware of

Reiswig: Hi – I have a simple DB with 1 table, with no key field or auto increment. How can I add a new IDX field that is the key and auto-increments?

Zabriskie: Diex7: See http://dev.mysql.com/doc/refman/5.6/en/alter-table.html

Mott: Lucid: create an index on (sellerId, date) in the Ranks table

Stansel: Lucid: http://jan.kneschke.de/projects/mysql/groupwise-max/ http://dev.mysql.com/doc/refman/5.6/en/example-maximum-column-group-row.html
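
A minimal sketch of the groupwise-max pattern from those links, assuming the Ranks table has sellerId, date and rank columns (the fiddle schema isn’t reproduced here) plus the composite index Mott suggests:

    -- Composite index so the per-seller lookups can be resolved from the index
    ALTER TABLE Ranks ADD INDEX idx_seller_date (sellerId, date);

    -- Latest rank per seller via a correlated subquery (the manual’s “maximum of column per group” pattern)
    SELECT r.sellerId, r.date, r.rank
    FROM Ranks r
    WHERE r.date = (SELECT MAX(date) FROM Ranks WHERE sellerId = r.sellerId);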

Matheis: Naktibalda: because of the “WHERE sellerId=MatchingSellers.sellerId ORDER BY date” part, right?

Shill: Derived table approach could be faster
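
The derived-table variant Shill mentions, under the same assumed column names, does one grouped pass and joins back instead of re-running a subquery per row:

    -- One scan to get each seller’s newest date, then join back for the matching row
    SELECT r.sellerId, r.date, r.rank
    FROM Ranks r
    JOIN (SELECT sellerId, MAX(date) AS maxdate
          FROM Ranks
          GROUP BY sellerId) latest
      ON latest.sellerId = r.sellerId
     AND latest.maxdate = r.date;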

Georgiana: Naktibalda: I need the top 2 in the group

Engert: Naktibalda: currentRank, lastRank

Starling: Oh, I misread that as min and max

Mchaffie: I suppose I could change the currentRank/RankDate

Chhum: So it would be an optimization of 2 of the 4?

Avison: You could still do groupwise max to get the current rank

Wronski: And max current rank date

Rasbery: And the last rank would be the groupwise maximum that is lower than the current date

Hainsworth: Naktibalda: as long as I don’t start updating multiple times per day, yes. I don’t expect that to happen, but there’s maybe a 5% chance it could.

Toye: Naktibalda: I scrape the original source data from a 3rd party and try to be respectful of their resources

Meecham: And the last rank would be the groupwise maximum that is lower than the current datetime

Gobel: You called it date in your table
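
Put together, that suggestion might look roughly like this (column names are still the assumed ones): the current rank is the groupwise max on date, and the last rank is the groupwise max among dates below the current one:

    -- currentRank = newest row per seller; lastRank = newest row strictly older than it
    SELECT cur.sellerId,
           cur.rank AS currentRank,
           prev.rank AS lastRank
    FROM Ranks cur
    LEFT JOIN Ranks prev
      ON prev.sellerId = cur.sellerId
     AND prev.date = (SELECT MAX(p.date)
                      FROM Ranks p
                      WHERE p.sellerId = cur.sellerId
                        AND p.date < cur.date)
    WHERE cur.date = (SELECT MAX(c.date)
                      FROM Ranks c
                      WHERE c.sellerId = cur.sellerId);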

Wakabayashi: Naktibalda: Hmm yeah I might do those for now

Subido: Naktibalda: thanks so much for your time

Klammer: I feel a lot better with this solution, going to implement those changes

Padon: ALTER TABLE table1 ADD tidx VARCHAR(7); I don’t think an index should be VARCHAR, or should it?

Mcswain: We have 1 very large table in our db, a few gigs, filled with logs (inserts only, rarely accessed). Could that have an impact on the db as a whole?

Petruzzi: Diex7: varchar can be indexed, it can’t be auto_increment

Keiter: Rathrock: what impact would that be?

Duesing: The fact that other queries are slow too

Keiter: Rathrock: use the slow query log, and EXPLAIN

Keiter: Rathrock: also, use InnoDB and tune it properly.
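
A starting point for that advice; the 1-second threshold, table name and buffer pool size below are placeholders, not recommendations for this server:

    -- Enable the slow query log at runtime and log anything over 1 second
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;

    -- Then take an offending query from the log and inspect its plan
    EXPLAIN SELECT * FROM some_table WHERE some_column = 'some value';

    # my.cnf: the main InnoDB knob is the buffer pool; size it to the working set
    [mysqld]
    innodb_buffer_pool_size = 2G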

Chill: I mean when I run those queries on my vbox it takes seconds; on our production it takes minutes. And my vbox has less cpu/ram, but it also doesn’t have the full log table, only a few 100 rows in my logs table.

Brittman: Diex7: alter table t1 add id int unsigned not null primary key auto_increment first;
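
Tying that back to Petruzzi’s point about the VARCHAR(7) attempt: a VARCHAR column can carry an ordinary index, but AUTO_INCREMENT only applies to numeric columns, which is why the surrogate key above is an INT UNSIGNED (the index name below is arbitrary):

    -- Fine: a plain secondary index on a VARCHAR column
    ALTER TABLE table1 ADD tidx VARCHAR(7), ADD INDEX idx_tidx (tidx);

    -- Not valid: AUTO_INCREMENT on a VARCHAR column; it requires an integer
    -- (or floating-point) type, hence the “id int unsigned … auto_increment” above.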

Donawa: Diex7: evil is clearly defined at http://www.parseerror.com/sql/select*isevil.html # To get all column names: EXPLAIN EXTENDED SELECT * FROM yourtable; SHOW WARNINGS;

Keiter: Rathrock: you’re spinning your wheels thinking about the wrong things.

Nauman: Keiter: would it provide a benefit if we moved that table to a separate db, though? And maybe even split it up into several tables, one per type of log?

Soyke: Rathrock: listen to Keiter, resistance is futile.

Keiter: Rathrock: fix the real bottleneck

Keiter: Rathrock: again, you’re going in a million wrong directions

Matthey: I’m not a dbadmin, but I’m as close as it comes to one in this company, so it automatically falls on my head