Cazzell: http://p.defau.lt/?56L9sqQbOgIKQB1W0nJ54w

Nettleton: Scott0_: you must be across the aisle from domas!

Abels: How does it come up with that number?

Wollmer: Scott0_: COUNT(*) is a special breed on InnoDB tables, doesn’t matter if it’s indexed, it does a table scan

Roorda: Wasn’t there a faster way?

Arena: Scott0_: counted b-trees

Dillion: MyISAM stores the row count in the header

Manteca: Explain said it uses index though

Walken: MyISAM has a very naive implementation of a counted tree

Sweeting: It has just one count

Villacorta: Scott0_: because of MVCC there isn’t just one count

Weiss: Each thread sees its own revision

Sieber: So it can’t rely on a global row count
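
A rough sketch of why MVCC rules out a single shared count: under REPEATABLE READ, two sessions can legitimately see different row counts for the same table at the same moment, so InnoDB has nothing it could hand back without checking row visibility. Table and column names here are invented.

    -- session A
    START TRANSACTION WITH CONSISTENT SNAPSHOT;
    SELECT COUNT(*) FROM t;        -- say this returns 100

    -- session B, concurrently
    INSERT INTO t (val) VALUES (42);
    COMMIT;
    SELECT COUNT(*) FROM t;        -- B now sees 101

    -- session A again, same transaction
    SELECT COUNT(*) FROM t;        -- still 100: A's snapshot predates B's commit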

Rivett: https://www.percona.com/blog/2006/12/01/count-for-innodb-tables/

Gottsch: Technically you can maintain counted-btree in mvcc environment too

Mcgilton: As long as counts are implemented as commutative operations
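
One way to picture “commutative operations” here, purely as a sketch and not anything MySQL implements internally: keep a delta table that every writer appends to, so concurrent +1/-1 rows never conflict with each other, and sum the deltas on read (collapsing them periodically). All names below are made up.

    CREATE TABLE t_rowcount (
        slot  INT    NOT NULL,     -- spread writes over a few slots to avoid a hot row
        delta BIGINT NOT NULL
    ) ENGINE=InnoDB;

    -- in the same transaction as an INSERT into t:
    INSERT INTO t_rowcount (slot, delta) VALUES (FLOOR(RAND() * 16), 1);
    -- in the same transaction as a DELETE from t:
    INSERT INTO t_rowcount (slot, delta) VALUES (FLOOR(RAND() * 16), -1);

    -- the current count is just the sum of the deltas
    SELECT SUM(delta) FROM t_rowcount;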

Sattley: Yeah, but I suspect the devs don’t see that as low hanging fruit, they’re too busy trying to make 5.7/8 fit for you

Mielnicki: Ah so I can specify something in the WHERE and it will count faster?

Neiner: Well, it is hard to implement them in concurrent fashion

Checketts: Scott0_: yes, because it will look for that in the index

Pysher: This is exactly what domas was talking about earlier. His query was doing a full index scan on 100mn+ entries, he got it to perform range scans and reduce the rows touched
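
The range-scan point, sketched with placeholder table, column, and date values: if the count is restricted by a predicate on an indexed column, the optimizer only has to walk the slice of that index the predicate covers, rather than every entry.

    -- full index scan: every entry in the chosen index is touched
    SELECT COUNT(*) FROM events;

    -- range scan: only the part of the index matching the predicate is read,
    -- assuming created_at is indexed
    SELECT COUNT(*) FROM events WHERE created_at >= '2014-01-01';

    -- EXPLAIN should report type=range (and the index used) for the second query
    EXPLAIN SELECT COUNT(*) FROM events WHERE created_at >= '2014-01-01';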

Feerst: I wonder if column IS NOT NULL counts 😛

Henington: You have 594mn rows and so InnoDB has to walk the ordered indexes to get the count

Dossett: Scott0_: PK can’t be null anyway

Nickas: But I’m not using the primary key anymore

Jacquot: So it’s a non-starter

Houlberg: I’m using the first key in the index

Batun: There is an existing index

Marsingill: Scott0_: that just means you are using an index that’s even bigger than the PK :

Vigorito: So how do I get around this?

Dermody: Umm, don’t use COUNT(*) to get precise row counts

Wilda: You mean adding up multiple range scans?
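
If “adding up multiple range scans” means chunking the count by primary key, it might look like the following; the boundaries and table name are illustrative only, and the total work is similar, it just turns one long scan into several bounded ones.

    SELECT
        (SELECT COUNT(*) FROM t WHERE id <  100000000) +
        (SELECT COUNT(*) FROM t WHERE id >= 100000000 AND id < 200000000) +
        (SELECT COUNT(*) FROM t WHERE id >= 200000000) AS total_rows;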

Stencil: Pedward: another interesting problem with optimizer

Heap: It assumes that data is there

Kuning: Time to write another blog post

Pratte: Scott0_: the simple, but potentially naïve and costly approach is to use information_schema.tables.table_rows
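
The information_schema route is a one-liner; note that for InnoDB, TABLE_ROWS is an estimate derived from index statistics rather than an exact count. Schema and table names are placeholders.

    SELECT table_rows
      FROM information_schema.tables
     WHERE table_schema = 'mydb'
       AND table_name   = 'mytable';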

Kilgo: But querying that table can cause locking contention

Obryon: Let’s see what that says

Rahimi: Pedward: it isn’t that expensive

Douthit: As well as causing MySQL to read in tables

Tsan: Think about it though, I do full dumps nightly

Spingler: Scott0_: I used to have ‘EXPLAIN SELECT * FROM table’ as a dirty method

Christmann: Even if I EXPLAIN my COUNT(id) query it gives me the total in the index

Bonhomme: Domas: try telling that to people who run those queries a lot and have 500 queries running at the same time, with a table cache of 2k

Dunford: Pedward: depends on stats_on_metadata value I guess

Papranec: What I’m doing is saving tabular data with each value in a CSV as a row

Bou: It’s kind of expensive, but it lets me pull reports and graph data with lots of flexibility

Shellabarger: Scott0_: SHOW INDEX will give you the cardinality of the primary key, which is the number of rows in the table
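
For the SHOW INDEX route, the value being referred to is the Cardinality column on the PRIMARY key row; for InnoDB it is a statistics-based estimate, which ANALYZE TABLE can refresh. The table name is a placeholder.

    ANALYZE TABLE mytable;      -- optional: refresh the index statistics first
    SHOW INDEX FROM mytable;    -- read Cardinality where Key_name = 'PRIMARY'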

Gallargo: Scott0_: are you storing in a CSV table or converting a CSV to an InnoDB table?

Graciano: Oh wow I’ve never seen the index displayed that way before