Flegle: This is offtopic anyway 😛

Portee: For whatever reason, when one of my apps uses prepared statements, they do not get counted by performance schema.

Ronne: 200 is supposed to be the max

Mckirryher: JesusTheHun: Percona++

Furutani: Table ‘performance_schema.events_statements_summary_by_digest’ doesn’t exist

Tambe: Wrksx: Yeah, new MySQL 5.6 feature

Furutani: Cheramie, so let me not help you =

Strathman: Wrksx: Don’t worry, you save a lot of performance by not having it

Galvez: Strathman: eh, it’s not that much overhead

Chopton: And the information you get is amazingly useful
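
A typical query against the digest table being discussed might look like the following; this is a sketch, assuming MySQL 5.6+ with statement digest instrumentation enabled (SUM_TIMER_WAIT is in picoseconds):

```sql
-- Top 10 statement digests by total latency
SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_TIMER_WAIT / 1000000000000 AS total_latency_s
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```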

Girona: Strathman: not really supposed to be used in production, is it?

Strathman: Cheramie: I have seen perf schema cause CPU saturation more than a few times, confirmed by digging through perf

Strathman: JesusTheHun: I would hope not, but some of its metrics are only really useful in prod

Tams: Then set the table on another disk

Strathman: Doesn’t help when it causes CPU issues

Toelke: Strathman: Could be, depends on what your workload is like

Meuler: Strathman: I recently deployed a new monitoring setup I built; it scrapes several performance_schema tables every 15s or so

Faglie: Strathman: No noticeable impact for my setup

Hodsdon: CPU issues? Meaning it’s the calculation of what is to be written that takes the CPU down?

Staino: Cheramie: is your monitoring setup open source?

Strathman: If you’re not using partitioned tables you’re probably fine. Most of the time I see issues with it, it’s due to partitioned tables, and commonly overhead from the lock-free hash that P_S uses

Roeger: Im looking for a good one

Salek: Strathman: “Lock-Free hash that P_S uses” # what is that?

Walla: I use partitioned tables

Strathman: JesusTheHun: mysql/lf_hash.c

Covello: Strathman: -_- really?

Doiron: JesusTheHun: http://prometheus.io/ is part of what I work on

Dethomasis: JesusTheHun: Specifically, I have been doing a bunch of work on https://github.com/prometheus/mysqld_exporter

Chimilio: JesusTheHun: I’m monitoring about 150 servers, 100 databases, 1000 tables with a single Prometheus instance

Wrighten: JesusTheHun: It generates about 500,000 timeseries metrics

Ashraf: JesusTheHun: and 40,000 metric updates per second

Hartje: Cheramie: dude, this is awesome

Wiatrowski: Where are you guys from?

Pirnie: Hi, 2 quick replication questions here. I have a master MySQL 5.5 instance with its databases on a dedicated partition and binary logs on a separate dedicated partition, with a 1-day purge and a 1 GB limit per bin file. Yet at times my binary logs outgrow the partition size of 25 GB, causing MySQL to stop accepting connections. I am okay with replication breaking, which can be addressed at a later time. Is there a way to ensure the database continues

Degirolamo: Accepting connections even if the binary log partition runs out of space? Are the following settings enforcing/causing MySQL to stop connections: innodb_flush_log_at_trx_commit=1 or innodb_support_xa=1 or

Kneifl: sync_binlog=1? Any quick insight is appreciated, as my head is spinning at this point sifting through documentation and Google results. Thank you
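
The settings named in the question can be inspected on the running server; a sketch (note these variables govern flush/durability behavior per commit, which is separate from the binlog partition filling up):

```sql
-- Check the durability/binlog settings mentioned in the question
SHOW VARIABLES LIKE 'sync_binlog';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'innodb_support_xa';
SHOW VARIABLES LIKE 'expire_logs_days';
SHOW VARIABLES LIKE 'max_binlog_size';

-- List the binlog files currently on disk, with their sizes
SHOW BINARY LOGS;
```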

Ossman: So expanding the binary logs partition is the only option, correct?

Milici: Logit: having no recent binary logs breaks the replication

Gidwani: There are some bugs here

Gambino: So it’s important to keep them

Gidwani: Mysql will not be able to commit a transaction if it can’t write to the binlog

Mcgarrigle: But you can trash old ones

Alva: If you are OK with not being able to recover operations from more than 6 months ago

Grensky: And you’re sure that all your replication slaves have synced at least once in the past 6 months
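
The manual purge being described can be done with PURGE BINARY LOGS; a sketch using the 6-month cutoff from the example (the file name is a hypothetical placeholder, and the purge is only safe once every slave has read past that point):

```sql
-- Discard binlogs older than 6 months; slaves that have not yet
-- read past this point will break their replication.
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 6 MONTH;

-- Or purge up to (not including) a specific file, e.g.:
-- PURGE BINARY LOGS TO 'mysql-bin.000123';
```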