But that increases the CPU.

 
Flott: Pedward: converting a csv into a single row for each value

Gorena: If the csv has 3 rows and 3 columns, that's 12 records in the InnoDB table

Neel: SHOW INDEX FROM t1 WHERE Key_name='PRIMARY';

Brincks: Yeah I did that, looks nice

Ladouce: With the WHERE you can just get the info from the PK

Ceasor: It breaks down the individual keys for the index

Mettille: The count is a little off, but it's ok

Bothof: But yeah, the app is able to receive log files with data and store them in a database, which makes querying easier than it would be against a csv file

Irby: And then I don’t have to load whole csv files for only a single column of data

Kohls: It's grown to this size in about 3 years

Brighter: You should consider partitioning that table

Hurter: I’ve looked into sharding a long while back 😛

Sixtos: With 5.6 you can exchange partitions too, so you can move them out of the big table into other tables
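A sketch of the 5.6 feature Sixtos mentions (table and partition names here are hypothetical): `EXCHANGE PARTITION` swaps a partition with a non-partitioned table of identical structure, which makes archiving nearly instant.

```sql
-- Move partition p2013 out of the big table into a standalone
-- archive table (MySQL 5.6+). The target must have the same
-- structure but no partitioning of its own.
CREATE TABLE logs_archive LIKE logs;
ALTER TABLE logs_archive REMOVE PARTITIONING;
ALTER TABLE logs EXCHANGE PARTITION p2013 WITH TABLE logs_archive;
```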

Krumroy: Yeah I might do some archiving into another table

Studniarz: Well, partitioning and sharding share similar concepts, but their implementations are different

Lagrotta: Because not all the data is relevant anymore or searched

Brockell: Sharding is typically across servers

Houpt: Scott0_: that’s exactly why you use partitions

Liljenquist: Internally it’s arranged as several tables

Jim: But that requires more labor

Lascola: So for now the plan is to upgrade the resources

Markette: You specify what range of keys go into each table

Altonen: Then mysql will only query the tables that match the key range
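A minimal sketch of the range layout Markette and Altonen describe, assuming a hypothetical log table partitioned by month:

```sql
-- Each partition holds one month of rows; MySQL only touches the
-- partitions whose key range matches the WHERE clause (pruning).
CREATE TABLE logs (
    logged_at DATETIME NOT NULL,
    metric    VARCHAR(64),
    value     DOUBLE
)
PARTITION BY RANGE (TO_DAYS(logged_at)) (
    PARTITION p201401 VALUES LESS THAN (TO_DAYS('2014-02-01')),
    PARTITION p201402 VALUES LESS THAN (TO_DAYS('2014-03-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```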

Welliver: I'm moving to a VPS with 8GB RAM and a 196GB disk

Murga: For a 594M-row table, that's all?

Pinell: And I'm gonna add another index

Bitetto: I had it surviving on 4GB and before that on 2

Kreisman: Partitioning first checks the key range, then selects which partitions it will query; this cuts your resource requirements significantly

Grime: Pedward: unless you are querying both partitions

Nives: It’s like having 1 table for every month of data

Kotch: So you store 1mo of data per partition, when you query you constrain to a date range

Sajor: Then you only query the tables/partitions that contain that date range

Erchul: This requires less resources
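Constraining the query to a date range, as described above, is what lets the optimizer prune partitions; on MySQL 5.6 `EXPLAIN PARTITIONS` shows which ones are actually read. Table and column names here are hypothetical:

```sql
-- Only the partition(s) covering January 2014 are scanned.
EXPLAIN PARTITIONS
SELECT AVG(value)
FROM logs
WHERE logged_at >= '2014-01-01'
  AND logged_at <  '2014-02-01';
```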

Zullinger: So I usually run an aggregate script on the data into a new aggregate table which handles averages

Sheirich: So I can get faster queries with the aggregate tables
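The aggregate step Zullinger describes could look something like this (table and column names are hypothetical):

```sql
-- Roll yesterday's raw rows up into daily averages, so routine
-- queries hit the small aggregate table instead of the raw one.
INSERT INTO logs_daily (day, metric, avg_value)
SELECT DATE(logged_at), metric, AVG(value)
FROM logs
WHERE logged_at >= CURDATE() - INTERVAL 1 DAY
  AND logged_at <  CURDATE()
GROUP BY DATE(logged_at), metric;
```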

Fogelson: Well, to each his own

Bandt: It works for what’s needed

Hemann: We don’t query very often

Szumigala: Yeah, but when something fubars, it’s nice to not have all your eggs in one basket

Lottie: When you add partitions, it moves the data from the big table into the smaller table. When you’ve created partitions for all of the key ranges you end up with a lot of little tables

Sossong: So it's partitioning based on indexes?

Abrom: So when the VPS provider gives you problems, you don’t have 1 big table

Wigglesworth: Whether a date range or function

Goldbeck: I don’t think the vps provider cares

Depedro: But I can dump in 1 hour

Moreman: How many GB is your table?

Cardello: It's about 80GB with indexes

Gonsar: But without the indexes, a dump compresses quite a bit

Luzader: Heh, compressed tables and partitioning would probably make things nice for you

Pamplin: Compressed tables is news to me

Colwell: But that increases the CPU usage to decompress
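For reference, enabling the InnoDB compression Luzader mentions is a single ALTER (hypothetical table name; requires `innodb_file_per_table=ON` and, on servers before 5.7, `innodb_file_format=Barracuda`):

```sql
-- Pages are compressed to 8KB blocks on disk; reads pay the
-- CPU cost of decompression that Colwell points out.
ALTER TABLE logs ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```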