Danblack: file size is?

Malm: Nsanden: http://mysqldump.Fineout.com/archives/20-Nermalisation.html and http://goo.gl/2X5B4 and some here http://www.keithjbrown.co.uk/vworks/mysql/

Godlewski: I think normalization does the opposite of what I'm trying to achieve

Decoux: Nsanden: yes, you’re doing the wrong thing.

Migl: I already have a more or less normalized db, but I want to take bits and pieces from various places and create a cached view: basically a table that I'll update once a day at most

Decoux: Nsanden: why do you want to create this?

Wentcell: So I don't have to do lots of joins

Auzston: I'll trade the redundant data and disk space for that

Decoux: Nsanden: have you run EXPLAIN on your joins yet?

Kubica: I don't know. I guess I assume it will be much faster not doing joins

Abreo: Especially since I don't need up-to-date data on each query

Clyatt: Nsanden: very wrong assumption

Whisenhunt: Salle, Decoux: you're saying I should not consider a cached view table and should just do the joins on each request for this data, but consider that I don't need fresh data, and the tables I'm joining can be pretty large

Decoux: Nsanden: define “large”

Elwonger: Decoux: Sorry, I have no idea about that manual entry.

Andris: Nsanden: See http://dev.mysql.com/doc/refman/5.6/en/using-explain.html
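For context, a minimal sketch of what EXPLAIN shows on a join; orders and customers are hypothetical table names, not from the conversation:

```sql
-- EXPLAIN reports the join order, which index each table uses (key),
-- and the estimated rows examined; "ALL" in the type column means a
-- full table scan, usually a sign that an index is missing.
EXPLAIN
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2013-01-01';
```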

Clyatt: Nsanden: Joining 4 tables each with 2-3 columns usually is way faster than querying single table with 40 columns with same data

Altig: Largest is 26 million rows but only 1.5 gig

Clyatt: Nsanden: There is no such thing as “cached view table”

Decoux: Nsanden: that’s a decent size.

Decoux: Nsanden: indexes and proper sql should have no problem, however.

Kendig: Salle: when I say cached view table I mean my own version of that, which would be a DROP TABLE; CREATE TABLE once a day
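A sketch of that once-a-day rebuild, assuming hypothetical table names (report_cache, orders, customers); building into a new table and swapping with RENAME TABLE avoids a window where the table is missing, which a plain DROP/CREATE would have:

```sql
-- Build the day's summary into a fresh table.
CREATE TABLE report_cache_new AS
SELECT o.customer_id, c.name, SUM(o.total) AS total_spent
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY o.customer_id, c.name;

-- RENAME TABLE swaps both names in one atomic operation.
RENAME TABLE report_cache TO report_cache_old,
             report_cache_new TO report_cache;
DROP TABLE report_cache_old;
```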

Decoux: Nsanden: why bother with such a table?

Ricciardi: Purely for performance, Decoux. But I'll look into EXPLAIN; if the table's not needed, I'd rather not build it

Decoux: Nsanden: you make too many assumptions.

Obiano: Is it me or is a “schema only” mysqldump just ridiculously slow? OK, it’s 6 MB and like 8000 tables, but it takes many minutes on a beast of a machine

Viguerie: Hi, https://dev.mysql.com/doc/refman/5.0/en/copying-databases.html lists several methods of transferring a database to a new server. Any suggestions on which ones work better/more reliably than others? Any favorites?

Mahan: Geo: Choices for backing up MySQL data include: stopping the server and copying the files :: mysqldump -F --single-transaction :: LVM snapshot :: innobackup/xtrabackup :: replication AND one of the previous.

Bachinski: Geo: you’re not actually using version 5.0, right? :-)

Hildebrandt: Bachinski, thanks, but that doesn’t really answer my question. I’m curious which of the listed solutions people find easiest

Paulis: Size? Versions from and to? Downtime allowed? Are all tables InnoDB, or read-only MyISAM?

Bachinski: Geo: well, to really answer your question, that depends on the size of the databases you’re exporting, etc.

Pheonix: How would i determine the size?

Bachinski: Geo: that means it’s not very large, then. Use mysqldump

Wahid: Do you have a backup? Or du -chs /var/lib/mysql? Exclude binary logs
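Besides du on the data directory, the data dictionary can report per-schema size from inside the server; note this is the logical data+index size (approximate for InnoDB) and does not include binary logs:

```sql
-- Data + index size per schema, in megabytes.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.TABLES
GROUP BY table_schema;
```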

Figlioli: Geo: mysqldump -u uname -p --routines dbname > yourfile.sql

Balaam: Geo: mysql -u uname -p dbname < yourfile.sql

Bachinski: Geo: you can export more than one db using mysqldump

Herpich: Geo: See http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html

Obermiller: Bachinski: I’m not saying that

Pettersson: How would I determine how large it is?

Linza: And are you referencing records, or disk space?

Mitsch: Danblack: file size is roughly 500MB