Gallatin: Dump and import only semi-works, as the data is stored as utf8 in latin1 tables; php/mysql are set to latin1
Wessner: Imig: yeah, I know, let's not comment; it's a bad idea, we all know it
Imig: BerhanK: if they are indeed identical then stop the SQL server and binary copy the DB
Klingen: But dump and import should give identical data
Fitzsimmons: Imig: I have several db’s so I don’t copy the whole file, but export the db, then import on the other side
Imig: What’s wrong with that approach?
Chaix: The data looks like it's in utf8 format
Clyatt: BerhanK: Dump the structure and data separately
Imig: BerhanK: I mean what goes wrong?
Inserra: The website shows the funny chars. To fix it, I copy, say, the name of a person, alter it to say 22, then paste back the same data that was there before; now the data looks right on the site
Swiger: https://gist.github.com/e8fd13f5313fc9af82b4 Could I make the querying more efficient for this schema? This table just stores memory, CPU, and player count on those servers. I'm still not sure how long data should last before it gets deleted, but there will be an insertion every second or two.
Imig: BerhanK: check your locale and charset settings
Dougher: I did dump them separately; now importing the data alone, after the db schema was imported separately
Imig: Or even better, use iconv
Imig: To fix the charset issues
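Imig's iconv suggestion targets the classic double-encoding symptom described above: utf8 bytes that get decoded as latin1. A minimal Python sketch of the same byte-level round-trip (the sample string is hypothetical):

```python
# utf8 bytes mis-decoded as latin1 produce the "funny chars" described above:
# "café" stored as utf8 (0x63 0x61 0x66 0xC3 0xA9) reads back as "cafÃ©"
# when the connection treats those bytes as latin1.
broken = "caf\u00c3\u00a9"  # what the latin1 side shows: "cafÃ©"

# Re-encode to latin1 to recover the original bytes, then decode as utf8.
fixed = broken.encode("latin1").decode("utf8")
print(fixed)  # café
```

iconv can do the same conversion over a whole dump file before re-importing; which direction to convert (`-f`/`-t latin1` vs `utf8`) depends on which way the data was mangled, so check a sample row first.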
Alsobrooks: That's the thing, not sure the charset is an issue
Clyatt: BerhanK: Then create the new database using the first dump and before importing the data ALTER all tables to set charset to BINARY
Clyatt: BerhanK: If you have a lot of tables it is easy to do it by replacing the charset in the dump
Clyatt: BerhanK: Then import the data and after that try ALTER TABLE . SET CHARACTER SET utf8; do not use CONVERT at this point
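Clyatt's procedure can be sketched as statement generation: neutralize the charset before the data import so the server stores bytes as-is, then set utf8 afterwards without converting. The table names below are hypothetical stand-ins; in practice you would take them from the schema dump.

```python
# Hypothetical table list; in practice, read these from the schema dump.
tables = ["users", "posts"]

# Step 1 (before importing data): set the table charset to binary so the
# import stores the bytes untouched instead of reinterpreting them.
to_binary = [f"ALTER TABLE `{t}` CHARACTER SET binary;" for t in tables]

# Step 2 (after importing data): set the charset to utf8 WITHOUT a
# CONVERT TO clause, so the stored bytes are left as they are.
to_utf8 = [f"ALTER TABLE `{t}` CHARACTER SET utf8;" for t in tables]

for stmt in to_binary + to_utf8:
    print(stmt)
```

As noted in the chat, with many tables it is simpler to just search-and-replace the charset directly in the dump file instead of generating ALTERs.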
Mcmellen: Importing only the data takes a very long time
Rosella: The databases are already in utf8 collation
Hultgren: Update: importing the data alone worked perfectly
Mynear: But I did dump the schema first, changed latin1 to utf8, then imported the schema in the new VM
Kennerly: The issue is, the data import takes soooo long, almost 2-3 hours
Corre: While importing from, say, HeidiSQL, reading a CSV file pulls in the data in a few minutes, though with errors
Chaconas: Dumping the db is very fast; exporting data only is slow
Delpozo: As is importing data only
Keyworth: So my issue now really is that export/import takes a long time, as each record is exported/imported by its own statement, probably over 1M insert statements
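One row per INSERT is exactly the slow case; extended (multi-row) INSERT statements, which mysqldump emits by default with `--extended-insert`, cut the per-statement overhead dramatically. A sketch of the batching idea, using a hypothetical table and columns loosely modeled on the stats table under discussion:

```python
def batch_inserts(rows, batch_size=1000):
    """Yield one multi-row INSERT per batch instead of one INSERT per row."""
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        values = ", ".join(f"({r[0]}, {r[1]})" for r in chunk)
        # Hypothetical table/columns for illustration.
        yield f"INSERT INTO stats (server_session_id, player_count) VALUES {values};"

rows = [(1, 10), (1, 12), (2, 7)]
stmts = list(batch_inserts(rows, batch_size=2))
for s in stmts:
    print(s)
```

If the dump already uses single-row INSERTs, re-dumping with extended inserts (or wrapping the import in a single transaction) is usually the quickest win.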
Rumery: Delzer, no primary key?
Furutani: Ok I'll stop repeating three times or it's going to be painful for my hands
Florestal: Rumery, what would I make it out of?
Furutani: I’m playin around with timezone atm
Rumery: Delzer, you have to know ; can there be multiple rows with the same server_session_id ?
Rasulo: Rumery, yeah it records stats for many server_session_id
Furutani: And I’m a bit confused, I loaded the mysql.time_zone and time_zone_name tables as explained in the doc, and had a look at it and I can’t find CST.
Furutani: But a long list of cities
Rumery: Delzer, so there are multiple stats for the same server session with different timestamps?
Furutani: Any idea what’s the name of CST tz in mysql ?
Palagi: Rumery, keep in mind, this table is meant to provide a graph in the future, and not necessarily querying per server.
Furutani: Or should I use a city name that uses CST?
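Yes, a city name is the way to go: the tz database (which `mysql.time_zone_name` is loaded from via mysql_tzinfo_to_sql) uses location names, not abbreviations, because "CST" is ambiguous (US Central, China Standard, Cuba Standard). A quick Python check, assuming US Central is the zone meant:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "CST" is only the abbreviation a zone uses part of the year;
# the zone itself is named after a city, e.g. America/Chicago.
tz = ZoneInfo("America/Chicago")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz)
print(winter.tzname(), summer.tzname())  # CST CDT
```

In MySQL the same name applies: `SET time_zone = 'America/Chicago';` once the time zone tables are loaded.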
Rumery: Delzer, if you do not give any primary or unique key to an InnoDB table, the server creates some sort of autoincrement automagically for the table to work; you cannot see it or use it, but it is still there