
Naslund: There still isn’t a good rename

Helmsing: Mehwork: the same way you’d do it in 5.5

Herod: I have to be able to rename it without losing existing data

Gurganious: Create a new database, move existing tables over to it.

Helmsing: Mehwork: you won’t “lose” data – you’ll just make it unavailable for a little while.

Peon: It’s just a development database, nothing on production

Kanno: Mehwork: In the mysql console do 'CREATE DATABASE new_db;', exit to the cli, then do 'mysqldump -uroot -pPWD --routines old_db | mysql -uroot -pPWD new_db'

Helmsing: Kanno: there’s another trick:

Routhier: Might break some triggers, views, sp, etc

Goodrum: I don’t have any of those mgriffin

Kanno: Create the new schema. Then run: SELECT CONCAT('RENAME TABLE ', table_schema, '.', table_name, ' TO new_schema.', table_name, ';') FROM information_schema.TABLES WHERE table_schema LIKE 'old_schema'; Then drop the old schema.

Lightfoot: It’s very simple, just a lot of data

Routhier: Tldr, you don’t need to rename it

Kanno: I’ve never tried that last one

Gurganious: Don’t forget to actually run the statements that thing produces before you go about dropping the old schema.
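To make the sequence above concrete, here is a sketch of what that generator query produces and the order of operations; the schema names old_schema/new_schema and table name t1 are illustrative:

```sql
-- Run in the mysql client, after CREATE DATABASE new_schema;
SELECT CONCAT('RENAME TABLE ', table_schema, '.', table_name,
              ' TO new_schema.', table_name, ';')
FROM information_schema.TABLES
WHERE table_schema = 'old_schema';

-- This emits one statement per table, e.g.:
--   RENAME TABLE old_schema.t1 TO new_schema.t1;
-- Execute those statements, verify the tables moved,
-- and only then: DROP DATABASE old_schema;
```
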

Kanno: Mehwork: backup first, of course

Helmsing: No, no, drop the old database first.

Gurganious: And as mentioned, it will not work properly if you have triggers or FKs or table-specific privileges

Routhier: Mehwork: curious, since i asserted you do not need to rename it, why are you interested in this?

Allmand: I’m inserting lots of rows with a single insert, and some of them violate a foreign key constraint. How can I know which ones?
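One way to spot the offending rows before running the insert is an anti-join against the parent table. A sketch, assuming a staging table `staging` whose `parent_id` must reference `parent.id` (both names are illustrative):

```sql
-- Rows in staging with no matching parent row;
-- these are the ones that would violate the foreign key.
SELECT s.*
FROM staging AS s
LEFT JOIN parent AS p ON p.id = s.parent_id
WHERE p.id IS NULL;
```
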

Shepp: Is there a performance hit / issue with joining latin1 on utf8?

Insana: Eg are the gains from using a single byte encoding lost when joining against a utf8 table? I’m just curious, not doing this in practice

Lauro: I guess the column you’re joining against could be set explicitly to latin1

Musielak: Someone internally is advocating latin1 on a table because joins are supposedly 3x more performant

Helmsing: BlaDe: if you join on a varchar column with different collation, it may not use indexes

Routhier: BlaDe: if you compare mismatched data types one has to be cast to the other
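The points above can be checked with EXPLAIN. A hedged sketch, assuming table `a` with a latin1 `code` column and table `b` with a utf8 `code` column:

```sql
-- With mismatched character sets, the latin1 side is typically
-- converted to utf8 for the comparison, and EXPLAIN will often
-- show the index on a.code going unused.
EXPLAIN SELECT *
FROM a JOIN b ON a.code = b.code;

-- Converting to a common character set lets the join
-- compare like with like and use the index again:
ALTER TABLE a CONVERT TO CHARACTER SET utf8;
```
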

Enote: How would you do a LIKE '%YEAR%'?

Lemkau: Normally it would be LIKE '%2015%'

Baynham: If I didn't have to use a function
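For reference, the usual way to avoid hard-coding the year does involve functions, building the pattern with CONCAT. A sketch; the table and column names are illustrative:

```sql
-- Match rows containing the current year in a varchar column.
SELECT *
FROM t
WHERE created_label LIKE CONCAT('%', YEAR(CURDATE()), '%');
```

Note that a leading-wildcard LIKE cannot use an index, so on a large table this will scan every row regardless of how the pattern is built.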

Harcourt: Helmsing / Routhier thanks. Do you guys generally just use utf8 everywhere? is 3x worse performance on joins realistic? I did a couple of tests and can’t reproduce that

Murphrey: Maybe I need multibyte characters in my dataset

Routhier: BlaDe: do you need utf8?

Routhier: DrJ: what type is the column?

Millstein: Varchar. and I know I know

Routhier: DrJ: fix your schema?

Lerman: Regardless I want to know how you do this

Redfern: I just tried CONCAT but that didn’t work

Buford: Routhier: in some tables absolutely, yes – SMS, translation tables

Routhier: BlaDe: then what is the question? ;

Lucchese: Names/addresses etc, we’re international. the question is that should you keep it consistent – if a table doesn’t require multibyte chars, should you use latin1 in a mostly utf8 environment?

Fumagalli: Let me rephrase the question

Routhier: BlaDe: you might use latin1 for something like a hash, other columns might need utf8 in the future even if you don’t think they do today

Routhier: BlaDe: fixing it later is a real hassle


Acree: DrJ: Try free open source search tools like Lucene http://lucene.apache.org/ or Sphinx http://www.sphinxsearch.com/