
 
Lett: I have no primary key and I would like to query 1 specific row and group the results by 5

Stablein: Group by can be an expression. group by x div 5
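
A minimal sketch of grouping by an expression; the table and column names (t, x) are placeholders, not from the conversation:

    -- each bucket covers 5 consecutive values of x
    SELECT x DIV 5 AS bucket, COUNT(*) AS rows_in_bucket
    FROM t
    GROUP BY x DIV 5;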

Streib: Pixelgrid: evil is clearly defined at http://www.p****error.com/sql/select*isevil.html # To get all column names: EXPLAIN EXTENDED SELECT * FROM yourtable; SHOW WARNINGS;

Strahan: Especially with group by. Select a grouped column or an aggregate expression.

Rissanen: So SELECT * FROM table where 1=1 GROUP BY x div 5?

Bingham: Where 1=1 is a total waste of time

Burgas: It just returned 1 row

Sanfelix: SELECT fieldName, value FROM event_authAccessValue GROUP BY value div 5

Bingham: Your group by has to be sensible to your data

Rougier: They are random texts

Rasinski: And I have no primary key

Murri: Is there a way to group the number of results?

Bingham: So your group by is meaningless

Bingham: I don't think you realise what group by is; you probably want limit
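
If the goal is just to cap how many rows come back rather than to aggregate them, LIMIT is the usual tool; a sketch reusing the table and columns from the query above:

    -- return at most 5 rows, no GROUP BY involved
    SELECT fieldName, value
    FROM event_authAccessValue
    LIMIT 5;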

Rau: Hi! I’m quite new to optimizing databases and I’m working with a legacy MySQL database. I was told to speed up some queries. The following may be a simple question, but I wanted to be sure:

Rau: I have tables ‘orders’ and ‘orders_products’. For example, if I’m selecting data from orders_products with product code 26117, it returns 100 rows with the product name and the order id of the order it belongs to. But if I do an INNER JOIN to the orders table by order id, the EXPLAIN command says that rows examined for the orders table is 24981. That’s the full row count of the orders table. Is this normal with JOINs, that every row in the joined table has to be examined, or is there

Twomey: Spexi: start here http://www.percona.com/files/presentations/percona-live/london-2011/PLUK2011-practical-mysql-indexing-guidelines.pdf

Dobbs: If you are stuck with a specific case

Klish: Spexi: Please paste your query, the EXPLAIN SELECT, and the relevant SHOW CREATE TABLE(s) in one pastebin, with the SQL formatted so it is readable

Rohan: Spexi: newer database versions also can do better query planning.

Rau: Danblack: http://pastebin.com/56ucBHjE

Rau: And thanks for the link for reading about indexing, I will read that

Rau: But it would be good to know, too, if there’s some obvious database design problem

Rau: I haven’t designed it, but I still have to be the one who tries to understand it, and of course without any documentation 😀

Colla: With mysqldump -uuser -p db table > dumpfile.sql, I get an INSERT INTO table VALUES bla bla query. I have to take the dump, then add a column to this existing table, and then import the dumped data back into the same table. How do I do it so that the insert query has the field names noted and will put the data back in?
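
mysqldump can write the column list into each INSERT itself via its --complete-insert option, which is one way to get the field names noted; a sketch assuming the same user, database, and table names as above:

    # --complete-insert writes INSERT INTO `table` (`col1`, `col2`, ...) VALUES (...);
    mysqldump -uuser -p --complete-insert db table > dumpfile.sql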

Yockers: When I import a file which has one column fewer, for which a default value is set, what happens? Does the import work OK?

Hoeger: Hi, is it possible to replicate only some of the n tables from database 1 to database 2, and also have some completely different tables there too?

Bingham: Spexi, try a composite key on id,paid in the orders table

Kubera: Spexi: orders_products productcode,order_id would be a better key.
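
A sketch of the two index suggestions above; the index names are invented here, and the real column names in the pastebin may differ:

    -- composite key suggested for the orders table
    ALTER TABLE orders ADD INDEX idx_id_paid (id, paid);

    -- composite key suggested for the orders_products table
    ALTER TABLE orders_products ADD INDEX idx_product_order (productcode, order_id);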

Saffel: Spexi: a malfunctioning form of blackhole engine that sometimes accidentally stores data.

Rau: Hmm okay, thanks for the tips :

Honchell: Will a CSV dump get imported if a new column was added to the table after the dump was taken?

Rau: Danblack, Bingham: So in short, creating a composite key with two fields is different from creating a key for each field? That was the one thing I was wondering: whether that's why orders_products has multiple fields under the key named Search.
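
They do behave differently: the optimizer usually picks only one index per table in a query (index merge aside), so two single-column keys are not equivalent to the one composite key sketched above. A sketch of the separate-keys alternative, with invented index names:

    -- two separate keys: normally only one of them is used per query,
    -- unlike the composite (productcode, order_id) key shown earlier
    ALTER TABLE orders_products ADD INDEX idx_product (productcode), ADD INDEX idx_order (order_id);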

Kenaan: Haris: Maybe. Depends on the constraints and the style of insert.
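
The "style of insert" point in practice: an INSERT that names its columns lets any omitted column fall back to its default, while a bare VALUES list must match the table's current column count. A sketch with hypothetical names:

    -- still works after a third column is added; the new column takes its default
    INSERT INTO t (col1, col2) VALUES ('a', 'b');

    -- breaks once t has a third column, because the value count no longer matches
    INSERT INTO t VALUES ('a', 'b');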

Palardy: Take a dump from a select query into a file; truncate the table; add the column to the table; import from the dump via LOAD DATA INFILE
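
A sketch of that sequence in MySQL statements; the table, column, and file names are placeholders, and the server needs the FILE privilege plus a writable path:

    SELECT * FROM t INTO OUTFILE '/tmp/t_dump.txt';                 -- dump via a select query
    TRUNCATE TABLE t;                                               -- empty the table
    ALTER TABLE t ADD COLUMN new_col INT DEFAULT 0;                 -- add the new column
    LOAD DATA INFILE '/tmp/t_dump.txt' INTO TABLE t (col1, col2);   -- reload, naming only the old columns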

Rau: Danblack, Bingham: And, does the order of the fields matter when creating a composite key?
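
Column order does matter: MySQL uses a composite index from its leftmost column onward, so an index on (productcode, order_id) helps a filter on productcode alone but not on order_id alone. A sketch using the columns suggested earlier, with a made-up order_id value:

    -- with INDEX (productcode, order_id):
    SELECT * FROM orders_products WHERE productcode = 26117;                    -- can use the index
    SELECT * FROM orders_products WHERE productcode = 26117 AND order_id = 42;  -- can use both columns
    SELECT * FROM orders_products WHERE order_id = 42;                          -- leftmost column missing, index not usable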