Yakulis: for a large data set.

 
Kilkus: SamSagaZ, I just saw your fiddle and take care, grouping by DAY is most likely not what you want. It will group together data from the 1st of each month into one result.

Kilkus: SamSagaZ, grouping by DATE iirc is ok for this. I gotta go now, good luck with this
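To illustrate the DAY vs. DATE distinction Kilkus is pointing at, here is a minimal sketch. The table and column names (`readings`, `ts`, `val`) are invented for the example:

```sql
-- DAY(ts) returns only the day-of-month (1..31), so rows from
-- 2023-01-01 and 2023-02-01 land in the same group:
SELECT DAY(ts), SUM(val) FROM readings GROUP BY DAY(ts);

-- DATE(ts) keeps the full calendar date, which is usually what you want
-- when aggregating per day:
SELECT DATE(ts), SUM(val) FROM readings GROUP BY DATE(ts);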

Freiler: Where’s a good place (channel? mailing list? forum?) to ask flexviews questions?

Addiego: Jamiejackson: did you already read the blog posts about it?

Addiego: https://github.com/greenlion/swanhart-tools/blob/master/flexviews/README.md

Trosien: Some days I feel I should put my projects on GitHub with just a GPLv3 license until I am convinced each one is actually serviceable for what I want it to do, then change to a different license.

Spinello: Addiego: yes, I’ve read the blog posts and readme, but I don’t remember seeing any mention of support options.

Flummer: So far, it’s a lonely business, trying to come to grips with flexviews and stumbling through its lacking docs.

Lowndes: I’d also like to get a feel for whether anybody’s actually using flexviews. Most of the mentions are by Swanhart himself.

Addiego: Jamiejackson: are you also aware of virtual columns in mariadb?

Holleran: Yeah, thanks, Addiego, but this is cross-table stuff i’m trying to cache

Guidetti: Can mysql workbench backup the relevant tables/fields before making an update or delete?

Halik: You can make a backup, but it doesn’t have a “backup before every DML statement” feature.
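Although Workbench has no such feature, a server-side approximation is a `BEFORE UPDATE`/`BEFORE DELETE` trigger pair that copies the old row into an audit table. A minimal sketch, with invented table and column names (`orders`, `id`, `amount`):

```sql
CREATE TABLE orders (id INT PRIMARY KEY, amount DECIMAL(10,2));
CREATE TABLE orders_audit (
  id INT,
  amount DECIMAL(10,2),
  audited_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
-- Copy the old row values before every UPDATE...
CREATE TRIGGER orders_bu BEFORE UPDATE ON orders
FOR EACH ROW
  INSERT INTO orders_audit (id, amount) VALUES (OLD.id, OLD.amount)//
-- ...and before every DELETE.
CREATE TRIGGER orders_bd BEFORE DELETE ON orders
FOR EACH ROW
  INSERT INTO orders_audit (id, amount) VALUES (OLD.id, OLD.amount)//
DELIMITER ;
```

Note this captures per-row history on the server regardless of which client issues the DML; it is not a substitute for real backups.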

Perelman: We update our database by creating a temporary table, dropping the existing table and then renaming the temporary table. Is there a way to lock the tables to delay reads until the drop and rename finish?
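One answer to Perelman’s question: `RENAME TABLE` in MySQL can rename several tables in one atomic operation, so readers never observe a moment where the table is missing. A sketch with hypothetical table names:

```sql
-- Build the replacement off to the side:
CREATE TABLE t_new LIKE t;
-- ... populate t_new ...

-- Both renames happen atomically; concurrent reads see either the
-- old table or the new one, never neither:
RENAME TABLE t TO t_old, t_new TO t;

-- Clean up once nothing references the old copy:
DROP TABLE t_old;
```

This avoids the drop-then-rename window entirely, so no explicit read locking is needed for the swap itself.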

Beckem: I have a giant 16GB dump I want to import. I also want to turn off autocommit, etc to make it go faster. How can I do this?

Berbereia: Right now I fired up MySQL and turned off autocommit, then used “source” to pull in the file.

Mclaren: But it’s slowing down: it’s spamming lots of “Query OK, 2767 rows affected (4.49 sec)”, where it was like 0.5 s at first.

Starcic: Is this about normal? Can I make it go faster? I did make my InnoDB buffer pool about 75% of my RAM (6 GB).
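For the import itself, a commonly used sketch of session settings that skip per-statement commits and constraint checks during a large InnoDB load (the dump path is a placeholder; re-enable the checks afterwards):

```sql
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

SOURCE /path/to/dump.sql;

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```

These are session-scoped, so they only affect the connection doing the import.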

Halstrom: Hi there, would anyone be willing to please help me with a MySQL plugin?

Manross: Yakulis: how does mysql server know if the sql queries come from a single instance of ajax code or 1,000 separate clients?

Yakulis: Manross: does that count?

Manross: Yakulis: using ajax and onkeypress simply sends another sql query to the server

Manross: Yakulis: for each keypress

Culbreath: Yakulis: I think he’s saying the effects of your situation should be similar to another situation that puts the same demand on the server

Culbreath: Yakulis: and that other situation (many users) is common and manageable

Yakulis: Manross: ah indeed sir, with every keypress, my code seeks the DB and returns the matching primary keys, so the user has a clear idea of what can be the right choice

Manross: Yakulis: This is loosely true: For planning – competent hardware, competent mysql server config as to buffers and caches and log file sizes, normalized schema design, correct table storage engine choice, proper data types, adequate indexing, competent queries. For performance troubleshooting – reverse the order. :-)

Yakulis: Yay, what’s all that 😮

Manross: Yakulis: if you’re presented with a problem, fix the problem

Yakulis: Manross: ah, i was just wondering in case the DB has become huge :p

Varieur: Yakulis: define “huge”

Manross: Yakulis: if you then have performance issues you will need to fix them

Yakulis: Varieur: like having hundreds of rows in the table being scanned

Varieur: Yakulis: only hundreds? That’s tiny.

Yakulis: So huge would mean thousands? 😮

Varieur: Yakulis: thousands is tiny too.

Manross: Yakulis: for a large data set I would add a 250 millisecond timeout to the ajax code
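The 250 ms timeout described here is a debounce: a query is sent only once the user pauses typing, rather than on every keypress. A minimal sketch in plain JavaScript (the helper name `debounce` and the usage names are invented):

```javascript
// Debounce: delay calling `fn` until `waitMs` ms have passed without
// another call. Each keypress resets the timer, so only the final
// pause in typing actually fires a query.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// Hypothetical usage: one search query per typing pause, not per key.
// const search = debounce(term =>
//   fetch('/search?q=' + encodeURIComponent(term)), 250);
// input.addEventListener('input', e => search(e.target.value));
```

This collapses a burst of keypresses into a single round trip, which is usually the cheapest fix for per-keystroke query load.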