Flat file vs MySQL

For simple MySQL queries with indexes and smallish tables, you're going to be dealing with query times in the low 10,000ths-of-a-second range, so I doubt you'll hit any walls there; the server should be releasing the connection so fast that several hundred concurrent users shouldn't be a problem. You should look into persistent connections, though, as the real time waster will be creating and dropping connections. In your situation, you also know that you're always going to be accessing the data via a single key.
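
A minimal sketch of the persistent-connection idea using a connection pool, assuming the mysql-connector-python package; the credentials and the `events`/`id` names are placeholders, not anything from the question:

```python
from mysql.connector import pooling

# A small pool of persistent connections, reused across requests instead of
# paying the connect/teardown cost on every hit. Credentials and the
# `events`/`id` names are hypothetical placeholders.
pool = pooling.MySQLConnectionPool(
    pool_name="web",
    pool_size=5,
    host="localhost",
    user="app",
    password="secret",
    database="appdb",
)

def lookup(key):
    conn = pool.get_connection()   # borrow an already-open connection
    try:
        cur = conn.cursor()
        cur.execute("SELECT payload FROM events WHERE id = %s", (key,))
        row = cur.fetchone()
        cur.close()
        return row
    finally:
        conn.close()               # returns the connection to the pool
```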

But then it's running in a web server (Apache, maybe) and he can cache it there. For example, if it were Django I'd use one of the cache backends.
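
If the app really were behind Django, a cache backend looks roughly like this; a sketch only, assuming a configured Django project, with the cache key, file name and loader invented for illustration:

```python
import csv

from django.core.cache import cache

def load_table_from_disk(path="data.csv"):
    # Hypothetical loader: parse a flat file into a dict keyed by its first column.
    with open(path, newline="") as fh:
        return {row[0]: row[1:] for row in csv.reader(fh)}

def get_lookup_table():
    # Serve the parsed data from the cache backend for 10 minutes so each
    # request doesn't re-read and re-parse the file.
    data = cache.get("lookup_table")
    if data is None:
        data = load_table_from_disk()
        cache.set("lookup_table", data, timeout=600)
    return data
```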

This way our data size is not going to become unmanageable, but at the same time the data is still indexed in the form of flat files on disk. Is the current approach good to scale? Does moving old data to flat files help or not? Is storing all data in a single table correct? Do we need to change our database engine itself? Please give your thoughts on this architecture. Thank you very much in advance!

Database engines are in principle designed to cope with huge amounts of data much faster than raw data files when you have to access the data in a non-sequential manner.

However, I see in your query example that you apply a date conversion to the timestamp column. This forces your database engine to convert the timestamp of every row matching the WHERE clause to a date, which I suspect is very time consuming. That's 5 bytes of overhead per row. If this is not sufficient, make sure the server is correctly dimensioned for its big-data challenge, and check whether your DBMS is well placed in benchmarks against other DBMSs.
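
The usual fix is to compare the raw timestamp column against a precomputed range instead of wrapping it in a conversion function; a sketch with hypothetical `events`/`ts` names, not the original query:

```python
from datetime import date, datetime, timedelta

# Hypothetical table/column names; adjust to the real schema.
day = date(2024, 1, 15)
start = datetime.combine(day, datetime.min.time())
end = start + timedelta(days=1)

# Converting the column forces the engine to evaluate the function per row
# and prevents it from using an index on `ts`:
slow_sql = "SELECT * FROM events WHERE DATE(ts) = %s"            # params: (day,)

# Comparing the raw column against a half-open range is index-friendly:
fast_sql = "SELECT * FROM events WHERE ts >= %s AND ts < %s"     # params: (start, end)
```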

As a workaround you could also consider using two tables: an active table for the last 30 days, and a second table with all the historical data older than 30 days. Some batch job would then move the expiring data from one table to the other. When you insert data into big indexed tables at a high rate, index maintenance can easily become the limiting factor: every single row's values have to be inserted into its indexes. So limiting the table size is a good idea. But not by having one table and deleting old entries from it, because that means another index update, this time removing entries from the index, which takes about the same amount of time as the insertion.
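
Taken literally, the batch move mentioned above might look like the following (note it still pays the delete-side index cost just described, which is why the answer goes on to suggest something better); the connection details and `events_active`/`events_archive` tables are placeholders:

```python
from datetime import datetime, timedelta

import mysql.connector

# Hypothetical connection details and table names.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="metrics")
cutoff = datetime.now() - timedelta(days=30)

cur = conn.cursor()
# Copy the expiring rows into the historical table, then remove them from
# the active table, in one transaction.
cur.execute("INSERT INTO events_archive "
            "SELECT * FROM events_active WHERE ts < %s", (cutoff,))
cur.execute("DELETE FROM events_active WHERE ts < %s", (cutoff,))
conn.commit()
cur.close()
conn.close()
```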

A better scheme is to partition the data by date. If MySQL doesn't have that built in, there's no black magic necessary to do it with a bunch of normal database tables, a view and a bit of logic. With such a scheme, I think you can keep the old data in the database without compromising query performance for your typical current-month queries.
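
One way to spell out the "normal tables plus a view" idea; the monthly table names, the `events_template` table and the view name are invented for this sketch:

```python
# Hypothetical manual partitioning: one table per month plus a UNION ALL view,
# so current-month queries hit a small table while old data stays queryable.
months = ["2024_01", "2024_02", "2024_03"]

create_tables = [
    f"CREATE TABLE IF NOT EXISTS events_{m} LIKE events_template"
    for m in months
]
create_view = (
    "CREATE OR REPLACE VIEW events_all AS "
    + " UNION ALL ".join(f"SELECT * FROM events_{m}" for m in months)
)

for stmt in create_tables + [create_view]:
    print(stmt)   # run these against MySQL with whatever client you use
```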

Here's an idea: start a new database file every month. The only problem I see with that is that you may get a query that spans two different months. You could just tell your users to execute two separate queries if they need data from different months, or write some proxy code that can split a query and stitch the data back together before returning it (you could do this in VBA so your users can execute it from their workbook).
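
A rough sketch of that proxy logic; the per-month table naming carries over from the monthly-file idea and is an assumption, not something stated in the thread:

```python
from datetime import date

def month_tables(start, end, prefix="events_"):
    # List the per-month tables a date range touches, e.g. events_2024_01, events_2024_02.
    tables, y, m = [], start.year, start.month
    while (y, m) <= (end.year, end.month):
        tables.append(f"{prefix}{y:04d}_{m:02d}")
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    return tables

def build_queries(start, end):
    # One parameterised query per month; run each and concatenate the rows.
    return [
        (f"SELECT * FROM {t} WHERE ts >= %s AND ts < %s", (start, end))
        for t in month_tables(start, end)
    ]

for sql, params in build_queries(date(2024, 1, 20), date(2024, 2, 10)):
    print(sql, params)
```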

The timestamp should be in the name of the database, so that the database to be addressed can be constructed from the query parameters. You may need a catalog that maps a database name to a server so you can scale across different servers. You could create the future databases up front, enter some dummy data and test with those. You may want to compress older databases, unzip them just before mounting them, and delete the unzipped copies each night or whenever a different old database is addressed.
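
A tiny sketch of the naming-plus-catalog idea; the `metrics_YYYY_MM` scheme and the host names are invented:

```python
from datetime import datetime

# Map a timestamp to the database that holds it, and that database to a server.
CATALOG = {
    "metrics_2024_01": "db1.internal",
    "metrics_2024_02": "db1.internal",
    "metrics_2024_03": "db2.internal",
}

def database_for(ts: datetime) -> str:
    return f"metrics_{ts.year:04d}_{ts.month:02d}"

def server_for(ts: datetime) -> str:
    return CATALOG[database_for(ts)]

print(server_for(datetime(2024, 3, 5)))   # -> db2.internal
```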

That would give your users a considerable performance hit when addressing old data, but it would still be possible, and you could save a lot of space.

Here's another idea: you could have one file per day, or per month. For any query you jump into your stream, see what timestamp you hit; if you are too far off, jump halfway into the part you know your data to be in, and so on.
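
That "jump in halfway" idea is a binary search over a file of timestamp-ordered records; a minimal sketch that assumes fixed-width 16-byte records (timestamp plus value), which is purely illustrative:

```python
import os
import struct

RECORD = struct.Struct("<qq")   # hypothetical fixed-width record: timestamp, value

def first_at_or_after(path, target_ts):
    # Classic binary search over a file sorted by timestamp: read the middle
    # record, compare its timestamp, and halve the search window.
    total = os.path.getsize(path) // RECORD.size
    lo, hi = 0, total
    with open(path, "rb") as fh:
        while lo < hi:
            mid = (lo + hi) // 2
            fh.seek(mid * RECORD.size)
            ts, _value = RECORD.unpack(fh.read(RECORD.size))
            if ts < target_ts:
                lo = mid + 1
            else:
                hi = mid
    return lo   # index of the first record with timestamp >= target_ts
```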

It doesn't have to be an external server, just install MySQL.

Unless you're dealing with thousands and thousands of entries, I see no reason why you should use MySQL.

CoreProtect actually claims flat file is faster than MySQL in their case.

This is an interesting debate. We are currently looking for a good inventory plugin and are deciding between WorldInventories and MultiInv.

We have player files at the moment, which with WorldInventories would result in XML files. Basically: when a user logs in, is it faster to look up a flat file among a lot of others, or to look it up in a MySQL database? We are using "My Worlds" by bergerkiller to manage our multiple worlds.

I simply store all player inventories in the player info file in the world, in NBT tag format. Why bother storing it separately?

Flat file vs database speed? Just go with a database.

Questions: Is there a way to check if a table exists without selecting and checking values from it? I also want to execute a text file containing SQL queries.
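
For the table-existence question, one common approach is to ask information_schema instead of selecting from the table itself; a sketch assuming mysql-connector-python, with placeholder credentials and table names:

```python
import mysql.connector

def table_exists(conn, schema, table):
    # Look the table up in information_schema rather than touching its rows.
    cur = conn.cursor()
    cur.execute(
        "SELECT COUNT(*) FROM information_schema.tables "
        "WHERE table_schema = %s AND table_name = %s",
        (schema, table),
    )
    (count,) = cur.fetchone()
    cur.close()
    return count > 0

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="appdb")
print(table_exists(conn, "appdb", "events"))
```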


