1

I have some small sets of data from the database (MySQL) that are seldom updated.
Basically 3 or 4 small two-dimensional arrays (50-200 items).
This is the ideal case for memcached, but I'm on a shared server and can't install anything.
I only have PHP and MySQL.

I'm thinking about storing the arrays in a file and regenerating the file via a cron job every 2-3 hours.

Any better ideas or suggestions about this approach?
What's the best way to store those arrays?

10
  • Are you experiencing any performance problems related to these arrays at the moment? Commented Jun 22, 2010 at 9:53
  • @Col. Shrapnel I have to get this info from the database for EVERY page. And MySQL is dying because of the load... Commented Jun 22, 2010 at 9:59
  • @ILMV jslint??? I think you pasted the wrong link... Commented Jun 22, 2010 at 10:01
  • Yeah, it's ok. 99% of sites do that. That's what databases were invented for - getting info for every page. You have nothing to worry about. And if your database is dying, you have to do profiling first, to find out what query is causing this. Commented Jun 22, 2010 at 10:05
  • Yeah, but in 99% of sites the database server works ok. This is not the case... Every query to the database takes ages, no matter how simple it is, no matter how good the query plan is Commented Jun 22, 2010 at 10:10

3 Answers

2

If you're working with an overworked MySQL server then yes, cache that data into a file. Then you have two ways to update your cache: either via a cron job, unconditionally, every N minutes (I wouldn't update it less frequently than every hour), or every time the data changes. The best approach depends on your specific situation. In general, the cron job way is the simplest, but the on-change way pretty much guarantees that you won't ever use stale data.

As for the storage format, you could just serialize() the array and save the string to a file. With big arrays, unserialize() is faster than a big array(...) declaration.
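
A rough sketch of how that could look (the file path and the get_arrays_from_db() helper are placeholders, not anything from the question):

    <?php
    // rebuild_cache.php - run from cron every couple of hours.
    // get_arrays_from_db() stands in for whatever queries build the 3-4 arrays.
    $data = get_arrays_from_db();

    // Write to a temp file and rename, so a page request never reads a half-written cache.
    $cacheFile = '/path/to/cache/arrays.ser';
    file_put_contents($cacheFile . '.tmp', serialize($data));
    rename($cacheFile . '.tmp', $cacheFile);

and on every page request:

    <?php
    $data = unserialize(file_get_contents('/path/to/cache/arrays.ser'));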

0
1

As said in the comments, it would be better to check first whether the root of the problem can be fixed. A roundtrip that long sounds like a network configuration problem.

Otherwise, if the DB simply is that slow, nothing speaks against a filesystem-based cache. You could turn each query into an md5() hash and use that as a file name. serialize() the result set into the file and fetch it from there. Use filemtime() to determine whether the cache file is older than x hours. If it is, re-run the query - or in fact, to avoid locking problems on the cache files, use a cron job to regenerate it.
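
Untested, but the idea would be something like this (the cache directory and the 2-hour TTL are just placeholders):

    <?php
    // Crude per-query file cache along the lines described above.
    function cached_query(mysqli $db, $sql, $maxAgeSeconds = 7200)
    {
        $cacheFile = '/path/to/cache/' . md5($sql) . '.ser';

        // Serve from the cache while the file is fresh enough.
        if (is_file($cacheFile) && time() - filemtime($cacheFile) < $maxAgeSeconds) {
            return unserialize(file_get_contents($cacheFile));
        }

        // Otherwise run the query and rewrite the cache file.
        $rows = array();
        $result = $db->query($sql);
        while ($row = $result->fetch_assoc()) {
            $rows[] = $row;
        }
        file_put_contents($cacheFile, serialize($rows), LOCK_EX);
        return $rows;
    }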

Just note that this way, you would be dealing with whole result sets that you have to load into your script's memory all at once. You wouldn't have the advantage of being able to query a result set row by row. This can be done in a cached way too, but it's more complicated.

1
  • I think I'll use a hybrid of your, carlos's and Josh's answers :-) Commented Jun 22, 2010 at 11:25
1

My English is not good, sorry.

I have sometimes read about alternatives to memcache. It's complex, but I think you can use http://www.php.net/manual/en/ref.sem.php to access shared memory.

A simple class example used for storing data is here: http://apuntesytrucosdeprogramacion.blogspot.com/2007/12/php-variables-en-memoria-compartida.html

It's written in Spanish, sorry, but the code is easy to understand (Eliminar = delete).

I have never tested this code!! And I don't know if it's viable on a shared server.
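
For what it's worth, if the host has the sysvshm extension enabled, the basic shm_* usage looks roughly like this (equally untested; get_arrays_from_db() is a made-up helper):

    <?php
    // Shared-memory cache sketch using PHP's SysV shared memory functions.
    // Requires the sysvshm extension, which many shared hosts do not enable.
    $key  = ftok(__FILE__, 'a');      // derive an IPC key from this script's path
    $shm  = shm_attach($key, 65536);  // attach to (or create) a 64 KB segment
    $slot = 1;                        // numeric variable key inside the segment

    if (shm_has_var($shm, $slot)) {
        $data = shm_get_var($shm, $slot);   // cache hit
    } else {
        $data = get_arrays_from_db();       // cache miss: query MySQL once
        shm_put_var($shm, $slot, $data);    // store it for the next request
    }

    shm_detach($shm);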

1
  • My English must not be very good either, judging by how many comments I had to post before people understood what I was talking about... I'll look at it later, right now I need to get some sleep, thanks Commented Jun 22, 2010 at 11:20
