Hi all, I'm a new user here.
So here's my issue, which I've been thinking about for a while. I'm developing a web counter (something like Google Analytics) and I can't settle on the best database model. I have limited disk space and memory (10 GB / 256 MB) on the server machine, so I need something compact. The queries aren't real-time, so speed isn't the top priority (though it is a priority).
I have to log all daily unique visits (one per IP per site) and impressions. I could log every single request, but I think that would be very bad in terms of space usage. I have separated the IPs into their own table, where I keep some generic geopositioning data, and I reference the IDs from that table.
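To show what I mean with the IP table, something like this (column names are just examples, not my actual schema):

```sql
-- One row per distinct IP; visit rows reference it by id
-- instead of repeating the address and geo data everywhere.
CREATE TABLE ips (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ip      INT UNSIGNED NOT NULL,   -- IPv4 stored as a number via INET_ATON()
    country CHAR(2)      NOT NULL,   -- ISO country code
    city    VARCHAR(64)  NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_ip (ip)
) ENGINE=InnoDB;
```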
So, what is the best way to log visits for lots of sites into a table, along with several fields (e.g. browser name, color depth and so on), so that the table stays compact and not too slow?
I was thinking of adding only one row per user per day per site and then updating a field on it with the impressions (I used a similar model in a previous version, but it was all screwed up and wrong, a total fail), but I can't quite work out the details.
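Roughly this is the model I have in mind, if it helps to see it concretely (all table and column names are made up for the example):

```sql
-- One row per IP per site per day; impressions is bumped on every hit.
CREATE TABLE daily_visits (
    site_id     INT UNSIGNED     NOT NULL,
    ip_id       INT UNSIGNED     NOT NULL,            -- references ips.id
    day         DATE             NOT NULL,
    browser     TINYINT UNSIGNED NOT NULL DEFAULT 0,  -- id into a small browser lookup table
    color_depth TINYINT UNSIGNED NOT NULL DEFAULT 0,
    impressions INT UNSIGNED     NOT NULL DEFAULT 1,
    PRIMARY KEY (site_id, day, ip_id)                 -- one row per site/day/visitor
) ENGINE=InnoDB;

-- Every page view would then be a single upsert:
INSERT INTO daily_visits (site_id, ip_id, day, browser, color_depth)
VALUES (42, 1337, CURDATE(), 3, 24)
ON DUPLICATE KEY UPDATE impressions = impressions + 1;
```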
Thank you. I'm sorry if my post is unclear, or if I have any typos.
Because I need more than that, like geopositioning and so on. And raw server logs take too much space; it would be hard for this machine to run through several GBs of logs to pull out the data for different sites.
This is a public web counter (like StatCounter, Clicky, Google Analytics and so on), not just for my site. That's why I think I need SQL.
And by the way, is SQLite slower than MySQL on big tables? And do you have any impressions of MariaDB?
Are you serious? You can forget it unless you can invest in some decent hardware. In my judgement that's not sufficient to run a dev machine, never mind a web analytics service!
That's a pretty standard shared hosting arrangement...
Speaking of... what is your hosting situation? This type of analytics tool is often included with hosting packages. Often it will source raw data from your server logs and then reference geocoding data to produce more detailed reports.
I get this for free on all my web hosts. I'm sure there are FLOSS solutions for this too.
When you mention this isn't just for your site, what exactly do you mean?
I'm on a VPS with lots of room to grow; this is the lowest plan. I was running on a home nginx/SQLite machine and it held up quite nicely, but my internet connection didn't, so I moved to a VPS.
I had the web counter running for half a year, and the database grew to 7 GB; caching the data was getting a bit slow, so I started all over, because I realised that my database design sucked.
Servers aren't my problem; I just can't come up with a suitable database model to record all these things nicely.
And the counter is for people who want a public counter, or who don't get something like AWStats/Webalizer with their hosting. It's also a local alternative to another (quite buggy) Bulgarian counter.
So it's hosted with me, but it isn't for my sites, rather the users', and I don't have access to their server logs.
He's a really nice fellow, really.
He answered a couple of my questions, and based on that I've decided to use several small tables, record every page visit, and run a batch script.
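In case it helps anyone later, the rough idea looks like this (table names invented for the example; the summary table is the same made-up daily_visits from earlier in the thread):

```sql
-- Every request appends one narrow row...
CREATE TABLE raw_hits (
    site_id INT UNSIGNED NOT NULL,
    ip_id   INT UNSIGNED NOT NULL,
    hit_at  DATETIME     NOT NULL
) ENGINE=InnoDB;

-- ...and the batch script periodically rolls the rows up into the
-- summary table (extra columns like browser left out to keep it short):
INSERT INTO daily_visits (site_id, ip_id, day, impressions)
SELECT site_id, ip_id, DATE(hit_at), COUNT(*)
FROM raw_hits
GROUP BY site_id, ip_id, DATE(hit_at)
ON DUPLICATE KEY UPDATE impressions = impressions + VALUES(impressions);

-- In a real run you'd delete only the rows you just aggregated;
-- TRUNCATE is shown here for brevity.
TRUNCATE TABLE raw_hits;
```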
Thank you all for your help
It looks like you've already got your answer but I just thought I'd throw in my 2 cents.
I have done this before for an undergraduate project. I was only keeping dates, IPs, and in & out pages. That worked well enough for me, and it certainly didn't take up much space. I even set up a page that would take all of the results from a date range so I could download them to a text file. You'd be surprised what statistical data you can get from only four columns in a table, but maybe you're looking for a lot more than that. Either way, keeping it simple is one of the best ways not to confuse yourself or get your tables and code out of order. Good luck!
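For what it's worth, pretty much the whole thing fit in one table along these lines (names from memory, so treat it as a sketch):

```sql
CREATE TABLE hits (
    visit_date DATE         NOT NULL,
    ip         VARCHAR(45)  NOT NULL,   -- long enough for IPv6 in text form
    in_page    VARCHAR(255) NOT NULL,   -- page the visitor landed on
    out_page   VARCHAR(255) NULL        -- last page before leaving
);
```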
Hey, thanks for your answer.
I ended up doing it that way, writing every record to the database.
It seems pretty speedy right now, with about 11,000 records over three days of closed beta, on two master-master replicated MySQL servers.
Hope this thread helps someone else too. Thank you all for the help!