One table with many columns vs. several tables with a few columns each
I'm planning the database design for an online game and am wondering whether it is better to have one table with the player ID as the key and very many data fields, or several tables, each with only a few data fields.
Does it take longer to get a result from a table with many columns, even if I specify only the columns I need?
Does an update query take longer?
Let's say there are 200 players online at one time, each sending one request per second, and every request leads to data manipulations in either three tables or one table with many columns:
Can a normal web server handle the higher number of queries resulting from the higher number of tables without a noticeable decrease in speed?
[I'm using PHP and the PEAR DataObject abstraction.]
Thank you for your help!
EDIT: Changed "rows" to "columns" (-> I'm German ... sorry for that )
Dummkopf... Ja, richtig! ("Fool... yes, right!") Sorry, but this kid isn't buying that one, though I do like the joke. Now, on to your questions.
As a general rule you want to normalize your data, meaning more tables with fewer columns in each of them. Define a primary key (PK) that uniquely identifies each row in a table, and make sure that every column in the table depends on that PK. If a column doesn't depend on the key, it belongs in another table.
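To make that concrete, here is a minimal sketch of a normalized layout for a game, using SQLite via Python only so it runs anywhere; the table and column names (`player`, `player_stats`, `inventory`, `hp`, etc.) are invented for illustration and are not from the question:

```python
import sqlite3

# Illustrative schema only -- the table and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per player; the PK uniquely identifies the player.
    CREATE TABLE player (
        player_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );

    -- Frequently updated stats live in their own narrow table,
    -- keyed by the same player_id.
    CREATE TABLE player_stats (
        player_id INTEGER PRIMARY KEY REFERENCES player(player_id),
        hp        INTEGER NOT NULL,
        gold      INTEGER NOT NULL
    );

    -- Inventory depends on (player_id, item_id), not on player_id alone,
    -- so it belongs in a separate table with a composite PK.
    CREATE TABLE inventory (
        player_id INTEGER REFERENCES player(player_id),
        item_id   INTEGER,
        quantity  INTEGER NOT NULL,
        PRIMARY KEY (player_id, item_id)
    );
""")

conn.execute("INSERT INTO player VALUES (1, 'alice')")
conn.execute("INSERT INTO player_stats VALUES (1, 100, 50)")
conn.execute("INSERT INTO inventory VALUES (1, 7, 3)")

# A hot-path update touches only the narrow stats row,
# not one giant all-in-one player record.
conn.execute("UPDATE player_stats SET hp = hp - 10 WHERE player_id = 1")
hp = conn.execute("SELECT hp FROM player_stats WHERE player_id = 1").fetchone()[0]
print(hp)  # 90
```

The same `CREATE TABLE` statements carry over to MySQL with only minor type tweaks; the point is the split by dependency on the key, not the engine.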
In a transactional environment like a game, normalization matters more than ever. The smaller I/O size and more discrete data help performance, and the logical structure helps the developer know where to get and put data.
MySQL will deal with smaller rows faster than it will deal with larger ones: it will write them faster, and it will read them faster. As long as there are fewer than a few thousand tables, the normalized data will perform better in a transactional environment in nearly every case.
As a general rule, smaller rows can be found, manipulated, and managed faster. The PK does allow the row to be found quickly, but because of the way database engines work, smaller rows are generally faster to handle. When you update a 200-byte-wide row, you only have to move and write 200 bytes of data; when you update a 10,000-byte row, you have to manipulate all 10,000 bytes. Database engines usually work in pages of data, and the more free space per page, the less likely an update is to "split" a page.
Your best bet for getting a feel for how things will work is to try it. Set up a couple of scenarios and run write/read/update cycles against them, and see how they perform on your hardware and software. My guess is that more tables with fewer columns each will give you better overall performance, but you can test this yourself rather than take my word for it.
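A rough benchmarking sketch along those lines, again with SQLite standing in for the real database (absolute timings won't match MySQL, and the 50-column "wide" table is an invented stand-in, but the measure-both-shapes approach is the same):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")

# Wide table: one row per player with many columns (50 dummy columns here).
cols = ", ".join(f"c{i} INTEGER DEFAULT 0" for i in range(50))
conn.execute(f"CREATE TABLE wide (player_id INTEGER PRIMARY KEY, {cols})")

# Narrow table: only the columns the hot path actually touches.
conn.execute("CREATE TABLE narrow (player_id INTEGER PRIMARY KEY, hp INTEGER DEFAULT 0)")

for pid in range(200):  # 200 players, as in the question
    conn.execute("INSERT INTO wide (player_id) VALUES (?)", (pid,))
    conn.execute("INSERT INTO narrow (player_id) VALUES (?)", (pid,))

def timed_updates(sql, n=1000):
    """Run the same parameterized UPDATE n times and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        conn.execute(sql, (i % 200,))
    return time.perf_counter() - start

t_wide = timed_updates("UPDATE wide SET c0 = c0 + 1 WHERE player_id = ?")
t_narrow = timed_updates("UPDATE narrow SET hp = hp + 1 WHERE player_id = ?")
print(f"wide: {t_wide:.4f}s  narrow: {t_narrow:.4f}s")
```

Swap the connection for your real MySQL server and your real schemas, and push the request rate up to your expected load (200 requests/second) before drawing conclusions.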