Hello forums!!
I have made a table which contains more than 50 fields.
I am afraid this will cause problems later.
What I want to know:
1. How many fields should a table have for good performance?
2. What should be taken into consideration to avoid future hazards when a table has more than 50 fields?
I would really appreciate your answers.
Thanks in advance to all of you!!
more than 50 fields in a table ??
If all your fields are necessary, then there is no problem with it. It shouldn't matter. That said, a table with that many fields usually suggests that some of the data could be split out into related tables.
- Christopher
- Site Administrator
- Posts: 13596
- Joined: Wed Aug 25, 2004 7:54 pm
- Location: New York, NY, US
alex.barylski
- DevNet Evangelist
- Posts: 6267
- Joined: Tue Dec 21, 2004 5:00 pm
- Location: Winnipeg
50 fields in a single table??? Jesus H Christ man...that's madness...
As arborint said, it's not so much the number of fields that affects performance as the type of fields. Variable-width fields are, I would imagine, the hardest on performance; you could have 50 fields of TINYINT with the proper indexes, since fixed-width records can be located quickly using simple arithmetic.
However, 50 fields...that's insane...it's a maintenance nightmare...inserting or updating would require something of a SQL story as opposed to a SQL statement.
I would really try to normalize your table a little more. It typically goes against relational best practice to have two tables in a 1:1 relationship (meaning they are separate tables sharing only the primary key as a common denominator), but in this case I would segment your table into several tables, organized logically the way one might organize classes.
Just my advice
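A minimal sketch of the kind of 1:1 split described above, using SQLite via Python. The table and column names here are hypothetical, just to illustrate the idea: core columns stay in one table, rarely-used columns move to a second table that shares the same primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Instead of one 50-column "users" table, group the columns logically:
# frequently queried identity columns in one table, bulky profile
# details in another, linked 1:1 through the shared primary key.
cur.execute("""
    CREATE TABLE users (
        user_id   INTEGER PRIMARY KEY,
        username  TEXT NOT NULL,
        email     TEXT NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE user_profiles (
        user_id   INTEGER PRIMARY KEY REFERENCES users(user_id),
        bio       TEXT,
        location  TEXT
    )
""")

cur.execute("INSERT INTO users VALUES (1, 'alex', 'alex@example.com')")
cur.execute("INSERT INTO user_profiles VALUES (1, 'DBA', 'Winnipeg')")

# Queries that only need the core columns never touch the wide profile
# table; a JOIN on the shared key reassembles the full row when needed.
row = cur.execute("""
    SELECT u.username, p.location
    FROM users u
    JOIN user_profiles p ON p.user_id = u.user_id
""").fetchone()
print(row)  # ('alex', 'Winnipeg')
```

The INSERT and UPDATE statements against each narrow table stay short, which is exactly the "SQL statement instead of a SQL story" point made above.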
- Kieran Huggins
- DevNet Master
- Posts: 3635
- Joined: Wed Dec 06, 2006 4:14 pm
- Location: Toronto, Canada
As an experienced database developer and former college database instructor, I must disagree with those who suggest that 50 (or even many more) fields in a table is necessarily a sign of poor design or a source of problems. It is important that the table be normalized, though. Many practical database tables simply must have a large number of fields to represent all the characteristics of the entity that table models. That's the key concept: a table represents some definable entity--usually a person, an object, a transaction, or something like that--and each field in that table represents a characteristic of that entity. If that principle is followed, the number of fields is totally irrelevant.