There are conceptually infinite designs for a piece of software. But in reality, of the hundreds of blogs, forums, CMSs, etc. -- only a few are actually very popular. Many on that list would be considered by some here to have badly designed code. My thesis is that the code is not badly designed, and may actually have some interesting things about it that are worth understanding.
Precisely. Which is exactly why I suggested focusing on "tangibles".
In my experience, the better the design (whether it be MVC or doohickey) the better the overall technical quality of the application.
And I think there is disagreement about what is relevant and what is irrelevant. But if you would like this thread to be only about the design of the code, without any discussion of features, that is your call in your thread.
I think in the previous thread there certainly was confusion. I was personally referring to tangible qualities of a project which are actually quantifiable: facts which cannot be argued or debated, numbers which might be used in a quality metrics standard, such as whether the pages validate against W3C, etc.
pytrin apparently means intangibles, like unit testing, etc., whereas you and jshpro2 seem to focus more on quality from the business side of things, in how well the project solves the problem domain.
All of these are/were valid arguments in their given context, I suppose. The problem is, design patterns can be argued until you're blue in the face; whether unit testing helps is highly debatable; and whether a software application solves your users' problems can only really be tested once you go live and develop a commercially successful software system/community.
As I have said, most of those "metrics" are just opinion -- except the test coverage. You might as well add "in a way I like" to the end of most of those statements. It is not that opinions about those might not be interesting to hear. I just think there are some better, more objective metrics.
That's twice you have said that and twice you have not bothered to elaborate.
There is no 'opinion' involved in gauging the quality of an application design via benchmarking, validating against W3C, memory footprint, etc. While these are not directly indicative of good design (you could write really bad code that executed like greased lightning but produced buggy results), in enterprise applications it is unlikely this is the case.
The metrics I am speaking of can quite simply be summed up in the following: Assume you have two identical applications. Exacting in every way from interface to DB schema, there is no visible difference from the end user perspective and they solve the *exact* same problem.
Given the two applications, which would you pick:
1. The one that performed better (used fewer CPU cycles)
2. The one which used less memory (smaller system footprint)
3. The one which was historically more secure (fewer security exploits over its lifetime)
4. The one which was more stable and less buggy (fewer reported bugs over its lifetime)
5. The one which had fully validating pages with no exceptions
6. The one which was fully internationalized/localized (not just language translation)
7. The one which was fully accessible from a client side perspective (more of the application worked without JS enabled)
Answer these types of questions and you get a clear sense of that kind of quality metric (which are/should be indisputable). I mean, if you insist on using the application with more bugs, security holes and problems, then by all means, give'er.
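Metrics 1 and 2 above (CPU cycles and memory footprint) are directly measurable. A minimal sketch of how that measurement might look, using two hypothetical implementations of the same task (all function names here are my own illustration, not from any project discussed):

```python
import timeit
import tracemalloc

# Two hypothetical implementations that solve the *exact* same problem:
# building one large string from many small pieces.
def build_report_naive(n):
    out = ""
    for i in range(n):
        out += f"row {i}\n"   # repeated concatenation re-copies the string
    return out

def build_report_join(n):
    # single join at the end avoids the repeated copies
    return "".join(f"row {i}\n" for i in range(n))

def measure(func, n=5000):
    """Return (seconds per call, peak bytes allocated) for func(n)."""
    seconds = timeit.timeit(lambda: func(n), number=10) / 10
    tracemalloc.start()
    func(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return seconds, peak

if __name__ == "__main__":
    for f in (build_report_naive, build_report_join):
        secs, peak = measure(f)
        print(f"{f.__name__}: {secs:.6f}s per call, peak {peak} bytes")
```

The point is not these particular numbers but that, end-user-identical or not, the two versions produce comparable, arguable-about-nothing figures.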
As a professional software developer, I certainly wouldn't.
Might it not be more useful to categorize the different types of design and then evaluate each separately? IMO, code base design, user interface design, and feature design are all separate fields (and I'm sure there are more than just those three), though at times they may have overlapping concerns. If you can evaluate each subset of design, it becomes easier to quantify the quality of the overall design.
Are my posts too long to read through thoroughly or something? Is this not what I have been advocating?
Lists like this are trying to measure 'good'. However, they don't really offer much explanation of what the target of 'good' is. Maybe we should first start with a more general description of good.
A more general description of good? That is exactly what is causing the problem. If it's too general, it's too subjective. I think my list is the only tangible way to prove quality over mediocre. There is no debate given the question I just gave.
Ask yourself, given two identical applications in every way...which would you choose given the list of standard metrics above? And don't change the subject or try to stray off topic; remember, I said two otherwise identical applications (from the end user perspective) but one is technically superior. Which do you choose?
From experience I can tell you that it takes a tremendous amount of work to make an internationalized application, one that consistently validates, runs faster, and so on. Most of these metrics can *only* be achieved with a good underlying design (I happened to follow MVC as best I understood it given the problem domain). That is to say, when you follow no design (phpList, WordPress, etc.) you end up with bloated, buggy software that, technically speaking, performs significantly poorer.
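To illustrate why internationalization requires up-front design work: every user-visible string has to flow through a translation lookup rather than being hard-coded. A minimal sketch using Python's standard gettext module (with no compiled message catalogs installed, the lookup falls back to the original string; the function names are my own):

```python
import gettext

# With no .mo catalog installed, NullTranslations simply returns the
# original (English) string, so the code runs unchanged in development.
translation = gettext.NullTranslations()
_ = translation.gettext

def greeting(name):
    # The literal passed to _() is the msgid a translator would work from.
    # Using a placeholder keeps word order flexible across languages.
    return _("Hello, %s!") % name

print(greeting("world"))  # falls back to the English msgid
```

Retrofitting this onto an application whose strings are scattered through templates and logic is exactly the "tremendous amount of work" described above; a design that routes strings through one layer from day one makes it nearly free.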
Ignoring the subjective arguments, such as "does it solve the users' problems?" -- that is irrelevant in this discussion (my version, anyways), as it's assumed we have a crack marketing/sales team who have dictated development from day one to solve exactly the problems set forth by our clients.
Let us focus strictly on the tangible metrics, so when two development teams are writing an identical project, there is a quantifiable way of proving (without a doubt) which is of the higher quality; aka standards compliant.
It seems that to define good you have to use phrases like: "easy to extend", "functions correctly", "is popular".
That is very subjective. Everyone will have a different answer. And again, IMHO an application that runs faster, uses less memory, etc. is going to be easy to extend, enhance, refactor, etc., by virtue of the fact that it meets the above metrics. Large monolithic applications are not easy to extend; they are brittle and hard to unit test because of bad design choices (ie: WordPress). As a side effect, these applications run slower and have more bugs as well.
Only once we define "good" in general terms can we start looking for metrics to apply to software.
That is why I have been focusing on tangible metrics which are indisputable...you will *never* define "good" in "general" terms...the day you figure that out I will hand you the Nobel Peace Prize, because you also likely found the key to world peace.
You will never get an entire room to agree on such a vague description. However, what I can tell you is that as each "opinion" passes or exceeds the tangible metrics I have given (about 10 times now), such as validation, fewer bugs, fewer security holes, etc., then it becomes safer to assume that one given design works better than the next.
If design ABC consistently results in faster code, fewer bugs, fewer security holes, better stability, and better overall performance...then it's probably safe to assume that design ABC is better than design MVC, and so on.
Cheers,
Alex