Probably not, but then the average TDD user (in the grand scheme of things) probably isn't using TDD effectively, either. It still boils down to how you use(d) TDD compared to the ideal of BDD. The concepts BDD brought to light have always been there in TDD, that is certain; BDD evolved from TDD to make them easier for developers to grasp. Dave Astels and a few others from the "Ruby Gang" were looking to improve TDD's use in the wider world, and after much deliberation decided it would be better to chop it right down and (almost) start over with a new framework and the inclusion of should, be, not etc., just to ease the translation from "As a User, when I submit X, I want to receive Y" into an executable spec (originally demonstrated in Smalltalk, if memory serves).
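The original Smalltalk snippet isn't to hand, but the same should-style expectation can be sketched in plain Ruby with no RSpec dependency. The `submit` handler and its return value here are hypothetical, purely to show how the user story reads almost verbatim as a spec:

```ruby
# A minimal should-style expectation, monkey-patched onto Object the way
# early BDD frameworks did. This is an illustrative sketch, not RSpec itself.
class Object
  def should_equal(expected)
    raise "expected #{expected.inspect}, got #{inspect}" unless self == expected
    true
  end
end

# Hypothetical handler standing in for "when I submit X".
def submit(input)
  "receipt for #{input}"
end

# "As a User, when I submit X, I want to receive Y":
submit("X").should_equal("receipt for X")
```

The point of the vocabulary change is visible even in this toy: the assertion reads as a sentence about behaviour, not as a test of internals.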
I've been seeing people who can use TDD, not necessarily in the ideal way but with some degree of success, look at BDD and simply say, "Why bother? That's what I already do." In truth, they are not looking at either correctly: they are still testing private methods/properties, or verifying needlessly, because that is what they did beforehand with TDD. It's all a matter of relevance.
I still maintain that BDD is simply TDD re-branded. The process for effective TDD, and for BDD, is to translate requirements (e.g. user stories) into tests, write code to pass those tests, then refactor into more tests and more code to pass them (rinse, repeat, ad nauseam) until you have covered all the requirements. What BDD has done very well is, as you say, shorten the learning curve of xDD, and rightly so. I believe the name Test Driven Design came about simply because someone, somewhere, decided it would be useful to have the tests written before the actual application code, and for simplicity's sake called it TDD. Later we looked back and thought, "That's silly; tests are past-tense."
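That loop can be sketched with Minitest from Ruby's standard library: requirement first, test second, just enough code third. The requirement and the `total_with_tax` method are hypothetical, chosen only to make the cycle concrete:

```ruby
require "minitest/autorun"

# Step 2 of the cycle: the minimum code that satisfies the test below.
# In practice this method would be written *after* the test, then refactored.
def total_with_tax(subtotal, rate)
  (subtotal * (1 + rate)).round(2)
end

class OrderTotalTest < Minitest::Test
  # Step 1: the requirement "As a user, I want order totals to include tax"
  # translated into a test.
  def test_total_includes_tax
    assert_equal 107.0, total_with_tax(100.0, 0.07)
  end
end
```

The next requirement then becomes the next test, and the cycle repeats until the spec is exhausted.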
Seminars and conferences on BDD have exploded, mainly within the Ruby community, because RSpec gets the most attention from the testing gurus and because BDD has accomplished what it set out to do: make TDD easier to understand. So more people have jumped on it, and thus there is more discussion.
The two are achingly similar: you have a spec from a user, and you need to ensure that what you create satisfies that spec. In both TDD and BDD, you create a programmatic module to ensure the spec is met. TDD tended to throw people off because it was a common mistake to assume that if something is a class/object, it must have its own test. There is some logic behind this: you have a collection of common objects/classes shared amongst many applications, so it is reasonable to have a test/spec that simply ensures the class/object does what it says on the tin. For an Iterator, you will want to know that it iterates. But for application or business logic, you may want to know whether ObjectA will cause an effect on ObjectB, ObjectC and Object22. This is where many familiar with unit testing were thrown off: they did not realise that a UnitTest (technically an integration test) can still be used here. BDD bridges this gap somewhat.
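The two levels can be contrasted in a few lines of plain Ruby. `Cart` and `Inventory` are hypothetical names standing in for ObjectA and ObjectB; the assertions are deliberately bare to keep the sketch framework-free:

```ruby
# Unit level: an Iterator should simply iterate, nothing more.
collected = []
["a", "b", "c"].each { |item| collected << item }
raise "iterator failed" unless collected == ["a", "b", "c"]

# Integration level: acting on ObjectA (Cart) should have the expected
# effect on ObjectB (Inventory) -- behaviour across objects, not internals.
class Inventory
  attr_reader :stock

  def initialize(stock)
    @stock = stock
  end

  def reserve(quantity)
    @stock -= quantity
  end
end

class Cart
  def initialize(inventory)
    @inventory = inventory
  end

  def add(quantity)
    @inventory.reserve(quantity)
  end
end

inventory = Inventory.new(10)
Cart.new(inventory).add(3)
raise "integration failed" unless inventory.stock == 7
```

The second test never peeks inside `Cart`; it only checks the observable effect on `Inventory`, which is exactly the kind of cross-object check the paragraph says people missed.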
However, if by "BDD" and the "evolution of TDD to BDD" you are including the extra publicity/emphasis the gurus are putting out there to let people know it is OK to do the above, then that is a different matter entirely. We would not be debating this if they had not changed the name of TDD, but had simply given all their seminars on TDD with the ethic of "X" (what you claim is the ethic of BDD, and I claim is the ethic of both TDD and BDD).