… Of course all Unicode values are defined by the Unicode Consortium. By definition :)
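You can even ask for those definitions directly: Python's unicodedata module wraps the Consortium's own Unicode Character Database. A minimal sketch (the characters below are just arbitrary examples):

```python
# Minimal sketch: unicodedata exposes the Unicode Character Database,
# i.e. the code points and names as the Consortium defined them.
import unicodedata

for ch in ("é", "ﬁ", "א"):
    # Print the code point and the official character name
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+00E9  LATIN SMALL LETTER E WITH ACUTE
# U+FB01  LATIN SMALL LIGATURE FI
# U+05D0  HEBREW LETTER ALEF
```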
The Unicode Consortium people are, IMO, the Unsung Heroes of this century. I vividly remember the ginormous mess of different ASCII tables and (why, thank you, Mr. Gates) Windows codepages of the late '80s. Every computer system seemed to have a character set of its own, and about the only consolation was that the really weird systems such as EBCDIC were swiftly abandoned in favor of at least the canonical ASCII set, space through tilde. (And tab and return; but that was about it.)
The WordPerfect Corp. was, in my recollection, one of the first to try to break through the 256-character barrier: they devised a system of “character sets” and linked each of these virtual composite fonts to more than one physical font, so you could “set” a font to Times, then use anything from the usual Latin characters to a full math set to Hebrew and Greek, without having to manually browse for each character in each available font … eat that, Glyphs Panel!
But with the advent of Unicode and the technological advances in font technology (I'm talking OpenType here, dudes, not your basic 256-character Type 1, or the heavily codepage-infected TrueType specs), suddenly all that was a thing of the past. About the only reasonable complaint left to us is, “what, there are only two styles of digits in this font?”
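And you can actually verify that complaint yourself: a font's digit styles live in its GSUB table as the registered OpenType features lnum, onum, pnum, and tnum. A minimal sketch using the fontTools library, assuming it's installed (pip install fonttools); “MyFont.otf” is a placeholder path, not any particular font:

```python
# Minimal sketch: list which figure-style features a font advertises
# in its GSUB table. "MyFont.otf" is a placeholder.
from fontTools.ttLib import TTFont

FIGURE_FEATURES = {
    "lnum": "lining figures",
    "onum": "oldstyle figures",
    "pnum": "proportional figures",
    "tnum": "tabular figures",
}

font = TTFont("MyFont.otf")
if "GSUB" in font and font["GSUB"].table.FeatureList is not None:
    # Collect every feature tag the font declares
    tags = {rec.FeatureTag for rec in font["GSUB"].table.FeatureList.FeatureRecord}
    for tag, name in FIGURE_FEATURES.items():
        print(f"{tag} ({name}): {'yes' if tag in tags else 'no'}")
else:
    print("No GSUB features at all; this one is stuck with its default digits.")
```

A font that answers “yes” to all four gives you both lining and oldstyle figures, each in proportional and tabular spacing, which is exactly the kind of luxury the 256-slot formats could never offer.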