Object identifiers (OIDs) and their registration authorities (2010)
Workers in those days supposed that this could cover all of the identification requirements in the whole
world! Of course, later history showed that other schemes were invented, but none (to date) with the
fundamental properties of the Object Identifier tree: a hierarchy of registration authorities, an infinitely deep
tree of nodes, and infinitely many arcs from each node.
6.2 Decisions on use of names or numbers to identify arcs in the OID tree
There was a dispute around the 1986 period on whether arcs from a node to a next-level node should be
identified by character names, or by numbers.

Author's remarks: There was much argument in the early days on whether names or numbers should be
used for identification of arcs. A compromise resulted, which has stood the test of time, but the arguments
continued into the 2010 period!

In those days, Unicode was in its infancy, and any use of character names would be just ASCII characters.
This caused a lot of discussion, with, broadly, ITU-T representatives wanting compact binary (numbers) for
the identification, and ISO representatives wanting human-friendly names.

The resulting 1986 compromise, lasting almost to the 2000 period, was for the unique and unambiguous
identification of an arc (hence identifying the node it led to) to be a simple
integer value, zero upwards, but with an additional human-readable ASCII character string that could also be
added in printed material. This was the beginning of the terminology which resulted in a "primary integer
value" (an integer uniquely and unambiguously identifying arcs from a superior node) and a "secondary
identifier" (an ASCII character string that is neither a unique nor an unambiguous identification of the arc).
These terms appeared in the 2008 edition, but the concept was there in the 1980s.
The ASCII character string could not, for ASN.1 reasons (it was an ASN.1 value name, not a type name),
contain a space character, and had to start with a lower case ASCII letter. In the end, and still to this day, the
"secondary identifier" has to start with a lower-case ASCII letter, and contain only ASCII letters, digits, and
hyphens.
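The syntax rules above are simple enough to check mechanically. The following is a minimal sketch (the helper name is invented for illustration, not taken from any standard API): a secondary identifier must start with a lower-case ASCII letter and may contain only ASCII letters, digits, and hyphens.

```python
import re

# Pattern for the secondary-identifier rules described above:
# lower-case ASCII letter first, then ASCII letters, digits, or hyphens.
SECONDARY_ID = re.compile(r"^[a-z][A-Za-z0-9-]*$")

def is_valid_secondary_identifier(s: str) -> bool:
    """Hypothetical validator for ASN.1 secondary identifiers (sketch)."""
    return SECONDARY_ID.fullmatch(s) is not None

print(is_valid_secondary_identifier("member-body"))   # True
print(is_valid_secondary_identifier("MemberBody"))    # False: upper-case start
print(is_valid_secondary_identifier("member body"))   # False: space not allowed
```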
Later, companies got merged, and wanted to change the human-readable names, whilst keeping the
unambiguous and unique integer identification of the arc from other arcs from the superior node. So today it
is recognized that the secondary identifiers (the human-readable names) are not necessarily unique, or even
unambiguous!
The 1986 compromise decision was that encodings of object identifiers (for machine to machine use) should
include only the binary integer value identifying an arc, but in addition a more human-friendly notation for
use in published material could include names. This is discussed further in clauses 9 and 9.5.
Nothing changed much until the early 2000s.
At that stage, there was increasing interest in the use of names (in any language), even in encodings, and
some old battles were re-opened. Available bandwidth had increased by then, and the need for compact
binary representations was reduced.
The old OID tree was extended (internationalized) by the ability to have one or more (the "or more" was
controversial!) so-called Unicode labels (names in any language script, encoded in Unicode) associated with
each arc, unambiguous in referencing a node, but not necessarily unique. This is covered in clause 13.
At the same time, a new ASN.1 type called OID-IRI (with a corresponding OID-IRI notation for OIDs) was
introduced to allow a node to be identified (in both human-readable notation and also in encodings), using
only a sequence of the unambiguous Unicode labels (names) from the root, separated by a solidus '/' (see
clause 9.4).
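The OID-IRI form is just an absolute path of Unicode labels from the root, so splitting on the solidus recovers the per-arc labels. A minimal sketch (the example path below is illustrative, not an assertion about any registered arc):

```python
def split_oid_iri(iri: str):
    """Split an OID-IRI into its sequence of Unicode labels (sketch).

    An OID-IRI is written from the root, so it always starts with '/'.
    """
    if not iri.startswith("/"):
        raise ValueError("an OID-IRI is absolute and starts with '/'")
    return iri[1:].split("/")

# Hypothetical example path under one of the top-level arcs:
print(split_oid_iri("/Joint-ISO-ITU-T/Example"))
```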