COBOL and ENDED-ness (and other "general issues")
Probably Chuck has already sufficiently answered the "general" questions,
but I thought I would explicitly mention a few issues that MAY help
non-COBOL people understand "the real world of COBOL today" and its
relationship to the Standard.
1) Arithmetic mode PRIMARILY deals with "intermediate results".
Traditionally (before the 2002 Standard) any arithmetic expression that
required "intermediate results" was evaluated in a TOTALLY
"implementor-defined" manner. For an example of what IBM did with their
mainframe compilers, you might want to look at:
(and several following pages).
IBM's solution is "famous" (infamous?) for the fact that if you have ONLY
integer data items and you calculate
(7 / 4) * 4
you will get an answer of 4, not 7. (Read the information pointed to above
in detail to see why.)
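The effect can be sketched (in Python rather than COBOL, purely for illustration) like this: if the intermediate quotient is held in an integer field, its fractional part is discarded before the multiplication ever happens.

```python
# Illustration (in Python, not COBOL) of an integer-only
# intermediate result: the quotient of 7 / 4 is truncated to 1
# before the multiplication, so the final answer is 4, not 7.

def integer_intermediate(a, b, c):
    """Evaluate (a / b) * c with the intermediate quotient held
    in an integer field, as an all-integer expression would be."""
    quotient = a // b      # 7 // 4 -> 1 (fractional part discarded)
    return quotient * c    # 1 * 4  -> 4

print(integer_intermediate(7, 4, 4))   # prints 4
```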
Similarly, about 15 years ago (but no longer) Micro Focus did fractional
exponentiation by "truncating" the fraction to the nearest integer and using
that as the exponent.
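That old (and, as noted, no longer current) behaviour can be sketched the same way; the function name here is invented for illustration:

```python
# Hedged sketch of fractional exponentiation where the exponent's
# fraction is truncated to an integer before the exponentiation.

def truncated_exponentiation(base, exponent):
    return base ** int(exponent)   # int() discards the fraction

print(truncated_exponentiation(9, 0.5))   # 9 ** 0 = 1, not sqrt(9) = 3.0
```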
Both of the above are "totally conforming" to the ANSI/ISO Standards (past
and present) when "NATIVE arithmetic" is in effect. In fact, some users
(and their auditors) REQUIRE compiler vendors to continue to provide
"inaccurate" results - even when better results are available.
What Chuck has been talking about here is using an IEEE defined floating
point data item as the intermediate data item for such arithmetic - and
following (mostly?) the IEEE rules for arithmetic.
This is what COBOL "arithmetic mode" is all about.
2) As far as "interchange" of data using various "encodings" goes, this is,
and continues to be, an issue for many COBOL vendors. It is NOT something that
the Standard deals with. The most common (for COBOL) issue is 8-bit ASCII
versus EBCDIC "character" data. Because there are (historically and
presently) many, MANY COBOL applications written in and for EBCDIC
environments, when COBOL programs are "ported" to ASCII environments (or
when data is exchanged between the two), this encoding issue is often
significant. A number of Windows, Unix, and Linux COBOL compilers have
compile-time options to tell the object code to ACT AS IF it was running in
an EBCDIC environment. (This impacts not only how data is interpreted but
also sort order, key values for indexed files, and several other things
"inherent" to COBOL.) To the best of my knowledge, there are no existing
COBOL compilers or run-times that assume that they must be able to handle
both ASCII and EBCDIC data "simultaneously" as character data. (There are
ways to convert from one to the other, but this is slightly different).
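A minimal sketch of the problem, using Python's built-in EBCDIC codec (IBM code page 037) as a stand-in for a mainframe encoding: the same characters become entirely different bytes, and the collation order of those bytes differs too.

```python
# The same text in ASCII and in EBCDIC (code page 037).
text = "ABC abc 123"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")   # IBM EBCDIC code page 037

print(ascii_bytes.hex())    # starts 414243...
print(ebcdic_bytes.hex())   # starts c1c2c3...

# In ASCII, upper case sorts before lower case; in EBCDIC it is
# the reverse -- one reason sort order and indexed-file key order
# change when an application is ported.
print(b"A" < b"a")                                # True  (ASCII order)
print("A".encode("cp037") < "a".encode("cp037"))  # False (EBCDIC order)
```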
When it comes to "endianness" for binary data, this is also an issue that a
number of compilers and run-times deal with. Unlike ASCII/EBCDIC, I do know
of some compilers that define "different usages" to have different
endianness, for example,
- USAGE BINARY may mean "big-endian"
- while USAGE COMP-5 means "use the endianness that is native to this
platform"
There are other compilers that simply have a compile-time directive or
option to tell the compiler to create object code that "assumes" big- vs
little-endian binary data.
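The underlying issue can be sketched with Python's struct module: the same 16-bit value is stored as different byte sequences depending on byte order, and reading one order's bytes with the other order's rules yields a different number entirely.

```python
import struct

# The 16-bit value 258 (hex 0102) in each byte order.
big    = struct.pack(">h", 258)   # big-endian    -> b'\x01\x02'
little = struct.pack("<h", 258)   # little-endian -> b'\x02\x01'

print(big.hex(), little.hex())

# Reading big-endian bytes as if they were little-endian:
print(struct.unpack("<h", big)[0])   # 513, not 258
```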
It is worth mentioning that one of the MAJOR issues for users who try to
"port" COBOL applications (programs and data) from one platform to another
is the DIFFICULTY in migrating files where the same record includes both
character data and non-display (e.g. packed-decimal or binary) numeric data.
Only an "intelligent" migration program can convert character data between
ASCII and EBCDIC while also handling (signed) numeric data that is
binary, packed-decimal, or floating-point. (I haven't even mentioned the
IBM specific encoding for floating-point data that has been an historical
issue for taking IBM mainframe COBOL applications from the mainframe to the
PC or Unix/Linux).
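A hedged sketch of why record-layout knowledge is essential (the field names and layout here are invented for illustration): a record mixing EBCDIC text with a packed-decimal field must have the text translated and the packed bytes left alone.

```python
def unpack_comp3(data: bytes) -> int:
    """Decode an IBM packed-decimal field: two digits per byte,
    sign in the low nibble of the final byte (0xD = negative)."""
    digits = ""
    for b in data[:-1]:
        digits += f"{b >> 4}{b & 0x0F}"
    last = data[-1]
    digits += str(last >> 4)
    sign = -1 if (last & 0x0F) == 0x0D else 1
    return sign * int(digits)

# Hypothetical record: 5 EBCDIC characters, then a 3-byte
# packed field (the equivalent of PIC S9(5) COMP-3).
record = "HELLO".encode("cp037") + b"\x12\x34\x5C"   # +12345

name = record[:5].decode("cp037")    # character data: must be translated
amount = unpack_comp3(record[5:])    # packed data: must NOT be translated

print(name, amount)   # HELLO 12345

# A byte-for-byte ASCII<->EBCDIC translation of the whole record
# would corrupt the packed field -- which is why the migration
# program has to know the record layout.
```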
3) HISTORICALLY, COBOL "grew up" in a time when it was the same vendor who
designed and implemented
- compiler and run-time
- operating system
- file systems
In that environment, it was (relatively) safe and easy to assume that what
the compiler and run-time had to deal with would "all work together" AND
that the user would never try to take what they had to another environment.
For many years, COBOL was (arguably) one of the MOST portable programming
languages. IF (and few programmers actually did) a program stuck to
"standard syntax", then the program could port quite easily from one
environment to another, with little more required than a recompile. Even when
extensions were used, for much of COBOL's history, IBM was a de facto
standard and MANY (most - but not all) other implementors picked up their
extensions. (Consider, for example, how common it is for COMP-3 to mean
Packed-Decimal, or how common it is to have a GOBACK statement.)
COBOL was also one of the two major X/Open programming languages so many
extensions introduced for the Unix world are also common to most of the
Unix/Linux COBOL compilers.
4) COBOL does NOT have any way to have "self-defining" data. (Well several
implementors have extensions to handle XML, but that is not currently a
part of the COBOL language). The general philosophy of COBOL is that the
program/programmer KNOWS how to interpret the data that it must deal with.
If some data is defined as packed-decimal and is really binary, or if it is
defined as ASCII and it is really EBCDIC, then "results are unpredictable".
In a few cases, it is possible to detect this (e.g. packed-decimal items
with "bad" sign-nibbles) and in those cases an "incompatible data" exception
is raised. However, in most cases it is up to the programmer to "debug"
and "fix" such data mis-handling and the language itself won't help (much).
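The kind of check a run-time CAN make is sketched below (the function is invented for illustration): packed-decimal data has a recognizable shape, so bytes that violate it reveal the mismatch, whereas most other mis-typed data looks perfectly plausible.

```python
def valid_packed(data: bytes) -> bool:
    """Return True if every nibble is a decimal digit except the
    final nibble, which must be a valid sign (0xA through 0xF)."""
    nibbles = []
    for b in data:
        nibbles += [b >> 4, b & 0x0F]
    *digits, sign = nibbles
    return all(d <= 9 for d in digits) and sign >= 0x0A

print(valid_packed(b"\x12\x3C"))   # True  (a plausible +123)
print(valid_packed(b"ABC"))        # False (character bytes, bad sign nibble)
```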
* * * * *
The bottom-line is that the current revision work is TRYING to provide
facilities that
- will take advantage of "industry standard" facilities that are already
available outside the COBOL environment on many platforms (hardware and
operating systems)
- will provide predictability, portability, and user-requested functionality
I think the recent (and past) discussions in this group will help us get the
"best possible" enhancements in the next revision. Whether we can get it all
done in time is still up in the air as is the question of whether all those
involved with the revision work see this as important enough to fix
(change) at this late date in the revision process.