Discussion:
SCALE intrinsic subprogram (aka a Fortran post)
Steven G. Kargl
2023-11-04 21:21:53 UTC
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,

program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program

This scaling does not incur a floating point
rounding error.

Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of

SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))
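For concreteness, the proposed complex SCALE can be written today as a user function (cscale is a made-up name, not an intrinsic):

```fortran
! Sketch of the proposed complex SCALE as a user function.
! "cscale" is a hypothetical name; the intrinsic only accepts REAL today.
program demo
  implicit none
  print *, cscale(cmplx(3.0, 5.0), 2)   ! scales both parts by 2**2
contains
  elemental complex function cscale(x, n)
    complex, intent(in) :: x
    integer, intent(in) :: n
    ! SCALE on each part is exact, so the complex version is exact too.
    cscale = cmplx(scale(x%re, n), scale(x%im, n), kind(x))
  end function cscale
end program demo
```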

Should the Fortran standard be amended?
--
steve
gah4
2023-11-16 01:17:50 UTC
Post by Steven G. Kargl
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = COMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran be amended?
Wow, no answer yet.

It does seem that sometimes Fortran is slow to add features, especially
when the need for them isn't shown.

It does make sense to have the complex version, though as you note, it
isn't all that hard to get away without it.

If I had a vote, it would be yes.
pehache
2023-11-16 11:17:31 UTC
Post by gah4
Post by Steven G. Kargl
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = COMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason may be that the standard doesn't specify how a complex
number is internally represented. In practice it is always represented by
a (real, imaginary) pair, but nothing would prevent a compiler from
representing it by (modulus, argument), for instance. Given that, the
standard cannot guarantee the absence of rounding errors.
Steven G. Kargl
2023-11-16 20:01:02 UTC
Post by pehache
Post by gah4
The SCALE intrinsic allows one to change the floating point exponent
for a REAL entity.
For example,
program foo
real x x = 1 print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point rounding error.
Question. Anyone know why the Fortran standard (aka J3) restricted X
to be a REAL entity? It would seem that X could be COMPLEX with
obvious equivalence of
SCALE(X,N) = COMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented. In practice it is always represented
by a pair (real,imag), but nothing would prevent a compiler representing
it by (module,argument) for instance. Given that, the standard cannot
guarantee the absence of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity. Not to mention, the Fortran standard has introduced:

3.60.1
complex part designator

9.4.4 Complex parts

R915 complex-part-designator is designator % RE
or designator % IM

PS: If a Fortran vendor used magnitude and phase, then the vendor
would need to specify a sign convention for the phasor. I'm not
aware of any vendor that does.
--
steve
pehache
2023-11-16 22:51:52 UTC
Post by Steven G. Kargl
Post by pehache
The reason is maybe because the standard doesn't specify how a complex
number is internally represented. In practice it is always represented
by a pair (real,imag), but nothing would prevent a compiler representing
it by (module,argument) for instance. Given that, the standard cannot
guarantee the absence of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase
I fully agree that it would be strange, and I can't see any advantage to
such an implementation. Yet, it is not prohibited by the standard.
Post by Steven G. Kargl
given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity.
Yes, but it's at the conceptual level.
Post by Steven G. Kargl
3.60.1
complex part designator
9.4.4 Complex parts
R915 complex-part-designator is designator % RE
or designator % IM
Yes again, but under the hood c%re and c%im could be the functions
m*cos(p) and m*sin(p). And on assignment c%re = <expr> or c%im = <expr>
the (m,p) pair could be fully recomputed.
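To make that concrete, here is a sketch (plain Fortran, nothing vendor-specific) of the round trip such an (m,p) implementation would have to perform on every part access or assignment:

```fortran
! Sketch: the polar round trip a (modulus, argument) implementation
! would perform behind the scenes on part access/assignment.
program polar_roundtrip
  implicit none
  real :: re, im, m, p
  re = 3.0
  im = 5.0
  m = hypot(re, im)    ! modulus
  p = atan2(im, re)    ! argument
  ! Recovering the parts generally does NOT reproduce them exactly:
  print *, m*cos(p) - re, m*sin(p) - im
end program polar_roundtrip
```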
Post by Steven G. Kargl
PS: If a Fortran vendor used magnitude and phase, then the vendor
would need to specify a sign convention for the phasor. I'm not
aware of any vendor that does.
I don't think so, as the phase component would not be directly
accessible by the user. The vendor could choose any convention as long
as the whole internal machinery is consistent; they could also choose
to store a scaled version of the phase in order to get better accuracy...
--
"...be open to other people's ideas, as long as they go in the same
direction as your own.", ST on fr.bio.medecine
ST passe le mur du çon : <***@mid.individual.net>
David Jones
2023-11-17 08:18:57 UTC
Post by pehache
Post by Steven G. Kargl
Post by pehache
The reason is maybe because the standard doesn't specify how a
complex number is internally represented. In practice it is
always represented by a pair (real,imag), but nothing would
prevent a compiler representing it by (module,argument) for
instance. Given that, the standard cannot guarantee the absence
of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase
I fully agree that it would be strange, and I can't see any advantage
to such implementation. Yet, it is not prohibited by the standard.
Post by Steven G. Kargl
given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity.
Yes, but it's at the conceptual level
Post by Steven G. Kargl
3.60.1
complex part designator
9.4.4 Complex parts
R915 complex-part-designator is designator % RE
or designator % IM
Yes again, but under the hood c%re and c%im could be the functions
m*cos(p) and m*sin(p). And on assignment c%re = <expr> or c%im =
<expr> the (m,p) pair could be fully recomputed.
Post by Steven G. Kargl
PS: If a Fortran vendor used magnitude and phase, then the vendor
would need to specify a sign convention for the phasor. I'm not
aware of any vendor that does.
I don't think so, as the phase component would not be directly
accessible by the user. The vendor could choose any convention as
long as the whole internal stuff is consistent, he could also chose
to store a scaled version of the phase in order to have a better
accuracy...
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
required. Two major questions are:

(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?

(ii) the relative speed of both addition and multiplication in the two
representations?
Thomas Koenig
2023-11-19 13:28:18 UTC
Post by David Jones
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations.?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli, add the phases, and normalize
the phase to lie between 0 and 2*pi.

However, the normalization step can have unintended execution
speed consequences if the processor implements it via branches,
and branches can be quite expensive if mispredicted.

_Addition_ is very expensive indeed in polar notation. You have
to compute m*cos(p) and m*sin(p) for each number, add the rectangular
parts, and then call hypot() and atan2() (with a normalization) to get
back to the polar representation.

If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
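A sketch of both operations (helper variables of my own, single precision) shows where the cost goes:

```fortran
! Sketch: multiplication vs. addition of (modulus, phase) pairs.
program polar_ops
  implicit none
  real, parameter :: twopi = 8.0*atan(1.0)
  real :: m1, p1, m2, p2, x, y
  m1 = 2.0; p1 = 0.5
  m2 = 3.0; p2 = 1.0
  ! Multiply: one multiply, one add, one range reduction.
  print *, m1*m2, modulo(p1 + p2, twopi)
  ! Add: full rectangular round trip (cos, sin, hypot, atan2).
  x = m1*cos(p1) + m2*cos(p2)
  y = m1*sin(p1) + m2*sin(p2)
  print *, hypot(x, y), modulo(atan2(y, x), twopi)
end program polar_ops
```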
Steven G. Kargl
2023-11-19 16:03:22 UTC
Post by Thomas Koenig
Post by David Jones
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations.?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli and add and normalize the modulus
to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and branches
can be quite expensive if mispredicted.
_Addition_ is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then call
atan2() (with a normalization) to get back the original representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.

% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138

7.549... is significantly different from 0.
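The same effect can be seen in plain Fortran, no tlibm needed (single precision, my own toy round trip):

```fortran
! Sketch: storing cmplx(0,1) as (modulus, phase) loses the real part.
program i_polar
  implicit none
  real :: p
  p = atan2(1.0, 0.0)   ! nearest single-precision float to pi/2
  print *, cos(p)       ! small but nonzero, unlike the exact 0
end program i_polar
```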
--
steve
David Jones
2023-11-19 18:45:24 UTC
Post by Thomas Koenig
Post by David Jones
There seems no reason why the standard might not be extended to
allow the two different types of representations of complex
variables to exist in the same program, as separate data-types,
Post by Thomas Koenig
Post by David Jones
(i) whether there are any applications that would be more readily
and usefully programmed using the modulus-phase representation?
Post by Thomas Koenig
Post by David Jones
(ii) the relative speed of both addition and multiplication in the
two representations?
Post by Thomas Koenig
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli and add and normalize the
modulus to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original
representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
If it were worth doing, the obvious thing to do would be to use a
formulation where you store a multiple of pi or 2*pi as the effective
argument, with computations done to respect a standard range.
Thomas Koenig
2023-11-19 22:00:29 UTC
Post by David Jones
Post by Thomas Koenig
Post by David Jones
There seems no reason why the standard might not be extended to
allow the two different types of representations of complex
variables to exist in the same program, as separate data-types,
Post by Thomas Koenig
Post by David Jones
(i) whether there are any applications that would be more readily
and usefully programmed using the modulus-phase representation?
Post by Thomas Koenig
Post by David Jones
(ii) the relative speed of both addition and multiplication in the
two representations?
Post by Thomas Koenig
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli and add and normalize the
modulus to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original
representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
If it were worth doing, the obvious thing to do would be to use a
formulation where you store a multiple of pi or 2*pi as the effective
argument, with computations done to respect a standard range.
It could also make sense to use a fixed-point representation for
the phase; having special accuracy around zero, as floating point
numbers do, may not be a large advantage.

The normalization step could then be a simple "and", masking
away the top bits.

This is, however, more along the lines of what a user-defined
complex type could look like, not what Fortran compilers could
reasonably provide :-)
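A toy version of that idea (my own layout: a 16-bit integer phase counting fractions of a full turn):

```fortran
! Sketch: phase as a 16-bit fixed-point fraction of a full turn;
! normalization is a single AND, no branch.
program fixed_phase
  implicit none
  integer, parameter :: bits = 16, mask = 2**bits - 1
  integer :: p1, p2
  p1 = 3*2**bits/8              ! 3/8 of a turn
  p2 = 7*2**bits/8              ! 7/8 of a turn
  print *, iand(p1 + p2, mask)  ! 10/8 of a turn wraps to 1/4: 16384
end program fixed_phase
```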
unknown
2023-11-20 10:41:17 UTC
Post by David Jones
Post by Thomas Koenig
Post by David Jones
There seems no reason why the standard might not be extended to
allow the two different types of representations of complex
variables to exist in the same program, as separate data-types,
(i) whether there are any applications that would be more readily
and usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the
two representations?
Post by Thomas Koenig
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli and add and normalize the
modulus to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original
representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
Post by David Jones
If it were worth doing, the obvious thing to do would be to use a
formulation where you store a multiple of pi or 2*pi as the
effective argument, with computations done to respect a standard
range.
It could also make sense to use a fixed-number representation for
the phase; having special accuracy around zero, as floating point
numbers do, may not be a large advantage.
The normalization step could then be a simple "and", masking
away the top bits.
This is, however, more along the lines of what a user-defined
complex type could look like, not what Fortran compilers could
reasonably provide :-)
Any extension to the existing standard is at least possible. But the
real question is whether there are enough (or any at all) applications
that require only (or mainly) complex multiplications as opposed to
additions. I can't think of any.
pehache
2023-12-21 15:52:03 UTC
Post by pehache
Post by Steven G. Kargl
Post by pehache
The reason is maybe because the standard doesn't specify how a complex
number is internally represented. In practice it is always represented
by a pair (real,imag), but nothing would prevent a compiler representing
it by (module,argument) for instance. Given that, the standard cannot
guarantee the absence of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase
I fully agree that it would be strange, and I can't see any advantage to
such implementation. Yet, it is not prohibited by the standard.
Post by Steven G. Kargl
given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity.
Yes, but it's at the conceptual level
Post by Steven G. Kargl
3.60.1
complex part designator
9.4.4 Complex parts
R915 complex-part-designator is designator % RE
or designator % IM
Yes again, but under the hood c%re and c%im could be the functions
m*cos(p) and m*sin(p). And on assignment c%re = <expr> or c%im = <expr>
the (m,p) pair could be fully recomputed.
Well, after all I have to disagree with myself...

Since it's possible to get pointers to these designators:

complex, allocatable, target :: c(:)
real, pointer :: rr(:), ri(:)
rr => c(:)%re
ri => c(:)%im

The compiler really has no choice other than implementing a complex
as a (real, imaginary) pair: if %re and %im were hidden functions, such
pointer assignments would not be possible. Still, the order is not
clearly enforced, and it could be (imaginary, real).
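A complete version of that snippet, for the record:

```fortran
! The %re/%im pointers stride through memory, which only works if a
! complex really is a stored (real, imaginary) pair.
program part_pointers
  implicit none
  complex, allocatable, target :: c(:)
  real, pointer :: rr(:), ri(:)
  allocate(c(3))
  c = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
  rr => c(:)%re
  ri => c(:)%im
  rr(2) = 30.0     ! writes through to the real part of c(2)
  print *, c(2)    ! (30.0, 4.0)
end program part_pointers
```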

Thomas Koenig
2023-11-17 06:48:54 UTC
Post by pehache
Post by gah4
Post by Steven G. Kargl
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = COMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented.
I disagree almost entirely.

Subclause 19.6.5 of F2018, "Events that cause variables to become
defined" has

(13) When a default complex entity becomes defined, all partially
associated default real entities become defined.

(14) When both parts of a default complex entity become defined as
a result of partially associated default real or default complex
entities becoming defined, the default complex entity becomes
defined.

Which means that something like

real :: a(2)
complex :: c
equivalence (a,c)

allows you to set values for a(1) and a(2) and you can expect the
components of c to get the corresponding values.

This is important for FFT.

Now, you might argue that the compiler can invoke "as if", but there
is no practical way to use any other complex representation.
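The subclause can be exercised directly; a minimal demonstration (which every compiler I know of satisfies with a(1) as the real part):

```fortran
! Storage association: defining a(1) and a(2) defines c (19.6.5 (14)),
! and in practice a(1) is the real part, a(2) the imaginary part.
program equiv_demo
  implicit none
  real :: a(2)
  complex :: c
  equivalence (a, c)
  a(1) = 1.5
  a(2) = -2.5
  print *, c    ! (1.5, -2.5) with every known compiler
end program equiv_demo
```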
pehache
2023-11-17 08:22:53 UTC
Post by Thomas Koenig
Post by pehache
Post by gah4
Post by Steven G. Kargl
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = COMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented.
I disagree almost entirely.
Subclause 19.6.5 of F2018, "Events that cause variables to become
defined" has
(13) When a default complex entity becomes defined, all partially
associated default real entities become defined.
(14) When both parts of a default complex entity become defined as
a result of partially associated default real or default complex
entities becoming defined, the default complex entity becomes
defined.
Which means that something like
real :: a(2)
complex :: c
equivalence (a,c)
allows you to set values for a(1) and a(2) and you can expect the
components of c to get the corresponding values.
I almost entirely disagree with your almost entire disagreement :)

The standard requires the complex type to occupy 2 numeric storage
units, which allows the above equivalence, and the clause above says
that a complex is made of two adjacent reals. However, it does not say
precisely what a(1) and a(2) are: they could be the modulus and phase
(or the imaginary part and the real part, in that order).
Post by Thomas Koenig
This is important for FFT.
We are all relying on the fact that in your equivalence above a(1) is
the real part and a(2) is the imaginary part; all compilers follow this
convention, and nobody would "buy" a compiler that followed another
one. Nonetheless this is just a convention, not something enforced
by the standard.