Discussion:
Mixed language programming Tcl/Tk and Fortran (Windows)
Gustav Ivanovic
2003-09-22 20:05:56 UTC
Permalink
I would like to share my first experience of mixed-language programming
using Tcl/Tk and Fortran.

I have been working for some time in an engineering department of a big
company. Recently the company's IT policy changed: we now have Windows XP
without any administrator privileges, so we no longer have the possibility
to install any GUI programming tool such as Visual Basic.

By chance, I stumbled on comp.lang.tcl and found that Tcl and Tk are the
solution to our difficulty of developing GUIs without administrator
privileges on our XP workstations.

Until now our programs have run on Unix servers and the GUIs have been
built on PCs using MS Visual Basic. We are used to having our Fortran
programs take their input/output as files. Input and output files are sent
to or fetched from the Unix machines using ftp or rcp, and the programs on
the Unix servers are triggered by rexec or rsh. Briefly, all communication
between Unix and XP is done via files (instead of sockets) to keep the
architecture simple.

We now have very fast PCs and have started to migrate our Fortran
applications to them using the Compaq Visual Fortran 6 compiler (CVF is
the only development tool officially "approved" by our bloody IT dept.).
All the Fortran programs run fine in this PC environment; the remaining
problem is the GUI, because we have lost the ability to write in VB.

Since all communication with our Fortran programs is done through files,
the port to the PC is quite straightforward. The latest experiment we have
done is to build DLLs from our Fortran code and plug them into a GUI built
in Tcl/Tk with the help of ffidl. Tclkit allows us to work without any
installation whatsoever on our XP computers.

We have now left the old Unix servers and work almost exclusively on our
XP workstations. Only the basic engineering database remains on the Unix
servers (a kind of vault).

I put some examples hereafter to illustrate how we transfer information
from the Tcl/Tk GUI to a Fortran DLL. The only transfer from Fortran back
to Tcl/Tk we have managed so far is via the return value of an integer or
real function (see snippets).

FORTRAN PART (Compaq Visual Fortran)
=========================================
module tcl

contains

  integer function integervar(n)
    !DEC$ ATTRIBUTES DLLEXPORT, ALIAS: 'integervar', STDCALL :: integervar
    integer :: n
    integervar = n * n
  end function integervar

  real function realvar(x)
    !DEC$ ATTRIBUTES DLLEXPORT, ALIAS: 'realvar', STDCALL :: realvar
    real :: x
    realvar = x * x
  end function realvar

  subroutine stringvar(length, line)
    !DEC$ ATTRIBUTES DLLEXPORT, ALIAS: 'stringvar', STDCALL :: stringvar
    !DEC$ ATTRIBUTES REFERENCE :: line
    integer :: length, fileunit
    character(len=length) :: line
    fileunit = 1
    ! use the received string both as the file name and as the file contents
    open(fileunit, file=line)
    write(fileunit, '(A)') line
    close(fileunit)
  end subroutine stringvar

end module tcl

END OF FORTRAN

TCL/TK script (with ffidl05.dll)
=========================================

load ../ffidl05
ffidl::callout RunFortranint {int} int [ffidl::symbol testcvf.dll integervar]
ffidl::callout RunFortranstr {int pointer-utf8} int [ffidl::symbol testcvf.dll stringvar]
ffidl::callout RunFortranfloat {float} float [ffidl::symbol testcvf.dll realvar]

set buffer ABCDE
puts $buffer

puts [RunFortranint 12]
puts [RunFortranstr [string length $buffer] $buffer]
puts [RunFortranfloat 2.5]

END OF TCL/TK script


Interesting links:

http://mini.net/tcl/ffidl

wiki.tcl.tk/ffidl

www.tcl.tk

wiki.tcl.tk/tclkit

http://mini.net/tcl/tclkit
Greg Lindahl
2003-09-22 20:39:51 UTC
Permalink
Post by Gustav Ivanovic
We have now very fast PC and we started to migrate our fortran apps to
PC using Compaq Visual Fortran 6 compiler (CVF is the only development
tool officially "approved" by our bloody IT dept.).
You might want to check out the Windows GUI stuff supported by Compaq
Visual Fortran. If the languages you used before were Basic and
Fortran, it might be cheaper to use a Fortran GUI tool than a Tcl/Tk GUI
tool.

-- greg
Gustav Ivanovic
2003-09-23 12:03:32 UTC
Permalink
Tcl/Tk is free. Please visit www.tcl.tk

When you use Tclkit, there is nothing to install: just copy the .exe
file and there you go!

We first tried to build the GUI with Compaq Visual Fortran, but discovered
that it was a pain. Tcl/Tk is much easier. The fact that we have written
many Korn shell (ksh) scripts on Unix helps us a lot in understanding
tclsh and wish (Tcl/Tk).

The features offered by Tcl/Tk are quite interesting. I started only
four months ago and I am fully satisfied with this incredible scripting
language. For instance, I was quite amazed by the ffidl extension, which
allows me to call Fortran DLLs from a Tcl script (YES, it is a SCRIPT).



***@pbm.com (Greg Lindahl) wrote in message news:<3f6f5e15$***@news.meer.net>...

[...]
Post by Greg Lindahl
If the languages you used before were Basic and
Fortran, it might be cheaper to use a Fortran GUI tool than a Tcl/TK GUI
tool.
-- greg
Brooks Moses
2003-09-23 02:57:31 UTC
Permalink
Post by Gustav Ivanovic
I would like to share my first experience of mixed language
programming using tcl/tk and fortran.
[...]

Thanks! I've saved a copy of your post for future reference, if I need
to do something like this in the future.

- Brooks
--
Remove "-usenet" from my address to reply; the bmoses-usenet address
is currently disabled due to an overload of W32.Gibe-F worm emails.
Michael Schlenker
2003-09-23 03:14:27 UTC
Permalink
Post by Gustav Ivanovic
I put some examples here after to give an illustration of how we
transfer information from Tcl/Tk GUI to fortran dll. The only transfer
from fortran to Tcl/Tk we managed to do is by return of integer or
real function (see snippets).
Maybe this helps with passing other types when putting Fortran and Tcl
together: http://wiki.tcl.tk/3359

Michael
Arjen Markus
2003-09-24 07:52:25 UTC
Permalink
Post by Gustav Ivanovic
I would like to share my first experience of mixed language
programming using tcl/tk and fortran.
We have now left the old Unix server and work almost exclusively on
our XP workstation. Only the basic engineering database remains on
Unix servers (a kind of vault).
I put some examples here after to give an illustration of how we
transfer information from Tcl/Tk GUI to fortran dll. The only transfer
from fortran to Tcl/Tk we managed to do is by return of integer or
real function (see snippets).
Nice work.

You may be interested in some of my experiments (see
<http://wiki.tcl.tk/2?fortran> for instance). I have built (based on work
by others :) libraries to call Fortran routines from Tcl and vice versa.
(The main problem: finding the time to consolidate this stuff :()

Regards,

Arjen
Gustav Ivanovic
2003-09-25 17:45:30 UTC
Permalink
I visited your pages. One of the tricks you present uses pipes, and it
gave me an idea: instead of pipes, files can be used. I think it will be
extremely easy to pass values from Tcl to Fortran and vice versa using
ffidl and files. It won't be efficient of course, but the whole system
will be easily maintainable.

The basic assumptions are:
1. Tcl/Tk part will be only for GUIs and launching scripts
2. the number crunching part will be done by fortran DLLs.

Therefore the relatively slow transfer through files is negligible
compared to the time the HUMAN needs to enter the necessary data and the
number-crunching time in Fortran.

When I have finished writing examples based on this idea, I'll post the
outcome here.
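
To make the idea concrete, here is a minimal sketch of the kind of DLL
routine I have in mind (the routine name, the file names params.txt and
result.txt, and the arithmetic are only placeholders, not our real code):

subroutine runcase()
!DEC$ ATTRIBUTES DLLEXPORT, ALIAS: 'runcase', STDCALL :: runcase
  real :: a, b
  integer :: iu
  iu = 10
  ! the Tcl GUI has written the two input values to params.txt before the call
  open(iu, file='params.txt', status='old')
  read(iu, *) a, b
  close(iu)
  ! stand-in for the real number crunching
  open(iu, file='result.txt', status='replace')
  write(iu, *) a*a + b*b
  close(iu)
end subroutine runcase

On the Tcl side it would be bound with something like
ffidl::callout RunCase {} void [ffidl::symbol testcvf.dll runcase]
and called after the GUI has written params.txt; the GUI then reads
result.txt back with ordinary Tcl file commands.
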
Post by Arjen Markus
You may be interested in some of my experiments (see
<http://wiki.tcl.tk/2?fortran
for instance). I have built (based on work by others :) libraries to
call
Fortran routines from Tcl and vice versa. (The main problem: make time
to consolidate this stuff :()
Chang Li
2003-09-26 01:52:43 UTC
Permalink
Post by Gustav Ivanovic
1. Tcl/Tk part will be only for GUIs and launching scripts
2. the number crunching part will be done by fortran DLLs.
Is it a good idea to run two processes, one for the GUI and another for
the Fortran computation?
Post by Gustav Ivanovic
Therefore a relatively slow transfer using files is negligible
compared to time the HUMAN uses to enter the necessary data and the
number crunching time in fortran.
Chang
Arjen Markus
2003-09-26 06:32:38 UTC
Permalink
Post by Chang Li
Post by Gustav Ivanovic
1. Tcl/Tk part will be only for GUIs and launching scripts
2. the number crunching part will be done by fortran DLLs.
Is it good to run two processes: one for GUI and another for Fortran
computing?
Post by Gustav Ivanovic
Therefore a relatively slow transfer using files is negligible
compared to time the HUMAN uses to enter the necessary data and the
number crunching time in fortran.
Chang
I have experimented with that set-up myself: no problem.
In general, a GUI will not send too much stuff to a computational
component, but perhaps the results are huge (for graphical presentation).
In that case, use binary files.

The big issue is that some Fortran compilers do not support flushing the
output files. That has been the major problem I have encountered.

But if you use two different processes, you have a very loose connection
between two entirely different pieces of your system, so independent
development and maintenance is much easier.

Regards,

Arjen
Greg Chien
2003-09-26 14:29:11 UTC
Permalink
Post by Arjen Markus
I have experimented with that set-up myself: no problem.
In general: a GUI will not send too much stuff to a
computational component, but perhaps the results are huge
(for graphical presentation). In that case, use binary files.
As long as the GUI refrains from controlling the computational component
while the latter is running. However, with the increase of computing
power (and sometimes being influenced by interactive games), the users
tend to expect faster turn-around and, ultimately, interactive control of
the computation processes.
Post by Arjen Markus
The big issue is that some Fortran compilers do not do support
flushing the output files. That has been the major problem that
I have encountered.
<Sarcasm>
Perhaps, Giles can teach you how to solve the problem, or design the
system correctly. ;-)
</Sarcasm>
Post by Arjen Markus
But if you use two different processes, you have a very loose
connection between two entirely different pieces of your system,
so independent development and maintenance is much easier.
Loosely coupled, I agree. Easier? Well, you will need to work alone or
have a highly disciplined team in order to synchronize the changes between
the two. I've seen too often that by the time the GUI is done, it is
obsolescent due to underlying requirement changes/enhancements in the
computational module. The other problem is code duplication between the
Fortran routines and a GUI component built with different languages/tools.
Both of them need "read and validate" routines to fetch input data and
guard against errors, and these functions generally overlap. If the GUI
and the computation routines are developed in the same language system, or
with a well thought-out scheme (hint, hint :-) the input data are already
in memory before the Fortran DLLs are called. The "read and validate"
routines will then be in only one place, which is much easier to develop
and maintain.
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Chang Li
2003-09-26 16:44:06 UTC
Permalink
Post by Greg Chien
Post by Arjen Markus
I have experimented with that set-up myself: no problem.
In general: a GUI will not send too much stuff to a
computational component, but perhaps the results are huge
(for graphical presentation). In that case, use binary files.
Our MMF (Memory-Mapped File) package on Windows may help. Look at
http://www.neatware.com/myrmecox/studio/ex_mmf.html

Two processes can share common data in memory. When there is only one
writer and multiple readers, the synchronization is easier.
Post by Greg Chien
Post by Arjen Markus
But if you use two different processes, you have a very loose
connection between two entirely different pieces of your system,
so independent development and maintenance is much easier.
Loosely coupled, I agree. Easier? Well, you will need to work alone or
have a highly disciplined team in order to synchronize the changes between
the two. I've seen too often that when the GUI is done, it is obsolescent
due to underlying requirement changes/enhancements in the computational
module.
The shared data must be defined to follow a common protocol.
Post by Greg Chien
The other problem is code duplication in Fortran routines and GUI
component built in different languages/tools. Both of them need "read and
validate" routines to fetch input data and guard against errors and the
functions are generally overlapped. If the GUI and the computation
routines are developed in the same language system, or with a well
thought-out scheme (hint, hint :-) the input data have already been in the
memory before the Fortran DLLs are called. The "read and validate"
routines will be in only one place, which is much easier to develop and
maintain.
I guess MMF can help solve this problem; it works on shared memory.
The read-and-validate routines may still be needed, but in a common Tcl
library. Tcl is the wrapper on both processes.

Chang
Post by Greg Chien
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Greg Chien
2003-09-26 18:10:17 UTC
Permalink
Post by Chang Li
Post by Greg Chien
I've seen too often that when the GUI is done, it is obsolescent
due to underlying requirement changes/enhancements in the
computational module.
The shared data must be defined to follow a common protocol.
Common protocol is certainly needed. However, it's easier said than done
and people tend to trivialize it. For example, a new version may have a
double precision floating point field that is changed from an integer in
an earlier version; an array is needed and was a scalar before; three more
fields have to be added and one should be deleted, etc. A good version
evolution/control mechanism in GUI and computational components and
between them is needed.
Post by Chang Li
Post by Greg Chien
If the GUI and the computation routines are developed in
the same language system, or with a well thought-out
scheme (hint, hint :-) the input data have already been in
the memory before the Fortran DLLs are called.
Guess MMF can help solve this problem. It worked on shared
memory. The read and validate routines may still needed but
in a public Tcl library. Tcl is the wrap on both processes.
I think I should have said that the input data have already been in the
memory, *parsed*, *validated*, and *ready-to-use* before the Fortran DLLs
are called. Using a file, whether it is disc or memory based, still
requires the computational component to do all the chores in a different
language. To ask simply, can one use Tcl to write Fortran user defined
types into the memory-mapped files so that the computational routines can
use them "as is" (and vice versa)? I found the C-interop in F2k3 a very
encouraging standard to solve this problem. Do you have Tcl/Fortran
interop in this level and quality? Perhaps, one will be forced to do
Tcl/C/Fortran mixed in the future (in the context of directly using the
memory, not external files)?
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Chang Li
2003-09-26 21:23:19 UTC
Permalink
Post by Greg Chien
Post by Chang Li
Post by Greg Chien
I've seen too often that when the GUI is done, it is obsolescent
due to underlying requirement changes/enhancements in the
computational module.
The shared data must be defined to follow a common protocol.
Common protocol is certainly needed. However, it's easier said than done
and people tend to trivialize it. For example, a new version may have a
double precision floating point field that is changed from an integer in
an earlier version; an array is needed and was a scalar before; three more
fields have to be added and one should be deleted, etc. A good version
evolution/control mechanism in GUI and computational components and
between them is needed.
That is a design problem. When the output format of the program is not
fixed, the GUI program has to change in any case.
Post by Greg Chien
Post by Chang Li
Post by Greg Chien
If the GUI and the computation routines are developed in
the same language system, or with a well thought-out
scheme (hint, hint :-) the input data have already been in
the memory before the Fortran DLLs are called.
Guess MMF can help solve this problem. It worked on shared
memory. The read and validate routines may still needed but
in a public Tcl library. Tcl is the wrap on both processes.
I think I should have said that the input data have already been in the
memory, *parsed*, *validated*, and *ready-to-use* before the Fortran DLLs
are called. Using a file, whether it is disc or memory based, still
requires the computational component to do all the chores in a different
language. To ask simply, can one use Tcl to write Fortran user defined
types into the memory-mapped files so that the computational routines can
use them "as is" (and vice versa)? I found the C-interop in F2k3 a very
encouraging standard to solve this problem. Do you have Tcl/Fortran
interop in this level and quality? Perhaps, one will be forced to do
Tcl/C/Fortran mixed in the future (in the context of directly using the
memory, not external files)?
The idea is to use shared memory rather than a file or pipe for process
communication. The Fortran process can write data to that shared memory,
and the Tcl GUI process can read the data and present it. The shared
memory is used like a normal array. I do not know whether Fortran supports
MMF, but C can do it, so a (Fortran + C) process and a Tcl process can
share memory for communication. The C code provides the APIs for Fortran
to use the MMF. I agree that a Tcl wrapper on Fortran may not be the best
solution. If Fortran supported MMF, the C layer would be unnecessary.

In general, two MMF objects are required for a Fortran/Tcl program: one
MMF that Fortran reads and Tcl writes, and another that Fortran writes and
Tcl reads. This model greatly simplifies the synchronization. However,
Windows message communication between processes is weak; it is stupid to
have to find a window title rather than a pid to send a message.

Chang
Post by Greg Chien
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Greg Chien
2003-09-26 22:49:29 UTC
Permalink
Fortran process can write the data to that shared memory, Tcl GUI
process can read the data and present it.
If we can execute the GUI (built in C, for example) and the Fortran
component in the same process space (perhaps, in a computation/worker
thread), fetching and updating data do not require interprocess
communication (IPC) or external file at all. All the GUI has to do is to
package the data and pass pointers to the Fortran routines. Going across
the process boundary is much less efficient, no matter what mechanism is
used. Besides, there is no "standard library" that does IPC in Fortran,
whereas the upcoming F2k3 does have pointer dereferencing between Fortran
and C. In short, when the GUI is in C, the IPC and external file issues
are moot.
IN general, two MMF objects are required for Fortran/Tcl program.
One MMF for Fortran read and Tcl write, another MMF for Fortran
write and Tcl read. This model greatly simplify the synchronization.
No, you just let data go in and out between the two programs. To
synchronize them you still need locking mechanisms such as semaphore,
mutex, etc.
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Chang Li
2003-09-27 01:23:32 UTC
Permalink
Post by Greg Chien
Fortran process can write the data to that shared memory, Tcl GUI
process can read the data and present it.
If we can execute the GUI (built in C, for example) and the Fortran
component in the same process space (perhaps, in a computation/worker
thread), fetching and updating data do not require interprocess
communication (IPC) or external file at all. All the GUI has to do is to
package the data and pass pointers to the Fortran routines. Going across
the process boundary is much less efficient, no matter what mechanism is
used. Besides, there is no "standard library" that does IPC in Fortran,
whereas the upcoming F2k3 does have pointer dereferencing between Fortran
and C. In short, when the GUI is in C, the IPC and external file issues
are moot.
The GUI is in Tk not in C. C is only used to implement the APIs.
Post by Greg Chien
IN general, two MMF objects are required for Fortran/Tcl program.
One MMF for Fortran read and Tcl write, another MMF for Fortran
write and Tcl read. This model greatly simplify the synchronization.
No, you just let data go in and out between the two programs. To
synchronize them you still need locking mechanisms such as semaphore,
mutex, etc.
No lock needed if there is only one writer and multiple readers.

Chang
Post by Greg Chien
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Arjen Markus
2003-09-29 06:54:08 UTC
Permalink
Post by Greg Chien
Post by Arjen Markus
I have experimented with that set-up myself: no problem.
In general: a GUI will not send too much stuff to a
computational component, but perhaps the results are huge
(for graphical presentation). In that case, use binary files.
As long as the GUI refrains from controlling the computational component
while the latter is running. However, with the increase of computing
power (and sometimes being influenced by interactive games), the users
tend to expect faster turn-around and, ultimately, interactive control of
the computation processes.
:) I was thinking of the big programs we run here - they run for hours on
end, and the main feedback _to_ the user is via "online visualisation".

But I agree, there are many, many more possible scenarios. To counter your
hint, why not write an article for the Fortran Forum on these issues? :D
Post by Greg Chien
Post by Arjen Markus
The big issue is that some Fortran compilers do not do support
flushing the output files. That has been the major problem that
I have encountered.
<Sarcasm>
Perhaps, Giles can teach you how to solve the problem, or design the
system correctly. ;-)
</Sarcasm>
Sorry, the sarcasm is lost on me - I must have missed one or more
discussions (or parts of some I did read).
Post by Greg Chien
Post by Arjen Markus
But if you use two different processes, you have a very loose
connection between two entirely different pieces of your system,
so independent development and maintenance is much easier.
Loosely coupled, I agree. Easier? Well, you will need to work alone or
have a highly disciplined team in order to synchronize the changes between
the two. I've seen too often that when the GUI is done, it is obsolescent
due to underlying requirement changes/enhancements in the computational
module. The other problem is code duplication in Fortran routines and GUI
component built in different languages/tools. Both of them need "read and
validate" routines to fetch input data and guard against errors and the
functions are generally overlapped. If the GUI and the computation
routines are developed in the same language system, or with a well
thought-out scheme (hint, hint :-) the input data have already been in the
Re hint, hint: see above
Post by Greg Chien
memory before the Fortran DLLs are called. The "read and validate"
routines will be in only one place, which is much easier to develop and
maintain.
It is a very interesting subject, with lots of aspects! Boy, I wish there
were more time in a day, as I would love to write or read about that!

Regards,

Arjen
Greg Chien
2003-09-29 22:53:37 UTC
Permalink
Post by Arjen Markus
Post by Greg Chien
As long as the GUI refrains from controlling the computational
component while the latter is running. However, with the increase
of computing power (and sometimes being influenced by interactive
games), the users tend to expect faster turn-around and,
ultimately, interactive control of the computation processes.
:) I was thinking of the big programs we run here - they run for
hours on end, and the main feedback _to_ the user is via "online
visualisation".
Do you mean "post-processing" after the computation is all done?
Post by Arjen Markus
But I agree, there are many many more possible scenarios. To
counter your hint, why not write an article for the Fortran Forum
on these issues? :D
My first attempt can be found in:
http://www.compaq.com/fortran/visual/vfn09/page2.html
Be warned that it uses Cray pointers to fetch the data objects packaged by
the GUI program written in C/C++ (the memory management part is in C,
making it easier to communicate with Fortran and others). This is the
main reason why I am advocating C-Interop in F2k3 so that the programming
interface can conform to the standard. That'll be the time to write one
for Fortran Forum :-)
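
For readers who have not seen the trick, the following is a rough, generic
illustration of using a Cray pointer to view memory handed over as a raw
address - it is not the code from the article, and the names (and the
32-bit default-integer address) are made up:

subroutine show_data(addr, n)
  integer :: n
  integer :: addr   ! raw address packaged by the C/C++ side (fits a default
                    ! integer on 32-bit Windows; purely illustrative)
  real :: x(n)      ! pointee: overlays whatever memory addr points at
  pointer (p, x)
  p = addr
  print *, 'sum of data supplied by the GUI =', sum(x)
end subroutine show_data
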
Post by Arjen Markus
Sorry, the sarcasm is lost on me - I must have missed one or
more discussions (or parts of some I did read)
Giles has followed up in this thread. For the historical arguments on
FLUSH, you can search google news group with "Giles Fortran Flush."
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Arjen Markus
2003-10-01 06:45:42 UTC
Permalink
Post by Greg Chien
Post by Arjen Markus
Post by Greg Chien
As long as the GUI refrains from controlling the computational
component while the latter is running. However, with the increase
of computing power (and sometimes being influenced by interactive
games), the users tend to expect faster turn-around and,
ultimately, interactive control of the computation processes.
:) I was thinking of the big programs we run here - they run for
hours on end, and the main feedback _to_ the user is via "online
visualisation".
Do you mean "post-processing" after the computation is all done?
That is one way. The other is to display the results of intermediate
steps right away, while the calculation is in progress.
Post by Greg Chien
Post by Arjen Markus
But I agree, there are many many more possible scenarios. To
counter your hint, why not write an article for the Fortran Forum
on these issues? :D
http://www.compaq.com/fortran/visual/vfn09/page2.html
Got that. I think I have seen it before ...
Post by Greg Chien
Post by Arjen Markus
Sorry, the sarcasm is lost on me - I must have missed one or
more discussions (or parts of some I did read)
Giles has followed up in this thread. For the historical arguments on
FLUSH, you can search google news group with "Giles Fortran Flush."
Ah, I had missed that, but now I see the proposal. I would vote "yes"
to that (though I cannot foresee all the consequences; it seems
straightforward, and it transfers the burden from the programmer to the
computer).

Regards,

Arjen
James Giles
2003-09-29 19:21:22 UTC
Permalink
...
Post by Greg Chien
Post by Arjen Markus
The big issue is that some Fortran compilers do not do support
flushing the output files. That has been the major problem that
I have encountered.
<Sarcasm>
Perhaps, Giles can teach you how to solve the problem, or design the
system correctly. ;-)
</Sarcasm>
Yes, and I did so in a previous thread. Not only that, I proposed an
alternative language feature which correctly flushes buffers when
necessary (on those poorly designed systems which require it).
It would be a much better candidate for standardization than FLUSH.

Evidently, opposing features that are frequently the cause of serious
error is an appropriate target of sarcasm. While promoting such
features is filled with virtue.
--
J. Giles
Greg Chien
2003-09-29 22:30:13 UTC
Permalink
Post by James Giles
Yes, and I did so in a previous thread. Not only that, I proposed an
alternative language feature which correctly flushes buffers when
necessary (on those poorly designed systems which require it).
It would be a much better candidate for standardization than FLUSH.
I must have missed your proposal. Could you please provide a link?
Post by James Giles
Evidently, opposing features that are frequently the cause of serious
error is an appropriate target of sarcasm. While promoting such
features is filled with virtue.
Unfortunately (or fortunately ;-), we are not living in Utopia.
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
James Giles
2003-09-29 22:43:33 UTC
Permalink
Post by James Giles
Yes, and I did so in a previous thread. Not only that, I proposed an
alternative language feature which correctly flushes buffers when
necessary (on those poorly designed systems which require it).
It would be a much better candidate for standardization than FLUSH.
I must have missed your proposal. Could you please provide a link?
I don't feel like digging through Google. The proposal is to add a line in
the standard saying that all WRITE statements always implicitly flush as
their last operation. Also, an OPEN statement option to request that such
automatic flushes not be performed on particular files (where the user of
the application would then be responsible for guaranteeing that no timing
or buffering delays are relevant for that file).

That is the only rule that's objectively correct and portable for using
flush: it must be performed immediately after *all* writes. Any other
choice is error-prone (and in a non-portable, often unreproducible way).
Post by James Giles
Evidently, opposing features that are frequently the cause of serious
error is an appropriate target of sarcasm. While promoting such
features is filled with virtue.
Unfortunately (or fortunately ;-), we are not living in Utopia.
I never claimed we were. I do think, however, that we should promote
things that are improvements, and that it's counterproductive to promote
things that aren't.
--
J. Giles
Greg Chien
2003-09-30 13:19:46 UTC
Permalink
The proposal is to add a line in the standard that all WRITE
statements always implicitly flush as their last operation.
Also, an OPEN statement option to request that such
automatic flushes not be performed on particular files (where
the user of the application would then be responsible to
guarantee that no timing or buffering delays are relevant for
that file).
As an example, please allow me to elaborate on Arjen's case, even though I
am not familiar with his exact situation. Suppose we want our numerical
solvers to write output data to a file at every time step, and there is a
GUI process (written in any language) that reads the file and plots the
data at, hopefully, every time step. Because there can be many WRITE
statements in an interval, implicitly flushing after each write is
inefficient for file operations. If we turn off automatic flushing in
OPEN, there is no way in your proposal to tell the system that a time step
has been reached, to guarantee that the contents have been written to the
file for the GUI process. A simple FLUSH statement does the job, however.
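
To make the scenario concrete, a minimal sketch (file name and formats are
arbitrary; the flush call here is the nonstandard call flush(unit)
extension most current compilers provide, which F2003 would spell as a
FLUSH statement):

program stepped_output
  integer :: step, i, iu
  real :: y(3)
  iu = 20
  open(iu, file='monitor.dat', status='replace')
  do step = 1, 100
     y = real(step) * (/ 1.0, 2.0, 3.0 /)
     do i = 1, 3                       ! several WRITEs within one time step
        write(iu, '(i6, i3, es14.6)') step, i, y(i)
     end do
     call flush(iu)                    ! one flush per time step, so the GUI
  end do                               ! sees only complete steps
  close(iu)
end program stepped_output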

Suppose we had eliminated FLUSH; the situation might lead us to the
following possible solutions:
1. Do-it-yourself buffering: put all the needed output data in one big
long string, using internal writes and concatenation, and use a single
WRITE statement at each time step (a sketch follows this list). One would
have to manage the string and allocate more memory should the data output
exceed the string buffer.
2. Use our in-memory scheme: manipulate all data in memory and let the GUI
module peek into the memory to fetch whatever is needed for plotting. One
can still write output to files as before for postprocessing, but they are
not used for monitoring/manipulating the computational process.
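
A sketch of option 1, using an internal WRITE to build one record per time
step (buffer size and formats are arbitrary):

program diy_buffering
  character(len=4096) :: buf
  integer :: pos, i, iu
  real :: y(3)
  iu = 20
  open(iu, file='monitor.dat', status='replace')
  y = (/ 1.0, 2.0, 3.0 /)
  buf = ''
  pos = 1
  do i = 1, 3
     write(buf(pos:), '(es14.6)') y(i)   ! internal WRITE appends into the string
     pos = len_trim(buf) + 1
  end do
  write(iu, '(a)') buf(1:pos-1)          ! a single external WRITE for the whole step
  close(iu)
end program diy_buffering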

With legacy code that WRITEs everywhere, I am not sure the above solutions
are feasible without major rework. There could be more alternatives;
that's why I would like to solicit thoughts from you and others in c.l.f
[tcl group trimmed].
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
James Giles
2003-09-30 17:13:26 UTC
Permalink
Post by Greg Chien
The proposal is to add a line in the standard that all WRITE
statements always implicitly flush as their last operation.
Also, an OPEN statement option to request that such
automatic flushes not be performed on particular files (where
the user of the application would then be responsible to
guarantee that no timing or buffering delays are relevant for
that file).
As an example, please allow me to elaborate Arjen's case even though I am
not familiar with his situation exactly. Suppose we want our numerical
solvers to write output data to a file at every time step, and there is a
GUI process (written in any language) that reads the file and plots the
data at, hopefully, every time step. Because there can be many WRITE
statements in an interval, using implicit flushing after each write is
inefficient for file operations. If we turn off automatic flush in OPEN,
there is no way in your proposal that we can tell the system that a time
step is reached to guarantee the contents being written to the file for
the GUI process. A simple FLUSH statement does the job, however.
That's a quality-of-implementation issue. First, the fact that an
inefficient use of flush is even possible is the system's fault. Second,
the implementation of the compiler can detect cases where numerous
WRITEs are consecutive (or very close to consecutive), and can
delay the FLUSH until after the last one. I thought to mention that
possibility in my original suggestion, but I then decided to wait
to see if anyone else even thought of the possibility - not for months.
Post by Greg Chien
Suppose we had eliminated FLUSH, the situation might lead us to the
1. Do-it-yourself buffering: put all the needed output data in a big long
string, using internal write and concatenation, and use one WRITE
statement at each time step. One would have to manage the string and
allocate more memory should the data output exceeds the string buffer.
But why eliminate flush without an alternative? Since I've proposed
a superior alternative, why have flush?
Post by Greg Chien
2. Use our in-memory scheme: manipulate all data in the memory and let
the GUI module peek into the memory to fetch whatever needed for
plotting. One can still write output to files as before for
postprocessing, but they are not used for monitoring/manipulating the
computational process.
Or, let the GUI tell the system what files it's connected to, and have the
system flush all buffers pending for that file before processing any READs
originating from the GUI process. But wait - that's what I've said the
system should already be doing.
Post by Greg Chien
With legacy code that WRITEs everywhere, I am not sure if the above
solutions are feasible without major rework. There could be more
alternatives, that's why I would like to solicit thoughts from you and
others in c.l.f [tcl group trimmed].
The problem is that legacy code with all those writes *is* everywhere.
And providing FLUSH is not much help (unless the programmer adopts
the "FLUSH after every WRITE" discipline - which eliminates any
benefit from buffering). The reason is that there is no objectively
correct, portable rule for determining when and where to flush which
buffers. The information needed to make that decision is possessed
by the operating system at run-time, not by the application programmer
when the code is written.
--
J. Giles
Richard Maine
2003-09-30 18:00:56 UTC
Permalink
The information needed to make that decision [when to flush] is possessed
by the operating system at run-time, not by the application programmer
when the code is written.
I hesitate to disagree with Giles because I'm pretty much always
sure that I'll be wrong by some standard or other (by what standard,
I'm less sure), but...

In general, I prefer to avoid making generalizations :-) as
broad as the one above, so I won't try. I make no claim about
all situations, or even usual situations, or about the way that
things might be if they weren't the way they are now.

I do, however, maintain that I personally have programs for which I
personally know when flushing is needed. I make no claim that this is
the case for all programs in all situations, most particularly
situations involving operating systems that I'm not currently using,
but I do claim to know cases where it is appropriate for my codes
today with the operating systems they are running on today.

I don't know whether I've qualified this enough to avoid being
told that I am wrong. Probably not. :-( I suppose that even more
likely I'll be told that if I had followed the thread carefully
enough, I'd realize that the debate is about the future and that
my data doesn't apply.
--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain | experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain
James Giles
2003-09-30 19:18:36 UTC
Permalink
Some people are very sensitive to criticism of their statements and
believe that any disagreement with their technical positions is a
personal attack on themselves. In some cases this results in a
string of qualifications to their statements in an attempt to stave
off any contradiction. However, qualified or not, the technical
points need to be addressed. Whether this hurts feelings or not
can't be of concern.

Now, I have programs where I know *that* a FLUSH is needed.
I seldom have any precise idea where or when the FLUSH needs
to occur. Well, it clearly should happen before any external
process (or person) tries to read the data. And it *must* happen
before that outside process (or person) needs to act upon the
data in a way that feeds back into the program itself. The system
can (and should) know when to FLUSH at run-time, and often so
do I - at run-time. But I seem never to have that information at
the time I write the code. So, I have to put the FLUSH into the
program at some point where I can be reasonably confident that
it's correct. The closer to the WRITE statement, the more confident
I can be.

Now, if someone in this thread has a set of objective criteria for
determining where and when to FLUSH which buffers - even if it's
only in his (her) own code and on a particular system, for a particular
set of narrowly defined circumstances - those criteria should be
shared. They may provide the kernel of an objectively correct,
portable means to determine where and when to FLUSH which
buffers from source code. I doubt it's possible, but let's hear
the details.

Of course, the mere existence of a set of such criteria doesn't eliminate
the problem: it's still probable (likely) that numerous programmers
will misapply the criteria (or not use them at all) and continue to
make mistakes. However, if the set of criteria is sufficiently
simple, perhaps the language (or the implementation) can apply
it automatically.

Maybe I've been sufficiently impersonal not to give offence.
--
J. Giles
Jan C. Vorbrüggen
2003-10-01 10:48:36 UTC
Permalink
Post by James Giles
Now, if someone in this thread has a set of objective criteria for
determining where and when to FLUSH which buffers - even if it's
only in his (her) own code and on a particular system, for a particular
set of narrowly defined circumstances - those criteria should be
shared. They may provide the kernel of an objectively correct,
portable means to determine where and when to FLUSH which
buffers from source code.
I think of this in terms of transactional semantics: in those cases where
it is relevant, a bundle of WRITEs (possibly to more than one file) is
part of one "transaction" - e.g., as Ron Shepard mentioned, dumping the
output relevant to one step of an iterative procedure. At the end of the
loop, there is a clear decision point - no matter how distributed the
actual WRITEs are in the program - where a flush can be put to make sure
the data is made visible to other users of the data.

As for doing this automatically (your question to Ron), I don't see how
this can be done in a general way.

The problem, as I see it, is the status quo. There are a lot of systems
that allow proper sharing of data committed to the OS - though I have my
doubts with regard to that on Winwoes. The compiler's run-time support
library that is the first level of interface to the compiled code itself
could, _in_principle_, be informed by the OS that there are additional
readers of the data in question and that flushing is indicated: however,
I haven't seen that facility even in sophisticated OSes. And not doing
buffering at the application level is often prohibitive performance-wise.

Jan
James Giles
2003-10-01 18:03:57 UTC
Permalink
Jan C. Vorbrüggen wrote:
...
Post by Jan C. Vorbrüggen
As for doing this automatically (your question to Ron), I don't see how
this can be done in a general way.
Then you haven't read the thread. In his case, it's trivially obvious
that automatically flushing after all WRITEs is sufficient.
Post by Jan C. Vorbrüggen
[...] And not doing
buffering at the application level is often prohibitive performance-wise.
"Fast wrong answers" don't exist. The actual speed of an erroneous
code is the time it takes to discover *that* there is an error, plus the
time it takes to find and correct the code's error, pkus the time it
takes to go back and rerun all cases upon which you have based
any important decisions. That's often a several year turnaround
for a single short code.
--
J. Giles
Jan C. Vorbrüggen
2003-10-02 07:09:32 UTC
Permalink
Post by James Giles
Post by Jan C. Vorbrüggen
As for doing this automatically (your question to Ron), I don't see how
this can be done in a general way.
Then you haven't read the thread.
Why do you elect to be needlessly insulting whenever the slightest
opportunity arises?
Post by James Giles
In his case, it's trivially obvious
that automatically flushing after all WRITEs is sufficient.
Yes, after all WRITEs that belong to one transaction. How does your run-time
system determine where one transaction ends, and the next one begins?
Post by James Giles
Post by Jan C. Vorbrüggen
[...] And not doing
buffering at the application level is often prohibitive performance-wise.
"Fast wrong answers" don't exist.
Of course, I agree with that; thanks for lecturing me.

However: In what way does this apply to the discussion? If I need inter-
process (or -thread) synchronization, I will use that explicitly, because
it's the only approach that will give me the necessary correctness guarantee.
If all I need is a guarantee that state will solidify in bounded time, flush
is all I need.

Jan
James Giles
2003-10-02 18:51:37 UTC
Permalink
Post by Jan C. Vorbrüggen
Post by James Giles
Post by Jan C. Vorbrüggen
As for doing this automatically (your question to Ron), I don't see how
this can be done in a general way.
Then you haven't read the thread.
Why do you elect to be needlessly insulting whenever the slightest
opportunity arises?
If you insist on taking offence, that's your choice. I was pointing out
that flush after every WRITE has already been established (and accepted)
as obviously correct (in the sense of never omitting a flush that's
needed). Flush after every WRITE can obviously be automated. And it's
completely general.
Post by Jan C. Vorbrüggen
Post by James Giles
In his case, it's trivially obvious
that automatically flushing after all WRITEs is sufficient.
Yes, after all WRITEs that belong to one transaction. How does your
run-time system determine where one transaction ends, and the next one
begins?
*EVERY* WRITE is a transaction. It is a transaction between the
program and the system (with the I/O library as an intermediary).
A flush after *every* WRITE is sufficient to guarantee correctness.
It isn't necessary, but it's sufficient.
--
J. Giles
Jan C. Vorbrüggen
2003-10-06 09:08:08 UTC
Permalink
Post by James Giles
*EVERY* WRITE is a transaction.
No, it isn't. For a start, there is no one-to-one correspondence between
one WRITE statement and one call to the RTL, nor between one RTL call and
its internal write call to the OS, if any. For a second, we were
discussing precisely the scenario - which people here confirmed occurs
quite often in real code - in which multiple WRITEs distributed over
various pieces of code make up a (conceptual, if you want) transaction.
_That_ is the case that needs to be handled.

In fact, I cannot remember a single case where an automatic flush-after-
write would have been both correct and performant, and that includes
applications written in languages other than Fortran - even some in DCL,
which with all its interpretative overhead nonetheless manages to suffer
from such a simple scheme.

Jan
Greg Lindahl
2003-10-01 18:12:16 UTC
Permalink
Post by Jan C. Vorbrüggen
The problem, as I see it, is the status quo. There are a lot of systems
that allow proper sharing of data committed to the OS - though I have my
doubts with regard to that on Winwoes.
Most OSes have different semantics for different filesystems -- local
files are often more consistent than networked files. On Unix local
filesystems, anything that has actually been sent to the OS is visible
to every reader, but NFS is funkier. On Windows, CIFS/SMB is
undoubtedly less consistent than local files.

-- greg
Greg Lindahl
2003-09-30 20:47:09 UTC
Permalink
Post by Richard Maine
I do, however, maintain that I personally have programs for which I
personally know when flushing is needed.
I'll second this: personally, when I've been annoyed by buffering,
I've always known where to put a flush.

-- greg
Ron Shepard
2003-10-01 05:52:11 UTC
Permalink
Post by Greg Lindahl
Post by Richard Maine
I do, however, maintain that I personally have programs for which I
personally know when flushing is needed.
I'll second this: personally, when I've been annoyed by buffering,
I've always known where to put a flush.
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.

$.02 -Ron Shepard
James Giles
2003-10-01 07:05:39 UTC
Permalink
Ron Shepard wrote:
...
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
And your use would be hurt in what way if the flush occured
automatically?
--
J. Giles
Paul van Delst
2003-10-01 13:25:51 UTC
Permalink
Post by James Giles
...
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
Dunno if this is the same thing but....

I do the equivalent of flushing in long jobs submitted to the big number
crunchers here at NCEP. I say equivalent since I use the netCDF API for
anything important, so "flush" to me means a call to nf90_sync().

Production jobs (e.g. forecasts) can preempt research jobs -- this means
my running jobs can be summarily halted at any time. If I don't flush at
some regular interval, I can lose a lot of data. It's a toss-up between
fear of losing data and slower execution as to how often I sync to disk.
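
Roughly what I mean, as a stripped-down sketch (not my actual code - the
file, dimension and variable names and the sync interval are made up):

program sync_sketch
  use netcdf
  implicit none
  integer :: ncid, dimid, varid, status, step
  real :: field(100)
  status = nf90_create('run.nc', NF90_CLOBBER, ncid)
  status = nf90_def_dim(ncid, 'x', 100, dimid)
  status = nf90_def_var(ncid, 'field', NF90_REAL, (/ dimid /), varid)
  status = nf90_enddef(ncid)
  do step = 1, 1000
     field = real(step)                                ! stand-in for the real computation
     status = nf90_put_var(ncid, varid, field)
     if (mod(step, 10) == 0) status = nf90_sync(ncid)  ! commit to disk every 10 steps
  end do
  status = nf90_close(ncid)
end program sync_sketch
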
Post by James Giles
And your use would be hurt in what way if the flush occured
automatically?
In my case, I wish it would. I didn't realise things were screwy until I discovered data
files full of "missing" data markers after a job terminated. And, I've seen (somewhat
poorly written) code that specifically called flush() but *after* a system call that mv'ed
the file to either a different name or directory...the result was a file missing the last
part that was written. In such a case, an auto-flush (under the hood somewhere I guess)
would've saved about half-a-day of "wha' happened?" On the plus side, the guy that wrote
the code won't make the same mistake again.

cheers,

paulv
--
Paul van Delst
CIMSS @ NOAA/NCEP/EMC
Ph: (301)763-8000 x7748
Fax:(301)763-8545
James Giles
2003-10-01 18:41:11 UTC
Permalink
Paul van Delst wrote:
...
Post by Paul van Delst
Post by James Giles
And your use would be hurt in what way if the flush occured
automatically?
In my case, I wish it would. I didn't realise things were screwy until I
discovered data files full of "missing" data markers after a job
terminated. And, I've seen (somewhat poorly written) code that
specifically called flush() but *after* a system call that mv'ed the
file to either a different name or directory...the result was a file
missing the last part that was written. In such a case, an auto-flush
(under the hood somewhere I guess) would've saved about half-a-day of
"wha' happened?" On the plus side, the guy that wrote the code won't
make the same mistake again.
We seem to be alone in this discussion. The usual practice of
accepting plausible wrong answers, as long as you *can* explicitly
flush someday (after you've finally noticed the problem) seems to
be the majority vote here.

I had nearly this same discussion (probably a decade ago) with
respect to pointers and C. People kept insisting that there was
nothing wrong with the language design, and if people had problems
with the feature it was their own fault. This was even the opinion
of people that had, just weeks before, come to me for help with
their C code - and it turned out to be a pointer error.

Blame-the-victim is such a pervasive philosophy in computing that
people apply it even to themselves.
--
J. Giles
Jan C. Vorbrüggen
2003-10-02 07:10:17 UTC
Permalink
Post by James Giles
We seem to be alone in this discussion.
Oh nonsense.

Jan
Dick Hendrickson
2003-10-01 14:20:25 UTC
Permalink
Post by James Giles
...
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
And your use would be hurt in what way if the flush occured
automatically?
It would be hurt on programs where high I/O throughput was also
important. The (old) systems I'm familiar with would save output
lines in a memory buffer and only do physical output when the
buffer was nearly full. With reasonable care, this could do
the writes in track size chunks and minimize head motion and
missed revolutions on the disks. At least in the old days, disk
positioning was a slow thing and really bogged down heavy I/O
programs.

Dick Hendrickson
Post by James Giles
--
J. Giles
Gordon Sande
2003-10-01 16:44:25 UTC
Permalink
Post by James Giles
...
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
And your use would be hurt in what way if the flush occured
automatically?
It would be hurt on programs where high I/O throughput was also
important. The (old) systems I'm familiar with would save output
lines in a memory buffer and only do physical output when the
buffer was nearly full. With reasonable care, this could do
the writes in track size chunks and minimize head motion and
missed revolutions on the disks. At least in the old days, disk
positioning was a slow thing and really bogged down heavy I/O
programs.
In some (old) systems you could post multiple disk writes from
memory buffers and the disk controllers would then do the physical
I/O in an order which matched the head movements. This feature of
"out of order" disk I/O meant that the I/O subsystems of such
systems often had more computing power (but dedicated to one
purpose) than the central processors which they were serving.

Current hard disks may have quite large buffers to allow for such
behavior even if you don't have multiple memory buffers. Even large
buffers can be quickly saturated, so this mostly just confuses small
benchmarks, which can be badly misled if they do not fill the buffers.
Dick Hendrickson
Post by James Giles
--
J. Giles
James Giles
2003-10-01 18:15:25 UTC
Permalink
Post by Dick Hendrickson
Post by James Giles
...
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
And your use would be hurt in what way if the flush occured
automatically?
It would be hurt on programs where high I/O throughput was also
important. The (old) systems I'm familiar with would save output
lines in a memory buffer and only do physical output when the
buffer was nearly full. With reasonable care, this could do
the writes in track size chunks and minimize head motion and
missed revolutions on the disks. At least in the old days, disk
positioning was a slow thing and really bogged down heavy I/O
programs.
Again, the feature I propose *optionally* lets you declare that WRITEs
to a given file are not to be flushed. In the case you describe here, no
FLUSH would ever need to be explicit, since you mention no shared use of
the data by independent processes. As long as the system flushes when the
program terminates, you're all right. (I'm told there are severely broken
systems that don't even do that. You shouldn't design a language badly
because some systems are implemented badly.)
--
J. Giles
Ron Shepard
2003-10-01 16:12:49 UTC
Permalink
Post by James Giles
Post by Ron Shepard
I often use flush() calls in iterative procedures. I write out
information during the iteration, and at the end of each one, I
flush the output unit so that I can monitor the process as it occurs.
And your use would be hurt in what way if the flush occured
automatically?
It would only be a matter of performance, which of course is why
data is buffered in the first place. Having an automatic flush()
after each write statement would cause many more synchronization
bottlenecks than the occasional flush() at the end of the iteration.

I did not follow the beginning of this thread, so I don't know what
exactly has been proposed. What I would like is a new open()
parameter of the general form

flush=<character value>

I'm not exactly sure what the actual character strings should be,
but there are three cases that should be allowed to be specified.
Call them 'always', 'never', or 'explicit' here. The first would
flush automatically after each write statement to that unit (like
stdout works on most machines now), the second tells the OS that
output will never be flushed to that unit during execution of the
program, and the last keyword (which I think should be the default
because it is the current situation for most files on most machines)
specifies that buffering output is allowed but that explicit flush()
calls may occur for that unit.
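
Just to illustrate, with the placeholder keywords above (this is not valid
syntax in any existing compiler; it only shows how the proposed parameter
would read):

! hypothetical syntax - the proposed open() parameter with the three keywords
open(10, file='monitor.dat', flush='always')    ! flushed after every write, like stdout
open(11, file='results.dat', flush='explicit')  ! buffered, but flush(11) may be called
open(12, file='scratch.dat', flush='never')     ! library free to defer all physical I/O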

There is a separate issue of what should occur if flush() is called
on a file in the first two cases. It is redundant in the first
case, but should it be an error? It is not allowed in the second
case, but should it be a noop or should it be an error?

And, of course, there is yet another issue of what should happen on
machines that cannot support flush() operations in any form at all.
That is, beyond the syntax in the open statement and of the flush()
calls, what exactly should be *required* by the fortran standard?

$.02 -Ron Shepard
Richard Maine
2003-10-01 16:42:36 UTC
Permalink
Post by Ron Shepard
flush=<character value>
I'm not exactly sure what the actual character strings should be,
but there are three cases that should be allowed to be specified.
Call them 'always', 'never', or 'explicit' here.
I'm not sure that I see the distinction between 'never' and
'explicit' as being useful. Seems to me that if the user wants
'never', he/she could say 'explicit' and then just never
explicitly do one. I don't see much point in either

1. Allowing flush statements, but having this option effectively
override them.

2. Being able to disallow use of flush statements on a unit so
as to check for.... what kind of error are we worried about?

By the way, I liked the analogy to transactions that someone made
(forget who at the moment - maybe even you).
Post by Ron Shepard
And, of course, there is yet another issue of what should happen on
machines that cannot support flush() operations in any form at all.
That is, beyond the syntax in the open statement and of the flush()
calls, what exactly should be *required* by the fortran standard?
I think that's mostly a non-issue. It is already addressed in the
f2003 draft of the FLUSH statement. (Well, in spite of saying it is
mostly a non-issue, I don't like one thing the draft does, but my
dislike has mostly to do with what I regard as a trivial inconsistency
in how problems are reported.) The draft acknowledges that flush might
not be supported on all (or, by implication, even any) units. This is
no different from lots of other I/O stuff. Backspace might also not be
supported on all (or any) units.

If you try a flush on a unit that doesn't support it, you get
either

1. No effect on the file. It is just a no-op.

2. An error status.

3. A special "warning" status.

The draft takes option 3, which wasn't my choice, but it isn't a
big deal. I just think it inconsistent with other treatments.
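 
A sketch of testing for that case with the draft's statement form (the
specifier names follow the draft; the particular status values are
processor dependent, with the negative case being the "warning" status
discussed above):

integer :: ios
character(len=80) :: msg

flush(20, iostat=ios, iomsg=msg)
if (ios > 0) then
   write(*,*) 'flush error: ', trim(msg)
else if (ios < 0) then
   write(*,*) 'flush not supported on this unit'
end if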
--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain | experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain
Dan Nagle
2003-10-01 17:10:13 UTC
Permalink
Hello,

On 01 Oct 2003 09:42:36 -0700, Richard Maine <***@see.signature>
wrote:

<snip requoted etc>
Post by Richard Maine
If you try a flush on a unit that doesn't support it, you get
either
1. No effect on the file. It is just a no-op.
2. An error status.
3. A special "warning" status.
The draft takes option 3, which wasn't my choice, but it isn't a
big deal. I just think it inconsistent with other treatments.
IIRC, the discussion hinged on whether data would be lost
by the flush failure. That is, if you got a "that unit
doesn't support flush", it was "no harm no foul". But if
the flush failed due to a disk being full, the data
might be lost if something else wasn't done.

At least that's my t^Hrusty memory of the discussion.
--
Cheers!

Dan Nagle
Purple Sage Computing Solutions, Inc.
Ron Shepard
2003-10-02 05:11:58 UTC
Permalink
Post by Richard Maine
I'm not sure that I see the distinction between 'never' and
'explicit' as being useful. Seems to me that if the user wants
'never', he/she could say 'explicit' and then just never
explicitly do one.
The distinction would be only a matter of efficiency. The I/O
library would be allowed to buffer data differently in the two
situations if it wanted to. For example, with flush='never', it
might even be able to configure the I/O to that device so that
physical I/O never occurred at all (until possibly the very end of
the program). With flush='explicit', it might also be able to
buffer the I/O, but it would be required additionally to be able to
flush the unit at any time during the execution of the program.

$.02 -Ron Shepard
Gary L. Scott
2003-10-02 11:53:48 UTC
Permalink
Post by Ron Shepard
Post by Richard Maine
I'm not sure that I see the distinction between 'never' and
'explicit' as being useful. Seems to me that if the user wants
'never', he/she could say 'explicit' and then just never
explicitly do one.
The distinction would be only a matter of efficiency. The I/O
library would be allowed to buffer data differently in the two
situations if it wanted to. For example, with flush='never', it
might even be able to configure the I/O to that device so that
physical I/O never occurred at all (until possibly the very end of
the program). With flush='explicit', it might also be able to
buffer the I/O, but it would be required additionally to be able to
flush the unit at any time during the execution of the program.
I think I like flush='shared' or flush='volatile', meaning an external
process may be reading and/or writing to the same device/file. This
would force a flush on each write and seems to cover all my needs. In
all other cases, the compiler/os would be free to buffer as it likes.
It still amazes me that some compilers/os' do not make this assumption
when you open a file with an explicit "share" option (extension).
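 
A sketch of the idea (a hypothetical specifier, not an existing
extension of any particular compiler):

open(15, file='status.dat', flush='shared')   ! another process may read or write
                                              ! this file while we run, so every
                                              ! write is flushed automatically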
Post by Ron Shepard
$.02 -Ron Shepard
--
Gary Scott
mailto:***@ev1.net

Fortran Library
http://www.fortranlib.com

Support the GNU Fortran G95 Project: http://g95.sourceforge.net
Ron Shepard
2003-10-02 15:18:37 UTC
Permalink
In article <***@ev1.net>,
"Gary L. Scott" <***@ev1.net> wrote:

[on automatic flush after write...]
Post by Gary L. Scott
It still amazes me that some compilers/os' do not make this assumption
when you open a file with an explicit "share" option (extension).
I think that it is a matter of performance. Even 20 years ago, disk
I/O was 100 times slower than memory accesses. Today, bus speeds
have increased another factor of 50 or so (not counting cache speeds
and sizes, which make memory accesses effectively even faster), while
disk speeds have only improved by about a factor of 2. Writing to
physical disk means the CPU must wait a very long time; it probably
even means a few context switches and pipeline flushes. And,
memories are much larger now than in the past, by factors of 1000's,
so there is a tendency to use these vast memories in order to
optimize performance.

$.02 -Ron Shepard
Jan C. Vorbrüggen
2003-10-02 07:14:10 UTC
Permalink
Post by Richard Maine
By the way, I liked the analogy to transactions that someone made
(forget who at the moment - maybe even you).
Thanks for the pat on the back, Richard - that was me...

Jan
James Giles
2003-10-01 18:32:07 UTC
Permalink
Post by Cameron Laird
In article
...
Post by Cameron Laird
I did not follow the beginning of this thread, so I don't know what
exactly has been proposed. What I would like is a new open()
parameter of the general form
flush=<character value>
I'm not exactly sure what the actual character strings should be,
but there are three cases that should be allowed to be specified.
Call them 'always', 'never', or 'explicit' here. The first would
flush automatically after each write statement to that unit (like
stdout works on most machines now), the second tells the OS that
output will never be flushed to that unit during execution of the
program, and the last keyword (which I think should be the default
because it is the current situation for most files on most machines)
specifies that buffering output is allowed but that explicit flush()
calls may occur for that unit.
That's similar to what I've proposed, except that 'always' is the
default, 'never' does not mean that output will never be flushed,
but that the system can flush when it likes (probably what you
meant). Preferably, 'explicit' can be left out. (Certain persons
have maintained that they *know* when to flush in their programs.
If they would share their knowledge with the rest of us, maybe we
can make it automatic.) 'Always' is the default because it is the
only one guaranteed to always be correct. Unnecessary, error-prone
features are always bad language design.
Post by Cameron Laird
There is a separate issue of what should occur if flush() is called
on a file in the first two cases. It is redundant in the first
case, but should it be an error? It is not allowed in the second
case, but should it be a noop or should it be an error?
I think it should be allowed (if it exists at all). I'd prefer that it
not exist, but a programmer shouldn't be penalized. The fewer
errors this mis-feature causes the better.
Post by Cameron Laird
And, of course, there is yet another issue of what should happen on
machines that cannot support flush() operations in any form at all.
That is, beyond the syntax in the open statement and of the flush()
calls, what exactly should be *required* by the fortran standard?
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed). On such a system, if FLUSH exists
at all, it should be a no-op.
--
J. Giles
Duane Bozarth
2003-10-01 21:01:11 UTC
Permalink
Post by James Giles
....
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed). On such a system, if FLUSH exists
at all, it should be a no-op.
And how many of these are there (and who are they)?
James Giles
2003-10-01 21:50:58 UTC
Permalink
Post by Duane Bozarth
Post by James Giles
....
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed). On such a system, if FLUSH exists
at all, it should be a no-op.
And how many of these are there (and who are they)?
How many hospitals did your town have before someone
demanded them?
--
J. Giles
Duane Bozarth
2003-10-02 14:52:53 UTC
Permalink
Post by James Giles
Post by Duane Bozarth
Post by James Giles
....
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed). On such a system, if FLUSH exists
at all, it should be a no-op.
And how many of these are there (and who are they)?
How many hospitals did your town have before someone
demanded them?
Which has what to do w/the question?

As for your question, the original (and only) hospital in this town came
into being like those in a large majority of small towns--a specific
individual (in this case my grandmother) was the impetus to get her
church involved enough to spearhead the drive to actually found Epworth
(Methodist) Hospital. As is often the case, real action was taken in
response to need as opposed to waiting for some bureaucracy to
respond/pander to "demands"...
James Giles
2003-10-02 19:27:08 UTC
Permalink
Post by Duane Bozarth
Post by James Giles
Post by Duane Bozarth
Post by James Giles
....
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed). On such a system, if FLUSH exists
at all, it should be a no-op.
And how many of these are there (and who are they)?
How many hospitals did your town have before someone
demanded them?
Which has what to do w/the question?
I was trying to get you to identify the relevance of your question. I'll
rephrase directly:

(1) No language feature I've proposed requires the system feature in
the paragraph above. In fact, no language feature I've proposed in
this thread even makes sense on such a system. All the language
features I've proposed would be no-ops on such a system. That
would be a desirable outcome. So, what relevance is your question
to the context of the thread?

(2) Perhaps it's your contention that the lack of common systems with
the desired feature is because it is impossible. No, there have been
systems which did so (based on analysis of transaction semantics*).
Some yahoos in congress managed to require that all computing
done by certain government agencies (at first, NSF) must be done
only in UNIX. Probably the worst choice. That killed many a good
system. But, the technology remains a proven thing. There is actually
no reason that even UNIX couldn't adopt it.
Post by Duane Bozarth
As for your question, the original (and only) hospital in this town came
into being like those in a large majority of small towns--a specific
individual (in this case my grandmother) was the impetus to get her
church involved enough to spearhead the drive to actually found Epworth
(Methodist) Hospital. As is often the case, real action was taken in
response to need as opposed to waiting for some bureacracy to
respond/pander to "demands"...
Sounds like a classic case of response to demand to me. Demand is
when something becomes sufficiently important, useful, entertaining,
or otherwise desirable that some collection of people with resources
are willing to expend those resources to acquire that thing.

And yes, bureaucracy exists to impede responsiveness to demands.


*- PS I referred to "transaction semantics" in previous incarnations of
this subject. Now it seems that use of the term is praiseworthy.
--
J. Giles
Jan C. Vorbrüggen
2003-10-02 07:23:48 UTC
Permalink
Post by James Giles
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed).
Perhaps we are getting closer to the source of our misunderstanding.

What you describe would, in any case, be a very useful facility (to
put it mildly), although only few OSes could currently support it, and
a Fortran standard demanding it won't be enough leverage for it to be
implemented in others important in the market place, and thus no Fortran
standard will demand it 8-|.

However, it isn't good enough. As Paul mentioned, you need to provide a
consistent view of the data as accessed by others - in spite of the data
that is required by this consistent view being provided by a non-atomic
sequence of actions (in this case, WRITEs). If that is an absolute
requirement, you need to support transaction semantics and call your
ACID-supporting database of choice. In many other cases, it is enough to
make sure all the data of which the "transaction" consists is made
accessible in finite time - in fact, you'd rather prefer that the change
from one consistent state to the next happens in as small a time window as
possible. This is exactly what current implementations buffering WRITEs
internally until FLUSH is called provide. Nonetheless, the consumers of
this data should be able to detect intermediate, possibly inconsistent
state: and as Paul showed, at least some professionally developed code
does so.

Additionally, I could point you to decades-old documentation discussing
this issue in widely-used (at the time) OSes, e.g., for BACKUP or ISAM
files using RMS in VMS.

Jan
James Giles
2003-10-02 19:04:54 UTC
Permalink
Post by Jan C. Vorbrüggen
Post by James Giles
A really well-designed system knows when to flush (ie. it knows
when files are shared, what "files" are really pipes or terminals,
and so on, and it even knows when other processes try to read
from pipes or shared files - so it knows when buffers pending
to those must be flushed).
Perhaps we are getting closer to the source of our misunderstanding.
What you describe would, in any case, be a very useful facility (to
put it mildly), although only few OSes could currently support it, and
a Fortran standard demanding it won't be enough leverage for it to be
implemented in others important in the market place, and thus no Fortran
standard will demand it 8-|.
I guess I missed where I proposed that the Fortran standard should
demand the facility. In fact, all the proposals I've made for Fortran
features have assumed the host system *doesn't* have such a capability.
Flush after every write (which I propose to be the default behavior)
is obviously meaningless in a system that doesn't support flush because
it's never needed.
Post by Jan C. Vorbrüggen
However, it isn't good enough. As Paul mentioned, you need to provide a
consistent view of the data as accessed by others - in spite of the data
that is required by this consistent view being provided by a non-atomic
sequence of actions (in this case, WRITEs). [...]
Each WRITE, from a Fortran perspective, is close to an atomic operation.
That is, you can do very little else (in any standard conforming way) while
it's going on.
Post by Jan C. Vorbrüggen
[...] If that is an absolute
requirement, you need to support transaction semantics and call your
ACID-supporting database of choice. In many other cases, it is enough to
make sure all the data of which the "transaction" consists is made
accessible in finite time - in fact, you'd rather prefer that the change
from one consistent state to the next happens in as small a time window
as possible. [...]
As in, say, flush after every WRITE?
Post by Jan C. Vorbrüggen
[...] This is exactly what current implementations buffering
WRITEs internally until FLUSH is called provide. [...]
Or not - if the programmer forgets to flush. Is that a case which you
conveniently consign to the "blame the victim" category?
--
J. Giles
Jan C. Vorbrüggen
2003-10-06 09:11:34 UTC
Permalink
Post by James Giles
Post by Jan C. Vorbrüggen
[...] If that is an absolute
requirement, you need to support transaction semantics and call your
ACID-supporting database of choice. In many other cases, it is enough to
make sure all the data of which the "transaction" consists is made
accessible in finite time - in fact, you'd rather prefer that the change
from one consistent state to the next happens in as small a time window
as possible. [...]
As in, say, flush after every WRITE?
No, a typical transaction consists of multiple operations.
Post by James Giles
Post by Jan C. Vorbrüggen
[...] This is exactly what current implementations buffering
WRITEs internally until FLUSH is called provide. [...]
Or not - if the programmer forgets to flush. Is that a case which you
conveniently consign to the "blame the victim" category?
If the programmer writes an incorrect program, it is indeed the programmer's
fault.

Jan
James Giles
2003-10-06 19:11:35 UTC
Permalink
Jan C. Vorbrüggen wrote:
...
Post by Jan C. Vorbrüggen
Post by James Giles
Or not - if the programmer forgets to flush. Is that a case which you
conveniently consign to the "blame the victim" category?
If the programmer writes an incorrect program, it is indeed the
programmer's fault.
Please don't design any aircraft. You'll be sued when your
intentional design traps cause crashes.
--
J. Giles
Brooks Moses
2003-09-30 21:19:40 UTC
Permalink
Post by James Giles
Post by Greg Chien
Post by James Giles
Evidently, opposing features that are frequently the cause of serious
error is an appropriate target of sarcasm. While promoting such
features is filled with virtue.
Unfortunately (or fortunately ;-), we are not living in Utopia.
I never claimed we were. I do think, however, that we should promote
things that are improvements. And that it's counterproductive to promote
things that aren't.
Do you remember the thread a few weeks ago, where someone had a code
doing binary writes that took a few seconds to run using local file
access or Windows remote file access, but took a large fraction of an
hour to run on Linux NFS file access?

The reason for the dramatic differences in runtime, we determined, was
that binary writes of the nature used in that program _do_ do what
amounts to immediate flushing, and that NFS propagates the writes across
the network and waits for a confirmation that they worked before
confirming back to the program that the write has happened.

If one modifies the standard Fortran WRITE statement to include an
implicit FLUSH, this will happen on _every single program_ that attempts
a notable number of formatted writes to an NFS-mounted file. The
performance hit, as noted, can be several orders of magnitude.
Furthermore, this would be a retroactive change -- it would change the
way that previously-written programs worked, and in some cases change
them in very detrimental ways; this, IMHO, rather strongly violates the
spirit of backwards compatibility, and does so without creating any sort
of warning to the user when they try to compile old code with a new
compiler.

I agree with you that promoting things that are not improvements is
counterproductive; thus, I cannot condone promoting this default
automatic FLUSHing as an addition to the standard.

Might I suggest, instead, that you promote the idea of adding an option
to the file open commands that requests automatic flushing-after-write
for the particular file-handle being opened? This will, at the least,
not break already-existing programs.

- Brooks
--
Remove "-usenet" from my address to reply; the bmoses-usenet address
is currently disabled due to an overload of W32.Gibe-F worm emails.
James Giles
2003-09-30 21:55:13 UTC
Permalink
Brooks Moses wrote:
...
Post by Brooks Moses
The reason for the dramatic differences in runtime, we determined, was
that binary writes of the nature used in that program _do_ do what
amounts to immediate flushing, and that NFS propagates the writes across
the network and waits for a confirmation that they worked before
confirming back to the program that the write has happened.
The case you cite is one of the worst (in fact, it's one of
the usual examples in discussions of this). When you write to a
remote file system, there is no way to determine whether there are
other processes awaiting that data. Much less can you determine
when those other processes *need* that data. Nor can you (as the
author of an application program) merely assume that there aren't other
processes that need the data ASAP. You really must flush each write
in such a case. That's the only safe decision. Whether the system
does it automatically or you have to do it manually, that's exactly
what you *must* do. (Unless you really do just advocate the error
prone method of fixing such things only after you notice a symptom
of error - how long, and how many erroneous runs of your program
does that take?)

The existence of bad implementations is not sufficient cause to reject
a feature. The implementation should not impose such large penalties
on such traffic. It should be written to perform the data transfer and
the confirmations asynchronously with continued operation of the
application program.
Post by Brooks Moses
I agree with you that promoting things that are not improvements is
counterproductive; thus, I cannot condone promoting this default
automatic FLUSHing as an addition to the standard.
How is promoting an unnecessary, error-prone feature to be regarded
as an improvement? The flush-after-write approach is, at least, not
error prone.
Post by Brooks Moses
Might I suggest, instead, that you promote the idea of adding an option
to the file open commands that requests automatic flushing-after-write
for the particular file-handle being opened? This will, at the least,
not break already-existing programs.
How will the recommendation I made break any existing code? FLUSH
isn't standard. Standard conforming codes don't do it. Extensions that
provide some form of flush are not conforming to the standard's
proposal. There are no already-existing codes that use flush that
will be broken by my recommendation that *won't* be broken by
the existing proposal.

In any case, adding the automatic flush feature is a good idea. It would
be better if it were the default. Unsafe things that require extra work on
the part of the programmer should be the options, not the other way around.
--
J. Giles
Catherine Rees Lay
2003-10-03 09:05:58 UTC
Permalink
Post by Brooks Moses
Post by James Giles
Post by Greg Chien
Post by James Giles
Evidently, opposing features that are frequently the cause of serious
error is an appropriate target of sarcasm. While promoting such
features is filled with virtue.
Unfortunately (or fortunately ;-), we are not living in Utopia.
I never claimed we were. I do think, however, that we should promote
things that are improvements. And that it's counterproductive to promote
things that aren't.
Do you remember the thread a few weeks ago, where someone had a code
doing binary writes that took a few seconds to run using local file
access or Windows remote file access, but took a large fraction of an
hour to run on Linux NFS file access?
The reason for the dramatic differences in runtime, we determined, was
that binary writes of the nature used in that program _do_ do what
amounts to immediate flushing, and that NFS propagates the writes across
the network and waits for a confirmation that they worked before
confirming back to the program that the write has happened.
If one modifies the standard Fortran WRITE statement to include an
implicit FLUSH, this will happen on _every single program_ that attempts
a notable number of formatted writes to an NFS-mounted file. The
performance hit, as noted, can be several orders of magnitude.
Furthermore, this would be a retroactive change -- it would change the
way that previously-written programs worked, and in some cases change
them in very detrimental ways; this, IMHO, rather strongly violates the
spirit of backwards compatibility, and does so without creating any sort
of warning to the user when they try to compile old code with a new
compiler.
I agree with you that promoting things that are not improvements is
counterproductive; thus, I cannot condone promoting this default
automatic FLUSHing as an addition to the standard.
Might I suggest, instead, that you promote the idea of adding an option
to the file open commands that requests automatic flushing-after-write
for the particular file-handle being opened? This will, at the least,
not break already-existing programs.
- Brooks
I think this sounds like an excellent idea. I'd also like the option to
add it to WRITE statements though. This would be for the situation where
you have a large number of writes and don't want a forced flush at the
end of every one, because half the data isn't much use on its own and you
don't want the slowdown, but once the whole block of data is written a
flush would be useful. I'd imagine this is what other posters mean when
they say they know where in their programs the flush is required.
Programs where any available data is useful and a potential slowdown less
important could use the OPEN statement form instead.
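 
A sketch of the WRITE-statement form I have in mind (a hypothetical
specifier and illustrative names only; nothing like it exists in the
standard or the draft):

do i = 1, nrec - 1
   write(30,'(5es14.6)') block(:,i)                  ! buffered as usual
end do
write(30,'(5es14.6)',flush='yes') block(:,nrec)      ! flush once the block is complete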

At the risk of preaching to the converted, all programs are different.
I've recently worked on a program where a slowdown due to flushing after
every write would have been a disaster, but have also worked on several
where having every possible item of data in the file if something went
wrong would have been very useful. The standard needs to not make
assumptions about the desirable behaviour here.

Catherine.
--
Catherine Rees Lay
Brian Elmegaard
2003-09-24 10:06:26 UTC
Permalink
Post by Gustav Ivanovic
By chance, I stumbled on comp.lang.tcl and found that tcl and tk is
the solution to our difficulty to develop GUI without having
administrator privilege on our XP work station.
I did a survey on this some time ago. I found that tcl with its
limited number of variable types (1) didn't feel right. But, many
different (portable scripting) languages, e.g., perl, python (my
favourite), ruby, scheme, have been extended to use tk as one possible
GUI toolkit.

The intention with this post is mainly to let you know that there are
many languages available that can be used for your task. (Not that I
am that experienced in GUI programming)
I found "GUI toolkits: What are your options?" (Cameron Laird and
Kathryn Soraiz) and "Choosing a Scripting Language" (Sunworld Online)
very interesting to read, but as far as I can see they are not available
at Sunworld anymore :-(
Perhaps doing as Cameron here writes may be the solution:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=utf-8&safe=off&threadm=wk1ygbrlh6.fsf%40mail.afm.dtu.dk&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3Dutf-8%26safe%3Doff%26q%3Dcameron%2Blaird%2Bauthor%253Aelmegaard%26btnG%3DGoogle%2BSearch

Interesting link:
http://www.itworld.com/AppDev/4061/swol-0202-regex/

But, tk is not that windows-native-like. For python the wxwindows GUI
toolkit is available as wxpython for making portable applications.
http://www.wxpython.org/

tk in different languages:
http://www.python.org/topics/tkinter/
http://httpd.chello.nl/k.vangelder/ruby/learntk/
http://www.perltk.org/
http://kaolin.unice.fr/STk/

And the reason for choosing python is the readability of the
language:
x = 1
print x
for n in 1, 2, 'foo', 2.0:
    print n

It's OO and has lots of other fancy stuff; coming from fortran I found
it much easier to learn than tcl, perl, ...

Dlls are easily made and read with different compilers. My experience
is with Salford.

An example from Salford:

subroutine increment(i)
integer i
i = i + 1
end

is made into a dll.

and called from python:
from ctypes import *

# load the dll
inc = windll.LoadLibrary("t.dll")

# initialise a C integer variable for the dll
n = c_int(1)

# call the dll once
inc.INCREMENT(byref(n))
print "The integer is now: %d" % n.value

# and a few times more:
for i in range(5):
    inc.INCREMENT(byref(n))
    print "The square of the integer is now: %d" % n.value**2

which outputs:
The integer is now: 2
The square of the integer is now: 9
The square of the integer is now: 16
The square of the integer is now: 25
The square of the integer is now: 36
The square of the integer is now: 49

Best regards,
--
Brian (remove the sport for mail)
http://www.et.dtu.dk/staff/be
Cameron Laird
2003-09-24 11:08:45 UTC
Permalink
In article <***@mail.afm.dtu.dk>,
Brian Elmegaard <***@rk-speed-rugby.dk> wrote:
.
.
.
Post by Brian Elmegaard
I found GUI toolkits: What are your options? (Cameron Laird and
Kathryn Soraiz) and Choosing a Scripting Language (Sunworld Online)
very interesting to read, but as fas as I can see they are available
Sunworld anymore :-(
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=utf-8&safe=off&threadm=wk1ygbrlh6.fsf%40mail.afm.dtu.dk&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3Dutf-8%26safe%3Doff%26q%3Dcameron%2Blaird%2Bauthor%253Aelmegaard%26btnG%3DGoogle%2BSearch
http://www.itworld.com/AppDev/4061/swol-0202-regex/
.
.
.
While SunWorld Online is defunct, I'm slowly making progress
in providing permanent URLs for all the content we published
there. <URL: http://regularexpressions.com/ > points to these
results. PLEASE let me know if there's a particular piece
you're searching for; with a little motivation, I often can find
a version that I can make public.
--
Cameron Laird <***@Lairds.com>
Business: http://www.Phaseit.net
Personal: http://phaseit.net/claird/home.html
Richard Maine
2003-09-26 22:00:19 UTC
Permalink
Post by Gustav Ivanovic
I would like to share my first experience of mixed language
programming using tcl/tk and fortran.
I'll note that an approach I'm fond of for tcl/tk+fortran is to
use expect. The thing I really like about that is that it allows
your number-crunching program to have zero gui-related code. You
can still write the number crunching program as a portable text-based
program that will run anywhere.

The gui is basically a separate thing that "knows" how to run the
text-based program. Thus you can work in either world.

This was particularly attractive to me in adding a gui front end
for an existing program. I did not have to force all users to go
with the gui (though many of them pretty quickly converted). I
didn't even have to make a new release of the program.

It isn't the best approach for everything, but it is one of the
options.

Alas, expect has "issues" with Windows and none of the current
solutions are fully adequate. So that's currently a negative.
--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain | experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain
Chang Li
2003-09-27 21:03:13 UTC
Permalink
Post by Richard Maine
Post by Gustav Ivanovic
I would like to share my first experience of mixed language
programming using tcl/tk and fortran.
I'll note that an approach I'm fond of for tcl/tk+fortran is to
use expect. The thing I really like about that is that it allows
your number-crunching program to have zero gui-related code. You
can still write the number crunching program as a portable text-based
program that will run anywhere.
The gui is basically a separate thing that "knows" how to run the
text-based program. Thus you can work in either world.
How do you handle large amounts of text data, such as 100MB, with Expect?

Chang
Richard Maine
2003-09-28 17:25:04 UTC
Permalink
Post by Chang Li
Post by Richard Maine
I'll note that an approach I'm fond of for tcl/tk+fortran is to
use expect. The thing I really like about that is that it allows
your number-crunching program to have zero gui-related code. You
can still write the number crunching program as a portable text-based
program that will run anywhere.
The gui is basically a separate thing that "knows" how to run the
text-based program. Thus you can work in either world.
How do you handle large amounts of text data, such as 100MB, with Expect?
Unless you are talking about a game or some other heavily graphical
application, large amounts of data don't go through a gui. It doesn't
even make sense for 100 mb of text data to go through a gui as far as
I can tell. Graphics data I could see (pun not intended); a human
can look at 100mb of graphics data from an application. But how is
a human supposed to be involved in 100mb of text data?

Certainly guis are used to control programs that manipulate large amounts
of data, but that is not at all the same thing as routing that data through
the gui.

In the scenario I described, the gui front-end that "knows" how to run
the program is "seeing" the same text that a human would have seen
running it. The program does not tend to throw 100mb of text at the
human. And the program even more certainly doesn't expect the human to
type in 100 mb of text.

For large quantities of data like that, you use data paths other than
the text-based terminal-like interaction on which expect is based. You
may well use expect to *CONTROL* the large amounts of data, but not
to feed those large amounts through. Indeed, the programs I was referring to
do tend to involve data from and to various files other than the gui.
At least one of these parts even involves a plot; again, the gui is used to
control the plot, but not to actually do the plotting. (When plotting,
three separate programs end up involved - the number crunching program,
the separate gui front end using expect and tcl/tk, and the plotting
program, which has been passed data and commands via separate files.)
--
Richard Maine
email: my last name at domain
domain: isomedia dot com
Chang Li
2003-09-29 04:23:10 UTC
Permalink
Post by Richard Maine
Post by Chang Li
How do you handle large amounts of text data, such as 100MB, with Expect?
Unless you are talking about a game or some other heavily graphical
application, large amounts of data don't go through a gui. It doesn't
even make sense for 100 mb of text data to go through a gui as far as
I can tell. Graphics data I could see (pun not intended); a human
can look at 100mb of graphics data from an application. But how is
a human supposed to be involved in 100mb of text data?
A model file could be a text file generated by a computer. It can be very
large. And you can expect a large XML file. Although it may be
human-readable, it needs computer processing.
Post by Richard Maine
Certainly guis are used to control programs that manipulate large amounts
of data, but that is not at all the same thing as routing that data through
the gui.
In the scenario I described, the gui front-end that "knows" how to run
the program is "seeing" the same text that a human would have seen
running it. The program does not tend to throw 100mb of text at the
human. And the program even more certainly doesn't expect the human to
type in 100 mb of text.
For large quantities of data like that, you use data paths other than
the text-based terminal-like interaction on which expect is based. You
may well use expect to *CONTROL* the large amounts of data, but not
to feed those large amounts through. Indeed, the programs I was referring to
do tend to involve data from and to various files other than the gui.
At least one of these parts even involves a plot; again, the gui is used to
control the plot, but not to actually do the plotting. (When plotting,
three separate programs end up involved - the number crunching program,
the separate gui front end using expect and tcl/tk, and the plotting
program, which has been passed data and commands via separate files.)
Expect should be limited to exchanging short phrases. But processing
large amounts of data through files gives poor performance.

Chang
Post by Richard Maine
--
Richard Maine
email: my last name at domain
domain: isomedia dot com
Richard Maine
2003-09-29 15:37:51 UTC
Permalink
Post by Chang Li
Post by Richard Maine
Post by Chang Li
How do you handle large amounts of text data, such as 100MB, with Expect?
Unless you are talking about a game or some other heavily graphical
application, large amounts of data don't go through a gui. It doesn't
even make sense for 100 mb of text data to go through a gui as far as
I can tell. Graphics data I could see (pun not intended); a human
can look at 100mb of graphics data from an application. But how is
a human supposed to be involved in 100mb of text data?
A model file could be a text file generated by a computer. It can be very
large. And you can expect a large XML file. Although it may be
human-readable, it needs computer processing.
Yes. This seems to be a non sequitur to me. I think we must be
talking about something entirely different. Yes, it would need
computer processing. However, I don't see what this has to do with
the gui. This is a fine example of the kind of thing I was referring
to in my next para (cited below).
Post by Chang Li
Post by Richard Maine
Certainly guis are used to control programs that manipulate large amounts
of data, but that is not at all the same thing as routing that data
through the gui.
To restate in other words my answer to your question
Post by Chang Li
Post by Richard Maine
Post by Chang Li
How do you handle large amounts of text data, such as 100MB, with Expect?
I would say not to do that. I don't think expect is an appropriate
tool for such a thing. I do think expect is one of several plausible
tools for creating gui wrappers for programs. If your gui wrapper is
trying to parse 100mb files, then you aren't designing it the way I
would.
--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain | experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain