Data Flow languages and programming – Part I
Should it be turtles all the way up?
In the famous anecdote, the little old lady replies to the noted philosopher, “It’s turtles all the way down.” When it comes to writing software, many writers on software design and many programming language creators seem to believe that it is turtles all the way up.
What do I mean by “turtles all the way up”? I mean the thesis that the techniques and programming language concepts used in the small can be extended indefinitely to programming in the large. In other words, if we use language X for 100-line and 1,000-line programs, we can also use it for 1,000,000-line programs. We may have to add some extensions and new features along the way, but we can increase the size of the programs we write indefinitely using the same ideas.
The antithesis is that it isn’t turtles all the way up, or at least it shouldn’t be turtles all the way up. That is, the kinds of languages and programming technologies that we use in large scale programming should be quite different from the kind of languages used for programming in the small.
There are many propositions as to what those large scale technologies should be, and many such technologies in use. Here I am going to look at data flow languages and data flow structuring of programs.
What are data flow languages?
There are two Wikipedia articles that give useful answers: one on data flow programming and one on flow-based programming.
The distinction Wikipedia makes between data flow programming and flow-based programming is obscure. The following definition is an edited version of the definitions used in the two articles.
Data flow languages structure applications as networks of “black box” elements that exchange data across predefined connections by message passing. Elements execute when they get messages; they send messages asynchronously. Data flow applications are inherently parallel.
There is a wide variety of data flow languages, varying from spreadsheets, to LabVIEW, to Erlang. Many are graphical; programming is done by altering flow diagrams. One thing they all have in common is that they have a run time system.
Traditional imperative programs are composed of routines that call each other: when a call is made, the caller constructs a data packet (the calling sequence) and transfers control, along with the data packet, to the called routine. When the called routine is done, it constructs a data packet to pass back and transfers control back to the caller.
In data flow programs the “routines” do not call each other. Instead they are activated by the run time system when there is input for them; when they create outputs the run time system takes care of moving the output to the destination input areas. When the “routines” are done they transfer control back to the run time system.
One difference between traditional programs and data flow programs is that traditional programs use LIFO semantics whereas data flow programs use FIFO semantics. That is, a traditional program puts data on a stack and gets data back on the same stack. In data flow programs each element gets data from a queue and puts data to other queues.
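The contrast can be sketched in a few lines of Python. This is an illustrative sketch, not the semantics of any particular data flow language:

```python
from collections import deque

# LIFO: a nested call "pushes" arguments and gets the result back
# on the same (implicit) call stack.
def double(x):
    return 2 * x

assert double(double(3)) == 12

# FIFO: a data flow element reads from an input queue and writes
# to an output queue; results come out in arrival order.
inbox, outbox = deque([3, 4]), deque()
while inbox:
    outbox.append(2 * inbox.popleft())

# outbox now holds [6, 8]
```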
Another difference is that the connectivity of traditional programs is deeply embedded in the code: to pass data from A to B, A calls B, so the caller has to specify where the data goes. In data flow programs the connectivity can be separate from the code. A does not pass data directly to B; instead it passes data to the run time system, which in turn passes the data to B. The sending element does not have to specify where the data goes.
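A minimal sketch in Python may make this concrete. The `Runtime` class and its method names are invented for illustration; real data flow systems differ in many details. The point to notice is that the wiring lives in a table owned by the run time system, not in the element code:

```python
from collections import deque

class Runtime:
    """Toy data flow run time: queues, elements, and a wiring table."""

    def __init__(self):
        self.queues = {}    # element name -> FIFO input queue
        self.elements = {}  # element name -> processing function
        self.wiring = {}    # element name -> downstream element name

    def add(self, name, func, sends_to=None):
        self.elements[name] = func
        self.queues[name] = deque()
        if sends_to is not None:
            self.wiring[name] = sends_to

    def send(self, name, msg):
        self.queues[name].append(msg)  # FIFO: append at the back

    def run(self):
        # Activate any element that has pending input; the run time,
        # not the element, routes outputs using the wiring table.
        progress = True
        while progress:
            progress = False
            for name, queue in self.queues.items():
                if queue:
                    out = self.elements[name](queue.popleft())
                    progress = True
                    if out is not None and name in self.wiring:
                        self.send(self.wiring[name], out)

results = []
rt = Runtime()
rt.add("double", lambda x: 2 * x, sends_to="report")
rt.add("report", lambda x: results.append(x))
rt.send("double", 21)
rt.run()
# results is now [42]
```

Rewiring the network means changing the `sends_to` entries, with no change to the element functions themselves, which is what makes graphical or tabular descriptions of connectivity possible.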
As a result data flow programs can use different languages for the internal implementation of the computational elements and for the connectivity. In fact, it is common for data flow languages to be graphical.
Advantages and disadvantages
What are the advantages and disadvantages of data flow programming?
Some significant advantages:
Some significant disadvantages:
This page was last updated November 1, 2009.