Can we study programming languages in the context of linguistics? Do programming languages evolve naturally in similar ways to natural languages?
Although full rationality and mathematical consistency are essential to programming languages, there is still a need (especially in modern languages) to make them readable and comfortable for humans.
Are programming languages evolving to become more linguistic and thus more natural? For example, machine code, punch cards and assembly languages have given way to more readable languages like Ruby and Python.
When I say computer languages are becoming more natural, I don’t mean they contain more ‘words we have in English’; I mean they seem to be becoming more like a natural language in terms of the complexity of their grammar and their ability to express meaning (for example, being able to eloquently describe a query to a database in ways that are both rational and humanly understandable).
What do you all think? Are programming languages becoming more like natural languages, and thus becoming applicable to the laws of Linguistics?
Or perhaps languages live on a spectrum, with the extremely rational languages at one end and the more creative ones at the other. Maybe programming and natural languages are essentially the same and both just lie on this language spectrum (their only difference, perhaps, being the ‘thing’ they are trying to give meaning to).
Is there a connection between the (Tower of Babel effect) separation of human languages and that of computer languages? Do they diverge for the same reasons (i.e. to solve different problems within ever-evolving computer systems/culture systems etc.)?
Not really, no. Programming languages have become more like natural languages only in the sense of “words we have in english” (sic).
A key feature of programming languages is that they are not ambiguous. When you write a program and execute it, it has a well-defined meaning, which is its behavior. If you want to write a program that works as intended (a difficult objective), it is important that the behavior¹ of the program be as predictable as possible. In that respect, programming languages haven’t done much to close the wide gap with natural languages.
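A tiny sketch of the point about ambiguity (my own illustration, using Python as an arbitrary example language): where an English phrase like “old men and women” has two readings, a programming language’s grammar pins each expression to exactly one parse. In Python, `and` binds tighter than `or`, so there is nothing left for a reader, or the machine, to disambiguate.

```python
# English "A and B or C" could be read two ways; Python's grammar allows
# exactly one: 'and' binds tighter than 'or'.
implicit = False and True or True      # parsed as (False and True) or True
grouped_left = (False and True) or True
grouped_right = False and (True or True)

assert implicit == grouped_left        # the grammar mandates this reading
assert implicit != grouped_right       # the other reading is a different program
```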
Conversely, there has been work on bridging the gap from the other side: analyzing natural languages with the same tools as programming languages. This field is called natural language processing. Those rule-based approaches have been pretty much discarded in favor of machine learning. I’ll cite a passage from the Wikipedia article that is directly relevant here:
Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. This was due both to the steady increase in computational power resulting from Moore’s Law and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing.
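To give a flavor of those “complex sets of hand-written rules”, here is a deliberately toy sketch: a hand-written context-free grammar for a fragment of English, with a naive top-down recognizer. The grammar and lexicon are invented for the example; real systems of that era had thousands of such rules and still struggled with coverage and ambiguity.

```python
# A toy rule-based parser in the pre-1990s NLP style: hand-written
# context-free grammar rules plus a lexicon, recognized top-down.
GRAMMAR = {
    "S":  [["NP", "VP"]],            # sentence = noun phrase + verb phrase
    "NP": [["Det", "N"]],            # noun phrase = determiner + noun
    "VP": [["V", "NP"], ["V"]],      # verb phrase, transitive or not
}
LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat"},
    "V":   {"sees", "sleeps"},
}

def parse(symbol, words):
    """Yield every remainder of `words` left after matching `symbol`."""
    if symbol in LEXICON:                      # terminal category
        if words and words[0] in LEXICON[symbol]:
            yield words[1:]
        return
    for production in GRAMMAR[symbol]:         # nonterminal: try each rule
        remainders = [words]
        for part in production:
            remainders = [rest for r in remainders for rest in parse(part, r)]
        yield from remainders

def accepts(sentence):
    """True if the whole sentence derives from S with nothing left over."""
    return any(rest == [] for rest in parse("S", sentence.split()))

assert accepts("the dog sees a cat")
assert accepts("a cat sleeps")
assert not accepts("dog the sleeps")
```

Every sentence the recognizer accepts is one a human-written rule anticipated; everything else is rejected, which is exactly the brittleness that statistical and machine-learning methods later displaced.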
One way in which programming is evolving is that as we design larger and larger systems, source code is not always a good way of understanding them. For example, an Intel CPU is one of the most complex objects ever designed by Man, and its “source code” is not just a collection of text files, far from it. But the full design isn’t evolving towards anything resembling a human language either. I don’t know what the appropriate cognitive tools or metaphors are here, and I don’t think anybody knows just yet; ask again in a couple of centuries.
¹ Or rather the set of possible behaviors annotated with the circumstances under which they arise, but that’s only adding one step of indirection in the modeling, so this isn’t really relevant here.