Good article on a question that is always of interest in programming-language circles. The modern take, apparently, is "Languages differ essentially in what they must convey and not in what they may convey."
The obvious example would be that in explicitly typed languages, you always have to convey what types your functions expect and return:
int plus(int a1, int a2);
while in an implicitly typed language, you don’t:
def plus(a1, a2)
Another example is that in Ruby, if you want to access an instance variable, you use the '@' sigil: @my_instance_variable. In languages such as Java and C#, by contrast, instance variables are not necessarily distinguishable from local variables. Because that distinction is often valuable to know, coding standards arise to restore it: naming instance variables _foo or mFoo, always referring to them as this.foo, or what-have-you.
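To make the Ruby point concrete, here is a small sketch (the class and names are my own, not from the article) showing how the '@' sigil keeps an instance variable visibly distinct from a local variable of the same name:

```ruby
class Counter
  def initialize
    @count = 0            # '@' marks this as an instance variable
  end

  def increment
    count = @count + 1    # 'count' (no sigil) is a fresh local variable
    @count = count        # writing back to the instance variable
  end

  def count               # a reader method may share the bare name safely
    @count
  end
end

c = Counter.new
c.increment
puts c.count              # prints 1
```

In Java or C#, `count` and `this.count` could silently refer to the same or different things depending on scope, which is exactly why conventions like `mCount` or mandatory `this.` exist.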
Presumably, the upshot of this is that you ought to seek a language in which you must convey the things that you think are universally important.