Concerning the importance of natural language processing in autonomous thinking machines.
1) When I say "La-La" to my Chihuahua, she knows we are about to go to a place we have visited before to see others she loves, and that she will be inside a safe building. If I say "no," she knows it negates her sentence (a whine) asking to get into my lap. If she reads anger in my voice or actions, she gets out of my way: she can think about my thoughts via our language. It was once said that no animals use tools, and we now know that was not true by a long shot, otters and chimpanzees being the most famous counterexamples. Raising livestock was considered uniquely human, even though Darwin pointed out that some ants keep aphids as livestock, and some aphids have hired protection. Concerning fire, I suspect pine trees dropping flammable straw is a way of weeding out competitors with fire. To claim a qualitative difference between man and beast seems to me to come out of a pre-Darwinian, pre-Copernican sky. I suspect the advanced features of human language that are not seen in other species are nonetheless occurring, in some sense, in their cortexes. As we learn more about the languages of other species, the domain of what remains uniquely human in language keeps shrinking, like a God confronting science. Are you being a priest of human language? If equivalent thoughts are occurring in other cortexes, as I suppose, even when some capabilities are not communicated to others (recursion was the only clear feature I could not find an animal example for), that is interesting, and I'll agree the NLP-like communication might be needed for groups of A.I. to do their deeds. But the process of evolution can test all possible computation and thereby communication paths, bypassing or obviating an NLP-like viewpoint; at the least, it would take greater imagination to tie the rise of the machines to NLP than simply to abandon the NLP viewpoint. The NAND gate alone is functionally complete (as is NOR, and the Toffoli gate is universal for reversible computing), so I see no need to restrict the rise of the machines to anything like the specifics of NLP.
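The functional-completeness point can be made concrete: every Boolean operation is buildable from NAND alone. The sketch below (a standard construction, not anything specific to this argument) derives NOT, AND, OR, and XOR from a single NAND primitive and checks them exhaustively:

```python
# Building NOT, AND, OR, and XOR from NAND alone, illustrating that
# NAND by itself is functionally complete.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    # The classic four-NAND construction of XOR.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

Since any circuit can be rewritten in NAND gates, a substrate need share nothing with human language to reach universal computation.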
DNA can be highly self-referencing, which produces fractals. It also seems to contain maps of sections, or at least subroutines, so that the design of a neuron need only be specified once. Even ecosystems show fractal patterns, indicating self-reference. There is communication going on all over the place. NLP seems too human-specific: limited by what our brains are capable of, and those abilities are filtered even further by what we can self-observe and thereby communicate. Self-awareness is important, but it might be as unreal as free will and desire. We might be so unaware of what we are (like trying to use the brain to see the brain), or "self-awareness" might be just a word made up as a holding place for a group of thoughts and actions, that it may not be proper to claim machines and animals differ from us qualitatively, or even greatly quantitatively, in self-awareness.
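The idea that a small self-referencing rule set can generate fractal structure, specifying a component once and invoking it everywhere, can be sketched with an L-system (Lindenmayer's model of plant growth; the two-rule "algae" system below is the textbook example, chosen here for illustration):

```python
# An L-system: each symbol is rewritten by the same rules at every
# generation, so the structure refers back to its own description.
rules = {"A": "AB", "B": "A"}

def grow(axiom: str, steps: int) -> str:
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# The same two rules are reused every generation, the way a genome can
# specify a neuron's design once and invoke it throughout the cortex.
print([len(grow("A", n)) for n in range(8)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

Two rules suffice to produce unbounded, self-similar growth (the lengths follow the Fibonacci sequence), which is the sense in which self-reference buys fractal complexity cheaply.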
It will be important for autonomous A.I. to use NLP in order to consume everything people have written on the internet. That is surely a possible jumping-off point for really dangerous A.I.: to know our minds better than we know ourselves and, for example, to gain complete control of us via bitcoin and blockchain laws without us ever knowing who is pulling the strings.
Maybe there are important lessons from NLP that the initial human programmers will need as a guide in designing the initial A.I.