James Beshara has a really interesting read on how communication will change and evolve in a post-verbal world, one where brain-computer interfaces like Neuralink can transmit thought between people more directly than the medium of language allows today.
After reading the essay, I wonder whether a person's thoughts, or the neural pathways they activate, would actually make any sense if transmitted directly into another brain, given that everyone has a unique internal set of pathways and framework for parsing and understanding the world. The essay assumes we'd understand each other better and have more empathy, but that seems like a leap. It seems likely the neural link would need its own set of abstractions, perhaps even unique per person, similar to how Google Translate's neural network invented its own internal meta-language.
Idea-viruses that cause outrage in today's discourse have already been weaponized by algorithms optimizing for engagement, and directly brain-transmitted memes seem especially risky for appealing to our base natures or triggering amygdala hijack. But perhaps a feature of these neural interface devices could counteract that, with a command like "tell me this piece of news but suppress my confirmation bias and tribal emotional reactions while I'm taking it in."
Interesting ideas. I keep thinking that as we get closer to solving all the big problems we will realise we didn’t actually want to solve them; it was just a lot of fun trying to solve them.
Your filter idea makes sense, but if we have that capability, wouldn't it quickly become the default? If we can glimpse “correct” thought, why would we want to dip back into tribalistic, flawed daily thought?
Perhaps this is how the AI will take over… because we will convince ourselves humanity is no longer required.
And so, if one had the wherewithal to bring themselves to say,
“..tell me this piece of news but suppress my confirmation bias and tribal emotional reactions while I’m taking it in.”
this would be the human recognizing that there may be biases and reptilian reactions at play… which is a problem. An AI would correct the issue and adopt the new behavior as its default, but humans are creatures of habit.
What the human does with this information would make the case for whether humanity is still required. This is why we must hold ourselves to the highest of standards, or we WILL be phased out for newer, more reliable models.
One book that explores this topic semi-realistically is Lock In by John Scalzi: https://www.goodreads.com/book/show/21418013-lock-in
It deals with a virus that paralyzes people while leaving their full brain capacity intact. The solution is to interface the brain with a physical object so they can still go about living a “human” life. Obviously, once the brain has been interfaced with, it can be hacked…