I wanted to visit a few topics of personal interest to me, several of which seem to have received little or no attention in the LR. Before launching a thread of this kind, I’d like to credit Fatmouse, the undisputed master of the “Random Thoughts” genre. If this potpourri of ideas stimulates some interesting discussion, then I may offer further editions of it from time to time. If not, then the shortcomings should be ascribed to me, not to the Rotund Rodent.
So is the Singularity still getting nearer? I’m referring, of course, to Ray Kurzweil’s book *The Singularity Is Near*, published in 2005. The concept of a technological Singularity, for those unfamiliar with it, was popularized by Vernor Vinge (one of my favorite SF authors, BTW) back in the 1980s. Kurzweil focuses on the enormous hyper-exponential growth of technology–particularly the synergistic revolutions in GNR (genetics, nanotechnology, and robotics driven by artificial intelligence). Extrapolating mathematically from current trends (think of Moore’s Law, for example), he predicts that by some point in the near future–probably around 2045–technological progress will be accelerating so quickly that the change will no longer be assimilable by a merely human mind, leading to a kind of phase change in human civilization. On the surface, the notion sounds pretty “out there,” but Kurzweil supports his case with a lot of strong references and some fairly convincing graphs.
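To give a feel for the kind of extrapolation involved, here’s a toy sketch of Moore’s-Law-style compounding. The starting value and the two-year doubling period are my own illustrative assumptions, not Kurzweil’s actual figures:

```python
# Toy extrapolation in the spirit of Moore's Law: assume a quantity
# doubles every two years and project it forward in time.
# Starting value and doubling period are illustrative assumptions.

def extrapolate(start_value, start_year, end_year, doubling_years=2.0):
    """Project exponential growth from start_year to end_year."""
    elapsed = end_year - start_year
    return start_value * 2 ** (elapsed / doubling_years)

# Example: something indexed at 1.0 in 2005, projected to 2045.
growth = extrapolate(1.0, 2005, 2045)   # 2 ** (40 / 2) = 2 ** 20
print(f"Growth factor by 2045: {growth:,.0f}x")  # roughly a million-fold
```

The point of the exercise is just that a steady doubling cadence, continued for four decades, yields about a million-fold increase–which is why small disagreements about the doubling period translate into big disagreements about the timetable.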
In any case, I’m eagerly awaiting further books from Kurzweil. In his opinion, economic ups and downs do not significantly affect the long-term trends, but I wonder just what kind of impact a very severe downturn like the one we are now experiencing might have.
I have to confess to a guilty secret here. I’m afraid I was personally delaying mankind’s progress toward the Singularity until some time last summer, when I became the last person on the planet to acquire a cell phone. Speaking of cell phones, BarryP wrote in a recent thread: “there may be no objective external universal standard for how to answer a cell phone, but everytime I do I say ‘What up, dog?’” Remind me never to call Barry on his cell.
Despite my humorous asides, I take Kurzweil’s ideas seriously, although I don’t necessarily accept his timetable for the progression. He foresees that in the near future most of our material goods will be produced by robotic processes, probably manipulating nanomaterials. Mass production will then become so cheap that economic value will be concentrated principally in the production of new ideas–i.e., recipes for production. If he’s right, then those Slowtwitchers who lament the decline in conventional manufacturing in the US may be completely off base.
Kurzweil also predicts that radical life extension will become available in the decades preceding the Singularity–in other words, in the lifetimes of many or most of us. Younger Slowtwitchers will almost certainly be able to benefit from it. Those of my age (Kurzweil and I were both born in 1948) are in a “cusp” generation: We may well be able to realize the benefits, depending largely on how well we take care of ourselves today. Personally, I’ve made a lot of lifestyle adjustments toward that end. I think that financial preparedness is also essential, since you can be sure that the most advanced procedures and treatments won’t be paid for by National Health Care, and they may not even be legal in the US.
Some of the biological advances Kurzweil projects will also have an impact on the sport of triathlon–but I’ll leave that for a future thread.
On another topic, I’ve noticed that ambiguities of language lead to a lot of unnecessary acrimony in the LR, and occasionally in the Main Forum as well. Time after time, posters will argue heatedly about some particular phrase–call it X–without ever recognizing that they don’t mean the same thing by it. One of them may have in mind X1, and the other is thinking of X2. If they were to define their terms unambiguously, they might well agree that X1 is a good thing, whereas X2 is not. But since they don’t, the debate eventually degenerates into something like this, with the mental image X1 or X2 shown in brackets:
Poster 1: Obviously, X [X1] is desirable.
Poster 2: Do you realize how dumb that sounds? Who in the world could possibly want X [X2]? It would obviously lead to Y.
Poster 1: What the fuck does Y have to do with X [X1]? I never said a damn fucking *thing* about Y. Drop that bullshit strawman argument. Any idiot can see the value of X, with the exception of yourself, of course.
Et cetera, ad nauseam. Some of these ambiguities may arise from poor language skills of a few posters, but many of them are simply inherent in the English language. Obviously, the best way to avoid getting caught in this kind of endless-circle debate is to become attuned to ambiguities and then to seek to eliminate them as early as possible by defining one’s terms clearly.
For example, let X = “imposing one’s beliefs on others.” Most people seem to agree that X is a Bad Thing. But just what does one mean by X? I can think of at least three very different meanings:
X1 = thinking that other people ought to behave in a certain way. Example: I think (although I may not say anything) that a friend of mine ought to quit smoking.
X2 = telling other people that they ought to behave in a certain way. Examples: I suggest to my friend that it might be advisable to lay off the cigarettes, or I write a newspaper article on smoking and mortality.
X3 = forcing other people to behave as one believes they should. Example: I and others, through an institution of government, impose a special tax on smoking or seek to outlaw it entirely.
Conceivably, any or all of the above three could be construed as “imposing one’s beliefs on others.” But one can regard some of them as a Bad Thing and others as perfectly acceptable.
As I see it, if you see X1 as a Bad Thing, you are going to be very unhappy in life unless you are in possession of some kind of mind-control technology.
If you see X2 as a Bad Thing, you may want to outlaw it entirely, except that that would seem to eliminate freedom of speech and freedom of the press. Alternatively, you might see X2 as a Bad Thing, but not as something that should be outlawed. Paradoxically, though, to be consistent you could never criticize other people for doing X2, because then you’d be doing it yourself!
The X3 category, which moves beyond speech into action, needs some further clarification. I might build a fence around my property in order to prevent other people from stealing my livestock. Does that mean that I’m “imposing my belief” that stealing is wrong on others? To me, that seems a stretch. My beliefs about what is morally proper for others don’t really have anything to do with why I built the fence. After all, I also expect the fence to keep out a marauding bobcat, and I don’t view a bobcat as capable of making ethical choices at all. Generalizing from this example, it seems to me that in evaluating whether X3 is really a Bad Thing, you need to confine the discussion to cases where the force involved serves some purpose beyond the defense of life, liberty, and property. In other words, some coercion against nonaggressive people must be involved. But in that case, it also seems to me, the real evil consists not in the imposition of a belief system per se, but rather in the initiation of force against other human beings.