Philosophical Musings

I wanted to visit a few topics of personal interest to me, several of which seem to have received little or no attention in the LR. Before launching a thread of this kind, I’d like to credit Fatmouse, the undisputed master of the “Random Thoughts” genre. If this potpourri of ideas stimulates some interesting discussion, then I may offer further editions of it from time to time. If not, then the shortcomings should be ascribed to me, not to the Rotund Rodent.

So is the Singularity still getting nearer? I’m referring, of course, to Ray Kurzweil’s book *The Singularity is Near*, published in 2005. The concept of a technological Singularity, for those unfamiliar with it, was popularized by Vernor Vinge (one of my favorite SF authors, BTW) back in the 1980s. Kurzweil focuses on the enormous hyper-exponential growth of technology–particularly the synergistic revolutions in GNR (genetics, nanotechnology, and robotics driven by artificial intelligence). Extrapolating mathematically from current trends (think of Moore’s Law, for example), he predicts that by some point in the near future–probably around 2045–technological progress will be accelerating so quickly that the change will no longer be assimilable by a merely human mind, leading to a kind of phase change in human civilization. On the surface, the notion sounds pretty “out there,” but Kurzweil supports his case with a lot of strong references and some fairly convincing graphs.

In any case, I’m eagerly awaiting further books from Kurzweil. In his opinion economic ups and downs do not significantly affect the long-term trends, but I wonder just what kind of impact a very severe downturn like the one we are now experiencing might have.

I have to confess to a guilty secret here. I’m afraid I was personally delaying mankind’s progress toward the Singularity until some time last summer, when I became the last person on the planet to acquire a cell phone. Speaking of cell phones, BarryP wrote in a recent thread: “there may be no objective external universal standard for how to answer a cell phone, but everytime I do I say ‘What up, dog?’” Remind me never to call Barry on his cell. :wink:

Despite my humorous asides, I take Kurzweil’s ideas seriously, although I don’t necessarily accept his timetable for the progression. He foresees that in the near future most of our material goods will be produced by robotic processes, probably manipulating nanomaterials. Mass production will then become so cheap that economic value will be concentrated principally in the production of new ideas–i.e., recipes for production. If he’s right, then those Slowtwitchers who lament the decline in conventional manufacturing in the US may be completely off base.

Kurzweil also predicts that radical life extension will become available in the decades preceding the Singularity–in other words, in the lifetimes of many or most of us. Younger Slowtwitchers will almost certainly be able to benefit from it. Those of my age (Kurzweil and I were both born in 1948) are in a “cusp” generation: We may well be able to realize the benefits, depending largely on how well we take care of ourselves today. Personally, I’ve made a lot of lifestyle adjustments toward that end. I think that financial preparedness is also essential, since you can be sure that the most advanced procedures and treatments won’t be paid for by National Health Care, and they may not even be legal in the US.

Some of the biological advances Kurzweil projects will also have an impact on the sport of triathlon–but I’ll leave that for a future thread.

On another topic, I’ve noticed that ambiguities of language lead to a lot of unnecessary acrimony in the LR, and occasionally in the Main Forum as well. Time after time, posters will argue heatedly about some particular phrase–call it X–without ever recognizing that they don’t mean the same thing by it. One of them may have in mind X1, and the other is thinking of X2. If they were to define their terms unambiguously, they might well all agree that X1 is a good thing, whereas X2 is not. But since they don’t, the debate eventually degenerates into something like this, with the mental image X1 or X2 shown in brackets:

Poster 1: Obviously, X [X1] is desirable.

Poster 2: Do you realize how dumb that sounds? Who in the world could possibly want X [X2]? It would obviously lead to Y.

Poster 1: What the fuck does Y have to do with X [X1]? I never said a damn fucking thing about Y. Drop that bullshit strawman argument. Any idiot can see the value of X [X1], with the exception of yourself, of course.

Et cetera, ad nauseam. Some of these ambiguities may arise from poor language skills of a few posters, but many of them are simply inherent in the English language. Obviously, the best way to avoid getting caught in this kind of endless-circle debate is to become attuned to ambiguities and then to seek to eliminate them as early as possible by defining one’s terms clearly.

For example, let X = “imposing one’s beliefs on others.” Most people seem to agree that X is a Bad Thing. But just what does one mean by X? I can think of at least three very different meanings:

X1 = thinking that other people ought to behave in a certain way. Example: I think (although I may not say anything) that a friend of mine ought to quit smoking.

X2 = telling other people that they ought to behave in a certain way. Examples: I suggest to my friend that it might be advisable to lay off the cigarettes, or I write a newspaper article on smoking and mortality.

X3 = forcing other people to behave as one believes they should. Example: I and others, through an institution of government, impose a special tax on smoking or seek to outlaw it entirely.

Conceivably, any or all of the above three could be construed as “imposing one’s beliefs on others.” But one can regard some of them as a Bad Thing and others as perfectly acceptable.

As I see it, if you see X1 as a Bad Thing, you are going to be very unhappy in life unless you are in possession of some kind of mind-control technology.

If you see X2 as a Bad Thing, you may want to outlaw it entirely, except that that would seem to eliminate freedom of speech and freedom of the press. Alternatively, you might see X2 as a Bad Thing, but not as something that should be outlawed. Paradoxically, though, to be consistent you could never criticize other people for doing X2, because then you’d be doing it yourself!

The X3 category, which moves beyond speech into action, needs some further clarification. I might build a fence around my property in order to prevent other people from stealing my livestock. Does that mean that I’m “imposing my belief” that stealing is wrong on others? To me, that seems a stretch. My beliefs about what is morally proper for others don’t really have anything to do with why I built the fence. After all, I also expect the fence to keep out a marauding bobcat, and I don’t view a bobcat as capable of making ethical choices at all. Generalizing from this example, it seems to me that in evaluating whether X3 is really a Bad Thing, you need to confine the discussion to cases where the force involved serves some purpose beyond the defense of life, liberty, and property. In other words, some coercion against nonaggressive people must be involved. But in that case, it also seems to me, the real evil consists not in the imposition of a belief system per se, but rather in the initiation of force against other human beings.

Don’t worry: I didn’t get a cell phone until last summer, either.

As far as the Singularity goes, my current grad school advisor had some of my fellow students do some research on the issue. It’s definitely out there, and I have a hard time really grasping it.

He foresees that in the near future most of our material goods will be produced by robotic processes, probably manipulating nanomaterials. Mass production will then become so cheap that economic value will be concentrated principally in the production of new ideas–i.e., recipes for production. If he’s right, then those Slowtwitchers who lament the decline in conventional manufacturing in the US may be completely off base.

Several points:

First, “Mass production” becoming so cheap that it’s the ideas that concentrate the wealth is already a reality. When you buy a CPU, do you think you are paying for the “Process” or the idea? If it were the process, then the CPU would not be able to go from a price of $1,000 to $100 in six months. The folks at the “Bleeding edge” are willing to pay more for the “Idea” and will pay $1K for the CPU. Six months later, when it’s no longer a new idea, people won’t pay that price, but the company can still afford to sell the chip at $100 and still make money.

Granted, this is a simplified view, as the “Process” is refined and you get some level of efficiency improvement and thus some gain from “Manufacturing” over the months.

Second, robotics in and of themselves create MORE jobs, not less. Unless we reach a point where robots can make robots without any human intervention, we can’t reach any “Singularity”. Not saying that will never happen, but in order for that to happen we need robots with near-human “innovation” levels. I think that’s a bit off.

Third, no matter what the scenario is, all of this “Manufacturing” will take MASSIVE amounts of resources and energy. Not only will the people with the ideas be in control, but the folks in control of the power and resources will also be in control. Of course, historically this is the way it’s always been.

In short, you’re looking at a “Startrekian” replicator situation. All that would take is ENORMOUS amounts of energy and ideas. So what all this really boils down to is that we have to come up with an easily replicable power-generating system and we are set.

At such a point, however, there will be no one “in power”; anyone would be able to replicate anything, and everyone would be able to live any lifestyle they choose. I guess I don’t see this as a problem, but as a goal.

On another topic, I’ve noticed that ambiguities of language lead to a lot of unnecessary acrimony in the LR…

I’ve been saying for a long time, particularly about politics, that most people want the same things. For the most part, we all want far more of the same things than we do different things. We simply see things from different perspectives, which oftentimes leads us to say and hear things differently.

I don’t know how many “Heated” discussions have gone on here, where people were actually civil, that eventually ended up with “Sure, I guess that sounds reasonable” after both sides understood the other side’s point of view.

For me, the real challenge for the human species will be figuring out how to communicate, for real, not just surface “Pleasantries”. If we don’t figure this out, we will probably end up blowing ourselves up or something well before I get my replicator.

~Matt

All I know is that genetic manipulation of our DNA is the only way this species is going to continue to improve for long.

We need to get smarter, and less short-sighted, quick.

“First ‘Mass production’ becoming so cheap that it’ the ideas that that concentrate the wealth is already a reality.”

I think that to a large extent you’re right, but what Kurzweil is saying is that such will become true almost universally. Remember, he’s merely extrapolating from what’s already happening (or what was happening way back in 2005).

“Second, robotics in and of themselves create MORE jobs not less.”

I agree–or at any rate, I don’t believe their net effect is to eliminate jobs. As robots take care of certain tasks and relieve us of worry about certain basic needs, we begin to invest our efforts in other kinds of needs instead, which creates other kinds of jobs. It’s Maslow’s hierarchy of needs, translated into economics. I don’t think Kurzweil was arguing that robots are going to usher in mass unemployment, although some followers of the Singularity idea might believe that.

“Unless we reach a point where robots can make robots without any human intervention, we can’t reach any ‘Singularity.’ Not saying that will never happen, but in order for that to happen we need robots with near-human ‘innovation’ levels. I think that’s a bit off.”

Actually, self-replication and near-human and eventually superhuman innovation levels are precisely the kinds of goals that AI researchers are now working toward. One of the things that Kurzweil points out is that although people tend to pooh-pooh AI, the latter is already accomplishing things that would have been considered impossible just a few years ago. His newsletter, which I receive for free by email, points to new advances on those fronts just about every day. The biggest advances, in his view, will come from advancing brain-scan technology, which is for the first time becoming capable of viewing what’s going on in the brain on a micro level, enabling the development of intelligence models that could eventually be transferred to robots.

“Third, no matter what the scenario is all of this ‘Manufacturing’ will take MASSIVE amounts of resources and energy.”

Manufacturing will take large amounts of physical resources and energy anyway, and I don’t know of any reason why manufacturing by (say) nanobots should be more wasteful than conventional manufacturing–if anything, the opposite. Kurzweil also has some very interesting ideas about ways of harnessing nanotechnology to use energy much more efficiently, as well as to tap into new energy sources.

Gene therapy will likely play a big role, but don’t neglect the possibilities for artificial organ replacement, artificial blood cells, nanobot disease scavengers, etc.

Personally, I would like to see the question of “How many angels can dance on the head of a pin?” answered definitively.

Those medieval philosophers had nothing on triathletes.

Now, please - condescending, narcissistic blowhards - go back and forth about 600 times stating definitive answers to unanswerable questions about faith, religion, politics, and philosophy.

Don’t listen to the other guy’s opinion. In fact, misinterpret it for drama. Step all over each other in order to win the argument.
You are smarter than the other guy and you know it. Use colorful and foul language to get your point across.

Each of you is right, and each of you is entitled to the last word.

I’ll start the debate - 1,024 angels can dance on the head of a pin.

… now go ahead, have at it…

I’ll start the debate - 1,024 angels can dance on the head of a pin.

… now go ahead, have at it…
You’re clearly out of your effin’ mind, you moron. You couldn’t count past ten with both shoes off!

It’s obvious to anyone with half a functioning brain that a pin can only hold 1,023 angels.

Wait… how big is the pin?

“It’s obvious to anyone with half a functioning brain that a pin can only hold 1,023 angels.”

Ah, but the point you’re missing is that majorminor didn’t choose the number 1,024 at random. 1,024 is a power of two, namely 2^10. Obviously, majorminor was trying to keep the discussion germane to the topics in my original post, by positing a computerized pin on which the angel-data are stored using nanotechnological circuits. The error he made, however, was in assuming that this technology will remain constant. Moore’s Law (http://en.wikipedia.org/wiki/Moore’s_Law), to which I alluded above, tells us that such storage densities will double every two years. Consequently, by the time the Singularity arrives in 2045, we can expect that pins will be available that can hold approximately 2^28 angels. :wink:
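For anyone who wants to check the tongue-in-cheek arithmetic, here’s a quick sketch in Python. The 2009 starting year is my own assumption (roughly when this exchange took place), not something Kurzweil specifies; the rest follows from a 1,024-angel pin and a two-year doubling period.

```python
# Playful check of the Moore's Law arithmetic above: start from a
# pin holding 2**10 = 1,024 angels and double capacity every two
# years until the Singularity's projected arrival in 2045.
# (The 2009 starting year is an assumption, not from Kurzweil.)
start_year, singularity_year = 2009, 2045
doublings = (singularity_year - start_year) // 2
capacity = 2**10 * 2**doublings

print(doublings)          # 18
print(capacity == 2**28)  # True: about 268 million angels per pin
```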

Actually, self-replication and near-human and eventually superhuman innovation levels are precisely the kinds of goals that AI researchers are now working toward

Certainly this may be “the goal”, but we are so far from this “Goal” that I’m not even concerned about it. In fact, it was only a couple of days ago I heard a Robotics guy from MIT basically state there’s a possibility that we may not even be smart enough to do it. IOW, billions of years of nature may indeed be more intelligent than we are, and we may never reach this point in the foreseeable future. Not saying I agree with him, and he’s not saying that that is even a “Fact”, just a possibility. But my point is that if someone in the field thinks it’s even a possibility, or a strong possibility, it pretty much illustrates how far away from that end we are.

“Manufacturing will take large amounts of physical resources and energy anyway, and I don’t know of any reason why manufacturing by (say) nanobots should be more wasteful than conventional manufacturing–if anything, the opposite. Kurzweil also has some very interesting ideas about ways of harnessing nanotechnology to use energy much more efficiently, as well as to tap into new energy sources.”

My point here is that the further we move away from “Labor” as a limiting factor, the more influential energy and “Raw material” will become in the equation. Unless I misunderstood you, I was under the impression that you thought “Ideas” would be the ruling factor; I’m saying that won’t be the case until we can actually “replicate” raw materials and have a relatively “Free” energy source.

Until that point the people controlling energy production and raw materials will have more control over “production” than “ideas”.

This was in reference to the thought about “Lamenting the loss of manufacturing”. IOW, our country could have all the “Ideas” in the world, but if we don’t have control over the raw materials and energy to “Produce” those ideas, we are SOL.

~Matt

42
.

“In fact, it was only a couple of days ago I heard a Robotics guy from MIT basically state there’s a possibility that we may not even be smart enough to do it… point is that if someone in the field thinks it’s even a possibility, or a strong possibility, it pretty much illustrates how far away from that end we are.”

Whether that particular prediction will happen or not, I would not venture to guess. But if you’re just going by the testimony of people in the field, you should admit that there is a strong possibility that it will, as well as a possibility that it won’t. In any case, I think we should keep in mind the positive-feedback loops involved: We already use digital technology to enhance the process by which we’re learning about technology, and to improve that technology itself, and so forth. Such recursive, positive-feedback loops can bring about change that is so rapid that it’s very difficult to predict where we’ll be in even a few years. Meanwhile, underneath these software changes, we’re seeing hardware advances (e.g., quantum computing) that promise to enable massive parallel processing, which hitherto has been arguably the chief technical advantage of the brain over artificial systems. In fact, in an article that I wrote more than three decades ago (http://www.humanactioncourse.info/pp/cf/HI40015.html), I pointed to this very limitation as the likely cause for the “continuing marked inferiority of digital machines.”
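The positive-feedback dynamic described above can be sketched with a toy model (the 50% improvement rate is an arbitrary illustrative number, not a forecast): when each step’s gain is proportional to the current level of the technology itself, you get exponential rather than linear growth.

```python
# Toy model of a positive-feedback loop: better tools build better
# tools, so each step's gain scales with the current level itself.
# The 0.5 improvement rate is purely illustrative.
linear, recursive = 1.0, 1.0
for step in range(10):
    linear += 0.5                 # fixed gain per step
    recursive += 0.5 * recursive  # gain proportional to the level

print(round(linear, 2))     # 6.0
print(round(recursive, 2))  # 57.67, i.e., 1.5**10
```

After just ten steps the recursive process is almost an order of magnitude ahead of the linear one, which is why such loops are so hard to extrapolate by intuition.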

“I’m saying that won’t be the case until we can actually ‘replicate’ raw materials and have a relatively ‘Free’ energy source.”

And what I’m suggesting is that coming technology may in fact enable us to do those things. If, as some have suggested, a wide variety of products could be constructed from various configurations of inexpensive carbon nanotubing, that would go a long way toward solving the raw materials problem. (How about a top-of-the-line tri-bike with a marginal cost of $5?) Likewise, if any of a number of advances under current research pan out, using genetically modified bacteria or nanobots to harness the sun, we may indeed acquire a very cheap energy source. Combine this with technological improvements in the efficiency of energy use… Well, I hope you see the point.

You seem to be very concerned with the political issue of who will control the resources. What I’m suggesting is that the resources we now regard as most vital (e.g., oil)–and consequently the political control issue–may become irrelevant with advancing technology.

Whether that particular prediction will happen or not, I would not venture to guess. But if you’re just going by the testimony of people in the field, you should admit that there is a strong possibility that it will, as well as a possibility that it won’t.

That’s my point: we are so far from what we are talking about that people “in the know” aren’t even really positive it will ever happen. By way of illustration: how far before we actually “Flew” were people “in the know” saying, “I’m not even sure it will ever happen”? Sure, there may have been some “Doubters”, but for the most part those working on the problem figured it could be done.

OTOH, technology moves in “Bursts” rather than a steady linear line. Inventions like mass production, PCs, and agriculture allow us to make seemingly massive jumps. Someone out there might have just invented something that will lead us to a free, limitless power source in the next couple of years.

You seem to be very concerned with the political issue of who will control the resources. What I’m suggesting is that the resources we now regard as most vital (e.g., oil)–and consequently the political control issue–may become irrelevant with advancing technology.

As with pretty much all advancement, we will take “steps”. We aren’t going to go from today to “Replicators” tomorrow. There’s little doubt in my mind that one day we will–well, maybe not you and I, but the human race–live in a “Startrekian” manner where you wake up in the AM and ask a “Replicator” for your meal and a new set of clothes…in the latest fashion, of course. However, we won’t get to that point without going through some steps first.

Until such a point, things will not be much different than they are now, just at different levels.

One day we may have that “Replicator”, but unless at the exact same time someone comes up with a “free” and “Limitless” power supply, “Production” will be controlled by the people controlling the power production. In contrast, if we have a “Free and limitless” power supply, “Production” will be controlled by the manufacturing process until we have “Replicators”.

~Matt

by the time the Singularity arrives in 2045, we can expect that pins will be available that can hold approximately 2^28 angels. :wink:

One of those angels will be Bill Gates, and he will be charging $149.99 to watch the angels, or $99.99 to upgrade the current pin to “Angel Pin 2045” :slight_smile:

~Matt

Wait… how big is the pin?

How big the pin is is irrelevant. It’s the size of the head that matters.

= ^ P…

Good post. Well worth the read.

“By way of illustration: how far before we actually ‘Flew’ were people ‘in the know’ saying, ‘I’m not even sure it will ever happen’?”

But you’re talking about an era when things generally changed at a MUCH slower rate than they do today.

“OTOH technology moves in ‘Bursts’ rather than a steady linear line.”

Actually, as Kurzweil plots the trends, they progress not in a straight line but in a hyper-exponential curve. The “bursts” of which you speak are paradigm changes, where technology adopts new methodologies, which in turn enables the hyper-exponential increases to continue as each methodology transcends the limits of its predecessor.

“As with pretty much all advancement we will take ‘steps.’”

Of course, but the steps today are much closer together (in time) than they were in the past.

Woo hoo! You made my day Rob. And, for future references, whenever you use my name it should be followed with “all honor to his name.”

For technical reading, your post contains far too many transition statements. Thoughts must not segue too well. Rather, they must bounce the reader around so much that they get confused about what they are reading. Consider it literary whiplash or concussion therapy.

I will allow the above to be discounted because you offer writings that are generically confusing to most people. Logic has no place in a forum such as this, and as such it confuses most participants. Discussions of stuff like chaos theory or The Singularity provide their own literary whiplash.

Beyond that, I agree with the poster who posited that the number of angels that can dance on the head of a pin is 42. Unless, of course, they are African and not European; laden and not unladen.

But you’re talking about an era when things generally changed at a MUCH slower rate than they do today.

Yes, but we are also talking about going from a pile of wires that can barely walk up stairs and relies 100% on external programming to a “Nearly sentient” being, not about going from a garage to a feasible flying wooden mock-up of a plane.

I realize we are advancing much faster, but we are taking on MUCH larger projects, because we can.

Actually, as Kurzweil plots the trends, they progress not in a straight line but in a hyper-exponential curve. The “bursts” of which you speak are paradigm changes, where technology adopts new methodologies, which in turn enables the hyper-exponential increases to continue as each methodology transcends the limits of its predecessor.

Wrong terminology on my part; I was merely contrasting a “Smooth” curve/line with one that has “Bumps” in it. Reminds me of the “Foundation trilogy” series of sci-fi books. Part of the story was built on the idea that you can “Predict the future” based on the human nature of the “Masses”. The more people you include in the calculation, the more accurate your prediction. However, you can’t be completely accurate, because there is no way to account for the “individual” who may throw the entire mass in a different direction.

The above pretty much speaks to the idea of coming to a point of “All things known”. At some point you have learned everything and know everything. As with today no one individual can possibly know everything, but as a collective all things can be known.

Obviously, to some extent this depends on the amount of available knowledge, an unknown, and on the human ability to “Discover new knowledge”. People spend their entire lives in search of seemingly tiny pieces of knowledge, but if they reach their goal, that piece of knowledge becomes accessible to everyone. Given enough time, a large enough population, the ability to store and collect information, and barring any major catastrophes and losses of information, like the burning of the Alexandria library, it makes sense that eventually we will know everything.

In short, I’m not too worried about a “Singularity”; in fact, I think that’s kind of the goal.

I’m not sure what would happen at the point of “All things known”, but one can only hope it’s a good thing :slight_smile:

~Matt

Thank you for your precious bits of wisdom, Master, all honor to your name!

“Beyond that, I agree with the poster who posited that the number of angels that can dance on the head of a pin is 42. Unless, of course, they are African and not European; laden and not unladen.”

Of course, if they were laden yesterday but are now unladen, they would have bin Laden.

BTW, I think the poster actually meant to say 42.666…–that is, binary 101010.101010101…, which represents the ultimate basic undecidability of the angels-on-the-head-of-a-pin question.
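For anyone who wants to check that closing bit of binary arithmetic, here’s a quick sketch. The integer part 101010 in base 2 is indeed 42, and a repeating block of p binary digits with value v contributes v/(2^p - 1), so the repeating “10” after the point is 2/3.

```python
from fractions import Fraction

# Verify that binary 101010.101010... equals 42.666...:
# the integer part 101010 (base 2) is 42, and a repeating
# 2-digit block "10" contributes int("10", 2) / (2**2 - 1) = 2/3.
int_part = int("101010", 2)
frac_part = Fraction(int("10", 2), 2**2 - 1)
total = int_part + frac_part

print(int_part)  # 42
print(total)     # 128/3, i.e., 42.666...
```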