Discussion in 'Critical Discussions Among Proponents and Skeptics' started by Sciborg_S_Patel, Aug 2, 2014.
My bad. I think I misread
Yeah, I was actually defending the welfare state and universal healthcare.
Basically, once these technologies become available, you've got three options: 1. Ban them, thus sending them underground. 2. Leave everything to the free market. 3. Make sure everybody has access to them through universal healthcare.
Hughes realizes that all three options will have some really bad consequences. It's just a question of picking the least bad of the three.
Sciborg, as to the idea that some socialists might want to force everybody to become 'enhanced', I don't know of anybody in this debate who argues for this, though of course there may be some crazy Stalinists out there somewhere.
The more serious worry is that people will effectively be forced into doing it to themselves and their children through social pressure and the need to compete in the economy. That is a big worry.
I was thinking of China.
Though the UHC scenario isn't necessarily much better. If people who get enhancements are more competitive in a job market - especially one severely contracted by advancements in AI & robotics - isn't it hard to escape the momentum of transhumanism? (Think Walmart & McDonalds helping their employees sign up for government benefits.)
The optimistic transhumanist seems to think the UHC option means in the long run everyone gets the H+ package - stronger/faster/better/smarter*. But what if there are trade-offs? Being better equipped to handle Job Type A [possibly] means reducing your capacity for Job Types B & C. A single bad recession could create a subspecies of humanity that becomes "obsolete" after the market shifts.
It seems to me very possible you end up with an "untouchable" class who are the ever-present guinea pigs for new enhancements.
*Sorry, couldn't resist:
I don’t think Walmart’s push for healthcare is a good comparison. That decision was made because Walmart knew it could absorb the cost, being one of the parties shaping the bill, while smaller businesses would be forced to pay out amounts they hadn’t budgeted for, thus hurting Walmart’s competition. Documentaries have already been made by this point about just how little Walmart actually cares about its employees’ wellbeing; I think one of them was called “Wal-Mart: The High Cost of Low Price.”
I may be in the minority here, but I don’t believe in universal health care. Socialized medicine seems to go hand in hand with more restrictions on people’s daily lives (see: the Scandinavian countries, which ban more food additives than countries without socialized medicine). The Cato Institute has also had podcasts where they invite people to talk about their experience with socialized medicine during extreme health conditions, and almost always private healthcare gives better results.
Socialized anything is basically holding you at gunpoint and telling you that you’re spending money on something “for the greater good”, and you’re ever at the mercy of whatever the State wishes to pay for treatments. While a treatment might exist to fix a tooth, the cheapest option is to yank it out and so that is what you will get. The only benefit here is that it raises the treatment floor–and for a lot of people with very minor issues this is good for them–but I still have to pay heightened taxes to support public medicine even if I use private clinics. I think a socialist system is more likely to push people towards a common model of prosthetic simply because it will be cheaper for whoever administers the “universal” healthcare to buy from a single source. This gives one source a tremendous amount of power over people.
In the libertarian model of transhumanism–which seems to be the more common one, subject to confirmation bias of course–the producers of prosthetics are required to compete for value. This means that under ideal circumstances there is no single kind of prosthetic, so no single corporate entity is going to have total domain over millions of people’s bodies.
“Adapt or die” has been held as a creed by the status quo for some time; the problem is that transhumanism places more reward on those who can engineer than currently exists. Jock personalities end up having to acknowledge the brain personalities who produce their equipment in a more direct way, especially since once mechanization starts there is more room for tweaking away a person’s physical troubles. In a sense, people’s bodies become programmable, and this eliminates genetics as the primary decider of people’s lives. That kind of change is huge–so of course it too must be denied.
Right, because this doesn’t already happen. People with terminal illnesses or severely restricting disabilities tend to throw their arms open to anyone who claims to be able to fix them–completely regardless of evidence that the method actually works. And the distribution of wealth already establishes that there is an elite caste holding most of the wealth in existence.
The only difference here is that in the transhumanist world, the cures for said illnesses actually work.
I meant Walmart helping employees sign up for food stamps in order to supplement its cheaper wages. Basically if a corporation can utilize government provided services to up its own profits there's a high probability it will do so. Pushing employees to sign up for augmentations would likely be a no-brainer save perhaps for certain corporations that were owned by a small group of people opposed to transhumanism.
I'm undecided, but I suspect getting deep into this would require a new thread? In any case I think all transhumanist roads lead to negative outcomes (though some might ultimately be positive), so I'm not trying to plug for socialism over libertarianism. In fact I suspect the biggest reasons to be pessimistic aren't solved by choosing one strategy over another, and that both will be tried and shown to be wanting.
I don't know if this would be a necessity under socialist versions of transhumanism, but I agree this sort of thing will likely be a bigger problem for governments attempting to prevent a massive chasm between those who can afford augmentations and those who cannot.
I don't think this really gets around the problem you mentioned - so instead of one corporation it's five? From the perspective of the guy at the bottom of the totem pole it's not clear if oligarchies are better than monopolies.
I don't know if the comparison you're making between the ill [of today] and the underclass I'm predicting is apt. The percentage of the population you're talking about is much smaller than the number of people who can - and thus IMO will - be replaced by machines & AIs. I'm also not talking about curing illnesses but rather altering yourself in some way for the sake of being employable. Perhaps in the long run the technologies become so cheap that everyone gets the best H+ package available, yet it's not clear how long that run would be.
In the meantime, you could have a huge underclass that doesn't have augmentations or at least doesn't have good ones. This seems likely given that improvements in AI + gadgetry will severely reduce the need for humans in service & production sectors. I suppose this might result in more people taking improved birth control and thus lowering the percentage of people in generational poverty cycles. Might be a good long term solution given the expected reduction in per capita benefits as the aforementioned obsolescence of many, many people in the job market occurs.
Ok. I misunderstood.
I’m not certain that the kind of augmentations which would provide an economic advantage would be offered by socialized medicine. Many surgeries for transsexualism are considered “optional cosmetics” except in a growing range of programs that have faced sustained pressure from rights groups. How many social health programs offer state-of-the-art computerized legs versus just shoving a glorified peg leg in place?
I suspect what is more likely to happen is a scenario supposed by Human Revolution: a volunteer soldier gets injured, volunteers for experimental augmentation, and then outperforms normal soldiers. The necessity gets dragged up because people with knowledge implants or faster legs outperform those without. The same problem arose with the advent of phones (and later cell phones): bosses preferred employees who could be harassed 24/7, and so many workplaces now require 24/7 cell phone availability.
Do governments even care? Again, the gap between the rich and the poor is rarely helped by governments. A lot of programs designed to “help” the poor usually end up being boondoggles or causing massive deficits–those deficits are then filled by overtaxing the working and lower classes, and the money almost never comes out of the top (who use tax shelters and/or have legions of accountants).
I think the idea that a government would care anything about augmentation is kind of silly; it would care if augmentation became a threat to the gravy train, and likely nothing more. Maybe a big scare campaign could change that, in the same vein as the craze to ban butterfly knives or drugs using nothing but unsubstantiated claims and fear mongering?
Monopolies and zaibatsus have very well defined and observed problems. An example is how Bell was able to charge practically infinite amounts for phone service, or how IBM did the same by making people purchase mainframes in-house and then charging them licensing for CPU time on top. Both of these practices were eventually killed by backlash. Unfortunately these companies now group up in “associations” and have pretend-competition, which leaves us back in a near-monopoly situation, with a de facto Bell deciding the cost of US internet access. Other countries that aren’t just Bell clones have laughably better internet cost/speed ratios than the US does!
So yes, five oligarchs are better than a monopoly because they generate friction off of each other, and that means jobs for people at the very least. When you have no friction (e.g. a monopoly), innovation stops wholesale and everyone is worse off for it. After all, why would you ever need to make your product better if everyone is legally prohibited from copying it in any way?
Maybe people will stop overpopulating the planet with children who aren’t needed (especially families who can’t support the ones they already have, yet refuse to have protected sex)?
People happily plunge themselves into debt (sometimes over $50k!) to get college degrees, which in a growing number of cases turn out to be ultimately worthless–and thus worthlessly accrued debt. Yet there is no big government-backed solution to crack down on overinflated tuition costs. So evidently, people don’t consider having to take on large debts for nothing more than *potential* wages to be a problem that needs addressing. How is bioaugmentation different?
Most people within a hegemony don’t care about anyone outside of it; I’m sure you can see how many middle-class employed Americans are concerned with starving Ugandan children. Or similarly well-off Europeans, for that matter. Evidently, people as a whole do not appear to care.
One of the reasons we need a benevolent singularity is because it would likely care about said people, and help find a way to do something productive with them. Non-singularity humans seem rather uncaring about people in lower castes, even without technology.
Please try harder...
You do know that about 50% of Americans pay no income tax and that the top 10% of wage earners pay about 70% of the income tax, right?
Not that I think the government's tax policies are marvelous. Just sayin'.
Will have a longer reply, but here's some stuff on the potential contraction in employment that I think will lead to State intervention:
Yeah, fair point about China. I'm probably guilty here of just reading what philosophers in liberal democracies are saying about transhumanism and not looking at the bigger picture.
I agree with Marshall Brain that advances in robotics and automation will probably lead to mass technological unemployment, and so this will eventually lead to a different kind of capitalism with a guaranteed basic income for everyone. See his article Robotic Nation for more on this:
I also think this new kind of society will probably come into being before the transhumanists' 'enhancements' come in, if they ever do, so I think it's a mistake to try to imagine what would happen if enhancements were available in the US right now. It will be a different world if the robotic revolution happens.
To be honest, though, I'm very skeptical about whether these human enhancement technologies will ever work anyway. The idea that you can just turn up the dials for greater intelligence, strength, creativity and memory and turn down the dials for things you don't like strikes me as being too good to be true. Surely aggression, creativity, empathy and envy are linked in all sorts of complicated ways, and screwing around with human nature in this way is bound to have all sorts of unintended consequences.
So that means it's Tuesday in the science lab?
Maybe I didn't express myself very well. Anyway, here's Fukuyama from his article “Transhumanism,” Foreign Policy, Sept-Oct 2004:
Our good characteristics are intimately connected to our bad ones: If we weren't violent and aggressive, we wouldn't be able to defend ourselves; if we didn't have feelings of exclusivity, we wouldn't be loyal to those close to us; if we never felt jealousy, we would never feel love. Even morality plays a critical function in allowing our species as a whole to survive and adapt…. Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome.
I think when it comes to trying to enhance people morally he's got a good point. In the abstract it sounds like a great idea trying to reduce aggression and increase empathy, but we have to be extremely careful with it. Virtues and vices are linked in very complicated ways.
As for the non-moral enhancements like speed, strength and intelligence, it might be a bit more straightforward, but I'm still skeptical about the idea of just being able to turn up the good stuff and turn down the bad stuff.
Fukuyama seems to be jumping the gun there, given he's not making an ontological assertion but merely an appeal to complexity. It's possible that we hit a limit on nanotechnology or some other technology, but assuming we don't, saying "we will never be able to anticipate the ultimate outcome" seems overly bold.
If the world we live in can be mastered by technology (regardless of whether or not said facts are reducible to physics, or even if Idealism is true really) why is the human mind going to remain some kind of mystical black box? I don't even know if we'd need nanotechnology capable of manipulating atoms.
One thing that I'm curious to see is the breakdown of who resists augmentation & on what grounds. I think many would expect the non-spiritual to be more for it and the more religious to be more against it, but I think it'll be more complicated than that. Especially if you can live socially in VR.
Just to be clear, I agree with the transhumanists that many of the arguments put forward by bio-conservatives like Fukuyama are bad arguments. These guys want to say that world-engineering is okay but person-engineering is not, and that treatment is okay but enhancement is not. All of this falls apart under close analysis.
The point I agree with Fukuyama about is that enhancing people will be much more complicated and will take much longer than most transhumanists realize, especially when we get into the realm of moral enhancements. There's going to be massive disagreement about what counts as a moral enhancement and about what counts as a virtue or a vice, and you also have all these problems about unintended consequences and about how our 'good' traits and 'bad' traits are interlinked.
But of course I don't agree with Fukuyama that we should ban all research into this simply because it's really complicated and there might be some bad unintended consequences.
I agree with your point about transhumanism and religion/spirituality. One of my original criticisms of Alex and Skeptiko was that he was trying to link atheism and materialism with technology worship, techno-utopianism and transhumanism. This is ridiculously simplistic, as anybody would know if they were familiar with primitivism, neo-luddism, bio-conservatism, deep ecology, and transhumanism. Techno-utopians can be religious or non-religious, right-wing or left-wing, pro war or anti war, pro psi or anti psi, and so on. It's a very complex landscape.
I just meant this idea of something being impossible because of the brain's complexity didn't ring very true to me. I've heard the same from a friend of mine, Compsci PhD, regarding genetic engineering of babies for desirable traits. I was baffled that he would think such complexity would escape us indefinitely.
Regarding belief, to elaborate I think someone who thinks mind=brain might be more inclined to think humanity's preservation lies in a preservation of biology. OTOH, someone who thinks mind != brain might be more inclined to think humanity's preservation can be accomplished even with radical changes to underlying biology...or even the abandonment of flesh altogether.
Providing a VR outlet that is significantly close to the real world will also encourage people to jack up their flesh in a variety of ways, as your social (and thus "real") life will be conducted primarily online. As this gets cheaper and cheaper to offer, we might even see some "low-res" versions offered by the government.
OTOH, and this might be a more plausible - or at least less sci-fi - scenario, we might see a different kind of trend. If urban farming manages to take off we might see more people unplugging from the 'net and globalization. If the West can't sustain jobs it's unclear anyone else will manage, and so food/clothing/shelter production may end up being more and more localized - your neighbors, your friends, your community. This isn't to say everyone abandons technology, but rather that people minimize their use of it to the point where basic energy needs & medicine account for the majority share of usage. If the Humans Need Not Apply scenario arrives around the same time as a global outbreak of some treatment-resistant disease, this scenario might become more likely than the globalized dystopias.
Anyway, on the subject of AIs smart enough to replace "white collar" professions (including AI programmers - ack!), let's turn to good old Chalmers:
For a start, I think most materialists are emergentist materialists and not mind=brain identity theorists, whether they actually know the philosophical jargon or not.
It's strange you should talk about 'humanity's preservation' since many transhumanists are explicitly not interested in preserving humanity. Many of them think it will be great if humanity is eventually phased out and the superior robots take over.
Also, the transhumanist revolution will mean a massive increase in existential threats to humanity from such things as nanotechnology and genetic engineering of viruses.
It's perfectly consistent for a materialist to think that the best way forward for humanity (and for the planet as a whole) is low-tech, anti GM, anti fossil fuels, anti industrial agriculture and anti human 'enhancement'. Indeed, such people may even get all spiritual about the need for biodiversity and about the importance of other-than-human 'wild places' for our psychological well being. A certain kind of nature worship seems quite appropriate.
I just meant people who think their human-ness is rooted in their biology might be more resistant to extreme augmentation than those who think their human-ness comes from some immaterial aspect. But it could also go the other way around, possibly leading to small communities defined by their beliefs about alteration.
I agree on the point about existential threats, though it all gets a bit wobbly when people start talking about post-Singularity stuff that may never happen. What we can be confident about, IMO, is AI + robotics good enough to gut the job market [and increasing progress in altering humans]. I think believable VR worlds aren't as safe a bet, but it's hard to pinpoint exactly where they would fail. I know some people theorize smell/taste might work at a quantum level, which might put them beyond human tech?
That said I do think the engineered virus thing will be a big problem but then I just finished S2 of Utopia.
It'll be interesting if people can tailor their children's genes to achieve these kinds of gifts...but I suspect that's centuries away.
You mentioned Deus Ex - Have you seen the trailer for Cyberpunk:
The creator of the original RPG is involved, and he wants to include the psychological impact of altering your body. I also think that's something people envisioning a post-Singularity improvement to humanity seem to overlook. In the short run (50-200 years) it seems we'll have a massive contraction of employment, drones flying around killing people, increased surveillance, likely terrorism by "luddites", and increased pressure to alter one's self or offspring - likely via drugs at first, since Xanax/Percocet/Adderall/etc are already commodities used to get through work, then later implants & the like - in the hopes of gaining one of the few remaining jobs.
Alterations to the body are likely to impact how we think about ourselves. I can actually see increases in bodily alterations leading to an increased sense of religion. If your true self awaits after Judgement Day, perhaps it's okay to alter your physical form with grafts and whatnot in this life. To see how people's beliefs are impacted by their day-to-day experiences, look at the rise of Saint Death.
That said, as noted previously I suspect we might see more and more people simply unplugging from the system as they become unable to stay in or enter the job market. If you can grow your own food - or at least do so with help from your neighbors - your need to take part in a super-tech society diminishes. IIRC a few people even started doing this in Detroit? It's not like people couldn't use sports, crafts, and books from the local library for entertainment if they didn't have TV/video games/Internet. Robert Koons thought the waning of materialism would lead to this scenario, but I think that's just the pipedream of someone desperately hoping for Church to supersede State. OTOH, local communities becoming increasingly self-sufficient as a result of a severe contraction in available jobs could lead to such a future.
Is Transhumanism Building a Dystopian Nightmare?
Can transhumanism bring about Humanity 2.0 before scarcity causes global collapse? How much of the Post-Singularity goodness will be useful in reversing this trend? ->
Limits to Growth was right. New research shows we're nearing collapse