  • Oxford Economist in the NYT says that AI is going to kill cities if they don’t prepare for change. (Original, paywalled)

    I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, which are if anything more important than technological change alone.

    In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being “sprawling, unionized compounds” (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can’t unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, one that had allowed them to support themselves and their families at a certain quality of life, was still gone.

    This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters, then it becomes largely irrelevant whether LLM-based “AI” products and services can actually perform as advertised. Rather than being the central cause of this disruption, the technology becomes the excuse, and so it just has to be good enough to sustain the narrative. It doesn’t need to actually be able to write code like a junior developer in order to change the senior developer’s job into editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that things aren’t going to “snap back” when the AI bubble pops, because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.

  • Having made the very poor decision to wade through all of that, and taken the necessary nap to try and let my brain stop overheating from the strain, here’s what I’m left with:

    This gets at part of why the TESCREAL bundle is such an awkward frame to work in. Emile Torres and the other writers who have broken it down do a very impressive job of drawing connections between the different members of the bundle, not only through ideological consistencies and the historical development of a body of work but through direct links between people and organizations that eventually led to this bizarre but influential sci-fi eschatology where our most important moral duty as a society is the development of posthuman AI that can move us one step closer to having forty gazillion simulated “people” doing who knows what in their Dyson spheres until the last sun goes out. Unlike most millenarian movements, the people who work to advance these ideas don’t (or at least didn’t) have a central organization or a single ideology, so you can’t just criticize LessWrong or Effective Altruism in the same way that you could criticize the Branch Davidians or Heaven’s Gate. And that really does feel like the most relevant point of comparison here: a terrifyingly large share of our collective money and power is controlled by people who seem to adhere to a bizarre secular apocalypse cult, but that cult doesn’t have a name because these people don’t organize that way. Describing the TESCREAL bundle does an admirable job of naming the problem and constructing it from the ground up, which is honestly a far more “good faith” handling of their belief system than any alternative I can find.

    The most relevant point of comparison I can think of is the idea of “leaderless resistance” in both activist and terrorist circles. Even though you have a bunch of people who plan and take actions to advance their shared beliefs, they recognize the vulnerability created by doing so through an explicit hierarchical organization, so they don’t create one. Unlike the Klan or other terrorists, TESCREAL is able to use celebrity and public communities as its points of recruitment and activity rather than drawing media attention through atrocity, but the same ambiguity and pattern of disavowal seems to be at play in how the network operates. Anything too far outside of mainstream acceptability can be disavowed by LW as a specific organization or by Elon and Thiel as specific individuals, even as they’re all broadly on the same “side” of the issue. TESCREAL is an attempt to name that “side” in a way that prevents this. People can argue whether or not they or their faves are adherents of TESCREALism, but not the existence of TESCREALism.

    However, the fact that it’s a constructed bundle, rather than a preexisting flag that these people have claimed explicit allegiance to, makes the attempt to describe the problem look like a bad-faith effort to construct an enemy where none exists. And that appears to be what the R9PRESENTATIONAL bundle (even more awkward than TESCREAL! Good job!) is trying to do. Most of the bundle’s elements don’t refer to specific ideologies so much as adjectives that can apply to a whole host of different activities and organizations. Transhumanism, for example, is a complete and specific structure of beliefs, while “Relational” is an attribute of many different ideologies. And while I think the idea is that the underlying bundle views all of these qualities as good, the central thing he’s trying to describe already has names like Humanism, Environmentalism, Socialism, Anti-capitalism, Luddism, and so on. I think the problem is that the author doesn’t want to demonize any of those actual ideologies that oppose TESCREALism, either explicitly or incidentally, because they’re more popular and powerful, and because rather than being foundationally opposed to “Progress” as he defines it they have their own specific principles that are harder to dismiss.

    Most of the connections that the writer here draws are also well outside of living memory, while the oldest elements of TESCREAL appear to date back to cyberpunk science fiction in the 1980s and the surrounding conversations about technology and the meaning and importance of humanity. The defining elements came together over a period of decades, not centuries. While some of that was writers building on a body of knowledge and theory, that just brings us back to the end result where the central idea existed but didn’t have a name, so one had to be constructed for it by naming its constituents and ideological forefathers. By contrast, R9PRESENTATIONALism seems to have its “real” roots in obscure or unpopular theological disputes in the early 19th century. Even if those disputes did have some impact on the intervening history of thought, naming and outlining them while avoiding talking about anti-capitalism and environmentalism as central ideas of the tech backlash makes the attempt to construct a category very transparent. The author doesn’t want to be anti-socialist or anti-environmentalist, but does want to do the tech thing that socialists and environmentalists are criticizing, so he needs to reframe those criticisms as arising from somewhere else that he can more comfortably position himself against.

    This, combined with the emphasis on opposition to postmodernism, means that, whether the author intends it or not, we’re very likely to end up going down some weird roads with this. I’m not sure if we’re going to get Jordan Peterson ranting about “Postmodern Neomarxism” or the full Alex Jones thing, but just like those two zoo exhibits, it should be fun to watch from outside the enclosure.

  • I know this is a shitpost, but I think we should probably spare some thought for the connection between Adam Weishaupt and modern technofascist grifters, in the sense of trying to profit from building a social club/identity around a set of ideas that are broadly popular among a subset of wealthy elites but not necessarily powerful in wider society. Given their shared (professed) allegiance to the Enlightenment and progress, I cannot see him being anything other than proud of what Lighthaven or MIRI have accomplished in the field of separating people from their money.

  • I think the religious angle isn’t a general criticism as much as a counter to the specific narrative that the TESCREAL ideology is somehow rooted purely in logic and realistic evaluations of technology rather than in fantasy and wild speculation. The criticisms of how it rhymes with certain elements of fundamentalist Christianity are usually rooted in this same observation, as well as in the fact that the elements of Christianity that TESCREALism most closely rhymes with are themselves harmful or insane regardless of what kind of wrapper you put them in. Like, both Christian and Singularitarian eschatologies use their faith in apocalyptic prophecies/predictions to devalue action to address very real suffering in the here and now, in favor of trying to improve the lot of humanity in a hypothetical future, which just so happens to involve preaching more Christian/Singularitarian end-times crap and also giving insiders at associated organizations lots of money and power. There’s a leftist version of that too, and it’s also pretty fucked up.

  • […] it actually has surprisingly little to do with any of the intellectual lineages that its proponents claim to subscribe to (Marxism, poststructuralism, feminism, conflict studies, etc.) but is a shockingly pervasive influence across modern culture to a greater degree than even most people who complain about it realize.

    I mean, when describing TESCREAL, Torres never had to argue that its adherents were lying or incorrect about their own ideas. It seems like whenever someone tries this kind of backlash, they always have to add in a whole mess of additional layers about what their interlocutors really believe.

    I’m reminded, ironically, of Scott’s (imo very strong) argument against the NRx category of “demotist” states. It’s fundamentally dishonest to create a category that ties together both the innocuous or positive things your opponents actually believe and some obnoxious and terrible stuff, and then claim that the same criticisms apply to all of them.