Bubbles are REALLY evil by Cory Doctorow

https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/
Here's a historical comparison that's illuminating: Enron vs Worldcom. Both were monumental frauds, and the CEOs of both companies were convicted and died in disgrace, but the two have very different legacies. Enron, a scam that pretended to secure billions of dollars' worth of new efficiencies through "energy trading" but was actually just engineering rolling blackouts in order to jack up energy prices, left behind nothing.
Well, not quite nothing. Enron did leave behind a little useful residue after it burned to the ground: a giant repository of emails. You see, after Enron went bust, it was sued by its creditors, who demanded access to relevant emails from the company's Outlook server. But the company execs decided they didn't want to spend the money to weed out the irrelevant emails before the court-mandated disclosure, so instead they published all the emails ever sent or received by anyone at Enron, including tons of extremely private, personal, sensitive information relating to Enron's employees and customers:
https://en.wikipedia.org/wiki/Enron_Corpus
This became the "Enron Corpus" and it was the first large tranche of emails that were in the public domain and available to researchers. As a result, it became the gold standard dataset for researchers investigating social graphs, natural language, and many other subjects that subsequently became very important computer science fields and commercial applications.
As legacies go, the Enron Corpus is pretty small ball, and even so, it is decidedly mixed, both because the Enron Corpus constitutes a gross, ongoing privacy violation for a huge number of people; and because a lot of that social graph and natural language work that it jumpstarted has been put to deeply shitty purposes.
Then there's Worldcom: also a gigantic fraud. Worldcom falsified billions of dollars' worth of orders for new fiber optic lines, then dug up streets all over the world to install them. When Worldcom went bankrupt, all that fiber stayed in the ground, and many people are still using it today. My home in Burbank has a 2-gigabit symmetrical fiber connection through AT&T that runs on old Worldcom fiber that AT&T bought up for pennies on the dollar.
So while you have to squint really hard to find any benefit that can be salvaged from Enron, it's really easy to point at Worldcom's productive residue: a ton of fiber and conduit running under the streets of major cities around the world, ready to be lit up and bring the people nearby into the 21st century. Fiber, after all, is amazing, literally thousands of times better than copper or 5G or Starlink:
https://pluralistic.net/2026/04/07/swisscom/#stacked
Even though Enron's CEO Ken Lay and Worldcom's CEO Bernie Ebbers were both convicted after their frauds were revealed, the bubbles never stopped, and indeed, they only got worse. AI is the biggest bubble in human history, worse even than the South Sea Bubble:
https://en.wikipedia.org/wiki/South_Sea_Company
And like those earlier bubbles, some of our modern bubbles will leave behind nothing, while others will leave behind some productive residue. Take the cryptocurrency bubble. Crypto will go to zero, and when it does, all it will leave behind is shitty monkey JPEGs and even worse Austrian economics:
https://www.web3isgoinggreat.com/
As with Enron, you can find some productive residue from cryptocurrency if you look hard enough. A lot of programmers have had a heavily subsidized education in Rust programming and cryptographic fundamentals, both of which are unalloyed goods in our otherwise very insecure digital world.
Some of the underlying mechanisms from the crypto bubble are useful, even without blockchains. Take Metalabel, a system that lets collaborators on creative projects automate how they handle revenues from those projects by plugging DAO-like logic into traditional, dollar-based bank accounts. They're recycling some of the tooling from the crypto bubble to create a very useful utility, without the crypto:
https://www.metalabel.com/
But, as with the Enron Corpus, this is pretty small ball. The world has flushed away hundreds of billions to get paltry millions' worth of value out of crypto; the rest of that value disappeared into the pockets of crooked insiders who defrauded the public into parting with their savings.
If crypto will be Enron-like in its post-bubble life, what about AI? I think AI is more like Worldcom: there's a bunch of useful stuff that AI can do, after all. Take away the bubble and we'd call the things AI can do "plug-ins" and some people would use them, and others wouldn't, and some of those uses would be productive, and others would be foolish, but we wouldn't bet the world's economy on them, nor would we squander our last dribbles of potable water to cool their data centers.
After the AI bubble pops, there will be a lot of durable residue. The data centers will still stand. The GPUs will still be there, and if we don't "sweat the assets" by running them as hot and hard as they can tolerate, they won't burn out in 2-3 years. There will be lots of applied statisticians, skilled data-labelers, etc., looking for work. And there will be lots of open source models that have barely been optimized (why make an open source model more efficient when you're raising capital based on the promise of outspending everyone else in order to dominate a world of ubiquitous, pluripotent, winner-take-all centralized AI?):
https://pluralistic.net/2025/10/16/post-ai-ai/#productive-residue
That's a situation not unlike the post-dotcom bubble of the early 2000s. Almost overnight, the legion of humanities undergrads who'd been treated to subsidized training in Perl, Python and HTML found themselves looking for work. Servers could be purchased in bulk for pennies on the dollar (with user data still on them!). I bought a "dining room set" of six $1,000+ fancy office chairs for $50 each (still wrapped in plastic!) from a dotcom founder who was selling them on the sidewalk out front of his failed startup's office in the Mission. He offered to sell me ten lifetimes' supply of branded t-shirts for $20. I turned him down.
That was the birth of Web 2.0. All of a sudden, people who wanted to make real things that were good could do so, because they could find skilled workers, hardware, and office space at such knock-down prices that they could be funded out of pocket or put on a credit card. People got to pursue the web they wanted, free from asshole bosses and VCs. Not everything that got built in those heady days was good, but many good things got built.
I can easily imagine that the post-bubble AI scene will produce benefits comparable to Web 2.0: projects built by and for people who want to do useful and fun things, without being distracted by the mirage of illusory billions promised by the stock swindlers who created the bubble.
I can easily imagine that I will find some of those post-bubble tools useful, and that in 20 years I will still be using them, just as today, I am still using some of those early post-dotcom bubble services and tools.
And despite all that, IT IS NOT WORTH IT.
The residue that is left behind by every bubble is subsidized, but that subsidy doesn't come from the deep-pocketed investors who are gripped by "irrational exuberance." It comes from mom-and-pop, normie, retail investors who have been tricked into giving their money to the insiders who inflated the bubble.
From Worldcom to Enron, from crypto to AI, the point of the bubble wasn't ever the residue or lack thereof; it was a transfer from working people to crooks. Bubbles are a system for moving the painfully sequestered life's savings of people who do things to people who steal things.
Since the Carter years, workers have been forced to flush their savings into the stock market, after the traditional "defined benefit pension" (which guarantees you an inflation-adjusted sum every month until you die) was replaced with 401(k)s and other "market-based pensions" (where you only get to survive after retirement if you bet correctly on the movement of stocks):
https://pluralistic.net/2022/05/29/against-cozy-catastrophies/
Despite this having all the appearances of a rigged game (finance industry insiders are always going to be better at betting on stocks than teachers, nurses, janitors and other productive workers), proponents of this system always insisted that workers weren't really the suckers at the table. But the stock market is like Kalshi or Polymarket in that one bettor's losses are another bettor's gains, and in those markets, nearly all the money is harvested by less than 1% of bettors:
https://www.coindesk.com/markets/2026/04/29/a-tiny-group-is-winning-on-polymarket-as-under-1-of-wallets-take-half-the-profits
Somehow, supposedly, we could beat those insiders and survive into our old age without having to eat dog food or become a burden on our kids by betting on the whole market, through index-tracker funds:
https://pluralistic.net/2022/03/17/shareholder-socialism/#asset-manager-capitalism
Supposedly, this would "diversify" our portfolios, which would insulate us from risks we could not understand, much less estimate. But thanks to private equity and the AI bubble, betting on "the whole market" is basically "betting on AI." 35% of the S&P 500 is tied up in seven AI companies, which are engaged in the obviously fraudulent (and Worldcom-adjacent) practice of passing the same $100b IOU around really quickly and pretending it's in all their bank accounts at once:
https://www.fool.com/investing/2025/11/05/ai-growth-stocks-is-there-still-room-to-run/
When the AI bubble pops, it will vaporize (at least) 35% of the US stock market and wipe out everyday savers who have been swindled into betting their futures on AI, based on the fraudulent representations of AI pitchmen. Millions of people who worked hard all their lives and deprived themselves of small comforts in order to save for their retirement will be wiped out. They will be made dependent on the Social Security system that Republicans are determined to starve into bankruptcy and then turn into (yet another) "market-based" system, one whose benefits you will be required to convert into chips at the stock market casino, where you're up against professional players who hold all the cards:
https://www.newsweek.com/major-social-security-change-proposed-to-build-wealth-11727844
Annihilating a third of the stock market will have severe knock-on effects, even though the median US worker only has $955 saved for retirement:
https://finance.yahoo.com/news/955-saved-for-retirement-millions-are-in-that-boat-150003868.html
Because wiping out the life's savings of everyone else will tank consumption for a generation. Retirees who have to sell their family homes to pay their medical bills won't be buying breakfast at the local diner or catching a Tuesday night movie. They won't be indulging their grandkids with nice birthday presents or helping their own kids buy their first home.
Worse still: the only thing our society knows how to do about economic catastrophe (for now, anyway) is to impose brutal austerity, and austerity drives voters into the arms of fascist strongmen, who blame all their woes on a scapegoated minority in order to win office, and then steal everything that's not nailed down:
https://pluralistic.net/2026/04/12/always-great/#our-nhs
Which is all to say, there's a world of difference between recognizing that the AI bubble is the superior sort of bubble, in that it will leave a productive residue, and endorsing the AI bubble as a productive or morally acceptable way to produce that residue. It's one thing to anticipate salvaging something useful out of a catastrophe, and another thing altogether to deliberately induce or prolong that catastrophe so as to maximize the amount of salvage.
The swindlers who created this bubble are crooks who have set out to destroy the futures of a generation of savers. They are monsters, and their bubble needs to be popped as quickly as possible.
https://pluralistic.net/2026/05/07/dump-the-pumpers/#alpo-eaters-anonymous
highplainsdem
(62,923 posts)
... done when the bubble bursts.
He's wrong to think there will be useful residue left by generative AI, because it will still be built on theft, it will still hurt the incomes of those whose work was stolen to train AI to replace them (and that is just as harmful as money lost by small investors), it will still harm education, it will still dumb down users, it will still harm the natural environment, and it will still pollute our information ecosystem with AI slop.
dalton99a
(95,130 posts)

AZJonnie
(3,971 posts)
There are thousands of companies using custom models built to do business-specific tasks that are in NO WAY trained on stolen data. When you talk to the chatbot at the cable company or whatnot, that is AI, but it's not any of the big AI companies' models that used stolen data. That AI just needs to know enough to do its job (which admittedly they often do fairly poorly). The shop where I work put AI into a project we did for a client, trained on NOTHING but the company's own data and information, in order to detect fraudulent referrals (this was for a company that pays out for "affiliate links").
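Purely as illustration (not the poster's actual system): "trained on nothing but the company's own data" can be as plain as an ordinary supervised classifier fit to the client's own referral logs, with no outside or scraped data involved. Here's a minimal sketch in Python, assuming pandas and scikit-learn are available; the file name and feature columns are hypothetical, invented only to show the shape of the approach.

# Minimal sketch: a referral-fraud classifier trained ONLY on a company's own
# historical referral records. Nothing here touches external or scraped data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# In-house data only: each row is one affiliate referral the company has
# already reviewed, labeled 1 (fraudulent) or 0 (legitimate).
referrals = pd.read_csv("internal_referrals.csv")  # hypothetical export

features = referrals[[
    "clicks_before_signup",       # engagement prior to conversion
    "seconds_from_click_to_buy",  # suspiciously fast conversions
    "payout_amount",
    "referrals_from_same_ip_24h",
    "account_age_days",
]]
labels = referrals["confirmed_fraud"]

# Hold out a test slice of the company's own data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Check performance on held-out internal referrals before wiring the model
# into the payout-approval flow.
print(classification_report(y_test, model.predict(X_test)))

Nothing in a setup like this requires a large language model or web-scale training data; the entire provenance question reduces to the company's own records.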
That's what you're missing in this equation
highplainsdem
(62,923 posts)
... models that were trained on stolen data. Open source genAI models were also illegally trained. My understanding is that businesses using genAI pay for AI services, partner with AI companies, or use open source. Having a customized genAI model doesn't mean it wasn't illegally trained.
That's genAI, though, and not all AI is genAI.
The program you mentioned for detecting fraudulent referrals doesn't sound like genAI.
AZJonnie
(3,971 posts)
Despite Sam Altman's claim in court along the lines of "our product HAS to be allowed to (steal copyrighted materials)", that claim was specific to the product that HE wanted to make, to have the capabilities he wanted it to have, and to make it competitive (in no small part, I imagine, "because China", and all that phrase entails in this context).
But there are foundational models out there that are more narrowly trained on ethically-sourced data.
Prompt: What I would want to use is an AI system I *KNOW* was not trained using unethical means, like pilfering copyrighted materials like songs and literary works that the artists were not compensated for. Is there such a thing? Or is one stuck starting with models of that sort, unless they make an enormous expenditure?
Answer from Claude:
The Uncomfortable Reality
* OpenAI, Meta, Google, Anthropic: all facing ongoing litigation over training data. The NYT lawsuit against OpenAI, the authors' class actions, and the music industry suits are all live cases.
* "We'll ask forgiveness later" was essentially the implicit industry strategy for the first wave of foundation models.
* Truly knowing what any model was trained on is difficult because training data documentation ("datasheets" ...)
Cleaner Options That Exist
Fully licensed/ethical training data models:
* Bloomberg GPT: trained exclusively on licensed financial data, one of the cleaner provenance stories
* Adobe Firefly (image generation): explicitly trained only on licensed Adobe Stock and public domain content, specifically because Adobe knew their customers needed legal cover
* Fairly Trained: an actual certification organization that audits AI companies for licensed training data; worth checking which models carry their certification
Open source with documented data:
* EleutherAI models: trained on documented public domain datasets
* Falcon models: from the UAE's Technology Innovation Institute, with published data cards
* Some Llama variants: fine-tuned on documented clean datasets, though Meta's base training has its own controversies
The Honest Assessment
For a business fraud detection use case specifically, you'd likely be fine-tuning primarily on your own proprietary transaction data anyway, meaning the base model's training controversy becomes somewhat less relevant, since your domain-specific layer is entirely clean. The base model is just providing general language/reasoning capability, not the fraud-specific knowledge.
But if clean provenance all the way down is a genuine requirement (either ethical or legal), Fairly Trained-certified models are probably your most defensible starting point right now. The market for provably ethical training data is growing fast precisely because businesses are starting to ask exactly the question you're asking.
Any assertion along the lines that there's basically no way to make a working AI model that's not trained via "illegal" means is not quite accurate. Also, the "legality" is still to be determined, and almost certainly the matter will end up settled in civil courts, with monetary awards rather than criminal charges. So I think "unethically" is a better descriptor, but to each their own.
Anyways, there IS a means to have AI, even GenAI, that doesn't rely on stealing copyrighted materials (at least not purposefully), and if these companies get stuck with huge payouts because they LOSE these ongoing cases, then things will change. To be very clear, I WANT THEM TO LOSE, BIGLY, just as (obviously) you do.
I guess you might say I'm more on the bandwagon of "it should be torn down and done morally correctly" than "all genAI must die", but again, to each their own
highplainsdem
(62,923 posts)
... Midjourney and other image generators:
https://finance.yahoo.com/news/adobe-ethical-firefly-ai-trained-123004288.html
I don't know any artists opposed to AI who consider Adobe Firefly ethically trained. I know one well-known artist who left Adobe over this:
https://accidental-expert.com/p/the-bob-ross-of-adobe
https://www.reddit.com/r/Adobe/comments/1d6q0v0/kyle_webster_leaves_adobe/
EleutherAI was using The Pile for training and it included copyrighted work. Last June they released Common Pile v0.1 which supposedly doesn't have any copyright issues, but I haven't done any more reading on it.
https://en.wikipedia.org/wiki/The_Pile_(dataset)
Note that it isn't as good as the illegally trained frontier models. Of course it wouldn't be - generative AI works as well as it does at its best only because so much intellectual property was stolen.
And btw, although a lot of the code used by AI companies to train AI was online under a Creative Commons license, the lack of attribution and other violations of that license would, I think, make the AI companies' ripping off of that IP illegal as well. I saw that point made by an AI critic recently.
Anyway, the vast majority of generative AI models are illegally and unethically trained.
Ideally, every such AI model would be destroyed, and AI companies would be forced to start over using only what's in the public domain, and what they've obtained clear legal permission to use. And that would immediately make it obvious that the industry's worldwide theft of intellectual property was what created whatever value was in genAI.
usonian
(26,408 posts)
Joe investor can't or won't go short, and pays for the rich to get richer.
It's the new old religion.

And as the Beatitudes say:
Blessed are the meek: for they shall inherit shit.

Fiendish Thingy
(23,908 posts)
As bad as it was, we survived the 2008 crash.
At the beginning of the year, I did move money out of some of our tech-heavy funds and into more diverse funds, including international funds.
edhopper
(37,494 posts)
... Obama and the Dems in power soon after. With Trump we are looking more like 1929 than 2008. Expect years to recover.
hunter
(40,826 posts)
... putting it to use for military purposes and surveillance, foreign and domestic.
Privacy and anonymity will no longer exist.