Last week a reader emailed me:

I have a funny AI thing that happened to me this week; my friends and I were talking about Carvana on Tuesday night, which got me curious and I asked ChatGPT what the equivalent of Carvana today is. ChatGPT told me Opendoor, so as any responsible investors do, we both bought a fair share of Opendoor. The next day the stock went roaring and went up by over 50%.

Nice trade for him; very much not investment advice for you. Opendoor Technologies Inc. closed at $1.04 per share last Tuesday; this Monday it closed at $3.21. (It's down a bit since.) It is a meme stock, and we're in a meme-stock moment. Bloomberg's Subrat Patnaik wrote this morning that "the meme-stock revival looks like it might stretch into another day with GoPro Inc., Krispy Kreme Inc. and Beyond Meat Inc. all surging in early trading," and that "the massive surges in trading volumes and stock prices have been driven by a mix of social media buzz, short squeezes, and technical breakouts, despite little to no change in the companies' underlying business fundamentals." Claire Ballentine and Carmen Reinicke reported yesterday:

Stocks are at all-time highs. Chatter on WallStreetBets is surging. Retail traders are flooding into low-priced shares.

It's not 2021, and the shares of the moment aren't GameStop Corp., AMC Entertainment Holdings Inc. or the now-bankrupt Bed Bath & Beyond. In 2025's meme stock mania, the companies du jour are Opendoor Technologies Inc. and Kohl's Corp. ...

"I've been seeing signs of a 'flight to crap' recently," said Steve Sosnick, chief strategist at Interactive Brokers. "The recent rally, which was largely powered in its initial phase by individual investors buying large cap stocks and major indices, has emboldened many to engage in more risky types of investing."

And the Wall Street Journal reports:

Individual investors are once again loading up on a group of unloved stocks and taking to social media to defend them from the haters and the short sellers. ...

"Let's goo!!" a user named Hot-Ticket9440 wrote on a subreddit forum Tuesday as shares of Kohl's, the department-store chain, surged by nearly 40%. "Max pain on the shorts buy every dip. Together we strong."

"$OPEN has GameStop vibes written all over it," Skip Tradeless wrote Tuesday on X of Opendoor Technologies, the real-estate platform. "WE WON'T STOP UNTIL $82!" ...

Opendoor, which traded under $1 as recently as last week, began to take off after social-media users rallied around the company. Then, hedge-fund manager Eric Jackson said in a July 14 post on X that his firm EMJ Capital has taken a position in the stock. Opendoor notched six consecutive sessions of double-digit percentage gains following his endorsement, and the shares are up 439% in a month.

Here is Jackson's July 14 X thread about Opendoor, which includes this:

Opendoor is under $1. Most investors have written it off. Wall Street isn't even paying attention anymore.

But our AI model at @EMJCapital just flagged it a few weeks ago. It's giving us early $CVNA vibes — and that's not a phrase I use lightly.

Jackson also asked his AI to find him the next Carvana, [1] and his AI also told him Opendoor. Fun coincidence. In the interests of scientific replication, I went to ChatGPT and asked it: "I am looking to buy a stock that might be the next Carvana, something cheap that might get 100x returns. What stocks might fit that profile?" [2]
The answer I got was intelligent, appropriately caveated, and did not include Opendoor, but it did include this framework:

Look for stocks that meet some of these traits:

- Market Cap under $500M (ideally sub-$200M for big upside).
- High short interest (squeeze potential).
- Turnaround story or emerging dominance in a niche.
- Asset-heavy with leverage, but showing improving margins.
- Founder-led or charismatic management.
- Misunderstood or hated by the market.

What happened here? I don't think I exactly asked ChatGPT "find me some meme stocks" (did I?), but I think ChatGPT kind of gave me an answer like "here are some potential meme stocks." Prompted with Carvana, ChatGPT told me that the stocks that might go up a lot are ones with "high short interest (squeeze potential)" and "founder-led or charismatic management" that are "misunderstood or hated by the market." I didn't say anything about short squeezes, or about sticking it to the elitists who underestimated my stocks, but that was where ChatGPT's head went. ChatGPT's model of good stocks to buy seems a little meme-stock-inflected.

ChatGPT seems like it has been reading Reddit. Of course it has. "ChatGPT is a blurry JPEG of the web." At New York Magazine yesterday, John Herrman wrote:

Except for Wikipedia, maybe, no independent website has provided as much raw training data for as many AI firms, authorized or not, as Reddit. As a corpus for machines trying to sound or reason like people, it's immensely valuable: pre-organized, pre-moderated, cleaned and sorted by the input of millions of volunteers and users, and written, unlike so much else on the web, without SEO, traffic, or advertisers in mind. Likewise, its relationship with OpenAI runs deeper than the deal announced in early 2024 through which Reddit licensed data to OpenAI for training and to bring Reddit content directly into ChatGPT.

Also this week, Ryan Broderick wrote about a venture capitalist named Geoff Lewis who has posted disturbing stuff online after maybe spending too much time talking to ChatGPT. "Lewis does not appear to understand he is conjuring creepypasta from the AI bot," writes Broderick:

The first thing you need to know to fully grasp what appears to be happening to Lewis is that large language models absorbed huge amounts of the internet. It's why they're good at astrology, predisposed to incel-style body dysmorphia, and oftentimes talk like a redditor. Think of ChatGPT as a big shuffle button of almost everything we've ever put online.

ChatGPT's model of the world is influenced by Reddit; modern generative artificial intelligence models aim to achieve human or superhuman intelligence, but in practice that tends to be the intelligence of a human who spends a lot of time on Reddit. If you ask a modern publicly available large language model which stocks to buy, it will in some sense draw on all of human knowledge and its own powerful reasoning capacity to tell you which stocks to buy. But, among all of human knowledge, it might give extra weight to the knowledge on Reddit. And the knowledge on Reddit about what stocks to buy is "meme stocks."

As far as I can tell, with the exception of the reader who emailed me (and maybe Jackson???), this week's meme-stock rally is mostly a traditional meme-stock rally. Individual investors did their own research to find unloved short-squeezable stocks, they bought them, they posted about them on social media, people on Reddit and X read their posts, they rushed out to buy the same stocks, etc. All very much like 2021.
But in 2021 advanced generative AI models were not widely available, and in 2025 they are, and I wonder about the future of meme stocks. In 2021, I wrote:

Reddit correctly noticed that if everyone buys a stock then it will go up, and then they all did that. I am not sure that there is a word, or a concept, for that. Ordinarily the way markets work is that some people want a stock and others don't and they trade it around until it reflects a sort of distributed consensus of what everyone thinks about its value. Occasionally the way markets work is that one person with a ton of money really wants a stock, or really wants it to go up, so she pays a ton of money to buy a lot of it and push around the price. (This may or may not be illegal, depending on context and her motives.) There are distributed phenomena in markets (prices reflect everyone's private information and differing views), and there are intentional phenomena (prices sometimes reflect a big whale trader moving markets), but the novelty here seems to be a distributed intentional phenomenon. Thousands of people talked it over and decided that they'd like it if the price of a stock were higher, so they made it higher.

That is: To me, the essential novel element of a meme-stock rally is that a bunch of independent small investors reach a consensus to buy the same stocks. In 1981, there was no convenient way for thousands of retail investors to get together and agree "let's all buy GameStop." In 2021, there was. It was stereotypically Reddit's WallStreetBets subreddit, though really it was a broader social media ecosystem involving Twitter, YouTube, Discord, etc. Social media created the opportunity for retail investors to coordinate, to reach a distributed consensus. [3] And when they reached that consensus — when they all decided to buy the same stocks — it was, at least in the short run, self-fulfilling. If everyone buys the same stocks at the same time, they'll go up. (Particularly if short sellers are actually getting squeezed out.)

In 2025, that technology is all still there (except Twitter is now X). But there might be another, more centralized technology for creating that coordination:

- Lots of retail investors, alone in their homes, go to ChatGPT and type in "what stocks should I buy." Perhaps something in their prompts will indicate that, at least subconsciously, they want something a bit meme-y.
- ChatGPT, which has spent more time reading Reddit than any human alive, reasons like a WallStreetBets poster and says "you know what you should buy is Opendoor, you gotta stick it to those shorts, 💎🙌🚀."
- Everyone buys the same stocks.
- Also they then go post about it on Reddit, reinforcing the cycle.
That is, the fluid consensus that developed in 2021 — that unloved nostalgic retailers are good, that short sellers are bad, that piling into low-priced stocks to cause short squeezes can be fun and profitable for everyone — might have crystallized; that consensus might now be reflected in the thinking of AI models. "Let's all do short squeezes on the same small-cap stocks" was once a niche idea in a small corner of Reddit, and then it became a meme, and then it became an enormous viral phenomenon, and now it is perhaps a permanent part of human culture reflected in the conventional wisdom of AI. Eventually autonomous AI agents will do it all themselves: Identify unloved companies, buy their stocks, engineer short squeezes and post about it on social media.

We talk from time to time around here about an unusual but theoretically interesting business plan. [4] The plan is:

- Make a good product that competes with the product of a big public company.
- Sell that product below your cost, undercutting your competitor, creating consumer surplus and winning market share.
- Lose money on every unit.
- But: Sell the stock of your big public competitor short, so that you profit as its stock price goes down.
- Because you are taking lots of market share and undercutting your competitor on price, it will lose money, so its stock will go down, so you will make enough money on your short to make up for your losses on your product. (A toy version of that arithmetic is sketched just after this list.)
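To make the trade-off concrete, here is a minimal sketch of the plan's P&L, with every number invented for illustration: a company that loses money on each unit it sells, offset by a short position in its public rival whose stock falls as it bleeds share.

```python
# Toy P&L for the "short your rival" business plan.
# Every number here is made up for illustration; nothing is calibrated to a real company.

# Our product economics: we sell below cost and lose money on every unit.
units_sold = 1_000_000
unit_cost = 50.0          # what it costs us to make one unit
unit_price = 40.0         # what we charge (undercutting the incumbent)
product_pnl = units_sold * (unit_price - unit_cost)   # -$10,000,000

# The short leg: we are short the incumbent's stock, which falls as we take share.
short_notional = 200_000_000        # dollar value of the short position
rival_stock_decline = 0.10          # assume a 10% drop from lost share and margin pressure
short_pnl = short_notional * rival_stock_decline      # +$20,000,000

total_pnl = product_pnl + short_pnl
print(f"Product P&L: {product_pnl:,.0f}")
print(f"Short P&L:   {short_pnl:,.0f}")
print(f"Total P&L:   {total_pnl:,.0f}")
```

The plan "works" only as long as the short gain exceeds the operating loss, which is why the next point matters: the rival's market value can only go to zero, while the losses from selling below cost can run indefinitely.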
At some level it doesn't feel like this should work. Making money by giving people something they want and charging them for it makes sense; making money by betting on the failure of your competitors makes less sense. The world's upside is unlimited; the downside — the value you can extract from driving your competitors out of business — is finite. Still, you know. Complete markets.

It is not natural, as a business matter, to think this way. But it is perfectly natural as a finance matter. [5] Hedge funds sometimes have a thought process like "Company X is a lower-cost competitor to Company Y, so Company Y's margins are at risk, so I will short Company Y." Why shouldn't Company X have that thought process?

There are not a lot of clear examples of anyone actually doing this. We talked once about an arguable example in the funeral services space, and when DeepSeek (founded by a hedge fund manager!) seemed to undercut the case for massive artificial intelligence capital expenditure, I argued that it would have been extremely cool if DeepSeek had shorted Nvidia, though I'm pretty sure it didn't. In general, though, companies tend to try to make money the old-fashioned way, by charging more for their product than they spend to produce it.

But there is a theoretical appeal. This business model essentially transfers wealth from (competitors') shareholders to consumers: Company X can charge lower prices because it gains as Company Y shareholders lose. That is perhaps, in various ways, unstable, but at first glance it is good for consumers. Which means it could be a good antitrust tool? Here is a fun paper by Ian Ayres, Scott Hemphill and Abraham Wickelgren, titled "Shorting Your Rivals: Negative Ownership as an Antitrust Remedy":

Antitrust authorities often have difficulty predicting whether a merger of rivals will enhance or degrade competition. For mergers that produce a mix of benefits and anticompetitive harms, they also have difficulty preserving the benefits while preventing the harms. To help solve these and other problems, we propose the use of negative ownership remedies, wherein the merged firm effectively takes a short position in its competitors. A negative ownership remedy provides multiple distinct benefits: First, approving a merger conditional on negative ownership provides an ex post incentive benefit, because a merged firm with negative ownership in its rivals will have less incentive to engage in conduct that reduces competition. Second, it provides an ex ante signaling benefit. Privately informed firms that volunteer to take a negative ownership position are less likely to have proposed a merger that weakens competition. Third, the availability of the negative ownership tool allows antitrust authorities to make more finely calibrated decisions about whether to approve proposed mergers.

A company that is short its competitors naturally has more incentive to compete vigorously than one that isn't. A normal company will think "if we cut prices, we'll gain market share but will make less money on each sale," and will have to weigh the tradeoffs; a company that shorts its rivals might think "if we cut prices, we'll gain market share and our rivals will have to cut prices, which will lower their profits, which will make us some money on our short." So more incentive to cut prices.
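To see that incentive point in toy numbers (again, all invented for illustration): suppose a price cut costs Company X some operating profit but takes share from Company Y, whose stock falls as a result. Without a short position the cut may not be worth it; with one, it can be.

```python
# Toy comparison of a price cut's payoff for Company X, with and without a short
# position in rival Company Y. All numbers are invented for illustration.

# Effect of the price cut on X's own operating profit: more volume, thinner margins.
own_profit_change = -5_000_000      # assume the lost margin outweighs the extra volume

# Effect on rival Y: lost share and forced price cuts knock down its market value.
rival_value_decline = 0.08          # assume Y's stock falls 8%
short_notional = 150_000_000        # X's short position in Y (zero for a normal company)

payoff_without_short = own_profit_change
payoff_with_short = own_profit_change + short_notional * rival_value_decline

print(f"Price cut payoff, no short:   {payoff_without_short:,.0f}")   # -5,000,000
print(f"Price cut payoff, with short: {payoff_with_short:,.0f}")      # +7,000,000
# The same price cut that a normal company would reject becomes attractive once the
# company is short its rival: roughly the paper's "ex post incentive benefit."
```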
Anyway the theoretical antitrust problem that we talk about a lot around here is the idea that, because all the public companies are owned by overlapping groups of large diversified shareholders, they have diminished incentives to compete: If all the companies have the same owners, then if Company X gains market share by cutting prices, that's bad for its shareholders who also own Company Y. I suppose negative ownership is a solution to that? If Company X is short Company Y, then it has good incentives, even if its shareholders are also long Company Y themselves. That could get messy, though.

We talked last week about how (1) an increasing number of US public companies are pivoting to crypto treasury strategies, (2) those companies are starting to show up in stock indexes and, thus, in the portfolios of index funds, and (3) therefore many boring conventional diversified stock retirement funds actually have some crypto. I thought this was fine, writing:

The simplest and laziest way to get sort-of-index-ish exposure to crypto is to own the total US stock market, because the stock market now includes an ever-growing supply of crypto treasury companies. You might not want crypto in your stock index fund — Vanguard doesn't want crypto in its stock index funds — but the whole point of an index fund is that you don't want to invest in what you want! (Or in what a fund manager wants.) You don't trust yourself (or your fund manager) to want the right things. You want to invest in what the market wants, and what the market wants is crypto.

That view — "invest in the market portfolio" — is pretty standard, but also a bit extreme. The more normal version of it is something like "your default exposure to financial assets should be roughly proportional to their market weights, but if you have some reason to deviate from the market weights, go right ahead." My assumption is that most normal people don't have any particularly good reason to pick one investment over another — if you are not a professional investment manager, you probably have better things to do with your time than deeply understanding financial markets — but some people do, or at least, have good enough reasons.

For instance, a classic reason is what is sometimes called "ESG," environmental, social and governance investing. If you think "I do not want to make particular stock-picking decisions, but I also do not want to own coal companies because global warming is bad," there are investment vehicles for you. There are even indexes; there are indexes that are approximately "the S&P 500, but ESG," for people who want to invest passively in the broad market portfolio but who also have ESG commitments.

Crypto is quite popular these days, but it's also pretty obvious that lots of people do want to exclude it from their portfolios. (My Bloomberg Opinion colleague Allison Schrager has a column today titled "Bitcoin in Your 401(k)? That's Not a Risk I Would Take.") And so there's a natural question that at least two readers have emailed me to ask: "Is there a broad stock market index that excludes crypto?" (That is: An index that excludes crypto treasury companies that are meant to be essentially pots of crypto wrapped in public-company stock, and also maybe crypto miners or whatever.) As far as I can tell the answer is "no, but there should be." Some people want to own the entire stock market, and the stock market these days includes a certain amount of Bitcoin in a fake mustache. Other people want to own the entire stock market except the bit of it that is crypto. Those people would buy that, so someone should sell it to them.
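Mechanically the product is simple; here is a minimal sketch, with made-up tickers, market caps and crypto tags, of how an "ex-crypto" cap-weighted index differs from the full market index: drop the names tagged as crypto-exposed and renormalize the remaining weights. The hard part is the tagging (deciding what counts as a crypto treasury company), not the arithmetic.

```python
# Minimal sketch of an "everything except crypto" index: take a market-cap-weighted
# universe, drop names tagged as crypto-exposed, and renormalize the weights.
# The tickers, market caps, and crypto tags below are all invented for illustration.

universe = [
    # (ticker, market cap in $billions, is_crypto_exposed)
    ("AAAA", 3000.0, False),
    ("BBBB", 1500.0, False),
    ("TREASURYCO", 40.0, True),   # hypothetical crypto treasury company
    ("MINERCO", 10.0, True),      # hypothetical crypto miner
    ("CCCC", 800.0, False),
]

# Standard cap-weighted index: weight = market cap / total market cap.
total_cap = sum(cap for _, cap, _ in universe)
full_index = {ticker: cap / total_cap for ticker, cap, _ in universe}

# Ex-crypto version: drop the tagged names, then renormalize so weights sum to 1 again.
ex_crypto = [(ticker, cap) for ticker, cap, tagged in universe if not tagged]
ex_crypto_cap = sum(cap for _, cap in ex_crypto)
ex_crypto_index = {ticker: cap / ex_crypto_cap for ticker, cap in ex_crypto}

print("Full market index:", {t: round(w, 4) for t, w in full_index.items()})
print("Ex-crypto index:  ", {t: round(w, 4) for t, w in ex_crypto_index.items()})
```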
I'm sorry but Sam Altman is the greatest marketing genius in the history of business. Last week he tweeted about the launch of ChatGPT Agent:

Agent represents a new level of capability for AI systems and can accomplish some remarkable, complex tasks for you using its own computer. …

For example, we showed a demo in our launch of preparing for a friend's wedding: buying an outfit, booking travel, choosing a gift, etc. We also showed an example of analyzing data and creating a presentation for work.

Although the utility is significant, so are the potential risks. We have built a lot of safeguards and warnings into it, and broader mitigations than we've ever developed before from robust training to system safeguards to user controls, but we can't anticipate everything. In the spirit of iterative deployment, we are going to warn users heavily and give users freedom to take actions carefully if they want to.

I would explain this to my own family as cutting edge and experimental; a chance to try the future, but not something I'd yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild.

At Puck, Ian Krietzberg adds:

It was underscored by a warning from cybersecurity researcher Rachel Tobac, who suggested that users avoid the model for now: "Let experts work out the integration issues and build in safeguards before you cause a data breach, leak your sensitive photos, post client personal data, or worse," she wrote on X.
Indeed, OpenAI's own system card for the model noted an accuracy rate below 95 percent across two common benchmarks intended to evaluate the model's propensity for hallucination. OpenAI also noted that the model might, for instance, buy the wrong product or leak private data online—two risks the company is trying to mitigate by training the model to ask users for confirmation before doing anything. In internal tests, however, the company noted that ChatGPT correctly confirmed its actions with the user only 91 percent of the time. OpenAI told The Verge that the model's ability to perform financial transactions has been restricted "for now."

I have described Altman's marketing approach as "business negging." If you say "our product is good and will make the world a lot better," that is sort of ho-hum; everyone says that, so your audience will discount it. But if you say "our product will probably make the world a lot better but could destroy humanity, tee hee aren't I a little rascal," that's exciting! People are drawn to danger, and if you go around warning people that your product might destroy humanity then that strongly suggests that you think it's powerful.

Similarly, when you launch a product whose pitch is "this product can do your whole job for you," boring, everyone says that about everything. But if you launch a product whose pitch is "this product can do your whole job for you but there's like a 10% chance it will email porn to your boss," more people will be excited to use it. Ooh they restricted its ability to empty your bank account "for now," ooh! [6] The danger makes it feel real.

I once wrote — about a different Sam Altman product! — that "the most important paper in economics is the one about how people sometimes give themselves painful electric shocks just because that is an option that's available to them. … All of human culture is explained by that result." The Do Everything For You and Maybe Ruin Your Life AI Assistant is just obviously more psychologically appealing than an omnicompetent AI assistant that definitely won't ruin your life. Aren't you a little curious?

Ex-Libor Trader Tom Hayes Wins Bid to Overturn Rigging Conviction.

Private equity firms flip assets to themselves in record numbers.

Musk Allies to Raise Up to $12 Billion for xAI Chips as Startup Burns Through Cash.

Walleye Joins Multistrategy Hedge Funds Saying No to New Cash.

Ex-Citadel Money Manager Wins Qube's Backing to Start Hedge Fund.

McKinsey bars China practice from generative AI work amid geopolitical tensions.

Probe of Davos Founder Finds Unauthorized Spending, Inappropriate Behavior.

"Oops … The Cert Died Again."

'Yeti blood oath' divides Denver seminary.

If you'd like to get Money Stuff in handy email form, right in your inbox, please subscribe at this link. Or you can subscribe to Money Stuff and other great Bloomberg newsletters here. Thanks!