• m_ke 16 hours ago

    It’s amazing how confident all of the recent 20-year-olds with “AI” companies are.

    I’ve gotten to meet a bunch of founders of ChatGPT-wrapper companies from recent YC batches, and of other startups that raised a ton of money from top firms, and the way they prognosticate compared to all the people I know who built real, successful ML products in the past is insane.

    Most of them have “AI expert” in their LinkedIn bios but have never trained a single model; their companies amount to a Node.js app with a few chained prompts and no data or evals to speak of.

    One of these guys confidently opened a conversation with me with something along the lines of “once we reach ASI, our accounting agent company will be one of the largest businesses in the world” — as in, their ChatGPT wrapper will be useful once OpenAI releases a model that’s smarter than all humans.

    EDIT: this is not meant to be a knock on Luka, who from what I can see seems like a brilliant guy who will probably have an amazing career.

    Same goes for the recent young “AI” startup founders, most of whom are also really talented. Cheers to them for doing the right thing by going after the big new opportunities in the market enabled by LLMs.

    Just maybe take it easy on the grand proclamations and crypto bro style hype.

    • uludag 15 hours ago

      > It’s amazing how confident all of the recent 20-year-olds with “AI” companies are.

      Can you blame them? The window for joining the ranks of billionaire tech founders is slowly closing, and AI may be their last hope of entering those echelons.

    • timabdulla 16 hours ago

      I don't think the conclusion of this article is controversial if you accept the premise: If a horizontal AI model is able to serve as a "drop-in remote worker" and all you need to do to get it going is give it access to a computer and some software, then of course vertical AI applications are going to have a hard time.

      I read "drop-in remote worker" as AGI. You can give it any task and it performs at-or-exceeding human level. The real question to me then becomes the implications for the rest of the economy, not simply a question of what happens to vertical AI companies.

      So many B2B tech companies exist to make the work and organization of humans easier. Is GitHub still as valuable if the vast majority of code is written and reviewed by AI? What about Slack, Linear, or Salesforce? And that's just starting with tech.

      If there are relatively few humans in the white-collar workforce, then we are talking about nothing short of a complete remake of the economy.

      In my opinion, the article spills a lot of ink trying to prove something that to me feels obvious (given that you accept the premise that one day soon we will have AGI) and very little exploring implications beyond this narrow perspective. Perhaps that is coming in future chapters.

      • m_ke 15 hours ago

        Except we’re probably decades away from reliable, open-ended agents that can be trusted to perform any task.

        There’s a reason why Waymo started out in SF and Phoenix: getting to enough nines to be hands-off is really hard, and current ML-based systems don’t extrapolate well to new environments.

        • timabdulla 15 hours ago

          That's certainly possible. I'm not convinced AGI is just around the corner either, but I can't say with a high degree of certainty that it definitely won't arrive in the next few years.

          • m_ke 14 hours ago

            We’ll definitely get above-human-level performance on a lot of tasks soon. It just won’t be general and reliable enough to do open-ended tasks the way competent humans do.

            So we’ll have models that can fill out and validate a tax return and give you reasonable financial advice, but we won’t have an off-the-shelf general LLM from OpenAI that can replace an accountant at any random business anytime soon.

        • tivert 11 hours ago

          > I read "drop-in remote worker" as AGI. You can give it any task and it performs at-or-exceeding human level. The real question to me then becomes the implications for the rest of the economy, not simply a question of what happens to vertical AI companies.

          > ...

          > If there are relatively few humans in the white-collar workforce, then we are talking about nothing short of a complete remake of the economy.

          Which is why I hope AGI is a chimera. It will be a very bad thing for nearly all people, and very, very, *VERY* lucrative for an elite few. I'm reminded of this quote:

          > ...in these pre-modern, agrarian societies the economic divide between regular people and the wealthy elite was vast and functionally unbridgeable...As a result, often the wealthy landholding elite in these societies had access to entire classes of goods that might simply not be available under almost any circumstances to the commons, because they required quantities of money that might be relatively trivial to the elite but which were unobtainable for the masses. (https://acoup.blog/2025/01/03/collections-coinage-and-the-ty...)

          A post-AGI world is almost certainly going to be like that, but worse. At best, most people will lose all economic power and be allowed to subsist on a UBI that provides a tiny living space, some cheap manufactured goods, access to an AI therapist running in the cloud, and no future. In a medieval feudal society, peasants were at least valuable to their lords. In tech feudalism, they won't even be that.

        • zelda420 15 hours ago

          An “AI founder,” yet this company appears to just test models?

          • undefined 16 hours ago
            [deleted]