• alexpotato 2 hours ago

    You sometimes hear people say "I mean, we can't just give an AI a bunch of money/important decisions and expect it to do ok" but this is already happening and has been for years.

    Examples:

    - Algorithmic trading: I once embedded on an options trading desk. The head of the desk mentioned that he didn't really know what the PnL was during trading hours because the swings were so big that only the computer algos knew whether the decisions were correct.

    - Autopilot: planes can now land themselves so precisely that the front landing-gear wheels "thud" as they roll over the runway centerline markings.

    and this has been true for at least 10 years.

    In other words, if the above is possible, then we are not far off from some kind of "expert system" that runs a business unit (which may be all robots or a mix of robots and people).

    A great example of this is here: https://marshallbrain.com/manna1

    EDIT: fixed some typos/left out words

    • mjr00 2 hours ago

      > A great example of this is here: https://marshallbrain.com/manna1

      This is a piece of science fiction and has its own (inaccurate, IMO) view on how minimum wage McDonald's employees would react to a robot manager. Extrapolating this to real life is naive at best.

      • pixl97 2 hours ago

        >Extrapolating this to real life is naive at best.

        Why? It's as much a view of our past unthinking adherence to technology as it is a view of the future.

        "Computer says no" is a saying for a reason.

        • nirav72 39 minutes ago

          >"Computer says no" is a saying for a reason.

          Current LLMs rarely say no, unless they're specifically configured to block certain types of requests.
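
          In practice, that "no" usually comes from a layer configured in front of the model rather than from the model itself. A rough sketch of what I mean (all names hypothetical):

              # Hypothetical guardrail: the refusal is a configured layer,
              # not the model's own judgment.
              BLOCKED_TOPICS = ["weapons", "self-harm"]  # made-up policy list

              def answer(model, user_request: str) -> str:
                  if any(topic in user_request.lower() for topic in BLOCKED_TOPICS):
                      return "Sorry, I can't help with that."  # the configured "no"
                  return model.complete(user_request)  # 'complete' is a stand-in API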

      • djwide an hour ago

        I'm saying there's something structurally different between autonomous systems generally and an LLM corpus, which has all of the information in one place and is, at least in theory, extractable by one user.

        • pavel_lishin an hour ago

          But none of those things are AI in the same sense that we use the term now, to refer to LLMs.

          • alexpotato an hour ago

            But those things were considered on the same level as current LLMs, in the sense of "well, a computer might do part of my job but not ALL of it".

            No, algorithmic trading didn't replace everything a trader did, but it most certainly replaced large parts of the workload and made it much faster and horizontally scalable.

        • zozbot234 3 hours ago

          The really nice thing about this proposal is that at least now we can all stop anthropomorphizing Larry Ellison, and give Oracle the properly robot-identifying CEO it deserves.

          • Terr_ 2 hours ago

            For those who haven't seen the reference: https://www.youtube.com/watch?v=-zRN7XLCRhc&t=38m27s

            • kmeisthax 3 hours ago

              But then we'd have to call it LawnmowerGPT

              • jeffrallen 2 hours ago

                I came here for this, am not disappoint. :)

                Best meme in hacker space, thanks /u/Cantrill.

              • johnohara 2 hours ago

                > The President sits at the top of the classification hierarchy.

                Constitutionally, and in theory as Commander-In-Chief, perhaps. But in practice, it does not seem so. Worse yet, it's been reported the current President doesn't even bother to read the daily briefing as he doesn't trust it.

                • djwide an hour ago

                  I point that out a bit when I refer to agencies being discouraged from sharing information. The CIA may be worried about losing HUMINT data to the NSA, for example. You may also be referring to agencies compartmentalizing information away from the president, which, you're right, happens to some extent now but shouldn't "in theory". Maybe it's a don't-ask-don't-tell situation. I think Cheney blew the cover of an intel asset, though.

                  • handedness 44 minutes ago

                    > compartmentalizing the information away from the president as well which you are right happens to some extent now

                    This is nothing new and has been happening since at least the 1940s, to multiple administrations from both parties: Roosevelt, Truman, Kennedy, Nixon, Reagan...and those are just some of the instances that were publicly documented.

                  • handedness 2 hours ago

                    It's not an issue of theory-versus-practice.

                    You're conflating the classification system, established by EO and therefore by definition controlled by the Executive, with the classified products of intel agencies.

                    A particular POTUS's use (or lack thereof) of classified information has no bearing on the nature of the classification system.

                    • SoftTalker an hour ago

                      And the last president couldn't comprehend it.

                      <shrug>

                    • mellosouls 4 hours ago

                      This is an interesting and thoughtful article, I think, but it's worth evaluating in the context of the service ("cognitive security") its author is trying to sell.

                      That's not to undermine the substance of the discussion of political/constitutional risk under the inference-hoarding of authority, but I think it would be useful to bear in mind the author's commercial framing (or, more charitably, the motivation for the service, if this philosophical consideration preceded it).

                      A couple of arguments against the idea of singular control: it requires technical experts to produce and manage, and it would be distributed internationally, since any country advanced enough would have its own version. But it would of course pose tricky questions for elected representatives in democratic countries to answer.

                      • djwide 4 hours ago

                        Admittedly, there's no direct tie to what I'm trying to sell. I just thought it was a worthwhile topic of discussion; it doesn't need to be politically divisive, and I might as well post it on my company site.

                        I don't think there are easy answers to the questions I am posing and any engineering solution would fall short. Thanks for reading.

                      • alanbernstein 4 hours ago

                        Considering things like Palantir, and the DOGE effort run through Musk, it seems inconceivable that this is not already the case.

                        I think I'm more curious about the possibility of using a special government LLM to implement direct democracy in a way that was previously impossible: collecting the preferences of 100M citizens, and synthesizing them into policy suggestions in a coherent way. I'm not necessarily optimistic about the idea, but it's a nice dream.
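
                        Roughly the shape I'm imagining, as a hedged sketch (the llm.summarize helper is made up): summarize preferences in batches, then summarize the summaries until one coherent synthesis remains.

                            # Hypothetical hierarchical synthesis over ~100M free-text preferences.
                            def synthesize(llm, texts, batch_size=1000):
                                summaries = [
                                    llm.summarize(texts[i:i + batch_size])  # made-up API
                                    for i in range(0, len(texts), batch_size)
                                ]
                                if len(summaries) <= 1:
                                    return summaries[0] if summaries else ""
                                return synthesize(llm, summaries, batch_size)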

                        • ativzzz 2 hours ago

                          > special government LLM to implement direct democracy

                          I like your optimism, but I think realistically a special government LLM to implement authoritarianism is much more likely.

                          In the end, someone has to enforce the things an LLM spits out. Who does that? The people in charge. If you read any history, the most likely scenario is the people in charge guiding the LLM to secure more power and wealth.

                          Now maybe it'll work for a while, depending on how good the safeguards are. Every empire only works for a while. It's a fun experiment.

                          • djwide 4 hours ago

                            Thanks for the comment. Interesting to think about but I am also skeptical of who will be doing the "collecting" and "synthesizing". Both tasks are potentially loaded with political bias. Perhaps it's better than our current system though.

                            • Sheeny96 4 hours ago

                              • Zagitta 2 hours ago

                                Centralising it is definitely the wrong way to go about it.

                                It'd be much better to train an agent per citizen, one that's under their control, and have it participate in a direct-democracy setup.
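
                                As a rough sketch of what I mean (the agent.vote method is hypothetical):

                                    # One agent per citizen, each under that citizen's control,
                                    # voting directly on every proposal.
                                    def referendum(agents, proposal):
                                        votes = [agent.vote(proposal) for agent in agents]  # True/False each
                                        return sum(votes) > len(votes) / 2                  # simple majority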

                                • stewh_eng 4 hours ago

                                  Indirectly, this is kind of what I was trying to get at in this weekend project (https://github.com/stewhsource/GovernmentGPT), which uses the British Commons debate history as a starting point to capture divergent views across political affiliation, region, and role. Changes over time would be super interesting, but I never had time to dig into that. tl;dr: it worked surprisingly well, and I know a few students have picked it up to continue this theme in their research projects.

                                  • bahmboo 2 hours ago

                                    That looks very interesting. Could use a demo or examples for those of us with short attention spans. Would be cool to feed it into TTS or video generation like Sora.

                                  • zozbot234 2 hours ago

                                    Real-world LLMs cannot even write a proper legal brief without making stuff up, providing fake references, and just spouting all sorts of ludicrous nonsense. Expecting them to set policy, or even to provide effective suggestions to that effect, is a fool's errand.

                                    • pixl97 2 hours ago

                                      >Real-world politicians cannot even write a proper legal brief without making stuff up, providing fake references, and just spouting all sorts of ludicrous nonsense. Expecting them to set policy, or even to provide effective suggestions to that effect, is a fool's errand.

                                      This has been the more realistic experience for the average American over the past few years.

                                  • blibble 4 hours ago

                                      think we're already there, aren't we?

                                    no human came out with those tariffs on penguin island

                                    • MengerSponge 4 hours ago

                                      A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION.

                                      • unyttigfjelltol 2 hours ago

                                        Computers are more accountable. You just pull the plug, wipe the system.

                                          Executives, in contrast, require option strike resets and golden parachutes, and face no accountability.

                                        Neither will tell you they erred or experience contrition, so at a moral level there may well be some equivalency. :D

                                        • notpushkin 3 hours ago

                                            Let’s assume we live in a hypothetical sane society, where company owners and/or directors are responsible for the company's actions through this entity. When they decide to delegate management to an LLM, wouldn't they be held accountable for whatever decisions it makes?

                                          • deelayman 3 hours ago

                                            I wonder if that quote is still applicable to systems that are hardwired to learn from decision outcomes and new information.

                                            • advisedwang an hour ago

                                                LLMs do not learn as they go in the same way people do. People's brains are plastic and immediately adapt to new information, but for LLMs:

                                                1. Past decisions and outcomes get into the context window, but that doesn't actually update any model weights.

                                                2. Your interaction may eventually get into the training data for a future LLM, but that is an incredibly diluted form of learning.
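
                                                To make the distinction concrete, a rough sketch (the model object and its methods are made up):

                                                    history = []

                                                    def decide(model, situation):
                                                        # 1. In-context "learning": past outcomes are just prepended
                                                        #    to the prompt; the model itself never changes.
                                                        prompt = "\n".join(history + [situation])
                                                        decision = model.complete(prompt)  # weights untouched
                                                        history.append(f"{situation} -> {decision}")
                                                        return decision

                                                    def train(model, corpus):
                                                        # 2. Weight updates happen only in a separate offline
                                                        #    training run, where any one interaction is a single
                                                        #    sample among billions.
                                                        for sample in corpus:
                                                            model.update_weights(sample)  # hypothetical API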

                                              • svieira 3 hours ago

                                                What (or who) would have been responsible for the Holodomor if it had been caused by an automated system instead of deliberate human action?

                                              • nilamo 3 hours ago

                                                Management is already never held accountable, so replacing them is a net benefit.

                                                • toomuchtodo 4 hours ago

                                                  While I have great respect for this piece of IBM literature, I will also mention that most humans are not held accountable for management decisions, so I suppose this idea was for a more just world that does not exist.

                                                  • skirge 3 hours ago

                                                    human CAN and computer CAN NEVER

                                                    • toomuchtodo 3 hours ago

                                                        My point is that accountability is perhaps irrelevant. You can turn off a computer; you can turn off a human. Is that accountability? Accountability only exists if there are consequences, and those consequences matter. What does it mean for them to "matter"?

                                                        If accountability is taking ownership of mistakes and correcting for improved future outcomes, then certainly, I trust the computer more than the human. We are never going to run out of humans causing harm within suboptimal systems that continue to allow it.

                                                    • lenerdenator 3 hours ago

                                                      I'd say that the fix, then, is to create a more just world where leaders are held accountable, rather than to hand things off to something that, by its very nature, cannot be held accountable.