Prism (openai.com) - Submitted by meetpateltech 7 hours ago
  • vitalnodo 6 hours ago

    Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

    On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

    [0] https://crixet.com

    [1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

    [2] https://news.ycombinator.com/item?id=42009254

    [3] https://news.ycombinator.com/item?id=46394937

    • crazygringo 5 hours ago

      I'm curious how it compares to Overleaf in terms of features? Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.

      I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

      I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

      • efficax 4 hours ago

        Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.

        • radioactivist 4 hours ago

          In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of them) to people just dumping things on Overleaf fairly quickly.

          • bhadass 4 hours ago

            collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

            a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).

            overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

            also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

            • jdranczewski 2 hours ago

              To add to the points raised by others, "just install LaTeX" is not imo a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.

              • crazygringo 4 hours ago

                I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

                The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

                (And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

                • spacebuffer an hour ago

                  I'd use git in this case. I'm sure there are other reasons to use Overleaf, otherwise it wouldn't exist, but this seems like a solved issue with git.

                  • warkdarrior 3 hours ago

                    Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

                    Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

                • vicapow 5 hours ago

                  The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.

                  • seazoning 3 hours ago

                    We had been building literally the same thing for the last 8 months along with a great browsing environment over arxiv -- might just have to sunset it

                    Any plans of having typst integrated anytime soon?

                    • vicapow 2 hours ago

                      I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.

                  • swyx an hour ago

                    we did a podcast with the Crixet founder and Kevin Weil of OAI on the process: https://www.youtube.com/watch?v=W2cBTVr8nxU&pp=2Aa0Bg%3D%3D

                    • songodongo 5 hours ago

                      So this is the product of an acquisition?

                      • vitalnodo 5 hours ago

                        > Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

                        They’re quite open about Prism being built on top of Crixet.

                      • doctorpangloss 2 hours ago

                        It seems bad for OpenAI to make this about LaTeX documents, which will now be associated, visually, with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!

                        • amitav1 2 hours ago

                          Am I missing something? LaTeX is associated with slop now?

                          • nemomarx an hour ago

                            If a common AI tool produces LaTeX documents, the association will be created, yeah. Right now LaTeX would be a strong indicator of manual effort, right?

                            • jasonfarnon 37 minutes ago

                              don't think so. I think latex was one of academics' earlier use cases of chatgpt, back in 2023. That's when I started noticing tables in every submitted paper looking way more sophisticated than they ever did. (The other early use case of course being grammar/spelling. Overnight everyone got fluent and typos disappeared.)

                      • syntex 2 hours ago

                        The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

                        Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: recursive garbage → model collapse.

                        (a little joke on Prism)

                        • danelski an hour ago

                          Many people here talk about Overleaf as if it were the 'dumb' editor without any of these capabilities. It has had them for some time via the Writefull integration (https://www.writefull.com/writefull-for-overleaf). Who's going to win will probably be decided by brand recognition, with Overleaf having a better starting position in this field but the money obviously being on OAI's side. With some of Writefull's features depending on ChatGPT's API, it's clear they are set to be priced out unless they do something smart.

                          • JBorrow 5 hours ago

                              From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submitter who used their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

                            I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

                            • SchemaLoad 2 hours ago

                                GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really, what is the point of even submitting it? The reviewer could have generated it themselves. I'm seeing it in software development, where coworkers submit massive PRs they generated but hardly read or tested, shifting the real work to the PR review.

                                I'm not sure what the final state will be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward, particularly as AI starts ingesting its own generated fake content.

                              • cryzinger 2 hours ago

                                More relevant than ever:

                                > The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

                                https://en.wikipedia.org/wiki/Brandolini%27s_law

                                • trees101 7 minutes ago

                                  The P≠NP conjecture in CS says (roughly) that checking a solution can be much easier than finding one. Verifying a Sudoku is fast; solving it from scratch is hard. But Brandolini's Law says the opposite: refuting bullshit costs way more than producing it.

                                  Not actually contradictory. Verification is cheap when there's a spec to check against. 'Valid Sudoku?' is mechanical. But 'good paper?' has no spec. That's judgment, not verification.
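
                                  To make the "spec" point concrete, this is just the textbook definition of NP, sketched in LaTeX (nothing here is specific to papers or to Prism, only to where cheap verification applies):

                                    \[
                                      L \in \mathrm{NP} \iff \exists\ \text{poly-time } V,\ c > 0:\quad
                                      x \in L \iff \exists\, w,\ |w| \le |x|^{c},\ V(x, w) = 1
                                    \]

                                  Here w is the candidate solution and V is the spec; Brandolini's cases are exactly the ones with no usable V.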

                                  • monkaiju 2 hours ago

                                    Wow the 3 comments from OC to here are all bangers, they combine into a really nice argument against these toys

                                  • overfeed 2 hours ago

                                    > The effort to review this stuff is now massively more than the effort to "create" it

                                    I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.

                                    It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.

                                    • Spivak 2 hours ago

                                      In some ways it might be a good thing that shorthand signals of quality are being destroyed because it forces all of us to meaningfully engage with the work. No more LGTM +1 when every PR looks good.

                                      • toomuchtodo 2 hours ago
                                        • Cornbilly 33 minutes ago

                                          This one is hilarious. https://hackerone.com/reports/3516186

                                          If I submitted this, I'd have to punch myself in the face repeatedly.

                                          • toomuchtodo 24 minutes ago

                                            The great disappointment is that the humans submitting these just don’t care it’s slop and they’re wasting another human’s time. To them, it’s a slot machine you just keep cranking the arm of until coins come out. “Prompt until payout.”

                                      • InsideOutSanta 4 hours ago

                                            I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.

                                        • willturman 3 hours ago

                                          In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap"

                                          • SimianSci 2 hours ago

                                            Altman's Law: 99% of all content is slop

                                            I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.

                                            Has anyone looked at reviving PageRank?
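
                                              For reference, the classic PageRank recurrence (d the damping factor, N the number of pages, M(p_i) the pages linking to p_i, L(p_j) the number of outbound links on p_j) -- just a sketch of the original algorithm, not a claim about what any current engine runs:

                                                \[ \mathrm{PR}(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{\mathrm{PR}(p_j)}{L(p_j)} \]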

                                            • Imustaskforhelp 2 hours ago

                                              I mean Kagi is probably the PageRank revival we are talking about.

                                              I have heard from people here that Kagi can help remove slop from searches so I guess yeah.

                                                Although I'm a DDG user myself, and I love using DDG because it's free, I can see how for some people price is a non-issue and they might like Kagi more.

                                                So Kagi / DDG (DuckDuckGo), yeah.

                                              • jll29 2 hours ago

                                                Has anyone kept an eye on who uses which back-end?

                                                DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?

                                                • selectodude 2 hours ago

                                                  I think they all use Bing now.

                                          • techblueberry 3 hours ago

                                              There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?

                                            • SequoiaHope an hour ago

                                              To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.

                                            • jplusequalt 3 hours ago

                                              Digital pollution.

                                              • jll29 2 hours ago

                                                Soon, poor people will talk to an LLM while rich people get human medical care.

                                                • Spivak 2 hours ago

                                                  I mean I'm currently getting "expensive" medical care and the doctors are still all using AI scribes. I wouldn't assume there would be a gap in anything other than perception. I imagine doctors that cater to the fuck you rich will just put more effort into hiding it.

                                                  No one, at all levels, wants to do notes.

                                                • jcranmer 3 hours ago

                                                  The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because it passes the first (but only the first) absolute-garbage filter.

                                                  • storystarling 2 hours ago

                                                    I run a small print-on-demand platform and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review which destroys the unit economics.

                                                • jll29 2 hours ago

                                                  I totally agree. I spend my whole day from getting up to going to bed (not before reading HN!) on reviews for a conference I'm co-organizing later this year.

                                                  So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).

                                                  Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.

                                                  OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.

                                                        When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.

                                                  • pickleRick243 42 minutes ago

                                                    I'm curious if you'd be in favor of other forms of academic gate keeping as well. Isn't the lower quality overall of submissions (an ongoing trend with a history far pre-dating LLMs) an issue? Isn't the real question (that you are alluding to) whether there should be limits to the democratization of science? If my tone seems acerbic, it is only because I sense cognitive dissonance between the anti-AI stance common among many academics and the purported support for inclusivity measures.

                                                    "which is really not the point of these journals at all"- it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arxiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?

                                                    • bloppe 4 hours ago

                                                      I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

                                                      Maybe you get reimbursed for half as long as there are no obvious hallucinations.

                                                      • JBorrow 4 hours ago

                                                        The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.

                                                        • NewsaHackO 3 hours ago

                                                          Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.

                                                          • methuselah_in 3 hours ago

                                                            Welcome to the new world of fake stuff, I guess.

                                                          • willturman an hour ago

                                                            If the penalty for a crime is a fine, then that law exists only for the lower class

                                                              In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions that say smoking doesn't cause lung cancer, or social media companies from spamming submissions that say their products aren't detrimental to mental health.

                                                            • s0rce 4 hours ago

                                                              That would be tricky; I often submitted to multiple high-impact journals, going down the list until someone accepted it. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.

                                                              • noitpmeder 4 hours ago

                                                                I mean your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason) just as long as it is accepted and published somewhere (again, within reason).

                                                                • niek_pas 3 hours ago

                                                                  Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.

                                                                  • antasvara 2 hours ago

                                                                    No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige or that fit your paper better/worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.

                                                                    Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.

                                                                    All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.

                                                                    • jll29 2 hours ago

                                                                      It's standard practice, nothing suspect about their approach - and you won't go lower and lower and lower still because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic has "expired" (= is overtaken by reality/competition).

                                                                      • mathematicaster 3 hours ago

                                                                        This is effectively standard across the board.

                                                                    • throwaway85825 4 hours ago

                                                                      Pay to publish journals already exist.

                                                                      • bloppe 4 hours ago

                                                                        This is sorta the opposite of pay to publish. It's pay to be rejected.

                                                                        • olivia-banks 4 hours ago

                                                                          I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).

                                                                        • pixelready 4 hours ago

                                                                          I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.

                                                                          • petcat 4 hours ago

                                                                            > There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

                                                                            While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

                                                                            • ezst 3 hours ago

                                                                              Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

                                                                              Maybe something like a "hierarchy/DAG? of trusted-peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to it. When it's found that the paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:

                                                                                (a) the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so); (b) trusted/established institutions have an incentive to keep their global reputation score high and either put a very high level of scrutiny into the review, or delegate to very reputable peers; (c) "bad actors" are immediately punished and universally recognized as such; (d) "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity; (e) "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work

                                                                              There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

                                                                              Incidentally, I think this may be a rare case where a blockchain makes some sense?

                                                                              • jll29 2 hours ago

                                                                                You have some good ideas there, it's all about incentives and about public reputation.

                                                                                  But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule of two conferences: they submitted the same paper to two conferences, and both copies naturally landed in my reviewer inbox, for a topic I am one of the experts in.

                                                                                But all the other employees should not be penalized by the violations of 3 researchers.

                                                                                • gus_massa 2 hours ago

                                                                                    This idea looks very similar to journals! Each journal has a reputation; if they publish too much crap, the crap is not cited and the impact factor decreases. They also have an informal reputation, because the impact factor has its own problems.

                                                                                    Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers and make the final decision, like the current editors. ...

                                                                                  • amitav1 2 hours ago

                                                                                    How would this work for independent researchers?

                                                                                    (no snark)

                                                                                • mathematicaster 3 hours ago

                                                                                  Pay to review is common in Econ and Finance.

                                                                                  • skissane 2 hours ago

                                                                                    Variation I thought of on pay-to-review:

                                                                                    Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), and tell you whether it is submission-worthy, and help you improve it to the point it was. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.

                                                                                    Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.

                                                                                    Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...

                                                                                  • utilize1808 4 hours ago

                                                                                    Better yet, make a "polymarket" for papers where people can bet on which paper will make it, and rely on "expertise arbitrage" to punish spam.

                                                                                    • ezst 3 hours ago

                                                                                      Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. effort to review.

                                                                                      • utilize1808 24 minutes ago

                                                                                        Not if submissions require some small mandatory bet.

                                                                                  • Rperry2174 4 hours ago

                                                                                    This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.

                                                                                      For developers, academics, editors, etc. - in any review-driven system the scarcity is good human judgement, not text volume. AI doesn't remove that constraint and arguably puts more of a spotlight on the ability to separate the shit from the quality.

                                                                                      Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".

                                                                                    • SchemaLoad 2 hours ago

                                                                                      This has been discussed previously as "workslop", where you produce something that looks at surface level like high quality work, but just shifts the burden to the receiver of the workslop to review and fix.

                                                                                      • vitalnodo 4 hours ago

                                                                                        This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

                                                                                        In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

                                                                                        [0] https://news.ycombinator.com/item?id=40295661

                                                                                        [1] https://news.ycombinator.com/item?id=22368323

                                                                                      • jasonfarnon 28 minutes ago

                                                                                        I'm certain your journal will be using LLMs in reviewing incoming articles, if they aren't already. I also don't think this is in response to the flood of LLM generated articles. Even if authors were the same as pre-LLM, journals would succumb to the temptation, at least at the big 5 publishers, which already have a contentious relationship with the referees.

                                                                                        • jjcm 3 hours ago

                                                                                          The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code - the number of PR submissions has skyrocketed, overwhelming maintainers.

                                                                                          This is still a good step in a direction of AI assisted research, but as you said, for the moment it creates as many problems as it solves.

                                                                                          • mrandish 3 hours ago

                                                                                            As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

                                                                                            > who are looking to 'boost' their CV

                                                                                            Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

                                                                                            • maxkfranz 4 hours ago

                                                                                              I generally agree.

                                                                                              On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

                                                                                              • ezst 3 hours ago

                                                                                                As I understand it, the problem isn't publication or how it's changing over time; it's about the challenge of producing new science when the existing body of work is muddied with plausible lies. That warrants a new process for assessing the inherent quality of a paper, but even if it comes as globally distributed, the cheats have a huge advantage considering the asymmetry between the effort to vibe-produce and the tedious human review.

                                                                                                • maxkfranz 3 hours ago

                                                                                                  That’s a good point. On the other hand, we’ve had that problem long before AI. You already need to mentally filter papers based on your assessment of the reputability of the authors.

                                                                                                  The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.

                                                                                              • boplicity 3 hours ago

                                                                                                Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?

                                                                                                • currymj 2 hours ago

                                                                                                  this is probably a net negative as there are many very good scientists with not very strong English skills.

                                                                                                  the early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. we may end up here but it would be unfortunate.

                                                                                                • jascha_eng 3 hours ago

                                                                                                  Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".

                                                                                                  These acts just must have consequences so people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.

                                                                                                  • keithnz 2 hours ago

                                                                                                    wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could actually work out which papers have novel ideas, or at least come up with some kind of likelihood score.

                                                                                                    • usefulposter 4 hours ago

                                                                                                      Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

                                                                                                      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

                                                                                                      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

                                                                                                      • lupsasca 4 hours ago

                                                                                                        I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.

                                                                                                        • CJefferson 3 hours ago

                                                                                                          What the heck is the point of a reference you never read?

                                                                                                          • lupsasca 3 hours ago

                                                                                                            By "grabbing references" I meant queries of the type "add paper [bla] to the bibliography" -- that seems useful to me!

                                                                                                            • nestes 2 hours ago

                                                                                                              Focusing in on "grabbing references", it's as easy as drag-and-drop if you use Zotero. It can copy/paste references in BibTeX format. You can even customize it through the BetterBibTeX extension.

                                                                                                              If you're not a Zotero user, I can't recommend it enough.

                                                                                                          • noitpmeder 4 hours ago

                                                                                                            AI generating references seems like a hop away from absolute unverifiable trash.

                                                                                                          • SecretDreams 4 hours ago

                                                                                                              I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and actually checking whether everything in an article is valid becomes quite challenging as frequency goes up.

                                                                                                            This is a space that probably needs substantial reform, much like grad school models in general (IMO).

                                                                                                          • ggm 3 minutes ago

                                                                                                            A competition for the longest sequence of \relax in a document ensues. If enough people do this, the AI will acquire merit and seek to "win" ...

                                                                                                            • bonsai_spool 10 minutes ago

                                                                                                              The example proposed in "and speeding up experimental iteration in molecular biology" has been done since at least the mid-2000s.

                                                                                                                  It's concerning that this wasn't identified, and it augurs poorly for their search capabilities.

                                                                                                              • asveikau 2 hours ago

                                                                                                                Good idea to name this after the spy program that Snowden talked about.

                                                                                                                • pazimzadeh 2 hours ago

                                                                                                                  idk if OpenAI knew that Prism is already a very popular desktop app for scientists and that it's one of the last great pieces of optimized native software?

                                                                                                                  https://www.graphpad.com/

                                                                                                                  • varjag an hour ago

                                                                                                                      They don't care. Musk stole a chunk of Heinlein's literary legacy with Grok (which, unlike prism, wasn't a common word) and no one batted an eye.

                                                                                                                    • DonaldPShimoda an hour ago

                                                                                                                      > Grok (which unlike prism wasn't a common word)

                                                                                                                      "Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.

                                                                                                                      • Fnoord an hour ago

                                                                                                                        He stole a letter, too.

                                                                                                                        • tombert an hour ago

                                                                                                                            That bothers me more than it should. Every single time I see a new post about Twitter, I think there's some update for X11 or X Server or something, only to be reminded that Twitter has been renamed.

                                                                                                                      • intothemild an hour ago

                                                                                                                        I very much doubt they knew much about what they were building if they didn't know this.

                                                                                                                    • bmaranville 2 hours ago

                                                                                                                      Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

                                                                                                                      I would note that Overleaf's main value is as a collaborative authoring tool and not a great latex experience, but science is ideally a collaborative effort.

                                                                                                                      • uwehn 41 minutes ago

                                                                                                                        If you're looking for something like this for typst: any VSCode fork with AI (Cursor, Antigravity, etc) plus the tinymist extension (https://github.com/Myriad-Dreamin/tinymist) is pretty nice. Since it's local, it won't have the collaboration/sharing parts built in, but that can be solved too in the usual ways.

                                                                                                                        • plastic041 an hour ago

                                                                                                                          The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?

                                                                                                                              Edit: You can add papers that are not cited to the bibliography. The video is about the bibliography, and I was thinking about cited works.

                                                                                                                          • alphazard an hour ago

                                                                                                                            I once took a philosophy class where an essay assignment had a minimum citation count.

                                                                                                                            Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting at stated axioms. Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

                                                                                                                            The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.

                                                                                                                            • bonsai_spool 28 minutes ago

                                                                                                                              > Citing a paper to defend your position is just an appeal to authority

                                                                                                                              Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported in 'established' logic).

                                                                                                                              An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?

                                                                                                                            • parsimo2010 an hour ago

                                                                                                                              A common approach to research is to do literature review first, and build up a library of citable material. Then when writing your article, you summarize the relevant past research and put in appropriate citations.

                                                                                                                              To clarify, there is a difference between a bibliography (a list of relevant works but not necessarily cited), and cited work (a direct reference in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.

                                                                                                                              Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some people do it as a giant .bib file, some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).
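
                                                                                                                                In LaTeX terms the distinction is just \cite versus \nocite -- a minimal sketch, assuming a refs.bib file processed with BibTeX and hypothetical keys:

                                                                                                                                  \documentclass{article}
                                                                                                                                  \begin{document}
                                                                                                                                  As shown by \cite{smith2020method}, ...  % cited: appears in the text and in the reference list

                                                                                                                                  \nocite{jones2019survey}  % bibliography-only: listed at the end, never cited in the text
                                                                                                                                  % \nocite{*} would pull in every entry from refs.bib

                                                                                                                                  \bibliographystyle{plain}
                                                                                                                                  \bibliography{refs}
                                                                                                                                  \end{document}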

                                                                                                                              • plastic041 40 minutes ago

                                                                                                                                > a bibliography (a list of relevant works but not necessarily cited)

Didn't know that there's a difference between a bibliography and cited works. Thank you.

                                                                                                                            • maest 42 minutes ago

Buried halfway through the article.

                                                                                                                              > Prism is a free workspace for scientific writing and collaboration

                                                                                                                              • beklein 3 hours ago

                                                                                                                                The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU

                                                                                                                                • swyx an hour ago

                                                                                                                                  oh i was here to post it haha - thank you for doing that job for me so I'm not a total shill. I really enjoyed meeting them and was impressed by the sheer ambition of the AI for Science effort at OAI - in some sense I'm making a 10000x smaller scale bet than OAI on AI for Science "taking off" this year with the upcoming dedicated Latent Space Science pod.

                                                                                                                                  generally think that there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year + it will probably be extremely financially + personally rewarding, so I broadly want to create a dedicated pod to highlight opportunities available for people who don't traditionally think of themselves as "in science" to cross over into the "ai for hard STEM" because it turns out that 1) they need you 2) you can fill in what you don't know 3) it will be impactful/challenging/rewarding 4) we've exhausted common knowledge frontiers and benchmarks anyway so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier

                                                                                                                                  *conscious exaggeration sorry

                                                                                                                                  • vicapow 2 hours ago

                                                                                                                                    Hope you like it :D I'm here if you have questions, too

                                                                                                                                  • DominikPeters 5 hours ago

                                                                                                                                    This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

                                                                                                                                    • qbit42 2 hours ago

                                                                                                                                      Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

                                                                                                                                      I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.

                                                                                                                                      • vicapow 4 hours ago

I can see why it might seem that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), which most researchers don't want to, and really shouldn't have to, figure out for their specific workflows.

                                                                                                                                        • mturmon 2 hours ago

                                                                                                                                          Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

                                                                                                                                          @vicapow replied to keep the Dropbox parallel alive

                                                                                                                                          • jstummbillig 5 hours ago

                                                                                                                                            Accessibility does matter

                                                                                                                                          • reassess_blind 3 hours ago

                                                                                                                                            Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…

                                                                                                                                            • jedberg 18 minutes ago

                                                                                                                                              > because they’re trying to normalise the AI’s writing style,

                                                                                                                                              AIs use em dashes because competent writers have been using em dashes for a long time. I really hate the fact that we assume em dash == AI written. I've had to stop using em dashes because of it.

                                                                                                                                              • torginus 3 hours ago

                                                                                                                                                I haven't used MS Word in quite a while, but I distinctly remember it changed minus signs to em dashes.

                                                                                                                                                • flumpcakes 3 hours ago

LaTeX made writing em dashes very easy, to the point that I would use them all the time in my academic writing. It's a shame that perfectly good typography is now a sign of slop/fraud.
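(For anyone who hasn't used it: in standard LaTeX a double hyphen produces an en dash and a triple hyphen an em dash, so the nicer typography is literally one extra keystroke. Tiny illustrative example:)

    % in the output, -- becomes an en dash and --- becomes an em dash
    Pages 10--12 cover the em dash---my favorite mark---in detail.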

                                                                                                                                                  • reed1234 3 hours ago

                                                                                                                                                    Probably used their product to write it

                                                                                                                                                    • exyi 3 hours ago

... or they taught GPT to use em-dashes, because of their love for em-dashes :)

                                                                                                                                                    • jumploops 4 hours ago

                                                                                                                                                      I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

                                                                                                                                                      The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

                                                                                                                                                      After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

                                                                                                                                                      I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

                                                                                                                                                      [0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.

                                                                                                                                                      [1]Caveat, it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about, much less so, and much more dangerous, for these “tests” where I’m certainly no expert.

                                                                                                                                                      • markbao 4 hours ago

Not an academic, but I used LaTeX for years and it doesn't feel like what the future of publishing should use. It's finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that…

                                                                                                                                                        • crazygringo 4 hours ago

                                                                                                                                                          100% completely agreed. It's not the future, it's the past.

                                                                                                                                                          Typst feels more like the future: https://typst.app/

                                                                                                                                                          The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.

                                                                                                                                                          • maxkfranz 4 hours ago

LaTeX is good for equations. And LaTeX tools produce very nice PDFs, but I wouldn't want to write in LaTeX generally either.

                                                                                                                                                            The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.

                                                                                                                                                            • auxym 4 hours ago

Agreed. TeX/LaTeX is very old tech. Error recovery and error messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e., probably similar to COBOL and old Fortran).

                                                                                                                                                              I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/

                                                                                                                                                            • nxobject 2 hours ago

What they mean by "academic" is fairly limited here, if LaTeX is the main writing platform. What are their plans for expanding past that and working with, say, Jane Biomedical Researcher in a GSuite or Microsoft org, who has to use Word/Docs and a redlining-based collaboration workflow? I can certainly see why they're making it free at this point.

                                                                                                                                                              FWIW, Google Scholar has a fairly compelling natural-language search tool, too.

                                                                                                                                                              • flockonus 2 hours ago

Curious, in terms of trademark: could this infringe on Vercel's Prisma (a very popular ORM/framework in Node.js)?

EDIT: as corrected by a comment, Prisma is not Vercel's but ©2026 Prisma Data, Inc. -- the curiosity still stands(?)

                                                                                                                                                              • sbszllr 5 hours ago

The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even how permitted, people would be working on their yet-to-be-public work using this tool.

                                                                                                                                                                • MattDaEskimo 5 hours ago

                                                                                                                                                                  What's the goal here?

                                                                                                                                                                  There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to potentially lose rights to their work, or get caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

                                                                                                                                                                  • engineer_22 4 hours ago

                                                                                                                                                                    > Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

                                                                                                                                                                    Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

                                                                                                                                                                    Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.

                                                                                                                                                                  • AuthAuth 4 hours ago

This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.

                                                                                                                                                                    • WolfOliver 6 hours ago

Check out MonsterWriter if you are concerned about this product's recent acquisition.

                                                                                                                                                                      It also offers LaTeX workspaces

                                                                                                                                                                      see video: https://www.youtube.com/watch?v=feWZByHoViw

                                                                                                                                                                      • falcor84 3 hours ago

                                                                                                                                                                        It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I'm very ambivalent about it.

                                                                                                                                                                        • jeffybefffy519 4 hours ago

                                                                                                                                                                          I postulate 90% of the reason openai now has "variants" for different use cases is just to capture training data...

                                                                                                                                                                          • radioactivist 4 hours ago

                                                                                                                                                                            Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn't seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can't seem to edit the comments once typed.

                                                                                                                                                                            • lxe 4 hours ago

Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.

                                                                                                                                                                              EDIT: Fixed :)

                                                                                                                                                                              • radioactivist 36 minutes ago

                                                                                                                                                                                Thanks! (very quickly too)

                                                                                                                                                                            • sva_ 4 hours ago

                                                                                                                                                                              > In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

                                                                                                                                                                              I can't wait

                                                                                                                                                                              • flumpcakes 3 hours ago

                                                                                                                                                                                This is terrible for Science.

I'm sorry, but publishing is hard, and it should be hard. There is a work function involved; writing a paper is supposed to require effort. We've been dealing with low-quality, mass-produced papers from certain regions of the planet for decades (regions which, it appears, are now producing decent papers too).

                                                                                                                                                                                All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.

                                                                                                                                                                                Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (which should be considered a sexual abuse crime) at mere cents.

                                                                                                                                                                                We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

                                                                                                                                                                                I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that prism is not a good thing.

                                                                                                                                                                                • jimmar 3 hours ago

I've wasted hours of my life trying to get LaTeX to format my journal articles to different journals' specifications. That's tedious typesetting that wastes my time. I'm all for AI tools that help me produce my thoughts with as little friction as possible.
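Most of that pain lives in the preamble: switching venues usually means swapping the document class and its options and then chasing whatever the new template breaks. A rough sketch of what changes (IEEEtran is a real class; the rest of the skeleton is purely illustrative):

    % generic draft version:
    % \documentclass{article}
    % same body, repackaged for an IEEE journal:
    \documentclass[journal]{IEEEtran}
    \begin{document}
    \title{My Paper}
    \author{A.~Author}
    \maketitle
    Body text goes here, unchanged between versions.
    \end{document}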

                                                                                                                                                                                  I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.

                                                                                                                                                                                  • flumpcakes 3 hours ago

The Prism video was not just about typesetting. If OpenAI released tools that just helped you typeset or create diagrams from written text, that would be fine. But it's not; it's writing papers for you. Scientists/publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in the post-AI world, where cheating is rampant at universities?

                                                                                                                                                                                  • PlatoIsADisease 3 hours ago

                                                                                                                                                                                    I just want replication in science. I don't care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.

                                                                                                                                                                                    Look at how much BS flooded psychology but had pretty ideas about p values and proper use of affect vs effect. None of that mattered.

                                                                                                                                                                                  • pwdisswordfishy 3 hours ago

                                                                                                                                                                                    Oh, like that mass surveillance program!

                                                                                                                                                                                    • vitalnodo 5 hours ago

                                                                                                                                                                                      With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.

                                                                                                                                                                                      • vessenes 5 hours ago

                                                                                                                                                                                        Don’t forget replication!

                                                                                                                                                                                        • olivia-banks 4 hours ago

I'm curious how you think AI would aid in this.

                                                                                                                                                                                          • vessenes 3 hours ago

                                                                                                                                                                                            Tao’s doing a lot of related work in mathematics, so I can say that first of all literature search is a clearly valuable function frontier models offer.

Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology... and likely suggest experiments to verify or disprove. These all seem like pretty useful functions to provide to a group of scientists, to me.

                                                                                                                                                                                            • noitpmeder 4 hours ago

                                                                                                                                                                                              Replicate this <slop>

                                                                                                                                                                                              Ok! Here's <more slop>

                                                                                                                                                                                              • olivia-banks 4 hours ago

                                                                                                                                                                                                I don't think you understand what replication means in this context.

                                                                                                                                                                                        • khalic 4 hours ago

                                                                                                                                                                                          All your papers are belong to us

                                                                                                                                                                                          • vicapow 4 hours ago

                                                                                                                                                                                            Users have full control over whether their data is used to help improve our models

                                                                                                                                                                                            • chairhairair 2 hours ago

                                                                                                                                                                                              Never trust Sam Altman.

                                                                                                                                                                                              Even if yall don’t train off it he’ll find some other way.

                                                                                                                                                                                              “In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”

                                                                                                                                                                                              https://www.businessinsider.com/openai-cfo-sarah-friar-futur...

                                                                                                                                                                                              • danelski an hour ago

                                                                                                                                                                                                Only the defaults matter.

                                                                                                                                                                                            • CobrastanJorji 2 hours ago

                                                                                                                                                                                              "Hey, you know how everybody's complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?"

                                                                                                                                                                                              "Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."

                                                                                                                                                                                              "Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."

                                                                                                                                                                                              "I dunno, does anybody want that?"

                                                                                                                                                                                              "Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."

                                                                                                                                                                                              "Yeah, I guess you're right, let's do your scientific paper generation thing."

                                                                                                                                                                                              • noahbp 2 hours ago

                                                                                                                                                                                                They seem to have copied Cursor in hijacking ⌘Y shortcut for "Yes" instead of Undo.

                                                                                                                                                                                                • asadm 3 hours ago

Disappointing, actually. What I actually need is a research "management" tool that lets me put in relevant citations but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.

                                                                                                                                                                                                  • epolanski 2 hours ago

                                                                                                                                                                                                    Not gonna lie, I cringed when it asked to insert citations.

                                                                                                                                                                                                    Like, what's the point?

You cite stuff because you literally talk about it in the paper. The expectation is that you read it and that it has influenced your work.

                                                                                                                                                                                                    As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.

                                                                                                                                                                                                    Let me explain how scientific publishing works to people out of the loop:

1. Science is an insanely huge domain. Basically, as soon as you drift into any subtopic, the number of reviewers capable of understanding what you're talking about drops quickly to near zero. Want to speak about the properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. And when I say small, I mean less than a dozen people on the planet, likely fewer, with the expertise to properly judge it. It doesn't matter what the topic is: at the elite level required to really understand what's going on and catch errors or BS, these are very small clubs.

2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; very few good scientists are more than mediocre professors, and they already have huge backlogs).

                                                                                                                                                                                                    3. With AI this is a disaster. If having to review slop for your bs internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.

4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited, so the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish, I can't imagine what happens now.

                                                                                                                                                                                                    • bonsai_spool 18 minutes ago

> But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish, I can't imagine what happens now.

Hmm, I follow the argument, but it's inconsistent with your assertion that there is going to be an incentive for 'proper work' over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (and your comment about 'Asian researchers' ignores that Japan, South Korea, and Taiwan have established research programs, at least in biology).

                                                                                                                                                                                                      • SoKamil an hour ago

                                                                                                                                                                                                        It’s like not only the technology is to blame, but the culture and incentives of modern world.

The urge to cheat in order to get a job, a promotion, approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people. At the end of the day you have to pay your bills.

                                                                                                                                                                                                        • epolanski an hour ago

This isn't about paying your bills, but about having a chance of becoming a full-time researcher or professor in academia, which is obviously the ideal career path for someone interested in science.

All those people can go work for private companies, but few will do so as scientists rather than technicians or QAs.

                                                                                                                                                                                                      • jackblemming 35 minutes ago

There is zero chance this is worth billions of dollars, let alone the trillion$ OpenAI desperately needs. Why are they wasting time with this kind of stuff? Each of their employees needs to generate insane amounts of money to justify their salaries and equity, and I doubt this is it.

                                                                                                                                                                                                        • oytmeal 3 hours ago

                                                                                                                                                                                                          Some things are worth doing the "hard way".

                                                                                                                                                                                                        • legitster 4 hours ago

                                                                                                                                                                                                          It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

                                                                                                                                                                                                          I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

                                                                                                                                                                                                          All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.

                                                                                                                                                                                                          • falcor84 2 hours ago

I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?

                                                                                                                                                                                                          • BizarroLand an hour ago

                                                                                                                                                                                                            https://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging

                                                                                                                                                                                                            In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world's first sentient computer,[2] which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.

                                                                                                                                                                                                            • ai_critic 4 hours ago

                                                                                                                                                                                                              Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

                                                                                                                                                                                                              This is all pageantry.

                                                                                                                                                                                                              • sfink 3 hours ago

                                                                                                                                                                                                                Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it's not using cited works?

                                                                                                                                                                                                                "I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

                                                                                                                                                                                                                • renyicircle 4 hours ago

                                                                                                                                                                                                                  It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.

                                                                                                                                                                                                                  • pfisherman 3 hours ago

                                                                                                                                                                                                                    That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.

                                                                                                                                                                                                                    • olivia-banks 4 hours ago

I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.

                                                                                                                                                                                                                    • olivia-banks 4 hours ago

                                                                                                                                                                                                                      I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.

                                                                                                                                                                                                                      • NewsaHackO 3 hours ago

There is definitely a difference between how senior researchers and students go about making publications. Students are basically told what topic to write a paper on or prepare data for, so they work backwards: they try to write the paper (possibly doing some research along the way to inform the writing), then add references because they know they have to. For actual researchers, it would be a complete waste of time and funding to start a project on a question that has already been answered (and one the grant reviewers will know has already been explored), so in order not to waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.

                                                                                                                                                                                                                      • black_puppydog 4 hours ago

                                                                                                                                                                                                                        Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been behind some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do the pageantry even faster! <3

                                                                                                                                                                                                                        • verdverm 4 hours ago

                                                                                                                                                                                                                          It's all performance over practice at this point. Look to the current US administration as the barometer by which many now measure their public perception.

                                                                                                                                                                                                                          • maxkfranz 3 hours ago

                                                                                                                                                                                                                            A more apt example would have been finding a particular paper you want to cite when you don't want to be bothered searching your reference manager or Google Scholar.

                                                                                                                                                                                                                            E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”

                                                                                                                                                                                                                            • adverbly 3 hours ago

                                                                                                                                                                                                                              I chuckled at that part too!

                                                                                                                                                                                                                              It didn't even open a single one of the papers to look at them! It just declared one of them irrelevant without reading it.

                                                                                                                                                                                                                              • teaearlgraycold 4 hours ago

                                                                                                                                                                                                                                The hand-drawn-diagram-to-LaTeX demo is a little embarrassing. If you load up Prism and create your first blank project you can see the image. It looks like it's actually a LaTeX rendering of a diagram, rendered in a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably won't hold up when it meets real-world use cases.

                                                                                                                                                                                                                                • thesuitonym 3 hours ago

                                                                                                                                                                                                                                  You may notice that this is the way writing papers works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.

                                                                                                                                                                                                                                • delduca 2 hours ago

                                                                                                                                                                                                                                  Within the first five seconds of reading I could spot that it was written by AI.

                                                                                                                                                                                                                                  • zb3 2 hours ago

                                                                                                                                                                                                                                    Is this the product where OpenAI will (soon) take profit share from inventions made there?

                                                                                                                                                                                                                                    • Onavo 2 hours ago

                                                                                                                                                                                                                                      It would be interesting to see how they would compete with the incumbents like

                                                                                                                                                                                                                                      https://Elicit.com

                                                                                                                                                                                                                                      https://Consensus.app

                                                                                                                                                                                                                                      https://Scite.ai

                                                                                                                                                                                                                                      https://Scispace.com

                                                                                                                                                                                                                                      https://Scienceos.ai

                                                                                                                                                                                                                                      https://Undermind.ai

                                                                                                                                                                                                                                      Lots of players in this space.

                                                                                                                                                                                                                                      • 0dayman 5 hours ago

                                                                                                                                                                                                                                        In the end we're going to end up with papers written by AI, proofread by AI... and summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.

                                                                                                                                                                                                                                        • falcor84 2 hours ago

                                                                                                                                                                                                                                          You're assuming a world where humans are still needed to read the papers. I'm more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.

                                                                                                                                                                                                                                        • divan 2 hours ago

                                                                                                                                                                                                                                          No Typst support?

                                                                                                                                                                                                                                          • soulofmischief an hour ago

                                                                                                                                                                                                                                            I understand the collaborative aspects, but I wonder how this is going to compare to my current workflow of just working with LaTeX files in my IDE and using whichever model provider I like. I already have a good workflow and modern models do just fine generating and previewing LaTeX with existing toolchains.

                                                                                                                                                                                                                                            Of course, my scientific and mathematical research is done in isolation, so I'm not wanting for collaborative features. Still, I'm kind of interested to see how this shakes out; we're going to need to see OpenAI really step it up against Claude Opus if they want to be a leader in this space.

                                                                                                                                                                                                                                            • pigeons 2 hours ago

                                                                                                                                                                                                                                              Naming things is hard.

                                                                                                                                                                                                                                              • AlexCoventry 4 hours ago

                                                                                                                                                                                                                                                I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an Overleaf killer?

                                                                                                                                                                                                                                                • wasmainiac 3 hours ago

                                                                                                                                                                                                                                                  The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we'll see hallucination citing hallucination, with independent repeatability out the window.

                                                                                                                                                                                                                                                  • falcor84 3 hours ago

                                                                                                                                                                                                                                                    That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.

                                                                                                                                                                                                                                                    At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?

                                                                                                                                                                                                                                                  • AndrewKemendo 2 hours ago

                                                                                                                                                                                                                                                    I genuinely don't see scientific journals and conferences lasting in this new world of autonomous agents, at least not in the form they have taken until now.

                                                                                                                                                                                                                                                    As other top-level posters have indicated, the review portion of this is the limiting factor:

                                                                                                                                                                                                                                                    unless journals decide to adopt an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.

                                                                                                                                                                                                                                                    So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.

                                                                                                                                                                                                                                                    Over the long run, I expect that's probably not going to mean throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

                                                                                                                                                                                                                                                    If not that, then labs will just produce products, science will stop happening in public, and the only artifacts will be whatever reaches the market.

                                                                                                                                                                                                                                                    • andrepd 2 hours ago

                                                                                                                                                                                                                                                      "Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?

                                                                                                                                                                                                                                                      • lifetimerubyist 20 minutes ago

                                                                                                                                                                                                                                                        As if there wasn't enough AI slop in the scientific community already.

                                                                                                                                                                                                                                                        • hulitu 5 hours ago

                                                                                                                                                                                                                                                          > Introducing Prism Accelerating science writing and collaboration with AI.

                                                                                                                                                                                                                                                          I thought this was introduced by the NSA some time ago.

                                                                                                                                                                                                                                                          • preommr 5 hours ago

                                                                                                                                                                                                                                                            Very underwhelming.

                                                                                                                                                                                                                                                            Was this not already possible in the web ui or through a vscode-like editor?

                                                                                                                                                                                                                                                            • vicapow 4 hours ago

                                                                                                                                                                                                                                                              Yes, but there's a really large number of users who don't want to have to set up VS Code, git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn't have to become a full-stack software engineer to be able to write a research paper in LaTeX.
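
                                                                                                                                                                                                                                                              For a sense of scale, the artifact everyone is collaborating on is usually a single small file; a minimal main.tex sketch (the title, authors, and the latexmk invocation in the comments are illustrative, not tied to any particular tool):

                                                                                                                                                                                                                                                                  % minimal main.tex sketch; names below are placeholders
                                                                                                                                                                                                                                                                  % a local setup typically builds this with something like: latexmk -pdf main.tex
                                                                                                                                                                                                                                                                  \documentclass{article}
                                                                                                                                                                                                                                                                  \title{Draft Paper}
                                                                                                                                                                                                                                                                  \author{A.~Author \and B.~Coauthor}
                                                                                                                                                                                                                                                                  \begin{document}
                                                                                                                                                                                                                                                                  \maketitle
                                                                                                                                                                                                                                                                  \section{Introduction}
                                                                                                                                                                                                                                                                  The collaboration happens in a handful of files like this one; the friction is the toolchain around them, not the document itself.
                                                                                                                                                                                                                                                                  \end{document}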

                                                                                                                                                                                                                                                            • hit8run 3 hours ago

                                                                                                                                                                                                                                                              They are really desperate now, right?

                                                                                                                                                                                                                                                              • jsrozner 3 hours ago

                                                                                                                                                                                                                                                                AI: enshittifying everything you once cared about or relied upon

                                                                                                                                                                                                                                                                (re the decline of scientific integrity / signal-to-noise ratio in science)

                                                                                                                                                                                                                                                                • lispisok 4 hours ago

                                                                                                                                                                                                                                                                  That's way too much work spent having AI generate slop which then gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.

                                                                                                                                                                                                                                                                  • shevy-java 5 hours ago

                                                                                                                                                                                                                                                                    "Accelerating science writing and collaboration with AI"

                                                                                                                                                                                                                                                                    Uhm ... no.

                                                                                                                                                                                                                                                                    I think we need to put an end to AI as it is currently used (not all of it but most of it).

                                                                                                                                                                                                                                                                    • drusepth 5 hours ago

                                                                                                                                                                                                                                                                      Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?

                                                                                                                                                                                                                                                                      • Jaxan 5 hours ago

                                                                                                                                                                                                                                                                        Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.

                                                                                                                                                                                                                                                                      • hahahahhaah 2 hours ago

                                                                                                                                                                                                                                                                        Bringing slop to science.

                                                                                                                                                                                                                                                                        • postalcoder 6 hours ago

                                                                                                                                                                                                                                                                          Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.

                                                                                                                                                                                                                                                                          • cheeseomlit 5 hours ago

                                                                                                                                                                                                                                                            Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about; they know 'Snowden' but not 'PRISM'. The number of people who actually cared about the Snowden leaks is practically a rounding error.

                                                                                                                                                                                                                                                                            • giancarlostoro 2 hours ago

                                                                                                                                                                                                                                                                              Most people don't care about the details. Neither does the media. I've seen national scandals that the media pushed one way disproven during discovery in a legal trial. People only remember headlines, the retractions are never re-published or remembered.

                                                                                                                                                                                                                                                                              • hedora 5 hours ago

                                                                                                                                                                                                                                                                                Given current events, I think you’ll find many more people care in 2026 than did in 2024.

                                                                                                                                                                                                                                                                                (See also: today’s WhatsApp whistleblower lawsuit.)

                                                                                                                                                                                                                                                                              • arthurcolle 5 hours ago

                                                                                                                                                                                                                                                                                This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.

                                                                                                                                                                                                                                                                                • blitzar 4 hours ago

                                                                                                                                                                                                                                                                  Guessing that AI came up with the name based on the description of the product.

                                                                                                                                                                                                                                                                                  Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.

                                                                                                                                                                                                                                                                                  • vjk800 5 hours ago

                                                                                                                                                                                                                                                                                    I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.

                                                                                                                                                                                                                                                                                    • seanhunter 5 hours ago

                                                                                                                                                                                                                                                                                      Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.

                                                                                                                                                                                                                                                                                      • no-dr-onboard 4 hours ago

                                                                                                                                                                                                                                                                                        (plot twist: he works for NSA contractors)

                                                                                                                                                                                                                                                                                      • kaonwarb 5 hours ago

                                                                                                                                                                                                                                                                                        I suspect that name recognition for PRISM as a program is not high at the population level.

                                                                                                                                                                                                                                                                                        • maqp 4 hours ago

                                                                                                                                                                                                                                                                                          2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"

                                                                                                                                                                                                                                                                                          • willturman 2 hours ago

                                                                                                                                                                                                                                                                                            Skynet? C'mon. That would be too obvious - like naming a company Palantir.

                                                                                                                                                                                                                                                                                        • dylan604 6 hours ago

                                                                                                                                                                                                                                                                                          Surprised they didn't do something trendy like Prizm or OpenPrism while keeping it closed source code.

                                                                                                                                                                                                                                                                                          • songodongo 5 hours ago

                                                                                                                                                                                                                                                                                            Or the JavaScript ORM.

                                                                                                                                                                                                                                                                                            • moralestapia 5 hours ago

                                                                                                                                                                                                                                                              I never thought of that association, not in the slightest, until I read this comment.

                                                                                                                                                                                                                                                                                              • locusofself 5 hours ago

                                                                                                                                                                                                                                                                                                this was my first thought as well.

                                                                                                                                                                                                                                                                                                • wilg 5 hours ago

                                                                                                                                                                                                                                                                  I followed the Snowden stuff fairly closely and forgot, so I bet they didn't think about it at all; and if they did, they didn't care, which was surely the right call.

                                                                                                                                                                                                                                                                                                • verdverm 4 hours ago

                                                                                                                                                                                                                                                                  I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?

                                                                                                                                                                                                                                                                  Seems like they have only announced products since then, and no new model trained from scratch. Are they still having pre-training issues?

                                                                                                                                                                                                                                                                                                  • chaosprint 2 hours ago

                                                                                                                                                                                                                                                                                                    As a researcher who has to use LaTeX, I used to use Overleaf, but lately I've been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won't subscribe to ChatGPT.