• Acacian 4 hours ago

    The verification pipeline is the most valuable part of your workflow. Most people who use AI for literature reviews skip exactly that step — they trust the output and move on.

    What you're describing is closer to building a testing harness than "using AI to write." You're asserting claims, checking them against source PDFs, and reviewing manually. That's more rigorous than most manual lit reviews where people skim abstracts and cite papers they half-read.

    Document the pipeline as methodology in your dissertation. That turns a potential misconduct question into a contribution.
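    The "testing harness" framing above can be made concrete. Below is a minimal sketch of what such a claim-verification step might look like; every name, file, and quote here is illustrative, not the OP's actual pipeline, and real PDFs would need a text-extraction step first:

```python
# Hypothetical claim-verification harness: each claim carries a verbatim
# quote and the source it supposedly came from; any claim whose quote
# cannot be found in that source's extracted text gets flagged for
# manual review.

# Stand-in for text extracted from source PDFs (illustrative only).
sources = {
    "smith2021.pdf": "We observed a 12% improvement in recall on the held-out set.",
    "lee2019.pdf": "The effect disappeared after controlling for cohort age.",
}

claims = [
    {"claim": "Smith et al. report a 12% recall improvement.",
     "quote": "12% improvement in recall", "source": "smith2021.pdf"},
    {"claim": "Lee et al. found the effect persisted across cohorts.",
     "quote": "effect persisted across cohorts", "source": "lee2019.pdf"},
]

def verify(claims, sources):
    """Split claims into (verified, unverified) by quote lookup."""
    verified, unverified = [], []
    for c in claims:
        text = sources.get(c["source"], "")
        (verified if c["quote"] in text else unverified).append(c)
    return verified, unverified

ok, flagged = verify(claims, sources)
for c in flagged:
    print(f"UNVERIFIED: {c['claim']} (check {c['source']} manually)")
```

    The point is exactly the one made above: the flagged list is where the human review effort goes, and the whole loop is documentable as methodology.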

    • love2read 17 hours ago

      Someone against AI will tell you yes; someone for AI will tell you no. The only thing I can really say is that I don't agree with the idea that having ADHD should earn you a reprieve from the normal rules.

      • jimbooonooo 15 hours ago

        I was diagnosed with ADHD later in life and struggled academically, but I agree with this completely. Everybody faces difficulties in life, and ADHD doesn't justify constant exceptions. Your workplace will be far less accommodating, and you need to figure out how to adapt.

        Using AI for literature review is a great tool, but the onus is on you to both verify the output AND disclose usage of said tool. Clearly describing your methodology is an important skill for writing papers anyway.

        • latand6 8 hours ago

          I’d be happy to disclose, and I’d even consider sharing how I did it all.

          I’ve even drafted the acknowledgment section with a brief explanation of how I used the AI tools.

          The only part I’m concerned about is the stigma around AI use, and the risk that it could be treated as misconduct.

      • austinjp 15 hours ago

        While your dashboard sounds fancy, this part raises issues:

        > I run ChatGPT Pro to collect all relevant papers

        Any literature review must be reproducible. If you can't say exactly what queries you ran against exactly what databases, you'll get into trouble. Whether or not that's the way things should be is irrelevant: it's the way things are.
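        A reproducibility log is cheap to keep even for AI-assisted searches. Here is a minimal sketch, assuming you record each query by hand; the databases, queries, and field names are all illustrative:

```python
# Sketch of a reproducibility log for literature searches: record exactly
# which query ran against which database, when, and how many results came
# back, so the search can be repeated and reported later.
import json
from datetime import date

search_log = []

def log_search(database, query, n_results, notes=""):
    """Append one search record to the log and return it."""
    entry = {
        "database": database,
        "query": query,
        "date": date.today().isoformat(),
        "n_results": n_results,
        "notes": notes,
    }
    search_log.append(entry)
    return entry

# Illustrative entries only.
log_search("PubMed", '("working memory") AND ("ADHD")', 134)
log_search("Scopus", 'TITLE-ABS-KEY("working memory" AND adhd)', 98,
           notes="duplicates with PubMed not yet removed")

print(json.dumps(search_log, indent=2))
```

        If the collection step runs through a chat model instead of a database, the same principle applies: save the prompts and the returned reference lists verbatim.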

        Ask your supervisor whether your approach is okay. If necessary, frame it hypothetically: "would it be okay if I were to....?" If your supervisor is unavailable, seek advice from their colleagues.

        Since you mention ADHD, you're likely to be strongly motivated by novelty. Don't spend time building a dashboard that you could spend on writing your thesis. If you're not getting support from your university, get it now. It might not help, but it's a signal to the university that you're engaging with the system.

        • latand6 8 hours ago

          Can you really reproduce it though?

          I thought it’s the experiments that have to be reproducible, not the literature review.

          • austinjp 7 hours ago

            Whether you can or can't in reality is moot, unfortunately. The literature search in biomedical fields should indeed be theoretically reproducible. I don't know about other fields, but it would seem odd to me if a search were not reproducible; that would lead to a very arbitrary literature selection.

            As for the experiments, yes, in experimental fields. But in all (most?) fields, including non-experimental, the whole process should be well documented so it could be reproduced end-to-end if possible. If it's not reproducible there should be good, well explained reasons why not.

            Note that reproducibility does not necessarily mean the exact same answer will definitely emerge, just that the methods can be followed closely.

            • latand6 6 hours ago

              Got that, thanks for the advice, I'll ask my supervisor how to address that properly

          • BrenBarn 14 hours ago

            > Any literature review must be reproducible.

            That's totally at odds with my understanding, but perhaps this differs between fields.

            • austinjp 7 hours ago

              Quite probably there are differences between fields. In biomedical literature reviews the search terms and databases are detailed, and (in systematic reviews) a PRISMA flowchart [0] is provided. The theory is that other researchers could repeat the searches and the in/out decisions and get the same stack of papers to review.

              [0] https://www.prisma-statement.org/prisma-2020-flow-diagram
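              The bookkeeping behind such a flowchart is just a running tally. A minimal sketch with made-up numbers (the categories mirror the PRISMA stages, but every figure here is illustrative):

```python
# Minimal PRISMA-style tally: records identified per database, then
# deduplicated, screened, assessed in full text, and finally included.
# All numbers are illustrative.
identified = {"PubMed": 134, "Scopus": 98}
duplicates_removed = 41
screened = sum(identified.values()) - duplicates_removed          # 232 - 41
excluded_at_screening = 150
full_text_assessed = screened - excluded_at_screening
excluded_full_text = {"wrong population": 12, "no outcome data": 9}
included = full_text_assessed - sum(excluded_full_text.values())

print(f"identified={sum(identified.values())}, screened={screened}, "
      f"full-text={full_text_assessed}, included={included}")
```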

          • fyredge 17 hours ago

            Yes and no. The first thing to understand is that in academia, knowledge is the work. You are being trained to absorb existing knowledge, hypothesise new knowledge and test if it is valid.

            LLMs are a useful tool if you want to generate text. But in the context of research, this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%? The business of research is the business of factual knowledge. Every piece of information should be, and is expected to be, scrutinized. That's why dishonesty is so severely looked down upon (falsifying data, plagiarism, etc.).

            I would say your use case is not dishonest, but I would also like you to think from the perspective of the university. How would they know whether their students are using it honestly, as you did? How can they, with their limited resources, make sure that research integrity is upheld in the face of automated hallucinations?

            At the end of the day, the question is not whether using AI is dishonest; it's whether you can walk into an antagonistic panel and defend your claim that you understand the knowledge of your field (without live AI help). If you can do that, and also make sure the contents are not hallucinated, then I don't see why not.

            • latand6 8 hours ago

              Yeah, that’s exactly my point. The AI is just taking over the boring job of collecting evidence, and I’m the validator. This way I’m able to process papers much faster than without AI. It’s faster primarily because you don’t have to spend 70% of your time reading abstracts and sections of papers you’ll never need. Doing it manually is very exhausting.

              That being said, I feel more productive in terms of generating insights beyond what the AI said. I also have a chat interface where I can basically ask anything I want about the PDF (and yeah, I’m aware of NotebookLM, I just don’t trust Gemini).

            • matzalazar 7 hours ago

              Think about it this way: 70 years ago, would a physicist be considered a cheater for using a calculator to solve complex differential equations in their daily work? People tend to frame the moral dilemmas of new technology through the lens of everyday human tasks, and I think that's just a prejudice.

              • malshe 14 hours ago

                I don't think what you are doing is dishonest. But my opinion hardly matters.

                My advice is to talk to your dissertation committee chair to understand whether they think it is dishonest. Furthermore, read your university's AI usage policies. If they don't consider what you are doing a permissible use of AI, no amount of assurance on HN or any online forum is gonna help you.

                • latand6 8 hours ago

                  I agree with you, and that’s exactly what I’m going to do. It’s just that I may be more persuasive if I’m prepared.

                • Neosmith_amit 16 hours ago

                  No, I don't think it is dishonest.

                  At the same time, I would recommend documenting your methodology explicitly in the dissertation: describe the verification pipeline, and make it clear what you reviewed manually versus what was automated. That transparency converts "dishonest?" into "methodologically rigorous."

                  Here is the thing, academic policy is NOT really about honesty. It is about trust. Universities cannot distinguish your workflow from someone who prompted GPT to write their lit review wholesale.

                  More than the ethical distinction, I believe the rule around AI usage is blunt because enforcement is pretty hard.

                  • QubridAI 17 hours ago

                    Not dishonest if you verify everything and understand it deeply, but you should be transparent about your AI use, since many universities care more about disclosure than about the method itself.

                    • bjourne 15 hours ago

                      You cannot copy others' work and claim it as your own. Thus, you cannot copy ChatGPT's work and claim it as your own. There is a qualitative difference between having an LLM generate text and having a program spell- and grammar-check text. Since you are not going to highlight which passages in your article ChatGPT wrote for you, and instead intend to pass it off as your own creative work, it is dishonest. Very dishonest. If caught, you will get in trouble and may be kicked out of your academic programme.

                      • latand6 8 hours ago

                        There is not a single paragraph that I might “steal” from ChatGPT. I’m consistently using multiple LLMs to write, polish, rephrase, and make all other kinds of edits.

                        I really don’t see why typing it out manually is necessary. Can you explain?

                      • adampunk 17 hours ago

                        I don’t know if it is dishonest. What I do know is that it will only save you time if you have a very specific and testable need. Otherwise it will appear to save time and produce something that you won’t be proud of.