• Re: ChatGPT contributing to current science papers

    From Ernest Major@21:1/5 to RonO on Sun Aug 11 11:29:40 2024
    On 10/08/2024 22:32, RonO wrote:
    https://phys.org/news/2024-08-junk-ai-scientific-publishing.html

    Several examples of scientists using AI to write papers with AI-generated mistakes that passed peer review.  I noted before that ChatGPT could sometimes be used to write the introductions of papers better than the authors had done.  One example of a figure manipulation indicates that some authors are using it to present and discuss their data.  That seems crazy.  ChatGPT doesn't evaluate the junk that it is given; it basically just summarizes whatever is fed into it on some subject.  I used a graphic AI once.  I asked it to produce a picture of a chicken walking towards the viewer.  It did a pretty good job, but gave the chicken the wrong number of toes facing forward.  Apparently junk like that is making it into science publications.

    With these examples in mind, it may be that problems in one of the last papers I reviewed before retiring earlier this year were due to AI.  It had a good introduction that cited the relevant papers and summarized what could be found in them, but even though the authors had cited previous work doing what they claimed to be doing, their experimental design was incorrect for what they were trying to do.  The papers they cited had done things correctly, but they had not.  I rejected the paper and informed the journal editor that it needed a substantial rewrite so that the authors stated what they had actually done.  What might have happened is that the researchers had an AI write their introduction, but it described what they wanted to do, not what they actually did.  English was likely not the primary language of the authors, and they may not have understood the introduction that was written.  If they had understood the introduction, they would have figured out that they had not done what they claimed to be doing.  Peer review is going to have to deal with this type of junk.  The last paper that I reviewed, in March, came with instructions that the reviewers were not to use AI to assist them with the review, but it looks like reviewers are going to need software that will detect AI-generated text.
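
    As a rough illustration of what such software might look at, here is a toy sketch in Python.  It is purely illustrative: real detectors typically score text with a language model rather than with surface statistics, and everything below is invented for the example.

    import re
    import statistics

    # Toy stylometric features sometimes cited as weak signals of
    # machine-generated text: low vocabulary variety and unusually
    # uniform sentence lengths ("low burstiness").  This is not any
    # real detector's method; it only illustrates the idea.
    def surface_stats(text):
        sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
        words = re.findall(r"[a-z']+", text.lower())
        lengths = [len(s.split()) for s in sentences]
        return {
            "type_token_ratio": len(set(words)) / max(len(words), 1),
            "sentence_len_stdev": statistics.pstdev(lengths) if lengths else 0.0,
        }

    print(surface_stats("The results are significant.  The results are "
                        "robust.  The results are novel."))

    Reported error rates for AI-text detectors are substantial, so at best such a score could flag a manuscript for closer human reading, not decide anything on its own.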

    Ron Okimoto


    I can understand why journals would not want authors to use AI in writing papers*, but why would they not want reviewers to use AI tools if those tools can assist in reviewing the paper?

    * Even so, the AI rubric includes translation tools (authors might write text in their native language, and use AI for a first-pass translation into English), and the spelling/grammar/style checker Grammarly now includes AI features.

    --
    alias Ernest Major

  • From Burkhard@21:1/5 to Ernest Major on Sun Aug 11 16:25:54 2024
    On Sun, 11 Aug 2024 10:29:40 +0000, Ernest Major wrote:

    I can understand why journals would not want authors to use AI in writing papers*, but why would they not want reviewers to use AI tools if those tools can assist in reviewing the paper?

    * Even so, the AI rubric includes translation tools (authors might write text in their native language, and use AI for a first-pass translation into English), and the spelling/grammar/style checker Grammarly now includes AI features.

    If any of you are in Edinburgh right now, I'm on a panel on this topic at the International Book Festival, presenting the outcome of two research projects we had on this, and some workshops with publishers.

    https://www.edbookfest.co.uk/the-festival/whats-on/page-against-the-machine

    I'm on the more relaxed side on this myself, and agree in particular with Ernest that nobody worries about some routine tasks like spell-checking.  (Translation raises some really interesting issues "at the margins": Google, for example, got some pushback when it included Romani in its latest list of languages without checking with the community, and many are unhappy because they considered the "quasi-secret" nature of the language a historical survival tool.)  There are also very interesting questions on the copyright of translations, etc.

    For use by academics, it often depends on the details.  GenAI is a glorified autocomplete tool; keep that in mind and you'll be fine.  So having it help write the review, once you have decided on the content, is much less of an issue than, e.g., outsourcing the actual analysis.
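
    To make the "glorified autocomplete" point concrete, here is a minimal sketch of the generation loop, with a hand-written toy bigram table standing in for the neural network; the words and probabilities are invented for illustration.

    import random

    # Hypothetical toy model of P(next word | current word).  Real
    # LLMs replace this lookup table with a neural network over a
    # huge vocabulary, but the loop below has the same shape: score
    # the candidate continuations, pick one, append, repeat.
    BIGRAMS = {
        "peer":   {"review": 0.9, "pressure": 0.1},
        "review": {"found": 0.5, "was": 0.5},
        "found":  {"errors": 0.7, "nothing": 0.3},
    }

    def generate(word, steps=3):
        out = [word]
        for _ in range(steps):
            choices = BIGRAMS.get(out[-1])
            if not choices:        # no known continuation: stop early
                break
            words, probs = zip(*choices.items())
            out.append(random.choices(words, weights=probs)[0])
        return " ".join(out)

    print(generate("peer"))  # e.g. "peer review found errors"

    Nothing in that loop checks whether the output is true, only whether it is probable, which is exactly why deciding on the content yourself matters.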

    And beware of hallucinations, as some lawyers found to their detriment when they submitted filings to the court that had made-up precedents in them.

  • From Ernest Major@21:1/5 to RonO on Sun Aug 11 19:54:14 2024
    On 11/08/2024 18:29, RonO wrote:
    On 8/11/2024 11:25 AM, Burkhard wrote:
    And beware of hallucinations, as some lawyers found to their detriment when they submitted filings to the court that had made-up precedents in them.


    One recent paper that I recall reading indicated that AI hallucinations resulted from feeding AI-generated summaries back into the AI.  The AI started making things up when it had to deal with AI-generated material.

    Ron Okimoto


    From what I've read, using LLM output as training material for LLMs, and similar actions, does cause issues, but it is not required for the generation of AI hallucinations.  The training data is already poisoned by the presence of misinformation, error, sarcasm, satire, parody and fiction.  Even with a clean training set, LLMs don't understand the limitations of their knowledge, and when they transgress those limitations they generate hallucinations.  (More recently, I have had Bing Copilot - I occasionally give it a question when Google et al are being particularly obtuse - tell me that it was unable to answer, so some guardrails have been added.)
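
    The training-on-LLM-output problem (termed "model collapse" in the literature) has a simple numerical analogue, sketched below.  The Gaussian toy is a deliberately oversimplified illustration under assumed parameters, not the mechanism inside any actual model.

    import random
    import statistics

    # Toy analogue of training on generated data: fit a Gaussian to
    # samples, then refit to samples drawn from the previous fit, and
    # so on.  Sampling error compounds across generations, the
    # estimates drift, and the fitted spread tends to shrink, so rare
    # "tail" behaviour is lost first - loosely mirroring reports of
    # degradation when models learn from model output.
    random.seed(1)
    mu, sigma = 0.0, 1.0        # generation 0: the "real" distribution
    for gen in range(1, 7):
        sample = [random.gauss(mu, sigma) for _ in range(25)]
        mu, sigma = statistics.mean(sample), statistics.pstdev(sample)
        print(f"gen {gen}: mean {mu:+.3f}  stdev {sigma:.3f}")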

    --
    alias Ernest Major

  • From Athel Cornish-Bowden@21:1/5 to RonO on Tue Aug 13 16:11:08 2024
    On 2024-08-12 13:37:21 +0000, RonO said:

    On 8/11/2024 8:09 PM, JTEM wrote:
     RonO wrote:

    Peer review has its flaws, but there is absolutely no doubt that it is the best means we have for giving research its first-pass evaluation.

    It's irredeemably flawed. There needs to be transparency.

    The biggest danger, and it does happen, is good science being killed
    off by "Peer Review."

    You are just delusional.

    Of course, but you're using it in its correct, well-established sense.  JTEM is using it in the trashy sense used by junk journals to tell you that their publications are peer reviewed.

    There are so many journals publishing similar science that peer review is about the last thing that is going to kill off good science.  The current situation is that there are journals damaging the integrity of science by acting as paper mills, publishing junk if the authors are willing to pay them.


    How to stop it?  Transparency. Let the rejected papers see the light
    of day.

    When I review a paper, I always check the box that gives the journal the right to name me as one of the reviewers and, where the journal has such a policy, to forward my reviews to other journals if they think the paper would be better suited there.  My recollection is that pretty much all journals warn reviewers about reviewing papers where they have a conflict of interest, and pretty much all of them require the reviewers to declare that they have none.

    There really are so many journals at this time that the suppression you claim just doesn't exist.

    Bad junk gets rejected from all legitimate journals.


    Peer review can be manipulated (Sternberg and Meyer), and groups of researchers have been exposed for recommending each other's papers for peer review (some journals ask the authors to recommend possible peer reviewers in their field).

    Less concerned about bad science making it through.  Science is self-correcting.  Science is repeatable or it isn't science.  We can reasonably expect garbage to self-correct.  But the opposite isn't true.  Good science that is kept from seeing the light of day is a loss to the world.

    Science is not narrowly focused; it is quite dispersed, with many journals publishing similar science.  The fact that science is self-correcting is the reason that you don't have to worry about peer review.  Things that aren't worth publishing get published all the time; they just get buried in the junk pile and do not get noticed.  My guess is that the rate of rejection is pretty low for most journals.  I was an associate editor for around a decade (off and on) since the 1990s, and have reviewed papers for a wide range of journals, not just that one, and I have only outright rejected two papers; all the rest were sent back for revision, and most were eventually accepted.

    Ron Okimoto


    --
    Athel -- French and British, living in Marseilles for 37 years; mainly
    in England until 1987.

  • From Bob Casanova@21:1/5 to All on Sun Aug 18 09:55:11 2024
    On Sat, 17 Aug 2024 13:36:50 -0700, the following appeared
    in talk.origins, posted by erik simpson
    <eastside.erik@gmail.com>:

    On 8/17/24 1:15 PM, RonO wrote:
    On 8/17/2024 10:42 AM, JTEM wrote:
      RonO wrote:

    This seems

    You. Could. Not. Raise. A. Single. Objection.

    Not one.

    Not even after I humiliated you for it.

    I already did, but you have just removed the material and kept plodding on with the same misdirection.  Even you can't figure out what you are responding to in this post, let alone in the previous posts.

    Ron Okimoto

    He neither knows nor cares. Ignore the chump.

    Careful; ignoring him/her/it means that you're being
    "willfully blind".

    --

    Bob C.

    "The most exciting phrase to hear in science,
    the one that heralds new discoveries, is not
    'Eureka!' but 'That's funny...'"

    - Isaac Asimov
