
#turnitin


I'm taking #deepseek_r1 offline for about 400SGD. It's creepy looking at its thought processes, but after "thinking" it still fucked up despite its ability to reason — and no more "server busy"

gave me a 376-word speech, despite being able to reason that the speech was supposed to run 5-7 minutes...

#kopitiam #singlish

the skill now is not using AI, but rather, cleaning the AI output so that "our work" passes AI detection tools like #turnitin

it's ok to "build on the work of AI" 🤣

I'm teaching first year writing this semester. Their first major paper was due last week and four out of fourteen clearly used AI to write their papers. This after I had them write repeated drafts by hand in class so I could get a sense of their actual style. But the final version was typed, and that's when they had the computer do it.

It's obvious to me that none of them could have written what they turned in, but this is hard to explain to students and it's not great to challenge them without evidence. Of course the AI never writes the same thing twice, so the submissions aren't reproducible. TurnItIn is reprehensible and not very accurate, so that's out. But it turns out that asking ChatGPT to "finish this paragraph" along with a suspicious first sentence from their papers gets close enough to convince them to admit that they used it.

It returns different sentences and mostly similar or identical words, but the flow of ideas and the content is pretty much the same. This worked reliably. They're all new to college and I'm opposed to it anyway, so I'm not willing to report them, but they're all redoing the assignment.

Oh, also: to encourage them to take the risk of doing their own writing, I'm only grading them on whether they turn in all the assignments, including drafts and revisions, so there's nothing to gain grade-wise by using AI. But I guess the temptation was too much. 🤷🏻

I just submitted my assignment for my psych class, and I'm annoyed.

The TurnItIn score I got was 31%, because it's tagged the reference list*, my name and student ID number from the header, and things like the titles of theories I was writing about ("the theory of planned behaviour"). A 5-word theory title takes up a lot of the percentage when the paper is only 6 tasks of 100 words each.

I'm used to a TurnItIn score under 10%.
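As a rough, purely illustrative calculation (the header and reference-list word counts below are guesses, not figures from the post — only the "6 tasks of ~100 words" and the 5-word theory title come from it), a handful of unavoidable matches really can dominate the percentage on a short submission:

```python
# Back-of-envelope sketch of how a similarity percentage balloons
# on a short submission. Header and reference-list sizes are
# hypothetical; the 6 x 100-word tasks and 5-word theory title
# come from the post above.

body_words = 6 * 100          # six tasks of roughly 100 words each
header_words = 10             # name + student ID (hypothetical size)
reference_words = 150         # reference list (hypothetical size)
total_words = body_words + header_words + reference_words

matched_words = (
    5 * 6              # 5-word theory title repeated across six tasks
    + header_words     # header text matches other submissions
    + reference_words  # references match other papers citing the same sources
)

score = round(100 * matched_words / total_words)
print(f"similarity ≈ {score}%")  # prints: similarity ≈ 25%
```

On a longer paper the same fixed matches would contribute a far smaller fraction, which is why a sub-10% score is typical for full-length essays.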

#uni #Academia #Criminology #CriminalJustice #university #psych #psychology #writing #plagiarism #TurnItIn #AI

* Please don't tell me not to include the reference list in TurnItIn. I know that. I also had to submit it as a complete file, so I couldn't take the reference list off and then re-add it before submitting the paper.

Replying to students using #ChatGPT:

"...we already use many ways to assess that don't need detection because there is no way to plagiarise, e.g. process documentation, project lifecycle, or critiquing very specific problems. Moving away completely from submitting essays, which is a very old-fashioned method. Traditional universities who still use essays will have issues, but modern assessment methods cut out the problem. E.g. I've never used #TurnItIn because my approach doesn't benefit from it"

Considering the stampede toward proprietary generative #AI tools in UK HE, no one seems concerned about what will happen to all the input and output data. Reddit, Stack Overflow and now Slack have all caved in to selling user-generated data for training models, making profit out of user content. After the total surrender to #Turnitin this will be a very easy step for UK HE; in fact they probably won't even think of it as a betrayal of trust.
#academicchatter #data #trust #UKHE #education #highered

Also also: a person can prove copying objectively by demonstrating another source. A person can only suppose AI creation of a text based on a collection of different factors run through the filter of human experience and expertise.

That is, don’t use an automated tool to accuse someone of using an automated tool. #Education #ChatGPT #TurnItIn
mstdn.social/@maxkennerly/1122

Max Kennerly (@maxkennerly@mstdn.social): "If you work in education, please please please educate your colleagues that 'AI detectors' like TurnItIn are all scams and none of them can reliably detect AI-generated content." https://www.reddit.com/r/Teachers/comments/1bwojmm/comment/ky7o8m4/

To educators everywhere: Please stop trusting #TurnItIn's plagiarism detection tool.
I get that you don't decide which shiny objects the powers that be buy, but please double-check the plagiarism report before punishing a student.
I am by no means an expert, but in my (limited) experience with their tool, it is prone to false positives.
For example, it once flagged a student's sources as plagiarized just because someone else on the internet had used the same source.

Useful findings from this recent #OIS paper on the use of AI generated text in #university #assessments. #Copyleaks, #TurnItIn, and #Originality.ai demonstrate that they have "very high accuracy" in identifying AI generated submissions. 👍

Walters, William H. "The Effectiveness of Software Designed to Detect AI-Generated Writing: A Comparison of 16 AI Text Detectors." Open Information Science, vol. 7, no. 1, 2023, article 20220158. doi.org/10.1515/opis-2022-0158 #ArtificialIntelligence #ChatGPT

De Gruyter · The Effectiveness of Software Designed to Detect AI-Generated Writing: A Comparison of 16 AI Text Detectors
This study evaluates the accuracy of 16 publicly available AI text detectors in discriminating between AI-generated and human-generated writing. The evaluated documents include 42 undergraduate essays generated by ChatGPT-3.5, 42 generated by ChatGPT-4, and 42 written by students in a first-year composition course without the use of AI. Each detector's performance was assessed with regard to its overall accuracy, its accuracy with each type of document, its decisiveness (the relative number of uncertain responses), the number of false positives (human-generated papers designated as AI by the detector), and the number of false negatives (AI-generated papers designated as human). Three detectors — Copyleaks, TurnItIn, and Originality.ai — have high accuracy with all three sets of documents. Although most of the other 13 detectors can distinguish between GPT-3.5 papers and human-generated papers with reasonably high accuracy, they are generally ineffective at distinguishing between GPT-4 papers and those written by undergraduate students. Overall, the detectors that require registration and payment are only slightly more accurate than the others.
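The abstract's evaluation boils down to standard binary-classification bookkeeping. As an illustrative sketch (the sample verdicts below are invented for the example, not results from the Walters study), here is how overall accuracy, false positives, and false negatives are tallied from a detector's outputs:

```python
# Toy detector scorecard. Labels: "ai" (AI-generated) or "human".
# The verdicts are made up for illustration; they are NOT data
# from the Walters (2023) comparison.

truths   = ["ai", "ai", "ai", "human", "human", "human"]
verdicts = ["ai", "ai", "human", "human", "ai", "human"]

# False positive: human-written paper flagged as AI (the harmful case
# for students). False negative: AI-generated paper passed as human.
false_positives = sum(
    1 for t, v in zip(truths, verdicts) if t == "human" and v == "ai"
)
false_negatives = sum(
    1 for t, v in zip(truths, verdicts) if t == "ai" and v == "human"
)
accuracy = sum(t == v for t, v in zip(truths, verdicts)) / len(truths)

print(false_positives, false_negatives, round(accuracy, 3))  # prints: 1 1 0.667
```

Note that a detector can post impressive overall accuracy while still producing false positives; in a grading context even a small false-positive rate means some honest students get accused, which is the concern several posts above raise.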