I Spent 8 Hours in a Paid Seminar on Getting Published. Here Are the Secrets From the Gatekeepers

Advice from agents & publishers on AI writing, changing attention spans, market trends, and more. Continue reading on The Writing…
writingcooperative.com/i-spent

#debutnovel #writingtips #aiwriting #fictionwriting #publishing
@indieauthors

AI in the Writing Process: A Problem of Purpose

When we value the product of writing more than the process, we’re bound to see students using GenAI to skip to the end. So, what are we going to do about it? #ArtificialIntelligence #AIEducation #AIEdu #AIInEd #AIInEdu #Writing #AIWriting

leonfurze.com/2025/07/14/ai-in

Leon Furze · AI in the Writing Process: A Problem of Purpose

How to tell if the article you're reading was written by AI. Lisa Larson-Kelley for @FastCompany writes 5 easy steps you can use to help spot AI's "emotionally forgettable" writing.

We love her idea of a support group for the em dash lovers out there!

flip.it/LllTek

Fast Company · How to tell if the article you’re reading was written by AI: You’ve noticed that articles are all starting to sound the same. Here are five reasons why.
#AI #AIWriting #Tech

A study published in Scientific Reports in 2024 claims that "AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts." 👀

Huge if true.

Here's the kicker: "For the human writing process, we looked at humans’ total annual carbon footprints, and then took a subset of that annual footprint based on how much time they spent writing." 🤔

Of course, writing contributes to carbon footprints in the same way as all other human activities like *checks notes* heavy industry, transport, agriculture, and energy and heating. /s 🙄
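To see why that accounting method guarantees a dramatic ratio, here's a back-of-envelope sketch of it. Every number below is an illustrative assumption of mine, not a figure from the paper:

```python
# Illustrative reproduction of the paper's accounting method.
# All inputs are assumptions for demonstration, not the paper's actual data.

HOURS_PER_YEAR = 365 * 24  # 8760

# Assumed total annual carbon footprint of one writer (kg CO2e/year),
# covering everything: heating, transport, food, consumption...
annual_footprint_kg = 15_000

# Assumed time to write one page of text (hours).
hours_per_page = 0.8

# The paper's method: allocate a pro-rata, time-based slice of the
# writer's *entire* footprint to the act of writing that page.
human_kg_per_page = annual_footprint_kg * hours_per_page / HOURS_PER_YEAR

# Assumed inference-side emissions for one AI-generated page (kg CO2e).
ai_kg_per_page = 0.002  # ~2 g CO2e

ratio = human_kg_per_page / ai_kg_per_page
print(f"human: {human_kg_per_page:.3f} kg CO2e/page, ratio: {ratio:.0f}x")
```

With these made-up but plausible inputs the ratio lands in the hundreds, comfortably inside the paper's 130–1500x range. The point: because the human's whole footprint is divided out per hour, "human emissions per page" dwarf any model's inference cost by construction, whether or not writing actually causes those emissions.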

Last author Andrew W. Torrance declares holding shares in NVIDIA. 🤦

nature.com/articles/s41598-024

All credit for these insights goes to Higher Ed discussions of AI writing & use facebook.com/groups/6329308355

Nature · The carbon emissions of writing and illustrating are lower for AI than for humans - Scientific Reports

As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing equivalent writing and illustrating tasks. Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. Emissions analyses do not account for social impacts such as professional displacement, legality, and rebound effects. In addition, AI is not a substitute for all human tasks. Nevertheless, at present, the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans.

LinkedIn CEO Ryan Roslansky is surprised users aren’t embracing AI writing tools on the platform. Unlike Facebook or Twitter, LinkedIn posts reflect your professional image; the platform is your “resume online.” Many avoid AI-generated content for fear it may harm their credibility or trust.

#LinkedIn #AIWriting #ProfessionalBranding #AuthenticityMatters #PersonalBrand #LinkedInTips #AIContent #TechiTalks #ryanroslansky

Read the full article here: techi.com/why-linkedin-users-a

The cognitive debt of using AI to write essays

I am currently reading MIT’s research paper on the “cognitive debt” you can incur when using ChatGPT: Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.

You can find the link to the paper on MIT’s site.

The paragraph below caught my attention:

“This suggests that rewriting an essay using AI tools (after prior AI-free writing) engaged more extensive brain network interactions. In contrast, the LLM-to-Brain group, being exposed to LLM use prior, demonstrated less coordinated neural effort in most bands, as well as bias in LLM specific vocabulary.”

As a professional writer who loves to research random topics and write about them, and whose idea of a relaxing time is tinkering with my notes in my Obsidian vault and reading the latest nerdy theories about personal knowledge management, I’m very protective of my cognitive abilities.

The fact that using tech like AI assistants (aka ChatGPT) can have an impact on our physical brains alarms me quite a bit, especially in an age when “iPad kids” are a thing. We’re only starting to understand the impact of these devices on young brains, let alone that of large language models (LLMs).

The paper seems to suggest that using ChatGPT or any other LLM to generate the essay first, then improving on it, is bad for your brain.

So, I’m glad I’m not using ChatGPT first before writing my essays.

I’ve always resisted this.

For one, I feel that the output “influences” my writing, so I refuse to ask AI to generate any copy first lest I be influenced to write like AI! (I’m the type of writer who can read someone else’s writing and unconsciously adopt their style.)

Instead, I’ve used this technique learned from my journalism days and when I was writing novels actively: Write a very rough, shitty first draft as fast as I can (I even have a 30-minute timer for this). Then, beautify the prose. Both are human activities.

I only use AI when I hit a wall, i.e., writer’s block. I usually ask it for suggestions to improve sentence construction, most often for titles, with which I admittedly need a lot of help for SEO reasons. I rarely, if ever, use the suggested copy wholesale; instead, I rewrite it.

That said, besides the environmental impacts, we now need to consider the physical impacts on us before we use AI to write essays. Maybe even before we use it for brainstorming, because apparently, if we rely on AI for advice, explanations, or ideas, it can foster dependency.

Alas, I have to admit I love using AI for this use case. It’s like having conversations with another nerd about silly subjects and I can go down rabbit holes that way.

I have to admit, I like using AI to clarify my thoughts about decisions I’ve made, and that is a tad too soothing for me!

To clarify, AI isn’t the first place I turn when I want advice. But I need to remember to reach out to humans first, and not replace human advice with AI. I can see how, once I get too comfortable, I might forget to do just that and become dependent.

Recently, I shared a post on Mastodon about a brainstorming technique I found on YouTube, and was surprised by the pushback I received.

https://www.youtube.com/watch?v=p63MKDEsuFc&feature=youtu.be

I, too, use AI to “stay in the subject” when exploring ideas. I personally think this is a healthy and productive way to use AI that won’t, well, damage your brain.

That said, I hope all of us remember that using AI not only incurs cognitive debt but is also damaging to the environment. We should be conscious that using AI to generate an image of a dog flying in space for larks has an environmental cost:

Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search. – Explained: Generative AI’s environmental impacts (MIT)

I hope, moving forward, that we writers can use AI in a way that is good for our brains and our environment.

That means not using AI for everything. That’s a little challenging these days when an AI button is everywhere, I have to admit. They are like little red buttons, enticing us to push them.

Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task

arxiv.org/abs/2506.08872

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.