#computationalmodelling


Generated a quick plot to see usage trends for Open Source Brain v2 (#OSBv2): an integrated #research platform for #neuroscience that indexes multiple model and data sources (#DANDI, #ModelDB, #Biomodels, #Github) and provides compute resources on the #cloud in "workspaces". It also includes specialist applications: #NWBExplorer for working with data in the NeuroData Without Borders (#NWB) format; #NetPyNE-UI for biophysically detailed #ComputationalModelling; and a #JupyterLab environment.
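
For anyone curious how a usage-trend plot like this might be put together, here is a minimal, purely hypothetical sketch: the file name osbv2_monthly_users.csv and its columns are assumptions for illustration, not an actual OSBv2 export format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per month with an active-user count.
# The file name and column names are assumed, not OSBv2's real data format.
df = pd.read_csv("osbv2_monthly_users.csv", parse_dates=["month"])

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(df["month"], df["active_users"], marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Active users")
ax.set_title("OSBv2 usage trend (hypothetical data)")
fig.tight_layout()
plt.show()
```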

We are very happy to provide a consolidated update on the #NeuroML ecosystem in our @eLife paper, “The NeuroML ecosystem for standardized multi-scale modeling in neuroscience”: doi.org/10.7554/eLife.95135.3

#NeuroML is a standard and software ecosystem for data-driven, biophysically detailed #ComputationalModelling, endorsed by the @INCF and COMBINE, with a large community of users and software developers.

#Neuroscience #ComputationalNeuroscience #ComputationalModelling 1/x

The #Izhikevichmodel is a powerful tool for simulating the #spiking and bursting behavior of #neurons with a remarkable balance between biological relevance and computational efficiency 💫 Here is a short introduction along with a #Python implementation to simulate various types of #cortical neurons, including regular spiking, fast spiking, and bursting neurons:

🌍 fabriziomusacchio.com/blog/202

Feel free to share and experiment with it ☺️
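
For readers who want the gist before clicking through, here is a minimal, self-contained sketch of the Izhikevich (2003) model with the standard parameter sets for three cortical cell types. This is my own illustration (function names, time step, and input current are my choices), not the code from the linked post.

```python
import numpy as np
import matplotlib.pyplot as plt

def izhikevich(a, b, c, d, I=10.0, dt=0.25, T=400.0):
    """Simulate one Izhikevich neuron with forward-Euler integration.

    dv/dt = 0.04*v**2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u)
    reset: if v >= 30 mV, then v <- c and u <- u + d
    """
    n = int(T / dt)
    v, u = -65.0, b * -65.0          # resting initial conditions
    vs = np.empty(n)
    for i in range(n):
        if v >= 30.0:                 # spike: record the peak, then reset
            vs[i] = 30.0
            v, u = c, u + d
        else:
            vs[i] = v
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
    return np.arange(n) * dt, vs

# Parameter sets from Izhikevich (2003) for three cortical cell types
cells = {
    "regular spiking":       (0.02, 0.2, -65, 8),
    "fast spiking":          (0.10, 0.2, -65, 2),
    "chattering (bursting)": (0.02, 0.2, -50, 2),
}
fig, axes = plt.subplots(len(cells), 1, sharex=True, figsize=(6, 6))
for ax, (name, params) in zip(axes, cells.items()):
    t, v = izhikevich(*params)
    ax.plot(t, v)
    ax.set_title(name)
    ax.set_ylabel("v (mV)")
axes[-1].set_xlabel("time (ms)")
plt.tight_layout()
plt.show()
```

Note that only the four parameters a, b, c, d change between cell types; the reset rule (v back to c, u bumped by d) is what produces the different spiking and bursting regimes.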


4/
A general issue concerns seductive black-box #research tools (or, equivalently, trending methods "inspired" by published work one doesn't really understand): it is easy to slip into #overfitting, i.e. modelling not only the "signal" under study in too few data, but also (or mostly) its useless noise.

The problem is recursive: if we fall into the trap (no proper #validation), our readers may be led to believe that these shortcuts actually work, perpetuating the anti-culture.

1/
A concerning post-publication exercise (which led the original flawed publication to #retraction) showing how easy it is for our "intuition" about #ComputationalModelling to deceive ourselves and others.

Here, "others" seems to include the editors and some reviewers (whose expertise was perhaps not directly in modelling) of a respected journal, plus the readers who cited the flawed work and propagated the flaw.

Comments:
@kordinglab - coauth. @tdverstynen
neuromatch.social/@kordinglab/

@erinnacland
fediscience.org/@erinnacland/1

Neuromatch Social · Konrad Kording (@kordinglab@neuromatch.social):

Machine learning can easily produce false positives when the test set is wrongly used. Just et al in @NatureHumBehav suggested that ML can identify suicidal ideation extremely well from fMRI, and we were skeptical. Today the retraction and our analysis of what went wrong came out. Here is the retracted paper: https://nature.com/articles/s41562-017-0234-y and here is our refutation: https://nature.com/articles/s41562-023-01560-6.

If true, the paper's approach could revolutionize psychiatric approaches to suicide. So what went wrong? The authors apparently used the test data to select features. Obvious mistake. A reminder for everyone into ML: never use the test set for *anything* but testing. The only practical way to do so in medicine? Lock away the test set until the algorithm is registered.

Side note: it took 3 years to go through the process of demonstrating that the paper was wrong. Journals need procedures to accelerate this. Also, all the good things in this were by @tdverstynen
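
To make the failure mode concrete, here is a minimal sketch (my own illustration using scikit-learn, not the retracted paper's pipeline) of how selecting features on the full dataset, test folds included, yields impressive-looking accuracy on pure noise, while doing the selection inside each training fold brings it back to chance:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5000))   # pure-noise "features" (e.g., voxels)
y = rng.integers(0, 2, 60)            # random binary labels: nothing to find

# WRONG: feature selection sees all data (including future test folds)
# before cross-validation, leaking label information into the features.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# RIGHT: selection happens inside each training fold only,
# so the test fold never influences the feature scores.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky CV accuracy:  {leaky:.2f}")   # typically well above chance
print(f"honest CV accuracy: {honest:.2f}")  # hovers around 0.5
```

With 60 samples and 5000 noise features, the leaky version typically scores far above chance even though the labels are random; the pipeline version stays near 0.5. This is exactly the test-set leakage described above.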