#scientificpublishing

Public

@jonmsterling @lindsey @JonathanAldrich

A very senior faculty member once told me that "back in the day" (when they were young) an interesting research finding would first get published as a brief note in Nature, and then a year later the lengthy, detailed manuscript with all the data and methods would appear in the Journal of Neurophysiology. And one wasn't considered a serious scientist if the latter never appeared, because the former was just the news splash, with barely enough substance to understand what was being claimed.

Tragic that this separation of concerns has been lost, with everyone now giving more weight to the flashy bit ("significance of findings"), to the point of not bothering to publish the lengthy, detailed, reproducible study ("strength of evidence").

These two axes are what eLife's assessment aims to capture in a pithy paragraph.

Public

The scientific publishing system is fucked up.
I believe one of the things you can do as a researcher is to contribute only to journals that don't suck.
But what are journals that don't suck™?
I've written down my criteria here: marioangst.com/en/blog/posts/c
#academia #scientificpublishing #bigpublishing #OpenScience

marioangst.com · Criteria for journals that don’t suck – Mario Angst
Public
Speaking of widespread low-quality scientific publication and the need to take care with words: https://retractionwatch.com/2025/02/10/vegetative-electron-microscopy-fingerprint-paper-mill/
The phrase was so strange it would have stood out even to a non-scientist. Yet “vegetative electron microscopy” had already made it past reviewers and editors at several journals when a Russian chemist and scientific sleuth noticed the odd wording in a now-retracted paper in Springer Nature’s Environmental Science and Pollution Research.

Today, a Google Scholar search turns up nearly two dozen articles that refer to “vegetative electron microscopy” or “vegetative electron microscope,” including a paper from 2024 whose senior author is an editor at Elsevier, Retraction Watch has learned. The publisher told us it was “content” with the wording.
Note the presence of the Nature Publishing Group, notorious lately for its low-quality AI slop and AI boosterism, and Elsevier, which is generally terrible.
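
The sleuthing that catches these phrases is, at its core, plain string matching, simple enough to sketch. A toy Python scanner (the phrase list and command-line handling are my own illustrative assumptions; real efforts such as the Problematic Paper Screener track "tortured phrases" at a much larger scale):

```python
import re
import sys

# Illustrative fingerprint list; real sleuthing efforts maintain
# far longer lists of known nonsense phrases.
FINGERPRINTS = [
    "vegetative electron microscopy",
    "vegetative electron microscope",
]

def scan(text: str) -> list[str]:
    """Return the fingerprint phrases that occur in `text`, case-insensitively."""
    return [p for p in FINGERPRINTS
            if re.search(re.escape(p), text, re.IGNORECASE)]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            hits = scan(f.read())
        if hits:
            print(f"{path}: suspicious phrasing: {', '.join(hits)}")
```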

#AI #GenerativeAI #LLM #AISlop #InformationOilSpill #AcademicPublishing #ScientificPublishing #PaperMill #PeerReview
Retraction Watch · As a nonsense phrase of shady provenance makes the rounds, Elsevier defends its use
Public

We are inviting applications for the role of Executive Editor for @Dev_journal, a prestigious not-for-profit journal serving the international developmental and stem cell biology community. This is an exciting opportunity (resulting from an internal promotion) for a talented scientific editor. If you're an experienced editor with a love of developmental biology and its community, this could be the job for you: biologists.com/wp-content/uplo
Application deadline: 10 March 2025
#scientificpublishing

Public

@brembs When I say this, a question I often get is: how can you tell a paper is good, generically speaking?

A good paper comes in many forms. Based on the attention they receive, some well-known forms are:

1. A report on findings or tools that other labs are already relying on before it is even published.

2. A report that has been ignored for some time, and suddenly starts being referred to, surfacing as late citations. A "sleeper" paper, one that was ahead of its time.

No. 1 is a favourite of journals fishing for citations to boost their impact factor, since such papers are guaranteed to collect many citations in their first years. But No. 2 signals exceptional, visionary work.

Some other, overlapping forms of papers, based on how foundational the findings are:

3. A report whose findings change the way other labs will, from then on, approach a particular field or a paradigm within that field.

4. A report that is referred to from an undergraduate textbook.

Most of these, except No. 1, share the same "problem": it takes years for the field to appreciate them. The impact factor only considers two years, and it is journal-level, not paper-level, so there's a lot of noise. At the paper level, article-level citation counts take time to build up anyway, and are very field-dependent, so they aren't reliable either.
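
For concreteness, here is the usual two-year impact factor definition (a sketch of the standard formula, not the indexer's exact operational rules):

```latex
% Impact factor of journal J in year y: citations received in year y
% to items published in the two preceding years, divided by the number
% of citable items published in those two years.
\mathrm{IF}_y(J) = \frac{C_y(J,\,y-1) + C_y(J,\,y-2)}{N_{y-1}(J) + N_{y-2}(J)}
```

A sleeper paper's citations arrive almost entirely outside that two-year window, so it contributes nothing to the journal's score.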

Truly there are no shortcuts to the evaluation of scientific research. A sensible strategy is hedging one's bets, because the chase for short-term clout can drastically cut out those sweet long-term rewards, and both matter. What also matters a lot is being a scientist yourself, in addition to an administrator, so as to be able to even approach the evaluation problem.

Public

@brembs The deeper issue is that of evaluation in academia. At the moment, and for quite some years now, many have taken "more" as better, in both the number of papers and the number of citations, plus the additional axis of perceived importance, i.e., the aura of the glamour journals and the use of their impact factors as rubber-stamp credentials.

This house of cards collapses quickly when you consider that, e.g., a large double-digit percentage of papers in glamour journals are never cited at all (which motivated article-level metrics), and that a significant percentage end up retracted (which invalidates any claim that more citations means better).

To evaluate scientists from their published papers, evaluators have to read the papers, discuss them among themselves, contextualize them to the needs and future of their institution, and make up their mind. There are no shortcuts.

Public

A new paper about @joss

"The Journal of Open Source Software (JOSS): Bringing Open-Source Software Practices to the Scholarly Publishing Community for Authors, Reviewers, Editors, and Publishers"

doi.org/10.31274/jlsc.18285

Journal of Librarianship and Scholarly Communication · The Journal of Open Source Software (JOSS): Bringing Open-Source Software Practices to the Scholarly Publishing Community for Authors, Reviewers, Editors, and Publishers

Introduction: Open-source software (OSS) is a critical component of open science, but contributions to the OSS ecosystem are systematically undervalued in the current academic system. The Journal of Open Source Software (JOSS) contributes to addressing this by providing a venue (that is itself free, diamond open access, and all open-source, built in a layered structure using widely available elements/services of the scholarly publishing ecosystem) for publishing OSS, run in the style of OSS itself. A particularly distinctive element of JOSS is that it uses open peer review in a collaborative, iterative format, unlike most publishers. Additionally, all the components of the process – from the reviews to the papers to the software that is the subject of the papers to the software that the journal runs – are open.

Background: We describe JOSS’s history and its peer review process using an editorial bot, and we present statistics gathered from JOSS’s public review history on GitHub showing an increasing number of peer reviewed papers each year. We discuss the new JOSSCast and use it as a data source to understand reasons why interviewed authors decided to publish in JOSS.

Discussion and Outlook: JOSS’s process differs significantly from traditional journals, which has impeded JOSS’s inclusion in indexing services such as Web of Science. In turn, this discourages researchers within certain academic systems, such as Italy’s, which emphasize the importance of Web of Science and/or Scopus indexing for grant applications and promotions. JOSS is a fully diamond open-access journal with a cost of around US$5 per paper for the 401 papers published in 2023. The scalability of running JOSS with volunteers and financing JOSS with grants and donations is discussed.
Public

By the way, the tools available nowadays for discovering scientific literature are remarkable. Two examples:

#SemanticScholar lists genuinely related and interesting papers:
semanticscholar.org/paper/Cogn

And so does #Sciety @sciety, though it offers a different set of related papers:
sciety.org/articles/activity/1
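
If you want such suggestions programmatically, Semantic Scholar also exposes a public API. A minimal Python sketch; the recommendations endpoint, parameters, and response shape are given to the best of my knowledge, so verify them against the current API docs before relying on this:

```python
import requests

# Placeholder identifier; per the docs the API accepts S2 paper IDs and
# prefixed external IDs such as "DOI:..." (hypothetical DOI shown here).
PAPER_ID = "DOI:10.1234/example"

def related_papers(paper_id: str, limit: int = 5) -> list[dict]:
    """Ask Semantic Scholar's recommendations endpoint for related papers."""
    url = ("https://api.semanticscholar.org"
           f"/recommendations/v1/papers/forpaper/{paper_id}")
    resp = requests.get(url, params={"fields": "title,year", "limit": limit},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("recommendedPapers", [])

if __name__ == "__main__":
    for paper in related_papers(PAPER_ID):
        print(paper.get("year"), "·", paper.get("title"))
```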

Public

A post by @11011110 has reminded me that (after a year and a half of lurking) it's never too late for me to toot and pin an intro here.

I am a Canadian mathematician in the Netherlands, and I have been based at the University of Amsterdam since 2022. I also have some rich and longstanding ties to the UK, France, and Japan.

My interests are somewhere in the nexus of Combinatorics, Probability, and Algorithms. Specifically, I like graph colouring, random graphs, and probabilistic/extremal combinatorics. I have an appreciation for randomised algorithms, graph structure theory, and discrete geometry.

Around 2020, I began taking a more active role in the community, especially in efforts towards improved fairness and openness in science. I am proud to be part of a team that founded the journal Innovations in Graph Theory (igt.centre-mersenne.org/), which launched in 2023. (That is probably the main reason I joined mathstodon!) Since 2020 I have also been a coordinator of the informal research network A Sparse (Graphs) Coalition (sparse-graphs.mimuw.edu.pl/), devoted to online collaborative workshops. In 2024, I helped spearhead the MathOA Diamond Open Access Stimulus Fund (mathoa.org/diamond-open-access).

Until now, my posts have mostly been about scientific publishing and combinatorics.

#introduction
#openscience
#diamondopenaccess
#scientificpublishing
#openaccess
#RemoteConferences
#combinatorics
#graphtheory
#ExtremalCombinatorics
#probability

igt.centre-mersenne.org · Innovations in Graph Theory
Public

🏴‍☠️ :blobfoxpirate: ☠️

You maybe know the drama with sci-hub, z-lib, libgen, etc... check out the weird-ass seizure notice, an (IMO) absolute failure in messaging... they don't seem to be swimming in pools of stolen money; rather, it looks like they just live in a rural area... but ok.

Just in case you didn't know, there's a meta-search site called Anna's Archive at annas-archive.org (with shitty-but-working download mirrors),

and z-library.sk is the current official domain.

and sci-hub seems to be back up and running (multiple domains, e.g. sci-hub.se).

Just stop putting your research behind paywalls

Public

I'm pretty sure you could find like a dozen data hoarders who would agree to be permanent seeds of research papers.

So I'm not exactly sure what "publishing" costs there are, besides a pact between research universities to dedicate x% of their time to reviewing each other's papers and signing off on them.

No need for a publisher, considering how much they screw over the academic researchers they gate and profit from.

Public

I vote for all journals to impose a [Name et al., Date] inline reference system, not only because it's more useful than some random numbers, but because it makes it so much easier to check what the reference is, in a world where we'll soon be inundated with AI-generated manuscripts. And no, we shouldn't care more about space limits than about accuracy.
Who's in? 😃
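
For what it's worth, the "easier to check" part is also machine-friendly: a rough Python sketch that pulls [Name et al., Date] citations out of a manuscript so they can be cross-checked against the reference list (the regex is an illustrative approximation and will miss edge cases like multi-author brackets):

```python
import re

# Matches inline citations like [Smith et al., 2020] or [Nguyen, 1998].
CITATION = re.compile(r"\[([A-Z][\w\-']+(?:\s+et al\.)?),\s*(\d{4})\]")

def extract_citations(text: str) -> set[tuple[str, str]]:
    """Return the set of (author, year) pairs cited inline in `text`."""
    return set(CITATION.findall(text))

manuscript = "As shown before [Smith et al., 2020], and contra [Nguyen, 1998]..."
for author, year in sorted(extract_citations(manuscript)):
    print(author, year)  # each pair is directly checkable against the refs
```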

Public

#fluConf2025 will include a track on independent publishing and archival. We want to hear stories of what's being done to create non-corporate spaces on the web and preserve the media big companies so often erase.

Tell us about your motivations and experience moving from big platforms like Substack, Twitter, Instagram, and Wix to self-hosted or communally-operated alternatives.

Share your insights into the world of for-profit journals in academia, and efforts to establish better options not based on extraction.

How do you adapt to challenges like the falling adoption of established syndication protocols like RSS, the costs of AI scraping, and ever-changing search engine algorithms? How do you keep up with legal requirements for content moderation and age verification?

With so many corporate platforms shutting down, changing policies on media retention, or moving to monetize content for AI training: how have you gone about archiving your media? What tools and techniques have you used to ensure it isn't lost? How do we resist corporate capture of independent media and foster conditions for more long-lived infrastructure?

Apply up until midnight of January 19th, 2025 (anywhere on Earth)

fluconf.online/apply/

fluconf.online · Submit a proposal: Submit your proposal for FluConf 2025 by January 19th, 2025