Efficient Estimation of Word Representations in Vector Space[1], one of the most influential papers in the space with tens of thousands of citations[2]? Or the RoBERTa[3] paper (dramatically improved upon BERT; RoBERTa and derived models currently have tens of millions of downloads on HF and still serve as a reliable industry workhorse)? Or the Mamba paper[4] (pretty much the only alternative to transformers that actually gets used)? Do you want me to keep going?
Honestly, I find that whether a paper gets rejected means diddly squat, considering how broken the review system is and how many honestly terrible papers I have to wade through every time I look through conference submissions for anything good.
I've realised that simplifying my note-taking process leads to much better productivity.
At times I've just stopped taking notes altogether because of the high activation energy required; now I just work with my tablet or notepad and worry about organising or integrating into my knowledge graph later.
Traditional note-taking is flawed. Take a look into Zettelkasten and stop caring about "folder organization". Logseq is great for that. I much prefer it over Obsidian because it's an outliner. Backlinks and tags make note-taking not only a breeze, but fun.
I strongly suggest that you completely ignore methodologies. Write wherever first seems fitting, and keep making backlinks as a way to breadcrumb your way back to your notes. That's how I do it, and it has served me extremely well.
Thanks a lot for the suggestion! I had a look into Zettelkasten and Logseq, and I must say, it's been a game-changer for my note-taking process. The idea of not worrying too much about folder organization and just focusing on creating backlinks has made it so much more intuitive and enjoyable.
By the way, I was pleasantly surprised to find out that Logseq is open source! That's a fantastic bonus. Thanks again for pointing me in this direction – it's making a real difference for me.