I work in a field that has a strong glass ceiling, and the only way through is with a PhD (some employers are starting to recognize that, with several years of experience, people with a bachelor's or master's degree can attain the same level of independence and mastery that you'd demand from a PhD, but it's still kinda rare).
But even having a PhD isn't enough to establish credibility during the interview process, as there are plenty of PhDs who are incompetent. It's really more of a thing that gets your foot in the door - a signal that it's plausible that this candidate could have the juice.
The solution is that after the usual interview steps (resume, phone screen, etc), the candidate gives an hour-long seminar on their research to 10-20 people (really 45 minutes of material + 15 minutes of questions). It's basically impossible to talk for that long about your research and the prior literature with a critical audience of experts and not reveal whether you actually know your stuff.
So in a sense, your vision has actually been achieved, but only within the group of people who have traditional credentials. The question of how to open this up to anyone regardless of credentials is not something I'm going to be able to answer, but I certainly hope it happens.
In terms of frameworks for evaluating competence, here are the questions I ask when deciding if someone is an expert (and what I get out of these questions in parentheses):
1) Has this person spent years doing something where they constantly discovered that they were wrong? (if you're never wrong, you're not learning, you're certainly not doing anything interesting, and you might be a crank)
2) Did they have a mentor or group of expert peers who helped them grow and critiqued their work? (this helps them grow on a daily basis, plugs gaps in their knowledge and skills, and gives them new ways of thinking)
3) Have they built or discovered something non-trivial, and in the LLM era, do they actually understand what they built? (You can't really be sure your knowledge and skills are meaningful until you apply them)
4) Can they hold their own when being grilled by other people with deep experience in the same field, or adjacent fields? (This assumes the experts are arguing in good faith, but if you can either answer critical questions or convince someone that their questions are flawed, that's a great sign)
5) Do they have both depth and breadth in their knowledge of their field? (I think this one might get downplayed as it smacks of gatekeeping, but it's so easy to make huge errors or reinvent the wheel when you don't know what other people have already done, and don't know how your contribution fits into the work of others)
6) Can they explain their work on multiple levels of complexity? (Filters out people who are just trying to hide their incompetence with jargon)
7) Are they willing to say they don't know, when asked a question they don't know the answer to? (Cranks will never admit this)
You don't need to synthesize an entire bacterial genome from scratch to do this. You can just insert them one at a time into existing bacteria. Or just give them plasmids. Anyway, the ability to achieve the outcome you're describing has existed for decades.
This is so riddled with inaccuracies that I can spot them immediately despite not being a phage biologist. For example, the PhiX image shows a genome of about 20 base pairs - wildly not to scale, since the real genome is around 5,400 bases. M13 is also wildly out of scale, and it's clearly drawn with double-stranded DNA even though it's labeled as single-stranded.
What the hell is this amino acid view? This is not how genes work at all. This is biology 101 and it's completely wrong. Why did you buy a domain name to share disinformation that you don't even understand?
None of this is displayed in a way that would be useful to working biologists, and I don't see how this could be used as a teaching tool even if all the errors were corrected. This simply doesn't provide any insight into how phages work. Looking at a raw sequence is pointless (also that color scheme is incredibly garish) - you need annotations! The 3D structures don't have their domains labeled and you can't connect sequence features to structural elements.
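To make the annotation point concrete, here's a minimal sketch of what an annotated view gives you, assuming Biopython and that NC_001422 is the PhiX174 RefSeq accession (that's from memory, so double-check it): gene names, coordinates, and products come straight from the GenBank record, which is exactly the context a raw sequence dump throws away.

    # Minimal sketch: pull an annotated phage genome and list its genes.
    # Assumes Biopython is installed; NC_001422 is (I believe) PhiX174's RefSeq accession.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.com"  # NCBI asks for a contact address

    handle = Entrez.efetch(db="nucleotide", id="NC_001422",
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()

    print(record.id, len(record.seq), "bp")
    for feature in record.features:
        if feature.type == "CDS":
            gene = feature.qualifiers.get("gene", ["?"])[0]
            product = feature.qualifiers.get("product", ["?"])[0]
            print(f"{gene:>4}  {feature.location}  {product}")

That's the minimum a viewer needs to build on before a raw sequence or an unlabeled 3D structure tells you anything.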
Why wouldn't you just use all of the existing tools that already do all of this correctly? Look, I don't mean to gatekeep, and it's great that you learned something (assuming you didn't vibe code this), but this is a lot of effort that could have been avoided if you had had a single conversation with a biologist of any background, or asked an LLM to critique your idea, or made a single reddit post asking if this would be useful.
Edit: This may come across as super harsh - but really, I love the enthusiasm and I hope you keep pursuing this. But the right place for this passion at this point in your life is a classroom or some kind of structured course.
Yeah, we're all quickly figuring out that LLMs shift the engineering work from computer science to bullshit detection. When working with Claude Code, you basically have to become that guy on the Internet who's always trying to prove you wrong. Otherwise you're going to build yourself a false reality and get skewered if you try to share it. I mean, I've done it myself, because we're so used to blindly trusting the things other people built that we forget we're the ones building it. Nothing in life is free.
LLMs may be enabling, but OP explicitly stated "I wanted to dig deeper into the subject, but not by reading a boring textbook" - and reading the textbook, of course, would have eliminated the issues in this tool, or at least made it clear to them that they needed to dive deeper before publishing. I feel like there might be some analogy with blaming cars for drunk driving - by definition, not possible without cars, but you can drive responsibly if you choose to.
That's how Plato taught Aristotle. I'd much rather have a dialog than read a textbook. You just have to think critically and fact check. You can't just trust whatever the robot says because it interpolates knowledge.
Sure, but there apparently wasn't enough of a dialog either. With a textbook, you're confronted with facts and explanations that you didn't ask about, or even know to ask about. Don't get me wrong - my original recommendation to take an interactive course is still the best option in my mind, as simplifications made for the benefit of the learner often lead to apparent contradictions that an instructor can clarify. But at some point you do just need the set of raw facts to be able to work with these systems.
Only genuine socialization does that, since you can just flip past things in a textbook. It's the main reason people go to universities: to hear the answers to questions they didn't think to ask. LLMs actually do that a little bit. It's both a wonderful and terrifying prospect, since there's so much risk of that kind of feature introducing bias, but it's great when it works.
Not solved at all. Certainly AF is very useful, but what it outputs is fundamentally a prediction with important limitations. There’s plenty of utility in physical modeling.
This post might get the record for people responding to the title without reading the article. Jeez people, it takes five seconds to discover that it subverts expectations.
Years ago I worked at an insurance company where the whole job was doing this - essentially reading through long PDFs with mostly unrelated information and extracting 3-4 numbers of interest. It paid terribly, and few people who worked there cared about doing a good job. I’m sure mistakes were constantly being made.
But in the proposed scenario, there wouldn’t be any technical hurdles or effort required by the phone’s owner - you could have this be a service offered by businesses. Maybe even the place that sells the phone would pre-jailbreak it for you.
To answer your first question, I work on cancer therapeutics. But maybe some perspective from the other end of the “meaningful work” continuum would be helpful in answering the real question.
If your company creates any real utility for a person living a good life (modulo externalities), then I would absolutely consider that to be making the world a better place. We’d all be worse off if everyone at the box factory quit their jobs to go to medical school or run an orphanage or whatever.
So I’d ask yourself: if the thing you work on didn’t exist as a concept in the world, would that be a detriment to anyone? I don’t want to go back to not being able to get an insurance policy online or learning math from youtube. It’s rad that I could email my grandma when I was on the other side of the world. It’s great that I can throw results into PowerPoint (as buggy and flawed as it is) to share with my colleagues at a moment’s notice. There’s a bunch of corporate smarm about bringing people together, but it really is true.
Now, are you making ads more addictive or enabling crypto scammers? Then sure, change jobs. But an economy of our complexity really does take all kinds. There’s no point in saving lives if life isn’t worth living.
Doors no, couch yes (if it fits). I wouldn’t get one unless you see value in having 80% of your home vacuumed once a day. For me that’s still a huge improvement and spending a few minutes spot vacuuming every two weeks or so is all I need to handle the corner cases.