Hacker News

What's funny is that I specifically remember a conference where some library that a Google employee wrote was terrible. The one Google developer talking about it disavowed any responsibility for it.

Whatever metrics Google is using in its interviews have probably become worthless over the past decade as people game the system.



In my startup I interviewed a 48-year-old senior Java programmer with an excellent resume, who took an hour to write a String.contains(). It only worked for the requested 4 letters, didn't work if a letter was repeated twice, and didn't work with Chinese characters. At least it had the JUnit test. I asked an employee to do it too, and he made his code pass the JUnit tests in 6 minutes.

The candidate hated the interview, claiming it was discouraging. Coding is erratic, talent is strange, it really is a craft and we still don’t know how to reliably raise someone to competency.


But which is more likely?

1. The candidate was a complete and utter fraud and their previous (and apparently well-regarded) employers were too stupid or negligent to notice this, wasting literally millions of dollars ((48 - 21) * $100,000+).

2. Something about the interview failed to let this person demonstrate the skills that had kept them employed for two decades. Maybe their mind went blank under pressure, or at the end of a long day. Maybe they got hung up on something trivial (that a quick search, or a nudge from the interviewer, would have resolved), or the question was unclear.


To add to this, I've found engineers more likely to hang up on time-series and string-manipulation problems. Likely due to a combination of never having to code low-level functions in these areas and infrequently encountering the problems.


Yes, strings are hard and times/dates have a ridiculous number of edge cases, and sometimes very poor language support. This works both ways though; if the problem is easy enough (calculate average cycle time) it can give you lots of edge cases to discuss and really show how someone problem-solves, which is really the point of a programming interview. If someone even mentioned non-English language support that would be enough for me, forget about implementing it.


I personally dislike giving questions with too many rabbit holes. My observation across a few questions is that it's a 50/50 shot whether the candidate who freezes on a question actually recognized more nuances than the candidate who didn't, which means I'm not getting any data.

Fizz buzz was a great question in that it had a pretty much Boolean success criterion.
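A minimal Java sketch of that Boolean criterion (the class and method names here are just illustrative): either the candidate produces something shaped like this, or they don't.

```java
public class FizzBuzz {
    // Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    // of both -> "FizzBuzz", otherwise the number itself.
    static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            System.out.println(fizzBuzz(i));
        }
    }
}
```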


The interviewer needs to be good/prepared to make it work.

If the interviewer only says "Write String.contains that passes these test cases" and goes back to playing with their phone, several things may happen. One person will take that absolutely literally ("It's a test; better do as I'm told"), and you'll dismiss that apparent garbage or move onto the "real" assessment where they're hoping to shine.

Another will get bogged down in, and "waste" a bunch of time on, something the interviewer regards as a distraction. "He handled unicode, but not substring matching (KMP or Boyer-Moore) or vice versa." Maybe someone will goldilocks it and hit the right (not-explicitly-specified) balance of (also unspecified) features and time, but...

If you structure it as "Please, do the dumbest possible thing and we'll iterate"--and don't hold that initial pass against them--I could see it working well.


Sounds like the solution then would be to give interviewees all of the systems and support they would normally have access to on the job and see how well they adapt to a conventional task, or to an issue that was recently solved by someone in a similar position at your company, and have their result evaluated by someone involved with the implementation or fix.

That would tell you if their workflow would fit your company much more than knowing how to run a coding challenge would.


On the other hand, tools tend to be fast to teach and pick up relative to fundamentals. Most companies have rough-around-the-edges tooling that a candidate either wouldn't know about or would need a couple weeks to get productive in.


I would say 1 is possible enough that it's worth checking for. Remember that the insanely hard programming interview is a reaction against a situation where 99% of candidates couldn't program at all.[1]

[1] https://wiki.c2.com/?FizzBuzzTest


What proportion of your colleagues do you think are wildly incompetent? Not just a bit sluggish, subpar, or sloppy, but not even remotely able to do something resembling their job description.

There are certainly a few. The job market, being a combination of people who want new jobs and those that can’t keep their old ones, is undoubtedly enriched for them.

Even so, it seems unlikely to me that there are anywhere near as many as most people say. You certainly don’t have to hire someone who flubs your interview, but you also don’t have to assume they are frauds.


> What proportion of your colleagues do you think are wildly incompetent

30%, minimum. I was hired alongside a guy with a fantastic resume. He pushed zero lines of usable code in 4 months. When he left, I purged about 20 files which were tests that had been completely commented out (but I guess those count for LOC according to GitHub's crude measure). I would say that not only was he incompetent, his contribution was negative (thankfully, nothing critical). Maybe in order to cover his tracks, he had moved certain classes of tagged tests (e.g. skip, broken) to "ignore" status instead of yellow-star/red-dot, and now, months after his departure, I have a PR reverting those changes because I didn't notice he had done that. Thankfully it had not covered up any major defect in our codebase (someone could have left a corner-case test as "broken" with the intent to fix it later, wound up forgetting, and sent it to prod).

But hey. Programming isn't that bad. In the physical sciences it was 60-70%.


> about 20 files which were tests that were just completely commented out

How was this not uncovered during code reviews?


That's a good question. I wasn't reviewing the code he pushed.


The problem is that it only takes one or two wildly incompetent people to completely disrupt the quality of the software. These are the kinds of developers who actively create bugs, usually by building (or copy/pasting) solutions that only work by accident, or who decrease the velocity of everyone around them by generating reams of overcomplicated and brittle code that is hard to test, hard to review and hard to maintain. It costs a lot of management time too, trying to find a way to get them to improve, or to build a solid case for letting them go.

I think the reason why every developer tends to have a story about these sorts of incompetent colleagues is not necessarily because 50% of their colleagues are incompetent, but because even if just 2% (one person in the department) or 5% (one person in your larger project team) is incompetent, that can be enough to cause a seriously negative impact.


I should clarify I lifted the 99% stat from the linked wiki. I agree it seems high.

I’ll estimate zero to 10% wildly incompetent. Many of the folks who aren’t able to program find other ways to be useful: Testing, requirements, prod support, sys admin, config. It’s not even clear they couldn’t program, but maybe came to prefer the other work at some point.

What’s your wildly incompetent estimate?


The problem is the percentage of wildly incompetent applying for your job is a lot higher than the percentage of wildly incompetent overall.


absolutely. The incentives to train for the job search and then apply for (and land) a job with zero relevant competency are quite high. And there are... geographies... which have a deserved reputation of being mills for those sorts of individuals, likely because the economic incentive there is even stronger than the median, which I suspect is quite annoying for actually competent people who come from those geographies.


Didn’t mattkrause acknowledge as much in his comment?

> The job market, …, is undoubtedly enriched for them.


A few percent maybe, but not as high as 10 percent. It's also not just people who "can't" do it, but also those that aren't motivated or cooperative (for whatever reason).


I interview for my company. 80% of the DS applicants (some of them with SWE backgrounds) that apply for our senior positions fail FizzBuzz or some riddle of similar difficulty. This is already after pre-filtering for seniors from established companies. We do not pay badly for the market. They also do equally badly with other FizzBuzz-level tests in the other areas that they claim to have worked in.

It is still a very useful test.


This is exactly my experience too. Sometimes it's incredible just how little applicants understand about how to develop software. I've even interviewed people where they were allowed to have a web browser and IDE while coding a solution, and they still struggled.

Personally I am a much bigger fan of using FizzBuzz as a gate than an algorithm question. I think algorithm questions optimize for the kind of developer who doesn't mind memorizing algorithms to get a job, which might be a useful skill, but you can test that same skill of memorization using FizzBuzz, and then you don't end up also filtering out people who can code but don't care about memorizing algorithms.

In any case, I always think it's worth using their solution as a jumping-off point to ask other, more language-specific questions. Things like: how would you change this if it was intended for use in a FizzBuzz library, how would you annotate this if you needed it to be injected as a Spring dependency, why did you use a for loop instead of a Java 8 stream (or vice versa), what are the implications of declaring this thing as final or static, can you write a unit test for this, and so on. That's when you can get past the point of memorization into figuring out if they actually understand what they typed, which is helpful to ascertain their level.
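To illustrate the for-loop-versus-stream follow-up mentioned above, here is a sketch of the same FizzBuzz logic as a Java 8 stream pipeline (class and method names are invented for the example):

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class FizzBuzzStream {
    static String word(int n) {
        return n % 15 == 0 ? "FizzBuzz"
             : n % 3  == 0 ? "Fizz"
             : n % 5  == 0 ? "Buzz"
             : Integer.toString(n);
    }

    // Stream version: declarative and easy to parallelize, at the
    // cost of allocating a pipeline per run, versus a plain for loop.
    static String render(int upTo) {
        return IntStream.rangeClosed(1, upTo)
                .mapToObj(FizzBuzzStream::word)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(render(15));
    }
}
```

Whether a candidate reaches for the loop or the stream matters less than whether they can discuss the trade-off when asked.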


"how would you annotate this if you needed it to be injected as a Spring dependency"

well, I mean, you get to ask what you like... but this is how you determine if someone understands what they've typed on a conceptual level?


No, it's just an opening to discussion. For example, depending on the experience of the person, it might lead to a conversation about dependency injection in general, the transition from Spring-specific to JSR-330 notation, maybe they can give some examples of where Spring-specific annotations are still useful, they could talk about constructor over field injection, or when it might be better to use a static/pure function instead of a bean, all kinds of stuff.

For me there are basically two questions to answer when I am interviewing someone. The first is if they have any real programming ability at all, which hopefully FizzBuzz should answer. (Many people do not pass that threshold.) After that I'm looking to figure out where they could fit into the team, or the company. That means seeing if they are already familiar with the frameworks they will be working with in the position (usually, but not always, the case for junior applicants who have held at least one job before), but then also if they can speak critically about some of the concepts used in those frameworks, and perhaps compare different approaches that have been taken to solving similar problems over the years (if they are more senior).

It's not a wrong answer if they don't know the framework or the concepts behind it at all, since they might be switching specializations, but that's important to know at the interview stage because they might be better suited for a different role than someone who is deep in the framework and more likely to be able to hit the ground running.


Thanks for posting. I'm always very interested in hearing from people who mention how ostensibly senior people fail fizz buzz.

My question is: what happens after people pass fizz buzz? Failing fizz buzz is how you filter people out, but it's unlikely that coding up fizz buzz passes the technical screen. What kind of questions do you use to establish this, once you're past fizz buzz?

I've failed far more tech screenings than I've passed. I could easily do fizz buzz, and when I've prepped for an interview, I could do some tree and set permutation stuff. But the questions get so much more difficult than this. Since difficulty varies, an example of a difficult question for me is "find all matching subtrees in a binary tree" (at the whiteboard, in 45 minutes). When I got feedback about the no-hire, the explanation was that while I had a good grasp of algorithms and made some progress, I didn't solve enough of the problem in code (tight pseudocode would have been ok) in the time allotted (again, this was ~45 min at the whiteboard, one in a series of 5 one-hour technical exam-style interviews during a day of interviewing).

I can't claim to be a great coder. I have understood how to code merge sort and quick sort and more complicated tree structures, and I could do them again if I studied and loaded it all back into short-term working memory, but I'm content to know how the algorithms work generally and get back into the details when I need to... but whenever anyone mentions "fizz buzz", I do insist on stating that my impression, based on quite a few interviews, is that fizz buzz isn't what is screening out software engineers. Lots and lots of people who can write fizz buzz (and build and print a binary tree pre-order and post-order, and do DFS and BFS, and solve problems with them) are still frequently screened out.

I'm at the point where I just won't do tech interviews anymore (or take-home tests). I won't study for exams or do mini capstone projects for an interview that may or may not work out. I would do these things for a degree or licensing exam, but not for a job interview. It's just too much of a time sink.

I accept that this may cost me good opportunities (in fact, it has), though of course I don't know if the interview would have gone anywhere, other than costing me another long prep session with "cracking the coding interview".

I'll finish the way I usually do, by 1) acknowledging that you are free to interview how you like, and that nobody owes me a job, and 2) mentioning that many companies complain incessantly about hiring difficulties without realizing that their own interview processes may be filtering out talented people and that nobody owes them an employee either.


> My question is: what happens after people pass fizz buzz?

We tune up the difficulty a little bit. The point is to start a conversation and see what the candidate knows about: do they know about time complexity? The differences between passing by reference and by value? Afterwards we talk about the technology that they use, what they like, and what they would like to use in the future. Just to see if they read about their field and are able to talk without saying something egregious.

And if they do fine we bite the bullet and hire them.

> nobody owes them an employee either

Now that I am "on the other side", I can see a lot of things that would definitely improve the problem at the micro/team level, like posting the salary or increasing the WFH days. But would they improve things at the macro/company level? The thing is, companies, including tech companies, from small startups to big corps, usually have much bigger problems than the quality or quantity of their software.

(signed, someone who has been rejected, and will be rejected, from more interviews than he has passed)


I’m the interviewer, I’m still wondering what happened.

This was the introductory question before launching 200 threads and asking him to solve the deadlocks/inefficiencies, which was the real question supposed to let him show off his skills in front of my employees, specifically crafted for him because I wanted to persuade my employees he was an excellent hire. So he had a tailored chance to show off his skills but failed at the introductory question.

But on the other hand, how can you be asked "Here's a string and a substring; return true or false depending on whether the string contains the substring; this is the introduction to 5 questions so don't sweat it" and not just write two nested loops and an if? I'd give a pass on UTF-8 problems, but when you've been working with Java on CRUD apps, you should still have your UTF-8 correct. This is how you end up with passwords that must be ASCII because the programmer is bad.


I've seen an actual Nobel Prize winner get stuck describing their research.

People's brains just occasionally lock up.


I had number two happen on an interview recently and I am incredibly happy the interviewer didn't hold it against me. I forgot an otherwise simple word/term, but the pressure of the interview just made my mind go completely blank. I think everyone has a tendency sometimes to forget what it's like to be on the other side, and will hammer on small mistakes, or not consider all the factors.


Me too!

I'm sure I'm on someone's list of incompetent bozos for an interview that went like this: "Please, describe the Python programming language." That's it. I had no idea what I was supposed to be doing, and the two fellows interviewing me would not elaborate.

I talked about what I had done with Python. Stony stares. So I talked about the nature of Python itself (interpreted, multi-paradigm, lexical/LEGB scope, the GIL). Stony stares. I wrote some trivial programs on the board. Stony stares. Had I brought an actual snake, I might have tried to charm it.

At the end, the CEO told me they weren't overwhelmingly sold on me, but would think about it. Never heard from them again.


I've repeatedly had employers very happy with my abilities & results, and I'm also entirely sure that, on a few occasions, I've convinced interviewers I'm entirely unable to write code and am one of these frauds everyone's sure exist and that they need these coding tests to "catch".


The second is certainly more likely, but I'd wager the likelihood of the first is greater than 10%. I've encountered my share.

There are so many "developers" just faking it, I can certainly understand using a test that would reject 90% of the good candidates if it could reject 99% of the bad ones.


Unfortunately, both are fairly likely.


What was the goal of the question? Why did you want the person to implement a contains method? Did you really want to verify they understood String implementation in Java?

And if the candidate was able to do this in 6 minutes, what would you have thought? "Great, let's hire"?

In my humble opinion, the question is a waste of time either way. You'll get much further trying to probe what the candidate does know rather than randomly creating an exercise that you think they "should be able to do if woken up in the middle of the night". People forget how stressful interviews are and how easy it is to assume shared context.

The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.


Beating up on this example some more:

Multi-lingual support seems really really hard, especially in six minutes. I would think most people would need to look at technical (i.e., unicode) and linguistic references to get it right.

Should the ligature ﬂ match itself, or its ASCII constituents 'f' and 'l'? How about combining vs. pre-composed characters? Some Chinese characters show up in other languages (Japanese, Korean) and are sometimes split between Hong Kong/Taiwan/Mainland language tags too. In fact, there's a mess of work devoted to this ("Unihan" https://www.unicode.org/versions/Unicode13.0.0/ch18.pdf). Having figured out what you can do, you then need to decide what you ought to do. Not being a Chinese speaker, I have no idea which options would seem natural....

In fact, having written this all out, there's no way someone "solved" it from scratch in six minutes. It would be a great discussion question though....
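For what it's worth, the combining-versus-precomposed case above can be demonstrated in a few lines with the standard java.text.Normalizer; this is just an illustration of why naive char-by-char matching falls short, not something anyone would produce from scratch in six minutes:

```java
import java.text.Normalizer;

public class NormalizeDemo {
    public static void main(String[] args) {
        String precomposed = "caf\u00E9";   // é as a single code point (U+00E9)
        String combining   = "cafe\u0301";  // 'e' followed by combining acute (U+0301)

        // A char-by-char equals()/contains() sees two different strings:
        System.out.println(precomposed.equals(combining)); // false

        // Normalizing both to NFC makes them compare equal:
        String a = Normalizer.normalize(precomposed, Normalizer.Form.NFC);
        String b = Normalizer.normalize(combining, Normalizer.Form.NFC);
        System.out.println(a.equals(b)); // true
    }
}
```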


    static boolean contains(String text, String substring) {
      for (int i = 0; i <= text.length() - substring.length(); i++) {
        boolean found = true;
        for (int j = 0; j < substring.length(); j++) {
          if (text.charAt(i + j) != substring.charAt(j)) {
            found = false;
            break;
          }
        }
        if (found) return true;
      }
      return false;
    }
We're not talking rocket science here. This code already properly handles surrogates and Chinese characters. The question about characters that can be written in two different ways should only be raised as a second level, once the first implementation is done.


Parent here.

> What was the goal of the question?

This is the introductory question before solving concurrency problems, because it’s much easier to understand what a thread does when you’ve coded the body yourself.

> Why did you want the person to implement a contains method?

The job is CRUD + integrating with Confluence + parsing search queries from the user, so finding “<XML” in a page and answering “Yes! This is totally xml, I’m positive!” is a gross simplification of realistic tasks in the real job (and in fact in most webapps), with characters instead of XML or JSON.

I have the feeling that you think this question is entirely abstract, but I both tailored the exercise because he touted being good at improving app performance on his resume (including using JProfiler) and I took care of using a realistic on-the-job example.

> Did you really want to verify they understood String implementation in Java?

Well, what consumer product can you work on if you trip into all the UTF-8 traps? Telling customers "Just write English because we can't be bothered to learn the easy thing in Java that handles UTF-8 properly" is… acceptable only if he doesn't also fail the FizzBuzz test. And once UTF-8 is mastered, it's good for life! I wouldn't mind teaching him if he hadn't failed the rest, but as a senior you should really know the difference between .getBytes() and .codePointAt(i).
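A sketch of that .getBytes()/.codePointAt(i) distinction: UTF-16 char counts, code-point counts, and UTF-8 byte counts all diverge once you leave the BMP. A small standalone example (the emoji is just a convenient supplementary character):

```java
import java.nio.charset.StandardCharsets;

public class CodePointDemo {
    public static void main(String[] args) {
        String s = "a\uD83D\uDE00"; // "a" followed by 😀 (U+1F600, outside the BMP)

        System.out.println(s.length());                      // 3 UTF-16 chars
        System.out.println(s.codePointCount(0, s.length())); // 2 code points
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 5 UTF-8 bytes

        // charAt(1) yields a lone high surrogate, not a character:
        System.out.println(Character.isHighSurrogate(s.charAt(1))); // true
        // codePointAt(1) yields the full U+1F600 code point:
        System.out.println(s.codePointAt(1) == 0x1F600); // true
    }
}
```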

> If the candidate was able to do it in 6 minutes, what would you have thought? “Great, let’s hire”?

The 4 other questions were classic gross concurrency errors, tailored because he touted concurrency on his resume and I wanted him to shine. A senior should be able to guess them blindfolded as soon as I tell them "There are concurrency problems", without even looking at the code ;) Volatile, atomic, non-synchronized ArrayList, 200 threads for a connection pool of 10, a DB accepting 7 connections (note the prime numbers make it easy to spot which multiple is causing the issue), and strings of 10MB each with Xmx=100m. If he found any 3 of the 12 problems, and 2 more with help, I'd hire him. If he ditched the code and posted tasks to an ExecutorService (as they teach in the Java certification level 1), I'd hire immediately.
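A rough sketch of the "ditch the code and post tasks to an ExecutorService" answer (the pool size of 7 and the squaring task are illustrative stand-ins, not the actual interview code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Submit n independent tasks to a small fixed pool instead of
    // spawning one thread per task.
    static int sumOfSquares(int n, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                final int task = i;
                results.add(pool.submit(() -> task * task)); // stand-in for real work
            }
            int sum = 0;
            for (Future<Integer> f : results) sum += f.get();
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Pool sized to the real bottleneck (e.g. a DB accepting 7 connections),
        // rather than 200 threads fighting over a pool of 10.
        System.out.println(sumOfSquares(200, 7));
    }
}
```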


In essence, you write:

1) We want to test concurrency but start with implementing String#contains.

2) You have to know how to implement String#contains because you might use contains in our environment (not really, but theoretically, so you better know how to implement it).

3) You must absolutely avoid basic UTF-8 traps because users use UTF-8.

Neither of the above tells me what would you gain if the candidate nailed the question. It just tells me that:

- Your team might or might not use contains to verify something is XML (I truly hope not).

- Your team uses UTF-8 strings (which is one piece of the shared context that the candidate probably does not have).

- You tested candidate abilities of performing under pressure rather than testing their knowledge or skill.

- You are trying to hire the exactly same senior developer as if you promoted someone on your team with your codebase.

You come to the interview full of assumptions and biases about what a senior candidate absolutely must know instead of seeking what they bring to the table and why they call themselves senior. Let me tell you, there are lvl 4 and 5 Java candidates that have never touched UTF-8.

Finally, and let me blow your mind here, there are senior developers that haven't really used String#contains in the last X years of their career either.

I don't know what was the quality of the candidate, but I feel, from my limited PoV and lacking all the info, that your interview process is deeply flawed.


Honest question: I'd really love to know what UTF-8 traps people fall into all the time when working on a consumer product with Java - especially given that Java basically stores all Strings in UTF-16 (well, starting with Java9+ there's some "optimizations" made, but still). I literally can count those issues on one hand in over a decade of working on such (multilingual) products.

I also completely fail to see what a CRUD app (i.e. java + db) + shooting REST requests to confluence has to do with your concurrency questions, as in interview != job fit, but that might have to do with some missing context.


> The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.

This! When I last gave interviews, I really worried whether the questions I asked were just indicative of my own Dunning-Kruger effect.

i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?

If I do, then am I just filtering for people with the same background and knowledge, and missing out on people with skills I don't yet know I need, because they're in my blind spot?


> i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?

I've been advocating for a "coding interview" where both the interviewer _and_ interviewee draw some random question from leetcode or other problem bank, and try to work at it together.

This would show collaboration skills, and you can tell pretty easily how helpful the candidate is with his/her contributions, and whether you find there is an impedance mismatch somewhere.

It probably also maps more closely to the kinds of interactions you'd have after the person's been hired.

I think it would also help calibrate: if you can't figure it out, is it fair to expect the candidate to figure it out? Maybe it's just a hard problem!


Curious about the age of the person who supposedly wrote it in 6 minutes. If you're fresh out of school, "String.contains" may be top of mind, but the vast majority of people never write that function in practice, so it's easy to not think about.


I don't do interviews in my current job (mostly because the pandemic really did a number on hiring) but in my previous job the only coding question I asked was what I thought was a fairly simple string problem. They could assume ascii, use any language they wanted, and make any other assumptions to simplify the problem.

My then-co-workers liked to ask harder algorithmic questions but I wanted to give the candidates a little bit of a break with something easier.

It didn't always work, but at least I tried.


What was the problem statement you gave to the candidate?

What were you trying to tease out from the problem statement?


Did they have to write it in a google doc with no ability to compile and run their code?


Why would you not use the language built-ins?

I am not a Java programmer but it took me 30 seconds to find the contains() method.


Just check the Android code; especially the early versions looked like "C dev (not even C++) tries to create a Java-based framework".

And the NDK clearly is anything but modern C or C++.


Google is famous for having a C++ implementation that eschews a lot of what makes C++ powerful.

I’ve heard it referred to as “C+-”.


Yes, and apparently clang is now suffering from Google not caring about the latest C++ compliance.

Apple mostly cares about LLVM-based tooling in the context of Objective-C and Swift, Metal is a C++14 dialect, and IO/Driver Kit require only a subset similar in goals to Embedded C++, so that leaves the remainder of the clang community to actually provide the effort for ISO C++20 compliance.

https://en.cppreference.com/w/cpp/compiler_support/20


Yep. If Clang doesn't have C++20 support by the time C++23 is out, I'm pretty sure my workplace at least will completely drop Clang and build solely with GCC. A win for open source, if nothing else.


Is clang not also open source?


Clang has a permissive license, GCC is (of course) GPL.

Which one is best for open source is debatable.

Permissive licenses make it easier for companies to make proprietary software, or even a closed source version of the original project.

Copyleft licenses (like GPL) are intended to promote free/open source software but they can be a legal headache, which can make users favor proprietary solutions.

On HN, I think that people tend to prefer permissive licenses (but complain when large companies "steal" their work, go figure...).


That’s fine. Having 2 (free software) tool sets competing on features is a good thing. Both need to stay relevant.


Well there might be a defense of that one.

"Data-oriented programming" (to distinguish it from object-oriented) is largely C-style C++ that is written for performance rather than reusability/abstractness/whatever. In the embedded programming world where performance is paramount, a lot of people have low opinions of many C++ features. One can also never completely trust compilers to implement everything correctly.


I'm not saying that's a bad thing. Google usually has a good reason for what they do (not everyone is always happy with the reason, but Google can always explain why they do stuff).

I come from an embedded background, and understand that.


Go is also an outgrowth of the Google idea first expressed in their C++ style guide: basically, "engineers are too dumb for harder features, so let's ban them in the style guide (for C++) or just not have them (for Go)".


But I thought that Google engineers were all genius-level.

Aren't they the company that pretty much requires an Ivy-League sheepskin to be a janitor?


Apparently not,

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike 1"

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike 2"

Sources:

https://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Pa...

https://talks.golang.org/2012/splash.article


It's my understanding that there is a colossal gap between the hiring bar imposed by recruiters and the companies' HR department and what's actually the company's engineering culture and practice.

My pet theory is that HR minions feel compelled to portray their role in the process as something that adds a lot of value and outputs candidates which meet a high hiring bar, even though in practice they just repeat meaningless rituals which only have a cursory relationship with aptitude and the engineering dept's needs.


I don't think HR is really that involved in the hiring process. They certainly aren't at my company. They'll read the job posting and make sure that there isn't anything illegal in it, but then that's it. It's up to the Engineering Managers to come up with a process, and make adjustments when there are roles to be filled.

This is the first time I've been involved in coming up with the process, but from what I've observed, it's a similar situation in other organizations.


> the hiring bar imposed by recruiters and the companies' HR department

That is simply not how things are. The hiring bar is designed and upheld by people on the same software engineering job ladder as a candidate. The role of recruiters is primarily coordination. The role of HR is compliance with local employment laws.


This one is extra weird to me because I've written a lot of C++. I don't think I've ever committed a bug related to dynamic dispatch, templates, or some other "fancy" features. Not that I haven't committed bugs, but they're mostly either language agnostic logic issues or things one could have written just as easily in C.


How much of the early Android code was made by Google? Android existed for two years as an independent company before Google bought it.


The remaining 8 years have been written by Google.



