Ever feel like a fraud? Like any moment your boss and colleagues are going to realize you've been scamming them all along? That you're not the engineer they think you are?

Turns out, engineers at every level of success suffer from impostor syndrome. In fact, the majority of professionals (engineers included), from the most junior to the most eminent, have at some point held the paradoxical belief that they're a fraud despite job success and other evidence to the contrary.

But how do you know if you suffer from 'impostor syndrome' or are actually an impostor?

Cold. Hard. Data. Properly interpreted, of course!

To be clear, it's very unlikely that you're actually an impostor. But no one expects you to take my word for it. Only data that's been rigorously validated and crunched has the power to free someone from the fetters of hyperbolic self-doubt.

Here’s a process for software engineers to understand their impostor suspicions, put them to the test, then meaningfully interpret the results with a healthy, critical eye.


Static analysis: Check the code driving your impostorhood

The first step in resolving your impostor suspicions is identifying precisely what they are to begin with.

Think of impostorhood as a program your brain is constantly running. When it discovers sufficient beliefs to support identifying as an impostor, it sets impostor to true. When those beliefs are invalidated, it sets it back to false.
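If we wanted to literalize the metaphor, a minimal Swift sketch might look like this (the Belief type and reevaluate function are invented purely for illustration):

/// A toy model of the impostor program; every name here is invented for illustration
struct Belief {
    let description: String
    let supportsImpostorhood: Bool
}

var impostor = false

/// Your brain re-runs something like this whenever your beliefs change
func reevaluate(_ beliefs: [Belief]) {
    // Sufficient supporting beliefs set the flag; invalidating them clears it
    impostor = beliefs.contains { $0.supportsImpostorhood }
}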

In other words, feeling like an impostor doesn't just come from nowhere. You have a technical and professional self-image: a set of beliefs that lead you to feel like an impostor in some areas and confident in others. And these beliefs tend to be grounded in a very specific set of experiences, whether on the job, off the job, or before you even started working!

We need to rigorously uncover the nuances of these beliefs (positive and negative) so we can later align them with reality and architect a more stable foundation going forward.

And lucky for us, there are various techniques for gathering this data:

High-level self-survey

Before we employ any formal processes, it's best to start with a free-form exploration, similar to brainstorming or ideation in product development.

That is, ask yourself the following questions, and see what comes to the surface:

  • What do I feel like an impostor about?
  • What do I feel insecure about?
  • What weaknesses do I fear my boss or team (or others) will find out about?
  • In what ways am I below average?

These questions are intentionally high-level so as not to anchor your thinking. The goal is to allow anything that's especially potent to reveal itself before we get into the nitty gritty. And that includes positive feelings, as well. So you should also ask yourself:

  • What am I proud of?
  • What do I feel strong in?
  • What do I feel confident about?
  • What skills do I have that other engineers don't? In what ways am I above average?

As items come to mind, write them somewhere and put them aside. The next steps will allow us to bring some organization to these high-level feelings.

And remember, if nothing comes to mind for a particular question, that's fine. It's better to leave something unanswered than to force an answer that doesn't ring true.

Systematic skill review

The next technique is intended to fill in the gaps for things that may have been missed from the high-level exploration, by taking a moment to consider each of your skills, one-by-one.

So rather than asking a broad question and listing skills, you're iterating through the universe of possible skills and bucketing them into meaningful groups based on how you feel.

Before we do any bucketing, though, we need a list of skills that's relevant to your particular area of engineering. The following is a great place to start, but be sure to brainstorm on your own to arrive at a list that's a bit more exhaustive for you personally.

  • Writing working code
  • Debugging
  • Writing readable code
  • Communication skills
  • Problem solving in general
  • UI-focused work
  • Teamwork
  • Algorithms and academic CS
  • System design and architecture
  • Domain specific knowledge (e.g. backend, frontend, iOS, or low-level systems)
  • Communicating your strengths
  • Not introducing serious bugs
  • Making estimates
  • Side projects

With this list in hand, simply go through each skill and ask yourself how you feel about it. More specifically, bucket the skill into one of three categories:

  1. Suspected strength
  2. Suspected weakness
  3. Not sure

The above categories are mutually exclusive. You should pick one per skill. And if that's hard to do, such as with "communication skills," break the skill down into components like "setting expectations" and "clearly communicating to the product team" until such a categorization is possible.

And remember, these are suspected strengths and weaknesses. Evaluations should be based simply on how you feel about the skill, not what evidence necessarily suggests one way or the other. The goal is to uncover your current self-image as it is, not what it should be (that comes later).

/// A skill is just its name, for the purposes of this exercise
typealias Skill = String

/// Placeholder judgments; in practice, these are your own gut feelings
func feelsWeak(about skill: Skill) -> Bool { false }
func feelsStrong(about skill: Skill) -> Bool { false }
func hasImpostorFeelings(about skill: Skill) -> Bool { false }

/// Buckets
var suspectedWeaknesses = [Skill]()
var suspectedStrengths = [Skill]()
var notSure = [Skill]()
var impostorCandidates = [Skill]()

/// Bucket skills based purely on how you feel about them
func bucketBasedOnCurrentFeelings(_ skills: [Skill]) {
    for skill in skills {

        /// If I currently feel like the skill is a weakness...
        if feelsWeak(about: skill) {
            suspectedWeaknesses.append(skill)

        /// If I currently feel like the skill is a strength...
        } else if feelsStrong(about: skill) {
            suspectedStrengths.append(skill)

        /// If I'm not sure or could go either way...
        } else {
            notSure.append(skill)
        }

        /// If I have any impostor feelings about this skill, aside from whether I think it's a strength or weakness...
        if hasImpostorFeelings(about: skill) {
            impostorCandidates.append(skill)
        }
    }
}

Now there's one more category that is separate from the rest: impostor candidates.

That is, it's possible for you to feel like something is a weakness, but not also feel like an impostor about it. That weakness could be known to others and something you're not afraid to reveal. But when you feel like others think you're better at something than you feel you are, that poses a risk for impostorhood.

Likewise, it's even possible for you to feel like something is a strength but worry your interpretation is not accurate and at any moment you will be found out for actually being bad at it.

If that's the case for any of your skills, add it to the "impostor candidates" bucket in addition to one of the three previous buckets.

Now you have a much more thorough sense of your self-image across all of your skills, which will supplement the most potent discoveries from your high-level self-survey.

Past feedback, successes, and failures

Finally, we want to make a list of past feedback (positive and negative), successes, and failures.

For example, your manager might have given you a compliment last week during code review. Perhaps you were praised for getting something in ahead of time, or criticized for not estimating accurately. Perhaps someone really loved the way you helped a junior developer, or the email you sent to the product team, or anything else (technical or non-technical) for which a value judgement was made.

List 10-20 or so items that stand out to you if you can.
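To stay in the coding mindset, you can treat this list as a simple log. Here's a sketch; the FeedbackItem structure and its fields are just one hypothetical way to organize it:

/// One remembered piece of feedback, success, or failure (a hypothetical structure)
struct FeedbackItem {
    enum Sentiment { case positive, negative }
    let description: String
    let sentiment: Sentiment
    let source: String    // e.g. "manager", "tech lead", "product team"
    let year: Int         // when it happened; useful later for weighing recency
}

var feedbackLog = [
    FeedbackItem(description: "Praised in code review for a clean refactor",
                 sentiment: .positive, source: "manager", year: 2023),
    FeedbackItem(description: "Criticized for underestimating a migration",
                 sentiment: .negative, source: "tech lead", year: 2022),
]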


Debugging: Validate (and refine) code with authoritative sources

By this point, you should have a fairly meaty list of the beliefs and ideas that fuel your personal sense of being an impostor.

Until now, we’ve taken those beliefs at face value. That is, we’ve done whatever we can to reveal what they are, but allowed them to exist as is so we could get an accurate account of the actual code we are currently running.

This is just like debugging. When you discover some chunk of code is not working as expected, you need to first understand exactly how it’s working before you know which part to change so it gives you the results you want.

This step is all about testing each line of code to determine whether it should remain, be changed, or be deleted entirely, starting with the items that are most important to you.
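In code terms, you could imagine each tested belief receiving one of three verdicts (a sketch; the types and names are invented):

/// The possible outcomes of testing a single belief against reality
enum Verdict {
    case keep     // the belief survived scrutiny
    case change   // partially true; refine the wording
    case delete   // contradicted by the evidence
}

/// A hypothetical record of one validation pass
struct ValidatedBelief {
    let belief: String
    let authority: String    // who or what you checked it against
    let verdict: Verdict
}

let example = ValidatedBelief(
    belief: "I'm bad at system design",
    authority: "a mentor who architects large systems",
    verdict: .change    // "bad" was too strong; "still developing" fit the evidence
)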

And the best way to do that is to judge its legitimacy against an authority on the topics about which it makes a claim. These authorities come in a variety of forms:

Expert mentors

If you want to know how good you are at system design, it’s probably beneficial to find someone who is widely considered to be great at system design and start a mentorship relationship with them. Not only will they be able to help you grow in the skill, but they can also give you valuable insight into the journey that got them there. They can tell you how long it took them to achieve competence, whether the learning curve is steep or linear, and how you compare to them at your stage in developing that skill. Best of all, their recognized expertise makes it easier for you to actually trust what they say, so you can quell doubts that might otherwise come with asking someone whose success is less clearly apparent.

Managers and leaders (that aren’t your own)

Likewise, there are plenty of engineers in management and leadership positions who have worked with many individuals in your current position over the span of their careers. Revealing your insecurities to them and getting feedback on how you measure up can provide invaluable, specific data points about where you stand compared to others they manage or have managed. They don’t necessarily need to be experts in a specific area, but they tend to be better sources of feedback on higher-level skills that span all of software engineering. And people in this position who aren’t your boss can sometimes be better placed to offer helpful feedback because there’s no pressure on the relationship. That is, your relationship with your actual boss is shaped by what they need from you as a contributor to the team, and attachment to outcomes can sometimes distort feedback.

Your actual boss

With that said, sometimes nothing will soothe your sense of impostorhood more than a clear sense of where you stand with your actual boss. High-achieving individuals tend to set overly high expectations for themselves, and sometimes wrongly believe that their boss and colleagues are as tough on them as they are on themselves. On top of that, a junior engineer can sometimes fault herself for something she hasn’t yet realized is actually par for the course for even the most senior engineers.

Regularly touching base with your boss about your performance is a great way to stay in sync and feel secure about whether the person in charge of your salary thinks you’re an impostor. That often has positive effects on your own self-evaluation, too, because you adjust in real time whenever you find you’re not on the same page.

Mock interviews, real interviews, and other kinds of evaluations

If technical interviews are one of the things that lead to a sense of impostorhood, then taking as many interviews and interview-style evaluations as you can is a great way to collect data on where you stand in your interviewing skill. The Triplebyte quiz is one style of evaluation that can give you a sense of how you compare to others in demonstrating certain kinds of domain knowledge. Interview Kickstart and Interviewing.io offer mock interviews, while LeetCode offers a boatload of interview-style questions. And there’s always the technique of taking interviews at real companies when offered (even when you’re not actively looking) to expose yourself to as many styles of interview as possible.

Opinion pieces, studies, and other online (and offline) materials

Engineers with every possible background have written extensively on what makes someone great at many of the skills you may be concerned about. This ranges from short opinion pieces like those found on Triplebyte’s Compiler, to tomes about writing clean code, to posts on Reddit, to studies by academics and startups with access to unique data sets. If you want to learn more about where you stand, sometimes all you need is to see stats that pop up in Google about just how many engineers write clean code, or how many think it’s important, or whether you really should know SQL as a frontend engineer (or whether full stack is the future). You might just find there is no canonical opinion and that there’s really no such thing as being an impostor on this topic at all!

Interpret the data with a critical eye

Just because you’ve collected external data from an authoritative source doesn’t mean you should accept all of it at face value. Everyone has blind spots, and everyone is influenced by their own specific set of experiences, which inevitably introduces bias into their opinions.

You need to gather data, but you also need to evaluate it with a high degree of critical thought.

Strong opinions != truth

Engineers tend to hold incredibly strong opinions, a fact that can sometimes really distort your sense of reality. We tend to state opinions as fact, and argue harshly against opinions with which we disagree.

The fact that someone is vocal about clean code being an absolute must, or which process to use for estimating your engineering tasks, or what style of git commit is acceptable, doesn’t mean they’re “right.” For most strong opinions, you are pretty much guaranteed to find scores of other engineers with the exact opposite opinion who sound just as confident. There’s still a brutal controversy between vim and emacs that’s been going on since the dawn of text editors! And I’ve personally been skewered on numerous occasions for points I’ve made in my own writing because the wording wasn’t to someone’s liking or they shared a different set of values.

The point is that you can collect all the external data you want, but you will often find that there is no single, ultimate authority to free you to safely join one camp or another. You’re going to have to weigh the nuance of what’s often a fuzzy playing field, draw conclusions of your own, and get used to the uncertainty of not being 100% sure. When in doubt, take a survey approach (gathering several sources on the same question) to uncover what a useful consensus, if any, looks like.
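For what it’s worth, here’s how a survey approach might look in code (a sketch with an invented function; the thresholds are arbitrary):

/// Tally agreement across several sources rather than trusting any single one
func surveyOpinions(_ agreements: [Bool]) -> String {
    guard !agreements.isEmpty else { return "no data yet" }
    let ratio = Double(agreements.filter { $0 }.count) / Double(agreements.count)

    // The spread tells you whether there's consensus at all,
    // or just a loud camp on each side
    switch ratio {
    case 0.8...:  return "near consensus for; probably safe to lean on"
    case ..<0.2:  return "near consensus against"
    default:      return "genuinely contested; no impostor verdict possible"
    }
}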

Consider stage of development when analyzing your data

Sometimes a strong piece of criticism can stick with someone for their entire career. For example, someone who was really bad at debugging in year one of their career may internalize “bad debugger” as an immovable property of their personhood, when it’s really a skill that can be developed.

Likewise, you might be weighing feedback from a boss you had several years ago, or offering one of your mentors evidence of something that hasn’t happened in years, and internalizing it as just as valid now as it was then.

The point is, people change with time. Sometimes we get stuck on one thing or another, but sometimes we grow in ways we haven’t even realized. As you’re weighing feedback and experiential data, always ask yourself how the timing of that data may be relevant to the conclusion you’re trying to draw now.
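One simple way to encode that, reusing the hypothetical FeedbackItem from earlier: discount each data point by its age instead of treating everything as current (halving per year is an arbitrary choice; the discounting is the point):

import Foundation

/// A hypothetical recency weight: each year of age halves an item's
/// influence on your current self-image
func recencyWeight(for item: FeedbackItem) -> Double {
    let currentYear = Calendar.current.component(.year, from: Date())
    let age = max(0, currentYear - item.year)
    return pow(0.5, Double(age))
}

// Five-year-old criticism carries 1/32 the weight of last month's,
// so an old "bad debugger" verdict shouldn't drive today's conclusion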

Compare yourself to the appropriate people

It’s great to seek out mentors and strive to be excellent. But not being world-class at a particular skill does not imply you’re an impostor at it either. That’s why it’s usually not a good idea to compare yourself to the experts you’re trying to learn from, at least when you’re making a judgement about your outright legitimacy.

In other words, when answering the question, “Am I at an acceptable skill level for skill x?” always compare to others at the same stage of development where possible. But when asking yourself, “Have I achieved excellence?” it’s fine to compare to your role models, so long as you do so with realistic expectations of when excellence is actually achievable.

Authority has its limitations

Even the most authoritative experts have their blind spots. They may be great at sharding a database, but not self-aware enough to communicate what led them to be great (or whether you are actually on the path to greatness as well). As such, if something doesn’t fully resonate with you, even from an expert, you may need to do the legwork of reading between the lines of their blind spots to separate truth from bias. While this may seem scary, the flip side is that you realize even the most celebrated among us have weaknesses, just like you, and you’re in a better position than you think to have opinions of your own, weigh data reasonably, and trust your instincts.
