Coding interviews have been broken for a while.

For years, companies have treated LeetCode-style questions like the gold standard for hiring engineers. Grind enough data structures and algorithms (DSA), memorize enough patterns, and suddenly you are considered “interview ready.”

Then Roy Lee came along and made the whole thing look ridiculous.

Roy, a Columbia University computer science student, was already known in some online circles for solving hundreds of DSA problems. Then he built Interview Coder, an AI tool designed to help candidates solve coding interview questions in real time.

Depending on who you ask, it is either a cheating tool, a protest against broken interviews, or both.

What Is Interview Coder?

Interview Coder is a desktop app that costs around $60/month.

The pitch is simple: use AI during technical interviews without getting caught.

It can:

  • Solve LeetCode-style problems in real time
  • Help debug and optimize code
  • Trigger solutions discreetly through keyboard shortcuts like ⌘ + Enter
  • Run during interviews on platforms like Zoom, HackerRank, and Microsoft Teams
  • Avoid some common screen-recording and browser-based detection methods

The model details are still a little unclear, but the idea is obvious. It acts like a coding copilot during live technical interviews.

Roy’s pitch was basically this:

Why grind LeetCode for months if AI can solve the problem for you?

That is exactly why people got mad.

The Amazon Interview Incident

The biggest controversy came from Roy allegedly using Interview Coder during an Amazon interview.

The story goes that he applied for a software engineering role and used the tool during a virtual technical round. When the coding problem came up, Interview Coder helped generate the solution live.

From the outside, it looked like a clean interview performance.

Then Roy posted about it.

That is where everything exploded.

The clip and story spread online, with people arguing over whether this was cheating, activism, or just a very public way to expose a weak hiring process.

Amazon reportedly did not take it well. His offer was rescinded, and the fallout reached Columbia.

Whether you see Roy as reckless or smart, the point landed. If a candidate can use AI to beat a coding interview this easily, the interview format itself has a problem.

Columbia’s Response

Columbia was pulled into the situation after the controversy spread.

The university’s response seemed careful. It did not exactly celebrate what happened, but it also did not turn the whole thing into a major punishment story.

The general message was clear enough:

Innovation is fine. Undermining third-party processes is not something the university wants to endorse.

Roy leaned into it anyway.

He framed the backlash as proof that institutions are fine with innovation until it makes them uncomfortable. That framing worked well online because the tech crowd already has a love-hate relationship with LeetCode interviews.

Some people saw him as exposing a broken system.

Others saw him as proving why companies are paranoid in the first place.

Why This Hit a Nerve

The reason Interview Coder got so much attention is not that it solved coding questions.

AI tools can already do that.

It got attention because it attacked one of the most annoying rituals in software hiring.

For years, candidates have been told that DSA interviews are necessary because they test problem-solving ability. In practice, they often reward pattern memorization, interview prep time, and the ability to perform under artificial pressure.

That creates a weird system.

You may be great at building real software, but still fail because you did not remember a graph trick.

At the same time, someone can grind hundreds of problems and pass interviews without necessarily being good at shipping production code.

Interview Coder made that contradiction impossible to ignore.

The Problem With DSA Interviews

DSA interviews are not useless.

They can test fundamentals. They can show how someone thinks. They can help filter candidates at scale.

The problem is how much weight companies put on them.

A typical LeetCode-style interview often tests:

  1. Whether you have seen the pattern before
  2. Whether you can stay calm under pressure
  3. Whether you practiced enough similar problems
  4. Whether you can explain your thinking while coding
  5. Whether you can avoid small mistakes in a stressful environment

That is not the same as testing whether someone can build good software.

Most engineering work is not solving “Hard” LeetCode problems under a timer. It is reading messy code, debugging weird edge cases, making tradeoffs, communicating clearly, and building things that survive real users.

That gap is why so many people were ready to cheer when someone made the system look dumb.

Roy’s Marketing Was the Real Product

Interview Coder became controversial because of what it did.

It went viral because of how Roy marketed it.

He understood the internet perfectly.

The messaging was provocative, simple, and designed to make people argue. It hit all the right pressure points:

  • Candidates hate LeetCode grinding
  • Companies hate interview cheating
  • Engineers love arguing about hiring
  • AI makes every old process look fragile
  • A college student taking on Big Tech is naturally clickable

The product was useful to some people, but the controversy was the growth engine.

That is what made the launch so effective. Every angry quote tweet was still free distribution.

Sooo, Was It a Cheating Tool?

This is where the debate gets interesting.

One side says Interview Coder is obviously cheating.

They are not wrong.

If a company asks you to solve a problem live and you secretly use an AI tool, you are misrepresenting your ability. That creates real risk. A bad hire can waste time, break systems, and make teams worse.

The other side says the system deserved this.

They are not entirely wrong either.

If interviews reward memorized puzzle patterns more than real engineering ability, candidates will optimize for the test. AI is simply the newest optimization.

That does not make cheating ethical.

It does make the hiring process look outdated.

The Accessibility Argument

One of Roy’s strongest arguments is that not everyone has months to grind LeetCode.

That part is fair.

Some candidates have school, jobs, family responsibilities, financial pressure, or less access to coaching and prep resources. The interview system already favors people with time and support.

But using AI secretly does not fully solve that inequality.

It may even create a new one. Now the advantage goes to people who can afford better tools, know how to hide them, and are willing to take the risk.

So the accessibility argument is real, but it does not automatically make the tool harmless.

What Companies Might Do Next

The obvious response is that companies will try to make interviews harder to cheat.

That could mean:

  • More in-person interviews
  • Stricter proctoring
  • Better screen monitoring
  • Live pair programming
  • More follow-up questions
  • More system design rounds
  • Take-home projects
  • Work trials or practical tasks

Some of these are better than LeetCode.

Some are worse.

Take-home projects can become unpaid labor. Pair programming can still be stressful and artificial. Proctoring can become invasive. In-person interviews make access harder for candidates who cannot travel.

There is no perfect fix.

But the old model of “solve this puzzle while we watch” is clearly under pressure.

Better Interviews Are Possible

The best interviews should look more like the actual job.

That could mean asking candidates to:

  • Debug a small broken app
  • Explain tradeoffs in an existing system
  • Review code and suggest improvements
  • Build a small feature with reasonable constraints
  • Talk through past projects in detail
  • Use AI openly and explain the output
  • Make judgment calls instead of only writing algorithms

The last one matters a lot.

AI is already part of software development. Pretending candidates will never use it is unrealistic.

A better interview might allow AI but test whether the candidate can use it well. Can they verify the answer? Can they catch mistakes? Can they adapt the solution? Can they explain what the code actually does?

That feels much closer to modern engineering than pretending everyone codes in a vacuum.

Final Take

Roy Lee did not kill LeetCode interviews by himself.

He exposed how fragile they already were.

Interview Coder is uncomfortable because it sits right in the middle of a real problem. Secretly using AI in interviews is dishonest. At the same time, the format it exploits is often a poor measure of actual engineering ability.

That is why the story spread so far.

It gave candidates a villain, companies a threat, and everyone else a reason to argue about whether technical hiring still makes sense.

The era of pure LeetCode supremacy is probably ending.

Good riddance, honestly.