
Difference between first-class and second-class


Well, most people don't think simply enough. All right, so: do you know the difference between a recipe and understanding?

Imagine you're going to make a loaf of bread. Yep, the recipe says: get some flour, add some water, add some yeast, mix it up, let it rise, put it in a pan, and put it in the oven. It's a recipe.

Understanding bread, however, involves biology, supply chains, grain grinders, yeast, physics, thermodynamics—there are so many levels of understanding there. And then, when people build and design things, they frequently execute some stack of recipes. The problem with that is the recipes all have limited scope. For example, if you have a really good recipe book for making bread, it won't tell you anything about how to make an omelet.

But if you have a deep understanding of cooking, then bread, omelets, sandwiches—all these things come from a different way of viewing everything. And most people, when they become experts at something, hope to achieve a deeper understanding, not just a large set of recipes to execute.

It's interesting to walk groups of people through this, because executing recipes is unbelievably efficient if it's what you want to do. But if it's not what you want to do, you're really stuck.


Does every stage of development need deep understanding on the team?

This goes back to the art versus science question. Sure, if you constantly unpack everything for deeper understanding, you never get anything done. But if you don’t unpack understanding when you need to, you’ll do the wrong thing.

By deep understanding, do you also mean fundamental questions like "What is a computer?" or "Why?" The "why" question being: why are we even building this? Is it about purpose, or do you mean more like going toward the fundamental limits of physics, really getting to the core of the science?


In terms of building a computer:

Think simpler. In practice, you build a computer, and then when somebody says, "I want to make it 10% faster," you go in and say, "All right, I need to make this buffer bigger, maybe add an add unit, or take this thing that's three instructions wide and make it four instructions wide." What you see is that each piece gets incrementally more complicated.

And then at some point, you hit a limit. Adding another feature or buffer doesn’t seem to make it any faster. People will say, "Well, that's because it's a fundamental limit." But then, somebody else will look at it and say, "Actually, the way you divided the problem up and the way the different features are interacting is limiting you, and it has to be rethought and rewritten."
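
This pattern is easy to see in a toy model. The sketch below is my own illustration, not anything from the conversation: it assumes, Amdahl-style, that only a fraction of the workload has enough instruction-level parallelism to use extra issue width, so each added slot buys less than the last.

```python
# Toy model of diminishing returns from incremental widening.
# Assumption (mine, not from the conversation): only a fraction of the
# workload has enough instruction-level parallelism to use extra issue
# slots, so each added slot buys less than the last (Amdahl-style).

def speedup(issue_width: int, parallel_fraction: float = 0.6) -> float:
    """Amdahl-style speedup for a machine `issue_width` instructions wide."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / issue_width)

if __name__ == "__main__":
    prev = speedup(1)
    for width in range(2, 9):
        cur = speedup(width)
        gain = (cur / prev - 1) * 100
        print(f"width {width}: {cur:.2f}x total, +{gain:.1f}% over width {width - 1}")
        prev = cur
```

In this toy model, going from 3-wide to 4-wide still pays a visible few percent, but by 7-wide to 8-wide the gain is nearly zero. It starts to look like a fundamental limit, even though it is only a limit of this particular decomposition.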

So, you refactor and rewrite it. What people commonly find is that the rewrite is not only faster but half as complicated as when it was first built.


How often in your career have you seen it needed, maybe more generally, to just throw the whole thing out?

This is where I'm on one end of the spectrum: every 3 to 5 years. If you want to make a lot of progress in computer architecture, every five years, you should do one from scratch.


Isn’t it scary?

Yeah, of course. Well, scary to who? To everybody involved. Because, like you said, repeating the recipe is efficient. Companies want to make money. Well, no—individual engineers want to succeed. So, you want to incrementally improve—like increasing the buffer from 3 to 4—but this is where you get into diminishing returns.

I think Steve Jobs said this: every project starts somewhere, climbs, and then hits diminishing returns. To get to the next level, you have to start over. The new design's starting point will be lower than the old one's optimized peak, but its ceiling is higher. So now you have two kinds of fear: short-term disaster and long-term disaster.
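
The two-curve picture can be made concrete with a small sketch. This is my rendering, not Jobs's or Keller's math, and the numbers are invented: model each design as a logistic S-curve of performance over time, with the rewrite starting later and lower but saturating at a higher ceiling.

```python
import math

# Toy rendering of the two-curve picture (my sketch; numbers invented):
# each design follows a logistic S-curve of performance over time, and
# the rewrite starts later and lower but has a higher ceiling.

def s_curve(t: float, ceiling: float, midpoint: float, rate: float = 1.0) -> float:
    """Performance of one design at time t, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def old(t: float) -> float:   # mature, incrementally tuned design
    return s_curve(t, ceiling=100, midpoint=0)

def new(t: float) -> float:   # rewrite: starts lower, ends higher
    return s_curve(t, ceiling=180, midpoint=4)

for t in range(9):
    note = "  <- rewrite pulls ahead" if new(t) > old(t) and new(t - 1) <= old(t - 1) else ""
    print(f"t={t}: old {old(t):6.1f}  new {new(t):6.1f}{note}")
```

The crossover is exactly the scary part: for a while, the new design really is worse than the tuned old one.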


And you’re grown-ups, right?

People with a quarter-by-quarter business objective are terrified about changing everything. But people who are trying to run a business or build a computer for a long-term objective know that short-term limitations block them from long-term success.

If you look at leaders of companies that have had really good long-term success, every time they saw the need to redo something, they did it. Someone has to speak up, or you do multiple projects in parallel—optimize the old one while you build a new one. But the marketing guys are always like, "Promise me the new computer will be faster on every single thing."


And the computer architect says...

Well, the new computer will be faster on average, but there's a distribution of results and performance, and you'll have some outliers that are slower. That's very hard, because there's always one customer who cares about that one outlier.
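
Here is a hedged sketch of what "faster on average, with slower outliers" looks like in practice. The benchmark names and speedups below are invented for illustration, and the average uses the geometric mean commonly quoted for benchmark suites.

```python
from statistics import geometric_mean

# Hypothetical per-benchmark speedups of the redesigned machine over the
# old one (numbers invented for illustration): faster on average, with a
# couple of outliers that regress.

speedups = {
    "compile":   1.35,
    "database":  1.22,
    "crypto":    1.18,
    "render":    1.40,
    "legacy_io": 0.92,  # outlier: slower than the old design
    "parser":    0.97,  # outlier
}

print(f"geomean speedup: {geometric_mean(speedups.values()):.2f}x")
for name, s in sorted(speedups.items(), key=lambda kv: kv[1]):
    flag = "  <- regression" if s < 1.0 else ""
    print(f"  {name:10s} {s:.2f}x{flag}")
```

A roughly 1.16x geomean headline is true, and so is the complaint of the one customer who only runs the workload that got slower.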