100%. Read it. No commentary needed.
- Service meshes should provide reliability (retries, timeouts, traffic shifting, rate limiting), observability (error tracking, request volume, other logs), and security (mutual TLS, access control, etc.)
- A common feature set avoids the need to implement these capabilities at the application level within individual services. Uniform across the stack in a way that's agnostic to the stack's implementation. Decoupled from business logic / application code as a whole. Avoids issues like thundering herd due to different services all implementing their own retry logic.
- Also organizationally decouples platform and service owners. We all know how tricky it is to properly implement retrying at the app level -- it's great if the underlying infra gave that for free. Rather than get every service owner to implement error logging / retries / TLS, lift that responsibility out.
- Note that due to this decoupling, certain things remain the application's responsibility. Things like tracing, responding to errors, and preventing IDOR happen within application code. Some things are infra concerns but are not handled by the service mesh, like data encryption at rest, data replication, and log aggregation.
- Ancestor of service mesh was an application-level shared RPC library that was culturally enforced ("fat client" library). E.g. Google Stubby. These were generally not stack independent and were opinionated re: programming model, and might contain business logic (e.g. middleware). Also similar to a decoupled service bus (which was generally a monolithic communication bus).
- Necessity / popularity / viability of service mesh tied to reduction in cost of microservice architecture. Today we have more infra and more operational experience. Also trends in containerization and other related tooling make handling microservices easier. Trend towards reducing downside of polyglot systems.
- Docker: package any system into a portable black box.
- Commoditized cloud infra: elastic access to compute.
- K8s: Map packages to compute; operational cost of running 10 things isn't much different than 100.
- Service mesh: proxy in front of each of the black box service packages providing uniform platform behavior.
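The core win of lifting retry logic out of the application is uniform policy. Here's a minimal sketch of the kind of retry-with-jittered-backoff behavior a sidecar applies (illustrative only; the names here are mine, not any actual mesh's API):

```python
import random
import time

def with_retries(call, max_attempts=3, base_delay=0.1,
                 sleep=time.sleep, rng=random.random):
    """Invoke `call()`, retrying failures with exponential backoff plus
    full jitter. A mesh sidecar applies a policy like this uniformly,
    so individual services don't each hand-roll their own retry loops."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Full jitter staggers retries across clients, which is what
            # prevents a synchronized thundering herd against a
            # recovering backend.
            sleep(rng() * base_delay * (2 ** attempt))

# Demo: a call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = with_retries(flaky, sleep=lambda _: None)  # no real sleeping in the demo
```

Because the policy lives in the proxy, changing `max_attempts` or the backoff curve becomes a platform-level config change rather than a per-service code change.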
I've always wondered why the KMT appeared to have unlimited money for much of Taiwan's early history. China's "princelings" get a lot of media attention in the west; however, it's less known that Taiwan has its own princeling brood as well. Growing up in Taiwan, I've seen bits and pieces of Taiwanese "old money" and have always wondered where it came from.
In 1944, the United States Army Air Corps commenced Operation Matterhorn to bomb Japan's steel industry from bases to be constructed in mainland China. This was meant to fulfill President Roosevelt's promise to Chiang Kai-shek to begin bombing operations against Japan by November 1944. However, Chiang Kai-shek's subordinates refused to take airbase construction seriously until enough capital had been delivered to permit embezzlement on a massive scale. Stilwell estimated that at least half of the $100 million spent on construction of airbases was embezzled by Nationalist party officials.
Note that this is not the full story. The KMT brought (exfiltrated, really) ~$200 million with them when they fled to Taiwan. According to the US Inflation Calculator (whose numbers I haven't verified), this adds up to about $2 billion in 2021 dollars. A large sum to be sure, but it's about what Jeff Bezos makes in a week, and hardly seems like enough to foster an army of Taiwanese princelings (while also, you know, being used for actual governmental purposes). One other piece is the massive amounts of land the KMT claimed. Much of the land was held by Japanese landowners who had fled after the conclusion of World War II, and was ripe for the taking. Other land was held by indigenous landowners, and, well, we all know what happened to the Native Americans.
There are various convincing reasons for Xi Jinping's harsh policies towards Xinjiang: quelling separatists, building a strong national identity, etc. One possible other reason that I haven't seen mentioned as much: several Muslim sects in western China pledged allegiance to the KMT in the 1940s.
Why did the Muslims ally with the KMT? Well, it's not like the CCP only decided to start exterminating Muslims this century...
Japan and Taiwan have generally enjoyed warm relations, and some point to Taiwan's international aid to Japan after the 2011 Tohoku earthquake as a prime example of Taiwanese goodwill. What's less mentioned is the fact that the KMT assimilated many members of Japan's military top brass after the Japanese surrender in World War II, and granted many more lenient treatment. Many such members would later go on to become leaders of Japan's new government.
This fact more or less slips under the radar, while the US's similar absorption of German and Japanese scientists is routinely criticized.
The self-awareness of an evil man
A quote from Chiang Kai-Shek himself:
If when I die, I am still a dictator, I will certainly go down into the oblivion of all dictators. If, on the other hand, I succeed in establishing a truly stable foundation for a democratic government, I will live forever in every home in China.
Certainly hearkens back to my previous post about Peter Thiel and his self-awareness around his own "founding murder."
Aka. the "philosophical foundation of fascism." An interesting lens through which to view the American right. Also has interesting parallels to socialism with regards to the ideal conditions of its birth: the promise of a brighter future, targeting of those who have lost faith in traditional politics, the desire to overthrow the old order, belief in irreparable corruption in the current leading class. The initial Fascist Manifesto is hard to distinguish from the Communist Manifesto.
British political theorist Roger Griffin has coined the term palingenetic ultranationalism as a core tenet of fascism, stressing the notion of fascism as an ideology of rebirth of a state or empire in the image of that which came before it – its ancestral political underpinnings.
Historically, fascism tends to gain popularity when individuals feel the current "state of the state" has failed them, and strive for the glory days of the previous state. This can happen when a previously dominant group resents losing power and influence to some other foreign group, be it in a top-down manner (early 20th century China vs. its colonizers), bottom-up (the American right vs. immigrants and minorities), or both (Nazi Germany vs. the Treaty of Versailles signatories from the top, and the Jews from the bottom).
As part of this death and rebirth Fascism sought to target what it perceived as degenerative elements of society, notably decadence, materialism, rationalism and enlightenment ideology. Out of this death society would regenerate by returning to a more spiritual and emotional state, with the role of the individual at its core.
Tracks with the fascination of the "intellectual right" with ancient Greece and Rome, as well as the increasingly Puritanical stance of the religious right.
Through all this there will be one great leader who battles the representatives of the old system with grassroots support. They appear as one mass of people who have only one goal: to create their new future. They have infinite faith in their mythical hero as he stands for everything they believe in. With him, the country will rise like a phoenix from the ashes of corruption and decadence.
- Multiple layers -- SQL layer (SQL -> KV lookup instructions), KV layer (HA node storage, with transaction / distribution layer)
- Multi-tenant architecture keeps SQL layer single-tenant but makes KV layer multi-tenant
- SQL layer prefixes key with unique single-tenant identifier; KV layer does authN (what's the impact on perf?)
- Range rebalance, hot/cold mgmt are still done? This means that other tenant usage patterns may affect your perf?
- Throttles to avoid one tenant dominating node resources...
- Ultimately their free offering is enabled by efficient packing of small tenants, such that the overhead of each tenant is effectively zero
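To make the prefixing concrete, here's a toy model of tenant-scoped keys (an illustrative encoding of my own, not CockroachDB's actual key format):

```python
TENANT_PREFIX_LEN = 8  # fixed-width prefix keeps keys sortable

def tenant_key(tenant_id: int, user_key: bytes) -> bytes:
    """Prefix a KV key with its tenant ID. A fixed-width big-endian
    prefix keeps each tenant's keyspace contiguous in sort order, so
    ranges can be split and rebalanced per tenant."""
    return tenant_id.to_bytes(TENANT_PREFIX_LEN, "big") + user_key

def tenant_of(key: bytes) -> int:
    """The KV layer can recover the tenant from any key, which is what
    lets it check that a request only touches its own tenant's data."""
    return int.from_bytes(key[:TENANT_PREFIX_LEN], "big")

# All of tenant 1's keys sort before any of tenant 2's, regardless of
# the user-level key contents.
keys = sorted([tenant_key(2, b"a"), tenant_key(1, b"z"), tenant_key(1, b"a")])
owners = [tenant_of(k) for k in keys]  # [1, 1, 2]
```

Under this model, the authN check at the KV layer reduces to comparing the key prefix against the tenant's credentials, which should be cheap relative to the lookup itself -- a plausible answer to the perf question above.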
My opinion on misunderstood perks of startups, stream of consciousness style. Ended up writing more than I had expected, so will revisit and clean this... someday.
- Money. The EV of startup equity is horrible, to the point where, by one estimate, you have less than 1% odds at out-earning what you would have made at a mega-corp.
- What's misunderstood: the notion that startups are ALL cash-light and equity-heavy is outdated. Many "hot" startups these days pay competitive base salaries (well, at least up to the staff-ish level, where mega-corp salaries skyrocket to the realm of absurdity).
- Learning (in a vague sense). You will probably be exposed to more breadth at a startup, at a tradeoff of depth. I mentioned this when talking about growth potential and the difference between engineers from large and small companies, in the context of databases, architecture, and domain modeling.
- What's misunderstood: the type of learning that's exclusive to startups is very narrow. (Valuable, but narrow.) Generally, working at a startup is invaluable for product learning, learning to work effectively cross-functionally, and learning to scale human elements like culture (NOT machine elements like databases).
- Impact. This is a question of definition. Yes, the final product will have more of your touch on it, but its reach and customer impact will likely be far smaller than if you worked at a megacorp. Comparing "increase performance by 10%" to "increase performance by 0.01%" is meaningless without considering total net impact (e.g., actual user base).
- What's misunderstood: I think the part of impact that is more unique to startups is the cross-functional nature of it. It's much more likely that your work on a product touches areas outside your business function; for example, engineers at startups will very likely have a larger role in both design and product. Another, less sexy area where individuals have much more impact at startups is that individual bad decisions are much, MUCH more likely to straight up destroy the company.
- Interesting problems. Related to both the question of breadth / depth in "learning" and the question of definition in "impact." Startups generally don't have "basic research" projects concerning kernels, programming languages, cutting-edge computer science research, "innovation investments," etc. (Startups like, say, CockroachDB are much rarer than yet another startup for your dog's mental health or something.) Things are generally much more procedural.
- What's misunderstood: I think the "interesting problems" of startups are mostly product / business problems, not technical ones. Mega-corps are often happy to sit back and enjoy their massive second-mover advantage once some new product area is explored and proven, or to just throw money at buzzwords as hedges (hello, blockchain).
- Autonomy. At startups, you are usually at the whim of business requirements with regards to what you actually work on; you need to do what it takes for the business to succeed. There's less process, bureaucracy, and general large company insanity, but it's replaced by more danger and general small company insanity. And it's not like mega-corps are generally less flexible when it comes to quality of life things like remote work, either. People tout new-age remote-only startups, but it's easy to cherry pick -- my experience is that the average startup is much less remote-friendly than the average mega-corp, even with the remote-only outliers.
- What's misunderstood: I think autonomy at startups is just overrated.
- I think the supposedly higher level of learning, impact, and amount of interesting problems at startups are generally constrained to the product space. If this is an area you are passionate about, it makes startups much more appealing.
- Regarding money: equity is overrated, while base salary doesn't get the attention it deserves. With that being said, it's almost guaranteed you'll make less at a startup than at a large company.
- I do think there is a kind of camaraderie that arises from startups that is hard to replicate at larger companies. But it's also the same kind of camaraderie that soldiers or sweatshop workers develop from living the same horrendous existence, so it's more of a silver lining on a crap situation than anything else.
- With all that said, I still consider my current job (at a startup) the best job I've had. I don't think it's due to any systemic difference between startups and large companies. I think I just got really, really, really damn lucky.
These two ideas are intricately linked to me. The better you optimize rote tasks, the faster you can improve with directed practice; and because so few people practice, optimizing rote tasks will inherently make you way more effective than average.
I love how he also brings up video games. When I was in college, I would observe (and occasionally play with) a "high level" Dota stack (which, after various iterations, would become FDL). Rick (who is now the CEO of Persona, where I work) would always harp on everyone to watch replays. Watch replays. Watch replays. Over and over. Not just your replays. Replays of pros. Replays of your opponents in games you lost.
It feels like such a chore at first. Why would you spend your time doing something besides grinding more and honing your skills? But after doing it just a few times, it became crushingly obvious how important it was for improving. Dan already covered this better than I can, but the insight about "everyone spending more time than they think on rote tasks" rings particularly true. And targeted practice of those skills is incredibly potent.
You probably are already rolling your eyes at the title, but trust me, this one is a good read. There is a deeper exploration into the 10x programmer myth that I think is insightful, but misses a key characteristic of value generation that is actually mentioned in the first article: building the right thing. Engineers that generally do what it takes to ensure they build the right thing (and do so consistently) are basically automatically 10x engineers in my head.
While I find "baby ivies" ridiculous and consider myself a relatively low-intensity parent, some parts of this definitely forced me to reexamine the ugly parts of my upbringing and my own child-raising philosophy.
I think there's a large overlap in the 9.9% mentality and the Asian-American mentality. Not necessarily due to the same beliefs, but many of the outcomes are the same. Tiger parenting isn't limited to Asians any more, apparently.
Shoshin (初心) is a word from Zen Buddhism meaning "beginner's mind." It refers to having an attitude of openness, eagerness, and lack of preconceptions when studying a subject, even when studying at an advanced level, just as a beginner would.
I think about the confidence-competence gap a lot. The inflection point where confidence overtakes competence generally marks the zenith of one's career, and from my experience, it's incredibly rare to reverse this trajectory.
Loss of "shoshin" seems tightly correlated with this inflection point. An open mind depends on the belief that one does not know it all. With a closed mind comes cynicism, elitism, and loss of perspective.
- Teslas are common sights in plenty of places in the US now. But San Francisco remains the only place I've been where I've seen dirty Teslas. Everywhere else, Teslas are status symbols -- meticulously polished, a symbol of success. In San Francisco, Teslas are just cars with nice features.
- I've lived in San Francisco for about half a year, and visited maybe 5 times since I left. San Francisco has never smelled nicer than during lockdown.
- I can't believe some people actually dislike San Francisco fog. I was sorely disappointed when every day of my visit was bright and sunny. The fog is beautiful and the sun is the enemy.
- San Francisco has 179 playgrounds -- one of the most impressive collections in the US. I've always thought it was a terrible place for young kids, but I've been finding out that there are parts that are extremely nice.
- Everyone's heard of the Big Mac index. If there were an omakase index, you'd think San Francisco was downright cheap to live in. You get so much more for your dollar than pretty much anywhere on the East Coast (even in much cheaper cities). A dim sum index would perform similarly, I think.
- Captures much of my experience with therapists I assume are inexperienced. I've been lucky to be able to witness experts of cognitive behavioral therapy, and it's much, much better than the run-of-the-mill sounding board many therapists are. Also just a fun read, as always.
- I got some additional insight from a psychiatrist friend that supportive therapy is really just the foundation of training, and is generally used when more specific therapies are ineffective due to severity of current symptoms or other factors. This explains a lot -- I'd assume that therapy from clinical practitioners has a higher level of rigor and structure.
- Lots of less interesting history here. One core insight is Patrick's remark on scaling the company: "When you come out of [the growth period where product development slows], if you do it well, then stand back." They recognized the need to persevere through the growing pains, and put in the effort to build a culture (or civilization?) up front.
- Callback to the idea of taste again. "While “taste” can be hard to define precisely, in some sense, whatever the domain — whether it’s music or something else — if you spend the time and you put a lot of thought into appreciating something, teasing apart what makes it great, and building a thoughtful, opinionated perspective, that’s taste."
- On investing in potential future competitors: "As Stripe grows, we want to avoid the “we must win everything” mindset that can easily set in. We’d rather help enable a successful ecosystem… it’s a big, abundant world out there."
Thiel starts off by criticizing modern rationalist (homo economicus) behavior of nation-states (MAD, etc.), and describes how they were caught off guard by the "handful of crazy, determined, and suicidal persons" operating outside of Western norms in the 9/11 attacks. A new world has new risks, and modern society is underequipped to handle these new risks in Thiel's eyes.
The solution? Unilateral action. Implied is the establishment of hegemonies, suspension of unalienable civil rights, and the dismissal of the needs of the minority to achieve the supposed needs of the many (a familiar neoconservative stance, and one that has been deeply associated with Straussianism).
(As an aside, there's definitely an analogue between these "crazy suicidal terrorists" and the "definitely optimistic visionary skeptics" Thiel describes in his portrait of hyper-successful entrepreneurs in Zero to One. So, while Thiel is avowedly anti-Islam, it seems he does have a certain degree of respect for them.)
Thiel further muses on how fundamentalist terrorists do not fit into the modern mold of human behavior. Society has overlooked the possibility of these individuals, he claims, something which the "older Western tradition," with its Hobbesian view of the potentially evil nature of humanity, never would have done. The "older tradition," according to Thiel, declined due to the world getting sick of endless war, and eventually deciding that finding an answer to "the question of human nature" simply wasn't worth it. Dulce et decorum est pro patria mori was "the old lie."
And now that everyone's brains were sitting idle, what filled the gap? Economics and capitalism. Thiel cites John Locke, who bridged the gap between "the pursuit of the virtuous life" and "the pursuit of happiness" with the following rationale:
1. The strongest instinct God planted in Man was self-preservation (and not loving God, loving thy neighbor, etc.)
2. Nature is harsh and cruel, and doesn't provide adequate means to fulfill God's commandment to "go forth and multiply."
3. Thus, God intends for Man to labor and to create their own value. "Avarice is no longer a mortal sin, and there is nothing wrong with the infinite accumulation of wealth."
Pretty good explanation for how the discrepancy between the modern American Christian's purported beliefs and actions developed over time (since before the United States even existed!).
Back to 2001. The 9/11 attack jolted the West back to the reality that many parts of the world still believed in the old lie. And to answer this, the West must awaken from the Enlightenment and remember what it discarded in its pursuit of happiness and peace, and the "newer world of commerce and capitalism."
1. The question of human nature is foundational, and unlike what Locke thinks, humans must pick sides, and thus establish friends and enemies. (The assumption seems to be that the question of human nature cannot be answered.)
2. There will always be friends and enemies. Historically, those who refused to recognize anyone as enemies and pacifistically sought "unilateral disarmament" were destroyed (citing the aristocracy in both the Russian and French revolutions).
3. Politics (and conflict) is thus unavoidable.
In the (supposedly endless) battle between Christianity (characterized by the Western world) and Islam, the Western world no longer remembers why it is fighting. It no longer believes the old lie. And thus, unless the Western world "wakes up," it will eventually be destroyed.
Tired stuff we've all been sick of hearing since Bush, but Thiel raises an interesting, Nietzschean dilemma: if the West were in fact awake, and could fight Islam with the ferocity of the Crusaders, this would "do away with everything that fundamentally distinguishes the modern West from Islam." And this is where Thiel finally introduces Leo Strauss as a counterpoint to the extremes of Locke and Schmitt. (This is also where Thiel's own world view starts visibly leaking in: his penchant for secrets and idolization of the curious free thinker mark him as a potential Straussian.)
What follows is (appropriately) the most opaque section of the essay: Thiel starts with a breakdown of Strauss' philosophy and style, then meanders into some of Strauss' views on whether it is possible for "glorious societies" to be founded without an original sin. He then shifts to suggesting Strauss' esotericism as a potentially valid "middle way" of governmental policy between Locke and Schmitt (presenting a facade to the people while carrying on shadowy operations with little oversight behind the scenes -- again, something that's been standard practice since the Bush administration), but rejects this: esotericism will invariably come to light when enacted as policy. (There's also a slightly concerning criticism of checks and balances in government and a subtle recommendation of authoritarianism as avoiding "political paralysis.")
The essay concludes by introducing a fourth and final philosopher: René Girard. This is where I think Thiel loses track of his original thesis and goes too deep into philosophizing. He brings up the "Girardian Apocalypse," where the "terrible knowledge" of the "founding murder" of the city comes to light (the founding murder being the scapegoat sacrificed for the glorious society -- the aforementioned murder of Remus, or the more modern example of unsupervised global surveillance for the sake of security). He returns to the previous philosophers by outlining criticisms from Straussian contemporaries of Girard, who claimed that his laying bare of the founding murder wasn't "esoteric" enough and risked the destruction of society.
And Thiel claims that in the end, Girard will be right: while founding murders -- underhanded and secretive tactics -- may be necessary to push the Western world forwards, they will inevitably come to light some day. The true "good guys" (the pacifist Girardians) will eventually win, and evil means will meet evil ends. In the fifteen years since Thiel authored this essay, he has managed to become one of the most hated figures in America (for quite good reason). And nothing can be more telling than Palantir, which exemplifies all the shadowy, conspiratorial, esoteric activities Thiel describes in relation to the founding murder. It's interesting to note that Thiel likely does not see himself as blameless, and that he will one day meet his deserved "evil end..." and that the world would be better for it. Dulce et decorum est pro patria mori.
While the essay claims that the Straussian view is incomplete, Thiel really does strike me as a Straussian. The secrecy and obsession with conspiracy, the Messianistic tone, the fetishization of the classics, the steadfast belief that the ends justify the means... the shoe fits perfectly. If anything, I think this essay gives a clearer picture of Thiel's true world view than Zero to One does, and provides a clear rationale for his allegiance to Trump. However, Thiel does break the mold with his apparent rejection of esotericism. Maybe I'm simply not smart enough to detect any undercurrent of hidden meaning in his essay, but Thiel seems to prefer a straight-to-the-point, prosaic writing style, free of the frustrating "hidden meanings" that other Strauss-related works are chock full of. He doesn't mask his beliefs under endless layers of allegory or analogy, and even calls out Strauss' writings as "prohibitively obscure." Also, Thiel is generally against foreign wars, which sets him apart from other prominent Straussians (but not necessarily from Strauss himself).
Girard's idea of the founding murder is essentially that any successful society has a terrible, secret "original sin" that brings about a great differentiation. Stability within society is preserved by violently sacrificing scapegoats to re-establish differentiation. And the (conscious or subconscious) terror of this murder keeps the society on the right path.
Examples of founding murder:
- The murder of Remus and the founding of Rome
- The Thirty Years' War and the Treaty (and Peace) of Westphalia
- (In Christianity) the murder of Christ itself, and the subsequent Christian world
- The (hypothetical) slaughter of Islam by Christianity or vice versa (hearkening back to Thiel's point that if the West responds to violence with violence, it loses differentiation with Islam).
- George Floyd and (hopefully) a world conscious of racial struggle and continued inequality
- (Maybe) the dot-com crash and the beginning of the modern tech industry (though many question if we've learned our lesson at all...)
- (Maybe) the crypto crash and end of the heady ICO days (same question)
The nature of Bloom’s thoughts on Communism (and relatedly Fascism) was quite subtle and sophisticated, of which there were three primary, intertwined concerns.

First, the nature of Communist society was inherently unjust and ineffectual, because it didn’t align with aspects of human nature foundational to society’s progress and ultimate happiness. Though one could argue or assume that everyone is created, in God’s or the law’s eyes, equal, in actuality, there are differences in skills and talents that enable individuals to be better in certain instances than other individuals — better scientists, philosophers, musicians, basketball players, etc. Communism, by its very nature, would aim to minimize such naturally occurring differences, and consequently, prevent more talented individuals from reaching their potential and also benefiting society. As such, it was and is “unjust”. (Bloom’s interest in Plato’s Republic partly stemmed from its discussion of those inequalities’ potential benefits to society.)

Second, given that labor would not be paid based on skill, capability, or ostensibly even work performed, people are not incentivized to work diligently. In one lecture, Bloom mentioned that when the Communists assumed power, food shortages (and consequent famine) ensued because the farmers stopped being productive: “They simply stopped working.”

Third, establishing a communist society was corrupted by the fact that it required a totalitarian regime to implement, oversee, and often brutally enforce its existence, qualities it shared with the other twin spectre of 20th century government, Fascism. Workers might unite and revolt, but a ruthless tyrant was required to establish the government. Furthermore, enforcing its foundational requirement — a near-classless society — required forcefully “equalizing” the differences amongst people, the very ones which, in concert, benefited society: progress, invention, etc.
I don’t need to describe the nature and destruction that such tyrannical force, justified with rabble-rousing rhetoric, wreaked on not only the countries themselves, but the world at large.
- How to get organizational buy-in for the (supposedly) short-term pain of componentization? They had a strong long-term orientation culture from the start.
- Rather than directly jumping to microservices, started with a componentization approach that would leave the door open to microservices if desired. Decided it wasn't worth it after componentization. Architectural change rather than (service) topological change.
- Runtime analysis: built a tool, run in the test suite, that tracked cross-class calls. Tracked violations of component boundaries. Numbers helped to gamify things and get people interested. However, the tool was canned because its feedback was not actionable.
- Followed up with Packwerk, a much simpler static analysis tool. Only tracks static references. Less comprehensive and doesn't attempt to handle metaprogramming, but also fewer false positives. Same loading rules as Zeitwerk.
- Architecture: need to think about component interfaces (aggregate of all classes) instead of just class interfaces. Bad class interfaces muddle component interfaces.
- Two types of cohesion:
- Functional cohesion: code that performs the same tasks lives together
- Data cohesion: code that operates on the same data lives together (very OOP, pushed by ORMs)
- Insights into Shopify team structure
- Traditional teams
- Opt-in "guilds" that are like meetups; don't try to carry out changes but strategize on grassroots approaches
- Technical leadership committee -- rotational; participates in tech design reviews (basically cross team tech leads)
- "My experience tells me that a temporary incomplete state [of any migration] will at least last longer than you expect. So choose an approach based on which intermediate state is most useful for your situation."
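A Packwerk-style static check reduces to a simple rule: cross-component references are only allowed along declared dependencies. A toy model of that rule (the names and data shapes here are mine, not Packwerk's actual API):

```python
def boundary_violations(references, component_of, allowed_deps):
    """Return references that cross a component boundary without a
    declared dependency.

    references:    iterable of (from_file, to_file) static references
    component_of:  dict mapping file -> component name
    allowed_deps:  dict mapping component -> set of components it may use
    """
    violations = []
    for src, dst in references:
        a, b = component_of[src], component_of[dst]
        if a != b and b not in allowed_deps.get(a, set()):
            violations.append((src, dst))
    return violations

# Demo: "shop" declares a dependency on "billing", but not vice versa.
component_of = {"billing/invoice.rb": "billing", "shop/product.rb": "shop"}
allowed_deps = {"shop": {"billing"}}
references = [
    ("shop/product.rb", "billing/invoice.rb"),   # declared, OK
    ("billing/invoice.rb", "shop/product.rb"),   # undeclared, flagged
]
bad = boundary_violations(references, component_of, allowed_deps)
```

Since only statically visible references are tracked, metaprogrammed calls slip through, which is exactly the comprehensiveness / false-positive trade-off noted above.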
Some nice snippets here for my fellow academia refugees, and for my fellow maximizers (vs. satisficers).
I like having ideas, launching businesses around ideas, and bringing them to profitability (winning Level 1). And I'm pretty good at this, evidently. But I really don't want to be the guy whose job it is to win Level 2. I don’t like playing Level 2, it's way harder and more toilsome, and it militates against what I most want to be doing, which requires copious leisure. Leisure isn't just time not working. It's a distinct state.
On net, I probably spend more time engaged in free thinking and creating now than I did as a professor, but one has to grind much harder to be a winner in the business world. I don't even care about the business world, but once I start playing a game, I want to be a winner in that game. I don't need to be a top 1% winner, but I can't help but want to be at least a top 5% winner. If I'm going to do it, then why not try to win? But managing these things and keeping my effort reflective of my real ultimate goals has turned out to be extremely stressful. A lot of existential anxiety as well.
I've been thinking about the maximizer question again lately. Why work so hard to do something that doesn't immediately bring you utility (happiness, satisfaction, etc.)? What's the cost of not doing this and simply disengaging? The existing literature cites "anticipated regret" (FOMO, more or less) as the primary motivator. Let's try to unpack that.
What do I lose if I were to choose to coast through life? Let's start with the practical aspects:
- 1. Money. Barring anomalies like Google (where, honestly, the culture of "coasting" is overly exaggerated), my income would probably drop by over 50%.
- 2. Flexibility. I talk to my peers at "easy" jobs, and they tend to give limited PTO, have more red tape, etc.
- 3. Loss of control and influence. Simply put, if you are unfirable, so is everyone else. Without the specter of punishment for bad behavior, everyone has the opportunity to be selfish at the expense of everyone else, withholding cooperation for personal gain.
Next, the psychological aspects:
- 1. Fulfillment. There is a culture war today between the 996ers and the "lying flatters" (which, for the record, is one of the funniest social movements I have ever seen). I am somewhere in between in that I actually do get a good deal of personal satisfaction from my professional success. I hear people parrot all the time that tying one's career to one's personal fulfillment is unhealthy, but what can I do, the thing that fulfills me just so happens to pay the big bucks, and I don't like being bored.
- 2. Meeting my own potential. This is a tougher, more abstract version of fulfillment. I consider myself reasonably smart, with parents who worked extremely hard to give me a good education. At this point in my life, it feels like it would be a waste of my own potential to coast instead of trying to strive for achievement. Many people similarly parrot that one has no obligation to anyone to rise to the expectations of them at the expense of one's own comfort, but... I don't know. I disagree. I don't have a good way of putting it into words, but I disagree. Vehemently. Maybe it's the sense of duty so ingrained into Chinese culture.
- 3. Competitiveness. This calls back to Justin Murphy's need to be a "top 5% winner". I'm an inherently competitive person, be it for work or for play, and when I get to work with an engineer that I consider to be better than me, I basically obsess over how I can get to (and exceed!) their level. I think about our relative merits and weaknesses, and think about the edges I can hone. Is it psychologically unhealthy? Maybe, but I can't stop! It's my natural impulse.
I've framed these three items in a positive light, but they can be flipped to reveal what sort of regret I anticipate by not "maximizing" my life: regret at not feeling fulfilled, regret at not meeting my own potential, regret at "losing" to others.
Maybe with a change in perspective, I can see how unhealthy this all is, and increase my net well being by satisficing instead of maximizing. But frankly, I like who I am, and I like the decisions I make. I'm pretty happy where I am now.
Lots of Lethain.
- One of my biggest concerns at Persona is VC growth pressure throwing off the ratio of onboarding vs. experienced engineers, so this was a timely read.
- "There are a non-zero number of companies which do internal documentation well, but I’m less sure if there are a non-zero number of companies with more than twenty engineers who do this well. If you know any, please let me know so I can pick their brains."
- "The best system rewrite is the one that didn’t happen, and if you can avoid baking in arbitrary policy decisions which will change frequently over time, then you are much more likely to be able to keep using a system for the long term."
- I've been doing a lot of reading on scaling (company-wise, not just architecturally). The term "blast radius" comes up. A LOT.
- No other summary. Everything is timely. Just... read this again once in a while, future me.
- "When you have surplus engineering capacity, folks tend to have a long backlog of stuff they’d like to work on, and many teams immediately jump on those, but I think it’s useful to fight that instinct and to step back and do deliberate discovery." Rings true to me. Everyone I know where I work has "shadow backlogs", and LIFO feels natural, but it really may not be the most ideal. Be more thoughtful about prioritizing excess capacity.
- "If you’re prioritizing without your users' voice in the room, your priorities are wrong."
In my eyes, "monolith vs. composition" is one of the modern day holy wars.
Monoliths vs. microservices. Rails vs. Express. K8s vs. a bunch of Hashicorp offerings. Heroku vs. cloud giants. React vs. Angular. Enterprise tools that do everything vs. focused tools. Even OOP vs. FP to a degree.
Cynical graybeards claim that fashion is cyclical, and that today's trend du jour is simply a reaction against overcommitments to yesterday's fad. But I'm wary of overfitting -- there are really only three or four cycles to look at since the advent of "modern programming."
It seems like compositional approaches are less effective at causing large paradigm shifts. Without an understanding of what is "good" (in the sense of having "good taste" -- see "Building Products at Stripe" which I linked previously), one can't effectively assemble a well-functioning system out of compositional parts. On the other hand, monoliths built by tasteful people can immediately demonstrate such a high degree of value that the accompanying loss of flexibility seems like an obvious tradeoff to make.
My current unfounded hypothesis is that in the early days of a domain's existence, compositional approaches take the lead due to the problem space being smaller. No one really knows what they're doing, and small projects are easier to understand and deliver than large ones. Some time later, a tastemaker may assemble (or reimplement) existing tools into one cohesive, beautiful system whose value speaks for itself. Given enough time, however, the compositional projects start catching up again. Effective monoliths aren't just tools, they're also blueprints of how to do things effectively. Once the blueprint is available, it is much easier to assemble one's own collection of compositional tools that has the power of the monolith without the rigid boundaries that often accompany it. (Not going to talk about self-hosted vs. managed here, which to me is ultimately orthogonal.)
(TODO: think of some real world examples that may fit this.)
- During the advent of SQL, there generally wasn't that big of a need for scale. Consistent (due to everything being in one node) and available. Sharding was hard but was a comparatively rarer need compared to today (peak throughput was 10-100 transactions/sec; peak storage was a handful of terabytes). Developers disliked sharding as it was a leaky abstraction at times; "eBay’s middleware in 2002, for example, required their developers to implement all join operations [when partitioned] in application-level code" (Pavlo & Aslett, 2016). Additionally, sharding makes indexing hard as indexes might not be partitionable the same way as the data.
- In the "web scale" / NoSQL era, people found that they needed massive scalability, support for many concurrent users, and to be online at all times. High availability was chosen over consistency, and NoSQL DBs generally chose to rely on eventual consistency.
- "Transactions don't scale": two-phase commit (change + verify all nodes) performance viewed as unacceptable; replicas allowed to diverge; writes may not persist after merge conflicts; reads may not be latest; state at any given read may not have yet converged; given enough time after any inputs, all replicas will return the same results for a read (i.e., eventual consistency).
- Schemaless nature made sharding easy (see Mongo auto-sharding).
- The relational data model was seen to often be unnecessary overhead, with alternatives like k-v, graphs, etc. being seen as better fits at times. However, due to the lack of transactions, developers switched from wasting time implementing joins at the application level to wasting time handling inconsistent data.
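The eventual-consistency tradeoffs above (diverging replicas, writes lost to merge conflicts, convergence only after quiescence) can be sketched with a toy last-write-wins register. This is my own minimal illustration, not any particular NoSQL system's actual merge logic:

```python
class LWWRegister:
    """Last-write-wins register: each replica keeps (timestamp, value);
    an anti-entropy merge keeps whichever write has the newer timestamp."""

    def __init__(self):
        self.timestamp = 0
        self.value = None

    def write(self, value, timestamp):
        self.timestamp = timestamp
        self.value = value

    def merge(self, other):
        # Adopt the other replica's state only if it is newer.
        if other.timestamp > self.timestamp:
            self.timestamp, self.value = other.timestamp, other.value

# Two replicas accept concurrent writes and diverge...
a, b = LWWRegister(), LWWRegister()
a.write("x=1", timestamp=1)
b.write("x=2", timestamp=2)

# ...then converge after exchanging state in both directions.
a.merge(b)
b.merge(a)
assert a.value == b.value == "x=2"  # the timestamp-1 write is silently lost
```

The silently discarded `x=1` write is exactly the "writes may not persist after merge conflicts" hazard noted above.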
- In today's cloud-native / NewSQL era, many new entrants purport to offer strong consistency, enough availability, and strong partition tolerance, with the goal of ACID-guaranteed transactions + scalable performance of NoSQL. The rationale is that even with a perfectly available DB, you will never get "true" real world availability given things like network errors, so it's better to optimize for the other two. Lack of transactional guarantees was often the dealbreaker for NoSQL.
- New architectural patterns, more efficient storage engines, better sync strategies for nodes (e.g. atomic clocks), better abstraction of sharding middleware
- Moving from structured data model with appending rows to tables with IDs as primary keys + index (which is hard to distribute) to tables as K-V store -- each row has a non-ID column identified as a primary key (link). Keys are table/index/key/columnName and value is column value. (Imagine CSV index-based access via ID + column name.) (Embedding extra data in key, e.g. country code, allows easy partitioning by region.)
- Partitioned indexes instead of replicated indexes
- Improved overall technology; commoditization of compute and memory
- Memory became cheaper vs. storage; feasible to keep entire databases in-memory. Optimizations like moving cold tuples to disk and replacing in-memory record with a pointer allow support for "larger-than-memory" loads.
- Aside 1: OLAP data warehouses meet some of the criteria sought by NewSQL. However they are read-only (barring data ingestion etc.), have much lower performance demands (e.g. responses in the order of minutes can be acceptable), and are optimized for full table scans and large joins due to analytical requirements (which is overhead for most application needs).
- Aside 2: there were previous attempts to optimize existing RDBMS by swapping out the storage engine; replacements of row-based storage with column-based has found a niche with OLAP, but nowadays most NewSQL attempts aim at ground-up rewrites and brand new architectures.
- Besides operational benefits, easier partitioning + global cloud hosting solves for non-operational constraints like data residency laws
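The table/index/key/columnName key scheme described above can be sketched in a few lines. The function and key layout here are my own illustration of the idea (including the optional region prefix for partitioning), not any specific database's actual encoding:

```python
def encode_row(table, primary_key, row, region=""):
    """Flatten one relational row into K-V pairs keyed as
    [region/]table/primary/<pk>/<column>. Keys sort lexicographically,
    so all columns of a row (and all rows of a table) are adjacent,
    and a region prefix makes partitioning by locality a prefix split."""
    prefix = f"{region}/" if region else ""
    return {
        f"{prefix}{table}/primary/{primary_key}/{col}": val
        for col, val in row.items()
    }

kv = encode_row("users", "42", {"name": "Ada", "country": "UK"}, region="eu")
assert kv["eu/users/primary/42/name"] == "Ada"
assert kv["eu/users/primary/42/country"] == "UK"
```

Reading a single column is then a point lookup by key, which is the "CSV index-based access via ID + column name" intuition.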
- Page 5: Starts describing data model
- Page 6: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions."
- Page 19: summary of prior art: layering transactions on replicated stores; Calvin for the idea of preassigning timestamps for transactions; temporal databases allowing reads of past data; clock uncertainty bounds
- Key insight on TrueTime: if the maximum error bounds are known, all we need to guarantee linearizability (complete global ordering) is to "wait out" the error time. Spanner's atomic clock based error time has an upper bound of 7ms (NTP is around 100ms~250ms which would be impractical).
- In-depth historical overview of database trends, as well as a comparison of current NewSQL entrants
- Page 48: covers shared architectural principles
- Page 53: Table 1 shows a comparison of NewSQL systems
An insight I liked: Instead of trying to balance speed vs. depth, balance speed AND depth vs. breadth. (aka don't spend energy on the wrong thing)
An excerpt I liked:
You also have to have some degree of taste. While “taste” can be hard to define precisely, in some sense, whatever the domain — whether it’s music or something else — if you spend the time and you put a lot of thought into appreciating something, teasing apart what makes it great, and building a thoughtful, opinionated perspective, that’s taste.
People I talk to tend to be surprised (and often personally offended) when I tell them that my company doesn't interview junior engineers. It's an understandable reaction; a blanket policy of not hiring juniors implies certain negative things. It can imply arrogance, in that I think I'm so smart and talented that junior candidates simply can't keep up. Or it can imply poor value assessments, in that I don't think juniors are worth training. Or it can imply selfishness, in that I think juniors are worth training, but it's expensive, so I'll let some other sap eat the cost of training them and then poach them.
On one hand, I don't decide hiring strategy at my company, so chill! On the other hand, however, I must admit that I think this is a sensible policy that's worth justifying. To do so, I need to add several qualifiers: we don't interview junior engineers at present because we are a small growth-focused company, and I don't think small, growth-focused companies should hire juniors.
(Note that the main question this post attempts to answer is "is hiring junior engineers beneficial for early stage companies?" It's not attempting to answer the related questions "is hiring junior engineers beneficial for junior engineers?" or "is hiring junior engineers the right thing to do, even if it's not personally beneficial?")
1. Risk and time
Small, growth-focused companies are in a cutthroat race against time. The company is not yet profitable and is burning money in R&D and customer acquisition. This generally means building as fast as humanly possible, prostrating oneself at the feet of customers, and pivoting so much your product roadmap looks like an Etch-A-Sketch. In such an environment, I can't see how hiring a junior engineer and dedicating several months worth of resources to get them to net zero productivity is an effective use of resources.
The problem isn't one of expense, or even one of rate of return. Junior engineers are cheaper to hire than seniors, and proportionally benefit far more from training. Hiring and training juniors has an incredibly high rate of return. Rather, the problem is one of risk, and one of time to return. Juniors are inherently higher variance due to having less of a work history or references that can be evaluated, and young companies are drowning in risk already. Additionally, the lifeline of young companies is measured in months, not in years. For these companies, every day and every dollar is spent in order to get the next, longer lifeline (venture funding, hockey stick growth, etc.) before everyone goes bankrupt and moves back home with their parents. A junior engineer may be ready to be a star player after 3 months of training and learning, but there may no longer be a team to play on at that point.
In poker, short stacked players have no choice but to tighten their ranges and play low variance hands. Early stage companies are in the same boat: regardless of how big a payoff hiring junior engineers may have (or how altruistic it may be), these companies simply lack the resources for it to be an appealing option.
2. Hire fast, fire fast
If the problem is that hiring juniors requires more resources than early stage companies have, a solution could be to reduce the amount of resources needed. The "hire fast and fire fast" strategy is one popular approach. In this strategy, juniors are given the bare minimum resources (maybe an onboarding guide, some code labs, and a scattered list of online resources) needed and told to figure it out -- and if they don't, out they go.
This is similar to how large mega-corps hire juniors: at Facebook, Google, and the like, new hires go through weeks or even months of orientation and onboarding boot camp, and emerge ready to hit the ground running. Our strategy is basically the same as the mega-corps, only without the time, money, and resources, and with an atmosphere less like college welcome week and more like the Hunger Games (unless said mega-corp is Amazon, in which case they really are basically the same).
I have to admit that this seems like the most cost-effective way for small companies to evaluate junior candidates: the level of risk is capped, and the negative impact of hires that don't work out is minimized. The only sticking point is that it's evil, and a reputation for being evil tends to be bad for hiring. While this strategy works out in the company's favor, it is predatory from the perspective of the new hires.
(Some may argue that a predatory relationship is better than no relationship, which is the argument used by companies offering unpaid work or work paid via "exposure." I'm sure the hire/fire fast strategy can be tweaked to be more equitable, but I haven't yet found any implementation that's compelling enough. Bad early career experiences can be crippling, and I'd rather stay away unless I am sure the experience would be great for junior engineers. There are enough companies out there that hire anyone with a pulse and ruthlessly cull the herd.)
3. Growth potential
There's a belief that engineers from large companies tend to be overspecialized, and that engineers from small companies are more independent. I've found this belief to be entirely inconsistent with my experiences. Over the years, I've had the opportunity to work with mid-level engineers from large companies and mid-level engineers from small ones. In general, the ones from large companies felt years ahead in experience: they were the ones who had a large repository of patterns and architectures to draw from, who knew how to effectively work cross-functionally, who could scale databases, and who could do domain modeling effectively.
Even if you're willing to bet on your mentorship capabilities and commit the resources to train up juniors, doing so may not lead to the best long-term growth for juniors. Pattern recognition and analyzing existing systems are two important ways in which junior engineers learn. Engineers witness more successes and more failures at large companies than at small ones; this evidence of what works at scale and what doesn't is powerful insulation against a cargo cult mentality. When it comes to systems, startups generally aren't known for being shining bastions of best practices and stable architectures; things change a lot in the pursuit of product-market fit, and products are often made to be more or less throwaway.
It's not impossible for juniors to thrive at small companies, but the rate of growth at a larger company will likely be much higher. I believe the converse of this article's title is also true: if given the choice, juniors shouldn't join early stage companies. (Of course, the options available to junior engineers are inherently limited. It's probably sensible for prospective juniors to value their long term growth potential lower than their short term ability to pay rent.)
4. When should companies start hiring juniors?
To me, the question of hiring juniors is not "if", but "when." I believe that early stage companies should not hire juniors; at the same time, I believe that companies that never hire juniors are doomed in the long term. At a certain scale, having a steady supply of junior talent is critical to a company's continued survival: there simply are not enough senior engineers to go around. (Companies like Netflix are the exception, where sky high compensation and a compelling engineering brand are used to continue attracting senior talent.) Whenever taking a stance, it's helpful to identify the boundary conditions under which the stance will change.
So where is the inflection point? There isn't a universal answer. Early stage startups generally don't think in the long term, but there are indicators of success (or, more conservatively, "signals of a reasonable expectation of continued existence") that may influence the decision of whether or not to hire juniors. Milestones like hitting 100 people, or reaching a certain valuation target, or raising a certain amount of money may all be such inflection points around which hiring policy should be revisited.
"Why is this a problem? Everything would be fixed if we just did X!"
Everyone knows an idea guy. Idea guys are found anywhere there are complex, nuanced problems without clear resolutions. It could be in the workplace, in casual conversations... anywhere, really. There is no escape.
Idea guys can be deceptively compelling: their suggestions are rarely wrong, and really do illustrate a better alternative to whatever dumpster fire is currently being discussed. Once it's time for actual implementation, however, these ideas fall short. The devil really is in the details, and every edge case, every contingency must be examined. Complex problems are complex because they are nuanced, and idea guys perceive nuance like how dogs perceive color. Anyone with a pulse can idly theorize and come up with their own vision of utopia; idea guys embrace and evangelize their visions, while ignoring the fact that they might have to commit atrocities to enact their utopias, or that their utopias would last for maybe ten minutes before some neglected real world detail causes the whole thing to crash down.
Ideas that do not describe execution are simply dreams. Only when they are fleshed out to cover implementation and maintenance do they become actionable proposals. Good proposals cover three things: 1) where we want to be, 2) how to get there, and 3) how to stay there. Idea guys only concern themselves with the first item. Socialism looks great when examining the theoretical outcomes. Cheaper healthcare costs, free money, more income equality, higher levels of educational attainment? Sign me up! Oh, another socialist society transitioned to dictatorship and started burning books, killing scholars and committing genocide? Curious why that keeps happening, but it's surely not important -- the core idea is sound, it's the implementation that's flawed!
(I understand that criticizing socialism is both cliché and passé. It was just the first example that came to mind.)
Good proposals have enough detail to be thoroughly evaluated, criticized, and possibly implemented. This leads to an unfortunate dilemma where proposals often die by their own merits, and vague ideas are favored instead. Many well thought out proposals die on the examination table: due to the level of detail, the surface area available for criticism expands greatly. Oftentimes, a well thought out proposal really is insufficient: hard problems are hard, after all. Ideas, on the other hand, simply sweep the details under the rug. They can be hard to disagree with because they're not saying anything disagreeable; however, on closer inspection, they're not saying much of anything at all.
Idea guys take pride in their steamrolling of nuance. They revel in being the voice of reason and common sense in a sea of idiots lost in the weeds. If only, if only they were in charge of it all.
Don't be an idea guy.
To me, there are two flavors of trust. "Implicit" trust is trust derived from shared beliefs or opinions. "Explicit" trust, on the other hand, is trust that exists despite a difference in beliefs or opinions.
When people say they trust others in personal or professional settings, they mean that they trust the judgment of the other party. Implicit trust is easy to explain: if both parties hold the same beliefs, it's reasonable to expect that both will arrive at similar conclusions when presented with a problem. In other words, the same input (situation) should lead to the same output (judgment) because both individuals share the same priors (beliefs). This is obvious; if I'm reasonably certain you would make the same decision as I would in a certain situation, why wouldn't I trust you?
Explicit trust is more interesting to me because it implies a level of emotionality and optimism due to incomplete information. The trustor does not know what conclusion the trustee will arrive at, yet still expresses confidence that the decision will be a favorable one. When framed this way, it becomes tempting to view explicit trust as inherently irrational. However, explicit trust seems to be a critical element of strong relationships (both personal and professional).
PS 1: What about trust that arises not from shared opinion, but from a pattern of positive outcomes? Someone's decision making process may be a black box to me, but if they consistently achieve good outcomes, I'd be inclined to trust them as well. This form of trust disregards priors entirely and only focuses on outcomes. I'm not sure if this is a reframing of explicit trust, or yet another flavor.
PS 2: The concept of explicit trust seems to have a lot in common with the concept of faith.
- Horizon charts - nice data visualization technique for line charts with multimodal peaks. Imagine a dataset where most values are between 1-100, but a subset spike to 1000-1100, and a further subset spike to ~10000. Compressing the Y-axis will destroy readability for the smaller ranges. Alternative to logarithmic scales for better glance value.
- Uber - Designing Edge Gateway, Uber’s API Lifecycle Management Platform - beyond being technically interesting, this article gives a sense of the time scales involved in building a massively scalable architecture. I think it also highlights that the viability of a design is a function of current business need.
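The horizon chart idea from the first link above comes down to a simple banding transform: fold each value into fixed-height layers that share a baseline, then overplot the layers with increasing opacity. A minimal sketch of just the banding math (function name and parameters are mine):

```python
def horizon_bands(values, band_height, n_bands):
    """Decompose each value into stacked bands: band i holds how much
    of the value falls in [i*h, (i+1)*h], clipped to [0, h]. Drawing
    all bands on the same baseline with increasing opacity keeps the
    1-100 range readable even when a few points spike to ~10000."""
    bands = []
    for i in range(n_bands):
        lo = i * band_height
        bands.append([min(max(v - lo, 0), band_height) for v in values])
    return bands

# A value of 50 fills half of band 0; 1050 saturates band 0 and
# spills 50 units into band 10 (which covers [1000, 1100]).
b = horizon_bands([50, 1050], band_height=100, n_bands=11)
assert b[0] == [50, 100]
assert b[10] == [0, 50]
```

Because every band is drawn in the same 0-100 vertical space, small values keep full resolution while spikes just show up as darker layers.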
There are some dumpster fires of codebases for which most viewers immediately agree "yeah, this is a lost cause; a full rewrite would be easier and cheaper." Less pathological cases cause more disagreement. What are the variables here?
The core one is logically "perceived difficulty of modifying the codebase." This might mean a lack of tests (no confidence in whether new changes introduce regressions), too many overly sensitive tests (even small changes enact a heavy refactoring burden), overly tight coupling (changes have many downstream effects), overly loose coupling (constituent parts are too hard to understand), limitations from core, unchangeable architectural decisions (e.g. performance problems), etc.
The key word to me is "perceived." Someone intimately familiar with the codebase may not be bothered by a lack of tests, as the tests are essentially in their brain (and are not particularly trustworthy, but what can you do). Similarly, they might know how to navigate the minefield of overly sensitive tests, instinctively knowing what related tests must be updated. Similarly, someone with familiarity with DI, etc. might not view overly tight coupling as an intractable problem, as they know where to start to untangle the mess.
So really, the evaluation is a function of code quality, individual contextual knowledge, and individual expertise.
Individual contextual knowledge discouraging one from giving up on a dumpster fire project is suboptimal. As a first step, this contextual knowledge must be documented and made shareable. Even so, it's better to improve code quality than to use increased knowledge as a buffer for dealing with bad code. This is commonly seen by people getting "Stockholmed" after working on low quality codebases for a while -- they've picked up the coping mechanisms for productively working with the bad code.
Individual expertise discouraging one from giving up, on the other hand, is a good thing. While contextual knowledge represents how easy it is to cope with bad code, expertise represents how feasible it is to improve the bad code in the future.
A fourth variable is patience. Writing new code is funner than treading the minefield of existing code, and churning out new code makes people feel more productive. (Of course, there's no guarantee the rewrite will be any better than the old code, leading to an endless death-and-rebirth cycle.)
It seems like Phase 3 is where transitions to socialist economies break down. Capital flight strikes me as being the root cause.
Without explicit support from owners of capital, attempts at redistribution will be met with capital flight. To counter this, the government can:
- 1. Take this capital by force, either by preventing people from leaving with their wealth, or by simply seizing it. (China is the most immediate example.)
- 2. Preemptively kill all owners of capital (Southeast Asian Communists).
- 3. Attempt appeasement (which has apparently never worked) (1972 Chile).
When capital owners fight tooth and nail against redistribution, the end result is seemingly always either a brutal transition to totalitarianism (if the government is willing to dirty its hands), or collapse of the government (if not).
The only counterexample I can identify is Sweden. The buy-in from the wealthy seems to have arisen from lack of government corruption, a strong education system, a strong sense of camaraderie due to monoculturalism, and manageability due to the relatively small size of the country. The willingness of Swedes to cede many "freedoms" to the government is similar to Taiwan and its healthcare system, which requires a similar level of government involvement.
I've long considered my ability and drive to seize initiative as one of my greatest strengths. Despite not being a "natural manager," I am willing to grab the reins in a directionless project, or take responsibility for an undermanaged area.
This is one of the strengths I've consciously decided to give up while balancing childcare, personal and mental health, and work. I've decided that, at least for the short term, I will recede into the background. If I disagree with an opinion someone holds strongly, I will stay quiet.
My decision is an experiment as much as it is a coping mechanism. If work outcomes remain the same (or improve), then this is definitive evidence that I hold my opinions too strongly and crowd others out of decision making. The flip side of this strength is a glaring weakness: the mindset of always worrying about incorrect decisions being future landmines can cause conversations to be high-friction and emotionally charged.
Timely SPAC paper on HN front page today. TL;DR:
- One overlooked downside of SPACs is that how dilutive the merger ends up being is unclear until the merger itself (because SPAC investors can choose whether or not to exchange their holdings). One consequence is that the claimed advantage of “price certainty” of SPACs vs. IPOs is overstated. Part 3 gets into this more (p. 18) and is the most interesting section IMO.
- Share prices tend to drop post-merger; people who exit their positions prior to the merger have better returns on average (resembles options in a way)
- On average, the larger the SPAC sponsor (e.g. if it’s a large private equity fund), the better the returns.
- SPACs being less regulated is a real advantage they have over IPOs as of now. NOTABLY, going public via SPAC doesn’t require you to file an S-1, so you’re a black box to retail investors and are essentially trading off of name brand alone (of both the company and the SPAC sponsor).
- SPAC sponsors take most of the returns, whereas SPAC shareholders end up bearing most of the risk/cost.
- TL;DR of the TL;DR: SPAC sponsors and SPAC IPO investors make out the best. SPAC holders who ride the wave and exit pre-merger make out ok. SPAC holders who hold post-merger are bagholders.
- Perfect really is the enemy of good. I'm a tinkerer, which leads me to stumble upon local maxima of "elegant code." But elegant code isn't the end goal -- product (and value) are.
- Spiking and dictatorially making decisions on areas of high uncertainty isn't actually that bad... as long as you remain open to change and are not afraid to throw code away.
- Running meetings and scoping features democratically is suboptimal. Parallelize work aggressively, grouping up as needed rather than as a default.
- Decisions should be made before meetings, not during meetings.
- Unit tests are overrated on the frontend. Integration tests are underrated.
- Writing suboptimal (or straight up incorrect) code is okay if the subject is easy to change. Being able to identify what's easy to change is the mark of a good engineer.
- Set hard date requirements instead of hard feature requirements. This will act as a forcing function to make you scope correctly. Even if you don't scope correctly, at least you have stuff done.
- When working with unfamiliar tech, just start writing code. Read documentation like novels (front-to-back), not like dictionaries (random access).
- Set dates first instead of features first. With features first, all dates end up being padded. Setting dates first forces you to really scope correctly, identify the MVP, and get to a state where you can start iterating. "Plan for 6 weeks and schedule for 3 weeks; plan to miss but aim to succeed."
- For the MVP, plan out acceptance tests, and adhere to them strictly. Use these as accountability tools for an objective measurement of progress. If an acceptance test is not passing by a deadline, really hone in on it.
- Learning how to correctly handle a big launch is hard. "This is why I don't recommend juniors join startups." At larger companies, you can fail without risking the existence of the company, and you can observe good practices. Because there are more large launches, you can evaluate them against each other for quality.
I never thought history was interesting. In high school, I treated AP US History as simply another bundle of credits to jumpstart college.
The NYT published How Neil Sheehan Got the Pentagon Papers today, and it is an engrossing read. This got me reading more background about the papers, the Vietnam war, and the US' China containment policy.
- Much of the US' continued presence in the Middle East is due to this policy.
- I've always wondered why the US maintained such close ties to the Philippines. This explains it, at least in part.
- Most interestingly, much of the basis of the Vietnam War was in the name of this policy.
- Nixon's 1972 China visit marked a priority shift from containing China to containing Russia.
- Much of this comes down to the US' hatred of communism. American interventionism itself can largely be explained by anti-communism. A secondary rationale was the intent to not repeat the mistake of appeasement towards Nazi Germany pre-WWII.
- One of Obama's big geopolitical moves in 2011 was a pivot of military resources out of the Middle East and into the APAC region.
- On the topic of Obama, the drone strike program was viewed as a black mark on his legacy. However, it's now known that the program was masterminded by the Bush administration, and during transition Bush pushed strongly for Obama to maintain this policy position. Similarly, Trump's trade war with China is seen as a black mark on his (fairly laughable) legacy; I wouldn't be surprised if this general policy position was carried over from the Obama administration.
Good discussion about the "ad-exchange DSP bubble," of which Rocket Fuel (my first company) was a part. I'm sure glad to not be a part of that any more.