In my eyes, "monolith vs. composition" is one of the modern-day holy wars.
Monoliths vs. microservices. Rails vs. Express. K8s vs. a bunch of Hashicorp offerings. Heroku vs. cloud giants. React vs. Angular. Enterprise tools that do everything vs. focused tools. Even OOP vs. FP to a degree.
Cynical graybeards claim that fashion is cyclical, and that today's trend du jour is simply a reaction against overcommitments to yesterday's fad. But I'm wary of overfitting -- there are really only three or four cycles to look at since the advent of "modern programming."
It seems like compositional approaches are less effective at causing large paradigm shifts. Without an understanding of what is "good" (in the sense of having "good taste" -- see "Building Products at Stripe" which I linked previously), one can't effectively assemble a well-functioning system out of compositional parts. On the other hand, monoliths built by tasteful people can immediately demonstrate such a high degree of value that the accompanying loss of flexibility seems like an obvious tradeoff to make.
My current unfounded hypothesis is that in the early days of a domain's existence, compositional approaches take the lead due to the problem space being smaller. No one really knows what they're doing, and small projects are easier to understand and deliver than large ones. Some time later, a tastemaker may assemble (or reimplement) existing tools into one cohesive, beautiful system whose value speaks for itself. Given enough time, however, the compositional projects start catching up again. Effective monoliths aren't just tools, they're also blueprints of how to do things effectively. Once the blueprint is available, it is much easier to assemble one's own collection of compositional tools that has the power of the monolith without the rigid boundaries that often accompany it. (Not going to talk about self-hosted vs. managed here, which to me is ultimately orthogonal.)
(TODO: think of some real world examples that may fit this.)
During the advent of SQL, there generally wasn't much need for scale. Databases were consistent (everything lived on one node) and available. Sharding was hard, but it was a comparatively rare need compared to today (peak throughput was 10-100 transactions/sec; peak storage was a handful of terabytes). Developers disliked sharding, as it was a leaky abstraction at times: "eBay’s middleware in 2002, for example, required their developers to implement all join operations [when partitioned] in application-level code" (Pavlo & Aslett, 2016). Additionally, sharding makes indexing hard, as indexes might not be partitionable the same way as the data.
In the "web scale" / NoSQL era, people found that they needed massive scalability, support for many concurrent users, and to be online at all times. High availability was chosen over consistency, and NoSQL DBs generally chose to rely on eventual consistency.
"Transactions don't scale": two-phase commit (change + verify all nodes) performance was viewed as unacceptable, so replicas were allowed to diverge. Writes may not persist after merge conflicts; reads may not return the latest state; state at any given read may not have yet converged; but given enough time after any inputs, all replicas will return the same results for a read.
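A toy last-write-wins (LWW) register is one common way eventually consistent stores resolve the divergence described above. The class and scenario here are illustrative, not any particular database's implementation:

```python
# Each write carries a timestamp; merging keeps the newest value, so a
# concurrent write can be silently discarded ("writes may not persist
# after merge conflicts"), but all replicas eventually converge.

class LWWRegister:
    def __init__(self):
        self.value = None
        self.ts = 0

    def write(self, value, ts):
        if ts > self.ts:
            self.value, self.ts = value, ts

    def merge(self, other):
        # Merging in either order yields the same final state.
        self.write(other.value, other.ts)

a, b = LWWRegister(), LWWRegister()
a.write("cart=[book]", ts=1)   # write lands on replica a
b.write("cart=[pen]", ts=2)    # concurrent write lands on replica b
a.merge(b); b.merge(a)
# Replicas converge on "cart=[pen]"; the ts=1 write is lost.
```

Note that the ts=1 write is acknowledged to its client and then disappears after the merge, which is exactly the behavior developers ended up having to code around.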
Schemaless nature made sharding easy (see Mongo auto-sharding).
The relational data model was seen to often be unnecessary overhead, with alternatives like k-v, graphs, etc. being seen as better fits at times. However, due to the lack of transactions, developers switched from wasting time implementing joins at the application level to wasting time handling inconsistent data.
In today's cloud-native / NewSQL era, many new entrants purport to offer strong consistency, enough availability, and strong partition tolerance, with the goal of combining ACID-guaranteed transactions with the scalable performance of NoSQL. The rationale is that even with a perfectly available DB, you will never get "true" real world availability given things like network errors, so it's better to optimize for the other two. Lack of transactional guarantees was often the dealbreaker for NoSQL.
New architectural patterns, more efficient storage engines, better sync strategies for nodes (e.g. atomic clocks), better abstraction of sharding middleware
Moving from a structured data model -- appending rows to tables with IDs as primary keys + index (which is hard to distribute) -- to tables as a K-V store, where each row has a non-ID column identified as a primary key (link). Keys are table/index/key/columnName and the value is the column value. (Imagine CSV index-based access via ID + column name.) (Embedding extra data in the key, e.g. a country code, allows easy partitioning by region.)
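A minimal sketch of the key layout described above; the table, index, and column names are all made up for illustration:

```python
# Flatten one relational row into (key, value) pairs for a K-V store,
# using the table/index/key/columnName layout described above.

def encode_row(table, index, primary_key, row):
    pairs = {}
    for column_name, value in row.items():
        key = f"{table}/{index}/{primary_key}/{column_name}"
        pairs[key] = value
    return pairs

# Example: a users table keyed on the (non-ID) email column.
row = {"email": "a@example.com", "name": "Ada", "country": "GB"}
kv = encode_row("users", "primary", row["email"], row)
# kv["users/primary/a@example.com/name"] == "Ada"
```

Because keys for one row share a prefix, a range scan over `users/primary/a@example.com/` reconstructs the row; prefixing the key with something like a country code instead would make region-based partitioning a simple key-range split.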
Partitioned indexes instead of replicated indexes
Improved overall technology; commoditization of compute and memory
Memory became cheaper vs. storage; feasible to keep entire databases in-memory. Optimizations like moving cold tuples to disk and replacing in-memory record with a pointer allow support for "larger-than-memory" loads.
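A toy sketch of the "larger-than-memory" trick mentioned above: cold tuples are evicted to a disk store and the in-memory slot is replaced with a pointer. The class names and the dict standing in for disk are illustrative assumptions, not any real engine's design:

```python
class Pointer:
    """Marker left in memory after a tuple is evicted to disk."""
    def __init__(self, disk_key):
        self.disk_key = disk_key

class TupleStore:
    def __init__(self):
        self.memory = {}   # hot tuples live here
        self.disk = {}     # stand-in for an on-disk cold store

    def put(self, key, tuple_):
        self.memory[key] = tuple_

    def evict(self, key):
        # Move the cold tuple to disk, leaving a pointer behind.
        self.disk[key] = self.memory[key]
        self.memory[key] = Pointer(key)

    def get(self, key):
        record = self.memory[key]
        if isinstance(record, Pointer):   # cold: fault it back in
            record = self.disk[record.disk_key]
            self.memory[key] = record
        return record

store = TupleStore()
store.put("k1", ("alice", 30))
store.evict("k1")            # "k1" now lives on disk
value = store.get("k1")      # transparently faulted back into memory
```

The point is that readers never see the pointer: access stays uniform while the working set in memory stays small.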
Aside 1: OLAP data warehouses meet some of the criteria sought by NewSQL. However, they are read-only (barring data ingestion, etc.), face much looser performance demands (responses on the order of minutes can be acceptable), and are optimized for full table scans and large joins due to analytical requirements (which is overhead for most application needs).
Aside 2: there were previous attempts to optimize existing RDBMS by swapping out the storage engine; replacements of row-based storage with column-based has found a niche with OLAP, but nowadays most NewSQL attempts aim at ground-up rewrites and brand new architectures.
Besides operational benefits, easier partitioning + global cloud hosting solves for non-operational constraints like data residency laws
Page 5: Starts describing data model
Page 6: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions."
Page 19: summary of prior art: layering transactions on replicated stores; Calvin for the idea of preassigning timestamps for transactions; temporal databases allowing reads of past data; clock uncertainty bounds
Key insight on TrueTime: if the maximum error bounds are known, all we need to guarantee linearizability (complete global ordering) is to "wait out" the error time. Spanner's atomic-clock-based error time has an upper bound of 7ms (NTP's is around 100-250ms, which would be impractical).
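The "wait out the error" idea can be sketched in a few lines. This is a simplified illustration of Spanner-style commit wait, not Spanner's actual API; the epsilon value and function names are assumptions for the sketch:

```python
import time

EPSILON_MS = 7  # assumed clock error bound, per Spanner's ~7ms figure

def now_interval():
    """Return (earliest, latest) possible true time, like TrueTime's TT.now()."""
    t = time.time() * 1000  # wall clock in ms
    return (t - EPSILON_MS, t + EPSILON_MS)

def commit(apply_fn):
    # Pick a commit timestamp at the upper bound of the uncertainty window.
    _, commit_ts = now_interval()
    # Commit wait: block until commit_ts is guaranteed to be in the past
    # on every node (i.e., even the earliest possible time has passed it).
    while now_interval()[0] < commit_ts:
        time.sleep(EPSILON_MS / 1000)
    apply_fn(commit_ts)
    return commit_ts
```

The wait costs roughly 2 * epsilon per transaction, which is why the size of the error bound matters so much: a 7ms wait is tolerable, a 250ms wait is not.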
In-depth historical overview of database trends, as well as a comparison of current NewSQL entrants
Page 48: covers shared architectural principles
Page 53: Table 1 shows a comparison of NewSQL systems
Historical overview of database trends; fairly light reading
Fun, accessible breakdown of how Spanner works (and how CockroachDB compensates for not having "magic clocks")
Linked to section describing architecture / underlying data storage
An insight I liked: Instead of trying to balance speed vs. depth, balance speed AND depth vs. breadth. (aka don't spend energy on the wrong thing)
An excerpt I liked:
You also have to have some degree of taste. While “taste” can be hard to define precisely, in some sense, whatever the domain — whether it’s music or something else — if you spend the time and you put a lot of thought into appreciating something, teasing apart what makes it great, and building a thoughtful, opinionated perspective, that’s taste.
People I talk to tend to be surprised (and often personally offended) when I tell them that my company doesn't interview junior engineers. It's an understandable reaction; a blanket policy of not hiring juniors implies certain negative things. It can imply arrogance, in that I think I'm so smart and talented that junior candidates simply can't keep up. Or it can imply poor value assessments, in that I don't think juniors are worth training. Or it can imply selfishness, in that I think juniors are worth training, but it's expensive, so I'll let some other sap eat the cost of training them and then poach them.
On one hand, I don't decide hiring strategy at my company, so chill! On the other hand, however, I must admit that I think this is a sensible policy that's worth justifying. To do so, I need to add several qualifiers: we don't interview junior engineers at present because we are a small growth-focused company, and I don't think small, growth-focused companies should hire juniors.
(Note that the main question this post attempts to answer is "is hiring junior engineers beneficial for early stage companies?" It's not attempting to answer the related questions "is hiring junior engineers beneficial for junior engineers?" or "is hiring junior engineers the right thing to do, even if it's not personally beneficial?")
1. Risk and time
Small, growth-focused companies are in a cutthroat race against time. The company is not yet profitable and is burning money on R&D and customer acquisition. This generally means building as fast as humanly possible, prostrating oneself at the feet of customers, and pivoting so much your product roadmap looks like an Etch-A-Sketch. In such an environment, I can't see how hiring a junior engineer and dedicating several months' worth of resources to get them to net zero productivity is an effective use of resources.
The problem isn't one of expense, or even one of rate of return. Junior engineers are cheaper to hire than seniors, and proportionally benefit far more from training. Hiring and training juniors has an incredibly high rate of return. Rather, the problem is one of risk, and one of time to return. Juniors are inherently higher variance due to having less of a work history or references that can be evaluated, and young companies are drowning in risk already. Additionally, the lifeline of young companies is measured in months, not in years. For these companies, every day and every dollar is spent in order to get the next, longer lifeline (venture funding, hockey stick growth, etc.) before everyone goes bankrupt and moves back home with their parents. A junior engineer may be ready to be a star player after 3 months of training and learning, but there may no longer be a team to play on at that point.
In poker, short-stacked players have no choice but to tighten their ranges and play low-variance hands. Early stage companies are in the same boat: regardless of how big a payoff hiring junior engineers may have (or how altruistic it may be), these companies simply lack the resources for it to be an appealing option.
2. Hire fast, fire fast
If the problem is that hiring juniors requires more resources than early stage companies have, a solution could be to reduce the amount of resources needed. The "hire fast and fire fast" strategy is one popular approach. In this strategy, juniors are given the bare minimum resources (maybe an onboarding guide, some code labs, and a scattered list of online resources) needed and told to figure it out -- and if they don't, out they go.
This is similar to how large mega-corps hire juniors: at Facebook, Google, and the like, new hires go through weeks or even months of orientation and onboarding boot camp, and emerge ready to hit the ground running. Our strategy is basically the same as the mega-corps, only without the time, money, and resources, and with an atmosphere less like college welcome week and more like the Hunger Games (unless said mega-corp is Amazon, in which case they really are basically the same).
I have to admit that this seems like the most cost-effective way for small companies to evaluate junior candidates: the level of risk is capped, and the negative impact of hires that don't work out is minimized. The only sticking point is that it's evil, and a reputation for being evil tends to be bad for hiring. While this strategy works out in the company's favor, it is predatory from the perspective of the new hires.
(Some may argue that a predatory relationship is better than no relationship, which is the argument used by companies offering unpaid work or work paid via "exposure." I'm sure the hire/fire fast strategy can be tweaked to be more equitable, but I haven't yet found any implementation that's compelling enough. Bad early career experiences can be crippling, and I'd rather stay away unless I am sure the experience would be great for junior engineers. There are enough companies out there that hire anyone with a pulse and ruthlessly cull the herd.)
3. Growth potential
There's a belief that engineers from large companies tend to be overspecialized, and that engineers from small companies are more independent. I've found this belief to be entirely inconsistent with my experiences. Over the years, I've had the opportunity to work with mid-level engineers from large companies and mid-level engineers from small ones. In general, the ones from large companies felt years ahead in experience: they were the ones who had a large repository of patterns and architectures to draw from, who knew how to effectively work cross-functionally, who could scale databases, and who could do domain modeling effectively.
Even if you're willing to bet on your mentorship capabilities and commit the resources to train up juniors, doing so may not lead to the best long-term growth for juniors. Pattern recognition and analyzing existing systems are two important ways in which junior engineers learn. Engineers witness more successes and more failures at large companies than at small ones; this evidence of what works at scale and what doesn't is powerful insulation against a cargo cult mentality. When it comes to systems, startups generally aren't known for being shining bastions of best practices and stable architectures; things change a lot in the pursuit of product-market fit, and products are often made to be more or less throwaway.
It's not impossible for juniors to thrive at small companies, but the rate of growth at a larger company will likely be much higher. I believe the converse of this article's title is also true: if given the choice, juniors shouldn't join early stage companies. (Of course, the options available to junior engineers are inherently limited. It's probably sensible for prospective juniors to value their long term growth potential lower than their short term ability to pay rent.)
4. When should companies start hiring juniors?
To me, the question of hiring juniors is not "if," but "when." I believe that early stage companies should not hire juniors; at the same time, I believe that companies that never hire juniors are doomed in the long term. At a certain scale, having a steady supply of junior talent is critical to a company's continued survival: there simply are not enough senior engineers to go around. (Companies like Netflix are the exception, where sky-high compensation and a compelling engineering brand are used to continue attracting senior talent.) Whenever taking a stance, it's helpful to identify the boundary conditions under which the stance would change.
So where is the inflection point? There isn't a universal answer. Early stage startups generally don't think in the long term, but there are indicators of success (or, more conservatively, "signals of a reasonable expectation of continued existence") that may influence the decision of whether or not to hire juniors. Milestones like hitting 100 people, or reaching a certain valuation target, or raising a certain amount of money may all be such inflection points around which hiring policy should be revisited.
"Why is this a problem? Everything would be fixed if we just did X!"
Everyone knows an idea guy. Idea guys are found anywhere there are complex, nuanced problems without clear resolutions. It could be in the workplace, in casual conversations... anywhere, really. There is no escape.
Idea guys can be deceptively compelling: their suggestions are rarely wrong, and really do illustrate a better alternative to whatever dumpster fire is currently being discussed. Once it's time for actual implementation, however, these ideas fall short. The devil really is in the details, and every edge case, every contingency must be examined. Complex problems are complex because they are nuanced, and idea guys perceive nuance like dogs perceive color. Anyone with a pulse can idly theorize and come up with their own vision of utopia; idea guys embrace and evangelize their visions, while ignoring the fact that they might have to commit atrocities to enact their utopias, or that their utopias would last for maybe ten minutes before some neglected real world detail causes the whole thing to crash down.
Ideas that do not describe execution are simply dreams. Only when they are fleshed out to cover implementation and maintenance do they become actionable proposals. Good proposals cover three things: 1) where we want to be, 2) how to get there, and 3) how to stay there. Idea guys only concern themselves with the first item. Socialism looks great when examining the theoretical outcomes. Cheaper healthcare costs, free money, more income equality, higher levels of educational attainment? Sign me up! Oh, another socialist society transitioned to dictatorship and started burning books, killing scholars, and committing genocide? Curious why that keeps happening, but it's surely not important -- the core idea is sound, it's the implementation that's flawed!
(I understand that criticizing socialism is both cliché and passé. It was just the first example that came to mind.)
Good proposals have enough detail to be thoroughly evaluated, criticized, and possibly implemented. This leads to an unfortunate dilemma where proposals often die by their own merits, and vague ideas are favored instead. Many well thought out proposals die on the examination table: due to the level of detail, the surface area available for criticism expands greatly. Oftentimes, a well thought out proposal really is insufficient: hard problems are hard, after all. Ideas, on the other hand, simply sweep the details under the rug. They can be hard to disagree with because they're not saying anything disagreeable; however, on closer inspection, they're not saying much of anything at all.
Idea guys take pride in their steamrolling of nuance. They revel in being the voice of reason and common sense in a sea of idiots lost in the weeds. If only, if only they were in charge of it all.
Don't be an idea guy.
To me, there are two flavors of trust. "Implicit" trust is trust derived from shared beliefs or opinions. "Explicit" trust, on the other hand, is trust that exists despite a difference in beliefs or opinions.
When people say they trust others in personal or professional settings, they mean that they trust the judgment of the other party. Implicit trust is easy to explain: if both parties hold the same beliefs, it's reasonable to expect that both will arrive at similar conclusions when presented with a problem. In other words, the same input (situation) should lead to the same output (judgment) because both individuals share the same priors (beliefs). This is obvious; if I'm reasonably certain you would make the same decision as I would in a certain situation, why wouldn't I trust you?
Explicit trust is more interesting to me because it implies a level of emotionality and optimism due to incomplete information. The trustor does not know what conclusion the trustee will arrive at, yet still expresses confidence that the decision will be a favorable one. When framed this way, it becomes tempting to view explicit trust as inherently irrational. However, explicit trust seems to be a critical element of strong relationships (both personal and professional).
PS 1: What about trust that arises not from shared opinion, but from a pattern of positive outcomes? Someone's decision making process may be a black box to me, but if they consistently achieve good outcomes, I'd be inclined to trust them as well. This form of trust disregards priors entirely and only focuses on outcomes. I'm not sure if this is a reframing of explicit trust, or yet another flavor.
PS 2: The concept of explicit trust seems to have a lot in common with the concept of faith.
Horizon charts - a nice data visualization technique for line charts with multimodal peaks. Imagine a dataset where most values are between 1 and 100, but a subset spikes to 1000-1100, and a further subset spikes to ~10000. Compressing the Y-axis would destroy readability for the smaller ranges. An alternative to logarithmic scales for better glance value.
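The core transform behind a horizon chart can be sketched briefly: the y-range is folded into fixed-height bands that get overlaid (usually with increasing opacity), so large spikes stack instead of flattening the small values. The band height and example data here are made up for illustration:

```python
BAND = 100  # assumed band height for this sketch

def to_bands(values, band=BAND, num_bands=3):
    """Return per-band fill heights (0..band) for each value.

    A value of 250 with band=100 becomes three stacked layers:
    two full bands and one filled halfway.
    """
    layers = []
    for v in values:
        fills = [max(0, min(band, v - i * band)) for i in range(num_bands)]
        layers.append(fills)
    return layers

# to_bands([50, 250]) -> [[50, 0, 0], [100, 100, 50]]
```

A renderer would then draw each band at the same baseline with deepening color, which is what preserves readability in the 1-100 range while still showing the ~10000 spikes.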
Uber - Designing Edge Gateway, Uber’s API Lifecycle Management Platform - beyond being technically interesting, this article gives a sense of the time scales involved in building a massively scalable architecture. I think it also highlights that the viability of a design is a function of current business need.
Cindy Sridharan - Testing in Production, the safe way - Cindy's articles are always a joy to read, jam-packed with information, and coupled with easily understandable examples.
There are some dumpster fires of codebases for which most viewers immediately agree "yeah, this is a lost cause; a full rewrite would be easier and cheaper." Less pathological cases cause more disagreement. What are the variables here?
The core one is logically "perceived difficulty of modifying the codebase." This might mean a lack of tests (no confidence in whether new changes introduce regressions), too many overly sensitive tests (even small changes enact a heavy refactoring burden), overly tight coupling (changes have many downstream effects), overly loose coupling (constituent parts are too hard to understand), limitations from core, unchangeable architectural decisions (e.g. performance problems), etc.
The key word to me is "perceived." Someone intimately familiar with the codebase may not be bothered by a lack of tests, as the tests are essentially in their brain (and are not particularly trustworthy, but what can you do). Similarly, they might know how to navigate the minefield of overly sensitive tests, instinctively knowing which related tests must be updated. And someone familiar with DI and the like might not view overly tight coupling as an intractable problem, as they know where to start untangling the mess.
So really, the evaluation is a function of code quality, individual contextual knowledge, and individual expertise.
Individual contextual knowledge discouraging one from giving up on a dumpster fire project is suboptimal. As a first step, this contextual knowledge must be documented and made shareable. Even so, it's better to improve code quality than to use increased knowledge as a buffer for dealing with bad code. This is commonly seen in people getting "Stockholmed" after working on low-quality codebases for a while -- they've picked up the coping mechanisms for productively working with the bad code.
Individual expertise discouraging one from giving up, on the other hand, is a good thing. While contextual knowledge represents how easy it is to cope with bad code, expertise represents how feasible it is to improve the bad code in the future.
A fourth variable is patience. Writing new code is more fun than treading the minefield of existing code, and churning out new code makes people feel more productive. (Of course, there's no guarantee the rewrite will be any better than the old code, leading to an endless death-and-rebirth cycle.)
It seems like Phase 3 is where transitions to socialist economies break down. Capital flight strikes me as being the root cause.
Without explicit support from owners of capital, attempts at redistribution will be met with capital flight. To counter this, the government can:
Take this capital by force, either by preventing people from leaving with their wealth, or by simply seizing it. (China is the most immediate example.)
Preemptively kill all owners of capital (Southeast Asian Communists).
Attempt appeasement (which has apparently never worked) (1972 Chile).
When capital owners fight tooth and nail against redistribution, the end result is seemingly always either a brutal transition to totalitarianism (if the government is willing to dirty its hands), or collapse of the government (if not).
The only counterexample I can identify is Sweden. The buy-in from the wealthy seems to have arisen from lack of government corruption, a strong education system, a strong sense of camaraderie due to monoculturalism, and manageability due to the relatively small size of the country. The willingness of Swedes to cede many "freedoms" to the government is similar to Taiwan and its healthcare system, which requires a similar level of government involvement.
I've long considered my ability and drive to seize initiative as one of my greatest strengths. Despite not being a "natural manager," I am willing to grab the reins in a directionless project, or take responsibility for an undermanaged area.
This is one of the strengths I've consciously decided to give up while balancing childcare, personal and mental health, and work. I've decided that, at least for the short term, I will recede into the background. If I disagree with an opinion someone holds strongly, I will stay quiet.
My decision is an experiment as much as it is a coping mechanism. If work outcomes remain the same (or improve), then this is definitive evidence that I hold my opinions too strongly and crowd others out of decision making. The flip side of this strength is a glaring weakness: the mindset of always worrying about incorrect decisions being future landmines can cause conversations to be high-friction and emotionally charged.
Timely SPAC paper on HN front page today. TL;DR:
One overlooked downside of SPACs is that how dilutive the merger ends up being is unclear until the merger itself (because SPAC investors can choose whether or not to exchange their holdings). One consequence is that the claimed "price certainty" advantage of SPACs vs. IPOs is overstated. Part 3 gets into this more (p. 18) and is the most interesting section IMO.
Share prices tend to drop post-merger; people who exit their positions prior to the merger see better returns on average (resembles options in a way)
On average, the larger the SPAC sponsor (e.g. if it’s a large private equity fund), the better the returns.
SPACs being less regulated is a real advantage they have over IPOs as of now. NOTABLY, going public via SPAC doesn’t require you to file an S-1, so you’re a black box to retail investors and are essentially trading off of name brand alone (of both the company and the SPAC sponsor).
SPAC sponsors take most of the returns, whereas SPAC shareholders end up bearing most of the risk/cost.
TL;DR of the TL;DR: SPAC sponsors and SPAC IPO investors make out the best. SPAC holders who ride the wave and exit pre-merger make out ok. SPAC holders who hold post-merger are bagholders.
Things I learned as an engineer this year (somewhat in the vein of Chris Kiehl's popular post):
Perfect really is the enemy of good. I'm a tinkerer, which leads me to stumble upon local maxima of "elegant code." But elegant code isn't the end goal -- product (and value) are.
Spiking and dictatorially making decisions on areas of high uncertainty isn't actually that bad... as long as you remain open to change and are not afraid to throw code away.
Running meetings and scoping features democratically is suboptimal. Parallelize work aggressively, grouping up as needed rather than as a default.
Decisions should be made before meetings, not during meetings.
Unit tests are overrated on the frontend. Integration tests are underrated.
Writing suboptimal (or straight up incorrect) code is okay if the subject is easy to change. Being able to identify what's easy to change is the mark of a good engineer.
Set hard date requirements instead of hard feature requirements. This will act as a forcing function to make you scope correctly. Even if you don't scope correctly, at least you have stuff done.
When working with unfamiliar tech, just start writing code. Read documentation like novels (front-to-back), not like dictionaries (random access).
Set dates first instead of features first. With features first, all dates end up being padded. Setting dates first forces you to really scope correctly, identify the MVP, and get to a state where you can start iterating. "Plan for 6 weeks and schedule for 3 weeks; plan to miss but aim to succeed."
For the MVP, plan out acceptance tests, and adhere to them strictly. Use these as accountability tools for an objective measurement of progress. If an acceptance test is not passing by a deadline, really hone in on it.
Learning how to correctly handle a big launch is hard. "This is why I don't recommend juniors join startups." At larger companies, you can fail without risking the existence of the company, and you can observe good practices. Because there are more large launches, you can evaluate them against each other for quality.
I never thought history was interesting. In high school, I treated AP US History as simply another bundle of credits to jumpstart college.
The NYT published How Neil Sheehan Got the Pentagon Papers today, and it is an engrossing read. This got me reading more background about the papers, the Vietnam war, and the US' China containment policy.
Much of the US' continued presence in the Middle East is due to this policy.
I've always wondered why the US maintained such close ties to the Philippines. This explains it, at least in part.
Most interestingly, much of the basis of the Vietnam War was in the name of this policy.
Nixon's 1972 China visit marked a priority shift from containing China to containing Russia.
Much of this comes down to the US' hatred of communism. American interventionism itself can largely be explained by anti-communism. A secondary rationale was the intent to not repeat the mistake of appeasement towards Nazi Germany pre-WWII.
The X Article gives some background into the US' ingrown anti-communism.
One of Obama's big geopolitical moves in 2011 was a pivot of military resources out of the Middle East and into the APAC region.
On the topic of Obama, the drone strike program was viewed as a black mark on his legacy. However, it's now known that the program was masterminded by the Bush administration, and during transition Bush pushed strongly for Obama to maintain this policy position. Similarly, Trump's trade war with China is seen as a black mark on his (fairly laughable) legacy; I wouldn't be surprised if this general policy position was carried over from the Obama administration.
Good discussion about the "ad-exchange DSP bubble," of which Rocket Fuel (my first company) was a part. I'm sure glad to not be a part of that any more.
(New year, new
This is a great post about language API quality. It's quite opinionated, but I thought the arguments were sound. The specific pattern under attack is the multiple-return pattern found in Go (and also commonly encountered in Node's error-first callbacks). I agree with the author's overall point that "misuse-resistant design" is a standard to strive for, and Rust definitely does a great job with this (at the cost of high up-front complexity, notably with the borrow checker). In contrast, the author claims that Go presents an exceedingly simple (and hence attractive) interface to developers; however, this is just a facade, with the true complexity arising as various gotchas.