Learning Across Domains

Alyssia J
Alyssia J
¡20 minutes

Most writing about learning across domains (or learning in general) starts with research citations or success stories. This is the opposite: one person's internal experience, documented as honestly as introspection allows.*

I think there's a gap in how we talk about interdisciplinary learning. We focus on time management, deliberate practice, finding the right books, role models and mentors: all useful, but none of them explain the cognitive part. What's actually happening in your head when you pursue excellence in multiple domains and start recognizing that a problem in Domain A shares structure with Domain B? When a new field stops feeling foreign and starts feeling like a dialect you almost already speak?

You might also be asking: why bother learning across domains at all? Beyond the intrinsic satisfaction of watching seemingly unrelated ideas suddenly click into place—which honestly, never gets old and is my main reason for doing so!—there's a practical reason: some of the most significant breakthroughs come from cross-pollination rather than pure specialization. E.g, CRISPR gene editing from studying bacterial immune systems, DNA structure discovery by applying X-ray crystallography from physics to biology, recognizing tumors as an immune system problem rather than purely a cell division problem, realizing gaming GPUs could massively parallelize neural network training.

A person who recognizes that Domain A's solved problem is Domain B's open question has a structural advantage: they're translating and adapting rather than inventing from scratch. I've personally found this to be useful in learning about and making progress in new fields.

First, On What Breadth Feels Like

Moving between domains doesn't feel like constantly switching between entirely different things. It feels more like exploring different expressions of the same underlying patterns.

When people look at diverse work and see separate buckets—ML research, biological systems, competitive analytics, athletic training, economics and finance, artistic and musical pursuits—I'm seeing variations on the same fundamental questions, for example:

How do you design robustness under adversarial conditions?

  • ML: adversarial training and robust optimization
  • Biology: immune system redundancy and adaptive responses
  • Competition: strategic flexibility in sports or trading
  • Security: defense in depth and fail-safes
  • Performance: maintaining technique under pressure, improvisation when things go wrong

How do you measure genuine capability versus surface-level pattern matching?

  • AI: distinguishing memorization from generalization in evals
  • Hiring: interview performance vs. actual job competence
  • Education: test-taking skills vs. deep understanding
  • Athletics: gym lifts vs. sport-specific performance
  • Music: technical proficiency vs. musicality and expression
  • Art: copying techniques vs. developing original voice

How do you optimize for long-term robustness versus short-term performance?

  • ML: preventing overfitting while maximizing training performance
  • Biology: evolution's long game vs. individual survival urgency
  • Athletics: periodization and recovery vs. daily intensity
  • Business: sustainable growth vs. quarterly earnings pressure
  • Music: fundamental technique building vs. learning pieces quickly
  • Art: developing craft vs. producing for shows and sales

How do you identify when a system is gaming a metric versus actually learning?

  • ML: Goodhart's Law in reward functions and benchmark saturation
  • Education: teaching to the test vs. conceptual understanding
  • Organizations: hitting KPIs while undermining actual goals
  • Fitness: tracking vanity metrics vs. building real capacity
  • Music: playing for competition judges vs. musical development
  • Art: creating for likes and sales vs. genuine artistic exploration

These are examples of questions that show up everywhere, at different levels of abstraction. Once you learn to answer them in one domain, IMO, you've built transferable machinery. The surface details differ enormously (vocabulary, tools, community norms) but the deep structure rhymes enough that the cognitive overhead of "switching domains" is much lower than it appears from the outside.

This doesn't mean the domains are identical or that expertise doesn't matter. But they're all running variations on similar structural themes like feedback loops, optimization pressure, emergent properties, information bottlenecks, phase transitions, robustness-efficiency tradeoffs.

Thinking in Structures

I don't always think in language first. I usually "know" something as a pattern or structure before I can articulate it. The translation from structure to language comes later, and it's not always clean. This creates an inherent limitation in explaining how this works—language is a lossy compression format for thought. What feels like "sudden structural resonance" gets flattened into linear narrative, and different readers will reconstruct different mental models from the same words.

But when I encounter a new concept, my brain immediately starts asking: what shape is this? Where have I seen this pattern before? What are the load-bearing assumptions (assumptions where, if you remove or change them, the entire system/argument/approach collapses)? How does this break down into components? It's less like reading a manual and more like recognizing the architecture of a building you've never entered but somehow know how to navigate.

The "what shape is this?" question is especially great for me. When I started archery for example, I immediately recognized it as programming in physical space. Both are deterministic systems with unforgiving precision requirements and immediate feedback.

In programming: miss a semicolon, code doesn't compile. In archery: hand position off by millimeters, arrow misses the target entirely. The feedback loop is identical. You make an input, get an output, adjust systematically. The debugging process is the same: isolate variables, test hypotheses, iterate, applied to proprioception, muscle memory, and physical fatigue.

I treat each shot as a function with inputs (stance, draw, release, follow-through), and systematically vary one parameter at a time while logging results to find the optimal configuration. The structural pattern: high-sensitivity precision systems where small input deviations create large output differences, requiring systematic error isolation and reproducible execution. This is why some domains that look "difficult" to others feel immediately familiar—I'm not learning something new, I'm translating existing expertise into different notation to not start from zero.

This type of thinking can make you feel like you're approaching new domains at high levels while "only knowing the basics." Classical programming problems, for instance, become significantly easier when you recognize they're just applied math and logic—the deep structure is what matters, not memorizing language-specific syntax or common patterns. What looks like mastering a complex new skill to others feels like translating something you already understand. And that's also why people often emphasize the importance of "learning and not skipping the foundations", because they really do help a lot on the tail-end of learning and coming up with solutions.

Another example on thinking in structures—when people debate online, whether it's about technical approaches or politics or whether a hot dog is a sandwich, I'm not really focused on tracking which argument is "winning." Instead, I'm noticing the meta-structure: what framing is each person using? What unstated premises are they operating from? What kind of logic does each person think they're employing, and why does that logic feel valid to them? What are the unspoken rules of the conversation?

This is system thinking but at multiple layers simultaneously. Not just understanding the system itself, but understanding the frames we use to think about systems, and how those frames constrain or enable certain kinds of insights.

The Bidirectional Zoom: Abstraction and Decomposition as Tools

Abstraction up means identifying the general principles that apply across contexts. When you study enough systems, you start recognizing the same patterns appearing in different guises. A neural network, a biological system, a market, an organization: they're all running variations on similar themes.

Decomposition down means taking something complex and breaking it into minimal functional components. What are the actual moving parts? Which assumptions are load-bearing? What can you strip away before it stops working?

The combination lets you zoom out to see the abstract pattern, zoom in to understand the specific implementation, then zoom out again to see how this instance relates to others. You can move fluidly between "this specific thing" and "the class of things this represents."

As a simple example, when I wanted to get a specific payload to space with zero aerospace experience:

Requirements: less than 10 grams, high-density visual information (photographic), lower earth orbit, little to no budget

High-level process: I didn't start by learning "how space launches work." First instinct: what moves high enough? Weather balloons reach high altitude—could I attach something to a balloon? (Zoom in: no, balloons don't reach orbit, and recovery is uncertain). What else moves that high and could reach lower earth orbit? (Zoom out: satellites, rockets, ISS resupply). What already goes to space regularly? (Abstraction: SpaceX missions, research payloads, satellite deployments). Why don't they fly more often? (Decomposition: launch costs are prohibitive). What drives the cost? (Decomposition: weight, form factor, integration complexity). What's the minimum viable payload? (Decomposition: strip to essentials). What existing infrastructure could accommodate this? (Abstraction: rideshare programs, payload opportunities).

By moving between "what's physically possible?" and "what are my specific constraints?", the solution space revealed itself and I ended up going from idea to officially launching on a SpaceX mission in 3 months, with the actual payload deployed into lower-earth-orbit shortly after that. We now have annual subsequent missions planned, too.

If you imagine constantly thinking this way, after a while, learning domain-specific details starts to look more like translation than starting from scratch. Once you practice both abstraction up and decomposition down enough, they start to feel like the same skill operating in different directions. The ability to move between levels of abstraction becomes fluid—you're not consciously switching modes anymore. You just zoom.

Another fun bonus is that this creates a compounding effect: each domain you learn makes the next one easier. Not because the content is easier, but because you've expanded your library of patterns to recognize. The first few domains require building the entire framework from scratch. The fifth domain might reveal itself as a variation on patterns you've seen three times before. By the next, you might be recognizing structural similarities within days that took months to spot initially. The skill of seeing patterns recursively improves your ability to see patterns.

Useful Questions That Map Territory

When entering a new field, certain questions help me build a map quickly. For example:

Core Questions

  • What is the fundamental problem this field is trying to solve? Not the current buzzwords, but the core challenge that would still exist even if all our current methods disappeared.
  • What are the main schools of thought, and what assumptions does each one make? Understanding the paradigm wars tells you what the actual unresolved questions are.
  • What are the edges of current understanding? Where do practitioners say "we don't really know why this works"? Those edges are where the interesting work happens.

Meta Questions

  • What does success look like here, and who gets to define it? This reveals a field's maturity and internal politics.
  • What's the difference between what experts think is important and what gets public attention?
  • Where are the conceptual bottlenecks? Every field has a few ideas that, once you understand them, make everything else click into place.

Practical Questions

  • What can I ignore for now?
  • Are there explanations that have internal coherence vs ones that seem to be held together by duct tape? Does this explanation predict things it wasn't designed to predict?
  • When you encounter multiple explanations for something, you can usually tell which ones are built on solid foundations: Do they predict things they weren't designed to predict? Do they break down in edge cases or handle them elegantly? Are the assumptions explicit or hidden? The explanations built from first principles tend to be simpler in structure even if they're harder to grasp initially. They make fewer ad-hoc adjustments and suggest new questions naturally.

The Beginner's Structural Advantage

Beginners can sometimes be surprisingly effective at identifying core problems in a field, precisely because they haven't yet internalized all the reasons why "that won't work."

There's a sweet spot: after you understand enough to ask informed questions, but before you've absorbed all the field's defensive reasoning about why certain approaches are off-limits. In that window, naive optimism combined with pattern recognition from other domains can be genuinely valuable (but beware of the Dunning Kruger effect! More on Where Pattern Recognition Fails in a later section).

Someone new asks "why don't we just..." and the experts explain all the reasons why not, but occasionally that naive question reveals that the reasons are historical or social rather than fundamental. The field organized itself around certain constraints that may no longer apply, or it simply hasn't noticed that a solution from Domain A could transfer to Domain B.

This is especially powerful for people without deep expertise in any single domain. You're not weighed down by "this is how we do things here." You can see structural similarities that domain experts miss because their pattern matching is too domain-specific.

But like I said, this advantage still comes with severe failure modes.

Testing Pattern Recognition: A Concrete Case

A few friends at big AI labs asked me what I thought about early research directions in model safety—any low-hanging fruit that I could contribute? I thought the field was interesting and emerging so I thought about it for a bit.

Early on, two things felt obvious: human psychology and cybersecurity both had solutions that should transfer directly.

Human psychology: Humans are suggestible and can be "jailbroken" all the time through social engineering. Models trained on human-generated data, filtered through RLHF'd human interfaces, and having their own chain-of-thought, should be susceptible to similar exploits. (This turned out to be correct.)

Cybersecurity: Way more mature field with similar problem dynamics—good/bad uses, adversarial environments. Cyber has an "assume misalignment" approach that to me was missing in early AI safety research but seemed obviously necessary as capabilities scaled.

Honeypots—fake vulnerable targets that catch adversaries by how they interact—seemed like an obvious experiment for sandbagging and scheming detection that I didn't see anyone doing.

I ran some experiments, got results, connected with a DeepMind senior research scientist who'd independently arrived at similar conclusions. We're now co-authoring for a top conference. Timeline: weeks. Prior publications: zero.

When researchers later mentioned the field was shifting from inability research to safety/misalignment and control research, that confirmed what had seemed obvious from the cybersecurity parallel.

Similar compressed timelines for progress are possible when other structural transfers were valid.

Where Pattern Recognition Fails: Critical Limitations

The pattern-recognition approach isn't universally applicable, and it can fail spectacularly. Here are the ways it goes wrong:

  • False Pattern Matching: Seeing structure that isn't actually there. Exponential growth in tech adoption, viral spread, and compound interest all look superficially similar, but they have critically different dynamics. Treating them as "the same pattern" leads to terrible predictions. The structural similarity is real at one level of abstraction but breaks down at the implementation level.
  • Premature Abstraction: Zooming up to the general pattern before understanding enough domain-specific detail. You miss the crucial differences that make Domain A's solution actively wrong for Domain B. This is the classic mistake of the consultant who's seen "similar problems" in other industries but doesn't understand why this context is different.
  • Dunning-Kruger Acceleration: Pattern matching feels like understanding, which can make you confidently wrong faster. "I've seen this pattern before" becomes "I already know how this works" which is a dangerous leap. The pattern recognition gives you an initial foothold, but that foothold can create false confidence. You feel like you understand more than you actually do because the structure is familiar, even when the implementation details (which matter enormously) are not.
  • Expertise Dismissal: Thinking "the experts just haven't noticed this simple pattern" when actually they have, considered it thoroughly, and rejected it for good reasons you haven't encountered yet. The field's accumulated wisdom often includes knowledge about which patterns transfer and which don't. Ignoring this because it's not immediately obvious is intellectual arrogance masquerading as fresh thinking.
  • Analogical Reasoning Traps: Analogies are useful for initial understanding but dangerous for decision-making. "This is like X" helps you get oriented, but policy decisions based on "well, in Domain A this worked" can be catastrophic if the analogy breaks down in non-obvious ways.

The key is recognizing when you're operating at the "useful initial model" stage versus the "actually understand this domain" stage. Pattern recognition gets you to the first stage faster, but it can trick you into thinking you've reached the second.

Calibrating: When to Trust Pattern Recognition vs. Defer to Expertise

This might be the most important question in the entire piece, and I'm going to be honest: I don't have a complete answer. What follows are heuristics that have worked for me, but this deserves much deeper exploration, probably its own post entirely.

One underrated skill: knowing where you actually stand in a field's competence distribution. Not where you want to stand, not where you feel you should stand, but where you actually are.

I think of it like a game character assessing which zone to enter. Too easy and you waste time. Too hard and you thrash unproductively. The right difficulty level is where you're stretching but not breaking.

This requires unusual honesty with yourself. It means being willing to say "I don't understand this yet" without that being an identity threat. It means recognizing when you've actually mastered something and it's time to move on, even if it feels comfortable to stay.

More importantly, it means developing a sense for when your pattern recognition is giving you genuine insight versus when it's making you overconfident. Some heuristics I use:

Signal you're at "useful initial model" stage:

  • If experts seem to be ignoring an "obvious" solution, assume you're missing something important until proven otherwise
  • If your understanding comes entirely from analogy to other domains, you're probably not at "deep understanding" yet
  • If you can't explain why the domain-specific details matter, you don't understand it yet
  • If you find yourself getting defensive when experts push back, that's a red flag that you're overindexing on pattern recognition
  • You can articulate the general structure but not predict edge cases or failure modes

Signal you're approaching "actual understanding":

  • You can explain why superficially similar patterns from other domains don't apply here
  • You start naturally noticing when domain experts disagree with each other and can follow why
  • You can generate novel predictions that experts would agree are plausible, even if uncertain
  • Your questions shift from "how does this work?" to "why did the field choose this approach over alternatives?"
  • You catch yourself naturally thinking in the domain's frameworks without forcing the translation

The hardest calibration challenge: knowing when your cross-domain insight is genuinely novel versus when you're rediscovering something the field already knows but expresses differently. This requires immersion in the field's literature and conversations, there's no shortcut.

Another hard problem: distinguishing between "the field has good reasons I don't understand yet" and "the field has historical inertia around a solvable problem." Both look identical from the outside: experts dismissing what seems like a simple solution. The only way through this is to actually understand their objections in detail, which brings you back to needing domain expertise.

Some fields are more amenable to pattern-transfer than others. In mathematics, the structure often is the substance—recognizing that two problems share an underlying form can be genuinely valuable even without deep domain knowledge. In medicine or policy, pattern-transfer without understanding implementation details can be actively dangerous.

There's another dimension worth noting: my examples cluster heavily around optimization, strategic dynamics, and adversarial environments. I've also worked in creative and physical domains—I'm a sketch artist and vocalist, learned music theory quickly to cover a song I liked, and pageant choreography drew heavily on dance and runway training. Pattern recognition was useful here too: music theory decomposes into learnable patterns, sketch art has underlying structure (anatomy, composition, light), movement has transferable principles. But there's a layer these don't fully capture: what makes art emotionally resonate, or the proprioceptive feel that makes choreography look natural rather than just technically correct. Pattern recognition gets you the technical foundation; the expressive or embodied part operates on different machinery. Whether this is a limitation of the framework or simply where I've articulated it most clearly, I'm not certain yet.

The goal isn't to eliminate pattern recognition—it's genuinely useful—but to use it as a starting point for deeper learning, not a substitute for it. Think of it as a compass that orients you quickly, but you still need to actually walk the terrain.

The question of calibration, how to know when you actually understand something versus when you just think you do, deserves its own extensive exploration. How do different fields require different calibration? What are the early warning signs of false confidence? How do you build accurate self-assessment skills? These are crucial questions I'm still working through.

Some Closing Thoughts

I suspect some people find this style of thinking more natural than others—the same way some people naturally think more verbally and others more visually. There might be dispositional factors that make cross-domain pattern recognition feel more intuitive for some but not others. But even if that's true, these skills can be developed deliberately: asking structural questions, practicing abstraction and decomposition, maintaining curiosity about edge cases. Natural inclination might affect the starting point and ease of development, but it doesn't determine the ceiling.

This also isn't a claim that everything is easy or that domain expertise doesn't matter. The pattern recognition doesn't replace deep work—it makes the work more efficient. It helps you ask better questions earlier. It lets you spot which rabbit holes are worth going down.

If you've only ever hiked in one type of terrain, every new mountain feels completely novel. But once you've hiked in enough different environments—deserts, rainforests, alpine zones—you start recognizing patterns. You know what kind of preparation different conditions require. You can read terrain more quickly. A new mountain is still a challenge, but it's not starting from zero.

The pattern recognition is the same. It doesn't eliminate the challenge of new domains. It just means fewer of them feel like starting from absolute zero.


*Caveat: introspection is notoriously unreliable. What feels like "seeing patterns" might be something else entirely. Take this as field notes from one mind, not a prescription.