
Commit d81d97a

backfill oct

1 parent f30dfab

7 files changed: +208 -0 lines changed

posts/141025.md

Lines changed: 7 additions & 0 deletions
---
title: 'responding effectively'
tags: 'journal'
date: 'Oct 14, 2025'
---

felt anxious today when i got a message i wasn't sure how to respond to. so i asked for help with suggestions on how to respond, but my help isn't responding to me either, which means i'm leaving that person on read. as each minute passes, i overthink whether that person is waiting on my reply too. so i'm stuck in between: i have all the context about the project, but neither the authority nor the skill to communicate effectively. not replying says something about my character, and whatever i actually say tells a lot about my character too, so i'm paralyzed. i need to learn how to avoid this in the future.

posts/151025.md

Lines changed: 19 additions & 0 deletions
---
title: 'presentation skills'
tags: 'journal'
date: 'Oct 15, 2025'
---

watching my colleagues present during prod review, i observed a few things that make them good presenters. they:

- show they are excited and proud about the thing, more than anyone else
- are brief on the technical details
- present a scenario and give you a problem
- jump right into a demo
- explain what problem it solves
- present clear metrics: instead of 10 hours, it's now 10 minutes
- talk about what can be better, how it's going to be used today, and how it will evolve

most people are so skillful at presenting that i can't help but admire them, and i want to learn to present like them as well.

after the presentations and news, i felt immensely grateful to be at this company. and i was filled with even more excitement and hope for the upcoming months, as i continue to work on interesting projects that have a huge impact on health in the US.

posts/161025.md

Lines changed: 53 additions & 0 deletions
---
title: 'limitations of embeddings'
tags: 'journal'
date: 'Oct 16, 2025'
---

here's a scenario:

you have a vector db of disease terms mapping to information you want, say journal_id, and given a user query, you want to find all the journal_ids for that disease term.

say the disease term is "autoimmune polyendocrine syndrome type 1"

and you use text-embedding-ada-002 and get this back:

```txt
0.6462 │ Type 1 Diabetes Mellitus │ ID_001, ID_003 (+92 more)
0.6452 │ Type 1 Diabetes │ ID_002, ID_004 (+95 more)
0.6443 │ Autoimmune Polyendocrinopathy... │ ID_099
```
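
for concreteness, here's a minimal sketch of that kind of lookup. the term list and ids are made up, and it assumes the openai python client, not the actual pipeline:

```python
# illustrative sketch only: the term list and ids are stand-ins,
# and this assumes the openai python client for embeddings
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

terms = {
    "Type 1 Diabetes Mellitus": ["ID_001", "ID_003"],
    "Type 1 Diabetes": ["ID_002", "ID_004"],
    "Autoimmune Polyendocrinopathy-Candidiasis-Ectodermal Dystrophy": ["ID_099"],
}
term_vecs = {t: embed(t) for t in terms}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# rank disease terms by similarity to the user query
q = embed("autoimmune polyendocrine syndrome type 1")
for t in sorted(terms, key=lambda t: -cosine(q, term_vecs[t])):
    print(f"{cosine(q, term_vecs[t]):.4f} │ {t} │ {', '.join(terms[t])}")
```
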
my first thought, medically illiterate as i am, was: oh, the model saw "Type 1" in both terms and latched onto that heavily. these general embeddings are bad at medical concepts. but it understood more than i gave it credit for.

here's the gist:

type 1 diabetes is actually a component disease of [APS-1](https://pmc.ncbi.nlm.nih.gov/articles/PMC2859971/). APS-1 (aka [APECED](https://emedicine.medscape.com/article/124183-overview)) is caused by mutations in the [AIRE](https://en.wikipedia.org/wiki/Autoimmune_regulator) gene.

the AIRE protein normally teaches your immune system not to attack your own organs, but when it's broken, the immune system goes rogue and attacks multiple organs.

diagnosis requires at least 2 out of 3 classical triad symptoms:

- chronic candidiasis (yeast infections)
- hypoparathyroidism (low calcium)
- addison's disease (adrenal failure)

but only 45-67% of patients actually develop all three, and many develop other autoimmune conditions in addition, including: type 1 diabetes (18% of APS-1 patients), autoimmune hepatitis, vitiligo (skin), alopecia (hair loss), and thyroid problems.

so APS-1 is a syndrome that can include T1D as one of its many possible manifestations, and the embedding model picked up on this relationship:

```txt
APS-1 (the syndrome - broken AIRE gene)
├── Chronic candidiasis (73-100%)
├── Hypoparathyroidism (76-93%)
├── Addison's disease (72-100%)
├── Type 1 diabetes (~18%)
├── Alopecia (29-40%)
└── Other manifestations
```

so searching for APS-1 and getting T1D back is like searching "heart attack" and getting "chest pain" back. yes, chest pain is a symptom of a heart attack, but if someone needs info on heart attacks, giving them chest pain resources misses the point. they want the specific condition, not one of its symptoms.

the solution?

medical-specific embeddings; two-stage retrieval (common in industry); or contrastive finetuning with triplet loss, where positive pairs are synonyms and exact matches, and hard negatives are manifestations and siblings.
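
a minimal sketch of what that finetuning could look like, using sentence-transformers' TripletLoss. the triplets and the base model are illustrative assumptions, not a production recipe:

```python
# sketch of contrastive finetuning with triplet loss; triplets and
# base model are assumptions, not the actual setup
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# anchor / positive (synonym) / hard negative (manifestation or sibling)
train_examples = [
    InputExample(texts=[
        "autoimmune polyendocrine syndrome type 1",  # anchor
        "APECED",                                    # synonym -> pull closer
        "type 1 diabetes",                           # manifestation -> push away
    ]),
    InputExample(texts=[
        "myocardial infarction",
        "heart attack",
        "chest pain",
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.TripletLoss(model=model)

# pulls anchor-positive together, pushes anchor-negative apart by a margin
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```
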

posts/171025.md

Lines changed: 7 additions & 0 deletions
---
title: 'look at the data'
tags: 'journal'
date: 'Oct 17, 2025'
---

friendly reminder to self: when you refactor code, please please document it somewhere, and make sure the change is reflected in all code that uses it. that way you won't end up with egg on your face after training a model on bad data. and always. look. at. the. data.

posts/181025.md

Lines changed: 69 additions & 0 deletions
---
title: 'training with less data'
tags: 'journal'
date: 'Oct 18, 2025'
---

i was wondering if there's a way to know which data actually matters before you even train. like, can you look at your dataset and say "these 2k examples are worth more than those 10k"?

does more data always = better model?

[research](https://arxiv.org/abs/2001.08361) shows it's a power law, not linear:

```txt
100 samples → loss = 10
1,000 samples (10x more) → loss = 5 (not 1)
10,000 samples (10x more) → loss = 2.5 (not 0.5)
```

it has diminishing returns that [hold across seven orders of magnitude](https://www.pnas.org/doi/10.1073/pnas.2311878121).
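
the numbers above follow a power law loss(N) = a·N^(-α); a tiny sketch with α = log10(2), so that every 10x of data halves the loss:

```python
# toy power-law curve matching the numbers above: loss(N) = a * N**-alpha
# alpha = log10(2) ≈ 0.301 means every 10x of data halves the loss
a, alpha = 40.0, 0.30103

for n in (100, 1_000, 10_000):
    print(f"{n:>6} samples → loss ≈ {a * n**-alpha:.1f}")
# 100 → 10.0, 1000 → 5.0, 10000 → 2.5
```
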
what about finetuning? since the base model already knows a lot and you're just teaching it something specific, does the same rule apply?

yes, but you might only need 20-50% of your data to get 95% of the performance. so which 20-50%?

j morris showed that models have a [capacity limit](https://arxiv.org/abs/2505.24832). GPT-style models memorize ~3.6 bits per parameter.

this means a 1B parameter model can only memorize ~450MB of information. that's your budget.
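
a quick back-of-envelope check on that number:

```python
# back-of-envelope: capacity budget implied by ~3.6 bits/parameter
params = 1_000_000_000          # 1B-parameter model
bits_per_param = 3.6            # estimate from the paper above
budget_bytes = params * bits_per_param / 8
print(budget_bytes / 1e6)       # → 450.0 (MB)
```
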
training on more data doesn't increase the budget. it just spreads it thinner.

when you exceed capacity, the model is forced to generalize instead of memorize. this explains grokking - that moment when performance suddenly jumps.

so the question becomes: which data fills the budget?

if you have lots of data, keep hard examples. easy ones are redundant.

if you have little data, keep easy examples. hard ones might just be noise.

[someone showed](https://arxiv.org/abs/2206.14486) you can discard 20% of ImageNet without hurting performance, potentially breaking power-law scaling.

how do you actually do this though?

there's [information bottleneck](https://adityashrm21.github.io/Information-Theory-In-Deep-Learning/) theory - find the maximally compressed mapping that preserves information about the output. keep only data that tells you something useful.

practical methods exist:
- [coreset selection](https://arxiv.org/abs/1907.04018) - finds a small weighted subset that approximates the full dataset
- geometry-based pruning - preserve the structure of the feature space
- uncertainty-based - keep what the model is uncertain about
- error-based - keep high-loss examples (sketched below)

problem: most don't scale well. the best ones are expensive to compute.
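
a minimal sketch of the error-based flavor, assuming a warmed-up classifier; the toy model and data here are stand-ins:

```python
# error-based data pruning sketch: score every example by its loss
# under a warmed-up model, then keep the hardest fraction
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def rank_by_loss(model, dataset, batch_size=256):
    """Return the per-example loss for every example in the dataset."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=batch_size):
            logits = model(x)
            # unreduced cross-entropy -> one loss value per example
            losses.append(F.cross_entropy(logits, y, reduction="none"))
    return torch.cat(losses)

# toy data + model just to make the sketch runnable
X, y = torch.randn(10_000, 32), torch.randint(0, 4, (10_000,))
model = torch.nn.Linear(32, 4)
dataset = TensorDataset(X, y)

scores = rank_by_loss(model, dataset)
# lots of data -> keep the hardest examples; little data -> flip to easiest
keep = scores.argsort(descending=True)[: int(0.3 * len(scores))]
pruned = TensorDataset(X[keep], y[keep])
```
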
there's also this idea of [four scaling regimes](https://www.pnas.org/doi/10.1073/pnas.2311878121). it basically asks two questions:

1. is the bottleneck your data or your model?
2. is the problem noise or lack of detail?

the two limiting regimes:

- **variance-limited:** error from noise in limited samples (like photos in a dark room)
- **resolution-limited:** can't capture fine-grained patterns (like a pixelated image)

knowing which regime you're in tells you whether more data helps or whether you need something else.

j morris also showed that [embeddings](https://arxiv.org/abs/2505.12540) from different models converge to similar representation geometries.

if there's a universal geometry, maybe there's an optimal compression of training data that fills that structure efficiently.

there's also a ton of research on synthetic data that can fit into the equation as well. a rabbit hole i would love to dive into some other time.

posts/191025.md

Lines changed: 16 additions & 0 deletions
---
title: 'adding some features'
tags: 'journal'
date: 'Oct 19, 2025'
---

had a bit of free time as things slowed down.

i added a few things to my blog:

- a [heatmap](/posts) like the github commits graph, but of my posts, where the shades represent the length of each post
- a spotify listening [activity](/now) widget, showing the current and 3 most recent songs
- a [umap](/viz) plot of post embeddings
- backfilled tags by asking claude code to read all my posts

claude code is amazing. what can it NOT do?

posts/201025.md

Lines changed: 37 additions & 0 deletions
---
title: 'meaning beyond the sun'
tags: 'journal'
date: 'Oct 20, 2025'
---

## creating meaning vs discovering meaning

you can't create meaning for yourself.

viktor frankl, the jewish psychiatrist who was put into death camps during wwii, noticed something. some prisoners who lost everything turned bad: they started to steal from other prisoners. others became zombies; they just curled up and died. but there were other prisoners who stayed strong, who stayed courageous.

why? why do some people lose themselves and some remain themselves?

it depended on what they made the meaning of their life. if you make your meaning of life something that the death camps can take away, then you've got no self left.

so what can they take away? anything under the sun. if you live for status, career, family, money, sex, whatever is under the sun, it can be taken away.

the only people who stayed strong were people who lived for something that wasn't under the sun. something like God, something like faith.

the lesson: you can't create meaning for yourself. you have to discover meaning in some reality higher than yourself.

## what it means to serve God

what does it mean to live for God vs living for pleasure? it is possible to obey God not for God's sake but for your own sake. you may think you're very religious, but **if you live for God just to get things, to get blessings, health, success, then you're just as unstable as everyone else**. and you will get that spiritual nausea.

you should serve and obey God just to give him pleasure.

## what is real love?

real love is not just emotional. it's not just "i want a relationship with you because i'm attracted to you," desiring them because they make you feel happy. it's also not mainly volitional, like "i'm doing my duty."

you know you love somebody when you put your happiness into their happiness, so your greatest happiness in love is just to see them happy. you don't make them happy to feel good about yourself: their joy is your joy, their delight is your delight. there's nothing beyond it.

the only way to get a meaningful life is not just obeying God in some dutiful way. you need to obey God because you love God.

[sickness unto death, tim keller](https://podcast.gospelinlife.com/e/the-sickness-unto-death-1754056766/)
