Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "We have had this for a while and maybe say 'de facto' law for most any kind of p…" (rdc_fjf762c)
- "One trucking job the self driving trucks wont be able to take over is going to b…" (ytc_Ugw2R13vZ…)
- "Whenever we reach the point where energy for datacenters is too expensive and ta…" (rdc_n80enhf)
- "@p4kd0lmost of the things we consider morally accepted as of today were preache…" (ytr_Ugz0P5bSu…)
- "Looking at the comments it’s not AI that finishing the human race, it’s guys thi…" (ytc_Ugw2Fzrrb…)
- "real artists dont like using AI. it feels weird. if they claimed they dont feel …" (ytr_Ugy2EvZFU…)
- "I mean AI is going to outperform humans at everything in a few years. They'll be…" (ytc_UgzssxzeQ…)
- "What are the people that are being replaced by all these AI items going to do to…" (ytc_Ugyh_cTVv…)
Comment

> Karen Hao clearly is speaking beyond her knowledge here. She literally said scaling model parameters wasn't research driven in regards to having to choose between housing and innovation. Yes smaller models can be useful, but producing larger models was clearly the way to go based on how neural scaling laws were working. You get emergent properties like in context learning. Most research with smaller LLMs use distillation anyway, so they require the bigger models to create useful smaller models.
>
> She says A LOT of things that made me cringe.

youtube · Cross-Cultural · 2025-06-30T02:3… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
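Each dimension in the table takes a value from a small controlled vocabulary. As a minimal sketch of checking a coded record, the allowed-value sets below are inferred from the raw responses shown on this page, not an authoritative schema, and the function name is illustrative:

```python
# Allowed values per dimension, inferred from responses on this page
# (an assumption, not the official codebook).
ALLOWED = {
    "responsibility": {"developer", "government", "company", "distributed",
                       "user", "none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"outrage", "mixed", "resignation", "approval", "fear"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the dimension names whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coded result from the table above:
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "industry_self", "emotion": "mixed"}
print(invalid_dimensions(record))  # → []
```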
Raw LLM Response

```json
[
{"id":"ytc_UgzSW4Q1xgZF12GykVl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwzaa6_sSCF1SNL4Kx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxiNdcjAkRfjWYqujV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyWgH9j3VELIq03Sj14AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzO2ZBO3CMuVH7hCG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyROJP45-2RInyCcMt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyAmHoz-ZwSNAz1M1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugy6mQsco82yGxjvsqt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzDcBXH_FkIIhFK9Yt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7y6XJtDFZRbh7_8J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
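A raw response like the one above is a JSON array of records keyed by comment ID, which supports the "look up by comment ID" workflow. A minimal parsing sketch, assuming the response is valid JSON (the helper name is illustrative; the two records are copied from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw_response = '''[
{"id":"ytc_UgzSW4Q1xgZF12GykVl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyAmHoz-ZwSNAz1M1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_UgyAmHoz-ZwSNAz1M1J4AaABAg"]["emotion"])  # → mixed
```

Indexing once up front makes each subsequent ID lookup O(1), which matters when cross-referencing many comment IDs against a large batch of coded responses.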