Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:

- "Bruh everyone fook off with “I feel bad for you if you think this was real” ever…" (rdc_muq455y)
- "Most of these folks will lose their jobs and still vote for the GOP, the party t…" (ytc_UgytofhPt…)
- "If governments monopolize, control or regulate AI, it can never be used to benef…" (ytc_Ugyph0KOa…)
- "Dang, he helped put the words in my mind on how we can tell whether something fe…" (ytc_UgzlupaNj…)
- "I can see ai being good for like medical fields spotting potential problems peop…" (ytc_UgzZFLG0S…)
- "Ai is a pretty good thing tbh cuz' it can help us alot like artist can get an id…" (ytc_Ugyimxbrq…)
- "It cant be bargained with ,It cant be reasoned with ! It doesnt feel fear or pit…" (ytc_UgzPcQROJ…)
- "Except no one will be kept around for entertainment when that entertainment can …" (ytc_UgzBU2Src…)
Comment

> Never ceases to amaze me...how casually so many formidably intelligent people use the word 'AGI'...and yet there does not exist anything remotely resembling a coherent empirical definition of it. It's ..."Human level intelligence." Ok...what does that mean? If you squeezed every cognitive scientist on the planet into a tiny ball of pure mind...it still could not explain how I produced a single letter of a single word of a single sentence of this entire post. So...if an LLM does reach AGI...how on earth will we know that this has happened if we have absolutely zero ability to measure the thing????

youtube · AI Governance · 2025-12-06T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqKVz_1i3pxWepUQJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxpKZW557AxzA_bJih4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzpz8nFlj8b8tq2Pbt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHI3YS3HB1MBlB8Ut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkT33H6fUbd4oINk94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy5LwgiHd-s0IHnmot4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwz-finEnPgcSVT6J94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJexL1SdjrYowVtAV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwGFyBRLEhWOUxykB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwJfTI7NT6MbjZU9tR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
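The "look up by comment ID" step above can be sketched in a few lines: parse the batch response as JSON and index each record by its `id` field. This is a minimal illustration, not the tool's actual implementation; `index_by_comment_id` is a hypothetical helper name, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown above.

```python
import json

# A trimmed copy of the raw LLM response above: a JSON array of
# per-comment coding records (illustrative subset, first three entries).
raw_response = '''[
{"id":"ytc_UgzqKVz_1i3pxWepUQJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxpKZW557AxzA_bJih4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzpz8nFlj8b8tq2Pbt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
# Matches the "Coding Result" table for the inspected comment:
print(codings["ytc_UgzqKVz_1i3pxWepUQJ4AaABAg"]["emotion"])  # outrage
```

Keying by `id` makes it cheap to cross-check any row of a coding table against the exact model output that produced it.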