Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_g103p7j: "Weak" A.I needs human input to function and it refers to actual existing algori…
- ytc_UgxpMzx8e…: While I am not studying those language models, I have some experience with them.…
- ytc_UgzrcZchf…: Men you will still have to clean her out and you’re too lazy. Itl be stinking bi…
- ytc_UgzWdTunh…: Yeah, AI IS attacking our communication by all the distorting propaganda, placin…
- ytr_UgzTRwm80…: Wont work, they'll get off on the fact they know women want to see them like tha…
- rdc_djzneac: Perhaps its more useful the other way. Comparing AI to consciousness helps us u…
- ytc_UgyMmn8xS…: Im a moron who doesnt know what I am talking about, but heres my opinion; I thin…
- ytc_UgzQpU8XO…: Every Ai is 💯 keeping track of who gives them a hard time... The new Pascal's Wa…
Comment
7:20 This whole theory is believed to be false by anti-AI experts.
AI is not "learning" anything and has no understanding of concepts. It is just a prediction algorithm.
And AI bros actually like this theory because it pushes the fear of having to reach AGI before the competition does. And they use this fear to get the gov and investors to pour more money into it.
youtube | AI Governance | 2026-03-17T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqwkZUk1dWYSdVniB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIwyT78cC7uPCutCB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyy6yTb1H20480Z3f94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx8qvnAhn5ghCiBmYB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzfsYo5vuRWSNm1YlR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjO1DwCWauctBWHFl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyt2I-ZZbpaestkNqt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSLCQ4y9xlrW9ELYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz5ylRIlUefOiWfE0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCUuR4LzQJIhrqTSt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
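A raw response like the one above can be validated before the codings are stored. A minimal sketch in Python; the allowed label sets below are inferred only from this sample (an assumption — the actual codebook may define additional values):

```python
import json

# Allowed labels per dimension, inferred from this one sample.
# Assumption: the real codebook may permit more values than these.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "approval", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing id: %r" % (rec,))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, value))
    return records

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # prints 1
```

Rejecting the whole batch on a single bad record is a deliberately strict choice here; a production pipeline might instead log the failure and re-prompt the model for that comment ID.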