Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or click one of the random samples below to inspect it.
- "Perhaps we should train the ai to not solve a problem efficiently, but train the…" (ytc_UgyB18ePX…)
- "@honeybeeami2654 so it is more about distribution and what it’s used for than th…" (ytr_UgxBz6I7U…)
- "thanks for sharing this!! I love this series and it gives me so much hope on the…" (ytc_UgyUopmDP…)
- "This was absolutely the most informative and personally impactful sharing of AI …" (ytc_Ugy9vuvVo…)
- "This is pageantry, you cannot stop it. The NSA most likely have an extremely pow…" (rdc_jkgwt52)
- "I love how people don't seen to understand how dumb AI is, AI is not as smart as…" (ytc_UgxcDh9Fg…)
- "umm not to sound like an A.I bro but nowhere in the video do you say anything ab…" (ytc_Ugy_yHlbp…)
- "Ai is going to make better and more senseful art than humans in a year, maybe in…" (ytc_Ugy0L1wz8…)
Comment
Still surprised that a “computer scientist” like this dude doesn’t understand the difference between understanding and computing. These LLMs are computing huge amounts of data, but they don’t understand what they’re doing. And they won’t, at least in the foreseeable future, unless a new biological or technological breakthrough emerges. Eventually these dudes need to sell their books and views and impression.
Platform: youtube · Topic: AI Governance · 2026-02-04T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwyspHRqbJXowMW6M94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxg4Rer9Ff1382aJJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxygBpCoO9vXQdzmcx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyHI0tF9tFUCf2jxXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwrvxe4VX0mQjc-H2p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6arS0GxlHyAWraHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-Lta3lGb0aseLSwh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz9g_9bN0brwlwhHlp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVSO-HdzPSBD1HRWB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxB7vxARmXHHmi4qYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
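For readers who want to reproduce the lookup offline, the sketch below shows how a batch response like the one above can be parsed and filtered down to a single comment's coded dimensions. It is a minimal sketch, assuming the raw response is always a well-formed JSON array of objects with the keys shown (id, responsibility, reasoning, policy, emotion); the `lookup_coding` helper and the idea of passing the response in as a string are illustrative assumptions, not part of the tool.

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if it is absent.

    Hypothetical helper: assumes `raw_response` is a JSON array of objects
    shaped like the batch above, not the tool's actual API.
    """
    rows = json.loads(raw_response)
    for row in rows:
        if row.get("id") == comment_id:
            return {
                "Responsibility": row.get("responsibility"),
                "Reasoning": row.get("reasoning"),
                "Policy": row.get("policy"),
                "Emotion": row.get("emotion"),
            }
    return None
```

Applied to the batch above, the row for ytc_Ugwrvxe4VX0mQjc-H2p4AaABAg yields developer / deontological / none / outrage, which matches the values shown in the Coding Result table.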