Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Awesome work man. You have bad takes on AMD and open sourced AI (in my opinion) …
rdc_jjbc03d
you guys are acting like this is proof ai is a bigger threat to humanity than hu…
ytc_UgzBiiL04…
I get where you're coming from! Sophia's responses might seem straightforward, b…
ytr_UgwSETdkO…
@Marcerx1x2 well if you arent doing it for money youre doing it for enjoyment a…
ytr_UgyIxRdOs…
Chatgpt:
9.9 = 9x9 = 81
9.11 = 9×11 = 99
81>99 -> 9.11>9.9
Deepseek
9.9 = 9,9
9…
ytc_UgwElJZsY…
I prefer to not paint a digital target on my back speaking in an ill fashion in …
ytc_Ugwwlq6rM…
its obvious now that something as powerful as AI is slowly being taken away from…
ytc_Ugzanz6SU…
William Riggs You should be in prison for abuse of power, lying to the public, a…
ytc_UgyoSgj94…
Comment
I feel like Neil wouldn't agree with the clickbait title that misrepresents his views. He never said AI was overrated, in fact, he said "it's not hype, it's real". He just disagreed with AI doomerism. Entire interview would be better if you could mute Hasan, maybe AI could help with that

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Timestamp | 2025-07-31T00:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzU5jflk0VRHvPYeDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwEZnAwT_ngVx1ahIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzVJw3dmB5dftqfhj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqLpIumeTYlfwoiFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzIlHya3EIHHQRHJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
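A raw batch response like the one above is only usable once each row's codes are checked against the codebook. The sketch below is a minimal validator; the allowed vocabularies are inferred from the values visible on this page and are an assumption — the real coding schema may define additional categories.

```python
import json

# Code vocabularies inferred from the samples on this page (assumption:
# the actual codebook may include values not observed here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "fear", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch-coding response and keep only rows whose
    values all fall inside the allowed vocabularies."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Two illustrative rows: one valid, one with an out-of-vocabulary code.
raw = '''[
  {"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_example_bad","responsibility":"aliens","reasoning":"unclear","policy":"none","emotion":"fear"}
]'''
print(len(validate_codes(raw)))  # prints 1: the invalid row is dropped
```

Dropping (rather than repairing) out-of-vocabulary rows keeps the downstream counts honest; rejected IDs can then be re-queued for a second coding pass.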