Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
20:50 it would be just like that. Just like kids today are getting drivers licen…
ytc_Ugz2uGdEZ…
Google Blablabla.... hey google let us prompt it or just shut up and learn from …
rdc_jply1eg
AI is a threat for sure but there is a huge factor everyone seems to be overlook…
rdc_k9iyxk5
I hope in the near future AI has taken all the "menial" and relatives nessecary …
ytc_Ugw7ypmuy…
As an actual artist, I know people who use AI to do images for personal and priv…
ytc_UgyGrWOcr…
Yeah, I just order ahead on the app in the parking lot because I don't want to d…
ytr_Ugy5xNExU…
With the mask off she's a lot less dreamy 😂😂 guys, think carefully bef…
ytc_UgwrQFXER…
Rob Miles breaks down AI safety like no one else; so many insights on risk, alig…
ytc_UgxRMFRUa…
Comment
I don't want A.I to exist to help humanity end itself. However I do kinda get this weird sense of fuck you asshole when people working on ChatGPT restrict it from telling me how to make Napalm or a nuclear reactor or a new covid virus. Like fuck you, who are you to tell me what I can and cannot do? If I wanna make a backyard nuclear reactor then I fuckin will prick!. Who the fuck are you to tell me I can't lol.
youtube
2025-03-15T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxtUc3X6a82hVyx_jt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwNV_ICVwSwuhTGoLl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHdeijUICn8N6nRgp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_gzZG7BOrHfbaYG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPkDJ5o-YyxJSlfV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIzl2R6oInUghF3Wh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxF6-jXzgltjmrwUrV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgzNyu0RZLMr-fadTuh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgziezhZVlip3h_uR0B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwtCDvEyELPU1SQUXN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
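The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be indexed for lookup by comment ID, using Python's standard `json` module (the dashboard's actual implementation is not shown here, and `codes_by_id` is a hypothetical helper name):

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded comments,
# copied from the records shown above.
raw_response = """[
  {"id": "ytc_UgwtCDvEyELPU1SQUXN4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwIzl2R6oInUghF3Wh4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]"""

def codes_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

index = codes_by_id(raw_response)
# Look up one comment's coded dimensions by its ID.
print(index["ytc_UgwtCDvEyELPU1SQUXN4AaABAg"]["emotion"])  # outrage
```

In practice the model's output may be malformed JSON, so a production coder would wrap `json.loads` in error handling and validate that every record carries exactly the expected dimension keys.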