Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "All jobs are at risked of being replaced by AI. However it doesn't really matter…" (ytc_UgiF_iehr…)
- "Nothing but fearmongering. There are too many variables for all this to work on …" (ytc_UgyWWNtuk…)
- "If we consider the cars to be able to make meaningful decisions, shouldn't that …" (ytc_UgwtXdfQu…)
- "I'm not going to argue in favour of current LLM consciousness (or the lack there…" (rdc_mdj0zmu)
- "2:55 need an example please.(edit: halting problem can not decide to stop or ru…" (ytc_UgznwOiEN…)
- "@ was hoping for more than just rage bait but congrats on my 2 replies I guess.…" (ytr_UgxMbZ-Js…)
- "Fr I missed when I hear ai I think of sci fi movies about ai going crazy and wan…" (ytr_Ugx5ZYn-4…)
- "On my mom whoever created AI is gonna get dropped just kidding why would I even …" (ytc_Ugwl1BZ28…)
Comment
ChatGPT and AI are not what you think they are. Think about WHO is responsible for designing and engineering them. WHO has the financial means to develope them? What are they good for and how could they be used for evil or destructive purposes? Think really hard and you will see that this type of scenario is only the beginning.
youtube · AI Harm Incident · 2025-11-12T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy0_6HEFl_b5O8zf5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxcx1D58v-Xnct9ttp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrmZPVthnIMfLBTMJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxCK_igvK-pNIqdeN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyjg-ScLvEWfiyYuWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx2nKCsEsSEXKH3p9F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHB_Xe894dBuceB054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfhPkBE61G6cR1V7l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxCQtuTCSdy7CXY8Nt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3bahUuH-SQpzf7YZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}]
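The raw response above is a JSON array of per-comment codes, and the page offers lookup by comment ID. A minimal sketch of how such a response might be parsed and indexed is below; the function name, the validation rule, and the sample records are illustrative assumptions, not the dashboard's actual implementation. The field names mirror the dimensions in the Coding Result table (responsibility, reasoning, policy, emotion).

```python
import json

# Illustrative sample of a raw LLM coding response; IDs are made up.
raw_response = '''[
  {"id": "ytc_EXAMPLE1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_EXAMPLE2", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw model response and key each record by comment ID.

    Records missing an ID or any dimension are skipped, so a partially
    malformed response does not poison the lookup table. (Note that a
    structurally invalid response, such as one terminated by ')' instead
    of ']', would fail json.loads outright and should be caught upstream.)
    """
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return indexed

codes = index_codes(raw_response)
print(codes["ytc_EXAMPLE1"]["responsibility"])  # company
```

Dropping incomplete records rather than raising keeps a single bad row from blocking inspection of the rest of the batch, which matters when responses are model-generated and occasionally truncated.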