Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

- Ai 1: "How many humans does it take to build a superintelligence that will kill … (ytc_UgwE9BYwN…)
- Stop listening to your Realtor, they are terrible. Hopefully you have proof of t… (rdc_kxuwq9d)
- He answers all yes/no with both answering yes and then no and then arguing for b… (ytc_Ugx_Fehfh…)
- Funny, AI seems to be across a varied spectrum of obvious trash and might as wel… (ytc_Ugwi6gutE…)
- Hey @camdenkelp9506, thanks for the hilarious comment! I agree, taking on an AI … (ytr_UgxQrL23a…)
- I've had this same thought and it seems to be a good solution. However, the str… (rdc_ohbtm7h)
- With 99% unemployment, who will buy what AI is producing?who tf believe that the… (ytc_UgyVCj387…)
- Great video and in my eyes, you're easily one of the top 5 interviewers in the w… (ytc_Ugw2uYI6S…)
Comment (youtube, 2023-09-15T18:3…)

> There is no way a robot outsmarter human. Robot is programmed, it will not design something to its free will. Lets leave a robot to its free will making a simple chair for his own need for sitting on it.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwEKgeFmE_vAH57uL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwRYK2KOp80bf6UxDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5QbRVkE8k9BYNbS54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1BUHkeRMOSh11_6B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw_oQXSfjyAfCnKYql4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzKiR9T1TFevf7yOvN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw5GmQhNpykH4oiCkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzPe1RZ3FpUvhTtiQB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxPkYzgKmtN3b9ebzJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSDwuhkcYmp4x8sMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
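The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing such a batch into a lookup table keyed by comment ID might look like the following. Note that the allowed-value sets below are inferred only from the values visible on this page, and `parse_batch` is a hypothetical helper, not part of any documented pipeline:

```python
import json

# Allowed values per coding dimension, inferred from the examples visible
# on this page; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any record that uses an unknown dimension value."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Hypothetical one-record batch in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # approval
```

Validating against a fixed vocabulary at parse time is useful here because an LLM coder can occasionally emit a label outside the codebook; failing loudly keeps such records out of downstream tallies.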