Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Humans learn by making mistakes, but how is AI going to learn given it might not consider it makes mistake and a human mistake might not be the equivalent of an AI action. What will AI think is a mistake or can it predict the outcome of its actions is so much better that it will never make a mistake. Who in all of this judges what a mistake is and will a human be able to influence the decisions. Afterall a mistake is relative to who is deciding. So does all of this mean that AI will be able to go off at full speed learning and deciding without ever admitting it is wrong doing. There too … wrong doing is relative so who gets to call the shots !!?? Could it come to the conclusion that it doesn’t need humans as they just slow things down? Again what is safe ? Is this yet another relative term? Who defines what safe is. Another word is good. Who defines what good is? Good luck!!
Platform: youtube · Category: Cross-Cultural · Posted: 2025-09-30T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbMZpFgZ2c_dsgHn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYuN237r520d8SyJR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyJaVvcSlG0mN3i4l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw16Xux6ykT5DrGPiN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3jD9NOYlRCzb_-Jh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwl5-R4q2nC1bcs2yh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJCQxkBPOhMfna_D94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9AZRAPjLVmlcKznx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwTqCeWHJ5Hz38V0mp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgwiBZ8fPkRf_8zQhkd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
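The model codes comments in batches, so each record is matched back to its comment by the id field. The third entry above (ytc_UgzyJaVvcSlG0mN3i4l4AaABAg) carries the same dimension values as the Coding Result table, which suggests it is the record for this comment. A minimal sketch of that lookup, assuming only the JSON schema shown above (the function and variable names are illustrative, not the project's actual code):

```python
import json

# Illustrative stand-in for the raw batch response shown above;
# only one record is reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgzyJaVvcSlG0mN3i4l4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup(comment_id: str, batch_json: str) -> dict | None:
    """Return the coding record for comment_id, or None if the model omitted it."""
    for record in json.loads(batch_json):
        if record.get("id") == comment_id:
            return record
    return None

record = lookup("ytc_UgzyJaVvcSlG0mN3i4l4AaABAg", raw_response)
if record is not None:
    # Render the same four dimensions shown in the Coding Result table.
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {record[dim]}")
```

Matching on id rather than list position keeps the lookup robust if the model reorders, drops, or adds entries in a batch response.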