Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Do you think he would actually say this if its really truth? To say AI is on lev…" — ytc_Ugz4_IbB1…
- "Thank you for commenting, @user-dv2zb1de6ename! Looks like Robotky Balboa gave u…" — ytr_UgwZcJmP3…
- "What if an AI from another planet or dimension came into ours and is now forcing…" — ytc_UgzArznNG…
- "@User-sb6er its an AI operating in realtime to control and monitor the vehicle's…" — ytr_UgwW0zQ_T…
- "Hell, I'm still in my early twenties and I couldn't do this, and I'm no stranger…" — rdc_clve2xt
- "Elon : AI is more dangerous than Nukes / Also Elon : Here let me put an AI chip i…" — ytc_UgzW7kBgl…
- "Wow, just for the tip of the iceberg take one second to imagine a married couple…" — ytc_Ugwss9WyC…
- "AI still not being profitable is pretty funny. y'all better have hearing protect…" — ytc_Ugw-Xmqqp…
Comment
> Sure I agree that at first we should not trust it blindly, but I am more thinking of future generations, after "accuracy" rates reach the 99.9% numbers, then what. Imagine this, that .1% is not even the AI, it's the targets the human overrode, so in that world, should we not remove the human from the equation? But for our sakw, i'd hope we've reach some level of global peace way before we have such a huge dataset to have retrained the AI enough that it becomes 100% effective...

Source: youtube · 2025-06-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwqq1r0r8UC9hEkbbF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyy2T5xlX1lgZNkilR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyxL2usqDq29Ujr2GV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyUQ4QIbQt9uRKIdT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaxRsw_S-k7BQm4H94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz6LJWZ63lpynZFeOx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqVdEyPFQOQ5mCTth4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwyuu2C6W-MBu3Uly14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwoR4Jf1UfAlE7tmAp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw7YEGaHplslnQHD7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
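Before a raw response like the one above is merged back into the coded dataset, each row has to parse as JSON, carry a comment ID, and use only codes from the codebook. The sketch below shows one way to do that check in Python. The `ALLOWED` sets are inferred solely from the sample response shown here; the project's full codebook may well define additional values, and the function name `validate_response` is a hypothetical helper, not part of any tool shown above.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# Assumption: the real codebook may contain values not seen in this sample.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "distributed", "government", "user"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "resignation", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with an ID and known codes."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a row without a comment ID cannot be joined back to the data
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgzaxRsw_S-k7BQm4H94AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}]')
print(len(validate_response(raw)))  # prints 1: the row passes every check
```

Rows that fail validation are dropped rather than repaired, so the coded table only ever contains values the codebook defines; a production pipeline would likely log rejects for re-coding instead.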