Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- At least AI for military purposes are currently in its infancy at best. So we're… (ytc_Ugyr20mjV…)
- I don't even need to "Jailbreak" ChatGPT 4.0 to get it to "claim" sentience: It… (ytc_UgzSqHQhA…)
- I mean I use it too but I mainly use it to troll them hell just commenting this … (ytc_UgyGxUCH1…)
- Seriously we r not that advanced AI is nothing to mess with we are in big troubl… (ytc_Ugy1sfHqa…)
- Comment: "@robinthepro_ Wow, this robot invention is absolutely amazing! How did… (ytr_UgzIekg3U…)
- so if the double slit experiment is effected by conscious observer, then let AI … (ytc_UgwVt4hc6…)
- 00:07 ... Correction, not "leading researchers" it's "ai CEOs". The fact that u … (ytc_Ugx1MA3ob…)
- And now we got Claude Opus 4.6 and ChatGPT 4.5 and i can code thousands of lines… (ytc_Ugw88Nai_…)
Comment

> I had a brief interaction with being the driver of a Tesla Model 3, and to tell you the truth, self-driving cars are not an advance of human evolution, it's not an activity to trust AI to do for you. I much rather steer, indicate, plan hazard avoidance than rely on FSD. FSD is for those who aren't very good drivers, and having the odd collision is within their scope of risk 😂 but I've been driving for 27 years, full points on my licence, can also drive heavy vehicles. A standard EV is on my list, don't need extra bells and whistles or branding. Just a car.

youtube · AI Harm Incident · 2026-01-01T09:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyVqfV4sbXHgVA598Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxyFfzpaFkJ_xWUrFR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFiT2EetloJauCFKZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3brUXmKFAJ1f9Cl14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyt6GFPzYQ7bSr_nqJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwkTpMY65KPZugxKMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw8Ww_EW3bi1Ho4QWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHSVOejqvuqR4akWx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_c3iYxf_CN4tmwE14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhtMNOe9yfuf1vyDx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
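Batch responses like the one above are easiest to trust if they are validated before being stored against a comment ID. The sketch below parses a raw LLM response and checks each record against the coding scheme. The allowed values are inferred only from the labels visible on this page (e.g. `responsibility=user`, `reasoning=consequentialist`); the real codebook may contain additional categories, and `validate_codings` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values inferred from codings observed on this page;
# the actual codebook may define more categories (assumption).
SCHEMA = {
    "responsibility": {"developer", "user", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every record parsed and every dimension held
    a value from SCHEMA.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]

    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing 'id'")
        for field, allowed in SCHEMA.items():
            value = rec.get(field)
            if value not in allowed:
                errors.append(
                    f"record {i} ({rec.get('id', '?')}): "
                    f"{field}={value!r} not in {sorted(allowed)}"
                )
    return errors

raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
print(validate_codings(raw))  # []
```

Running this over each batch before the "Coded at" timestamp is written would catch malformed JSON or out-of-vocabulary labels, the two most common LLM coding failures, without needing any model re-prompting.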