Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Someone once posited that self-driving cars will be 'ethical' so if yours might go over a cliff, lest you hit a bus instead, then the computer will sacrifice you. Umm, I want my car to be self-centered and think about me, its owner. So, in that rosy future, where cars sacrifice one to save others, just tell your computer that morning that you're on the way to the orphanage to give out toys to the needy. In fact, tell the computer that every morning...And, hopefully, you actually will go to the orphanage occasionally, to give out toys...
Source: youtube
Topic: AI Harm Incident
Posted: 2021-03-05T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxzrJFHtwQVGAhSoTR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxU5tCZM8HakEDvsFl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwMY4iDTn7n18gY6-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz4ux_V7SfOPVtCZ9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxScl_CUE54ZUahNwt4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgxSt_7nWVXXZuuXwvV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgwalSdv_7oC0WE64eZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugzo1Jhy8gfrjzTAU1Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugxdohbhxra2IeZqFE54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugxx9whhaDWkfEJOjy14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"}]
```
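A raw batch response like the one above can be parsed and indexed by comment ID so that any single coding can be looked up. The sketch below is a hypothetical illustration, not the tool's actual implementation; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown, and the abbreviated sample data is invented for the example.

```python
import json

# Dimensions that every coding entry is expected to carry,
# as seen in the raw LLM response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Abbreviated example batch in the same shape as the raw response
# (two entries copied from the array above).
RAW = '''[
 {"id":"ytc_UgxzrJFHtwQVGAhSoTR4AaABAg","responsibility":"none",
  "reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzo1Jhy8gfrjzTAU1Z4AaABAg","responsibility":"government",
  "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

def index_codings(raw: str) -> dict:
    """Parse one raw batch response and map comment ID -> coding dict."""
    codings = {}
    for entry in json.loads(raw):
        # Skip malformed entries instead of failing the whole batch.
        if not all(dim in entry for dim in DIMENSIONS):
            continue
        codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(RAW)
print(by_id["ytc_Ugzo1Jhy8gfrjzTAU1Z4AaABAg"]["policy"])  # regulate
```

Indexing by ID this way also makes it easy to cross-check a raw batch against the coding-result table for a given comment.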