Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As a Star Trek fan, I do feel the need to state that Star Trek Discovery tackled the AI topic in season 2, leading the ship/show to move forward in time from the 23rd to 29th century's. They did this to prevent the Federation's rogue AI system from getting it's hands on something that would allow it to wipe out all organic life. (IF I'm remembering correctly, it was along those lines - it's been awhile since I watched that season)
youtube · AI Governance · 2025-09-15T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbMd_M1sSoolQPNj54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfI2BkiJS_EPG9bKF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyp0skps8O7ekhM49V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyUjECTsD7m0g4hcHF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnbfHTu_bs86A5k7Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwU2XA9gbQbjYxF4A14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwH4BbmsWCSSUE-Tgp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyih_iBofxWp_rdxbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwjbwJIa_GeaHmdjct4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaIMZYMBRyEBzgaxx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
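A raw response like the one above can be parsed and validated before its values are written into a coding-result table. The sketch below is a minimal illustration, not the pipeline's actual code: the field names come from the JSON shown, but the allowed value sets are assumptions inferred from the values that appear in this dump, and `parse_coding` is a hypothetical helper name.

```python
import json

# Assumed codebook values, inferred from the records shown above.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"fear", "approval", "mixed", "indifference"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dump start with "ytc_" or "ytr_".
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must carry a value from the assumed codebook.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwaIMZYMBRyEBzgaxx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"regulate","emotion":"approval"}]')
print(len(parse_coding(raw)))  # → 1
```

Validating against a fixed value set catches the most common failure mode of LLM coders, which is inventing labels outside the codebook; rejected records can then be re-queued rather than silently stored.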