Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect
This Video demonstrates 17 minutes of an AI Literally in it's own other words sa…
ytc_Ugwz4vhHn…
No way you used chatgpt… to complain about chatgpt… what a world we live in cate…
rdc_n7lzqmz
Duche bag thinks truck driving seems a little bit complex until a robot is behin…
ytc_Ugxhg06_C…
Withdrawing access to labor used to be the main leverage that the average person…
rdc_ktsozpx
Oh don’t worry, they are working on this. They keep trying to sneak riders into…
rdc_mt8p9pl
So it will create thousands of high paying medical jobs, right? Wrong. Check yo…
ytc_Ugy0DXTji…
@Shotgunz999I would agree to you if any LLM were completely open source and had …
ytr_UgyBHM3sj…
It's the human's choice on how much AI we want. Laws need to get passed on bound…
ytc_UgyfoIk-_…
Comment
My biggest issue with the doomsday scenario is that I have yet to see AI be truly creative.
Exploitative? Yes, but not creative.
Edit: that "made in a location" example, there is actually a practical reason. Quality is better depending on who makes certain things.
Further edit: what if these "improved test scores" are all automatically fails if breaking the rules is considered a failed test?
youtube · AI Governance · 2025-10-03T20:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwONjXUEi0T3kBq_qF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwMVtVad2rk1lajCot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgznlbAHr8zMFwIfkUd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeghA6eMmCgQObyv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8QrI0Lamr_0vvudt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw8j8H2g0sTJ7gNhQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAJvcd41YfQfmK46l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
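The lookup-by-ID step above can be sketched in a few lines of Python: parse the raw LLM response (a JSON array of per-comment codes) and index it by comment ID. This is a minimal sketch, not the tool's actual implementation; the variable names and the two-entry excerpt are illustrative.

```python
import json

# Two-entry excerpt of the raw LLM response shown above.
raw = """
[
  {"id": "ytc_Ugy8QrI0Lamr_0vvudt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for one comment.
print(codes_by_id["ytc_Ugy8QrI0Lamr_0vvudt4AaABAg"]["policy"])  # liability
```

Because each array element carries its own `id`, the response can be matched back to comments even if the model returns them out of order.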