Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
The whitest school I've ever seen.
Exactly what PBD (host, whatever his name is)…
ytc_Ugx76skEr…
i understand your confusion regarding why an ai white knight doesn't understand …
ytc_UgwNClWe7…
"ai is taking away jobs"
Also ai; causing artists to recreate the same piece wit…
ytc_Ugw6E6_GJ…
Not me writing a book without using AI, with ADHD who still supports the use of …
ytc_UgxsGH1FK…
Just put the fired people on a bicycle that generates electricity for the AI ser…
ytc_UgwgPTxSt…
Claiming AI is capable of abstract thought is a huge claim that needs more evide…
ytc_UgyLCcgu6…
So what are the arguments against facial recognition in this case? How are innoc…
rdc_euddy2g
@RewindOGTeeHee well, that’s kinda what I explained in my original comment. Arti…
ytr_Ugwh9EmNX…
Comment
He is right in a sense that we need to regulate before a problem has already happened because the AI will be one step ahead and catching up won't be easy. The problem is we are still in the early stages and simply do not have enough data on the different possibilities and outcomes of what will become problematic. It is a difficult topic that needs to be addressed soon as the rate of advancement for AI seems to be faster than the rate at which a government can create a task force with a team capable of comprehending the difficult task of regulating the unknown.
youtube
AI Governance
2023-04-25T13:3…
♥ 78
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyXhORa9X8ximnfx1d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwzTDlRBctwMzqiqM14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy06XANzbY4TOOedPF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzndDuTqVN7xQhAnB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwjMj_VJuIbiEON8cJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWg3zR5AVl2esaSr94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxOIlnDKQmBj5eHzUx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz2lSz5xaBvq92P6P14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZntqPGIgWYv4htSJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxwdASWnhXavqazf3R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
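A response like the one above can be turned back into per-comment codings by parsing the JSON array and checking each row against the dimension values. The sketch below is a minimal, hypothetical example: the `ALLOWED` sets are inferred only from the values visible in this response, and the full codebook may define more.

```python
import json

# Allowed codes per dimension, as observed in the response above
# (an assumption -- the actual codebook may allow additional values).
ALLOWED = {
    "responsibility": {"distributed", "user", "ai_itself", "developer", "government", "unclear"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"mixed", "fear", "outrage", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment id, rejecting bad rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows with no comment id
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgwZntqPGIgWYv4htSJ4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgwZntqPGIgWYv4htSJ4AaABAg"]["policy"])  # regulate
```

Keying the lookup table by comment id is what makes the "look up by comment ID" inspection view above cheap: each coded comment resolves to its dimension values in one dictionary access.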