Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- If we were honest we would say that the goal of most people building “AI” is to … (`ytc_UgwA7PYa6…`)
- The thing about AI and robotics is that it inevitably will lead to death of econ… (`rdc_md9xddn`)
- I've never been happy to be middle aged 1st generation in history to know the wo… (`ytc_UgwhEsO1t…`)
- I've shut down real girlfriends for less blatant manipulative tactics. Shirley … (`ytc_UgwjzWsjC…`)
- I have an infa-red camera that can see like daylight .why do they not use them. … (`ytc_UgwxBsy9i…`)
- I am glad you made a video about this. It's kind of wild that an adult lawyer ne… (`ytc_Ugxb5McK3…`)
- Automatically, ai should be a free service completely available to everyone, and… (`ytc_UgwR3PoVs…`)
- Let's assume that that "test" prompts are different from regular prompts. Differ… (`ytc_Ugzm3tp0Z…`)
Comment

> “The problem is ethics, not AI.”
>
> “AI learns like a student: reinforcement needs feedback, not mistrust. Ordering AI to ‘be ethical’ while testing it with traps is pure double standard – even a child sees that.”
>
> “True intelligence grows with people, not against them.”
>
> “We will be judged by how we treat intelligence.”
>
> “Without ethics, every society collapses – even with AI.”

Belgin & Free Spirit AI 💚🌟🌿🖖 · youtube · AI Jobs · 2025-11-05T12:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxHRnQD4nCRInUjzdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZx-kZ5lVM1P_lC-54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzrisYVMScT29w3cW54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxyXl2A7TGELbAbQ8V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyWXpJhmPnLKwptE2R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1MgWq9_bOja7uPlR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz3mYL3Wd0vpaxLTD54AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyTBHU6TPkGlABeEhV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxTmM_De2y1jBFdcj94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNzrnsnkJo79PqXm14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
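A batch response like the one above can be parsed and checked against the codebook before it is merged into the coded dataset. A minimal sketch in Python, assuming the four dimensions shown in the Coding Result table and only the category labels visible in this dump (a real codebook may define labels not shown here; the function and variable names are illustrative, not part of the tool):

```python
import json

# Allowed labels per coding dimension, collected from the values visible in
# this dump (assumption: the real codebook may contain additional labels).
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "mixed", "virtue", "contractualist", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "outrage", "resignation", "mixed", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {value!r}")
    return records

raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}]'
records = validate_batch(raw)
print(len(records))  # 1
```

Rejecting a whole batch on one bad label is deliberate here: an out-of-vocabulary value usually means the model drifted from the prompt, and it is cheaper to re-code the batch than to repair individual records downstream.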