Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Nitesh22021990 ai generates easy human friendly answers and notes which from whi… (ytr_UgwAz8v6U…)
- @realRatRat not really. If a AI made piece is indistinguishable from human made … (ytr_UgxrL_mvm…)
- I hate AI when it it do not work and when it works way to well. 😢😅… (ytr_UgynC3dCP…)
- The only real answer here. Algorith could give an automatic 10 for everyone who … (ytr_UgwggT2H8…)
- The average man or woman does not fear AI, we'll see you when we get there… (ytc_Ugz-knQ7W…)
- I think the best thing to develop thinking muscles is to land a job, read someon… (ytc_UgwBOA3AR…)
- @LavenderTowne Alrighty, considering that AI is a broad topic, I’ll lay out some… (ytr_UgxQIGLkT…)
- @robertosutrisno8604 Have you used an autopilot, even if just in a simulator on … (ytr_UgyrulqsO…)
Comment
“I loved the Robin Williams movie where the android robot wanted to be human ...”
Source: youtube · Topic: AI Governance · Posted: 2025-10-30T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz6ZPptRGN-VOz5Or54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0rhjxblJ2mWrPrkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw9fkT0DVLr3cX7Ql94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwG6N-JLnNh5y13MHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxOV814nrZYHx7R-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw2zAAlkjF9i2qbnoV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxeDd9h8jgQNF1x4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz05BdhmBXFP8PG8Lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-TVhAr6x0uOdNy194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgyDrfbptNcCFWv6B654AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
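The raw response above is a JSON array with one row per comment, each coded on four dimensions. A minimal sketch of how such a batch could be parsed and sanity-checked before use: the vocabularies below are inferred only from the values that appear in this page, so the real codebook may allow more labels.

```python
import json

# Dimension vocabularies inferred from the responses shown above
# (assumption: the actual codebook may permit additional values).
ALLOWED = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval",
                "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows with an id and
    a known value on every coded dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than crash downstream
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row batch in the same shape as the response above.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Rows that fail validation could instead be logged and re-queued for re-coding; silently dropping them, as the sketch does, is the simplest choice but loses coverage.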