Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "But it could tell us how to stop and reverse these looming disasters, and the ad…" (ytr_UgyruGPh6…)
- "Glad ChatGPT worked its magic, but don't forget to mention this self-fix to your…" (rdc_mnivy6y)
- "Concerns about water usage at TSMC’s Arizona plant are often overstated. The sit…" (ytc_UgyB9TnXc…)
- "All this AI and genetically modified food technology and soil and genetically mo…" (ytc_UgwqBnpYP…)
- "@dragondelsur5156 where I live artists ask for more than minimum wage. What y…" (ytr_UgxsE6j-l…)
- "AI cannot distinguish between right and wrong and therefore has no decision-maki…" (ytc_Ugy7GOG_a…)
- "It seems if all the jobs/work are done by AI, for free, then growth of food, ai …" (ytc_Ugyg_mds4…)
- "We appreciate your feedback. Violence is never the answer, even in frustration. …" (ytr_Ugw3WuVUP…)
Comment

> Absolutely, I completely agree with Dr. Roman about the importance of AI Safety. The AGI forecast for 2027-2030 means we only have a very short window of time to build a strong security foundation. As CS students, I feel we need to shift from solely pursuing 'model accuracy' to 'model safety and transparency'. Research on algorithm stabilization whether in the NISQ simulator or regular AI must always be accompanied by responsible ethics. Thank you so much for the insight! 😊😊

Source: youtube · Category: AI Governance · Posted: 2025-12-30T22:5… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
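Each coded comment fills the four dimensions above, plus a coding timestamp. A minimal validation sketch for one such record; the allowed value sets below are an assumption reconstructed only from the codings visible on this page, and the project's actual codebook may define more categories:

```python
# Allowed values per dimension, assumed from the sample codings shown here;
# the real codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "society", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed",
                "resignation", "indifference"},
}

def is_valid_coding(rec: dict) -> bool:
    """True if every dimension is present and uses an allowed value."""
    return all(rec.get(dim) in vals for dim, vals in ALLOWED.items())

# The coding result shown in the table above, as a record.
coding = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(is_valid_coding(coding))  # True
```

A check like this catches the common failure mode of LLM coders inventing off-schema labels before the record enters the dataset.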
Raw LLM Response
[
{"id":"ytc_Ugx2ky4V-2SYjI_xALp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2yFQu_CGpSKKTydl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxELu1VmoG4s-mZRFR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxsCBQoCbiJhZm2oKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwo3quGNysKQ7VjoDx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyFNCxaKj60TJKPhal4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzoq_28VWSRAlrR0G54AaABAg","responsibility":"society","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybYkKrhYMp3uaF0Px4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwIHj5BhnxfA2JMdWp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxN3clOKQPEASpX21d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
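The raw response is a JSON array in which each object carries the comment's ID, which is what makes lookup by comment ID possible. A minimal sketch of that lookup, assuming the field names shown above; the helper name `index_by_id` is hypothetical, and only two records are copied from the response for brevity:

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = '''[
 {"id":"ytc_Ugx2ky4V-2SYjI_xALp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgybYkKrhYMp3uaF0Px4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw model response and index the codings by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(RAW)
# Look up a coding by comment ID, as the inspection panel does.
print(codings["ytc_UgybYkKrhYMp3uaF0Px4AaABAg"]["policy"])  # liability
```

In practice the parse step should be wrapped in error handling, since a model can return malformed JSON; `json.loads` raises `json.JSONDecodeError` in that case.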