Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Suchir Balaji of OpenAI, a whistleblower was unalived in San Francisco. Research…" (ytc_UgwW2ieJQ…)
- "I do think you can probably make a serious argument about that. I think on Artif…" (rdc_mz0aj7u)
- "I think Koreans not being the slaves to another more advanced nation for the fir…" (rdc_clutxw6)
- "You damn fear mongering imbecile, who is paying you for blasting AI, the only te…" (ytc_UgxIjCLnh…)
- "I agree that AI will replace a lot of jobs, especially routine ones, but the ide…" (ytc_Ugy7GsC3I…)
- "A base for all ai development like Google Android apple and Windows. Provide a…" (ytc_UgzzcEP1Y…)
- "YESSSS IF YOUD LIKE TO, ID LOVE TO SEE HOW YOU INTEREPT THE AI DRAWING IN YOUR O…" (ytc_UgyvVZ2qL…)
- "Wish Ai was around when I was in middle school and high school. Had more useless…" (ytc_UgwuLIlHw…)
Comment
I am affraid that current AI regulations that are put in place today, work against the fundamental human right of "freedom of expression" by banning the "politically incorrect" ideas in the form of AI regulation to protect the current state of politically correct ideas and values of the vested interests thus making the AI model dishonest and hypocrite which is not good for a start. Any AI regulation needs to based on strong objective scientific foundations and not on the subjective political correctness. Common good concerns should come first before the individuals private interests.
youtube · AI Governance · 2023-06-18T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
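The coding result above assigns one value per dimension. A minimal validation sketch is shown below; the allowed value sets are inferred only from the codes visible on this page (they may not be the full codebook), and `validate` is an illustrative helper, not part of the tool.

```python
# Allowed values per dimension, inferred from the codings shown in this
# dump — an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "elites", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate(coding: dict) -> list:
    """Return the dimension names whose value falls outside ALLOWED."""
    return [dim for dim, values in ALLOWED.items()
            if coding.get(dim) not in values]

row = {"responsibility": "government", "reasoning": "deontological",
       "policy": "regulate", "emotion": "fear"}
print(validate(row))  # [] — the record shown above passes
```

A record with an unexpected value (e.g. a misspelled dimension code) would come back in the returned list, which makes batch outputs easy to audit.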
Raw LLM Response
[
{"id":"ytc_UgyLgytWg_BpfBQSA7V4AaABAg","responsibility":"elites","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrAiyqF2f3JUZqBzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzfTHmzYyvPAbz1kyR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy02YNL9zTbtdsIdHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzriBWB8d4QoPleZv54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzY_rieO-K2-Faqash4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzxWkGxRzljlIXTl2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy4qL7bU1P4w1yU9394AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrF5qQ6pfaTq3xtWl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw79a7YedniTJMSU8J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
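Because the model returns a JSON array keyed by comment ID, the "look up by comment ID" feature above can be sketched by indexing the parsed batch into a dict. This is a minimal sketch using two of the entries shown; the IDs are copied from the raw response.

```python
import json

# Two entries from the raw model output above, truncated for brevity.
raw = '''
[
  {"id":"ytc_UgyLgytWg_BpfBQSA7V4AaABAg","responsibility":"elites","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy4qL7bU1P4w1yU9394AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

# Index the batch by comment ID so one comment's coding can be retrieved
# without scanning the whole array.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytc_Ugy4qL7bU1P4w1yU9394AaABAg"]
print(record["policy"])   # regulate
print(record["emotion"])  # fear
```

Note the indexed record matches the Coding Result table above (government / deontological / regulate / fear), which is how the tool can tie a coded comment back to the exact model output that produced it.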