Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@roch145 I have to agree with you on that, I don't think AI is going to be malevolent towards people or take over. But it can still cause harm even if it doesn't become malevolent.
My point that you can't separate AI and people was more a response to your initial point: " I don't think the risk is AI causing harm to people. The risk of people using AI to cause harm."
I don't think you can separate the two. Nor am I sure how separating those two into silos is going to help us out. AI is created by people, so then AI can cause harm.
Essentially we are in agreement it seems like. But it's more of a philosophical perspective. I think we may slightly have deferring views on. Yes, guardrails and training can help, but this is where what the person is saying. Makes a point how we train the AI to have maternal instincts can be helpful. This is not to prevent it from taking over humans but just behave in a humane way..
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-08-14T16:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoNks23oYM","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoQuqUxpxG","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoaWrfsJX8","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoiVGl4u4X","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyP4S6itfDbysIxEfl4AaABAg.ALoBAt_bL_XAQEWY7RBVMT","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyP4S6itfDbysIxEfl4AaABAg.ALoBAt_bL_XAV7aczyJMkz","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugwgch7-6-py8POyE4J4AaABAg.ALoAM4cLQwcALoBGm4JO6c","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugwgch7-6-py8POyE4J4AaABAg.ALoAM4cLQwcALoBInJEQ3O","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgycyUcswE6mb4w-qXF4AaABAg.ALo7ba07_rXALo8CxFJ5AZ","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgycyUcswE6mb4w-qXF4AaABAg.ALo7ba07_rXATQAOpLIDiB","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
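The raw response above is a JSON array of coded records, one object per comment ID, with one value for each coding dimension. A minimal sketch of how such a response could be parsed and checked before use — the allowed value sets below are inferred only from the codes visible on this page (the actual codebook may be larger), and the function name is illustrative, not part of the tool:

```python
import json

# Allowed codes per dimension, inferred from the values visible on this
# page; the real codebook may contain additional values (assumption).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) and
    index the records by comment ID, rejecting out-of-codebook values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
        # Keep only the coding dimensions, keyed by the comment ID.
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID is what makes the "look up by comment ID" view possible: the coding-result table for a comment is just the record stored under its ID.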