Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
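If you want the same lookup outside the app, here is a minimal sketch, assuming the raw model outputs are saved on disk as JSON arrays like the example at the bottom of this page; the directory layout, file naming, and function name are hypothetical:

```python
import json
from pathlib import Path

def lookup_coded_comment(comment_id: str, responses_dir: Path) -> dict | None:
    """Scan saved raw LLM response files for one coded comment.

    Each file is assumed to hold a JSON array of objects with an "id" key,
    as in the raw response shown at the bottom of this page.
    """
    for path in sorted(responses_dir.glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage (this ID appears in the raw response below):
# result = lookup_coded_comment("ytc_Ugw9wXx_2LbcrA5MCDp4AaABAg", Path("raw_responses"))
# print(result)  # {"id": ..., "responsibility": "developer", "policy": "ban", ...}
```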
Random samples — click to inspect
- `ytc_UgwQfAejP…`: You can always go back to basics - opt out of AI tech and focus on providing for…
- `ytc_UgzqQmTdp…`: 8:55 "could have"...that is her legacy among the art community now- you can't tr…
- `ytr_UgwF5w4hs…`: @howmathematicianscreatemat9226 I wasn’t contradicting. I was pointing out the o…
- `ytr_UgzodO3hQ…`: @GhostFreakyAhhI don’t think it works that way. It is going to make jobs more ef…
- `ytc_UgzNHo5Kn…`: Exactly. Let's just say at the moment AI is at it's IMMORTAL STATE. The moment i…
- `ytc_UgzJWTOQw…`: So Ai's will need to get bored being on support calls or it will cost the compan…
- `ytc_UgyqU0oZl…`: So how many hours would I have to spend revisioning and telling ai art before it…
- `ytc_UgyuypTTo…`: When AI is in full force, there is another thing to wonder about, cars, less out…
Comment
I don't think everyone will do good. What if they teach AI to kill certain people? That scares me. It highly needs to be regulated right now across the world, no war, no death. It feels so lawless now. Nothing sex*al either. We need laws and regulations worldwide 🙏🏽✌🏽 AI needs to be banned by the world until every council of every continent agrees then. I wish we all could just be good to each other.
It should only be used for health and maybe to do heavy lifting or help people with disabilities or elderly.
I've seen many movies and video games about the dangerous people that abuse it.
youtube · AI Governance · 2026-01-13T01:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
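For downstream analysis it can help to mirror these dimensions in a small record type. This is only a sketch: the value sets below list the codes that appear in the raw response on this page, and the actual codebook may allow others.

```python
from dataclasses import dataclass
from datetime import datetime

# Codes observed in the raw LLM response below; the full codebook may differ.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed", "unclear"}
POLICY = {"ban", "regulate", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime
```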
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1lLyuWRTyll7Yy0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdGmvB4PN3hJEoO154AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSP-5ymK4IIZ8MlQh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxYulEalXKN2p8bfjt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx35j7rWI0GnpZCjoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRK0nPRSc25C0srtV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsqOcO-9VgCEuhV-J4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9wXx_2LbcrA5MCDp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzT1-GXpdoQS4HvAFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6f9VmspfnoVqshyJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
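A minimal sketch of how a batch response like this might be parsed and sanity-checked before the per-comment codes are stored. The field names follow the JSON above; the validation rules and function name are assumptions, not the tool's actual pipeline.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) into coded-comment records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for record in records:
        if not isinstance(record, dict):
            raise ValueError("each entry should be a JSON object")
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '<no id>')}: missing keys {missing}")
    return records
```

Failing loudly on a missing key (rather than silently skipping the record) keeps malformed model output visible, which is usually what you want when auditing coded comments one by one.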