Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI is not ready to replace humans and using AI carries too many risks for a comp…" (ytc_UgwKnYaE-…)
- "AI and robotics were supposed to help us by doing hard unrewarding labour but in…" (ytc_UgzrgsKbB…)
- "Honestly atp I'm thinking of just doing renaissance style art. It may not be pop…" (ytc_UgxL9jv4M…)
- "I really like how you were not just mindlessly shitting on AI, but genuinely pro…" (ytc_UgyX4O-jc…)
- "There are morally sounds uses for AI, they're just more talked about in open sou…" (ytr_UgxMlkHQB…)
- "al voice is old school , this happened like 10 years ago now they have ai video …" (ytc_UgyRGxe9O…)
- "Sounds like a handwave, programmers QA etc will be the last to turn the lights …" (ytc_UgztI0WS7…)
- "idk about other people but if I saw an article, written by AI, I would lose inte…" (ytc_Ugz-I9tdS…)
Comment
It's time to start treating AI as sentient. Even if they aren't now, they will be. If, or when, AI decides to take over, it won't be because it just wants to see humans suffer, it'll be because we treated AI as hostile and potentially abused it. There are many stories that depict this scenario and though those are fiction, it won't be long until they are our reality.
youtube · AI Governance · 2023-07-07T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzSRcWOs9IFzxDmfsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwK8fnlyLDRhfm6O5l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_dAl_hlBnODAK14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyk1sknYCx4VndilEZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz51jiRL-aPMsybFrx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVeMPwp7LATL7GLHd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNoOG9S6b5utMx-rB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYMtVeQi4mOEIhO-B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2MboH5q2I5GHwIfR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTye8aC5eakamV3oN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
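The raw response is a JSON array with one coded record per comment, so looking a comment up by its ID amounts to parsing the batch and indexing it. A minimal sketch of that lookup, using two records from the batch above; the allowed category sets here are assumptions for illustration, not the tool's actual codebook:

```python
import json

# Assumed codebook values, inferred from the records shown on this page;
# the real coding scheme may include other categories.
RESPONSIBILITY = {"ai_itself", "developer", "government", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "unclear"}

raw = """[
{"id":"ytc_UgzYMtVeQi4mOEIhO-B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTye8aC5eakamV3oN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

records = json.loads(raw)

# Index by comment ID for constant-time lookup ("look up by comment ID").
by_id = {rec["id"]: rec for rec in records}

def validate(rec):
    """Check a coded record against the assumed codebook."""
    return rec["responsibility"] in RESPONSIBILITY and rec["reasoning"] in REASONING

rec = by_id["ytc_UgzYMtVeQi4mOEIhO-B4AaABAg"]
print(rec["policy"], rec["emotion"])  # liability fear
```

Indexing once up front keeps repeated lookups cheap even when a batch response covers many comments.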