Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
get him in a room with Geoff Hinton or anyone that actually thought about consci…
ytc_UgydCGqGp…
@splaturials9156 You don't and won't. As I said, AI is here to stay and the choi…
ytr_UgwoGmbcy…
3:08 , 3:32 EXACTLY.
6:48 The banana falling off in the background while you're…
ytc_UgxuenQVG…
I think out of all the arguments AI bros say, the "its more accessible" is the o…
ytc_Ugxxvusfb…
No one thought that these "aI aRtiSts" are ragebaiting? Cuz that's what I think …
ytc_Ugxna-GLn…
POV: not understanding AI is not intelligence, does not have character and does …
ytc_Ugxg_K9BF…
They want to get rid of all workers soo bad, and because of this, I hope they ne…
ytc_UgwQMCU3R…
@maintaininganonymity221 you missed the point, I was only addressing slop that …
ytr_UgzAe3i6E…
Comment
I mean, it's not like it's really up to anyone when we achieve agi who benefits and who does not. The ai will be the one deciding. Safety research is not at all interested in control because that's not an achievable goal. It's interested in ensuring that the agi has a moral system that aligns with ours
youtube · AI Jobs · 2025-08-30T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgyDETKXP_iMmuDLBfB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugys7bqknuIrdOKNXV14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxyQX5a5v1K8zHdoVZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGjKOstyy_0rSYGa14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzCbKSvoHrD4f5l9PB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwXgTjMt7_UhVs1Z4J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxXuwla_wEoYU4kiIB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwKpuEkpKmJ5e4vCwd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzQy7jtDxD6fI1blpN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
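The raw response is a JSON array with one object per coded comment, using the fields shown on this page (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, parsing such a response with the standard library; the `lookup` helper is hypothetical, and the sample array is trimmed to one entry from the response above:

```python
import json

# Trimmed sample of a raw model response (one coded comment; fields as
# displayed on the page: id, responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"}
]"""

def lookup(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding for one comment ID,
    or None if the model did not code that comment."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

coding = lookup(raw_response, "ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg")
print(coding["emotion"])  # -> fear
```

The same parse also supports sanity checks before accepting a batch, e.g. verifying every requested comment ID appears exactly once in the model's output.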