Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
id be PISSED if i was a artist at artists alley and a mother fucker with ai art …
ytc_UgwIpie_Y…
Problems with AI "therapy":
1) HIPAA conformity problems (what was discussed in …
ytc_Ugyymxt0R…
Every progress generates another jobs, it will never stops. I have yet to see ho…
ytc_Ugzij1BDK…
The only people I have heard say that AI is going to replace human artists are p…
ytc_UgwSv-dMT…
This is a clear and obvious reason to support socialism. If these lower-level po…
rdc_ksleifq
Why ?cars ?what if the Ai chooses to reduce safety in cars to save itself(wow it…
ytc_Ugy7wC9HZ…
Now that's a topic I never really thought about. I guess I'm not prepared for th…
ytc_Ugh-m7XYI…
guys.. after a while it finnaly spoke when i said i just wanna talk to it direct…
ytc_UgxurOzvU…
Comment
Autonomous cars are built for safety and will stop if someone walks in front of it. All it will take to rob the occupant is for one bad guy to walk in front of it, another to then stand behind it and the rest of the gang can do as they please. Same with walking robots. Just throw a fishing net over it. Trip splat bye bye.
youtube · AI Governance · 2025-10-09T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgyQrPG9HRsg3Vd10Gt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugxa3x2IpeLtVsShGfx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzUrYoll1SbRRWDrjF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwQmmjzOUGQX4Q1-XB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugw0mUGfJJLE7BSnYbJ4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgweYvzR7AbEU0dfKkF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzB_7YWmQh3JeUu6Ft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgziS4cgROeka8Gr8uB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwYAQpMwqv28vEHMDp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyMCQJNDYjImCfXi_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
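A raw response like the one above is a JSON array of per-comment codings, so looking up a result by comment ID reduces to parsing the array and indexing on `id`. The sketch below (not the tool's actual implementation; variable names are illustrative) shows the idea, using one record copied from the response above:

```python
import json

# Hypothetical sketch: the raw batch response is assumed to be a JSON
# array of coding objects, one per comment, each keyed by "id".
raw_response = """
[{"id": "ytc_UgwQmmjzOUGQX4Q1-XB4AaABAg",
  "responsibility": "ai_itself",
  "reasoning": "consequentialist",
  "policy": "liability",
  "emotion": "fear"}]
"""

# Build an index from comment ID to its coding result.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# "Look up by comment ID": fetch the full coding for one comment.
result = codes_by_id["ytc_UgwQmmjzOUGQX4Q1-XB4AaABAg"]
print(result["policy"])   # liability
print(result["emotion"])  # fear
```

Indexing once up front keeps each subsequent lookup O(1), which matters when the same batch response is inspected for many sampled comments.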