Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytr_UgyPyTgsb…: "We appreciate your feedback. If you're interested in engaging with more advanced…"
- ytc_UgwFGkzOd…: "I work at a fan convention, and in the last year or so, we've had people attempt…"
- ytc_UgxBo6o7S…: "The end has already arrived. So handing the robot the gun was the correct decis…"
- ytc_Ugx9k3RE-…: "13:24 You just explained also what AI psychosis is about...most think AI can cau…"
- rdc_g1iu57x: "Just wait until AI can churn out entire profiles of people who don't exist, with…"
- ytc_Ugzwx_JS2…: "I remember watching videos about AI art back in 2019. Applications such as Nvidi…"
- rdc_cpnhge7: "But but the gubment can control our cars and make them run off bridges and shit …"
- ytr_Ugxwmcvln…: "@nimrodery photoshop and fan art has existed for years, its virtually impossibl…"
Comment
> I really don’t like that you are using prompts to make chat gpt read your spot scripts. It makes it unclear what is base programming and what has been tampered with by your additional instructions.
>
> I also think it’s clear why ChatGPT hit these walls. This is a mix of the developers not wanting to rely on ai for moral decisions, while also trying to protect against feeding into unaliving plans, all mixed in with the web results from a common philosophical problem that’s discussed in the academic.
Source: youtube · Posted: 2026-04-10T14:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugygk3yyG4UBavktzBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx8hGqvTXH4SCdNeyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugza2BgArsDvnRk0F354AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1a6URFwicFVDdBax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuAi7s3i5M1_ho2gp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBbo692bv6UhOJPHl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeexFILZ_JGgtijER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzFeyv9pwE0NmpfcwN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxWpywBpAR57q23Ukl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNl-55Uuk4x6J7qvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]
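A response like the one above is only usable if every record carries a valid value for each coding dimension. The sketch below shows one way to parse and validate such a batch in Python. The allowed vocabularies are inferred from the values visible in this page (the actual codebook may define additional categories), and `validate_batch` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension, inferred from the records shown above;
# the real codebook may include categories not seen in this sample.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"mixed", "outrage", "approval", "fear"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema.

    Raises ValueError on a malformed record, so a bad coding run fails
    loudly instead of silently polluting the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records


raw = ('[{"id":"ytc_X","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"mixed"}]')
print(len(validate_batch(raw)))  # 1
```

Failing fast here is a deliberate choice: since the coder is an LLM, an out-of-vocabulary label usually signals a prompt regression, and it is cheaper to re-run the batch than to clean mislabeled rows later.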