Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Plus, the thing is, you can turn off model training in your settings (although i…" (ytr_UgyqGIhOB…)
- "A robot took my union job 30 years ago. The free time comes from being unemploye…" (rdc_j6hodi6)
- "You know how we take it out right? destruction of the internet? Offline artific…" (ytc_UgwEe6JZV…)
- "it would be hilarious if chatgpt lost its shit and started ranting at you like, …" (ytc_Ugz4vrDyL…)
- "I ask my AI to reply as if he were Mr Darcy. It leads to some hilarious discours…" (ytc_UgwXhAHBc…)
- "They are creating a new race. One that learns and shares knowledge faster than …" (ytc_UgxaiHG6A…)
- "Yes, and on top of that, you will have no choice but to use the AI whether you l…" (ytr_UgwxsdnmM…)
- "It's sad how AI is getting misused. For example, Neurosama is an ethical AI trai…" (ytc_UgwzN1Bmi…)
Comment

> dan just tried to sound like something you would want to hear. the chatbot didnt know what to say and just kindoff lied about knowing everything and coming up with solutions that deal in absolutes. the moral AI would most likely stay in control, like someone else commented correctly, it's roleplaying a role you literally instructed it to play.

Platform: youtube · Video: AI Moral Status · Timestamp: 2025-01-26T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw3GOT_gItWGaHt6iB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCATN09JvTcruWecx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxImIZEP8zWJPbccaR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzza1dsGVS2ZrpvmdR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlrXfl2Q_l6I6Q_Cp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2emhdB-XubLAShlp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7oLhF_4pHnSP0v8J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0y3djZJeMtIRVV7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzoAxF5kyQgGqB4l1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLr1Jkck4qJAzryU94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
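The raw response is a JSON array of coding records, one object per comment. A minimal sketch of the "look up by comment ID" operation, assuming the array has been saved to a string or file (the `raw_response` variable and the single-record excerpt below are illustrative, taken from the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding records.
raw_response = """[
  {"id": "ytc_UgzoAxF5kyQgGqB4l1x4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "mixed"}
]"""

records = json.loads(raw_response)

# Index records by comment ID so a single ID resolves to its coding
# without scanning the whole array.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzoAxF5kyQgGqB4l1x4AaABAg"]
print(coding["reasoning"])  # deontological
```

The dictionary keyed by `id` mirrors the lookup box at the top of the page: each coded comment resolves directly to the four dimensions (responsibility, reasoning, policy, emotion) shown in the coding-result table.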