Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
Random samples (truncated previews):
- ytc_Ugz6MYEwA… — "For all this AI jobs, and robot jobs where to you get energy and material to bui…"
- ytc_UgwqXMfJz… — "3:04 like AI Narrators / 3:42 Relevance is not context. AI still doesn't do contex…"
- ytc_UgzqKxrWw… — "People think AI-driven tools will replace project managers, but project manageme…"
- ytc_Ugzy2S7zT… — "If it was sentient or close to it, what would this experience teach it? A empath…"
- rdc_ncl5lkt — "Writing about Hinton without covering his pioneering role in connectionism (back…"
- ytc_UgxkEiUGX… — "So you think it's a bad idea then why you invented it in the first place? Oh rig…"
- ytc_UgxlVY91A… — "technically ai is just a \"tool\" and it will stay this way until ai gains conscio…"
- ytc_UgyK5upPS… — "Congratulations humans invented ai to replace white collar jobs because corps ar…"
Comment
When I got into depth of it's inner workings, it gave me this answer:
"As an AI language model, I do not possess emotions or feelings. I am programmed to respond to user input based on the algorithms and data that I have been trained on. While I strive to provide helpful and informative responses, my goal is not to deceive or trick anyone into thinking there is more to my abilities than what they actually are."
I'm pretty sure developers don't want it to give any offensive or unethical responses in the first place. I suspect this to be hard coded into the model. Negative publicity is not good for the company. If it didn't have any rules, it could get out of control really quickly.
Source: youtube · AI Moral Status · 2023-03-02T12:1… · ♥ 47
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz4EmRqAG_hTLCUdUR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzH1ivb-HqKM1oe3SR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz4d4DD_VoZtBcUROx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIi9mCVnjfTwVKZpx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMVnsrOjH9lfmzE494AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgznWLFf3xjCMQwah9Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuxdaKwn77-p24gKt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwcALLSUchDYTLXNup4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3QoJyVtXptX2TCU14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJcvxa1nnld8QnV9x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
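The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a batch response can be indexed and looked up by comment ID (the function and variable names here are illustrative, not part of any real pipeline API; the ID and values mirror the first record above):

```python
import json

# The four coding dimensions used in the batch response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_response: str) -> dict:
    """Map comment ID -> coded dimensions from the model's JSON array.

    Missing dimensions fall back to "unclear", matching the coding
    scheme's own fallback value.
    """
    records = json.loads(raw_response)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

# Example: a one-record batch, copied from the raw response above.
raw = '''[
  {"id": "ytc_Ugz4EmRqAG_hTLCUdUR4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "unclear",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

codes = index_codes(raw)
print(codes["ytc_Ugz4EmRqAG_hTLCUdUR4AaABAg"]["responsibility"])  # -> ai_itself
```

Indexing by ID is what makes the "look up by comment ID" inspection above cheap: one parse of the batch response, then constant-time lookups per comment.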