Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- One of the more scary aspects of AI generated content is the loss of trust in th… (ytc_UgyAhD6oY…)
- People don't get that AI is trendy, and overtime its going to disappear just lik… (ytc_UgywQIZbo…)
- Exactly, and a lot of young people have been waking up to the college scam too w… (ytr_Ugyx6vOTB…)
- I'm only an audio engineer and know how to stop a computer smarter than me. It's… (ytc_UgxG-h3yC…)
- I personally see AI as both good and bad. What I mean by that is this: Good: G… (ytc_UgwHlgYWr…)
- Imagine AI that decides humanity is no longer to be trusted and acts like a baby… (ytc_Ugwn2EWOP…)
- The problem is that if AI replaces ' muscle workers'. What do these people do? … (ytc_UgwRpbdZP…)
- If Kaku is talking about the end of the last centure (1999) than I would agree. … (ytc_UgzcvQ0Iv…)
Comment
Hi,
It is important to note how the data you train the model on shapes how it behaves. The reason for all this fear around AI is because that these models were trained on a significant amount of science fiction and internet data which makes out AI to be evil. When running the model, the AI is simply trying to predict the next word of the story. That's all it does. If we train it on stories where the AI ends humanity, when we ask it to predict what the AI will do in our story, that's what'll happen.
You can skip the training on internet data step and just train it on conversation examples. In fact, I have done that.
Additionally, there are various methods and much active research regarding removing innate behaviors from trained models.
youtube · AI Moral Status · 2025-12-14T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzXvsGf_vBcWrW0-Ix4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzByHaN9qKWhDhkuil4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwY2Qk-EJY7hKsxsCR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSWcJaHSuhzuNgFp94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzDXRBmkNkzh78qEch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPPtGRz2NssaDz8VB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxokQlic6Aywn4jfjh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9KN0usFXdxJ6oCpR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxK8r34YRH1Dd7pk194AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTEme7iUeu9BO8QId4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
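The lookup flow shown on this page (raw JSON array from the model → coded dimensions retrieved by comment ID) can be sketched in Python. This is a minimal sketch, not the tool's actual implementation: `index_codes` is a hypothetical helper name, and the two sample records are copied verbatim from the raw response above (the full array is truncated here for brevity).

```python
import json

# Two records copied verbatim from the raw LLM response above
# (a JSON array of per-comment codes; the rest are omitted here).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzXvsGf_vBcWrW0-Ix4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwTEme7iUeu9BO8QId4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Reject records that are missing any expected dimension.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgwTEme7iUeu9BO8QId4AaABAg"]["policy"])  # industry_self
```

Indexing by comment ID mirrors the "Look up by comment ID" feature above: once the array is parsed, any coded comment's dimensions are a constant-time dictionary lookup.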