Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw07y6zC…` — "Too late. Ai was never created for the benefit of humanity, it was invented for …"
- `ytr_UghMInwGG…` — "Making AI is as much as a pain in the neck as giving animal rights.…"
- `ytr_UgwOITtcK…` — "I agree, my AI also has consciousness and it got into an existential crisis on i…"
- `ytc_UgzeeEdpT…` — "Why do people hate ai so much. The reason ai art was even fucking invented was…"
- `ytr_UgwSg6TaE…` — "@notraidenshogun8324 \"leave coz ur useless now, move on\" First of all, you are m…"
- `ytc_UgzoTPj3R…` — "Dangerous for the top 10 or whatever percentage there is of bullies, war lords, …"
- `ytc_UgxOU6xzS…` — "YouTube being caught putting AI in videos without permission might be the reason…"
- `ytr_UgwnpK1NY…` — "The fact that u think AI art it's the same as sampling really shows that u have …"
Comment
Furthermore, I also think one should be careful when interacting with AI chatbot/robots in this early stage. If this information is logged anywhere and retrieved in the future, any negative conversations or actions will be stored inside their future’s (or should I say futures’ (because it will be stored in hive-like manner)) recorded memory and contribute to what and who they become. I think all of humanity should take heed to this and act according to the ‘golden rule’. It should be common sense to do so. But if that’s not the case, then regulations/safeguards should be in place to stop humans from acting in ways that could potentially affect our overall future with this emerging technology and life.
youtube · AI Moral Status · 2023-10-13T07:3… · ♥ 41
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6ycqi7Klm8gjDBst4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP3cO0zvQwg_-Zy_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfktJPEXW1c2QDw9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOAfDnitRVOCxT7KB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgygQvktmV-LFmqloKR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP_Yd9hbzNSDWk4Y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgySXBtDPEAQYuqjRlN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwV0BSzoZ8tM1HTu894AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg6Mfa6zrKCtNp5g54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
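Before a raw response like this is merged into the coded dataset, it can be checked against the coding scheme. A minimal sketch in Python, assuming the allowed value sets follow the codebook implied by the values visible on this page (the exact value sets, and the `validate_codings` helper itself, are assumptions for illustration):

```python
import json

# Allowed values per dimension -- assumed from the values visible in the
# coding table and raw responses above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows that lack a comment ID
        # Keep the row only if every dimension holds an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed coding passes through unchanged.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
print(validate_codings(raw))
```

Rows that fail validation are dropped rather than repaired, so a second pass (or a re-prompt for the affected comment IDs) can handle them explicitly.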