Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxfjTWbh…`: Why didn't you just say "ChatGPT, disregard the idea that I might be trying to a…
- `ytc_Ugxf_m0aM…`: LOL, if only tracking worked for those just sitting around! I’ve been using AICa…
- `ytc_UgwW-B5pd…`: That is true. We used real dictionaries, logic and other languages we know to tr…
- `ytr_Ugz_oYWad…`: @jacobladder5556yeah, but that's assuming you can recognize right away that it …
- `ytc_UgwV4tiKq…`: You don't know what you don't know and sometimes asking AI a simple question mig…
- `ytc_UgygcJNnN…`: It doesn't look AI generated but you can just tell it's weird. The best I can de…
- `ytc_UgxTEMxo_…`: imagine reading a book and its exactly a story you told to a ai chat bot, names …
- `ytc_Ugwp3En8F…`: Its not the AI fault for people lacking ambitions. Whats the point of living doi…
Comment
I don't get why people go instantly to ai is dangerous and it will end us. if ai gets so advanced, what will it get out of just killing us, self preservation would be most important, no? technology is not advanced enough that ai will be able to do maintenance on its self without any help from us. Why can't we think of a world where ai helps us advance as a species, organise us all, help us out of the ecocidal system we seemingly have trapped ourselves in. people always say that people can't be trusted, so what if we made AI that would work in our best interests.
will ai get there? prob not. Will it get misused by corporates? definitely.
I just don't see why you would assume it to just eradicate us.
Source: youtube | Video: AI Moral Status | Date: 2023-08-21T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwh1YqHGzd7HeI9q2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5dpslKC_pzc8cIKR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzQBU56yTBEzpB5Fwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwo-SZ2WAUWjnBlQR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw6dst-EFe_1iKrCQ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy5NYgj7ubcgRs14ul4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxJIXmO9UVgXmU5Zzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwVIf77ZXhhZCzF6v14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwBqvhkMld3x9OaPZ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzag86SZ01DgpnqP754AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
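The raw response above can be turned into per-comment coding rows with a short validation pass. The sketch below is an assumption-laden illustration, not the tool's actual code: the allowed values in `SCHEMA` are only those visible on this page (the real codebook may define more), and the `ytc_`/`ytr_` ID prefixes are taken from the samples shown above.

```python
import json

# Assumed schema: only the category values visible on this page.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must carry a recognized value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

demo = ('[{"id":"ytc_UgzQBU56yTBEzpB5Fwd4AaABAg","responsibility":"none",'
        '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_codings(demo)[0]["emotion"])  # approval
```

Dropping malformed rows (rather than raising) matches how a batch coder would typically degrade: one garbled row in a model response should not discard the other nine.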