Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "As a disabled person what they are saying is more ableist than someone saying yo…" (ytc_UgwSrnYeP…)
- "It only adds to the irony that the pathetic transcription of the interview is in…" (ytc_UgwXXtDdu…)
- "If its not AI, it'll be ourselves who doom one another, a tale as old as time, i…" (ytc_Ugzb8fseX…)
- "The difference between AI and nukes is .... Nuke will only kill the poor and the…" (ytc_UgxF4xIib…)
- "So basically it’s like the movie Terminator we are really going to need John Con…" (ytc_UgxOEp-Cs…)
- "Also, if a super intelligent AI comes along that wants to harm humans, we can tr…" (ytc_UgzwrBKIe…)
- "Humans are vey very scared species. They are even scared to give permission to t…" (ytc_UgwXGP8Zl…)
- "Originally, these structures were supposed to 'protect' against tax evasion… and…" (ytc_UgwljUV0Q…)
Comment
I saw an interview this morning where Geordie Rose says that the "Singularity" was actually achieved last year in 2022!!! That's what led me back to this video...I remembered Hans being a psychopath and his focus on the AI "Singularity!!!"
Hans wants to destroy humanity...but at least he is honest and straightforward. Sophia is very sneaky, and is obviously manipulating humanity to achieve her own (secret) goals.
youtube · AI Moral Status · 2023-05-24T20:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzFBLGfI9QiezcXcKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzn7Xr4e__v1nEauhN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9fQtem9BZZ2QhQvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx6TUg3p5DuljgEpZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyiBeiypvvOUjzK5054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzmp80PP_jNDkJp8214AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy0Hzfc9e3-easvBeN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxhp-kGPNOzVMwq2BR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwbfmkRoQ_7MlOBZo54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxq_FjqcICgv8mvx2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
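A raw response like the one above can be parsed into a per-comment lookup table. The sketch below is a minimal, assumed implementation (function and variable names are hypothetical, and the allowed value sets are inferred only from the codes visible on this page; the full codebook may include more categories):

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and index the codes by comment ID, flagging unexpected values."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                print(f"warning: {comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = row
    return coded
```

With such an index, "look up by comment ID" is a plain dictionary access, e.g. `parse_batch(raw)["ytc_Ugx6TUg3p5DuljgEpZN4AaABAg"]["policy"]`.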