Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The biggest risk of AI is the people in charge of it.... the tech oligarchs.…
ytc_UgzodcGKw…
God please what is she talking about??? is she for real. What is this podacast a…
ytc_UgzYepTmK…
@AA-il9pc Given that virtually all AI start ups need a pretty massive capital in…
ytr_Ugxgs659S…
Some of the biggest risks with AI are: - consumer privacy, - data privacy, - b…
ytc_UgwOT60pt…
What does she thinks guides missles, or auto-pilots planes, etc. Reliable real-t…
ytc_UgxuwW8jD…
ChatGPT is more than complient, can't even talk naughty to it without it trying …
rdc_jftgld1
Kinda like a human that pretending To be a robot but if you look closely It's li…
ytc_UgzN_KBYV…
Thank you for sharing your observation! Indeed, Sophia's ability to provide insi…
ytr_UgzM02x23…
Comment
I don't believe in evolution at all. People are inherently evil. You can make a weapon out of a lot of things. Newspapers magazines just about anything. So it doesn't surprise me that AI could be be turned against humanity. Some think AI will be used to make the image of the beast. But go on.
youtube
AI Governance
2023-07-07T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
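The Coding Result table above maps a comment to four coding dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass (the field names are assumptions read off the table, not a confirmed schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions in the table."""
    responsibility: str  # e.g. "user"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "resignation"
    coded_at: datetime   # when the code was produced

# The record shown in the table above:
record = CodingResult(
    responsibility="user",
    reasoning="deontological",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```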
Raw LLM Response
[
{"id":"ytc_UgzK5nooiEq-viGjc7N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyczcZmc2-HrFuhPpN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHJOFX3WKkbJwMUnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBOXrtXoOgG9FnFvZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwij0MamYTgKs0xLGJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_P_dAwfpeA56c3J94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPdF0gKJoOhHMzlzZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAN_JmenOY8DfpOBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgysmOrmP_IwS0jvcKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwL9QJc6ejVSwcIdC94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
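The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing and sanity-checking such a batch (the value vocabularies below are inferred from the codes visible in this sample and may be incomplete relative to the actual codebook):

```python
import json

# Vocabularies inferred from the sample response above; the real codebook
# may allow additional values for each dimension.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every code must be traceable to a comment ID
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid

# Two rows copied from the raw response above:
raw = '''[
  {"id":"ytc_UgzK5nooiEq-viGjc7N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwij0MamYTgKs0xLGJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]'''
codes = parse_codes(raw)
```

Dropping malformed rows rather than raising keeps a single bad code from discarding the whole batch; a stricter pipeline might log the rejects instead.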